* [PATCH V3 0/4] dma: add Qualcomm Technologies HIDMA driver
@ 2015-11-08  4:52 ` Sinan Kaya
  0 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: dmaengine, timur, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Sinan Kaya

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two drivers to mirror the hardware design:

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels that
share a set of common parameters. These parameters are
initialized by the management driver during power-up.
The same management driver is also used to monitor the
execution of the channels. In the future, the management
driver will be able to change performance behavior
dynamically, such as bandwidth allocation and prioritization.

The management driver is executed in hypervisor context and
is the main management entity for all channels provided by
the device.

Changes from V2: (https://lkml.org/lkml/2015/11/2/43)
* compatible = "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0", "qcom,hidma-mgmt";
* Change channel-reset-timeout-cycles.
* Add more description of the HW operation and driver organization.
* Consolidate devm_ioremap_resource calls.
* Make print messages more meaningful.
* Remove print at uninstall.
* Remove debugfs for now; create simple sysfs entries instead
(a sketch of such an entry follows this list).
* Move code to qcom directory.
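
A rough sketch of what one such sysfs entry looks like (illustration
only; the attribute name and value below are made up and are not the
management driver's actual interface):

  /* Illustrative sketch, not the HIDMA management driver code. */
  #include <linux/device.h>
  #include <linux/kernel.h>

  static ssize_t channels_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
  {
          /* report a made-up per-device value */
          return sprintf(buf, "%u\n", 0U);
  }
  static DEVICE_ATTR_RO(channels);

  /* in probe(): device_create_file(&pdev->dev, &dev_attr_channels); */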

Changes from V2: (https://lkml.org/lkml/2015/11/2/44)
* none

Changes from V2: (https://lkml.org/lkml/2015/11/2/45)
* Clarify the relaxed accessor usage in ISR.
* Replace all non-performance-critical accessors with safe
versions.
* Correct multiline comment syntax.
* Remove name variable.
* Eliminate redundant list_empty checks.
* Remove locks in hidma_tx_status.
* Document why hidma_issue_pending is empty.
* Fix the return code to 0 for hidma_ll_request.
* Consolidate devm_ioremap_resource functions in probe.
* Move PM calls inside the paused conditional.
* Use one tasklet for all completions rather than one tasklet
per completion. Completions are queued into a kfifo and consumed
by a single tasklet (a sketch of this pattern follows this list).
* Move the start of the actual transfer to issue_pending.
* Simplify return path when DMA mask set fails.
* Move code to qcom directory.
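
Rough sketch of the single-tasklet completion scheme (illustration
only, assuming a fixed-size kfifo of completion records; the names
hidma_done, hidma_queue_completion and hidma_drain_tasklet are made
up and are not the driver's actual symbols):

  #include <linux/kfifo.h>
  #include <linux/interrupt.h>
  #include <linux/spinlock.h>

  struct hidma_done {                 /* made-up completion record */
          void *desc;
          int result;
  };

  static DEFINE_KFIFO(done_fifo, struct hidma_done, 128);
  static DEFINE_SPINLOCK(done_lock);
  static struct tasklet_struct drain_tasklet; /* tasklet_init() in probe */

  /* Called from the ISR once per completed transfer. */
  static void hidma_queue_completion(struct hidma_done *done)
  {
          kfifo_in_spinlocked(&done_fifo, done, 1, &done_lock);
          tasklet_schedule(&drain_tasklet);
  }

  /* The one tasklet consumes every queued completion. */
  static void hidma_drain_tasklet(unsigned long data)
  {
          struct hidma_done done;

          while (kfifo_out_spinlocked(&done_fifo, &done, 1, &done_lock)) {
                  /* run the dmaengine callback for done.desc here */
          }
  }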

Sinan Kaya (4):
  dma: qcom_bam_dma: move to qcom directory
  dma: add Qualcomm Technologies HIDMA management driver
  dmaselftest: add memcpy selftest support functions
  dma: add Qualcomm Technologies HIDMA channel driver

 .../devicetree/bindings/dma/qcom_hidma.txt         |   18 +
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |   62 +
 drivers/dma/Kconfig                                |   13 +-
 drivers/dma/Makefile                               |    2 +-
 drivers/dma/dmaengine.h                            |    2 +
 drivers/dma/dmaselftest.c                          |  669 +++++++++++
 drivers/dma/qcom/Kconfig                           |   29 +
 drivers/dma/qcom/Makefile                          |    4 +
 drivers/dma/qcom/bam_dma.c                         | 1259 ++++++++++++++++++++
 drivers/dma/qcom/hidma.c                           |  743 ++++++++++++
 drivers/dma/qcom/hidma.h                           |  157 +++
 drivers/dma/qcom/hidma_dbg.c                       |  225 ++++
 drivers/dma/qcom/hidma_ll.c                        |  944 +++++++++++++++
 drivers/dma/qcom/hidma_mgmt.c                      |  315 +++++
 drivers/dma/qcom/hidma_mgmt.h                      |   38 +
 drivers/dma/qcom/hidma_mgmt_sys.c                  |  232 ++++
 drivers/dma/qcom_bam_dma.c                         | 1259 --------------------
 17 files changed, 4701 insertions(+), 1270 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
 create mode 100644 drivers/dma/dmaselftest.c
 create mode 100644 drivers/dma/qcom/Kconfig
 create mode 100644 drivers/dma/qcom/Makefile
 create mode 100644 drivers/dma/qcom/bam_dma.c
 create mode 100644 drivers/dma/qcom/hidma.c
 create mode 100644 drivers/dma/qcom/hidma.h
 create mode 100644 drivers/dma/qcom/hidma_dbg.c
 create mode 100644 drivers/dma/qcom/hidma_ll.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c
 delete mode 100644 drivers/dma/qcom_bam_dma.c

-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* [PATCH V3 1/4] dma: qcom_bam_dma: move to qcom directory
  2015-11-08  4:52 ` Sinan Kaya
@ 2015-11-08  4:52   ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: dmaengine, timur, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Sinan Kaya, Vinod Koul,
	Dan Williams, Archit Taneja, Stanimir Varbanov, Kumar Gala,
	Pramod Gurav, Maxime Ripard, linux-kernel

Create a qcom directory for all QCOM DMA source files and
move the existing BAM DMA driver into it.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 drivers/dma/Kconfig        |   13 +-
 drivers/dma/Makefile       |    2 +-
 drivers/dma/qcom/Kconfig   |    9 +
 drivers/dma/qcom/Makefile  |    1 +
 drivers/dma/qcom/bam_dma.c | 1259 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/qcom_bam_dma.c | 1259 --------------------------------------------
 6 files changed, 1273 insertions(+), 1270 deletions(-)
 create mode 100644 drivers/dma/qcom/Kconfig
 create mode 100644 drivers/dma/qcom/Makefile
 create mode 100644 drivers/dma/qcom/bam_dma.c
 delete mode 100644 drivers/dma/qcom_bam_dma.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index b458475..d17d9ec 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -320,7 +320,7 @@ config MOXART_DMA
 	select DMA_VIRTUAL_CHANNELS
 	help
 	  Enable support for the MOXA ART SoC DMA controller.
- 
+
 	  Say Y here if you enabled MMP ADMA, otherwise say N.
 
 config MPC512X_DMA
@@ -408,15 +408,6 @@ config PXA_DMA
 	  16 to 32 channels for peripheral to memory or memory to memory
 	  transfers.
 
-config QCOM_BAM_DMA
-	tristate "QCOM BAM DMA support"
-	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
-	select DMA_ENGINE
-	select DMA_VIRTUAL_CHANNELS
-	---help---
-	  Enable support for the QCOM BAM DMA controller.  This controller
-	  provides DMA capabilities for a variety of on-chip devices.
-
 config SIRF_DMA
 	tristate "CSR SiRFprimaII/SiRFmarco DMA support"
 	depends on ARCH_SIRF
@@ -527,6 +518,8 @@ config ZX_DMA
 # driver files
 source "drivers/dma/bestcomm/Kconfig"
 
+source "drivers/dma/qcom/Kconfig"
+
 source "drivers/dma/dw/Kconfig"
 
 source "drivers/dma/hsu/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 7711a71..8dba90d 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -52,7 +52,6 @@ obj-$(CONFIG_PCH_DMA) += pch_dma.o
 obj-$(CONFIG_PL330_DMA) += pl330.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_PXA_DMA) += pxa_dma.o
-obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
 obj-$(CONFIG_RENESAS_DMA) += sh/
 obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
@@ -66,4 +65,5 @@ obj-$(CONFIG_TI_EDMA) += edma.o
 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx296702_dma.o
 
+obj-y += qcom/
 obj-y += xilinx/
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
new file mode 100644
index 0000000..17545df
--- /dev/null
+++ b/drivers/dma/qcom/Kconfig
@@ -0,0 +1,9 @@
+config QCOM_BAM_DMA
+	tristate "QCOM BAM DMA support"
+	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	---help---
+	  Enable support for the QCOM BAM DMA controller.  This controller
+	  provides DMA capabilities for a variety of on-chip devices.
+
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
new file mode 100644
index 0000000..f612ae3
--- /dev/null
+++ b/drivers/dma/qcom/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
new file mode 100644
index 0000000..5359234
--- /dev/null
+++ b/drivers/dma/qcom/bam_dma.c
@@ -0,0 +1,1259 @@
+/*
+ * Copyright (c) 2013-2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+/*
+ * QCOM BAM DMA engine driver
+ *
+ * QCOM BAM DMA blocks are distributed amongst a number of the on-chip
+ * peripherals on the MSM 8x74.  The configuration of the channels are dependent
+ * on the way they are hard wired to that specific peripheral.  The peripheral
+ * device tree entries specify the configuration of each channel.
+ *
+ * The DMA controller requires the use of external memory for storage of the
+ * hardware descriptors for each channel.  The descriptor FIFO is accessed as a
+ * circular buffer and operations are managed according to the offset within the
+ * FIFO.  After pipe/channel reset, all of the pipe registers and internal state
+ * are back to defaults.
+ *
+ * During DMA operations, we write descriptors to the FIFO, being careful to
+ * handle wrapping and then write the last FIFO offset to that channel's
+ * P_EVNT_REG register to kick off the transaction.  The P_SW_OFSTS register
+ * indicates the current FIFO offset that is being processed, so there is some
+ * indication of where the hardware is currently working.
+ */
+
+#include <linux/kernel.h>
+#include <linux/io.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_dma.h>
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+
+#include "../dmaengine.h"
+#include "../virt-dma.h"
+
+struct bam_desc_hw {
+	u32 addr;		/* Buffer physical address */
+	u16 size;		/* Buffer size in bytes */
+	u16 flags;
+};
+
+#define DESC_FLAG_INT BIT(15)
+#define DESC_FLAG_EOT BIT(14)
+#define DESC_FLAG_EOB BIT(13)
+#define DESC_FLAG_NWD BIT(12)
+
+struct bam_async_desc {
+	struct virt_dma_desc vd;
+
+	u32 num_desc;
+	u32 xfer_len;
+
+	/* transaction flags, EOT|EOB|NWD */
+	u16 flags;
+
+	struct bam_desc_hw *curr_desc;
+
+	enum dma_transfer_direction dir;
+	size_t length;
+	struct bam_desc_hw desc[0];
+};
+
+enum bam_reg {
+	BAM_CTRL,
+	BAM_REVISION,
+	BAM_NUM_PIPES,
+	BAM_DESC_CNT_TRSHLD,
+	BAM_IRQ_SRCS,
+	BAM_IRQ_SRCS_MSK,
+	BAM_IRQ_SRCS_UNMASKED,
+	BAM_IRQ_STTS,
+	BAM_IRQ_CLR,
+	BAM_IRQ_EN,
+	BAM_CNFG_BITS,
+	BAM_IRQ_SRCS_EE,
+	BAM_IRQ_SRCS_MSK_EE,
+	BAM_P_CTRL,
+	BAM_P_RST,
+	BAM_P_HALT,
+	BAM_P_IRQ_STTS,
+	BAM_P_IRQ_CLR,
+	BAM_P_IRQ_EN,
+	BAM_P_EVNT_DEST_ADDR,
+	BAM_P_EVNT_REG,
+	BAM_P_SW_OFSTS,
+	BAM_P_DATA_FIFO_ADDR,
+	BAM_P_DESC_FIFO_ADDR,
+	BAM_P_EVNT_GEN_TRSHLD,
+	BAM_P_FIFO_SIZES,
+};
+
+struct reg_offset_data {
+	u32 base_offset;
+	unsigned int pipe_mult, evnt_mult, ee_mult;
+};
+
+static const struct reg_offset_data bam_v1_3_reg_info[] = {
+	[BAM_CTRL]		= { 0x0F80, 0x00, 0x00, 0x00 },
+	[BAM_REVISION]		= { 0x0F84, 0x00, 0x00, 0x00 },
+	[BAM_NUM_PIPES]		= { 0x0FBC, 0x00, 0x00, 0x00 },
+	[BAM_DESC_CNT_TRSHLD]	= { 0x0F88, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS]		= { 0x0F8C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_MSK]	= { 0x0F90, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0FB0, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_STTS]		= { 0x0F94, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_CLR]		= { 0x0F98, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_EN]		= { 0x0F9C, 0x00, 0x00, 0x00 },
+	[BAM_CNFG_BITS]		= { 0x0FFC, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_EE]	= { 0x1800, 0x00, 0x00, 0x80 },
+	[BAM_IRQ_SRCS_MSK_EE]	= { 0x1804, 0x00, 0x00, 0x80 },
+	[BAM_P_CTRL]		= { 0x0000, 0x80, 0x00, 0x00 },
+	[BAM_P_RST]		= { 0x0004, 0x80, 0x00, 0x00 },
+	[BAM_P_HALT]		= { 0x0008, 0x80, 0x00, 0x00 },
+	[BAM_P_IRQ_STTS]	= { 0x0010, 0x80, 0x00, 0x00 },
+	[BAM_P_IRQ_CLR]		= { 0x0014, 0x80, 0x00, 0x00 },
+	[BAM_P_IRQ_EN]		= { 0x0018, 0x80, 0x00, 0x00 },
+	[BAM_P_EVNT_DEST_ADDR]	= { 0x102C, 0x00, 0x40, 0x00 },
+	[BAM_P_EVNT_REG]	= { 0x1018, 0x00, 0x40, 0x00 },
+	[BAM_P_SW_OFSTS]	= { 0x1000, 0x00, 0x40, 0x00 },
+	[BAM_P_DATA_FIFO_ADDR]	= { 0x1024, 0x00, 0x40, 0x00 },
+	[BAM_P_DESC_FIFO_ADDR]	= { 0x101C, 0x00, 0x40, 0x00 },
+	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1028, 0x00, 0x40, 0x00 },
+	[BAM_P_FIFO_SIZES]	= { 0x1020, 0x00, 0x40, 0x00 },
+};
+
+static const struct reg_offset_data bam_v1_4_reg_info[] = {
+	[BAM_CTRL]		= { 0x0000, 0x00, 0x00, 0x00 },
+	[BAM_REVISION]		= { 0x0004, 0x00, 0x00, 0x00 },
+	[BAM_NUM_PIPES]		= { 0x003C, 0x00, 0x00, 0x00 },
+	[BAM_DESC_CNT_TRSHLD]	= { 0x0008, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS]		= { 0x000C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_MSK]	= { 0x0010, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0030, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_STTS]		= { 0x0014, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_CLR]		= { 0x0018, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_EN]		= { 0x001C, 0x00, 0x00, 0x00 },
+	[BAM_CNFG_BITS]		= { 0x007C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_EE]	= { 0x0800, 0x00, 0x00, 0x80 },
+	[BAM_IRQ_SRCS_MSK_EE]	= { 0x0804, 0x00, 0x00, 0x80 },
+	[BAM_P_CTRL]		= { 0x1000, 0x1000, 0x00, 0x00 },
+	[BAM_P_RST]		= { 0x1004, 0x1000, 0x00, 0x00 },
+	[BAM_P_HALT]		= { 0x1008, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_STTS]	= { 0x1010, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_CLR]		= { 0x1014, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_EN]		= { 0x1018, 0x1000, 0x00, 0x00 },
+	[BAM_P_EVNT_DEST_ADDR]	= { 0x182C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_REG]	= { 0x1818, 0x00, 0x1000, 0x00 },
+	[BAM_P_SW_OFSTS]	= { 0x1800, 0x00, 0x1000, 0x00 },
+	[BAM_P_DATA_FIFO_ADDR]	= { 0x1824, 0x00, 0x1000, 0x00 },
+	[BAM_P_DESC_FIFO_ADDR]	= { 0x181C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1828, 0x00, 0x1000, 0x00 },
+	[BAM_P_FIFO_SIZES]	= { 0x1820, 0x00, 0x1000, 0x00 },
+};
+
+static const struct reg_offset_data bam_v1_7_reg_info[] = {
+	[BAM_CTRL]		= { 0x00000, 0x00, 0x00, 0x00 },
+	[BAM_REVISION]		= { 0x01000, 0x00, 0x00, 0x00 },
+	[BAM_NUM_PIPES]		= { 0x01008, 0x00, 0x00, 0x00 },
+	[BAM_DESC_CNT_TRSHLD]	= { 0x00008, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS]		= { 0x03010, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_MSK]	= { 0x03014, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_UNMASKED]	= { 0x03018, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_STTS]		= { 0x00014, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_CLR]		= { 0x00018, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_EN]		= { 0x0001C, 0x00, 0x00, 0x00 },
+	[BAM_CNFG_BITS]		= { 0x0007C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_EE]	= { 0x03000, 0x00, 0x00, 0x1000 },
+	[BAM_IRQ_SRCS_MSK_EE]	= { 0x03004, 0x00, 0x00, 0x1000 },
+	[BAM_P_CTRL]		= { 0x13000, 0x1000, 0x00, 0x00 },
+	[BAM_P_RST]		= { 0x13004, 0x1000, 0x00, 0x00 },
+	[BAM_P_HALT]		= { 0x13008, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_STTS]	= { 0x13010, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_CLR]		= { 0x13014, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_EN]		= { 0x13018, 0x1000, 0x00, 0x00 },
+	[BAM_P_EVNT_DEST_ADDR]	= { 0x1382C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_REG]	= { 0x13818, 0x00, 0x1000, 0x00 },
+	[BAM_P_SW_OFSTS]	= { 0x13800, 0x00, 0x1000, 0x00 },
+	[BAM_P_DATA_FIFO_ADDR]	= { 0x13824, 0x00, 0x1000, 0x00 },
+	[BAM_P_DESC_FIFO_ADDR]	= { 0x1381C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x13828, 0x00, 0x1000, 0x00 },
+	[BAM_P_FIFO_SIZES]	= { 0x13820, 0x00, 0x1000, 0x00 },
+};
+
+/* BAM CTRL */
+#define BAM_SW_RST			BIT(0)
+#define BAM_EN				BIT(1)
+#define BAM_EN_ACCUM			BIT(4)
+#define BAM_TESTBUS_SEL_SHIFT		5
+#define BAM_TESTBUS_SEL_MASK		0x3F
+#define BAM_DESC_CACHE_SEL_SHIFT	13
+#define BAM_DESC_CACHE_SEL_MASK		0x3
+#define BAM_CACHED_DESC_STORE		BIT(15)
+#define IBC_DISABLE			BIT(16)
+
+/* BAM REVISION */
+#define REVISION_SHIFT		0
+#define REVISION_MASK		0xFF
+#define NUM_EES_SHIFT		8
+#define NUM_EES_MASK		0xF
+#define CE_BUFFER_SIZE		BIT(13)
+#define AXI_ACTIVE		BIT(14)
+#define USE_VMIDMT		BIT(15)
+#define SECURED			BIT(16)
+#define BAM_HAS_NO_BYPASS	BIT(17)
+#define HIGH_FREQUENCY_BAM	BIT(18)
+#define INACTIV_TMRS_EXST	BIT(19)
+#define NUM_INACTIV_TMRS	BIT(20)
+#define DESC_CACHE_DEPTH_SHIFT	21
+#define DESC_CACHE_DEPTH_1	(0 << DESC_CACHE_DEPTH_SHIFT)
+#define DESC_CACHE_DEPTH_2	(1 << DESC_CACHE_DEPTH_SHIFT)
+#define DESC_CACHE_DEPTH_3	(2 << DESC_CACHE_DEPTH_SHIFT)
+#define DESC_CACHE_DEPTH_4	(3 << DESC_CACHE_DEPTH_SHIFT)
+#define CMD_DESC_EN		BIT(23)
+#define INACTIV_TMR_BASE_SHIFT	24
+#define INACTIV_TMR_BASE_MASK	0xFF
+
+/* BAM NUM PIPES */
+#define BAM_NUM_PIPES_SHIFT		0
+#define BAM_NUM_PIPES_MASK		0xFF
+#define PERIPH_NON_PIPE_GRP_SHIFT	16
+#define PERIPH_NON_PIP_GRP_MASK		0xFF
+#define BAM_NON_PIPE_GRP_SHIFT		24
+#define BAM_NON_PIPE_GRP_MASK		0xFF
+
+/* BAM CNFG BITS */
+#define BAM_PIPE_CNFG		BIT(2)
+#define BAM_FULL_PIPE		BIT(11)
+#define BAM_NO_EXT_P_RST	BIT(12)
+#define BAM_IBC_DISABLE		BIT(13)
+#define BAM_SB_CLK_REQ		BIT(14)
+#define BAM_PSM_CSW_REQ		BIT(15)
+#define BAM_PSM_P_RES		BIT(16)
+#define BAM_AU_P_RES		BIT(17)
+#define BAM_SI_P_RES		BIT(18)
+#define BAM_WB_P_RES		BIT(19)
+#define BAM_WB_BLK_CSW		BIT(20)
+#define BAM_WB_CSW_ACK_IDL	BIT(21)
+#define BAM_WB_RETR_SVPNT	BIT(22)
+#define BAM_WB_DSC_AVL_P_RST	BIT(23)
+#define BAM_REG_P_EN		BIT(24)
+#define BAM_PSM_P_HD_DATA	BIT(25)
+#define BAM_AU_ACCUMED		BIT(26)
+#define BAM_CMD_ENABLE		BIT(27)
+
+#define BAM_CNFG_BITS_DEFAULT	(BAM_PIPE_CNFG |	\
+				 BAM_NO_EXT_P_RST |	\
+				 BAM_IBC_DISABLE |	\
+				 BAM_SB_CLK_REQ |	\
+				 BAM_PSM_CSW_REQ |	\
+				 BAM_PSM_P_RES |	\
+				 BAM_AU_P_RES |		\
+				 BAM_SI_P_RES |		\
+				 BAM_WB_P_RES |		\
+				 BAM_WB_BLK_CSW |	\
+				 BAM_WB_CSW_ACK_IDL |	\
+				 BAM_WB_RETR_SVPNT |	\
+				 BAM_WB_DSC_AVL_P_RST |	\
+				 BAM_REG_P_EN |		\
+				 BAM_PSM_P_HD_DATA |	\
+				 BAM_AU_ACCUMED |	\
+				 BAM_CMD_ENABLE)
+
+/* PIPE CTRL */
+#define P_EN			BIT(1)
+#define P_DIRECTION		BIT(3)
+#define P_SYS_STRM		BIT(4)
+#define P_SYS_MODE		BIT(5)
+#define P_AUTO_EOB		BIT(6)
+#define P_AUTO_EOB_SEL_SHIFT	7
+#define P_AUTO_EOB_SEL_512	(0 << P_AUTO_EOB_SEL_SHIFT)
+#define P_AUTO_EOB_SEL_256	(1 << P_AUTO_EOB_SEL_SHIFT)
+#define P_AUTO_EOB_SEL_128	(2 << P_AUTO_EOB_SEL_SHIFT)
+#define P_AUTO_EOB_SEL_64	(3 << P_AUTO_EOB_SEL_SHIFT)
+#define P_PREFETCH_LIMIT_SHIFT	9
+#define P_PREFETCH_LIMIT_32	(0 << P_PREFETCH_LIMIT_SHIFT)
+#define P_PREFETCH_LIMIT_16	(1 << P_PREFETCH_LIMIT_SHIFT)
+#define P_PREFETCH_LIMIT_4	(2 << P_PREFETCH_LIMIT_SHIFT)
+#define P_WRITE_NWD		BIT(11)
+#define P_LOCK_GROUP_SHIFT	16
+#define P_LOCK_GROUP_MASK	0x1F
+
+/* BAM_DESC_CNT_TRSHLD */
+#define CNT_TRSHLD		0xffff
+#define DEFAULT_CNT_THRSHLD	0x4
+
+/* BAM_IRQ_SRCS */
+#define BAM_IRQ			BIT(31)
+#define P_IRQ			0x7fffffff
+
+/* BAM_IRQ_SRCS_MSK */
+#define BAM_IRQ_MSK		BAM_IRQ
+#define P_IRQ_MSK		P_IRQ
+
+/* BAM_IRQ_STTS */
+#define BAM_TIMER_IRQ		BIT(4)
+#define BAM_EMPTY_IRQ		BIT(3)
+#define BAM_ERROR_IRQ		BIT(2)
+#define BAM_HRESP_ERR_IRQ	BIT(1)
+
+/* BAM_IRQ_CLR */
+#define BAM_TIMER_CLR		BIT(4)
+#define BAM_EMPTY_CLR		BIT(3)
+#define BAM_ERROR_CLR		BIT(2)
+#define BAM_HRESP_ERR_CLR	BIT(1)
+
+/* BAM_IRQ_EN */
+#define BAM_TIMER_EN		BIT(4)
+#define BAM_EMPTY_EN		BIT(3)
+#define BAM_ERROR_EN		BIT(2)
+#define BAM_HRESP_ERR_EN	BIT(1)
+
+/* BAM_P_IRQ_EN */
+#define P_PRCSD_DESC_EN		BIT(0)
+#define P_TIMER_EN		BIT(1)
+#define P_WAKE_EN		BIT(2)
+#define P_OUT_OF_DESC_EN	BIT(3)
+#define P_ERR_EN		BIT(4)
+#define P_TRNSFR_END_EN		BIT(5)
+#define P_DEFAULT_IRQS_EN	(P_PRCSD_DESC_EN | P_ERR_EN | P_TRNSFR_END_EN)
+
+/* BAM_P_SW_OFSTS */
+#define P_SW_OFSTS_MASK		0xffff
+
+#define BAM_DESC_FIFO_SIZE	SZ_32K
+#define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
+#define BAM_MAX_DATA_SIZE	(SZ_32K - 8)
+
+struct bam_chan {
+	struct virt_dma_chan vc;
+
+	struct bam_device *bdev;
+
+	/* configuration from device tree */
+	u32 id;
+
+	struct bam_async_desc *curr_txd;	/* current running dma */
+
+	/* runtime configuration */
+	struct dma_slave_config slave;
+
+	/* fifo storage */
+	struct bam_desc_hw *fifo_virt;
+	dma_addr_t fifo_phys;
+
+	/* fifo markers */
+	unsigned short head;		/* start of active descriptor entries */
+	unsigned short tail;		/* end of active descriptor entries */
+
+	unsigned int initialized;	/* is the channel hw initialized? */
+	unsigned int paused;		/* is the channel paused? */
+	unsigned int reconfigure;	/* new slave config? */
+
+	struct list_head node;
+};
+
+static inline struct bam_chan *to_bam_chan(struct dma_chan *common)
+{
+	return container_of(common, struct bam_chan, vc.chan);
+}
+
+struct bam_device {
+	void __iomem *regs;
+	struct device *dev;
+	struct dma_device common;
+	struct device_dma_parameters dma_parms;
+	struct bam_chan *channels;
+	u32 num_channels;
+
+	/* execution environment ID, from DT */
+	u32 ee;
+
+	const struct reg_offset_data *layout;
+
+	struct clk *bamclk;
+	int irq;
+
+	/* dma start transaction tasklet */
+	struct tasklet_struct task;
+};
+
+/**
+ * bam_addr - returns BAM register address
+ * @bdev: bam device
+ * @pipe: pipe instance (ignored when register doesn't have multiple instances)
+ * @reg:  register enum
+ */
+static inline void __iomem *bam_addr(struct bam_device *bdev, u32 pipe,
+		enum bam_reg reg)
+{
+	const struct reg_offset_data r = bdev->layout[reg];
+
+	return bdev->regs + r.base_offset +
+		r.pipe_mult * pipe +
+		r.evnt_mult * pipe +
+		r.ee_mult * bdev->ee;
+}
+
+/**
+ * bam_reset_channel - Reset individual BAM DMA channel
+ * @bchan: bam channel
+ *
+ * This function resets a specific BAM channel
+ */
+static void bam_reset_channel(struct bam_chan *bchan)
+{
+	struct bam_device *bdev = bchan->bdev;
+
+	lockdep_assert_held(&bchan->vc.lock);
+
+	/* reset channel */
+	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_RST));
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_RST));
+
+	/* don't allow cpu to reorder BAM register accesses done after this */
+	wmb();
+
+	/* make sure hw is initialized when channel is used the first time  */
+	bchan->initialized = 0;
+}
+
+/**
+ * bam_chan_init_hw - Initialize channel hardware
+ * @bchan: bam channel
+ *
+ * This function resets and initializes the BAM channel
+ */
+static void bam_chan_init_hw(struct bam_chan *bchan,
+	enum dma_transfer_direction dir)
+{
+	struct bam_device *bdev = bchan->bdev;
+	u32 val;
+
+	/* Reset the channel to clear internal state of the FIFO */
+	bam_reset_channel(bchan);
+
+	/*
+	 * write out 8 byte aligned address.  We have enough space for this
+	 * because we allocated 1 more descriptor (8 bytes) than we can use
+	 */
+	writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
+			bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR));
+	writel_relaxed(BAM_DESC_FIFO_SIZE,
+			bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES));
+
+	/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
+	writel_relaxed(P_DEFAULT_IRQS_EN,
+			bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
+
+	/* unmask the specific pipe and EE combo */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+	val |= BIT(bchan->id);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+
+	/* don't allow cpu to reorder the channel enable done below */
+	wmb();
+
+	/* set fixed direction and mode, then enable channel */
+	val = P_EN | P_SYS_MODE;
+	if (dir == DMA_DEV_TO_MEM)
+		val |= P_DIRECTION;
+
+	writel_relaxed(val, bam_addr(bdev, bchan->id, BAM_P_CTRL));
+
+	bchan->initialized = 1;
+
+	/* init FIFO pointers */
+	bchan->head = 0;
+	bchan->tail = 0;
+}
+
+/**
+ * bam_alloc_chan - Allocate channel resources for DMA channel.
+ * @chan: specified channel
+ *
+ * This function allocates the FIFO descriptor memory
+ */
+static int bam_alloc_chan(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+
+	if (bchan->fifo_virt)
+		return 0;
+
+	/* allocate FIFO descriptor space, but only if necessary */
+	bchan->fifo_virt = dma_alloc_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
+				&bchan->fifo_phys, GFP_KERNEL);
+
+	if (!bchan->fifo_virt) {
+		dev_err(bdev->dev, "Failed to allocate desc fifo\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/**
+ * bam_free_chan - Frees dma resources associated with specific channel
+ * @chan: specified channel
+ *
+ * Free the allocated fifo descriptor memory and channel resources
+ *
+ */
+static void bam_free_chan(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	u32 val;
+	unsigned long flags;
+
+	vchan_free_chan_resources(to_virt_chan(chan));
+
+	if (bchan->curr_txd) {
+		dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
+		return;
+	}
+
+	spin_lock_irqsave(&bchan->vc.lock, flags);
+	bam_reset_channel(bchan);
+	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+
+	dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE, bchan->fifo_virt,
+				bchan->fifo_phys);
+	bchan->fifo_virt = NULL;
+
+	/* mask irq for pipe/channel */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+	val &= ~BIT(bchan->id);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+
+	/* disable irq */
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
+}
+
+/**
+ * bam_slave_config - set slave configuration for channel
+ * @chan: dma channel
+ * @cfg: slave configuration
+ *
+ * Sets slave configuration for channel
+ *
+ */
+static int bam_slave_config(struct dma_chan *chan,
+			    struct dma_slave_config *cfg)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	memcpy(&bchan->slave, cfg, sizeof(*cfg));
+	bchan->reconfigure = 1;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * bam_prep_slave_sg - Prep slave sg transaction
+ *
+ * @chan: dma channel
+ * @sgl: scatter gather list
+ * @sg_len: length of sg
+ * @direction: DMA transfer direction
+ * @flags: DMA flags
+ * @context: transfer context (unused)
+ */
+static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
+	struct scatterlist *sgl, unsigned int sg_len,
+	enum dma_transfer_direction direction, unsigned long flags,
+	void *context)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	struct bam_async_desc *async_desc;
+	struct scatterlist *sg;
+	u32 i;
+	struct bam_desc_hw *desc;
+	unsigned int num_alloc = 0;
+
+
+	if (!is_slave_direction(direction)) {
+		dev_err(bdev->dev, "invalid dma direction\n");
+		return NULL;
+	}
+
+	/* calculate number of required entries */
+	for_each_sg(sgl, sg, sg_len, i)
+		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
+
+	/* allocate enough room to accommodate the number of entries */
+	async_desc = kzalloc(sizeof(*async_desc) +
+			(num_alloc * sizeof(struct bam_desc_hw)), GFP_NOWAIT);
+
+	if (!async_desc)
+		goto err_out;
+
+	if (flags & DMA_PREP_FENCE)
+		async_desc->flags |= DESC_FLAG_NWD;
+
+	if (flags & DMA_PREP_INTERRUPT)
+		async_desc->flags |= DESC_FLAG_EOT;
+	else
+		async_desc->flags |= DESC_FLAG_INT;
+
+	async_desc->num_desc = num_alloc;
+	async_desc->curr_desc = async_desc->desc;
+	async_desc->dir = direction;
+
+	/* fill in temporary descriptors */
+	desc = async_desc->desc;
+	for_each_sg(sgl, sg, sg_len, i) {
+		unsigned int remainder = sg_dma_len(sg);
+		unsigned int curr_offset = 0;
+
+		do {
+			desc->addr = sg_dma_address(sg) + curr_offset;
+
+			if (remainder > BAM_MAX_DATA_SIZE) {
+				desc->size = BAM_MAX_DATA_SIZE;
+				remainder -= BAM_MAX_DATA_SIZE;
+				curr_offset += BAM_MAX_DATA_SIZE;
+			} else {
+				desc->size = remainder;
+				remainder = 0;
+			}
+
+			async_desc->length += desc->size;
+			desc++;
+		} while (remainder > 0);
+	}
+
+	return vchan_tx_prep(&bchan->vc, &async_desc->vd, flags);
+
+err_out:
+	kfree(async_desc);
+	return NULL;
+}
+
+/**
+ * bam_dma_terminate_all - terminate all transactions on a channel
+ * @bchan: bam dma channel
+ *
+ * Dequeues and frees all transactions
+ * No callbacks are done
+ *
+ */
+static int bam_dma_terminate_all(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flag;
+	LIST_HEAD(head);
+
+	/* remove all transactions, including active transaction */
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	if (bchan->curr_txd) {
+		list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
+		bchan->curr_txd = NULL;
+	}
+
+	vchan_get_all_descriptors(&bchan->vc, &head);
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	vchan_dma_desc_free_list(&bchan->vc, &head);
+
+	return 0;
+}
+
+/**
+ * bam_pause - Pause DMA channel
+ * @chan: dma channel
+ *
+ */
+static int bam_pause(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
+	bchan->paused = 1;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * bam_resume - Resume DMA channel operations
+ * @chan: dma channel
+ *
+ */
+static int bam_resume(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
+	bchan->paused = 0;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * process_channel_irqs - processes the channel interrupts
+ * @bdev: bam controller
+ *
+ * This function processes the channel interrupts
+ *
+ */
+static u32 process_channel_irqs(struct bam_device *bdev)
+{
+	u32 i, srcs, pipe_stts;
+	unsigned long flags;
+	struct bam_async_desc *async_desc;
+
+	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));
+
+	/* return early if no pipe/channel interrupts are present */
+	if (!(srcs & P_IRQ))
+		return srcs;
+
+	for (i = 0; i < bdev->num_channels; i++) {
+		struct bam_chan *bchan = &bdev->channels[i];
+
+		if (!(srcs & BIT(i)))
+			continue;
+
+		/* clear pipe irq */
+		pipe_stts = readl_relaxed(bam_addr(bdev, i, BAM_P_IRQ_STTS));
+
+		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));
+
+		spin_lock_irqsave(&bchan->vc.lock, flags);
+		async_desc = bchan->curr_txd;
+
+		if (async_desc) {
+			async_desc->num_desc -= async_desc->xfer_len;
+			async_desc->curr_desc += async_desc->xfer_len;
+			bchan->curr_txd = NULL;
+
+			/* manage FIFO */
+			bchan->head += async_desc->xfer_len;
+			bchan->head %= MAX_DESCRIPTORS;
+
+			/*
+			 * if complete, process cookie.  Otherwise
+			 * push back to front of desc_issued so that
+			 * it gets restarted by the tasklet
+			 */
+			if (!async_desc->num_desc)
+				vchan_cookie_complete(&async_desc->vd);
+			else
+				list_add(&async_desc->vd.node,
+					&bchan->vc.desc_issued);
+		}
+
+		spin_unlock_irqrestore(&bchan->vc.lock, flags);
+	}
+
+	return srcs;
+}
+
+/**
+ * bam_dma_irq - irq handler for bam controller
+ * @irq: IRQ of interrupt
+ * @data: callback data
+ *
+ * IRQ handler for the bam controller
+ */
+static irqreturn_t bam_dma_irq(int irq, void *data)
+{
+	struct bam_device *bdev = data;
+	u32 clr_mask = 0, srcs = 0;
+
+	srcs |= process_channel_irqs(bdev);
+
+	/* kick off tasklet to start next dma transfer */
+	if (srcs & P_IRQ)
+		tasklet_schedule(&bdev->task);
+
+	if (srcs & BAM_IRQ)
+		clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
+
+	/* don't allow reorder of the various accesses to the BAM registers */
+	mb();
+
+	writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * bam_tx_status - returns status of transaction
+ * @chan: dma channel
+ * @cookie: transaction cookie
+ * @txstate: DMA transaction state
+ *
+ * Return status of dma transaction
+ */
+static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+		struct dma_tx_state *txstate)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct virt_dma_desc *vd;
+	int ret;
+	size_t residue = 0;
+	unsigned int i;
+	unsigned long flags;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	if (!txstate)
+		return bchan->paused ? DMA_PAUSED : ret;
+
+	spin_lock_irqsave(&bchan->vc.lock, flags);
+	vd = vchan_find_desc(&bchan->vc, cookie);
+	if (vd)
+		residue = container_of(vd, struct bam_async_desc, vd)->length;
+	else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
+		for (i = 0; i < bchan->curr_txd->num_desc; i++)
+			residue += bchan->curr_txd->curr_desc[i].size;
+
+	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+
+	dma_set_residue(txstate, residue);
+
+	if (ret == DMA_IN_PROGRESS && bchan->paused)
+		ret = DMA_PAUSED;
+
+	return ret;
+}
+
+/**
+ * bam_apply_new_config
+ * @bchan: bam dma channel
+ * @dir: DMA direction
+ */
+static void bam_apply_new_config(struct bam_chan *bchan,
+	enum dma_transfer_direction dir)
+{
+	struct bam_device *bdev = bchan->bdev;
+	u32 maxburst;
+
+	if (dir == DMA_DEV_TO_MEM)
+		maxburst = bchan->slave.src_maxburst;
+	else
+		maxburst = bchan->slave.dst_maxburst;
+
+	writel_relaxed(maxburst, bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
+
+	bchan->reconfigure = 0;
+}
+
+/**
+ * bam_start_dma - start next transaction
+ * @bchan - bam dma channel
+ */
+static void bam_start_dma(struct bam_chan *bchan)
+{
+	struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
+	struct bam_device *bdev = bchan->bdev;
+	struct bam_async_desc *async_desc;
+	struct bam_desc_hw *desc;
+	struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
+					sizeof(struct bam_desc_hw));
+
+	lockdep_assert_held(&bchan->vc.lock);
+
+	if (!vd)
+		return;
+
+	list_del(&vd->node);
+
+	async_desc = container_of(vd, struct bam_async_desc, vd);
+	bchan->curr_txd = async_desc;
+
+	/* on first use, initialize the channel hardware */
+	if (!bchan->initialized)
+		bam_chan_init_hw(bchan, async_desc->dir);
+
+	/* apply new slave config changes, if necessary */
+	if (bchan->reconfigure)
+		bam_apply_new_config(bchan, async_desc->dir);
+
+	desc = bchan->curr_txd->curr_desc;
+
+	if (async_desc->num_desc > MAX_DESCRIPTORS)
+		async_desc->xfer_len = MAX_DESCRIPTORS;
+	else
+		async_desc->xfer_len = async_desc->num_desc;
+
+	/* set any special flags on the last descriptor */
+	if (async_desc->num_desc == async_desc->xfer_len)
+		desc[async_desc->xfer_len - 1].flags = async_desc->flags;
+	else
+		desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
+
+	if (bchan->tail + async_desc->xfer_len > MAX_DESCRIPTORS) {
+		u32 partial = MAX_DESCRIPTORS - bchan->tail;
+
+		memcpy(&fifo[bchan->tail], desc,
+				partial * sizeof(struct bam_desc_hw));
+		memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
+				sizeof(struct bam_desc_hw));
+	} else {
+		memcpy(&fifo[bchan->tail], desc,
+			async_desc->xfer_len * sizeof(struct bam_desc_hw));
+	}
+
+	bchan->tail += async_desc->xfer_len;
+	bchan->tail %= MAX_DESCRIPTORS;
+
+	/* ensure descriptor writes and dma start not reordered */
+	wmb();
+	writel_relaxed(bchan->tail * sizeof(struct bam_desc_hw),
+			bam_addr(bdev, bchan->id, BAM_P_EVNT_REG));
+}
+
+/**
+ * dma_tasklet - DMA IRQ tasklet
+ * @data: tasklet argument (bam controller structure)
+ *
+ * Sets up next DMA operation and then processes all completed transactions
+ */
+static void dma_tasklet(unsigned long data)
+{
+	struct bam_device *bdev = (struct bam_device *)data;
+	struct bam_chan *bchan;
+	unsigned long flags;
+	unsigned int i;
+
+	/* go through the channels and kick off transactions */
+	for (i = 0; i < bdev->num_channels; i++) {
+		bchan = &bdev->channels[i];
+		spin_lock_irqsave(&bchan->vc.lock, flags);
+
+		if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
+			bam_start_dma(bchan);
+		spin_unlock_irqrestore(&bchan->vc.lock, flags);
+	}
+}
+
+/**
+ * bam_issue_pending - starts pending transactions
+ * @chan: dma channel
+ *
+ * Calls tasklet directly which in turn starts any pending transactions
+ */
+static void bam_issue_pending(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&bchan->vc.lock, flags);
+
+	/* if work pending and idle, start a transaction */
+	if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
+		bam_start_dma(bchan);
+
+	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+}
+
+/**
+ * bam_dma_free_desc - free descriptor memory
+ * @vd: virtual descriptor
+ *
+ */
+static void bam_dma_free_desc(struct virt_dma_desc *vd)
+{
+	struct bam_async_desc *async_desc = container_of(vd,
+			struct bam_async_desc, vd);
+
+	kfree(async_desc);
+}
+
+static struct dma_chan *bam_dma_xlate(struct of_phandle_args *dma_spec,
+		struct of_dma *of)
+{
+	struct bam_device *bdev = container_of(of->of_dma_data,
+					struct bam_device, common);
+	unsigned int request;
+
+	if (dma_spec->args_count != 1)
+		return NULL;
+
+	request = dma_spec->args[0];
+	if (request >= bdev->num_channels)
+		return NULL;
+
+	return dma_get_slave_channel(&(bdev->channels[request].vc.chan));
+}
+
+/**
+ * bam_init
+ * @bdev: bam device
+ *
+ * Initialization helper for global bam registers
+ */
+static int bam_init(struct bam_device *bdev)
+{
+	u32 val;
+
+	/* read revision and configuration information */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION)) >> NUM_EES_SHIFT;
+	val &= NUM_EES_MASK;
+
+	/* check that configured EE is within range */
+	if (bdev->ee >= val)
+		return -EINVAL;
+
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
+	bdev->num_channels = val & BAM_NUM_PIPES_MASK;
+
+	/* s/w reset bam */
+	/* after reset all pipes are disabled and idle */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
+	val |= BAM_SW_RST;
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
+	val &= ~BAM_SW_RST;
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
+
+	/* make sure previous stores are visible before enabling BAM */
+	wmb();
+
+	/* enable bam */
+	val |= BAM_EN;
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
+
+	/* set descriptor threshold, start with 4 bytes */
+	writel_relaxed(DEFAULT_CNT_THRSHLD,
+			bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
+
+	/* Enable default set of h/w workarounds, ie all except BAM_FULL_PIPE */
+	writel_relaxed(BAM_CNFG_BITS_DEFAULT, bam_addr(bdev, 0, BAM_CNFG_BITS));
+
+	/* enable irqs for errors */
+	writel_relaxed(BAM_ERROR_EN | BAM_HRESP_ERR_EN,
+			bam_addr(bdev, 0, BAM_IRQ_EN));
+
+	/* unmask global bam interrupt */
+	writel_relaxed(BAM_IRQ_MSK, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+
+	return 0;
+}
+
+static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
+	u32 index)
+{
+	bchan->id = index;
+	bchan->bdev = bdev;
+
+	vchan_init(&bchan->vc, &bdev->common);
+	bchan->vc.desc_free = bam_dma_free_desc;
+}
+
+static const struct of_device_id bam_of_match[] = {
+	{ .compatible = "qcom,bam-v1.3.0", .data = &bam_v1_3_reg_info },
+	{ .compatible = "qcom,bam-v1.4.0", .data = &bam_v1_4_reg_info },
+	{ .compatible = "qcom,bam-v1.7.0", .data = &bam_v1_7_reg_info },
+	{}
+};
+
+MODULE_DEVICE_TABLE(of, bam_of_match);
+
+static int bam_dma_probe(struct platform_device *pdev)
+{
+	struct bam_device *bdev;
+	const struct of_device_id *match;
+	struct resource *iores;
+	int ret, i;
+
+	bdev = devm_kzalloc(&pdev->dev, sizeof(*bdev), GFP_KERNEL);
+	if (!bdev)
+		return -ENOMEM;
+
+	bdev->dev = &pdev->dev;
+
+	match = of_match_node(bam_of_match, pdev->dev.of_node);
+	if (!match) {
+		dev_err(&pdev->dev, "Unsupported BAM module\n");
+		return -ENODEV;
+	}
+
+	bdev->layout = match->data;
+
+	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	bdev->regs = devm_ioremap_resource(&pdev->dev, iores);
+	if (IS_ERR(bdev->regs))
+		return PTR_ERR(bdev->regs);
+
+	bdev->irq = platform_get_irq(pdev, 0);
+	if (bdev->irq < 0)
+		return bdev->irq;
+
+	ret = of_property_read_u32(pdev->dev.of_node, "qcom,ee", &bdev->ee);
+	if (ret) {
+		dev_err(bdev->dev, "Execution environment unspecified\n");
+		return ret;
+	}
+
+	bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
+	if (IS_ERR(bdev->bamclk))
+		return PTR_ERR(bdev->bamclk);
+
+	ret = clk_prepare_enable(bdev->bamclk);
+	if (ret) {
+		dev_err(bdev->dev, "failed to prepare/enable clock\n");
+		return ret;
+	}
+
+	ret = bam_init(bdev);
+	if (ret)
+		goto err_disable_clk;
+
+	tasklet_init(&bdev->task, dma_tasklet, (unsigned long)bdev);
+
+	bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
+				sizeof(*bdev->channels), GFP_KERNEL);
+
+	if (!bdev->channels) {
+		ret = -ENOMEM;
+		goto err_tasklet_kill;
+	}
+
+	/* allocate and initialize channels */
+	INIT_LIST_HEAD(&bdev->common.channels);
+
+	for (i = 0; i < bdev->num_channels; i++)
+		bam_channel_init(bdev, &bdev->channels[i], i);
+
+	ret = devm_request_irq(bdev->dev, bdev->irq, bam_dma_irq,
+			IRQF_TRIGGER_HIGH, "bam_dma", bdev);
+	if (ret)
+		goto err_bam_channel_exit;
+
+	/* set max dma segment size */
+	bdev->common.dev = bdev->dev;
+	bdev->common.dev->dma_parms = &bdev->dma_parms;
+	ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
+	if (ret) {
+		dev_err(bdev->dev, "cannot set maximum segment size\n");
+		goto err_bam_channel_exit;
+	}
+
+	platform_set_drvdata(pdev, bdev);
+
+	/* set capabilities */
+	dma_cap_zero(bdev->common.cap_mask);
+	dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
+
+	/* initialize dmaengine apis */
+	bdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	bdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	bdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	bdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
+	bdev->common.device_free_chan_resources = bam_free_chan;
+	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
+	bdev->common.device_config = bam_slave_config;
+	bdev->common.device_pause = bam_pause;
+	bdev->common.device_resume = bam_resume;
+	bdev->common.device_terminate_all = bam_dma_terminate_all;
+	bdev->common.device_issue_pending = bam_issue_pending;
+	bdev->common.device_tx_status = bam_tx_status;
+	bdev->common.dev = bdev->dev;
+
+	ret = dma_async_device_register(&bdev->common);
+	if (ret) {
+		dev_err(bdev->dev, "failed to register dma async device\n");
+		goto err_bam_channel_exit;
+	}
+
+	ret = of_dma_controller_register(pdev->dev.of_node, bam_dma_xlate,
+					&bdev->common);
+	if (ret)
+		goto err_unregister_dma;
+
+	return 0;
+
+err_unregister_dma:
+	dma_async_device_unregister(&bdev->common);
+err_bam_channel_exit:
+	for (i = 0; i < bdev->num_channels; i++)
+		tasklet_kill(&bdev->channels[i].vc.task);
+err_tasklet_kill:
+	tasklet_kill(&bdev->task);
+err_disable_clk:
+	clk_disable_unprepare(bdev->bamclk);
+
+	return ret;
+}
+
+static int bam_dma_remove(struct platform_device *pdev)
+{
+	struct bam_device *bdev = platform_get_drvdata(pdev);
+	u32 i;
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&bdev->common);
+
+	/* mask all interrupts for this execution environment */
+	writel_relaxed(0, bam_addr(bdev, 0,  BAM_IRQ_SRCS_MSK_EE));
+
+	devm_free_irq(bdev->dev, bdev->irq, bdev);
+
+	for (i = 0; i < bdev->num_channels; i++) {
+		bam_dma_terminate_all(&bdev->channels[i].vc.chan);
+		tasklet_kill(&bdev->channels[i].vc.task);
+
+		dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
+			bdev->channels[i].fifo_virt,
+			bdev->channels[i].fifo_phys);
+	}
+
+	tasklet_kill(&bdev->task);
+
+	clk_disable_unprepare(bdev->bamclk);
+
+	return 0;
+}
+
+static struct platform_driver bam_dma_driver = {
+	.probe = bam_dma_probe,
+	.remove = bam_dma_remove,
+	.driver = {
+		.name = "bam-dma-engine",
+		.of_match_table = bam_of_match,
+	},
+};
+
+module_platform_driver(bam_dma_driver);
+
+MODULE_AUTHOR("Andy Gross <agross@codeaurora.org>");
+MODULE_DESCRIPTION("QCOM BAM DMA engine driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom_bam_dma.c
deleted file mode 100644
index 5a250cd..0000000
--- a/drivers/dma/qcom_bam_dma.c
+++ /dev/null
@@ -1,1259 +0,0 @@
-/*
- * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- */
-/*
- * QCOM BAM DMA engine driver
- *
- * QCOM BAM DMA blocks are distributed amongst a number of the on-chip
- * peripherals on the MSM 8x74.  The configuration of the channels are dependent
- * on the way they are hard wired to that specific peripheral.  The peripheral
- * device tree entries specify the configuration of each channel.
- *
- * The DMA controller requires the use of external memory for storage of the
- * hardware descriptors for each channel.  The descriptor FIFO is accessed as a
- * circular buffer and operations are managed according to the offset within the
- * FIFO.  After pipe/channel reset, all of the pipe registers and internal state
- * are back to defaults.
- *
- * During DMA operations, we write descriptors to the FIFO, being careful to
- * handle wrapping and then write the last FIFO offset to that channel's
- * P_EVNT_REG register to kick off the transaction.  The P_SW_OFSTS register
- * indicates the current FIFO offset that is being processed, so there is some
- * indication of where the hardware is currently working.
- */
-
-#include <linux/kernel.h>
-#include <linux/io.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
-#include <linux/scatterlist.h>
-#include <linux/device.h>
-#include <linux/platform_device.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/of_dma.h>
-#include <linux/clk.h>
-#include <linux/dmaengine.h>
-
-#include "dmaengine.h"
-#include "virt-dma.h"
-
-struct bam_desc_hw {
-	u32 addr;		/* Buffer physical address */
-	u16 size;		/* Buffer size in bytes */
-	u16 flags;
-};
-
-#define DESC_FLAG_INT BIT(15)
-#define DESC_FLAG_EOT BIT(14)
-#define DESC_FLAG_EOB BIT(13)
-#define DESC_FLAG_NWD BIT(12)
-
-struct bam_async_desc {
-	struct virt_dma_desc vd;
-
-	u32 num_desc;
-	u32 xfer_len;
-
-	/* transaction flags, EOT|EOB|NWD */
-	u16 flags;
-
-	struct bam_desc_hw *curr_desc;
-
-	enum dma_transfer_direction dir;
-	size_t length;
-	struct bam_desc_hw desc[0];
-};
-
-enum bam_reg {
-	BAM_CTRL,
-	BAM_REVISION,
-	BAM_NUM_PIPES,
-	BAM_DESC_CNT_TRSHLD,
-	BAM_IRQ_SRCS,
-	BAM_IRQ_SRCS_MSK,
-	BAM_IRQ_SRCS_UNMASKED,
-	BAM_IRQ_STTS,
-	BAM_IRQ_CLR,
-	BAM_IRQ_EN,
-	BAM_CNFG_BITS,
-	BAM_IRQ_SRCS_EE,
-	BAM_IRQ_SRCS_MSK_EE,
-	BAM_P_CTRL,
-	BAM_P_RST,
-	BAM_P_HALT,
-	BAM_P_IRQ_STTS,
-	BAM_P_IRQ_CLR,
-	BAM_P_IRQ_EN,
-	BAM_P_EVNT_DEST_ADDR,
-	BAM_P_EVNT_REG,
-	BAM_P_SW_OFSTS,
-	BAM_P_DATA_FIFO_ADDR,
-	BAM_P_DESC_FIFO_ADDR,
-	BAM_P_EVNT_GEN_TRSHLD,
-	BAM_P_FIFO_SIZES,
-};
-
-struct reg_offset_data {
-	u32 base_offset;
-	unsigned int pipe_mult, evnt_mult, ee_mult;
-};
-
-static const struct reg_offset_data bam_v1_3_reg_info[] = {
-	[BAM_CTRL]		= { 0x0F80, 0x00, 0x00, 0x00 },
-	[BAM_REVISION]		= { 0x0F84, 0x00, 0x00, 0x00 },
-	[BAM_NUM_PIPES]		= { 0x0FBC, 0x00, 0x00, 0x00 },
-	[BAM_DESC_CNT_TRSHLD]	= { 0x0F88, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS]		= { 0x0F8C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_MSK]	= { 0x0F90, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0FB0, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_STTS]		= { 0x0F94, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_CLR]		= { 0x0F98, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_EN]		= { 0x0F9C, 0x00, 0x00, 0x00 },
-	[BAM_CNFG_BITS]		= { 0x0FFC, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_EE]	= { 0x1800, 0x00, 0x00, 0x80 },
-	[BAM_IRQ_SRCS_MSK_EE]	= { 0x1804, 0x00, 0x00, 0x80 },
-	[BAM_P_CTRL]		= { 0x0000, 0x80, 0x00, 0x00 },
-	[BAM_P_RST]		= { 0x0004, 0x80, 0x00, 0x00 },
-	[BAM_P_HALT]		= { 0x0008, 0x80, 0x00, 0x00 },
-	[BAM_P_IRQ_STTS]	= { 0x0010, 0x80, 0x00, 0x00 },
-	[BAM_P_IRQ_CLR]		= { 0x0014, 0x80, 0x00, 0x00 },
-	[BAM_P_IRQ_EN]		= { 0x0018, 0x80, 0x00, 0x00 },
-	[BAM_P_EVNT_DEST_ADDR]	= { 0x102C, 0x00, 0x40, 0x00 },
-	[BAM_P_EVNT_REG]	= { 0x1018, 0x00, 0x40, 0x00 },
-	[BAM_P_SW_OFSTS]	= { 0x1000, 0x00, 0x40, 0x00 },
-	[BAM_P_DATA_FIFO_ADDR]	= { 0x1024, 0x00, 0x40, 0x00 },
-	[BAM_P_DESC_FIFO_ADDR]	= { 0x101C, 0x00, 0x40, 0x00 },
-	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1028, 0x00, 0x40, 0x00 },
-	[BAM_P_FIFO_SIZES]	= { 0x1020, 0x00, 0x40, 0x00 },
-};
-
-static const struct reg_offset_data bam_v1_4_reg_info[] = {
-	[BAM_CTRL]		= { 0x0000, 0x00, 0x00, 0x00 },
-	[BAM_REVISION]		= { 0x0004, 0x00, 0x00, 0x00 },
-	[BAM_NUM_PIPES]		= { 0x003C, 0x00, 0x00, 0x00 },
-	[BAM_DESC_CNT_TRSHLD]	= { 0x0008, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS]		= { 0x000C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_MSK]	= { 0x0010, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0030, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_STTS]		= { 0x0014, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_CLR]		= { 0x0018, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_EN]		= { 0x001C, 0x00, 0x00, 0x00 },
-	[BAM_CNFG_BITS]		= { 0x007C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_EE]	= { 0x0800, 0x00, 0x00, 0x80 },
-	[BAM_IRQ_SRCS_MSK_EE]	= { 0x0804, 0x00, 0x00, 0x80 },
-	[BAM_P_CTRL]		= { 0x1000, 0x1000, 0x00, 0x00 },
-	[BAM_P_RST]		= { 0x1004, 0x1000, 0x00, 0x00 },
-	[BAM_P_HALT]		= { 0x1008, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_STTS]	= { 0x1010, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_CLR]		= { 0x1014, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_EN]		= { 0x1018, 0x1000, 0x00, 0x00 },
-	[BAM_P_EVNT_DEST_ADDR]	= { 0x182C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_REG]	= { 0x1818, 0x00, 0x1000, 0x00 },
-	[BAM_P_SW_OFSTS]	= { 0x1800, 0x00, 0x1000, 0x00 },
-	[BAM_P_DATA_FIFO_ADDR]	= { 0x1824, 0x00, 0x1000, 0x00 },
-	[BAM_P_DESC_FIFO_ADDR]	= { 0x181C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1828, 0x00, 0x1000, 0x00 },
-	[BAM_P_FIFO_SIZES]	= { 0x1820, 0x00, 0x1000, 0x00 },
-};
-
-static const struct reg_offset_data bam_v1_7_reg_info[] = {
-	[BAM_CTRL]		= { 0x00000, 0x00, 0x00, 0x00 },
-	[BAM_REVISION]		= { 0x01000, 0x00, 0x00, 0x00 },
-	[BAM_NUM_PIPES]		= { 0x01008, 0x00, 0x00, 0x00 },
-	[BAM_DESC_CNT_TRSHLD]	= { 0x00008, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS]		= { 0x03010, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_MSK]	= { 0x03014, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_UNMASKED]	= { 0x03018, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_STTS]		= { 0x00014, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_CLR]		= { 0x00018, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_EN]		= { 0x0001C, 0x00, 0x00, 0x00 },
-	[BAM_CNFG_BITS]		= { 0x0007C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_EE]	= { 0x03000, 0x00, 0x00, 0x1000 },
-	[BAM_IRQ_SRCS_MSK_EE]	= { 0x03004, 0x00, 0x00, 0x1000 },
-	[BAM_P_CTRL]		= { 0x13000, 0x1000, 0x00, 0x00 },
-	[BAM_P_RST]		= { 0x13004, 0x1000, 0x00, 0x00 },
-	[BAM_P_HALT]		= { 0x13008, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_STTS]	= { 0x13010, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_CLR]		= { 0x13014, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_EN]		= { 0x13018, 0x1000, 0x00, 0x00 },
-	[BAM_P_EVNT_DEST_ADDR]	= { 0x1382C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_REG]	= { 0x13818, 0x00, 0x1000, 0x00 },
-	[BAM_P_SW_OFSTS]	= { 0x13800, 0x00, 0x1000, 0x00 },
-	[BAM_P_DATA_FIFO_ADDR]	= { 0x13824, 0x00, 0x1000, 0x00 },
-	[BAM_P_DESC_FIFO_ADDR]	= { 0x1381C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x13828, 0x00, 0x1000, 0x00 },
-	[BAM_P_FIFO_SIZES]	= { 0x13820, 0x00, 0x1000, 0x00 },
-};
-
-/* BAM CTRL */
-#define BAM_SW_RST			BIT(0)
-#define BAM_EN				BIT(1)
-#define BAM_EN_ACCUM			BIT(4)
-#define BAM_TESTBUS_SEL_SHIFT		5
-#define BAM_TESTBUS_SEL_MASK		0x3F
-#define BAM_DESC_CACHE_SEL_SHIFT	13
-#define BAM_DESC_CACHE_SEL_MASK		0x3
-#define BAM_CACHED_DESC_STORE		BIT(15)
-#define IBC_DISABLE			BIT(16)
-
-/* BAM REVISION */
-#define REVISION_SHIFT		0
-#define REVISION_MASK		0xFF
-#define NUM_EES_SHIFT		8
-#define NUM_EES_MASK		0xF
-#define CE_BUFFER_SIZE		BIT(13)
-#define AXI_ACTIVE		BIT(14)
-#define USE_VMIDMT		BIT(15)
-#define SECURED			BIT(16)
-#define BAM_HAS_NO_BYPASS	BIT(17)
-#define HIGH_FREQUENCY_BAM	BIT(18)
-#define INACTIV_TMRS_EXST	BIT(19)
-#define NUM_INACTIV_TMRS	BIT(20)
-#define DESC_CACHE_DEPTH_SHIFT	21
-#define DESC_CACHE_DEPTH_1	(0 << DESC_CACHE_DEPTH_SHIFT)
-#define DESC_CACHE_DEPTH_2	(1 << DESC_CACHE_DEPTH_SHIFT)
-#define DESC_CACHE_DEPTH_3	(2 << DESC_CACHE_DEPTH_SHIFT)
-#define DESC_CACHE_DEPTH_4	(3 << DESC_CACHE_DEPTH_SHIFT)
-#define CMD_DESC_EN		BIT(23)
-#define INACTIV_TMR_BASE_SHIFT	24
-#define INACTIV_TMR_BASE_MASK	0xFF
-
-/* BAM NUM PIPES */
-#define BAM_NUM_PIPES_SHIFT		0
-#define BAM_NUM_PIPES_MASK		0xFF
-#define PERIPH_NON_PIPE_GRP_SHIFT	16
-#define PERIPH_NON_PIP_GRP_MASK		0xFF
-#define BAM_NON_PIPE_GRP_SHIFT		24
-#define BAM_NON_PIPE_GRP_MASK		0xFF
-
-/* BAM CNFG BITS */
-#define BAM_PIPE_CNFG		BIT(2)
-#define BAM_FULL_PIPE		BIT(11)
-#define BAM_NO_EXT_P_RST	BIT(12)
-#define BAM_IBC_DISABLE		BIT(13)
-#define BAM_SB_CLK_REQ		BIT(14)
-#define BAM_PSM_CSW_REQ		BIT(15)
-#define BAM_PSM_P_RES		BIT(16)
-#define BAM_AU_P_RES		BIT(17)
-#define BAM_SI_P_RES		BIT(18)
-#define BAM_WB_P_RES		BIT(19)
-#define BAM_WB_BLK_CSW		BIT(20)
-#define BAM_WB_CSW_ACK_IDL	BIT(21)
-#define BAM_WB_RETR_SVPNT	BIT(22)
-#define BAM_WB_DSC_AVL_P_RST	BIT(23)
-#define BAM_REG_P_EN		BIT(24)
-#define BAM_PSM_P_HD_DATA	BIT(25)
-#define BAM_AU_ACCUMED		BIT(26)
-#define BAM_CMD_ENABLE		BIT(27)
-
-#define BAM_CNFG_BITS_DEFAULT	(BAM_PIPE_CNFG |	\
-				 BAM_NO_EXT_P_RST |	\
-				 BAM_IBC_DISABLE |	\
-				 BAM_SB_CLK_REQ |	\
-				 BAM_PSM_CSW_REQ |	\
-				 BAM_PSM_P_RES |	\
-				 BAM_AU_P_RES |		\
-				 BAM_SI_P_RES |		\
-				 BAM_WB_P_RES |		\
-				 BAM_WB_BLK_CSW |	\
-				 BAM_WB_CSW_ACK_IDL |	\
-				 BAM_WB_RETR_SVPNT |	\
-				 BAM_WB_DSC_AVL_P_RST |	\
-				 BAM_REG_P_EN |		\
-				 BAM_PSM_P_HD_DATA |	\
-				 BAM_AU_ACCUMED |	\
-				 BAM_CMD_ENABLE)
-
-/* PIPE CTRL */
-#define P_EN			BIT(1)
-#define P_DIRECTION		BIT(3)
-#define P_SYS_STRM		BIT(4)
-#define P_SYS_MODE		BIT(5)
-#define P_AUTO_EOB		BIT(6)
-#define P_AUTO_EOB_SEL_SHIFT	7
-#define P_AUTO_EOB_SEL_512	(0 << P_AUTO_EOB_SEL_SHIFT)
-#define P_AUTO_EOB_SEL_256	(1 << P_AUTO_EOB_SEL_SHIFT)
-#define P_AUTO_EOB_SEL_128	(2 << P_AUTO_EOB_SEL_SHIFT)
-#define P_AUTO_EOB_SEL_64	(3 << P_AUTO_EOB_SEL_SHIFT)
-#define P_PREFETCH_LIMIT_SHIFT	9
-#define P_PREFETCH_LIMIT_32	(0 << P_PREFETCH_LIMIT_SHIFT)
-#define P_PREFETCH_LIMIT_16	(1 << P_PREFETCH_LIMIT_SHIFT)
-#define P_PREFETCH_LIMIT_4	(2 << P_PREFETCH_LIMIT_SHIFT)
-#define P_WRITE_NWD		BIT(11)
-#define P_LOCK_GROUP_SHIFT	16
-#define P_LOCK_GROUP_MASK	0x1F
-
-/* BAM_DESC_CNT_TRSHLD */
-#define CNT_TRSHLD		0xffff
-#define DEFAULT_CNT_THRSHLD	0x4
-
-/* BAM_IRQ_SRCS */
-#define BAM_IRQ			BIT(31)
-#define P_IRQ			0x7fffffff
-
-/* BAM_IRQ_SRCS_MSK */
-#define BAM_IRQ_MSK		BAM_IRQ
-#define P_IRQ_MSK		P_IRQ
-
-/* BAM_IRQ_STTS */
-#define BAM_TIMER_IRQ		BIT(4)
-#define BAM_EMPTY_IRQ		BIT(3)
-#define BAM_ERROR_IRQ		BIT(2)
-#define BAM_HRESP_ERR_IRQ	BIT(1)
-
-/* BAM_IRQ_CLR */
-#define BAM_TIMER_CLR		BIT(4)
-#define BAM_EMPTY_CLR		BIT(3)
-#define BAM_ERROR_CLR		BIT(2)
-#define BAM_HRESP_ERR_CLR	BIT(1)
-
-/* BAM_IRQ_EN */
-#define BAM_TIMER_EN		BIT(4)
-#define BAM_EMPTY_EN		BIT(3)
-#define BAM_ERROR_EN		BIT(2)
-#define BAM_HRESP_ERR_EN	BIT(1)
-
-/* BAM_P_IRQ_EN */
-#define P_PRCSD_DESC_EN		BIT(0)
-#define P_TIMER_EN		BIT(1)
-#define P_WAKE_EN		BIT(2)
-#define P_OUT_OF_DESC_EN	BIT(3)
-#define P_ERR_EN		BIT(4)
-#define P_TRNSFR_END_EN		BIT(5)
-#define P_DEFAULT_IRQS_EN	(P_PRCSD_DESC_EN | P_ERR_EN | P_TRNSFR_END_EN)
-
-/* BAM_P_SW_OFSTS */
-#define P_SW_OFSTS_MASK		0xffff
-
-#define BAM_DESC_FIFO_SIZE	SZ_32K
-#define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
-#define BAM_MAX_DATA_SIZE	(SZ_32K - 8)
-
-struct bam_chan {
-	struct virt_dma_chan vc;
-
-	struct bam_device *bdev;
-
-	/* configuration from device tree */
-	u32 id;
-
-	struct bam_async_desc *curr_txd;	/* current running dma */
-
-	/* runtime configuration */
-	struct dma_slave_config slave;
-
-	/* fifo storage */
-	struct bam_desc_hw *fifo_virt;
-	dma_addr_t fifo_phys;
-
-	/* fifo markers */
-	unsigned short head;		/* start of active descriptor entries */
-	unsigned short tail;		/* end of active descriptor entries */
-
-	unsigned int initialized;	/* is the channel hw initialized? */
-	unsigned int paused;		/* is the channel paused? */
-	unsigned int reconfigure;	/* new slave config? */
-
-	struct list_head node;
-};
-
-static inline struct bam_chan *to_bam_chan(struct dma_chan *common)
-{
-	return container_of(common, struct bam_chan, vc.chan);
-}
-
-struct bam_device {
-	void __iomem *regs;
-	struct device *dev;
-	struct dma_device common;
-	struct device_dma_parameters dma_parms;
-	struct bam_chan *channels;
-	u32 num_channels;
-
-	/* execution environment ID, from DT */
-	u32 ee;
-
-	const struct reg_offset_data *layout;
-
-	struct clk *bamclk;
-	int irq;
-
-	/* dma start transaction tasklet */
-	struct tasklet_struct task;
-};
-
-/**
- * bam_addr - returns BAM register address
- * @bdev: bam device
- * @pipe: pipe instance (ignored when register doesn't have multiple instances)
- * @reg:  register enum
- */
-static inline void __iomem *bam_addr(struct bam_device *bdev, u32 pipe,
-		enum bam_reg reg)
-{
-	const struct reg_offset_data r = bdev->layout[reg];
-
-	return bdev->regs + r.base_offset +
-		r.pipe_mult * pipe +
-		r.evnt_mult * pipe +
-		r.ee_mult * bdev->ee;
-}
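
As a quick illustration of the table-driven address math (using the v1.4
layout above; the pipe and EE numbers are made up for the example):

	bam_addr(bdev, 3, BAM_P_CTRL)
		= regs + 0x1000 (base) + 0x1000 (pipe_mult) * 3
		= regs + 0x4000

	bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE)	/* with bdev->ee == 1 */
		= regs + 0x0804 (base) + 0x80 (ee_mult) * 1
		= regs + 0x0884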
-
-/**
- * bam_reset_channel - Reset individual BAM DMA channel
- * @bchan: bam channel
- *
- * This function resets a specific BAM channel
- */
-static void bam_reset_channel(struct bam_chan *bchan)
-{
-	struct bam_device *bdev = bchan->bdev;
-
-	lockdep_assert_held(&bchan->vc.lock);
-
-	/* reset channel */
-	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_RST));
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_RST));
-
-	/* don't allow cpu to reorder BAM register accesses done after this */
-	wmb();
-
-	/* make sure hw is initialized when channel is used the first time  */
-	bchan->initialized = 0;
-}
-
-/**
- * bam_chan_init_hw - Initialize channel hardware
- * @bchan: bam channel
- *
- * This function resets and initializes the BAM channel
- */
-static void bam_chan_init_hw(struct bam_chan *bchan,
-	enum dma_transfer_direction dir)
-{
-	struct bam_device *bdev = bchan->bdev;
-	u32 val;
-
-	/* Reset the channel to clear internal state of the FIFO */
-	bam_reset_channel(bchan);
-
-	/*
-	 * write out 8 byte aligned address.  We have enough space for this
-	 * because we allocated 1 more descriptor (8 bytes) than we can use
-	 */
-	writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
-			bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR));
-	writel_relaxed(BAM_DESC_FIFO_SIZE,
-			bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES));
-
-	/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
-	writel_relaxed(P_DEFAULT_IRQS_EN,
-			bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
-
-	/* unmask the specific pipe and EE combo */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-	val |= BIT(bchan->id);
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-
-	/* don't allow cpu to reorder the channel enable done below */
-	wmb();
-
-	/* set fixed direction and mode, then enable channel */
-	val = P_EN | P_SYS_MODE;
-	if (dir == DMA_DEV_TO_MEM)
-		val |= P_DIRECTION;
-
-	writel_relaxed(val, bam_addr(bdev, bchan->id, BAM_P_CTRL));
-
-	bchan->initialized = 1;
-
-	/* init FIFO pointers */
-	bchan->head = 0;
-	bchan->tail = 0;
-}
-
-/**
- * bam_alloc_chan - Allocate channel resources for DMA channel.
- * @chan: specified channel
- *
- * This function allocates the FIFO descriptor memory
- */
-static int bam_alloc_chan(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-
-	if (bchan->fifo_virt)
-		return 0;
-
-	/* allocate FIFO descriptor space, but only if necessary */
-	bchan->fifo_virt = dma_alloc_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
-				&bchan->fifo_phys, GFP_KERNEL);
-
-	if (!bchan->fifo_virt) {
-		dev_err(bdev->dev, "Failed to allocate desc fifo\n");
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-/**
- * bam_free_chan - Frees dma resources associated with specific channel
- * @chan: specified channel
- *
- * Free the allocated fifo descriptor memory and channel resources
- *
- */
-static void bam_free_chan(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	u32 val;
-	unsigned long flags;
-
-	vchan_free_chan_resources(to_virt_chan(chan));
-
-	if (bchan->curr_txd) {
-		dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
-		return;
-	}
-
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-	bam_reset_channel(bchan);
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-
-	dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE, bchan->fifo_virt,
-				bchan->fifo_phys);
-	bchan->fifo_virt = NULL;
-
-	/* mask irq for pipe/channel */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-	val &= ~BIT(bchan->id);
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-
-	/* disable irq */
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
-}
-
-/**
- * bam_slave_config - set slave configuration for channel
- * @chan: dma channel
- * @cfg: slave configuration
- *
- * Sets slave configuration for channel
- *
- */
-static int bam_slave_config(struct dma_chan *chan,
-			    struct dma_slave_config *cfg)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flag;
-
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	memcpy(&bchan->slave, cfg, sizeof(*cfg));
-	bchan->reconfigure = 1;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	return 0;
-}
-
-/**
- * bam_prep_slave_sg - Prep slave sg transaction
- *
- * @chan: dma channel
- * @sgl: scatter gather list
- * @sg_len: length of sg
- * @direction: DMA transfer direction
- * @flags: DMA flags
- * @context: transfer context (unused)
- */
-static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
-	struct scatterlist *sgl, unsigned int sg_len,
-	enum dma_transfer_direction direction, unsigned long flags,
-	void *context)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	struct bam_async_desc *async_desc;
-	struct scatterlist *sg;
-	u32 i;
-	struct bam_desc_hw *desc;
-	unsigned int num_alloc = 0;
-
-
-	if (!is_slave_direction(direction)) {
-		dev_err(bdev->dev, "invalid dma direction\n");
-		return NULL;
-	}
-
-	/* calculate number of required entries */
-	for_each_sg(sgl, sg, sg_len, i)
-		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
-
-	/* allocate enough room to accommodate the number of entries */
-	async_desc = kzalloc(sizeof(*async_desc) +
-			(num_alloc * sizeof(struct bam_desc_hw)), GFP_NOWAIT);
-
-	if (!async_desc)
-		goto err_out;
-
-	if (flags & DMA_PREP_FENCE)
-		async_desc->flags |= DESC_FLAG_NWD;
-
-	if (flags & DMA_PREP_INTERRUPT)
-		async_desc->flags |= DESC_FLAG_EOT;
-	else
-		async_desc->flags |= DESC_FLAG_INT;
-
-	async_desc->num_desc = num_alloc;
-	async_desc->curr_desc = async_desc->desc;
-	async_desc->dir = direction;
-
-	/* fill in temporary descriptors */
-	desc = async_desc->desc;
-	for_each_sg(sgl, sg, sg_len, i) {
-		unsigned int remainder = sg_dma_len(sg);
-		unsigned int curr_offset = 0;
-
-		do {
-			desc->addr = sg_dma_address(sg) + curr_offset;
-
-			if (remainder > BAM_MAX_DATA_SIZE) {
-				desc->size = BAM_MAX_DATA_SIZE;
-				remainder -= BAM_MAX_DATA_SIZE;
-				curr_offset += BAM_MAX_DATA_SIZE;
-			} else {
-				desc->size = remainder;
-				remainder = 0;
-			}
-
-			async_desc->length += desc->size;
-			desc++;
-		} while (remainder > 0);
-	}
-
-	return vchan_tx_prep(&bchan->vc, &async_desc->vd, flags);
-
-err_out:
-	kfree(async_desc);
-	return NULL;
-}
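
The prep callback above is reached through the generic dmaengine slave API.
A minimal client-side sketch follows; the "tx" channel name, the
start_tx_dma() helper and its arguments are hypothetical, and only the
maxburst field of the slave config is actually consumed by this driver
(see bam_apply_new_config):

	#include <linux/dmaengine.h>
	#include <linux/scatterlist.h>

	static int start_tx_dma(struct device *dev, struct scatterlist *sgl,
				unsigned int nents)
	{
		struct dma_slave_config cfg = {
			.direction    = DMA_MEM_TO_DEV,
			.dst_maxburst = 16,	/* lands in BAM_DESC_CNT_TRSHLD */
		};
		struct dma_async_tx_descriptor *txd;
		struct dma_chan *chan;

		chan = dma_request_slave_channel(dev, "tx");
		if (!chan)
			return -ENODEV;

		dmaengine_slave_config(chan, &cfg);

		/* DMA_PREP_INTERRUPT selects the EOT flag on the last descriptor */
		txd = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_MEM_TO_DEV,
					      DMA_PREP_INTERRUPT);
		if (!txd) {
			dma_release_channel(chan);
			return -EINVAL;
		}

		dmaengine_submit(txd);
		dma_async_issue_pending(chan);	/* the transfer actually starts here */
		return 0;
	}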
-
-/**
- * bam_dma_terminate_all - terminate all transactions on a channel
- * @bchan: bam dma channel
- *
- * Dequeues and frees all transactions
- * No callbacks are done
- *
- */
-static int bam_dma_terminate_all(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flag;
-	LIST_HEAD(head);
-
-	/* remove all transactions, including active transaction */
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	if (bchan->curr_txd) {
-		list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
-		bchan->curr_txd = NULL;
-	}
-
-	vchan_get_all_descriptors(&bchan->vc, &head);
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	vchan_dma_desc_free_list(&bchan->vc, &head);
-
-	return 0;
-}
-
-/**
- * bam_pause - Pause DMA channel
- * @chan: dma channel
- *
- */
-static int bam_pause(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	unsigned long flag;
-
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
-	bchan->paused = 1;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	return 0;
-}
-
-/**
- * bam_resume - Resume DMA channel operations
- * @chan: dma channel
- *
- */
-static int bam_resume(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	unsigned long flag;
-
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
-	bchan->paused = 0;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	return 0;
-}
-
-/**
- * process_channel_irqs - processes the channel interrupts
- * @bdev: bam controller
- *
- * This function processes the channel interrupts
- *
- */
-static u32 process_channel_irqs(struct bam_device *bdev)
-{
-	u32 i, srcs, pipe_stts;
-	unsigned long flags;
-	struct bam_async_desc *async_desc;
-
-	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));
-
-	/* return early if no pipe/channel interrupts are present */
-	if (!(srcs & P_IRQ))
-		return srcs;
-
-	for (i = 0; i < bdev->num_channels; i++) {
-		struct bam_chan *bchan = &bdev->channels[i];
-
-		if (!(srcs & BIT(i)))
-			continue;
-
-		/* clear pipe irq */
-		pipe_stts = readl_relaxed(bam_addr(bdev, i, BAM_P_IRQ_STTS));
-
-		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));
-
-		spin_lock_irqsave(&bchan->vc.lock, flags);
-		async_desc = bchan->curr_txd;
-
-		if (async_desc) {
-			async_desc->num_desc -= async_desc->xfer_len;
-			async_desc->curr_desc += async_desc->xfer_len;
-			bchan->curr_txd = NULL;
-
-			/* manage FIFO */
-			bchan->head += async_desc->xfer_len;
-			bchan->head %= MAX_DESCRIPTORS;
-
-			/*
-			 * if complete, process cookie.  Otherwise
-			 * push back to front of desc_issued so that
-			 * it gets restarted by the tasklet
-			 */
-			if (!async_desc->num_desc)
-				vchan_cookie_complete(&async_desc->vd);
-			else
-				list_add(&async_desc->vd.node,
-					&bchan->vc.desc_issued);
-		}
-
-		spin_unlock_irqrestore(&bchan->vc.lock, flags);
-	}
-
-	return srcs;
-}
-
-/**
- * bam_dma_irq - irq handler for bam controller
- * @irq: IRQ of interrupt
- * @data: callback data
- *
- * IRQ handler for the bam controller
- */
-static irqreturn_t bam_dma_irq(int irq, void *data)
-{
-	struct bam_device *bdev = data;
-	u32 clr_mask = 0, srcs = 0;
-
-	srcs |= process_channel_irqs(bdev);
-
-	/* kick off tasklet to start next dma transfer */
-	if (srcs & P_IRQ)
-		tasklet_schedule(&bdev->task);
-
-	if (srcs & BAM_IRQ)
-		clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
-
-	/* don't allow reorder of the various accesses to the BAM registers */
-	mb();
-
-	writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
-
-	return IRQ_HANDLED;
-}
-
-/**
- * bam_tx_status - returns status of transaction
- * @chan: dma channel
- * @cookie: transaction cookie
- * @txstate: DMA transaction state
- *
- * Return status of dma transaction
- */
-static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
-		struct dma_tx_state *txstate)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct virt_dma_desc *vd;
-	int ret;
-	size_t residue = 0;
-	unsigned int i;
-	unsigned long flags;
-
-	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret == DMA_COMPLETE)
-		return ret;
-
-	if (!txstate)
-		return bchan->paused ? DMA_PAUSED : ret;
-
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-	vd = vchan_find_desc(&bchan->vc, cookie);
-	if (vd)
-		residue = container_of(vd, struct bam_async_desc, vd)->length;
-	else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
-		for (i = 0; i < bchan->curr_txd->num_desc; i++)
-			residue += bchan->curr_txd->curr_desc[i].size;
-
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-
-	dma_set_residue(txstate, residue);
-
-	if (ret == DMA_IN_PROGRESS && bchan->paused)
-		ret = DMA_PAUSED;
-
-	return ret;
-}
-
-/**
- * bam_apply_new_config
- * @bchan: bam dma channel
- * @dir: DMA direction
- */
-static void bam_apply_new_config(struct bam_chan *bchan,
-	enum dma_transfer_direction dir)
-{
-	struct bam_device *bdev = bchan->bdev;
-	u32 maxburst;
-
-	if (dir == DMA_DEV_TO_MEM)
-		maxburst = bchan->slave.src_maxburst;
-	else
-		maxburst = bchan->slave.dst_maxburst;
-
-	writel_relaxed(maxburst, bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
-
-	bchan->reconfigure = 0;
-}
-
-/**
- * bam_start_dma - start next transaction
- * @bchan: bam dma channel
- */
-static void bam_start_dma(struct bam_chan *bchan)
-{
-	struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
-	struct bam_device *bdev = bchan->bdev;
-	struct bam_async_desc *async_desc;
-	struct bam_desc_hw *desc;
-	struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
-					sizeof(struct bam_desc_hw));
-
-	lockdep_assert_held(&bchan->vc.lock);
-
-	if (!vd)
-		return;
-
-	list_del(&vd->node);
-
-	async_desc = container_of(vd, struct bam_async_desc, vd);
-	bchan->curr_txd = async_desc;
-
-	/* on first use, initialize the channel hardware */
-	if (!bchan->initialized)
-		bam_chan_init_hw(bchan, async_desc->dir);
-
-	/* apply new slave config changes, if necessary */
-	if (bchan->reconfigure)
-		bam_apply_new_config(bchan, async_desc->dir);
-
-	desc = bchan->curr_txd->curr_desc;
-
-	if (async_desc->num_desc > MAX_DESCRIPTORS)
-		async_desc->xfer_len = MAX_DESCRIPTORS;
-	else
-		async_desc->xfer_len = async_desc->num_desc;
-
-	/* set any special flags on the last descriptor */
-	if (async_desc->num_desc == async_desc->xfer_len)
-		desc[async_desc->xfer_len - 1].flags = async_desc->flags;
-	else
-		desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
-
-	if (bchan->tail + async_desc->xfer_len > MAX_DESCRIPTORS) {
-		u32 partial = MAX_DESCRIPTORS - bchan->tail;
-
-		memcpy(&fifo[bchan->tail], desc,
-				partial * sizeof(struct bam_desc_hw));
-		memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
-				sizeof(struct bam_desc_hw));
-	} else {
-		memcpy(&fifo[bchan->tail], desc,
-			async_desc->xfer_len * sizeof(struct bam_desc_hw));
-	}
-
-	bchan->tail += async_desc->xfer_len;
-	bchan->tail %= MAX_DESCRIPTORS;
-
-	/* ensure descriptor writes and dma start not reordered */
-	wmb();
-	writel_relaxed(bchan->tail * sizeof(struct bam_desc_hw),
-			bam_addr(bdev, bchan->id, BAM_P_EVNT_REG));
-}
-
-/**
- * dma_tasklet - DMA IRQ tasklet
- * @data: tasklet argument (bam controller structure)
- *
- * Sets up next DMA operation and then processes all completed transactions
- */
-static void dma_tasklet(unsigned long data)
-{
-	struct bam_device *bdev = (struct bam_device *)data;
-	struct bam_chan *bchan;
-	unsigned long flags;
-	unsigned int i;
-
-	/* go through the channels and kick off transactions */
-	for (i = 0; i < bdev->num_channels; i++) {
-		bchan = &bdev->channels[i];
-		spin_lock_irqsave(&bchan->vc.lock, flags);
-
-		if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
-			bam_start_dma(bchan);
-		spin_unlock_irqrestore(&bchan->vc.lock, flags);
-	}
-}
-
-/**
- * bam_issue_pending - starts pending transactions
- * @chan: dma channel
- *
- * Starts any pending transactions immediately if the channel is idle
- */
-static void bam_issue_pending(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flags;
-
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-
-	/* if work pending and idle, start a transaction */
-	if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
-		bam_start_dma(bchan);
-
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-}
-
-/**
- * bam_dma_free_desc - free descriptor memory
- * @vd: virtual descriptor
- *
- */
-static void bam_dma_free_desc(struct virt_dma_desc *vd)
-{
-	struct bam_async_desc *async_desc = container_of(vd,
-			struct bam_async_desc, vd);
-
-	kfree(async_desc);
-}
-
-static struct dma_chan *bam_dma_xlate(struct of_phandle_args *dma_spec,
-		struct of_dma *of)
-{
-	struct bam_device *bdev = container_of(of->of_dma_data,
-					struct bam_device, common);
-	unsigned int request;
-
-	if (dma_spec->args_count != 1)
-		return NULL;
-
-	request = dma_spec->args[0];
-	if (request >= bdev->num_channels)
-		return NULL;
-
-	return dma_get_slave_channel(&(bdev->channels[request].vc.chan));
-}
-
-/**
- * bam_init
- * @bdev: bam device
- *
- * Initialization helper for global bam registers
- */
-static int bam_init(struct bam_device *bdev)
-{
-	u32 val;
-
-	/* read revision and configuration information */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION)) >> NUM_EES_SHIFT;
-	val &= NUM_EES_MASK;
-
-	/* check that configured EE is within range */
-	if (bdev->ee >= val)
-		return -EINVAL;
-
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
-	bdev->num_channels = val & BAM_NUM_PIPES_MASK;
-
-	/* s/w reset bam */
-	/* after reset all pipes are disabled and idle */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
-	val |= BAM_SW_RST;
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
-	val &= ~BAM_SW_RST;
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
-
-	/* make sure previous stores are visible before enabling BAM */
-	wmb();
-
-	/* enable bam */
-	val |= BAM_EN;
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
-
-	/* set descriptor threshold, start with 4 bytes */
-	writel_relaxed(DEFAULT_CNT_THRSHLD,
-			bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
-
-	/* Enable default set of h/w workarounds, i.e. all except BAM_FULL_PIPE */
-	writel_relaxed(BAM_CNFG_BITS_DEFAULT, bam_addr(bdev, 0, BAM_CNFG_BITS));
-
-	/* enable irqs for errors */
-	writel_relaxed(BAM_ERROR_EN | BAM_HRESP_ERR_EN,
-			bam_addr(bdev, 0, BAM_IRQ_EN));
-
-	/* unmask global bam interrupt */
-	writel_relaxed(BAM_IRQ_MSK, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-
-	return 0;
-}
-
-static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
-	u32 index)
-{
-	bchan->id = index;
-	bchan->bdev = bdev;
-
-	vchan_init(&bchan->vc, &bdev->common);
-	bchan->vc.desc_free = bam_dma_free_desc;
-}
-
-static const struct of_device_id bam_of_match[] = {
-	{ .compatible = "qcom,bam-v1.3.0", .data = &bam_v1_3_reg_info },
-	{ .compatible = "qcom,bam-v1.4.0", .data = &bam_v1_4_reg_info },
-	{ .compatible = "qcom,bam-v1.7.0", .data = &bam_v1_7_reg_info },
-	{}
-};
-
-MODULE_DEVICE_TABLE(of, bam_of_match);
-
-static int bam_dma_probe(struct platform_device *pdev)
-{
-	struct bam_device *bdev;
-	const struct of_device_id *match;
-	struct resource *iores;
-	int ret, i;
-
-	bdev = devm_kzalloc(&pdev->dev, sizeof(*bdev), GFP_KERNEL);
-	if (!bdev)
-		return -ENOMEM;
-
-	bdev->dev = &pdev->dev;
-
-	match = of_match_node(bam_of_match, pdev->dev.of_node);
-	if (!match) {
-		dev_err(&pdev->dev, "Unsupported BAM module\n");
-		return -ENODEV;
-	}
-
-	bdev->layout = match->data;
-
-	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	bdev->regs = devm_ioremap_resource(&pdev->dev, iores);
-	if (IS_ERR(bdev->regs))
-		return PTR_ERR(bdev->regs);
-
-	bdev->irq = platform_get_irq(pdev, 0);
-	if (bdev->irq < 0)
-		return bdev->irq;
-
-	ret = of_property_read_u32(pdev->dev.of_node, "qcom,ee", &bdev->ee);
-	if (ret) {
-		dev_err(bdev->dev, "Execution environment unspecified\n");
-		return ret;
-	}
-
-	bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
-	if (IS_ERR(bdev->bamclk))
-		return PTR_ERR(bdev->bamclk);
-
-	ret = clk_prepare_enable(bdev->bamclk);
-	if (ret) {
-		dev_err(bdev->dev, "failed to prepare/enable clock\n");
-		return ret;
-	}
-
-	ret = bam_init(bdev);
-	if (ret)
-		goto err_disable_clk;
-
-	tasklet_init(&bdev->task, dma_tasklet, (unsigned long)bdev);
-
-	bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
-				sizeof(*bdev->channels), GFP_KERNEL);
-
-	if (!bdev->channels) {
-		ret = -ENOMEM;
-		goto err_tasklet_kill;
-	}
-
-	/* allocate and initialize channels */
-	INIT_LIST_HEAD(&bdev->common.channels);
-
-	for (i = 0; i < bdev->num_channels; i++)
-		bam_channel_init(bdev, &bdev->channels[i], i);
-
-	ret = devm_request_irq(bdev->dev, bdev->irq, bam_dma_irq,
-			IRQF_TRIGGER_HIGH, "bam_dma", bdev);
-	if (ret)
-		goto err_bam_channel_exit;
-
-	/* set max dma segment size */
-	bdev->common.dev = bdev->dev;
-	bdev->common.dev->dma_parms = &bdev->dma_parms;
-	ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
-	if (ret) {
-		dev_err(bdev->dev, "cannot set maximum segment size\n");
-		goto err_bam_channel_exit;
-	}
-
-	platform_set_drvdata(pdev, bdev);
-
-	/* set capabilities */
-	dma_cap_zero(bdev->common.cap_mask);
-	dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
-
-	/* initialize dmaengine apis */
-	bdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	bdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
-	bdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	bdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
-	bdev->common.device_free_chan_resources = bam_free_chan;
-	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
-	bdev->common.device_config = bam_slave_config;
-	bdev->common.device_pause = bam_pause;
-	bdev->common.device_resume = bam_resume;
-	bdev->common.device_terminate_all = bam_dma_terminate_all;
-	bdev->common.device_issue_pending = bam_issue_pending;
-	bdev->common.device_tx_status = bam_tx_status;
-	bdev->common.dev = bdev->dev;
-
-	ret = dma_async_device_register(&bdev->common);
-	if (ret) {
-		dev_err(bdev->dev, "failed to register dma async device\n");
-		goto err_bam_channel_exit;
-	}
-
-	ret = of_dma_controller_register(pdev->dev.of_node, bam_dma_xlate,
-					&bdev->common);
-	if (ret)
-		goto err_unregister_dma;
-
-	return 0;
-
-err_unregister_dma:
-	dma_async_device_unregister(&bdev->common);
-err_bam_channel_exit:
-	for (i = 0; i < bdev->num_channels; i++)
-		tasklet_kill(&bdev->channels[i].vc.task);
-err_tasklet_kill:
-	tasklet_kill(&bdev->task);
-err_disable_clk:
-	clk_disable_unprepare(bdev->bamclk);
-
-	return ret;
-}
-
-static int bam_dma_remove(struct platform_device *pdev)
-{
-	struct bam_device *bdev = platform_get_drvdata(pdev);
-	u32 i;
-
-	of_dma_controller_free(pdev->dev.of_node);
-	dma_async_device_unregister(&bdev->common);
-
-	/* mask all interrupts for this execution environment */
-	writel_relaxed(0, bam_addr(bdev, 0,  BAM_IRQ_SRCS_MSK_EE));
-
-	devm_free_irq(bdev->dev, bdev->irq, bdev);
-
-	for (i = 0; i < bdev->num_channels; i++) {
-		bam_dma_terminate_all(&bdev->channels[i].vc.chan);
-		tasklet_kill(&bdev->channels[i].vc.task);
-
-		dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
-			bdev->channels[i].fifo_virt,
-			bdev->channels[i].fifo_phys);
-	}
-
-	tasklet_kill(&bdev->task);
-
-	clk_disable_unprepare(bdev->bamclk);
-
-	return 0;
-}
-
-static struct platform_driver bam_dma_driver = {
-	.probe = bam_dma_probe,
-	.remove = bam_dma_remove,
-	.driver = {
-		.name = "bam-dma-engine",
-		.of_match_table = bam_of_match,
-	},
-};
-
-module_platform_driver(bam_dma_driver);
-
-MODULE_AUTHOR("Andy Gross <agross@codeaurora.org>");
-MODULE_DESCRIPTION("QCOM BAM DMA engine driver");
-MODULE_LICENSE("GPL v2");
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 1/4] dma: qcom_bam_dma: move to qcom directory
@ 2015-11-08  4:52   ` Sinan Kaya
  0 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: linux-arm-kernel

Creating a QCOM directory for all QCOM DMA
source files.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 drivers/dma/Kconfig        |   13 +-
 drivers/dma/Makefile       |    2 +-
 drivers/dma/qcom/Kconfig   |    9 +
 drivers/dma/qcom/Makefile  |    1 +
 drivers/dma/qcom/bam_dma.c | 1259 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/qcom_bam_dma.c | 1259 --------------------------------------------
 6 files changed, 1273 insertions(+), 1270 deletions(-)
 create mode 100644 drivers/dma/qcom/Kconfig
 create mode 100644 drivers/dma/qcom/Makefile
 create mode 100644 drivers/dma/qcom/bam_dma.c
 delete mode 100644 drivers/dma/qcom_bam_dma.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index b458475..d17d9ec 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -320,7 +320,7 @@ config MOXART_DMA
 	select DMA_VIRTUAL_CHANNELS
 	help
 	  Enable support for the MOXA ART SoC DMA controller.
- 
+
 	  Say Y here if you enabled MMP ADMA, otherwise say N.
 
 config MPC512X_DMA
@@ -408,15 +408,6 @@ config PXA_DMA
 	  16 to 32 channels for peripheral to memory or memory to memory
 	  transfers.
 
-config QCOM_BAM_DMA
-	tristate "QCOM BAM DMA support"
-	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
-	select DMA_ENGINE
-	select DMA_VIRTUAL_CHANNELS
-	---help---
-	  Enable support for the QCOM BAM DMA controller.  This controller
-	  provides DMA capabilities for a variety of on-chip devices.
-
 config SIRF_DMA
 	tristate "CSR SiRFprimaII/SiRFmarco DMA support"
 	depends on ARCH_SIRF
@@ -527,6 +518,8 @@ config ZX_DMA
 # driver files
 source "drivers/dma/bestcomm/Kconfig"
 
+source "drivers/dma/qcom/Kconfig"
+
 source "drivers/dma/dw/Kconfig"
 
 source "drivers/dma/hsu/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 7711a71..8dba90d 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -52,7 +52,6 @@ obj-$(CONFIG_PCH_DMA) += pch_dma.o
 obj-$(CONFIG_PL330_DMA) += pl330.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_PXA_DMA) += pxa_dma.o
-obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
 obj-$(CONFIG_RENESAS_DMA) += sh/
 obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
@@ -66,4 +65,5 @@ obj-$(CONFIG_TI_EDMA) += edma.o
 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx296702_dma.o
 
+obj-y += qcom/
 obj-y += xilinx/
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
new file mode 100644
index 0000000..17545df
--- /dev/null
+++ b/drivers/dma/qcom/Kconfig
@@ -0,0 +1,9 @@
+config QCOM_BAM_DMA
+	tristate "QCOM BAM DMA support"
+	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	---help---
+	  Enable support for the QCOM BAM DMA controller.  This controller
+	  provides DMA capabilities for a variety of on-chip devices.
+
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
new file mode 100644
index 0000000..f612ae3
--- /dev/null
+++ b/drivers/dma/qcom/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
new file mode 100644
index 0000000..5359234
--- /dev/null
+++ b/drivers/dma/qcom/bam_dma.c
@@ -0,0 +1,1259 @@
+/*
+ * Copyright (c) 2013-2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+/*
+ * QCOM BAM DMA engine driver
+ *
+ * QCOM BAM DMA blocks are distributed amongst a number of the on-chip
+ * peripherals on the MSM 8x74.  The configuration of the channels is dependent
+ * on the way they are hard wired to that specific peripheral.  The peripheral
+ * device tree entries specify the configuration of each channel.
+ *
+ * The DMA controller requires the use of external memory for storage of the
+ * hardware descriptors for each channel.  The descriptor FIFO is accessed as a
+ * circular buffer and operations are managed according to the offset within the
+ * FIFO.  After pipe/channel reset, all of the pipe registers and internal state
+ * are back to defaults.
+ *
+ * During DMA operations, we write descriptors to the FIFO, being careful to
+ * handle wrapping and then write the last FIFO offset to that channel's
+ * P_EVNT_REG register to kick off the transaction.  The P_SW_OFSTS register
+ * indicates the current FIFO offset that is being processed, so there is some
+ * indication of where the hardware is currently working.
+ */
+
+#include <linux/kernel.h>
+#include <linux/io.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_dma.h>
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+
+#include "../dmaengine.h"
+#include "../virt-dma.h"
+
+struct bam_desc_hw {
+	u32 addr;		/* Buffer physical address */
+	u16 size;		/* Buffer size in bytes */
+	u16 flags;
+};
+
+#define DESC_FLAG_INT BIT(15)
+#define DESC_FLAG_EOT BIT(14)
+#define DESC_FLAG_EOB BIT(13)
+#define DESC_FLAG_NWD BIT(12)
+
+struct bam_async_desc {
+	struct virt_dma_desc vd;
+
+	u32 num_desc;
+	u32 xfer_len;
+
+	/* transaction flags, EOT|EOB|NWD */
+	u16 flags;
+
+	struct bam_desc_hw *curr_desc;
+
+	enum dma_transfer_direction dir;
+	size_t length;
+	struct bam_desc_hw desc[0];
+};
+
+enum bam_reg {
+	BAM_CTRL,
+	BAM_REVISION,
+	BAM_NUM_PIPES,
+	BAM_DESC_CNT_TRSHLD,
+	BAM_IRQ_SRCS,
+	BAM_IRQ_SRCS_MSK,
+	BAM_IRQ_SRCS_UNMASKED,
+	BAM_IRQ_STTS,
+	BAM_IRQ_CLR,
+	BAM_IRQ_EN,
+	BAM_CNFG_BITS,
+	BAM_IRQ_SRCS_EE,
+	BAM_IRQ_SRCS_MSK_EE,
+	BAM_P_CTRL,
+	BAM_P_RST,
+	BAM_P_HALT,
+	BAM_P_IRQ_STTS,
+	BAM_P_IRQ_CLR,
+	BAM_P_IRQ_EN,
+	BAM_P_EVNT_DEST_ADDR,
+	BAM_P_EVNT_REG,
+	BAM_P_SW_OFSTS,
+	BAM_P_DATA_FIFO_ADDR,
+	BAM_P_DESC_FIFO_ADDR,
+	BAM_P_EVNT_GEN_TRSHLD,
+	BAM_P_FIFO_SIZES,
+};
+
+struct reg_offset_data {
+	u32 base_offset;
+	unsigned int pipe_mult, evnt_mult, ee_mult;
+};
+
+static const struct reg_offset_data bam_v1_3_reg_info[] = {
+	[BAM_CTRL]		= { 0x0F80, 0x00, 0x00, 0x00 },
+	[BAM_REVISION]		= { 0x0F84, 0x00, 0x00, 0x00 },
+	[BAM_NUM_PIPES]		= { 0x0FBC, 0x00, 0x00, 0x00 },
+	[BAM_DESC_CNT_TRSHLD]	= { 0x0F88, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS]		= { 0x0F8C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_MSK]	= { 0x0F90, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0FB0, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_STTS]		= { 0x0F94, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_CLR]		= { 0x0F98, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_EN]		= { 0x0F9C, 0x00, 0x00, 0x00 },
+	[BAM_CNFG_BITS]		= { 0x0FFC, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_EE]	= { 0x1800, 0x00, 0x00, 0x80 },
+	[BAM_IRQ_SRCS_MSK_EE]	= { 0x1804, 0x00, 0x00, 0x80 },
+	[BAM_P_CTRL]		= { 0x0000, 0x80, 0x00, 0x00 },
+	[BAM_P_RST]		= { 0x0004, 0x80, 0x00, 0x00 },
+	[BAM_P_HALT]		= { 0x0008, 0x80, 0x00, 0x00 },
+	[BAM_P_IRQ_STTS]	= { 0x0010, 0x80, 0x00, 0x00 },
+	[BAM_P_IRQ_CLR]		= { 0x0014, 0x80, 0x00, 0x00 },
+	[BAM_P_IRQ_EN]		= { 0x0018, 0x80, 0x00, 0x00 },
+	[BAM_P_EVNT_DEST_ADDR]	= { 0x102C, 0x00, 0x40, 0x00 },
+	[BAM_P_EVNT_REG]	= { 0x1018, 0x00, 0x40, 0x00 },
+	[BAM_P_SW_OFSTS]	= { 0x1000, 0x00, 0x40, 0x00 },
+	[BAM_P_DATA_FIFO_ADDR]	= { 0x1024, 0x00, 0x40, 0x00 },
+	[BAM_P_DESC_FIFO_ADDR]	= { 0x101C, 0x00, 0x40, 0x00 },
+	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1028, 0x00, 0x40, 0x00 },
+	[BAM_P_FIFO_SIZES]	= { 0x1020, 0x00, 0x40, 0x00 },
+};
+
+static const struct reg_offset_data bam_v1_4_reg_info[] = {
+	[BAM_CTRL]		= { 0x0000, 0x00, 0x00, 0x00 },
+	[BAM_REVISION]		= { 0x0004, 0x00, 0x00, 0x00 },
+	[BAM_NUM_PIPES]		= { 0x003C, 0x00, 0x00, 0x00 },
+	[BAM_DESC_CNT_TRSHLD]	= { 0x0008, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS]		= { 0x000C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_MSK]	= { 0x0010, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0030, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_STTS]		= { 0x0014, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_CLR]		= { 0x0018, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_EN]		= { 0x001C, 0x00, 0x00, 0x00 },
+	[BAM_CNFG_BITS]		= { 0x007C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_EE]	= { 0x0800, 0x00, 0x00, 0x80 },
+	[BAM_IRQ_SRCS_MSK_EE]	= { 0x0804, 0x00, 0x00, 0x80 },
+	[BAM_P_CTRL]		= { 0x1000, 0x1000, 0x00, 0x00 },
+	[BAM_P_RST]		= { 0x1004, 0x1000, 0x00, 0x00 },
+	[BAM_P_HALT]		= { 0x1008, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_STTS]	= { 0x1010, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_CLR]		= { 0x1014, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_EN]		= { 0x1018, 0x1000, 0x00, 0x00 },
+	[BAM_P_EVNT_DEST_ADDR]	= { 0x182C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_REG]	= { 0x1818, 0x00, 0x1000, 0x00 },
+	[BAM_P_SW_OFSTS]	= { 0x1800, 0x00, 0x1000, 0x00 },
+	[BAM_P_DATA_FIFO_ADDR]	= { 0x1824, 0x00, 0x1000, 0x00 },
+	[BAM_P_DESC_FIFO_ADDR]	= { 0x181C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1828, 0x00, 0x1000, 0x00 },
+	[BAM_P_FIFO_SIZES]	= { 0x1820, 0x00, 0x1000, 0x00 },
+};
+
+static const struct reg_offset_data bam_v1_7_reg_info[] = {
+	[BAM_CTRL]		= { 0x00000, 0x00, 0x00, 0x00 },
+	[BAM_REVISION]		= { 0x01000, 0x00, 0x00, 0x00 },
+	[BAM_NUM_PIPES]		= { 0x01008, 0x00, 0x00, 0x00 },
+	[BAM_DESC_CNT_TRSHLD]	= { 0x00008, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS]		= { 0x03010, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_MSK]	= { 0x03014, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_UNMASKED]	= { 0x03018, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_STTS]		= { 0x00014, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_CLR]		= { 0x00018, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_EN]		= { 0x0001C, 0x00, 0x00, 0x00 },
+	[BAM_CNFG_BITS]		= { 0x0007C, 0x00, 0x00, 0x00 },
+	[BAM_IRQ_SRCS_EE]	= { 0x03000, 0x00, 0x00, 0x1000 },
+	[BAM_IRQ_SRCS_MSK_EE]	= { 0x03004, 0x00, 0x00, 0x1000 },
+	[BAM_P_CTRL]		= { 0x13000, 0x1000, 0x00, 0x00 },
+	[BAM_P_RST]		= { 0x13004, 0x1000, 0x00, 0x00 },
+	[BAM_P_HALT]		= { 0x13008, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_STTS]	= { 0x13010, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_CLR]		= { 0x13014, 0x1000, 0x00, 0x00 },
+	[BAM_P_IRQ_EN]		= { 0x13018, 0x1000, 0x00, 0x00 },
+	[BAM_P_EVNT_DEST_ADDR]	= { 0x1382C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_REG]	= { 0x13818, 0x00, 0x1000, 0x00 },
+	[BAM_P_SW_OFSTS]	= { 0x13800, 0x00, 0x1000, 0x00 },
+	[BAM_P_DATA_FIFO_ADDR]	= { 0x13824, 0x00, 0x1000, 0x00 },
+	[BAM_P_DESC_FIFO_ADDR]	= { 0x1381C, 0x00, 0x1000, 0x00 },
+	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x13828, 0x00, 0x1000, 0x00 },
+	[BAM_P_FIFO_SIZES]	= { 0x13820, 0x00, 0x1000, 0x00 },
+};
+
+/* BAM CTRL */
+#define BAM_SW_RST			BIT(0)
+#define BAM_EN				BIT(1)
+#define BAM_EN_ACCUM			BIT(4)
+#define BAM_TESTBUS_SEL_SHIFT		5
+#define BAM_TESTBUS_SEL_MASK		0x3F
+#define BAM_DESC_CACHE_SEL_SHIFT	13
+#define BAM_DESC_CACHE_SEL_MASK		0x3
+#define BAM_CACHED_DESC_STORE		BIT(15)
+#define IBC_DISABLE			BIT(16)
+
+/* BAM REVISION */
+#define REVISION_SHIFT		0
+#define REVISION_MASK		0xFF
+#define NUM_EES_SHIFT		8
+#define NUM_EES_MASK		0xF
+#define CE_BUFFER_SIZE		BIT(13)
+#define AXI_ACTIVE		BIT(14)
+#define USE_VMIDMT		BIT(15)
+#define SECURED			BIT(16)
+#define BAM_HAS_NO_BYPASS	BIT(17)
+#define HIGH_FREQUENCY_BAM	BIT(18)
+#define INACTIV_TMRS_EXST	BIT(19)
+#define NUM_INACTIV_TMRS	BIT(20)
+#define DESC_CACHE_DEPTH_SHIFT	21
+#define DESC_CACHE_DEPTH_1	(0 << DESC_CACHE_DEPTH_SHIFT)
+#define DESC_CACHE_DEPTH_2	(1 << DESC_CACHE_DEPTH_SHIFT)
+#define DESC_CACHE_DEPTH_3	(2 << DESC_CACHE_DEPTH_SHIFT)
+#define DESC_CACHE_DEPTH_4	(3 << DESC_CACHE_DEPTH_SHIFT)
+#define CMD_DESC_EN		BIT(23)
+#define INACTIV_TMR_BASE_SHIFT	24
+#define INACTIV_TMR_BASE_MASK	0xFF
+
+/* BAM NUM PIPES */
+#define BAM_NUM_PIPES_SHIFT		0
+#define BAM_NUM_PIPES_MASK		0xFF
+#define PERIPH_NON_PIPE_GRP_SHIFT	16
+#define PERIPH_NON_PIP_GRP_MASK		0xFF
+#define BAM_NON_PIPE_GRP_SHIFT		24
+#define BAM_NON_PIPE_GRP_MASK		0xFF
+
+/* BAM CNFG BITS */
+#define BAM_PIPE_CNFG		BIT(2)
+#define BAM_FULL_PIPE		BIT(11)
+#define BAM_NO_EXT_P_RST	BIT(12)
+#define BAM_IBC_DISABLE		BIT(13)
+#define BAM_SB_CLK_REQ		BIT(14)
+#define BAM_PSM_CSW_REQ		BIT(15)
+#define BAM_PSM_P_RES		BIT(16)
+#define BAM_AU_P_RES		BIT(17)
+#define BAM_SI_P_RES		BIT(18)
+#define BAM_WB_P_RES		BIT(19)
+#define BAM_WB_BLK_CSW		BIT(20)
+#define BAM_WB_CSW_ACK_IDL	BIT(21)
+#define BAM_WB_RETR_SVPNT	BIT(22)
+#define BAM_WB_DSC_AVL_P_RST	BIT(23)
+#define BAM_REG_P_EN		BIT(24)
+#define BAM_PSM_P_HD_DATA	BIT(25)
+#define BAM_AU_ACCUMED		BIT(26)
+#define BAM_CMD_ENABLE		BIT(27)
+
+#define BAM_CNFG_BITS_DEFAULT	(BAM_PIPE_CNFG |	\
+				 BAM_NO_EXT_P_RST |	\
+				 BAM_IBC_DISABLE |	\
+				 BAM_SB_CLK_REQ |	\
+				 BAM_PSM_CSW_REQ |	\
+				 BAM_PSM_P_RES |	\
+				 BAM_AU_P_RES |		\
+				 BAM_SI_P_RES |		\
+				 BAM_WB_P_RES |		\
+				 BAM_WB_BLK_CSW |	\
+				 BAM_WB_CSW_ACK_IDL |	\
+				 BAM_WB_RETR_SVPNT |	\
+				 BAM_WB_DSC_AVL_P_RST |	\
+				 BAM_REG_P_EN |		\
+				 BAM_PSM_P_HD_DATA |	\
+				 BAM_AU_ACCUMED |	\
+				 BAM_CMD_ENABLE)
+
+/* PIPE CTRL */
+#define P_EN			BIT(1)
+#define P_DIRECTION		BIT(3)
+#define P_SYS_STRM		BIT(4)
+#define P_SYS_MODE		BIT(5)
+#define P_AUTO_EOB		BIT(6)
+#define P_AUTO_EOB_SEL_SHIFT	7
+#define P_AUTO_EOB_SEL_512	(0 << P_AUTO_EOB_SEL_SHIFT)
+#define P_AUTO_EOB_SEL_256	(1 << P_AUTO_EOB_SEL_SHIFT)
+#define P_AUTO_EOB_SEL_128	(2 << P_AUTO_EOB_SEL_SHIFT)
+#define P_AUTO_EOB_SEL_64	(3 << P_AUTO_EOB_SEL_SHIFT)
+#define P_PREFETCH_LIMIT_SHIFT	9
+#define P_PREFETCH_LIMIT_32	(0 << P_PREFETCH_LIMIT_SHIFT)
+#define P_PREFETCH_LIMIT_16	(1 << P_PREFETCH_LIMIT_SHIFT)
+#define P_PREFETCH_LIMIT_4	(2 << P_PREFETCH_LIMIT_SHIFT)
+#define P_WRITE_NWD		BIT(11)
+#define P_LOCK_GROUP_SHIFT	16
+#define P_LOCK_GROUP_MASK	0x1F
+
+/* BAM_DESC_CNT_TRSHLD */
+#define CNT_TRSHLD		0xffff
+#define DEFAULT_CNT_THRSHLD	0x4
+
+/* BAM_IRQ_SRCS */
+#define BAM_IRQ			BIT(31)
+#define P_IRQ			0x7fffffff
+
+/* BAM_IRQ_SRCS_MSK */
+#define BAM_IRQ_MSK		BAM_IRQ
+#define P_IRQ_MSK		P_IRQ
+
+/* BAM_IRQ_STTS */
+#define BAM_TIMER_IRQ		BIT(4)
+#define BAM_EMPTY_IRQ		BIT(3)
+#define BAM_ERROR_IRQ		BIT(2)
+#define BAM_HRESP_ERR_IRQ	BIT(1)
+
+/* BAM_IRQ_CLR */
+#define BAM_TIMER_CLR		BIT(4)
+#define BAM_EMPTY_CLR		BIT(3)
+#define BAM_ERROR_CLR		BIT(2)
+#define BAM_HRESP_ERR_CLR	BIT(1)
+
+/* BAM_IRQ_EN */
+#define BAM_TIMER_EN		BIT(4)
+#define BAM_EMPTY_EN		BIT(3)
+#define BAM_ERROR_EN		BIT(2)
+#define BAM_HRESP_ERR_EN	BIT(1)
+
+/* BAM_P_IRQ_EN */
+#define P_PRCSD_DESC_EN		BIT(0)
+#define P_TIMER_EN		BIT(1)
+#define P_WAKE_EN		BIT(2)
+#define P_OUT_OF_DESC_EN	BIT(3)
+#define P_ERR_EN		BIT(4)
+#define P_TRNSFR_END_EN		BIT(5)
+#define P_DEFAULT_IRQS_EN	(P_PRCSD_DESC_EN | P_ERR_EN | P_TRNSFR_END_EN)
+
+/* BAM_P_SW_OFSTS */
+#define P_SW_OFSTS_MASK		0xffff
+
+#define BAM_DESC_FIFO_SIZE	SZ_32K
+#define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
+#define BAM_MAX_DATA_SIZE	(SZ_32K - 8)
+
+struct bam_chan {
+	struct virt_dma_chan vc;
+
+	struct bam_device *bdev;
+
+	/* configuration from device tree */
+	u32 id;
+
+	struct bam_async_desc *curr_txd;	/* current running dma */
+
+	/* runtime configuration */
+	struct dma_slave_config slave;
+
+	/* fifo storage */
+	struct bam_desc_hw *fifo_virt;
+	dma_addr_t fifo_phys;
+
+	/* fifo markers */
+	unsigned short head;		/* start of active descriptor entries */
+	unsigned short tail;		/* end of active descriptor entries */
+
+	unsigned int initialized;	/* is the channel hw initialized? */
+	unsigned int paused;		/* is the channel paused? */
+	unsigned int reconfigure;	/* new slave config? */
+
+	struct list_head node;
+};
+
+static inline struct bam_chan *to_bam_chan(struct dma_chan *common)
+{
+	return container_of(common, struct bam_chan, vc.chan);
+}
+
+struct bam_device {
+	void __iomem *regs;
+	struct device *dev;
+	struct dma_device common;
+	struct device_dma_parameters dma_parms;
+	struct bam_chan *channels;
+	u32 num_channels;
+
+	/* execution environment ID, from DT */
+	u32 ee;
+
+	const struct reg_offset_data *layout;
+
+	struct clk *bamclk;
+	int irq;
+
+	/* dma start transaction tasklet */
+	struct tasklet_struct task;
+};
+
+/**
+ * bam_addr - returns BAM register address
+ * @bdev: bam device
+ * @pipe: pipe instance (ignored when register doesn't have multiple instances)
+ * @reg:  register enum
+ */
+static inline void __iomem *bam_addr(struct bam_device *bdev, u32 pipe,
+		enum bam_reg reg)
+{
+	const struct reg_offset_data r = bdev->layout[reg];
+
+	return bdev->regs + r.base_offset +
+		r.pipe_mult * pipe +
+		r.evnt_mult * pipe +
+		r.ee_mult * bdev->ee;
+}
+
+/**
+ * bam_reset_channel - Reset individual BAM DMA channel
+ * @bchan: bam channel
+ *
+ * This function resets a specific BAM channel
+ */
+static void bam_reset_channel(struct bam_chan *bchan)
+{
+	struct bam_device *bdev = bchan->bdev;
+
+	lockdep_assert_held(&bchan->vc.lock);
+
+	/* reset channel */
+	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_RST));
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_RST));
+
+	/* don't allow cpu to reorder BAM register accesses done after this */
+	wmb();
+
+	/* make sure hw is initialized when channel is used the first time  */
+	bchan->initialized = 0;
+}
+
+/**
+ * bam_chan_init_hw - Initialize channel hardware
+ * @bchan: bam channel
+ *
+ * This function resets and initializes the BAM channel
+ */
+static void bam_chan_init_hw(struct bam_chan *bchan,
+	enum dma_transfer_direction dir)
+{
+	struct bam_device *bdev = bchan->bdev;
+	u32 val;
+
+	/* Reset the channel to clear internal state of the FIFO */
+	bam_reset_channel(bchan);
+
+	/*
+	 * write out 8 byte aligned address.  We have enough space for this
+	 * because we allocated 1 more descriptor (8 bytes) than we can use
+	 */
+	writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
+			bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR));
+	writel_relaxed(BAM_DESC_FIFO_SIZE,
+			bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES));
+
+	/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
+	writel_relaxed(P_DEFAULT_IRQS_EN,
+			bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
+
+	/* unmask the specific pipe and EE combo */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+	val |= BIT(bchan->id);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+
+	/* don't allow cpu to reorder the channel enable done below */
+	wmb();
+
+	/* set fixed direction and mode, then enable channel */
+	val = P_EN | P_SYS_MODE;
+	if (dir == DMA_DEV_TO_MEM)
+		val |= P_DIRECTION;
+
+	writel_relaxed(val, bam_addr(bdev, bchan->id, BAM_P_CTRL));
+
+	bchan->initialized = 1;
+
+	/* init FIFO pointers */
+	bchan->head = 0;
+	bchan->tail = 0;
+}
+
+/**
+ * bam_alloc_chan - Allocate channel resources for DMA channel.
+ * @chan: specified channel
+ *
+ * This function allocates the FIFO descriptor memory
+ */
+static int bam_alloc_chan(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+
+	if (bchan->fifo_virt)
+		return 0;
+
+	/* allocate FIFO descriptor space, but only if necessary */
+	bchan->fifo_virt = dma_alloc_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
+				&bchan->fifo_phys, GFP_KERNEL);
+
+	if (!bchan->fifo_virt) {
+		dev_err(bdev->dev, "Failed to allocate desc fifo\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/**
+ * bam_free_chan - Frees dma resources associated with specific channel
+ * @chan: specified channel
+ *
+ * Free the allocated fifo descriptor memory and channel resources
+ *
+ */
+static void bam_free_chan(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	u32 val;
+	unsigned long flags;
+
+	vchan_free_chan_resources(to_virt_chan(chan));
+
+	if (bchan->curr_txd) {
+		dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
+		return;
+	}
+
+	spin_lock_irqsave(&bchan->vc.lock, flags);
+	bam_reset_channel(bchan);
+	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+
+	dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE, bchan->fifo_virt,
+				bchan->fifo_phys);
+	bchan->fifo_virt = NULL;
+
+	/* mask irq for pipe/channel */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+	val &= ~BIT(bchan->id);
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+
+	/* disable irq */
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
+}
+
+/**
+ * bam_slave_config - set slave configuration for channel
+ * @chan: dma channel
+ * @cfg: slave configuration
+ *
+ * Sets slave configuration for channel
+ *
+ */
+static int bam_slave_config(struct dma_chan *chan,
+			    struct dma_slave_config *cfg)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	memcpy(&bchan->slave, cfg, sizeof(*cfg));
+	bchan->reconfigure = 1;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * bam_prep_slave_sg - Prep slave sg transaction
+ *
+ * @chan: dma channel
+ * @sgl: scatter gather list
+ * @sg_len: length of sg
+ * @direction: DMA transfer direction
+ * @flags: DMA flags
+ * @context: transfer context (unused)
+ */
+static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
+	struct scatterlist *sgl, unsigned int sg_len,
+	enum dma_transfer_direction direction, unsigned long flags,
+	void *context)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	struct bam_async_desc *async_desc;
+	struct scatterlist *sg;
+	u32 i;
+	struct bam_desc_hw *desc;
+	unsigned int num_alloc = 0;
+
+
+	if (!is_slave_direction(direction)) {
+		dev_err(bdev->dev, "invalid dma direction\n");
+		return NULL;
+	}
+
+	/* calculate number of required entries */
+	for_each_sg(sgl, sg, sg_len, i)
+		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
+
+	/* allocate enough room to accommodate the number of entries */
+	async_desc = kzalloc(sizeof(*async_desc) +
+			(num_alloc * sizeof(struct bam_desc_hw)), GFP_NOWAIT);
+
+	if (!async_desc)
+		goto err_out;
+
+	if (flags & DMA_PREP_FENCE)
+		async_desc->flags |= DESC_FLAG_NWD;
+
+	if (flags & DMA_PREP_INTERRUPT)
+		async_desc->flags |= DESC_FLAG_EOT;
+	else
+		async_desc->flags |= DESC_FLAG_INT;
+
+	async_desc->num_desc = num_alloc;
+	async_desc->curr_desc = async_desc->desc;
+	async_desc->dir = direction;
+
+	/* fill in temporary descriptors */
+	desc = async_desc->desc;
+	for_each_sg(sgl, sg, sg_len, i) {
+		unsigned int remainder = sg_dma_len(sg);
+		unsigned int curr_offset = 0;
+
+		do {
+			desc->addr = sg_dma_address(sg) + curr_offset;
+
+			if (remainder > BAM_MAX_DATA_SIZE) {
+				desc->size = BAM_MAX_DATA_SIZE;
+				remainder -= BAM_MAX_DATA_SIZE;
+				curr_offset += BAM_MAX_DATA_SIZE;
+			} else {
+				desc->size = remainder;
+				remainder = 0;
+			}
+
+			async_desc->length += desc->size;
+			desc++;
+		} while (remainder > 0);
+	}
+
+	return vchan_tx_prep(&bchan->vc, &async_desc->vd, flags);
+
+err_out:
+	kfree(async_desc);
+	return NULL;
+}
+
+/**
+ * bam_dma_terminate_all - terminate all transactions on a channel
+ * @chan: dma channel
+ *
+ * Dequeues and frees all transactions
+ * No callbacks are done
+ *
+ */
+static int bam_dma_terminate_all(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flag;
+	LIST_HEAD(head);
+
+	/* remove all transactions, including active transaction */
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	if (bchan->curr_txd) {
+		list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
+		bchan->curr_txd = NULL;
+	}
+
+	vchan_get_all_descriptors(&bchan->vc, &head);
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	vchan_dma_desc_free_list(&bchan->vc, &head);
+
+	return 0;
+}
+
+/**
+ * bam_pause - Pause DMA channel
+ * @chan: dma channel
+ *
+ */
+static int bam_pause(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
+	bchan->paused = 1;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * bam_resume - Resume DMA channel operations
+ * @chan: dma channel
+ *
+ */
+static int bam_resume(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct bam_device *bdev = bchan->bdev;
+	unsigned long flag;
+
+	spin_lock_irqsave(&bchan->vc.lock, flag);
+	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
+	bchan->paused = 0;
+	spin_unlock_irqrestore(&bchan->vc.lock, flag);
+
+	return 0;
+}
+
+/**
+ * process_channel_irqs - processes the channel interrupts
+ * @bdev: bam controller
+ *
+ * This function processes the channel interrupts
+ *
+ */
+static u32 process_channel_irqs(struct bam_device *bdev)
+{
+	u32 i, srcs, pipe_stts;
+	unsigned long flags;
+	struct bam_async_desc *async_desc;
+
+	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));
+
+	/* return early if no pipe/channel interrupts are present */
+	if (!(srcs & P_IRQ))
+		return srcs;
+
+	for (i = 0; i < bdev->num_channels; i++) {
+		struct bam_chan *bchan = &bdev->channels[i];
+
+		if (!(srcs & BIT(i)))
+			continue;
+
+		/* clear pipe irq */
+		pipe_stts = readl_relaxed(bam_addr(bdev, i, BAM_P_IRQ_STTS));
+
+		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));
+
+		spin_lock_irqsave(&bchan->vc.lock, flags);
+		async_desc = bchan->curr_txd;
+
+		if (async_desc) {
+			async_desc->num_desc -= async_desc->xfer_len;
+			async_desc->curr_desc += async_desc->xfer_len;
+			bchan->curr_txd = NULL;
+
+			/* manage FIFO */
+			bchan->head += async_desc->xfer_len;
+			bchan->head %= MAX_DESCRIPTORS;
+
+			/*
+			 * if complete, process cookie.  Otherwise
+			 * push back to front of desc_issued so that
+			 * it gets restarted by the tasklet
+			 */
+			if (!async_desc->num_desc)
+				vchan_cookie_complete(&async_desc->vd);
+			else
+				list_add(&async_desc->vd.node,
+					&bchan->vc.desc_issued);
+		}
+
+		spin_unlock_irqrestore(&bchan->vc.lock, flags);
+	}
+
+	return srcs;
+}
+
+/**
+ * bam_dma_irq - irq handler for bam controller
+ * @irq: IRQ of interrupt
+ * @data: callback data
+ *
+ * IRQ handler for the bam controller
+ */
+static irqreturn_t bam_dma_irq(int irq, void *data)
+{
+	struct bam_device *bdev = data;
+	u32 clr_mask = 0, srcs = 0;
+
+	srcs |= process_channel_irqs(bdev);
+
+	/* kick off tasklet to start next dma transfer */
+	if (srcs & P_IRQ)
+		tasklet_schedule(&bdev->task);
+
+	if (srcs & BAM_IRQ)
+		clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
+
+	/* don't allow reorder of the various accesses to the BAM registers */
+	mb();
+
+	writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * bam_tx_status - returns status of transaction
+ * @chan: dma channel
+ * @cookie: transaction cookie
+ * @txstate: DMA transaction state
+ *
+ * Return status of dma transaction
+ */
+static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+		struct dma_tx_state *txstate)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	struct virt_dma_desc *vd;
+	int ret;
+	size_t residue = 0;
+	unsigned int i;
+	unsigned long flags;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	if (!txstate)
+		return bchan->paused ? DMA_PAUSED : ret;
+
+	spin_lock_irqsave(&bchan->vc.lock, flags);
+	vd = vchan_find_desc(&bchan->vc, cookie);
+	if (vd)
+		residue = container_of(vd, struct bam_async_desc, vd)->length;
+	else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
+		for (i = 0; i < bchan->curr_txd->num_desc; i++)
+			residue += bchan->curr_txd->curr_desc[i].size;
+
+	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+
+	dma_set_residue(txstate, residue);
+
+	if (ret == DMA_IN_PROGRESS && bchan->paused)
+		ret = DMA_PAUSED;
+
+	return ret;
+}
+
+/**
+ * bam_apply_new_config
+ * @bchan: bam dma channel
+ * @dir: DMA direction
+ */
+static void bam_apply_new_config(struct bam_chan *bchan,
+	enum dma_transfer_direction dir)
+{
+	struct bam_device *bdev = bchan->bdev;
+	u32 maxburst;
+
+	if (dir == DMA_DEV_TO_MEM)
+		maxburst = bchan->slave.src_maxburst;
+	else
+		maxburst = bchan->slave.dst_maxburst;
+
+	writel_relaxed(maxburst, bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
+
+	bchan->reconfigure = 0;
+}
+
+/**
+ * bam_start_dma - start next transaction
+ * @bchan: bam dma channel
+ */
+static void bam_start_dma(struct bam_chan *bchan)
+{
+	struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
+	struct bam_device *bdev = bchan->bdev;
+	struct bam_async_desc *async_desc;
+	struct bam_desc_hw *desc;
+	struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
+					sizeof(struct bam_desc_hw));
+
+	lockdep_assert_held(&bchan->vc.lock);
+
+	if (!vd)
+		return;
+
+	list_del(&vd->node);
+
+	async_desc = container_of(vd, struct bam_async_desc, vd);
+	bchan->curr_txd = async_desc;
+
+	/* on first use, initialize the channel hardware */
+	if (!bchan->initialized)
+		bam_chan_init_hw(bchan, async_desc->dir);
+
+	/* apply new slave config changes, if necessary */
+	if (bchan->reconfigure)
+		bam_apply_new_config(bchan, async_desc->dir);
+
+	desc = bchan->curr_txd->curr_desc;
+
+	if (async_desc->num_desc > MAX_DESCRIPTORS)
+		async_desc->xfer_len = MAX_DESCRIPTORS;
+	else
+		async_desc->xfer_len = async_desc->num_desc;
+
+	/* set any special flags on the last descriptor */
+	if (async_desc->num_desc == async_desc->xfer_len)
+		desc[async_desc->xfer_len - 1].flags = async_desc->flags;
+	else
+		desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
+
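+	/*
+	 * Copy the descriptors into the hardware FIFO, splitting the copy in
+	 * two when it would run past the end of the ring.
+	 */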
+	if (bchan->tail + async_desc->xfer_len > MAX_DESCRIPTORS) {
+		u32 partial = MAX_DESCRIPTORS - bchan->tail;
+
+		memcpy(&fifo[bchan->tail], desc,
+				partial * sizeof(struct bam_desc_hw));
+		memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
+				sizeof(struct bam_desc_hw));
+	} else {
+		memcpy(&fifo[bchan->tail], desc,
+			async_desc->xfer_len * sizeof(struct bam_desc_hw));
+	}
+
+	bchan->tail += async_desc->xfer_len;
+	bchan->tail %= MAX_DESCRIPTORS;
+
+	/* ensure descriptor writes and dma start not reordered */
+	wmb();
+	writel_relaxed(bchan->tail * sizeof(struct bam_desc_hw),
+			bam_addr(bdev, bchan->id, BAM_P_EVNT_REG));
+}
+
+/**
+ * dma_tasklet - DMA IRQ tasklet
+ * @data: tasklet argument (bam controller structure)
+ *
+ * Sets up next DMA operation and then processes all completed transactions
+ */
+static void dma_tasklet(unsigned long data)
+{
+	struct bam_device *bdev = (struct bam_device *)data;
+	struct bam_chan *bchan;
+	unsigned long flags;
+	unsigned int i;
+
+	/* go through the channels and kick off transactions */
+	for (i = 0; i < bdev->num_channels; i++) {
+		bchan = &bdev->channels[i];
+		spin_lock_irqsave(&bchan->vc.lock, flags);
+
+		if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
+			bam_start_dma(bchan);
+		spin_unlock_irqrestore(&bchan->vc.lock, flags);
+	}
+}
+
+/**
+ * bam_issue_pending - starts pending transactions
+ * @chan: dma channel
+ *
+ * Starts the next pending transaction on the channel if it is currently idle
+ */
+static void bam_issue_pending(struct dma_chan *chan)
+{
+	struct bam_chan *bchan = to_bam_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&bchan->vc.lock, flags);
+
+	/* if work pending and idle, start a transaction */
+	if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
+		bam_start_dma(bchan);
+
+	spin_unlock_irqrestore(&bchan->vc.lock, flags);
+}
+
+/**
+ * bam_dma_free_desc - free descriptor memory
+ * @vd: virtual descriptor
+ *
+ */
+static void bam_dma_free_desc(struct virt_dma_desc *vd)
+{
+	struct bam_async_desc *async_desc = container_of(vd,
+			struct bam_async_desc, vd);
+
+	kfree(async_desc);
+}
+
+static struct dma_chan *bam_dma_xlate(struct of_phandle_args *dma_spec,
+		struct of_dma *of)
+{
+	struct bam_device *bdev = container_of(of->of_dma_data,
+					struct bam_device, common);
+	unsigned int request;
+
+	if (dma_spec->args_count != 1)
+		return NULL;
+
+	request = dma_spec->args[0];
+	if (request >= bdev->num_channels)
+		return NULL;
+
+	return dma_get_slave_channel(&(bdev->channels[request].vc.chan));
+}
+
+/**
+ * bam_init
+ * @bdev: bam device
+ *
+ * Initialization helper for global bam registers
+ */
+static int bam_init(struct bam_device *bdev)
+{
+	u32 val;
+
+	/* read revision and configuration information */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION)) >> NUM_EES_SHIFT;
+	val &= NUM_EES_MASK;
+
+	/* check that configured EE is within range */
+	if (bdev->ee >= val)
+		return -EINVAL;
+
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
+	bdev->num_channels = val & BAM_NUM_PIPES_MASK;
+
+	/* s/w reset bam */
+	/* after reset all pipes are disabled and idle */
+	val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
+	val |= BAM_SW_RST;
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
+	val &= ~BAM_SW_RST;
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
+
+	/* make sure previous stores are visible before enabling BAM */
+	wmb();
+
+	/* enable bam */
+	val |= BAM_EN;
+	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
+
+	/* set descriptor threshold, start with 4 bytes */
+	writel_relaxed(DEFAULT_CNT_THRSHLD,
+			bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
+
+	/* Enable default set of h/w workarounds, i.e. all except BAM_FULL_PIPE */
+	writel_relaxed(BAM_CNFG_BITS_DEFAULT, bam_addr(bdev, 0, BAM_CNFG_BITS));
+
+	/* enable irqs for errors */
+	writel_relaxed(BAM_ERROR_EN | BAM_HRESP_ERR_EN,
+			bam_addr(bdev, 0, BAM_IRQ_EN));
+
+	/* unmask global bam interrupt */
+	writel_relaxed(BAM_IRQ_MSK, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
+
+	return 0;
+}
+
+static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
+	u32 index)
+{
+	bchan->id = index;
+	bchan->bdev = bdev;
+
+	vchan_init(&bchan->vc, &bdev->common);
+	bchan->vc.desc_free = bam_dma_free_desc;
+}
+
+static const struct of_device_id bam_of_match[] = {
+	{ .compatible = "qcom,bam-v1.3.0", .data = &bam_v1_3_reg_info },
+	{ .compatible = "qcom,bam-v1.4.0", .data = &bam_v1_4_reg_info },
+	{ .compatible = "qcom,bam-v1.7.0", .data = &bam_v1_7_reg_info },
+	{}
+};
+
+MODULE_DEVICE_TABLE(of, bam_of_match);
+
+static int bam_dma_probe(struct platform_device *pdev)
+{
+	struct bam_device *bdev;
+	const struct of_device_id *match;
+	struct resource *iores;
+	int ret, i;
+
+	bdev = devm_kzalloc(&pdev->dev, sizeof(*bdev), GFP_KERNEL);
+	if (!bdev)
+		return -ENOMEM;
+
+	bdev->dev = &pdev->dev;
+
+	match = of_match_node(bam_of_match, pdev->dev.of_node);
+	if (!match) {
+		dev_err(&pdev->dev, "Unsupported BAM module\n");
+		return -ENODEV;
+	}
+
+	bdev->layout = match->data;
+
+	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	bdev->regs = devm_ioremap_resource(&pdev->dev, iores);
+	if (IS_ERR(bdev->regs))
+		return PTR_ERR(bdev->regs);
+
+	bdev->irq = platform_get_irq(pdev, 0);
+	if (bdev->irq < 0)
+		return bdev->irq;
+
+	ret = of_property_read_u32(pdev->dev.of_node, "qcom,ee", &bdev->ee);
+	if (ret) {
+		dev_err(bdev->dev, "Execution environment unspecified\n");
+		return ret;
+	}
+
+	bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
+	if (IS_ERR(bdev->bamclk))
+		return PTR_ERR(bdev->bamclk);
+
+	ret = clk_prepare_enable(bdev->bamclk);
+	if (ret) {
+		dev_err(bdev->dev, "failed to prepare/enable clock\n");
+		return ret;
+	}
+
+	ret = bam_init(bdev);
+	if (ret)
+		goto err_disable_clk;
+
+	tasklet_init(&bdev->task, dma_tasklet, (unsigned long)bdev);
+
+	bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
+				sizeof(*bdev->channels), GFP_KERNEL);
+
+	if (!bdev->channels) {
+		ret = -ENOMEM;
+		goto err_tasklet_kill;
+	}
+
+	/* allocate and initialize channels */
+	INIT_LIST_HEAD(&bdev->common.channels);
+
+	for (i = 0; i < bdev->num_channels; i++)
+		bam_channel_init(bdev, &bdev->channels[i], i);
+
+	ret = devm_request_irq(bdev->dev, bdev->irq, bam_dma_irq,
+			IRQF_TRIGGER_HIGH, "bam_dma", bdev);
+	if (ret)
+		goto err_bam_channel_exit;
+
+	/* set max dma segment size */
+	bdev->common.dev = bdev->dev;
+	bdev->common.dev->dma_parms = &bdev->dma_parms;
+	ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
+	if (ret) {
+		dev_err(bdev->dev, "cannot set maximum segment size\n");
+		goto err_bam_channel_exit;
+	}
+
+	platform_set_drvdata(pdev, bdev);
+
+	/* set capabilities */
+	dma_cap_zero(bdev->common.cap_mask);
+	dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
+
+	/* initialize dmaengine apis */
+	bdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	bdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	bdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	bdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
+	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
+	bdev->common.device_free_chan_resources = bam_free_chan;
+	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
+	bdev->common.device_config = bam_slave_config;
+	bdev->common.device_pause = bam_pause;
+	bdev->common.device_resume = bam_resume;
+	bdev->common.device_terminate_all = bam_dma_terminate_all;
+	bdev->common.device_issue_pending = bam_issue_pending;
+	bdev->common.device_tx_status = bam_tx_status;
+	bdev->common.dev = bdev->dev;
+
+	ret = dma_async_device_register(&bdev->common);
+	if (ret) {
+		dev_err(bdev->dev, "failed to register dma async device\n");
+		goto err_bam_channel_exit;
+	}
+
+	ret = of_dma_controller_register(pdev->dev.of_node, bam_dma_xlate,
+					&bdev->common);
+	if (ret)
+		goto err_unregister_dma;
+
+	return 0;
+
+err_unregister_dma:
+	dma_async_device_unregister(&bdev->common);
+err_bam_channel_exit:
+	for (i = 0; i < bdev->num_channels; i++)
+		tasklet_kill(&bdev->channels[i].vc.task);
+err_tasklet_kill:
+	tasklet_kill(&bdev->task);
+err_disable_clk:
+	clk_disable_unprepare(bdev->bamclk);
+
+	return ret;
+}
+
+static int bam_dma_remove(struct platform_device *pdev)
+{
+	struct bam_device *bdev = platform_get_drvdata(pdev);
+	u32 i;
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&bdev->common);
+
+	/* mask all interrupts for this execution environment */
+	writel_relaxed(0, bam_addr(bdev, 0,  BAM_IRQ_SRCS_MSK_EE));
+
+	devm_free_irq(bdev->dev, bdev->irq, bdev);
+
+	for (i = 0; i < bdev->num_channels; i++) {
+		bam_dma_terminate_all(&bdev->channels[i].vc.chan);
+		tasklet_kill(&bdev->channels[i].vc.task);
+
+		dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
+			bdev->channels[i].fifo_virt,
+			bdev->channels[i].fifo_phys);
+	}
+
+	tasklet_kill(&bdev->task);
+
+	clk_disable_unprepare(bdev->bamclk);
+
+	return 0;
+}
+
+static struct platform_driver bam_dma_driver = {
+	.probe = bam_dma_probe,
+	.remove = bam_dma_remove,
+	.driver = {
+		.name = "bam-dma-engine",
+		.of_match_table = bam_of_match,
+	},
+};
+
+module_platform_driver(bam_dma_driver);
+
+MODULE_AUTHOR("Andy Gross <agross@codeaurora.org>");
+MODULE_DESCRIPTION("QCOM BAM DMA engine driver");
+MODULE_LICENSE("GPL v2");
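
For context, a peripheral driver consumes one of these BAM pipes through the
standard dmaengine slave API. The sketch below is illustrative only; the "rx"
channel name, the burst size and the assumption of an already dma_map_sg()'d
scatterlist are made up for this example rather than taken from the patch.

#include <linux/dmaengine.h>

/* hypothetical client: start a device-to-memory transfer on a BAM pipe */
static int example_start_rx(struct device *dev, struct scatterlist *sgl,
			    unsigned int nents, dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_DEV_TO_MEM,
		.src_addr	= fifo_addr,
		.src_maxburst	= 16,	/* written to BAM_DESC_CNT_TRSHLD */
	};
	struct dma_async_tx_descriptor *txd;
	struct dma_chan *chan;

	chan = dma_request_slave_channel(dev, "rx");	/* via bam_dma_xlate() */
	if (!chan)
		return -ENODEV;

	dmaengine_slave_config(chan, &cfg);		/* bam_slave_config() */

	txd = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_DEV_TO_MEM,
				      DMA_PREP_INTERRUPT); /* bam_prep_slave_sg() */
	if (!txd) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	dmaengine_submit(txd);
	dma_async_issue_pending(chan);			/* bam_issue_pending() */

	return 0;
}
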
diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom_bam_dma.c
deleted file mode 100644
index 5a250cd..0000000
--- a/drivers/dma/qcom_bam_dma.c
+++ /dev/null
@@ -1,1259 +0,0 @@
-/*
- * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- */
-/*
- * QCOM BAM DMA engine driver
- *
- * QCOM BAM DMA blocks are distributed amongst a number of the on-chip
- * peripherals on the MSM 8x74.  The configuration of the channels are dependent
- * on the way they are hard wired to that specific peripheral.  The peripheral
- * device tree entries specify the configuration of each channel.
- *
- * The DMA controller requires the use of external memory for storage of the
- * hardware descriptors for each channel.  The descriptor FIFO is accessed as a
- * circular buffer and operations are managed according to the offset within the
- * FIFO.  After pipe/channel reset, all of the pipe registers and internal state
- * are back to defaults.
- *
- * During DMA operations, we write descriptors to the FIFO, being careful to
- * handle wrapping and then write the last FIFO offset to that channel's
- * P_EVNT_REG register to kick off the transaction.  The P_SW_OFSTS register
- * indicates the current FIFO offset that is being processed, so there is some
- * indication of where the hardware is currently working.
- */
-
-#include <linux/kernel.h>
-#include <linux/io.h>
-#include <linux/init.h>
-#include <linux/slab.h>
-#include <linux/module.h>
-#include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
-#include <linux/scatterlist.h>
-#include <linux/device.h>
-#include <linux/platform_device.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/of_dma.h>
-#include <linux/clk.h>
-#include <linux/dmaengine.h>
-
-#include "dmaengine.h"
-#include "virt-dma.h"
-
-struct bam_desc_hw {
-	u32 addr;		/* Buffer physical address */
-	u16 size;		/* Buffer size in bytes */
-	u16 flags;
-};
-
-#define DESC_FLAG_INT BIT(15)
-#define DESC_FLAG_EOT BIT(14)
-#define DESC_FLAG_EOB BIT(13)
-#define DESC_FLAG_NWD BIT(12)
-
-struct bam_async_desc {
-	struct virt_dma_desc vd;
-
-	u32 num_desc;
-	u32 xfer_len;
-
-	/* transaction flags, EOT|EOB|NWD */
-	u16 flags;
-
-	struct bam_desc_hw *curr_desc;
-
-	enum dma_transfer_direction dir;
-	size_t length;
-	struct bam_desc_hw desc[0];
-};
-
-enum bam_reg {
-	BAM_CTRL,
-	BAM_REVISION,
-	BAM_NUM_PIPES,
-	BAM_DESC_CNT_TRSHLD,
-	BAM_IRQ_SRCS,
-	BAM_IRQ_SRCS_MSK,
-	BAM_IRQ_SRCS_UNMASKED,
-	BAM_IRQ_STTS,
-	BAM_IRQ_CLR,
-	BAM_IRQ_EN,
-	BAM_CNFG_BITS,
-	BAM_IRQ_SRCS_EE,
-	BAM_IRQ_SRCS_MSK_EE,
-	BAM_P_CTRL,
-	BAM_P_RST,
-	BAM_P_HALT,
-	BAM_P_IRQ_STTS,
-	BAM_P_IRQ_CLR,
-	BAM_P_IRQ_EN,
-	BAM_P_EVNT_DEST_ADDR,
-	BAM_P_EVNT_REG,
-	BAM_P_SW_OFSTS,
-	BAM_P_DATA_FIFO_ADDR,
-	BAM_P_DESC_FIFO_ADDR,
-	BAM_P_EVNT_GEN_TRSHLD,
-	BAM_P_FIFO_SIZES,
-};
-
-struct reg_offset_data {
-	u32 base_offset;
-	unsigned int pipe_mult, evnt_mult, ee_mult;
-};
-
-static const struct reg_offset_data bam_v1_3_reg_info[] = {
-	[BAM_CTRL]		= { 0x0F80, 0x00, 0x00, 0x00 },
-	[BAM_REVISION]		= { 0x0F84, 0x00, 0x00, 0x00 },
-	[BAM_NUM_PIPES]		= { 0x0FBC, 0x00, 0x00, 0x00 },
-	[BAM_DESC_CNT_TRSHLD]	= { 0x0F88, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS]		= { 0x0F8C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_MSK]	= { 0x0F90, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0FB0, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_STTS]		= { 0x0F94, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_CLR]		= { 0x0F98, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_EN]		= { 0x0F9C, 0x00, 0x00, 0x00 },
-	[BAM_CNFG_BITS]		= { 0x0FFC, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_EE]	= { 0x1800, 0x00, 0x00, 0x80 },
-	[BAM_IRQ_SRCS_MSK_EE]	= { 0x1804, 0x00, 0x00, 0x80 },
-	[BAM_P_CTRL]		= { 0x0000, 0x80, 0x00, 0x00 },
-	[BAM_P_RST]		= { 0x0004, 0x80, 0x00, 0x00 },
-	[BAM_P_HALT]		= { 0x0008, 0x80, 0x00, 0x00 },
-	[BAM_P_IRQ_STTS]	= { 0x0010, 0x80, 0x00, 0x00 },
-	[BAM_P_IRQ_CLR]		= { 0x0014, 0x80, 0x00, 0x00 },
-	[BAM_P_IRQ_EN]		= { 0x0018, 0x80, 0x00, 0x00 },
-	[BAM_P_EVNT_DEST_ADDR]	= { 0x102C, 0x00, 0x40, 0x00 },
-	[BAM_P_EVNT_REG]	= { 0x1018, 0x00, 0x40, 0x00 },
-	[BAM_P_SW_OFSTS]	= { 0x1000, 0x00, 0x40, 0x00 },
-	[BAM_P_DATA_FIFO_ADDR]	= { 0x1024, 0x00, 0x40, 0x00 },
-	[BAM_P_DESC_FIFO_ADDR]	= { 0x101C, 0x00, 0x40, 0x00 },
-	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1028, 0x00, 0x40, 0x00 },
-	[BAM_P_FIFO_SIZES]	= { 0x1020, 0x00, 0x40, 0x00 },
-};
-
-static const struct reg_offset_data bam_v1_4_reg_info[] = {
-	[BAM_CTRL]		= { 0x0000, 0x00, 0x00, 0x00 },
-	[BAM_REVISION]		= { 0x0004, 0x00, 0x00, 0x00 },
-	[BAM_NUM_PIPES]		= { 0x003C, 0x00, 0x00, 0x00 },
-	[BAM_DESC_CNT_TRSHLD]	= { 0x0008, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS]		= { 0x000C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_MSK]	= { 0x0010, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_UNMASKED]	= { 0x0030, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_STTS]		= { 0x0014, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_CLR]		= { 0x0018, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_EN]		= { 0x001C, 0x00, 0x00, 0x00 },
-	[BAM_CNFG_BITS]		= { 0x007C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_EE]	= { 0x0800, 0x00, 0x00, 0x80 },
-	[BAM_IRQ_SRCS_MSK_EE]	= { 0x0804, 0x00, 0x00, 0x80 },
-	[BAM_P_CTRL]		= { 0x1000, 0x1000, 0x00, 0x00 },
-	[BAM_P_RST]		= { 0x1004, 0x1000, 0x00, 0x00 },
-	[BAM_P_HALT]		= { 0x1008, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_STTS]	= { 0x1010, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_CLR]		= { 0x1014, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_EN]		= { 0x1018, 0x1000, 0x00, 0x00 },
-	[BAM_P_EVNT_DEST_ADDR]	= { 0x182C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_REG]	= { 0x1818, 0x00, 0x1000, 0x00 },
-	[BAM_P_SW_OFSTS]	= { 0x1800, 0x00, 0x1000, 0x00 },
-	[BAM_P_DATA_FIFO_ADDR]	= { 0x1824, 0x00, 0x1000, 0x00 },
-	[BAM_P_DESC_FIFO_ADDR]	= { 0x181C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x1828, 0x00, 0x1000, 0x00 },
-	[BAM_P_FIFO_SIZES]	= { 0x1820, 0x00, 0x1000, 0x00 },
-};
-
-static const struct reg_offset_data bam_v1_7_reg_info[] = {
-	[BAM_CTRL]		= { 0x00000, 0x00, 0x00, 0x00 },
-	[BAM_REVISION]		= { 0x01000, 0x00, 0x00, 0x00 },
-	[BAM_NUM_PIPES]		= { 0x01008, 0x00, 0x00, 0x00 },
-	[BAM_DESC_CNT_TRSHLD]	= { 0x00008, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS]		= { 0x03010, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_MSK]	= { 0x03014, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_UNMASKED]	= { 0x03018, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_STTS]		= { 0x00014, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_CLR]		= { 0x00018, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_EN]		= { 0x0001C, 0x00, 0x00, 0x00 },
-	[BAM_CNFG_BITS]		= { 0x0007C, 0x00, 0x00, 0x00 },
-	[BAM_IRQ_SRCS_EE]	= { 0x03000, 0x00, 0x00, 0x1000 },
-	[BAM_IRQ_SRCS_MSK_EE]	= { 0x03004, 0x00, 0x00, 0x1000 },
-	[BAM_P_CTRL]		= { 0x13000, 0x1000, 0x00, 0x00 },
-	[BAM_P_RST]		= { 0x13004, 0x1000, 0x00, 0x00 },
-	[BAM_P_HALT]		= { 0x13008, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_STTS]	= { 0x13010, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_CLR]		= { 0x13014, 0x1000, 0x00, 0x00 },
-	[BAM_P_IRQ_EN]		= { 0x13018, 0x1000, 0x00, 0x00 },
-	[BAM_P_EVNT_DEST_ADDR]	= { 0x1382C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_REG]	= { 0x13818, 0x00, 0x1000, 0x00 },
-	[BAM_P_SW_OFSTS]	= { 0x13800, 0x00, 0x1000, 0x00 },
-	[BAM_P_DATA_FIFO_ADDR]	= { 0x13824, 0x00, 0x1000, 0x00 },
-	[BAM_P_DESC_FIFO_ADDR]	= { 0x1381C, 0x00, 0x1000, 0x00 },
-	[BAM_P_EVNT_GEN_TRSHLD]	= { 0x13828, 0x00, 0x1000, 0x00 },
-	[BAM_P_FIFO_SIZES]	= { 0x13820, 0x00, 0x1000, 0x00 },
-};
-
-/* BAM CTRL */
-#define BAM_SW_RST			BIT(0)
-#define BAM_EN				BIT(1)
-#define BAM_EN_ACCUM			BIT(4)
-#define BAM_TESTBUS_SEL_SHIFT		5
-#define BAM_TESTBUS_SEL_MASK		0x3F
-#define BAM_DESC_CACHE_SEL_SHIFT	13
-#define BAM_DESC_CACHE_SEL_MASK		0x3
-#define BAM_CACHED_DESC_STORE		BIT(15)
-#define IBC_DISABLE			BIT(16)
-
-/* BAM REVISION */
-#define REVISION_SHIFT		0
-#define REVISION_MASK		0xFF
-#define NUM_EES_SHIFT		8
-#define NUM_EES_MASK		0xF
-#define CE_BUFFER_SIZE		BIT(13)
-#define AXI_ACTIVE		BIT(14)
-#define USE_VMIDMT		BIT(15)
-#define SECURED			BIT(16)
-#define BAM_HAS_NO_BYPASS	BIT(17)
-#define HIGH_FREQUENCY_BAM	BIT(18)
-#define INACTIV_TMRS_EXST	BIT(19)
-#define NUM_INACTIV_TMRS	BIT(20)
-#define DESC_CACHE_DEPTH_SHIFT	21
-#define DESC_CACHE_DEPTH_1	(0 << DESC_CACHE_DEPTH_SHIFT)
-#define DESC_CACHE_DEPTH_2	(1 << DESC_CACHE_DEPTH_SHIFT)
-#define DESC_CACHE_DEPTH_3	(2 << DESC_CACHE_DEPTH_SHIFT)
-#define DESC_CACHE_DEPTH_4	(3 << DESC_CACHE_DEPTH_SHIFT)
-#define CMD_DESC_EN		BIT(23)
-#define INACTIV_TMR_BASE_SHIFT	24
-#define INACTIV_TMR_BASE_MASK	0xFF
-
-/* BAM NUM PIPES */
-#define BAM_NUM_PIPES_SHIFT		0
-#define BAM_NUM_PIPES_MASK		0xFF
-#define PERIPH_NON_PIPE_GRP_SHIFT	16
-#define PERIPH_NON_PIP_GRP_MASK		0xFF
-#define BAM_NON_PIPE_GRP_SHIFT		24
-#define BAM_NON_PIPE_GRP_MASK		0xFF
-
-/* BAM CNFG BITS */
-#define BAM_PIPE_CNFG		BIT(2)
-#define BAM_FULL_PIPE		BIT(11)
-#define BAM_NO_EXT_P_RST	BIT(12)
-#define BAM_IBC_DISABLE		BIT(13)
-#define BAM_SB_CLK_REQ		BIT(14)
-#define BAM_PSM_CSW_REQ		BIT(15)
-#define BAM_PSM_P_RES		BIT(16)
-#define BAM_AU_P_RES		BIT(17)
-#define BAM_SI_P_RES		BIT(18)
-#define BAM_WB_P_RES		BIT(19)
-#define BAM_WB_BLK_CSW		BIT(20)
-#define BAM_WB_CSW_ACK_IDL	BIT(21)
-#define BAM_WB_RETR_SVPNT	BIT(22)
-#define BAM_WB_DSC_AVL_P_RST	BIT(23)
-#define BAM_REG_P_EN		BIT(24)
-#define BAM_PSM_P_HD_DATA	BIT(25)
-#define BAM_AU_ACCUMED		BIT(26)
-#define BAM_CMD_ENABLE		BIT(27)
-
-#define BAM_CNFG_BITS_DEFAULT	(BAM_PIPE_CNFG |	\
-				 BAM_NO_EXT_P_RST |	\
-				 BAM_IBC_DISABLE |	\
-				 BAM_SB_CLK_REQ |	\
-				 BAM_PSM_CSW_REQ |	\
-				 BAM_PSM_P_RES |	\
-				 BAM_AU_P_RES |		\
-				 BAM_SI_P_RES |		\
-				 BAM_WB_P_RES |		\
-				 BAM_WB_BLK_CSW |	\
-				 BAM_WB_CSW_ACK_IDL |	\
-				 BAM_WB_RETR_SVPNT |	\
-				 BAM_WB_DSC_AVL_P_RST |	\
-				 BAM_REG_P_EN |		\
-				 BAM_PSM_P_HD_DATA |	\
-				 BAM_AU_ACCUMED |	\
-				 BAM_CMD_ENABLE)
-
-/* PIPE CTRL */
-#define P_EN			BIT(1)
-#define P_DIRECTION		BIT(3)
-#define P_SYS_STRM		BIT(4)
-#define P_SYS_MODE		BIT(5)
-#define P_AUTO_EOB		BIT(6)
-#define P_AUTO_EOB_SEL_SHIFT	7
-#define P_AUTO_EOB_SEL_512	(0 << P_AUTO_EOB_SEL_SHIFT)
-#define P_AUTO_EOB_SEL_256	(1 << P_AUTO_EOB_SEL_SHIFT)
-#define P_AUTO_EOB_SEL_128	(2 << P_AUTO_EOB_SEL_SHIFT)
-#define P_AUTO_EOB_SEL_64	(3 << P_AUTO_EOB_SEL_SHIFT)
-#define P_PREFETCH_LIMIT_SHIFT	9
-#define P_PREFETCH_LIMIT_32	(0 << P_PREFETCH_LIMIT_SHIFT)
-#define P_PREFETCH_LIMIT_16	(1 << P_PREFETCH_LIMIT_SHIFT)
-#define P_PREFETCH_LIMIT_4	(2 << P_PREFETCH_LIMIT_SHIFT)
-#define P_WRITE_NWD		BIT(11)
-#define P_LOCK_GROUP_SHIFT	16
-#define P_LOCK_GROUP_MASK	0x1F
-
-/* BAM_DESC_CNT_TRSHLD */
-#define CNT_TRSHLD		0xffff
-#define DEFAULT_CNT_THRSHLD	0x4
-
-/* BAM_IRQ_SRCS */
-#define BAM_IRQ			BIT(31)
-#define P_IRQ			0x7fffffff
-
-/* BAM_IRQ_SRCS_MSK */
-#define BAM_IRQ_MSK		BAM_IRQ
-#define P_IRQ_MSK		P_IRQ
-
-/* BAM_IRQ_STTS */
-#define BAM_TIMER_IRQ		BIT(4)
-#define BAM_EMPTY_IRQ		BIT(3)
-#define BAM_ERROR_IRQ		BIT(2)
-#define BAM_HRESP_ERR_IRQ	BIT(1)
-
-/* BAM_IRQ_CLR */
-#define BAM_TIMER_CLR		BIT(4)
-#define BAM_EMPTY_CLR		BIT(3)
-#define BAM_ERROR_CLR		BIT(2)
-#define BAM_HRESP_ERR_CLR	BIT(1)
-
-/* BAM_IRQ_EN */
-#define BAM_TIMER_EN		BIT(4)
-#define BAM_EMPTY_EN		BIT(3)
-#define BAM_ERROR_EN		BIT(2)
-#define BAM_HRESP_ERR_EN	BIT(1)
-
-/* BAM_P_IRQ_EN */
-#define P_PRCSD_DESC_EN		BIT(0)
-#define P_TIMER_EN		BIT(1)
-#define P_WAKE_EN		BIT(2)
-#define P_OUT_OF_DESC_EN	BIT(3)
-#define P_ERR_EN		BIT(4)
-#define P_TRNSFR_END_EN		BIT(5)
-#define P_DEFAULT_IRQS_EN	(P_PRCSD_DESC_EN | P_ERR_EN | P_TRNSFR_END_EN)
-
-/* BAM_P_SW_OFSTS */
-#define P_SW_OFSTS_MASK		0xffff
-
-#define BAM_DESC_FIFO_SIZE	SZ_32K
-#define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
-#define BAM_MAX_DATA_SIZE	(SZ_32K - 8)
-
-struct bam_chan {
-	struct virt_dma_chan vc;
-
-	struct bam_device *bdev;
-
-	/* configuration from device tree */
-	u32 id;
-
-	struct bam_async_desc *curr_txd;	/* current running dma */
-
-	/* runtime configuration */
-	struct dma_slave_config slave;
-
-	/* fifo storage */
-	struct bam_desc_hw *fifo_virt;
-	dma_addr_t fifo_phys;
-
-	/* fifo markers */
-	unsigned short head;		/* start of active descriptor entries */
-	unsigned short tail;		/* end of active descriptor entries */
-
-	unsigned int initialized;	/* is the channel hw initialized? */
-	unsigned int paused;		/* is the channel paused? */
-	unsigned int reconfigure;	/* new slave config? */
-
-	struct list_head node;
-};
-
-static inline struct bam_chan *to_bam_chan(struct dma_chan *common)
-{
-	return container_of(common, struct bam_chan, vc.chan);
-}
-
-struct bam_device {
-	void __iomem *regs;
-	struct device *dev;
-	struct dma_device common;
-	struct device_dma_parameters dma_parms;
-	struct bam_chan *channels;
-	u32 num_channels;
-
-	/* execution environment ID, from DT */
-	u32 ee;
-
-	const struct reg_offset_data *layout;
-
-	struct clk *bamclk;
-	int irq;
-
-	/* dma start transaction tasklet */
-	struct tasklet_struct task;
-};
-
-/**
- * bam_addr - returns BAM register address
- * @bdev: bam device
- * @pipe: pipe instance (ignored when register doesn't have multiple instances)
- * @reg:  register enum
- */
-static inline void __iomem *bam_addr(struct bam_device *bdev, u32 pipe,
-		enum bam_reg reg)
-{
-	const struct reg_offset_data r = bdev->layout[reg];
-
-	return bdev->regs + r.base_offset +
-		r.pipe_mult * pipe +
-		r.evnt_mult * pipe +
-		r.ee_mult * bdev->ee;
-}
-
-/**
- * bam_reset_channel - Reset individual BAM DMA channel
- * @bchan: bam channel
- *
- * This function resets a specific BAM channel
- */
-static void bam_reset_channel(struct bam_chan *bchan)
-{
-	struct bam_device *bdev = bchan->bdev;
-
-	lockdep_assert_held(&bchan->vc.lock);
-
-	/* reset channel */
-	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_RST));
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_RST));
-
-	/* don't allow cpu to reorder BAM register accesses done after this */
-	wmb();
-
-	/* make sure hw is initialized when channel is used the first time  */
-	bchan->initialized = 0;
-}
-
-/**
- * bam_chan_init_hw - Initialize channel hardware
- * @bchan: bam channel
- *
- * This function resets and initializes the BAM channel
- */
-static void bam_chan_init_hw(struct bam_chan *bchan,
-	enum dma_transfer_direction dir)
-{
-	struct bam_device *bdev = bchan->bdev;
-	u32 val;
-
-	/* Reset the channel to clear internal state of the FIFO */
-	bam_reset_channel(bchan);
-
-	/*
-	 * write out 8 byte aligned address.  We have enough space for this
-	 * because we allocated 1 more descriptor (8 bytes) than we can use
-	 */
-	writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
-			bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR));
-	writel_relaxed(BAM_DESC_FIFO_SIZE,
-			bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES));
-
-	/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
-	writel_relaxed(P_DEFAULT_IRQS_EN,
-			bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
-
-	/* unmask the specific pipe and EE combo */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-	val |= BIT(bchan->id);
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-
-	/* don't allow cpu to reorder the channel enable done below */
-	wmb();
-
-	/* set fixed direction and mode, then enable channel */
-	val = P_EN | P_SYS_MODE;
-	if (dir == DMA_DEV_TO_MEM)
-		val |= P_DIRECTION;
-
-	writel_relaxed(val, bam_addr(bdev, bchan->id, BAM_P_CTRL));
-
-	bchan->initialized = 1;
-
-	/* init FIFO pointers */
-	bchan->head = 0;
-	bchan->tail = 0;
-}
-
-/**
- * bam_alloc_chan - Allocate channel resources for DMA channel.
- * @chan: specified channel
- *
- * This function allocates the FIFO descriptor memory
- */
-static int bam_alloc_chan(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-
-	if (bchan->fifo_virt)
-		return 0;
-
-	/* allocate FIFO descriptor space, but only if necessary */
-	bchan->fifo_virt = dma_alloc_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
-				&bchan->fifo_phys, GFP_KERNEL);
-
-	if (!bchan->fifo_virt) {
-		dev_err(bdev->dev, "Failed to allocate desc fifo\n");
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-/**
- * bam_free_chan - Frees dma resources associated with specific channel
- * @chan: specified channel
- *
- * Free the allocated fifo descriptor memory and channel resources
- *
- */
-static void bam_free_chan(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	u32 val;
-	unsigned long flags;
-
-	vchan_free_chan_resources(to_virt_chan(chan));
-
-	if (bchan->curr_txd) {
-		dev_err(bchan->bdev->dev, "Cannot free busy channel\n");
-		return;
-	}
-
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-	bam_reset_channel(bchan);
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-
-	dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE, bchan->fifo_virt,
-				bchan->fifo_phys);
-	bchan->fifo_virt = NULL;
-
-	/* mask irq for pipe/channel */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-	val &= ~BIT(bchan->id);
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-
-	/* disable irq */
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_IRQ_EN));
-}
-
-/**
- * bam_slave_config - set slave configuration for channel
- * @chan: dma channel
- * @cfg: slave configuration
- *
- * Sets slave configuration for channel
- *
- */
-static int bam_slave_config(struct dma_chan *chan,
-			    struct dma_slave_config *cfg)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flag;
-
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	memcpy(&bchan->slave, cfg, sizeof(*cfg));
-	bchan->reconfigure = 1;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	return 0;
-}
-
-/**
- * bam_prep_slave_sg - Prep slave sg transaction
- *
- * @chan: dma channel
- * @sgl: scatter gather list
- * @sg_len: length of sg
- * @direction: DMA transfer direction
- * @flags: DMA flags
- * @context: transfer context (unused)
- */
-static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
-	struct scatterlist *sgl, unsigned int sg_len,
-	enum dma_transfer_direction direction, unsigned long flags,
-	void *context)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	struct bam_async_desc *async_desc;
-	struct scatterlist *sg;
-	u32 i;
-	struct bam_desc_hw *desc;
-	unsigned int num_alloc = 0;
-
-
-	if (!is_slave_direction(direction)) {
-		dev_err(bdev->dev, "invalid dma direction\n");
-		return NULL;
-	}
-
-	/* calculate number of required entries */
-	for_each_sg(sgl, sg, sg_len, i)
-		num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
-
-	/* allocate enough room to accomodate the number of entries */
-	async_desc = kzalloc(sizeof(*async_desc) +
-			(num_alloc * sizeof(struct bam_desc_hw)), GFP_NOWAIT);
-
-	if (!async_desc)
-		goto err_out;
-
-	if (flags & DMA_PREP_FENCE)
-		async_desc->flags |= DESC_FLAG_NWD;
-
-	if (flags & DMA_PREP_INTERRUPT)
-		async_desc->flags |= DESC_FLAG_EOT;
-	else
-		async_desc->flags |= DESC_FLAG_INT;
-
-	async_desc->num_desc = num_alloc;
-	async_desc->curr_desc = async_desc->desc;
-	async_desc->dir = direction;
-
-	/* fill in temporary descriptors */
-	desc = async_desc->desc;
-	for_each_sg(sgl, sg, sg_len, i) {
-		unsigned int remainder = sg_dma_len(sg);
-		unsigned int curr_offset = 0;
-
-		do {
-			desc->addr = sg_dma_address(sg) + curr_offset;
-
-			if (remainder > BAM_MAX_DATA_SIZE) {
-				desc->size = BAM_MAX_DATA_SIZE;
-				remainder -= BAM_MAX_DATA_SIZE;
-				curr_offset += BAM_MAX_DATA_SIZE;
-			} else {
-				desc->size = remainder;
-				remainder = 0;
-			}
-
-			async_desc->length += desc->size;
-			desc++;
-		} while (remainder > 0);
-	}
-
-	return vchan_tx_prep(&bchan->vc, &async_desc->vd, flags);
-
-err_out:
-	kfree(async_desc);
-	return NULL;
-}
-
-/**
- * bam_dma_terminate_all - terminate all transactions on a channel
- * @bchan: bam dma channel
- *
- * Dequeues and frees all transactions
- * No callbacks are done
- *
- */
-static int bam_dma_terminate_all(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flag;
-	LIST_HEAD(head);
-
-	/* remove all transactions, including active transaction */
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	if (bchan->curr_txd) {
-		list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued);
-		bchan->curr_txd = NULL;
-	}
-
-	vchan_get_all_descriptors(&bchan->vc, &head);
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	vchan_dma_desc_free_list(&bchan->vc, &head);
-
-	return 0;
-}
-
-/**
- * bam_pause - Pause DMA channel
- * @chan: dma channel
- *
- */
-static int bam_pause(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	unsigned long flag;
-
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	writel_relaxed(1, bam_addr(bdev, bchan->id, BAM_P_HALT));
-	bchan->paused = 1;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	return 0;
-}
-
-/**
- * bam_resume - Resume DMA channel operations
- * @chan: dma channel
- *
- */
-static int bam_resume(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct bam_device *bdev = bchan->bdev;
-	unsigned long flag;
-
-	spin_lock_irqsave(&bchan->vc.lock, flag);
-	writel_relaxed(0, bam_addr(bdev, bchan->id, BAM_P_HALT));
-	bchan->paused = 0;
-	spin_unlock_irqrestore(&bchan->vc.lock, flag);
-
-	return 0;
-}
-
-/**
- * process_channel_irqs - processes the channel interrupts
- * @bdev: bam controller
- *
- * This function processes the channel interrupts
- *
- */
-static u32 process_channel_irqs(struct bam_device *bdev)
-{
-	u32 i, srcs, pipe_stts;
-	unsigned long flags;
-	struct bam_async_desc *async_desc;
-
-	srcs = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_SRCS_EE));
-
-	/* return early if no pipe/channel interrupts are present */
-	if (!(srcs & P_IRQ))
-		return srcs;
-
-	for (i = 0; i < bdev->num_channels; i++) {
-		struct bam_chan *bchan = &bdev->channels[i];
-
-		if (!(srcs & BIT(i)))
-			continue;
-
-		/* clear pipe irq */
-		pipe_stts = readl_relaxed(bam_addr(bdev, i, BAM_P_IRQ_STTS));
-
-		writel_relaxed(pipe_stts, bam_addr(bdev, i, BAM_P_IRQ_CLR));
-
-		spin_lock_irqsave(&bchan->vc.lock, flags);
-		async_desc = bchan->curr_txd;
-
-		if (async_desc) {
-			async_desc->num_desc -= async_desc->xfer_len;
-			async_desc->curr_desc += async_desc->xfer_len;
-			bchan->curr_txd = NULL;
-
-			/* manage FIFO */
-			bchan->head += async_desc->xfer_len;
-			bchan->head %= MAX_DESCRIPTORS;
-
-			/*
-			 * if complete, process cookie.  Otherwise
-			 * push back to front of desc_issued so that
-			 * it gets restarted by the tasklet
-			 */
-			if (!async_desc->num_desc)
-				vchan_cookie_complete(&async_desc->vd);
-			else
-				list_add(&async_desc->vd.node,
-					&bchan->vc.desc_issued);
-		}
-
-		spin_unlock_irqrestore(&bchan->vc.lock, flags);
-	}
-
-	return srcs;
-}
-
-/**
- * bam_dma_irq - irq handler for bam controller
- * @irq: IRQ of interrupt
- * @data: callback data
- *
- * IRQ handler for the bam controller
- */
-static irqreturn_t bam_dma_irq(int irq, void *data)
-{
-	struct bam_device *bdev = data;
-	u32 clr_mask = 0, srcs = 0;
-
-	srcs |= process_channel_irqs(bdev);
-
-	/* kick off tasklet to start next dma transfer */
-	if (srcs & P_IRQ)
-		tasklet_schedule(&bdev->task);
-
-	if (srcs & BAM_IRQ)
-		clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
-
-	/* don't allow reorder of the various accesses to the BAM registers */
-	mb();
-
-	writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
-
-	return IRQ_HANDLED;
-}
-
-/**
- * bam_tx_status - returns status of transaction
- * @chan: dma channel
- * @cookie: transaction cookie
- * @txstate: DMA transaction state
- *
- * Return status of dma transaction
- */
-static enum dma_status bam_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
-		struct dma_tx_state *txstate)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	struct virt_dma_desc *vd;
-	int ret;
-	size_t residue = 0;
-	unsigned int i;
-	unsigned long flags;
-
-	ret = dma_cookie_status(chan, cookie, txstate);
-	if (ret == DMA_COMPLETE)
-		return ret;
-
-	if (!txstate)
-		return bchan->paused ? DMA_PAUSED : ret;
-
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-	vd = vchan_find_desc(&bchan->vc, cookie);
-	if (vd)
-		residue = container_of(vd, struct bam_async_desc, vd)->length;
-	else if (bchan->curr_txd && bchan->curr_txd->vd.tx.cookie == cookie)
-		for (i = 0; i < bchan->curr_txd->num_desc; i++)
-			residue += bchan->curr_txd->curr_desc[i].size;
-
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-
-	dma_set_residue(txstate, residue);
-
-	if (ret == DMA_IN_PROGRESS && bchan->paused)
-		ret = DMA_PAUSED;
-
-	return ret;
-}
-
-/**
- * bam_apply_new_config
- * @bchan: bam dma channel
- * @dir: DMA direction
- */
-static void bam_apply_new_config(struct bam_chan *bchan,
-	enum dma_transfer_direction dir)
-{
-	struct bam_device *bdev = bchan->bdev;
-	u32 maxburst;
-
-	if (dir == DMA_DEV_TO_MEM)
-		maxburst = bchan->slave.src_maxburst;
-	else
-		maxburst = bchan->slave.dst_maxburst;
-
-	writel_relaxed(maxburst, bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
-
-	bchan->reconfigure = 0;
-}
-
-/**
- * bam_start_dma - start next transaction
- * @bchan - bam dma channel
- */
-static void bam_start_dma(struct bam_chan *bchan)
-{
-	struct virt_dma_desc *vd = vchan_next_desc(&bchan->vc);
-	struct bam_device *bdev = bchan->bdev;
-	struct bam_async_desc *async_desc;
-	struct bam_desc_hw *desc;
-	struct bam_desc_hw *fifo = PTR_ALIGN(bchan->fifo_virt,
-					sizeof(struct bam_desc_hw));
-
-	lockdep_assert_held(&bchan->vc.lock);
-
-	if (!vd)
-		return;
-
-	list_del(&vd->node);
-
-	async_desc = container_of(vd, struct bam_async_desc, vd);
-	bchan->curr_txd = async_desc;
-
-	/* on first use, initialize the channel hardware */
-	if (!bchan->initialized)
-		bam_chan_init_hw(bchan, async_desc->dir);
-
-	/* apply new slave config changes, if necessary */
-	if (bchan->reconfigure)
-		bam_apply_new_config(bchan, async_desc->dir);
-
-	desc = bchan->curr_txd->curr_desc;
-
-	if (async_desc->num_desc > MAX_DESCRIPTORS)
-		async_desc->xfer_len = MAX_DESCRIPTORS;
-	else
-		async_desc->xfer_len = async_desc->num_desc;
-
-	/* set any special flags on the last descriptor */
-	if (async_desc->num_desc == async_desc->xfer_len)
-		desc[async_desc->xfer_len - 1].flags = async_desc->flags;
-	else
-		desc[async_desc->xfer_len - 1].flags |= DESC_FLAG_INT;
-
-	if (bchan->tail + async_desc->xfer_len > MAX_DESCRIPTORS) {
-		u32 partial = MAX_DESCRIPTORS - bchan->tail;
-
-		memcpy(&fifo[bchan->tail], desc,
-				partial * sizeof(struct bam_desc_hw));
-		memcpy(fifo, &desc[partial], (async_desc->xfer_len - partial) *
-				sizeof(struct bam_desc_hw));
-	} else {
-		memcpy(&fifo[bchan->tail], desc,
-			async_desc->xfer_len * sizeof(struct bam_desc_hw));
-	}
-
-	bchan->tail += async_desc->xfer_len;
-	bchan->tail %= MAX_DESCRIPTORS;
-
-	/* ensure descriptor writes and dma start not reordered */
-	wmb();
-	writel_relaxed(bchan->tail * sizeof(struct bam_desc_hw),
-			bam_addr(bdev, bchan->id, BAM_P_EVNT_REG));
-}
-
-/**
- * dma_tasklet - DMA IRQ tasklet
- * @data: tasklet argument (bam controller structure)
- *
- * Sets up next DMA operation and then processes all completed transactions
- */
-static void dma_tasklet(unsigned long data)
-{
-	struct bam_device *bdev = (struct bam_device *)data;
-	struct bam_chan *bchan;
-	unsigned long flags;
-	unsigned int i;
-
-	/* go through the channels and kick off transactions */
-	for (i = 0; i < bdev->num_channels; i++) {
-		bchan = &bdev->channels[i];
-		spin_lock_irqsave(&bchan->vc.lock, flags);
-
-		if (!list_empty(&bchan->vc.desc_issued) && !bchan->curr_txd)
-			bam_start_dma(bchan);
-		spin_unlock_irqrestore(&bchan->vc.lock, flags);
-	}
-}
-
-/**
- * bam_issue_pending - starts pending transactions
- * @chan: dma channel
- *
- * Calls tasklet directly which in turn starts any pending transactions
- */
-static void bam_issue_pending(struct dma_chan *chan)
-{
-	struct bam_chan *bchan = to_bam_chan(chan);
-	unsigned long flags;
-
-	spin_lock_irqsave(&bchan->vc.lock, flags);
-
-	/* if work pending and idle, start a transaction */
-	if (vchan_issue_pending(&bchan->vc) && !bchan->curr_txd)
-		bam_start_dma(bchan);
-
-	spin_unlock_irqrestore(&bchan->vc.lock, flags);
-}
-
-/**
- * bam_dma_free_desc - free descriptor memory
- * @vd: virtual descriptor
- *
- */
-static void bam_dma_free_desc(struct virt_dma_desc *vd)
-{
-	struct bam_async_desc *async_desc = container_of(vd,
-			struct bam_async_desc, vd);
-
-	kfree(async_desc);
-}
-
-static struct dma_chan *bam_dma_xlate(struct of_phandle_args *dma_spec,
-		struct of_dma *of)
-{
-	struct bam_device *bdev = container_of(of->of_dma_data,
-					struct bam_device, common);
-	unsigned int request;
-
-	if (dma_spec->args_count != 1)
-		return NULL;
-
-	request = dma_spec->args[0];
-	if (request >= bdev->num_channels)
-		return NULL;
-
-	return dma_get_slave_channel(&(bdev->channels[request].vc.chan));
-}
-
-/**
- * bam_init
- * @bdev: bam device
- *
- * Initialization helper for global bam registers
- */
-static int bam_init(struct bam_device *bdev)
-{
-	u32 val;
-
-	/* read revision and configuration information */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION)) >> NUM_EES_SHIFT;
-	val &= NUM_EES_MASK;
-
-	/* check that configured EE is within range */
-	if (bdev->ee >= val)
-		return -EINVAL;
-
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
-	bdev->num_channels = val & BAM_NUM_PIPES_MASK;
-
-	/* s/w reset bam */
-	/* after reset all pipes are disabled and idle */
-	val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
-	val |= BAM_SW_RST;
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
-	val &= ~BAM_SW_RST;
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
-
-	/* make sure previous stores are visible before enabling BAM */
-	wmb();
-
-	/* enable bam */
-	val |= BAM_EN;
-	writel_relaxed(val, bam_addr(bdev, 0, BAM_CTRL));
-
-	/* set descriptor threshhold, start with 4 bytes */
-	writel_relaxed(DEFAULT_CNT_THRSHLD,
-			bam_addr(bdev, 0, BAM_DESC_CNT_TRSHLD));
-
-	/* Enable default set of h/w workarounds, ie all except BAM_FULL_PIPE */
-	writel_relaxed(BAM_CNFG_BITS_DEFAULT, bam_addr(bdev, 0, BAM_CNFG_BITS));
-
-	/* enable irqs for errors */
-	writel_relaxed(BAM_ERROR_EN | BAM_HRESP_ERR_EN,
-			bam_addr(bdev, 0, BAM_IRQ_EN));
-
-	/* unmask global bam interrupt */
-	writel_relaxed(BAM_IRQ_MSK, bam_addr(bdev, 0, BAM_IRQ_SRCS_MSK_EE));
-
-	return 0;
-}
-
-static void bam_channel_init(struct bam_device *bdev, struct bam_chan *bchan,
-	u32 index)
-{
-	bchan->id = index;
-	bchan->bdev = bdev;
-
-	vchan_init(&bchan->vc, &bdev->common);
-	bchan->vc.desc_free = bam_dma_free_desc;
-}
-
-static const struct of_device_id bam_of_match[] = {
-	{ .compatible = "qcom,bam-v1.3.0", .data = &bam_v1_3_reg_info },
-	{ .compatible = "qcom,bam-v1.4.0", .data = &bam_v1_4_reg_info },
-	{ .compatible = "qcom,bam-v1.7.0", .data = &bam_v1_7_reg_info },
-	{}
-};
-
-MODULE_DEVICE_TABLE(of, bam_of_match);
-
-static int bam_dma_probe(struct platform_device *pdev)
-{
-	struct bam_device *bdev;
-	const struct of_device_id *match;
-	struct resource *iores;
-	int ret, i;
-
-	bdev = devm_kzalloc(&pdev->dev, sizeof(*bdev), GFP_KERNEL);
-	if (!bdev)
-		return -ENOMEM;
-
-	bdev->dev = &pdev->dev;
-
-	match = of_match_node(bam_of_match, pdev->dev.of_node);
-	if (!match) {
-		dev_err(&pdev->dev, "Unsupported BAM module\n");
-		return -ENODEV;
-	}
-
-	bdev->layout = match->data;
-
-	iores = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	bdev->regs = devm_ioremap_resource(&pdev->dev, iores);
-	if (IS_ERR(bdev->regs))
-		return PTR_ERR(bdev->regs);
-
-	bdev->irq = platform_get_irq(pdev, 0);
-	if (bdev->irq < 0)
-		return bdev->irq;
-
-	ret = of_property_read_u32(pdev->dev.of_node, "qcom,ee", &bdev->ee);
-	if (ret) {
-		dev_err(bdev->dev, "Execution environment unspecified\n");
-		return ret;
-	}
-
-	bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
-	if (IS_ERR(bdev->bamclk))
-		return PTR_ERR(bdev->bamclk);
-
-	ret = clk_prepare_enable(bdev->bamclk);
-	if (ret) {
-		dev_err(bdev->dev, "failed to prepare/enable clock\n");
-		return ret;
-	}
-
-	ret = bam_init(bdev);
-	if (ret)
-		goto err_disable_clk;
-
-	tasklet_init(&bdev->task, dma_tasklet, (unsigned long)bdev);
-
-	bdev->channels = devm_kcalloc(bdev->dev, bdev->num_channels,
-				sizeof(*bdev->channels), GFP_KERNEL);
-
-	if (!bdev->channels) {
-		ret = -ENOMEM;
-		goto err_tasklet_kill;
-	}
-
-	/* allocate and initialize channels */
-	INIT_LIST_HEAD(&bdev->common.channels);
-
-	for (i = 0; i < bdev->num_channels; i++)
-		bam_channel_init(bdev, &bdev->channels[i], i);
-
-	ret = devm_request_irq(bdev->dev, bdev->irq, bam_dma_irq,
-			IRQF_TRIGGER_HIGH, "bam_dma", bdev);
-	if (ret)
-		goto err_bam_channel_exit;
-
-	/* set max dma segment size */
-	bdev->common.dev = bdev->dev;
-	bdev->common.dev->dma_parms = &bdev->dma_parms;
-	ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
-	if (ret) {
-		dev_err(bdev->dev, "cannot set maximum segment size\n");
-		goto err_bam_channel_exit;
-	}
-
-	platform_set_drvdata(pdev, bdev);
-
-	/* set capabilities */
-	dma_cap_zero(bdev->common.cap_mask);
-	dma_cap_set(DMA_SLAVE, bdev->common.cap_mask);
-
-	/* initialize dmaengine apis */
-	bdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
-	bdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
-	bdev->common.src_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	bdev->common.dst_addr_widths = DMA_SLAVE_BUSWIDTH_4_BYTES;
-	bdev->common.device_alloc_chan_resources = bam_alloc_chan;
-	bdev->common.device_free_chan_resources = bam_free_chan;
-	bdev->common.device_prep_slave_sg = bam_prep_slave_sg;
-	bdev->common.device_config = bam_slave_config;
-	bdev->common.device_pause = bam_pause;
-	bdev->common.device_resume = bam_resume;
-	bdev->common.device_terminate_all = bam_dma_terminate_all;
-	bdev->common.device_issue_pending = bam_issue_pending;
-	bdev->common.device_tx_status = bam_tx_status;
-	bdev->common.dev = bdev->dev;
-
-	ret = dma_async_device_register(&bdev->common);
-	if (ret) {
-		dev_err(bdev->dev, "failed to register dma async device\n");
-		goto err_bam_channel_exit;
-	}
-
-	ret = of_dma_controller_register(pdev->dev.of_node, bam_dma_xlate,
-					&bdev->common);
-	if (ret)
-		goto err_unregister_dma;
-
-	return 0;
-
-err_unregister_dma:
-	dma_async_device_unregister(&bdev->common);
-err_bam_channel_exit:
-	for (i = 0; i < bdev->num_channels; i++)
-		tasklet_kill(&bdev->channels[i].vc.task);
-err_tasklet_kill:
-	tasklet_kill(&bdev->task);
-err_disable_clk:
-	clk_disable_unprepare(bdev->bamclk);
-
-	return ret;
-}
-
-static int bam_dma_remove(struct platform_device *pdev)
-{
-	struct bam_device *bdev = platform_get_drvdata(pdev);
-	u32 i;
-
-	of_dma_controller_free(pdev->dev.of_node);
-	dma_async_device_unregister(&bdev->common);
-
-	/* mask all interrupts for this execution environment */
-	writel_relaxed(0, bam_addr(bdev, 0,  BAM_IRQ_SRCS_MSK_EE));
-
-	devm_free_irq(bdev->dev, bdev->irq, bdev);
-
-	for (i = 0; i < bdev->num_channels; i++) {
-		bam_dma_terminate_all(&bdev->channels[i].vc.chan);
-		tasklet_kill(&bdev->channels[i].vc.task);
-
-		dma_free_writecombine(bdev->dev, BAM_DESC_FIFO_SIZE,
-			bdev->channels[i].fifo_virt,
-			bdev->channels[i].fifo_phys);
-	}
-
-	tasklet_kill(&bdev->task);
-
-	clk_disable_unprepare(bdev->bamclk);
-
-	return 0;
-}
-
-static struct platform_driver bam_dma_driver = {
-	.probe = bam_dma_probe,
-	.remove = bam_dma_remove,
-	.driver = {
-		.name = "bam-dma-engine",
-		.of_match_table = bam_of_match,
-	},
-};
-
-module_platform_driver(bam_dma_driver);
-
-MODULE_AUTHOR("Andy Gross <agross@codeaurora.org>");
-MODULE_DESCRIPTION("QCOM BAM DMA engine driver");
-MODULE_LICENSE("GPL v2");
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
  2015-11-08  4:52 ` Sinan Kaya
  (?)
@ 2015-11-08  4:52     ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: dmaengine-u79uwXL29TY76Z2rM5mHXA, timur-sgV2jX0FEOL9JmXXK+q4OQ,
	cov-sgV2jX0FEOL9JmXXK+q4OQ, jcm-H+wXaHxf7aLQT0dZR+AlfA
  Cc: agross-sgV2jX0FEOL9JmXXK+q4OQ,
	linux-arm-msm-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Sinan Kaya,
	Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	Vinod Koul, Dan Williams, devicetree-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two to follow the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW consists of multiple channels. These channels
share some set of common parameters. These parameters are
initialized by the management driver during power up.
Same management driver is used for monitoring the execution
of the channels. Management driver can change the performance
behavior dynamically such as bandwidth allocation and
prioritization in the future.

The management driver is executed in hypervisor context and
is the main management entity for all channels provided by
the device.

Signed-off-by: Sinan Kaya <okaya-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>
---
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  62 ++++
 drivers/dma/qcom/Kconfig                           |  11 +
 drivers/dma/qcom/Makefile                          |   1 +
 drivers/dma/qcom/hidma_mgmt.c                      | 315 +++++++++++++++++++++
 drivers/dma/qcom/hidma_mgmt.h                      |  38 +++
 drivers/dma/qcom/hidma_mgmt_sys.c                  | 232 +++++++++++++++
 6 files changed, 659 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
new file mode 100644
index 0000000..b906170
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
@@ -0,0 +1,62 @@
+Qualcomm Technologies HIDMA Management interface
+
+The Qualcomm Technologies HIDMA device has been designed
+to support virtualization technology. The driver has been
+divided into two to follow the hardware design. The management
+driver is executed in hypervisor context and is the main
+management entity for all channels provided by the device.
+
+Each HIDMA HW consists of multiple channels. These channels
+share some set of common parameters. These parameters are
+initialized by the management driver during power up.
+Same management driver is used for monitoring the execution
+of the channels. Management driver can change the performance
+behavior dynamically such as bandwidth allocation and
+prioritization.
+
+All channel devices get probed in the hypervisor
+context during power up. They show up as DMA engine
+channels. Then, before virtualization starts, each
+channel device is unbound from the hypervisor by VFIO
+and assigned to the guest machine for control.
+
+This management driver will be used by the system
+admin to monitor and reset the execution state of the
+DMA channels. This is the management interface.
+
+
+Required properties:
+- compatible: must contain one of these.
+  "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0", "qcom,hidma-mgmt";
+- reg: Address range for DMA device
+- dma-channels: Number of channels supported by this DMA controller.
+- max-write-burst-bytes: Maximum write burst in bytes. A memcpy request is
+  fragmented into multiples of this amount.
+- max-read-burst-bytes: Maximum read burst in bytes. A memcpy request is
+  fragmented into multiples of this amount.
+- max-write-transactions: Maximum write transactions to perform in a burst
+- max-read-transactions: Maximum read transactions to perform in a burst
+- channel-reset-timeout-cycles: Channel reset timeout in cycles for this SOC.
+- channel-priority: Priority of the channel.
+  Each DMA channel shares the same HW bandwidth with the other DMA channels.
+  If requests from a low priority and a high priority channel reach the HW
+  at the same time, the high priority channel will claim the bus.
+  0=low priority, 1=high priority
+- channel-weight: Round robin weight of the channel
+  Since there are only two priority levels supported, scheduling among
+  the equal priority channels is done via weights.
+
+Example:
+
+	hidma-mgmt@f9984000 {
+		compatible = "qcom,hidma-mgmt-1.0";
+		reg = <0xf9984000 0x15000>;
+		dma-channels = <6>;
+		max-write-burst-bytes = <1024>;
+		max-read-burst-bytes = <1024>;
+		max-write-transactions = <31>;
+		max-read-transactions = <31>;
+		channel-reset-timeout-cycles = <0x500>;
+		channel-priority = <1 1 0 0 0 0>;
+		channel-weight = <1 13 10 3 4 5>;
+	};
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index 17545df..f3e2d4c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -7,3 +7,14 @@ config QCOM_BAM_DMA
 	  Enable support for the QCOM BAM DMA controller.  This controller
 	  provides DMA capabilities for a variety of on-chip devices.
 
+config QCOM_HIDMA_MGMT
+	tristate "Qualcomm Technologies HIDMA Management support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA Management.
+	  Each DMA device requires one management interface driver
+	  for basic initialization before the QCOM_HIDMA channel driver can
+	  start managing the channels. In a virtualized environment,
+	  the guest OS would run QCOM_HIDMA channel driver and the
+	  hypervisor would run the QCOM_HIDMA_MGMT management driver.
+
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index f612ae3..1a5a96d 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
new file mode 100644
index 0000000..94510d6
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -0,0 +1,315 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine Management interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/acpi.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma_mgmt.h"
+
+#define QOS_N_OFFSET			0x300
+#define CFG_OFFSET			0x400
+#define MAX_BUS_REQ_LEN_OFFSET		0x41C
+#define MAX_XACTIONS_OFFSET		0x420
+#define HW_VERSION_OFFSET		0x424
+#define CHRESET_TIMEOUT_OFFSET		0x418
+
+#define MAX_WR_XACTIONS_MASK		0x1F
+#define MAX_RD_XACTIONS_MASK		0x1F
+#define WEIGHT_MASK			0x7F
+#define MAX_BUS_REQ_LEN_MASK		0xFFFF
+#define CHRESET_TIMEOUUT_MASK		0xFFFFF
+
+#define MAX_WR_XACTIONS_BIT_POS	16
+#define MAX_BUS_WR_REQ_BIT_POS		16
+#define WRR_BIT_POS			8
+#define PRIORITY_BIT_POS		15
+
+#define AUTOSUSPEND_TIMEOUT		2000
+#define MAX_CHANNEL_WEIGHT		15
+
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
+{
+	u32 val;
+	u32 i;
+
+	if (!is_power_of_2(mgmtdev->max_write_request) ||
+		(mgmtdev->max_write_request < 128) ||
+		(mgmtdev->max_write_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
+			mgmtdev->max_write_request);
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(mgmtdev->max_read_request) ||
+		(mgmtdev->max_read_request < 128) ||
+		(mgmtdev->max_read_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
+			mgmtdev->max_read_request);
+		return  -EINVAL;
+	}
+
+	if (mgmtdev->max_wr_xactions > MAX_WR_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_wr_xactions cannot be bigger than %d\n",
+			MAX_WR_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_rd_xactions > MAX_RD_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_rd_xactions cannot be bigger than %d\n",
+			MAX_RD_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		if (mgmtdev->priority[i] > 1) {
+			dev_err(&mgmtdev->pdev->dev, "priority can be 0 or 1\n");
+			return -EINVAL;
+		}
+
+		if (mgmtdev->weight[i] > MAX_CHANNEL_WEIGHT) {
+			dev_err(&mgmtdev->pdev->dev,
+				"max value of weight can be %d.\n",
+				MAX_CHANNEL_WEIGHT);
+			return -EINVAL;
+		}
+
+		/* weight needs to be at least one */
+		if (mgmtdev->weight[i] == 0)
+			mgmtdev->weight[i] = 1;
+	}
+
+	pm_runtime_get_sync(&mgmtdev->pdev->dev);
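+	/*
+	 * Program the maximum bus request (burst) lengths: writes go into the
+	 * upper half of the register, reads into the lower half.
+	 */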
+	val = readl(mgmtdev->dev_virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+	val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
+	val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
+	val = val & ~(MAX_BUS_REQ_LEN_MASK);
+	val = val | (mgmtdev->max_read_request);
+	writel(val, mgmtdev->dev_virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+
+	val = readl(mgmtdev->dev_virtaddr + MAX_XACTIONS_OFFSET);
+	val = val & ~(MAX_WR_XACTIONS_MASK << MAX_WR_XACTIONS_BIT_POS);
+	val = val | (mgmtdev->max_wr_xactions << MAX_WR_XACTIONS_BIT_POS);
+	val = val & ~(MAX_RD_XACTIONS_MASK);
+	val = val | (mgmtdev->max_rd_xactions);
+	writel(val, mgmtdev->dev_virtaddr + MAX_XACTIONS_OFFSET);
+
+	mgmtdev->hw_version = readl(mgmtdev->dev_virtaddr + HW_VERSION_OFFSET);
+	mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
+	mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
+
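+	/*
+	 * Per-channel QoS: one 32-bit word per channel at
+	 * QOS_N_OFFSET + 4 * i, with the priority bit at position 15 and
+	 * the 7-bit round-robin weight in bits [14:8].
+	 */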
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		val = readl(mgmtdev->dev_virtaddr + QOS_N_OFFSET + (4 * i));
+		val = val & ~(1 << PRIORITY_BIT_POS);
+		val = val |
+			((mgmtdev->priority[i] & 0x1) << PRIORITY_BIT_POS);
+		val = val & ~(WEIGHT_MASK << WRR_BIT_POS);
+		val = val
+			| ((mgmtdev->weight[i] & WEIGHT_MASK) << WRR_BIT_POS);
+		writel(val, mgmtdev->dev_virtaddr + QOS_N_OFFSET + (4 * i));
+	}
+
+	val = readl(mgmtdev->dev_virtaddr + CHRESET_TIMEOUT_OFFSET);
+	val = val & ~CHRESET_TIMEOUT_MASK;
+	val = val | (mgmtdev->chreset_timeout_cycles & CHRESET_TIMEOUT_MASK);
+	writel(val, mgmtdev->dev_virtaddr + CHRESET_TIMEOUT_OFFSET);
+
+	pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
+	pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
+	return 0;
+}
+
+static int hidma_mgmt_probe(struct platform_device *pdev)
+{
+	struct hidma_mgmt_dev *mgmtdev;
+	struct resource *dma_resource;
+	void __iomem *dev_virtaddr;
+	int irq;
+	int rc;
+	u32 val;
+
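+	/*
+	 * Enable runtime PM and hold a reference for the duration of
+	 * probe; it is released via autosuspend on success and dropped in
+	 * the error path on failure.
+	 */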
+	pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+	dma_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!dma_resource) {
+		dev_err(&pdev->dev, "No memory resources found\n");
+		rc = -ENODEV;
+		goto out;
+	}
+	dev_virtaddr = devm_ioremap_resource(&pdev->dev, dma_resource);
+	if (IS_ERR(dev_virtaddr)) {
+		dev_err(&pdev->dev, "can't map i/o memory\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev, "irq resources not found\n");
+		rc = -ENODEV;
+		goto out;
+	}
+
+	mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
+	if (!mgmtdev) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->pdev = pdev;
+
+	mgmtdev->dev_addrsize = resource_size(dma_resource);
+	mgmtdev->dev_virtaddr = dev_virtaddr;
+
+	if (device_property_read_u32(&pdev->dev, "dma-channels",
+		&mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "number of channels missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "channel-reset-timeout-cycles",
+		&mgmtdev->chreset_timeout_cycles)) {
+		dev_err(&pdev->dev, "channel reset timeout missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
+		&mgmtdev->max_write_request)) {
+		dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
+		&mgmtdev->max_read_request)) {
+		dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-write-transactions",
+		&mgmtdev->max_wr_xactions)) {
+		dev_err(&pdev->dev, "max-write-transactions missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-read-transactions",
+		&mgmtdev->max_rd_xactions)) {
+		dev_err(&pdev->dev, "max-read-transactions missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	mgmtdev->priority = devm_kcalloc(&pdev->dev,
+		mgmtdev->dma_channels, sizeof(*mgmtdev->priority), GFP_KERNEL);
+	if (!mgmtdev->priority) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->weight = devm_kcalloc(&pdev->dev,
+		mgmtdev->dma_channels, sizeof(*mgmtdev->weight), GFP_KERNEL);
+	if (!mgmtdev->weight) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	if (device_property_read_u32_array(&pdev->dev, "channel-priority",
+				mgmtdev->priority, mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "channel-priority missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32_array(&pdev->dev, "channel-weight",
+				mgmtdev->weight, mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "channel-weight missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	rc = hidma_mgmt_setup(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "setup failed\n");
+		goto out;
+	}
+
+	/* start the HW */
+	val = readl(mgmtdev->dev_virtaddr + CFG_OFFSET);
+	val = val | 1;
+	writel(val, mgmtdev->dev_virtaddr + CFG_OFFSET);
+
+	rc = hidma_mgmt_init_sys(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "sysfs setup failed\n");
+		goto out;
+	}
+
+	dev_info(&pdev->dev,
+		 "HW rev: %d.%d @ %pa with %d physical channels\n",
+		 mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
+		 &dma_resource->start, mgmtdev->dma_channels);
+
+	platform_set_drvdata(pdev, mgmtdev);
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+	return 0;
+out:
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	return rc;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
+	{"QCOM8060"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_mgmt_match[] = {
+	{ .compatible = "qcom,hidma-mgmt", },
+	{ .compatible = "qcom,hidma-mgmt-1.0", },
+	{ .compatible = "qcom,hidma-mgmt-1.1", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
+
+static struct platform_driver hidma_mgmt_driver = {
+	.probe = hidma_mgmt_probe,
+	.driver = {
+		.name = "hidma-mgmt",
+		.of_match_table = hidma_mgmt_match,
+		.acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
+	},
+};
+module_platform_driver(hidma_mgmt_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
new file mode 100644
index 0000000..d6f8fa0
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.h
@@ -0,0 +1,38 @@
+/*
+ * Qualcomm Technologies HIDMA Management common header
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+struct hidma_mgmt_dev {
+	u8 hw_version_major;
+	u8 hw_version_minor;
+
+	u32 max_wr_xactions;
+	u32 max_rd_xactions;
+	u32 max_write_request;
+	u32 max_read_request;
+	u32 dma_channels;
+	u32 chreset_timeout_cycles;
+	u32 hw_version;
+	u32 *priority;
+	u32 *weight;
+
+	/* Hardware device constants */
+	void __iomem *dev_virtaddr;
+	resource_size_t dev_addrsize;
+
+	struct platform_device *pdev;
+};
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
new file mode 100644
index 0000000..b1eb9d6
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt_sys.c
@@ -0,0 +1,232 @@
+/*
+ * Qualcomm Technologies HIDMA Management SYS interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/sysfs.h>
+#include <linux/platform_device.h>
+
+#include "hidma_mgmt.h"
+
+struct fileinfo {
+	char *name;
+	int mode;
+	int (*get)(struct hidma_mgmt_dev *mdev);
+	int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
+};
+
+#define IMPLEMENT_GETSET(name)					\
+static int get_##name(struct hidma_mgmt_dev *mdev)		\
+{								\
+	return mdev->name;					\
+}								\
+static int set_##name(struct hidma_mgmt_dev *mdev, u64 val)	\
+{								\
+	u64 tmp;						\
+	int rc;							\
+								\
+	tmp = mdev->name;					\
+	mdev->name = val;					\
+	rc = hidma_mgmt_setup(mdev);				\
+	if (rc)							\
+		mdev->name = tmp;				\
+	return rc;						\
+}
+
+#define DECLARE_ATTRIBUTE(name, mode)				\
+	{#name, mode, get_##name, set_##name}
+
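+/*
+ * For example, IMPLEMENT_GETSET(dma_channels) expands to a trivial
+ * get_dma_channels() returning mdev->dma_channels and a
+ * set_dma_channels() that stores the new value, re-runs
+ * hidma_mgmt_setup() and restores the old value if the setup fails.
+ */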
+IMPLEMENT_GETSET(hw_version_major)
+IMPLEMENT_GETSET(hw_version_minor)
+IMPLEMENT_GETSET(max_wr_xactions)
+IMPLEMENT_GETSET(max_rd_xactions)
+IMPLEMENT_GETSET(max_write_request)
+IMPLEMENT_GETSET(max_read_request)
+IMPLEMENT_GETSET(dma_channels)
+IMPLEMENT_GETSET(chreset_timeout_cycles)
+
+static int set_priority(struct hidma_mgmt_dev *mdev, int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->priority[i];
+	mdev->priority[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->priority[i] = tmp;
+	return rc;
+}
+
+static int set_weight(struct hidma_mgmt_dev *mdev, int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->weight[i];
+	mdev->weight[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->weight[i] = tmp;
+	return rc;
+}
+
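+/*
+ * hw_version_*, dma_channels and chreset_timeout_cycles are exposed
+ * read-only; the burst and transaction limits are writable, and any
+ * write re-programs the hardware through hidma_mgmt_setup().
+ */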
+static struct fileinfo files[] = {
+	DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
+	DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
+	DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
+	DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_wr_xactions, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_rd_xactions, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_write_request, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_read_request, (S_IRUGO|S_IWUGO)),
+};
+
+static ssize_t show_values(struct device *dev, struct device_attribute *attr,
+				char *buf)
+{
+	int i;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		if (strcmp(attr->attr.name, files[i].name) == 0) {
+			sprintf(buf, "%d\n", files[i].get(mdev));
+			goto done;
+		}
+	}
+
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			sprintf(buf, "%d\n", mdev->priority[i]);
+			goto done;
+		}
+
+		sprintf(name, "channel%d_weight", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			sprintf(buf, "%d\n", mdev->weight[i]);
+			goto done;
+		}
+	}
+
+done:
+	return strlen(buf);
+}
+
+static ssize_t set_values(struct device *dev, struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned long tmp;
+	int i, rc;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		if (strcmp(attr->attr.name, files[i].name) == 0) {
+			rc = files[i].set(mdev, tmp);
+			if (rc)
+				return rc;
+
+			goto done;
+		}
+	}
+
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			rc = set_priority(mdev, i, tmp);
+			if (rc)
+				return rc;
+			goto done;
+		}
+
+		sprintf(name, "channel%d_weight", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			rc = set_weight(mdev, i, tmp);
+			if (rc)
+				return rc;
+		}
+	}
+done:
+	return count;
+}
+
+static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
+{
+	struct device_attribute *port_attrs;
+	char *name_copy;
+
+	port_attrs = devm_kmalloc(&dev->pdev->dev,
+			sizeof(struct device_attribute), GFP_KERNEL);
+	if (!port_attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kcalloc(&dev->pdev->dev, 1, strlen(name) + 1,
+				GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	sprintf(name_copy, "%s", name);
+
+	port_attrs->attr.name = name_copy;
+	port_attrs->attr.mode = mode;
+	port_attrs->show      = show_values;
+	port_attrs->store     = set_values;
+	sysfs_attr_init(&port_attrs->attr);
+
+	return device_create_file(&dev->pdev->dev, port_attrs);
+}
+
+
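+/*
+ * Create one sysfs file per entry in files[] plus channelN_priority
+ * and channelN_weight for every DMA channel. They appear under the
+ * platform device, e.g. (path illustrative)
+ * /sys/devices/platform/.../channel0_weight.
+ */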
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		rc = create_sysfs_entry(dev, files[i].name, files[i].mode);
+		if (rc)
+			return rc;
+	}
+
+	for (i = 0; i < dev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		rc = create_sysfs_entry(dev, name, (S_IRUGO|S_IWUGO));
+		if (rc)
+			return rc;
+
+		sprintf(name, "channel%d_weight", i);
+		rc = create_sysfs_entry(dev, name, (S_IRUGO|S_IWUGO));
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
@ 2015-11-08  4:52     ` Sinan Kaya
  0 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: dmaengine, timur, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Sinan Kaya, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, Vinod Koul,
	Dan Williams, devicetree, linux-kernel

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two drivers to follow the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels. These
channels share a set of common parameters, which are
initialized by the management driver during power up. The
same management driver is also used for monitoring the
execution of the channels. In the future, the management
driver will be able to change performance behavior
dynamically, such as bandwidth allocation and prioritization.

The management driver is executed in hypervisor context and
is the main management entity for all channels provided by
the device.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  62 ++++
 drivers/dma/qcom/Kconfig                           |  11 +
 drivers/dma/qcom/Makefile                          |   1 +
 drivers/dma/qcom/hidma_mgmt.c                      | 315 +++++++++++++++++++++
 drivers/dma/qcom/hidma_mgmt.h                      |  38 +++
 drivers/dma/qcom/hidma_mgmt_sys.c                  | 232 +++++++++++++++
 6 files changed, 659 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
new file mode 100644
index 0000000..b906170
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
@@ -0,0 +1,62 @@
+Qualcomm Technologies HIDMA Management interface
+
+The Qualcomm Technologies HIDMA device has been designed
+to support virtualization technology. The driver has been
+divided into two drivers to follow the hardware design. The management
+driver is executed in hypervisor context and is the main
+management entity for all channels provided by the device.
+
+Each HIDMA HW instance consists of multiple channels. These
+channels share a set of common parameters, which are
+initialized by the management driver during power up. The
+same management driver is also used for monitoring the
+execution of the channels, and it can change performance
+behavior dynamically, such as bandwidth allocation and
+prioritization.
+
+All channel devices get probed in the hypervisor
+context during power up and show up as DMA engine
+channels. Then, before virtualization starts, each
+channel device is unbound from the hypervisor by VFIO
+and assigned to the guest machine for control.
+
+This management driver will be used by the system
+admin to monitor/reset the execution state of the DMA
+channels. This is the management interface.
+
+
+Required properties:
+- compatible: must contain one of these.
+  "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0", "qcom,hidma-mgmt";
+- reg: Address range for DMA device
+- dma-channels: Number of channels supported by this DMA controller.
+- max-write-burst-bytes: Maximum write burst in bytes. A memcpy request is
+  fragmented into multiples of this amount.
+- max-read-burst-bytes: Maximum read burst in bytes. A memcpy request is
+  fragmented into multiples of this amount.
+- max-write-transactions: Maximum write transactions to perform in a burst
+- max-read-transactions: Maximum read transactions to perform in a burst
+- channel-reset-timeout-cycles: Channel reset timeout in cycles for this SOC.
+- channel-priority: Priority of the channel.
+  Each DMA channel shares the same HW bandwidth with the other DMA channels.
+  If two requests reach the HW at the same time from a low-priority and a
+  high-priority channel, the high-priority channel will claim the bus.
+  0=low priority, 1=high priority
+- channel-weight: Round-robin weight of the channel.
+  Since only two priority levels are supported, scheduling among
+  channels of equal priority is done via weights.
+
+Example:
+
+	hidma-mgmt@f9984000 {
+		compatible = "qcom,hidma-mgmt-1.0";
+		reg = <0xf9984000 0x15000>;
+		dma-channels = <6>;
+		max-write-burst-bytes = <1024>;
+		max-read-burst-bytes = <1024>;
+		max-write-transactions = <31>;
+		max-read-transactions = <31>;
+		channel-reset-timeout-cycles = <0x500>;
+		channel-priority = <1 1 0 0 0 0>;
+		channel-weight = <1 13 10 3 4 5>;
+	};
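+
+For illustration (the bit layout is taken from the HIDMA management
+driver added by this patch): with the example values above, the second
+channel has priority 1 and weight 13, so the management driver programs
+its per-channel QoS register to (1 << 15) | (13 << 8) = 0x8d00.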
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index 17545df..f3e2d4c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -7,3 +7,14 @@ config QCOM_BAM_DMA
 	  Enable support for the QCOM BAM DMA controller.  This controller
 	  provides DMA capabilities for a variety of on-chip devices.
 
+config QCOM_HIDMA_MGMT
+	tristate "Qualcomm Technologies HIDMA Management support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA management
+	  interface. Each DMA device requires one management interface
+	  driver for basic initialization before the QCOM_HIDMA channel
+	  driver can start managing the channels. In a virtualized
+	  environment, the guest OS would run the QCOM_HIDMA channel
+	  driver and the hypervisor would run the QCOM_HIDMA_MGMT
+	  management driver.
+
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index f612ae3..1a5a96d 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
new file mode 100644
index 0000000..94510d6
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -0,0 +1,315 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine Management interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/acpi.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma_mgmt.h"
+
+#define QOS_N_OFFSET			0x300
+#define CFG_OFFSET			0x400
+#define MAX_BUS_REQ_LEN_OFFSET		0x41C
+#define MAX_XACTIONS_OFFSET		0x420
+#define HW_VERSION_OFFSET		0x424
+#define CHRESET_TIMEOUT_OFFSET		0x418
+
+#define MAX_WR_XACTIONS_MASK		0x1F
+#define MAX_RD_XACTIONS_MASK		0x1F
+#define WEIGHT_MASK			0x7F
+#define MAX_BUS_REQ_LEN_MASK		0xFFFF
+#define CHRESET_TIMEOUT_MASK		0xFFFFF
+
+#define MAX_WR_XACTIONS_BIT_POS	16
+#define MAX_BUS_WR_REQ_BIT_POS		16
+#define WRR_BIT_POS			8
+#define PRIORITY_BIT_POS		15
+
+#define AUTOSUSPEND_TIMEOUT		2000
+#define MAX_CHANNEL_WEIGHT		15
+
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
+{
+	u32 val;
+	u32 i;
+
+	if (!is_power_of_2(mgmtdev->max_write_request) ||
+		(mgmtdev->max_write_request < 128) ||
+		(mgmtdev->max_write_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
+			mgmtdev->max_write_request);
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(mgmtdev->max_read_request) ||
+		(mgmtdev->max_read_request < 128) ||
+		(mgmtdev->max_read_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
+			mgmtdev->max_read_request);
+		return  -EINVAL;
+	}
+
+	if (mgmtdev->max_wr_xactions > MAX_WR_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_wr_xactions cannot be bigger than %d\n",
+			MAX_WR_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_rd_xactions > MAX_RD_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_rd_xactions cannot be bigger than %d\n",
+			MAX_RD_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		if (mgmtdev->priority[i] > 1) {
+			dev_err(&mgmtdev->pdev->dev, "priority can be 0 or 1\n");
+			return -EINVAL;
+		}
+
+		if (mgmtdev->weight[i] > MAX_CHANNEL_WEIGHT) {
+			dev_err(&mgmtdev->pdev->dev,
+				"max value of weight can be %d.\n",
+				MAX_CHANNEL_WEIGHT);
+			return -EINVAL;
+		}
+
+		/* weight needs to be at least one */
+		if (mgmtdev->weight[i] == 0)
+			mgmtdev->weight[i] = 1;
+	}
+
+	pm_runtime_get_sync(&mgmtdev->pdev->dev);
+	val = readl(mgmtdev->dev_virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+	val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
+	val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
+	val = val & ~(MAX_BUS_REQ_LEN_MASK);
+	val = val | (mgmtdev->max_read_request);
+	writel(val, mgmtdev->dev_virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+
+	val = readl(mgmtdev->dev_virtaddr + MAX_XACTIONS_OFFSET);
+	val = val & ~(MAX_WR_XACTIONS_MASK << MAX_WR_XACTIONS_BIT_POS);
+	val = val | (mgmtdev->max_wr_xactions << MAX_WR_XACTIONS_BIT_POS);
+	val = val & ~(MAX_RD_XACTIONS_MASK);
+	val = val | (mgmtdev->max_rd_xactions);
+	writel(val, mgmtdev->dev_virtaddr + MAX_XACTIONS_OFFSET);
+
+	mgmtdev->hw_version = readl(mgmtdev->dev_virtaddr + HW_VERSION_OFFSET);
+	mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
+	mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		val = readl(mgmtdev->dev_virtaddr + QOS_N_OFFSET + (4 * i));
+		val = val & ~(1 << PRIORITY_BIT_POS);
+		val = val |
+			((mgmtdev->priority[i] & 0x1) << PRIORITY_BIT_POS);
+		val = val & ~(WEIGHT_MASK << WRR_BIT_POS);
+		val = val
+			| ((mgmtdev->weight[i] & WEIGHT_MASK) << WRR_BIT_POS);
+		writel(val, mgmtdev->dev_virtaddr + QOS_N_OFFSET + (4 * i));
+	}
+
+	val = readl(mgmtdev->dev_virtaddr + CHRESET_TIMEOUT_OFFSET);
+	val = val & ~CHRESET_TIMEOUT_MASK;
+	val = val | (mgmtdev->chreset_timeout_cycles & CHRESET_TIMEOUT_MASK);
+	writel(val, mgmtdev->dev_virtaddr + CHRESET_TIMEOUT_OFFSET);
+
+	pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
+	pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
+	return 0;
+}
+
+static int hidma_mgmt_probe(struct platform_device *pdev)
+{
+	struct hidma_mgmt_dev *mgmtdev;
+	struct resource *dma_resource;
+	void __iomem *dev_virtaddr;
+	int irq;
+	int rc;
+	u32 val;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+	dma_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!dma_resource) {
+		dev_err(&pdev->dev, "No memory resources found\n");
+		rc = -ENODEV;
+		goto out;
+	}
+	dev_virtaddr = devm_ioremap_resource(&pdev->dev, dma_resource);
+	if (IS_ERR(dev_virtaddr)) {
+		dev_err(&pdev->dev, "can't map i/o memory\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev, "irq resources not found\n");
+		rc = -ENODEV;
+		goto out;
+	}
+
+	mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
+	if (!mgmtdev) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->pdev = pdev;
+
+	mgmtdev->dev_addrsize = resource_size(dma_resource);
+	mgmtdev->dev_virtaddr = dev_virtaddr;
+
+	if (device_property_read_u32(&pdev->dev, "dma-channels",
+		&mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "number of channels missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "channel-reset-timeout-cycles",
+		&mgmtdev->chreset_timeout_cycles)) {
+		dev_err(&pdev->dev, "channel reset timeout missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
+		&mgmtdev->max_write_request)) {
+		dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
+		&mgmtdev->max_read_request)) {
+		dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-write-transactions",
+		&mgmtdev->max_wr_xactions)) {
+		dev_err(&pdev->dev, "max-write-transactions missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-read-transactions",
+		&mgmtdev->max_rd_xactions)) {
+		dev_err(&pdev->dev, "max-read-transactions missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	mgmtdev->priority = devm_kcalloc(&pdev->dev,
+		mgmtdev->dma_channels, sizeof(*mgmtdev->priority), GFP_KERNEL);
+	if (!mgmtdev->priority) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->weight = devm_kcalloc(&pdev->dev,
+		mgmtdev->dma_channels, sizeof(*mgmtdev->weight), GFP_KERNEL);
+	if (!mgmtdev->weight) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	if (device_property_read_u32_array(&pdev->dev, "channel-priority",
+				mgmtdev->priority, mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "channel-priority missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32_array(&pdev->dev, "channel-weight",
+				mgmtdev->weight, mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "channel-weight missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	rc = hidma_mgmt_setup(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "setup failed\n");
+		goto out;
+	}
+
+	/* start the HW */
+	val = readl(mgmtdev->dev_virtaddr + CFG_OFFSET);
+	val = val | 1;
+	writel(val, mgmtdev->dev_virtaddr + CFG_OFFSET);
+
+	rc = hidma_mgmt_init_sys(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "sysfs setup failed\n");
+		goto out;
+	}
+
+	dev_info(&pdev->dev,
+		 "HW rev: %d.%d @ %pa with %d physical channels\n",
+		 mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
+		 &dma_resource->start, mgmtdev->dma_channels);
+
+	platform_set_drvdata(pdev, mgmtdev);
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+	return 0;
+out:
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	return rc;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
+	{"QCOM8060"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_mgmt_match[] = {
+	{ .compatible = "qcom,hidma-mgmt", },
+	{ .compatible = "qcom,hidma-mgmt-1.0", },
+	{ .compatible = "qcom,hidma-mgmt-1.1", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
+
+static struct platform_driver hidma_mgmt_driver = {
+	.probe = hidma_mgmt_probe,
+	.driver = {
+		.name = "hidma-mgmt",
+		.of_match_table = hidma_mgmt_match,
+		.acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
+	},
+};
+module_platform_driver(hidma_mgmt_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
new file mode 100644
index 0000000..d6f8fa0
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.h
@@ -0,0 +1,38 @@
+/*
+ * Qualcomm Technologies HIDMA Management common header
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+struct hidma_mgmt_dev {
+	u8 hw_version_major;
+	u8 hw_version_minor;
+
+	u32 max_wr_xactions;
+	u32 max_rd_xactions;
+	u32 max_write_request;
+	u32 max_read_request;
+	u32 dma_channels;
+	u32 chreset_timeout_cycles;
+	u32 hw_version;
+	u32 *priority;
+	u32 *weight;
+
+	/* Hardware device constants */
+	void __iomem *dev_virtaddr;
+	resource_size_t dev_addrsize;
+
+	struct platform_device *pdev;
+};
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
new file mode 100644
index 0000000..b1eb9d6
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt_sys.c
@@ -0,0 +1,232 @@
+/*
+ * Qualcomm Technologies HIDMA Management SYS interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/sysfs.h>
+#include <linux/platform_device.h>
+
+#include "hidma_mgmt.h"
+
+struct fileinfo {
+	char *name;
+	int mode;
+	int (*get)(struct hidma_mgmt_dev *mdev);
+	int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
+};
+
+#define IMPLEMENT_GETSET(name)					\
+static int get_##name(struct hidma_mgmt_dev *mdev)		\
+{								\
+	return mdev->name;					\
+}								\
+static int set_##name(struct hidma_mgmt_dev *mdev, u64 val)	\
+{								\
+	u64 tmp;						\
+	int rc;							\
+								\
+	tmp = mdev->name;					\
+	mdev->name = val;					\
+	rc = hidma_mgmt_setup(mdev);				\
+	if (rc)							\
+		mdev->name = tmp;				\
+	return rc;						\
+}
+
+#define DECLARE_ATTRIBUTE(name, mode)				\
+	{#name, mode, get_##name, set_##name}
+
+IMPLEMENT_GETSET(hw_version_major)
+IMPLEMENT_GETSET(hw_version_minor)
+IMPLEMENT_GETSET(max_wr_xactions)
+IMPLEMENT_GETSET(max_rd_xactions)
+IMPLEMENT_GETSET(max_write_request)
+IMPLEMENT_GETSET(max_read_request)
+IMPLEMENT_GETSET(dma_channels)
+IMPLEMENT_GETSET(chreset_timeout_cycles)
+
+static int set_priority(struct hidma_mgmt_dev *mdev, int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->priority[i];
+	mdev->priority[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->priority[i] = tmp;
+	return rc;
+}
+
+static int set_weight(struct hidma_mgmt_dev *mdev, int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->weight[i];
+	mdev->weight[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->weight[i] = tmp;
+	return rc;
+}
+
+static struct fileinfo files[] = {
+	DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
+	DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
+	DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
+	DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_wr_xactions, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_rd_xactions, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_write_request, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_read_request, (S_IRUGO|S_IWUGO)),
+};
+
+static ssize_t show_values(struct device *dev, struct device_attribute *attr,
+				char *buf)
+{
+	int i;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		if (strcmp(attr->attr.name, files[i].name) == 0) {
+			sprintf(buf, "%d\n", files[i].get(mdev));
+			goto done;
+		}
+	}
+
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			sprintf(buf, "%d\n", mdev->priority[i]);
+			goto done;
+		}
+
+		sprintf(name, "channel%d_weight", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			sprintf(buf, "%d\n", mdev->weight[i]);
+			goto done;
+		}
+	}
+
+done:
+	return strlen(buf);
+}
+
+static ssize_t set_values(struct device *dev, struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned long tmp;
+	int i, rc;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		if (strcmp(attr->attr.name, files[i].name) == 0) {
+			rc = files[i].set(mdev, tmp);
+			if (rc)
+				return rc;
+
+			goto done;
+		}
+	}
+
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			rc = set_priority(mdev, i, tmp);
+			if (rc)
+				return rc;
+			goto done;
+		}
+
+		sprintf(name, "channel%d_weight", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			rc = set_weight(mdev, i, tmp);
+			if (rc)
+				return rc;
+		}
+	}
+done:
+	return count;
+}
+
+static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
+{
+	struct device_attribute *port_attrs;
+	char *name_copy;
+
+	port_attrs = devm_kmalloc(&dev->pdev->dev,
+			sizeof(struct device_attribute), GFP_KERNEL);
+	if (!port_attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kcalloc(&dev->pdev->dev, 1, strlen(name) + 1,
+				GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	sprintf(name_copy, "%s", name);
+
+	port_attrs->attr.name = name_copy;
+	port_attrs->attr.mode = mode;
+	port_attrs->show      = show_values;
+	port_attrs->store     = set_values;
+	sysfs_attr_init(&port_attrs->attr);
+
+	return device_create_file(&dev->pdev->dev, port_attrs);
+}
+
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		rc = create_sysfs_entry(dev, files[i].name, files[i].mode);
+		if (rc)
+			return rc;
+	}
+
+	for (i = 0; i < dev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		rc = create_sysfs_entry(dev, name, (S_IRUGO|S_IWUGO));
+		if (rc)
+			return rc;
+
+		sprintf(name, "channel%d_weight", i);
+		rc = create_sysfs_entry(dev, name, (S_IRUGO|S_IWUGO));
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project


^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
@ 2015-11-08  4:52     ` Sinan Kaya
  0 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: linux-arm-kernel

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two to follow the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW consists of multiple channels. These channels
share some set of common parameters. These parameters are
initialized by the management driver during power up.
Same management driver is used for monitoring the execution
of the channels. Management driver can change the performance
behavior dynamically such as bandwidth allocation and
prioritization in the future.

The management driver is executed in hypervisor context and
is the main management entity for all channels provided by
the device.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  62 ++++
 drivers/dma/qcom/Kconfig                           |  11 +
 drivers/dma/qcom/Makefile                          |   1 +
 drivers/dma/qcom/hidma_mgmt.c                      | 315 +++++++++++++++++++++
 drivers/dma/qcom/hidma_mgmt.h                      |  38 +++
 drivers/dma/qcom/hidma_mgmt_sys.c                  | 232 +++++++++++++++
 6 files changed, 659 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
new file mode 100644
index 0000000..b906170
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
@@ -0,0 +1,62 @@
+Qualcomm Technologies HIDMA Management interface
+
+The Qualcomm Technologies HIDMA device has been designed
+to support virtualization technology. The driver has been
+divided into two to follow the hardware design. The management
+driver is executed in hypervisor context and is the main
+management entity for all channels provided by the device.
+
+Each HIDMA HW consists of multiple channels. These channels
+share some set of common parameters. These parameters are
+initialized by the management driver during power up.
+Same management driver is used for monitoring the execution
+of the channels. Management driver can change the performance
+behavior dynamically such as bandwidth allocation and
+prioritization.
+
+All channel devices get probed in the hypervisor
+context during power up. They show up as DMA engine
+DMA channels. Then, before starting the virtualization; each
+channel device is unbound from the hypervisor by VFIO
+and assign to the guest machine for control.
+
+This management driver will  be used by the system
+admin to monitor/reset the execution state of the DMA
+channels. This will be the management interface.
+
+
+Required properties:
+- compatible: must contain one of these.
+  "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0", "qcom,hidma-mgmt";
+- reg: Address range for DMA device
+- dma-channels: Number of channels supported by this DMA controller.
+- max-write-burst-bytes: Maximum write burst in bytes. A memcpy requested is
+  fragmented to multiples of this amount.
+- max-read-burst-bytes: Maximum read burst in bytes. A memcpy request is
+  fragmented to multiples of this amount.
+- max-write-transactions: Maximum write transactions to perform in a burst
+- max-read-transactions: Maximum read transactions to perform in a burst
+- channel-reset-timeout-cycles: Channel reset timeout in cycles for this SOC.
+- channel-priority: Priority of the channel.
+  Each dma channel share the same HW bandwidth with other dma channels.
+  If two requests reach to the HW at the same time from a low priority and
+  high priority channel, high priority channel will claim the bus.
+  0=low priority, 1=high priority
+- channel-weight: Round robin weight of the channel
+  Since there are only two priority levels supported, scheduling among
+  the equal priority channels is done via weights.
+
+Example:
+
+	hidma-mgmt at f9984000 = {
+		compatible = "qcom,hidma-mgmt-1.0";
+		reg = <0xf9984000 0x15000>;
+		dma-channels = 6;
+		max-write-burst-bytes = 1024;
+		max-read-burst-bytes = 1024;
+		max-write-transactions = 31;
+		max-read-transactions = 31;
+		channel-reset-timeout-cycles = 0x500;
+		channel-priority = < 1 1 0 0 0 0>;
+		channel-weight = < 1 13 10 3 4 5>;
+	};
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index 17545df..f3e2d4c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -7,3 +7,14 @@ config QCOM_BAM_DMA
 	  Enable support for the QCOM BAM DMA controller.  This controller
 	  provides DMA capabilities for a variety of on-chip devices.
 
+config QCOM_HIDMA_MGMT
+	tristate "Qualcomm Technologies HIDMA Management support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA Management.
+	  Each DMA device requires one management interface driver
+	  for basic initialization before QCOM_HIDMA channel driver can
+	  start managing the channels. In a virtualized environment,
+	  the guest OS would run QCOM_HIDMA channel driver and the
+	  hypervisor would run the QCOM_HIDMA_MGMT management driver.
+
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index f612ae3..1a5a96d 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
new file mode 100644
index 0000000..94510d6
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -0,0 +1,315 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine Management interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/acpi.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma_mgmt.h"
+
+#define QOS_N_OFFSET			0x300
+#define CFG_OFFSET			0x400
+#define MAX_BUS_REQ_LEN_OFFSET		0x41C
+#define MAX_XACTIONS_OFFSET		0x420
+#define HW_VERSION_OFFSET		0x424
+#define CHRESET_TIMEOUT_OFFSET		0x418
+
+#define MAX_WR_XACTIONS_MASK		0x1F
+#define MAX_RD_XACTIONS_MASK		0x1F
+#define WEIGHT_MASK			0x7F
+#define MAX_BUS_REQ_LEN_MASK		0xFFFF
+#define CHRESET_TIMEOUUT_MASK		0xFFFFF
+
+#define MAX_WR_XACTIONS_BIT_POS	16
+#define MAX_BUS_WR_REQ_BIT_POS		16
+#define WRR_BIT_POS			8
+#define PRIORITY_BIT_POS		15
+
+#define AUTOSUSPEND_TIMEOUT		2000
+#define MAX_CHANNEL_WEIGHT		15
+
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
+{
+	u32 val;
+	u32 i;
+
+	if (!is_power_of_2(mgmtdev->max_write_request) ||
+		(mgmtdev->max_write_request < 128) ||
+		(mgmtdev->max_write_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
+			mgmtdev->max_write_request);
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(mgmtdev->max_read_request) ||
+		(mgmtdev->max_read_request < 128) ||
+		(mgmtdev->max_read_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
+			mgmtdev->max_read_request);
+		return  -EINVAL;
+	}
+
+	if (mgmtdev->max_wr_xactions > MAX_WR_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_wr_xactions cannot be bigger than %d\n",
+			MAX_WR_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_rd_xactions > MAX_RD_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_rd_xactions cannot be bigger than %d\n",
+			MAX_RD_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		if (mgmtdev->priority[i] > 1) {
+			dev_err(&mgmtdev->pdev->dev, "priority can be 0 or 1\n");
+			return -EINVAL;
+		}
+
+		if (mgmtdev->weight[i] > MAX_CHANNEL_WEIGHT) {
+			dev_err(&mgmtdev->pdev->dev,
+				"max value of weight can be %d.\n",
+				MAX_CHANNEL_WEIGHT);
+			return -EINVAL;
+		}
+
+		/* weight needs to be at least one */
+		if (mgmtdev->weight[i] == 0)
+			mgmtdev->weight[i] = 1;
+	}
+
+	pm_runtime_get_sync(&mgmtdev->pdev->dev);
+	val = readl(mgmtdev->dev_virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+	val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
+	val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
+	val = val & ~(MAX_BUS_REQ_LEN_MASK);
+	val = val | (mgmtdev->max_read_request);
+	writel(val, mgmtdev->dev_virtaddr + MAX_BUS_REQ_LEN_OFFSET);
+
+	val = readl(mgmtdev->dev_virtaddr + MAX_XACTIONS_OFFSET);
+	val = val & ~(MAX_WR_XACTIONS_MASK << MAX_WR_XACTIONS_BIT_POS);
+	val = val | (mgmtdev->max_wr_xactions << MAX_WR_XACTIONS_BIT_POS);
+	val = val & ~(MAX_RD_XACTIONS_MASK);
+	val = val | (mgmtdev->max_rd_xactions);
+	writel(val, mgmtdev->dev_virtaddr + MAX_XACTIONS_OFFSET);
+
+	mgmtdev->hw_version = readl(mgmtdev->dev_virtaddr + HW_VERSION_OFFSET);
+	mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
+	mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		val = readl(mgmtdev->dev_virtaddr + QOS_N_OFFSET + (4 * i));
+		val = val & ~(1 << PRIORITY_BIT_POS);
+		val = val |
+			((mgmtdev->priority[i] & 0x1) << PRIORITY_BIT_POS);
+		val = val & ~(WEIGHT_MASK << WRR_BIT_POS);
+		val = val
+			| ((mgmtdev->weight[i] & WEIGHT_MASK) << WRR_BIT_POS);
+		writel(val, mgmtdev->dev_virtaddr + QOS_N_OFFSET + (4 * i));
+	}
+
+	val = readl(mgmtdev->dev_virtaddr + CHRESET_TIMEOUT_OFFSET);
+	val = val & ~CHRESET_TIMEOUUT_MASK;
+	val = val | (mgmtdev->chreset_timeout_cycles & CHRESET_TIMEOUUT_MASK);
+	writel(val, mgmtdev->dev_virtaddr + CHRESET_TIMEOUT_OFFSET);
+
+	pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
+	pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
+	return 0;
+}
+
+static int hidma_mgmt_probe(struct platform_device *pdev)
+{
+	struct hidma_mgmt_dev *mgmtdev;
+	struct resource *dma_resource;
+	void *dev_virtaddr;
+	int irq;
+	int rc;
+	u32 val;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+	dma_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!dma_resource) {
+		dev_err(&pdev->dev, "No memory resources found\n");
+		rc = -ENODEV;
+		goto out;
+	}
+	dev_virtaddr = devm_ioremap_resource(&pdev->dev, dma_resource);
+	if (IS_ERR(dev_virtaddr)) {
+		dev_err(&pdev->dev, "can't map i/o memory\n");
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev, "irq resources not found\n");
+		rc = -ENODEV;
+		goto out;
+	}
+
+	mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
+	if (!mgmtdev) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->pdev = pdev;
+
+	mgmtdev->dev_addrsize = resource_size(dma_resource);
+	mgmtdev->dev_virtaddr = dev_virtaddr;
+
+	if (device_property_read_u32(&pdev->dev, "dma-channels",
+		&mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "number of channels missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "channel-reset-timeout-cycles",
+		&mgmtdev->chreset_timeout_cycles)) {
+		dev_err(&pdev->dev, "channel reset timeout missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
+		&mgmtdev->max_write_request)) {
+		dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
+		&mgmtdev->max_read_request)) {
+		dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-write-transactions",
+		&mgmtdev->max_wr_xactions)) {
+		dev_err(&pdev->dev, "max-write-transactions missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32(&pdev->dev, "max-read-transactions",
+		&mgmtdev->max_rd_xactions)) {
+		dev_err(&pdev->dev, "max-read-transactions missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	mgmtdev->priority = devm_kcalloc(&pdev->dev,
+		mgmtdev->dma_channels, sizeof(*mgmtdev->priority), GFP_KERNEL);
+	if (!mgmtdev->priority) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->weight = devm_kcalloc(&pdev->dev,
+		mgmtdev->dma_channels, sizeof(*mgmtdev->weight), GFP_KERNEL);
+	if (!mgmtdev->weight) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	if (device_property_read_u32_array(&pdev->dev, "channel-priority",
+				mgmtdev->priority, mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "channel-priority missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	if (device_property_read_u32_array(&pdev->dev, "channel-weight",
+				mgmtdev->weight, mgmtdev->dma_channels)) {
+		dev_err(&pdev->dev, "channel-weight missing\n");
+		rc = -EINVAL;
+		goto out;
+	}
+
+	rc = hidma_mgmt_setup(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "setup failed\n");
+		goto out;
+	}
+
+	/* start the HW */
+	val = readl(mgmtdev->dev_virtaddr + CFG_OFFSET);
+	val = val | 1;
+	writel(val, mgmtdev->dev_virtaddr + CFG_OFFSET);
+
+
+	rc = hidma_mgmt_init_sys(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "sysfs setup failed\n");
+		goto out;
+	}
+
+	dev_info(&pdev->dev,
+		 "HW rev: %d.%d @ %pa with %d physical channels\n",
+		 mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
+		 &dma_resource->start, mgmtdev->dma_channels);
+
+	platform_set_drvdata(pdev, mgmtdev);
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+	return 0;
+out:
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	return rc;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
+	{"QCOM8060"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_mgmt_match[] = {
+	{ .compatible = "qcom,hidma-mgmt", },
+	{ .compatible = "qcom,hidma-mgmt-1.0", },
+	{ .compatible = "qcom,hidma-mgmt-1.1", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
+
+static struct platform_driver hidma_mgmt_driver = {
+	.probe = hidma_mgmt_probe,
+	.driver = {
+		.name = "hidma-mgmt",
+		.of_match_table = hidma_mgmt_match,
+		.acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
+	},
+};
+module_platform_driver(hidma_mgmt_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
new file mode 100644
index 0000000..d6f8fa0
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.h
@@ -0,0 +1,38 @@
+/*
+ * Qualcomm Technologies HIDMA Management common header
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+struct hidma_mgmt_dev {
+	u8 hw_version_major;
+	u8 hw_version_minor;
+
+	u32 max_wr_xactions;
+	u32 max_rd_xactions;
+	u32 max_write_request;
+	u32 max_read_request;
+	u32 dma_channels;
+	u32 chreset_timeout_cycles;
+	u32 hw_version;
+	u32 *priority;
+	u32 *weight;
+
+	/* Hardware device constants */
+	void __iomem *dev_virtaddr;
+	resource_size_t dev_addrsize;
+
+	struct platform_device *pdev;
+};
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
new file mode 100644
index 0000000..b1eb9d6
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt_sys.c
@@ -0,0 +1,232 @@
+/*
+ * Qualcomm Technologies HIDMA Management SYS interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/sysfs.h>
+#include <linux/platform_device.h>
+
+#include "hidma_mgmt.h"
+
+struct fileinfo {
+	char *name;
+	int mode;
+	int (*get)(struct hidma_mgmt_dev *mdev);
+	int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
+};
+
+#define IMPLEMENT_GETSET(name)					\
+static int get_##name(struct hidma_mgmt_dev *mdev)		\
+{								\
+	return mdev->name;					\
+}								\
+static int set_##name(struct hidma_mgmt_dev *mdev, u64 val)	\
+{								\
+	u64 tmp;						\
+	int rc;							\
+								\
+	tmp = mdev->name;					\
+	mdev->name = val;					\
+	rc = hidma_mgmt_setup(mdev);				\
+	if (rc)							\
+		mdev->name = tmp;				\
+	return rc;						\
+}
+
+#define DECLARE_ATTRIBUTE(name, mode)				\
+	{#name, mode, get_##name, set_##name}
+
+IMPLEMENT_GETSET(hw_version_major)
+IMPLEMENT_GETSET(hw_version_minor)
+IMPLEMENT_GETSET(max_wr_xactions)
+IMPLEMENT_GETSET(max_rd_xactions)
+IMPLEMENT_GETSET(max_write_request)
+IMPLEMENT_GETSET(max_read_request)
+IMPLEMENT_GETSET(dma_channels)
+IMPLEMENT_GETSET(chreset_timeout_cycles)
+
+static int set_priority(struct hidma_mgmt_dev *mdev, int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i > mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->priority[i];
+	mdev->priority[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->priority[i] = tmp;
+	return rc;
+}
+
+static int set_weight(struct hidma_mgmt_dev *mdev, int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i > mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->weight[i];
+	mdev->weight[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->weight[i] = tmp;
+	return rc;
+}
+
+static struct fileinfo files[] = {
+	DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
+	DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
+	DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
+	DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_wr_xactions, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_rd_xactions, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_write_request, (S_IRUGO|S_IWUGO)),
+	DECLARE_ATTRIBUTE(max_read_request, (S_IRUGO|S_IWUGO)),
+};
+
+static ssize_t show_values(struct device *dev, struct device_attribute *attr,
+				char *buf)
+{
+	int i;
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		if (strcmp(attr->attr.name, files[i].name) == 0) {
+			sprintf(buf, "%d\n", files[i].get(mdev));
+			goto done;
+		}
+	}
+
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			sprintf(buf, "%d\n", mdev->priority[i]);
+			goto done;
+		}
+
+		sprintf(name, "channel%d_weight", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			sprintf(buf, "%d\n", mdev->weight[i]);
+			goto done;
+		}
+	}
+
+done:
+	return strlen(buf);
+}
+
+static ssize_t set_values(struct device *dev, struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned long tmp;
+	int i, rc;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		if (strcmp(attr->attr.name, files[i].name) == 0) {
+			rc = files[i].set(mdev, tmp);
+			if (rc)
+				return rc;
+
+			goto done;
+		}
+	}
+
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			rc = set_priority(mdev, i, tmp);
+			if (rc)
+				return rc;
+			goto done;
+		}
+
+		sprintf(name, "channel%d_weight", i);
+		if (strcmp(attr->attr.name, name) == 0) {
+			rc = set_weight(mdev, i, tmp);
+			if (rc)
+				return rc;
+			goto done;
+		}
+	}
+done:
+	return count;
+}
+
+static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
+{
+	struct device_attribute *port_attrs;
+	char *name_copy;
+
+	port_attrs = devm_kmalloc(&dev->pdev->dev,
+			sizeof(struct device_attribute), GFP_KERNEL);
+	if (!port_attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(&dev->pdev->dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	port_attrs->attr.name = name_copy;
+	port_attrs->attr.mode = mode;
+	port_attrs->show      = show_values;
+	port_attrs->store     = set_values;
+	sysfs_attr_init(&port_attrs->attr);
+
+	return device_create_file(&dev->pdev->dev, port_attrs);
+}
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(files); i++) {
+		rc = create_sysfs_entry(dev, files[i].name, files[i].mode);
+		if (rc)
+			return rc;
+	}
+
+	for (i = 0; i < dev->dma_channels; i++) {
+		char name[30];
+
+		sprintf(name, "channel%d_priority", i);
+		rc = create_sysfs_entry(dev, name, (S_IRUGO|S_IWUGO));
+		if (rc)
+			return rc;
+
+		sprintf(name, "channel%d_weight", i);
+		rc = create_sysfs_entry(dev, name, (S_IRUGO|S_IWUGO));
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-08  4:52 ` Sinan Kaya
@ 2015-11-08  4:52   ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:52 UTC (permalink / raw)
  To: dmaengine, timur, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Sinan Kaya, Vinod Koul,
	Dan Williams, linux-kernel

This patch adds supporting utility functions
for self test; a short usage sketch follows the
list below. The intention is to share the self
test code between different DMA engine drivers.

Supported test cases include:
1. dma_map_single
2. streaming DMA
3. coherent DMA
4. scatter-gather DMA
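
A minimal usage sketch (hypothetical caller; dmadev, pdev and rc are
placeholder names, error handling trimmed). The self test is expected
to run after the channels have been added to the dma_device channel
list and before dma_async_device_register():

	/* in the DMA engine driver's probe() */
	rc = dma_selftest_memcpy(&dmadev->ddev);
	if (rc) {
		dev_err(&pdev->dev, "memcpy self test failed: %d\n", rc);
		return rc;
	}

	rc = dma_async_device_register(&dmadev->ddev);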

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 drivers/dma/dmaengine.h   |   2 +
 drivers/dma/dmaselftest.c | 669 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 671 insertions(+)
 create mode 100644 drivers/dma/dmaselftest.c

diff --git a/drivers/dma/dmaengine.h b/drivers/dma/dmaengine.h
index 17f983a..05b5a84 100644
--- a/drivers/dma/dmaengine.h
+++ b/drivers/dma/dmaengine.h
@@ -86,4 +86,6 @@ static inline void dma_set_residue(struct dma_tx_state *state, u32 residue)
 		state->residue = residue;
 }
 
+int dma_selftest_memcpy(struct dma_device *dmadev);
+
 #endif
diff --git a/drivers/dma/dmaselftest.c b/drivers/dma/dmaselftest.c
new file mode 100644
index 0000000..324f7c4
--- /dev/null
+++ b/drivers/dma/dmaselftest.c
@@ -0,0 +1,669 @@
+/*
+ * DMA self test code borrowed from Qualcomm Technologies HIDMA driver
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/list.h>
+#include <linux/atomic.h>
+#include <linux/wait.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+struct test_result {
+	atomic_t counter;
+	wait_queue_head_t wq;
+	struct dma_device *dmadev;
+};
+
+static void dma_selftest_complete(void *arg)
+{
+	struct test_result *result = arg;
+	struct dma_device *dmadev = result->dmadev;
+
+	atomic_inc(&result->counter);
+	wake_up(&result->wq);
+	dev_dbg(dmadev->dev, "self test transfer complete :%d\n",
+		atomic_read(&result->counter));
+}
+
+/*
+ * Perform a scatter-gather transaction to verify the HW works.
+ */
+static int dma_selftest_sg(struct dma_device *dmadev,
+			struct dma_chan *dma_chanptr, u64 size,
+			unsigned long flags)
+{
+	dma_addr_t src_dma, dest_dma, dest_dma_it;
+	u8 *dest_buf;
+	u32 i, j = 0;
+	dma_cookie_t cookie;
+	struct dma_async_tx_descriptor *tx;
+	int err = 0;
+	int ret;
+	struct sg_table sg_table;
+	struct scatterlist	*sg;
+	int nents = 10, count;
+	bool free_channel = 1;
+	u8 *src_buf;
+	int map_count;
+	struct test_result result;
+
+	init_waitqueue_head(&result.wq);
+	atomic_set(&result.counter, 0);
+	result.dmadev = dmadev;
+
+	if (!dma_chanptr)
+		return -ENOMEM;
+
+	if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
+		return -ENODEV;
+
+	if (!dma_chanptr->device || !dmadev->dev) {
+		dmadev->device_free_chan_resources(dma_chanptr);
+		return -ENODEV;
+	}
+
+	ret = sg_alloc_table(&sg_table, nents, GFP_KERNEL);
+	if (ret) {
+		err = ret;
+		goto sg_table_alloc_failed;
+	}
+
+	for_each_sg(sg_table.sgl, sg, nents, i) {
+		u64 alloc_sz;
+		void *cpu_addr;
+
+		alloc_sz = round_up(size, nents);
+		do_div(alloc_sz, nents);
+		cpu_addr = kmalloc(alloc_sz, GFP_KERNEL);
+
+		if (!cpu_addr) {
+			err = -ENOMEM;
+			goto sg_buf_alloc_failed;
+		}
+
+		dev_dbg(dmadev->dev, "set sg buf[%d] :%p\n", i, cpu_addr);
+		sg_set_buf(sg, cpu_addr, alloc_sz);
+	}
+
+	dest_buf = kmalloc(round_up(size, nents), GFP_KERNEL);
+	if (!dest_buf) {
+		err = -ENOMEM;
+		goto dst_alloc_failed;
+	}
+	dev_dbg(dmadev->dev, "dest:%p\n", dest_buf);
+
+	/* Fill in src buffer */
+	count = 0;
+	for_each_sg(sg_table.sgl, sg, nents, i) {
+		src_buf = sg_virt(sg);
+		dev_dbg(dmadev->dev,
+			"set src[%d, %d, %p] = %d\n", i, j, src_buf, count);
+
+		for (j = 0; j < sg_dma_len(sg); j++)
+			src_buf[j] = count++;
+	}
+
+	/*
+	 * dma_map_sg cleans and invalidates the cache on arm64 when
+	 * DMA_TO_DEVICE is selected for src. That is why the mapping
+	 * must be done after the data is copied.
+	 */
+	map_count = dma_map_sg(dmadev->dev, sg_table.sgl, nents,
+				DMA_TO_DEVICE);
+	if (!map_count) {
+		err =  -EINVAL;
+		goto src_map_failed;
+	}
+
+	dest_dma = dma_map_single(dmadev->dev, dest_buf,
+				size, DMA_FROM_DEVICE);
+
+	err = dma_mapping_error(dmadev->dev, dest_dma);
+	if (err)
+		goto dest_map_failed;
+
+	/* check scatter gather list contents */
+	for_each_sg(sg_table.sgl, sg, map_count, i)
+		dev_dbg(dmadev->dev,
+			"[%d/%d] src va=%p, iova = %pa len:%d\n",
+			i, map_count, sg_virt(sg), &sg_dma_address(sg),
+			sg_dma_len(sg));
+
+	dest_dma_it = dest_dma;
+	for_each_sg(sg_table.sgl, sg, map_count, i) {
+		src_buf = sg_virt(sg);
+		src_dma = sg_dma_address(sg);
+		dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n",
+			&src_dma, &dest_dma_it);
+
+		tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma_it,
+				src_dma, sg_dma_len(sg), flags);
+		if (!tx) {
+			dev_err(dmadev->dev,
+				"Self-test sg failed, disabling\n");
+			err = -ENODEV;
+			goto prep_memcpy_failed;
+		}
+
+		tx->callback_param = &result;
+		tx->callback = dma_selftest_complete;
+		cookie = tx->tx_submit(tx);
+		dest_dma_it += sg_dma_len(sg);
+	}
+
+	dmadev->device_issue_pending(dma_chanptr);
+
+	/*
+	 * It is assumed that the hardware can move the data within 10s
+	 * and signal the OS of the completion
+	 */
+	ret = wait_event_timeout(result.wq,
+		atomic_read(&result.counter) == (map_count),
+				msecs_to_jiffies(10000));
+
+	if (ret <= 0) {
+		dev_err(dmadev->dev,
+			"Self-test sg copy timed out, disabling\n");
+		err = -ENODEV;
+		goto tx_status;
+	}
+	dev_dbg(dmadev->dev,
+		"Self-test complete signal received\n");
+
+	if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
+				DMA_COMPLETE) {
+		dev_err(dmadev->dev,
+			"Self-test sg status not complete, disabling\n");
+		err = -ENODEV;
+		goto tx_status;
+	}
+
+	dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
+				DMA_FROM_DEVICE);
+
+	count = 0;
+	for_each_sg(sg_table.sgl, sg, map_count, i) {
+		src_buf = sg_virt(sg);
+		if (memcmp(src_buf, &dest_buf[count], sg_dma_len(sg)) == 0) {
+			count += sg_dma_len(sg);
+			continue;
+		}
+
+		for (j = 0; j < sg_dma_len(sg); j++) {
+			if (src_buf[j] != dest_buf[count]) {
+				dev_dbg(dmadev->dev,
+				"[%d, %d] (%p) src :%x dest (%p):%x cnt:%d\n",
+					i, j, &src_buf[j], src_buf[j],
+					&dest_buf[count], dest_buf[count],
+					count);
+				dev_err(dmadev->dev,
+				 "Self-test copy failed compare, disabling\n");
+				err = -EFAULT;
+				goto compare_failed;
+			}
+			count++;
+		}
+	}
+
+	/*
+	 * do not release the channel
+	 * we want to consume all the channels on self test
+	 */
+	free_channel = 0;
+
+compare_failed:
+tx_status:
+prep_memcpy_failed:
+	dma_unmap_single(dmadev->dev, dest_dma, size,
+			 DMA_FROM_DEVICE);
+dest_map_failed:
+	dma_unmap_sg(dmadev->dev, sg_table.sgl, nents,
+			DMA_TO_DEVICE);
+
+src_map_failed:
+	kfree(dest_buf);
+
+dst_alloc_failed:
+sg_buf_alloc_failed:
+	for_each_sg(sg_table.sgl, sg, nents, i) {
+		if (sg_virt(sg))
+			kfree(sg_virt(sg));
+	}
+	sg_free_table(&sg_table);
+sg_table_alloc_failed:
+	if (free_channel)
+		dmadev->device_free_chan_resources(dma_chanptr);
+
+	return err;
+}
+
+/*
+ * Perform a streaming transaction to verify the HW works.
+ */
+static int dma_selftest_streaming(struct dma_device *dmadev,
+			struct dma_chan *dma_chanptr, u64 size,
+			unsigned long flags)
+{
+	dma_addr_t src_dma, dest_dma;
+	u8 *dest_buf, *src_buf;
+	u32 i;
+	dma_cookie_t cookie;
+	struct dma_async_tx_descriptor *tx;
+	int err = 0;
+	int ret;
+	bool free_channel = 1;
+	struct test_result result;
+
+	init_waitqueue_head(&result.wq);
+	atomic_set(&result.counter, 0);
+	result.dmadev = dmadev;
+
+	if (!dma_chanptr)
+		return -ENOMEM;
+
+	if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
+		return -ENODEV;
+
+	if (!dma_chanptr->device || !dmadev->dev) {
+		dmadev->device_free_chan_resources(dma_chanptr);
+		return -ENODEV;
+	}
+
+	src_buf = kmalloc(size, GFP_KERNEL);
+	if (!src_buf) {
+		err = -ENOMEM;
+		goto src_alloc_failed;
+	}
+
+	dest_buf = kmalloc(size, GFP_KERNEL);
+	if (!dest_buf) {
+		err = -ENOMEM;
+		goto dst_alloc_failed;
+	}
+
+	dev_dbg(dmadev->dev, "src: %p dest:%p\n", src_buf, dest_buf);
+
+	/* Fill in src buffer */
+	for (i = 0; i < size; i++)
+		src_buf[i] = (u8)i;
+
+	/*
+	 * dma_map_single cleans and invalidates the cache on arm64 when
+	 * DMA_TO_DEVICE is selected for src. That is why the mapping
+	 * must be done after the data is copied.
+	 */
+	src_dma = dma_map_single(dmadev->dev, src_buf,
+				 size, DMA_TO_DEVICE);
+
+	err = dma_mapping_error(dmadev->dev, src_dma);
+	if (err)
+		goto src_map_failed;
+
+	dest_dma = dma_map_single(dmadev->dev, dest_buf,
+				size, DMA_FROM_DEVICE);
+
+	err = dma_mapping_error(dmadev->dev, dest_dma);
+	if (err)
+		goto dest_map_failed;
+	dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n", &src_dma,
+		&dest_dma);
+	tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma, src_dma,
+					size, flags);
+	if (!tx) {
+		dev_err(dmadev->dev,
+			"Self-test streaming failed, disabling\n");
+		err = -ENODEV;
+		goto prep_memcpy_failed;
+	}
+
+	tx->callback_param = &result;
+	tx->callback = dma_selftest_complete;
+	cookie = tx->tx_submit(tx);
+	dmadev->device_issue_pending(dma_chanptr);
+
+	/*
+	 * It is assumed that the hardware can move the data within 10s
+	 * and signal the OS of the completion
+	 */
+	ret = wait_event_timeout(result.wq,
+				atomic_read(&result.counter) == 1,
+				msecs_to_jiffies(10000));
+
+	if (ret <= 0) {
+		dev_err(dmadev->dev,
+			"Self-test copy timed out, disabling\n");
+		err = -ENODEV;
+		goto tx_status;
+	}
+	dev_dbg(dmadev->dev, "Self-test complete signal received\n");
+
+	if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
+				DMA_COMPLETE) {
+		dev_err(dmadev->dev,
+			"Self-test copy timed out, disabling\n");
+		err = -ENODEV;
+		goto tx_status;
+	}
+
+	dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
+				DMA_FROM_DEVICE);
+
+	if (memcmp(src_buf, dest_buf, size)) {
+		for (i = 0; i < size/4; i++) {
+			if (((u32 *)src_buf)[i] != ((u32 *)(dest_buf))[i]) {
+				dev_dbg(dmadev->dev,
+					"[%d] src data:%x dest data:%x\n",
+					i, ((u32 *)src_buf)[i],
+					((u32 *)(dest_buf))[i]);
+				break;
+			}
+		}
+		dev_err(dmadev->dev,
+			"Self-test copy failed compare, disabling\n");
+		err = -EFAULT;
+		goto compare_failed;
+	}
+
+	/*
+	 * do not release the channel
+	 * we want to consume all the channels on self test
+	 */
+	free_channel = 0;
+
+compare_failed:
+tx_status:
+prep_memcpy_failed:
+	dma_unmap_single(dmadev->dev, dest_dma, size,
+			 DMA_FROM_DEVICE);
+dest_map_failed:
+	dma_unmap_single(dmadev->dev, src_dma, size,
+			DMA_TO_DEVICE);
+
+src_map_failed:
+	kfree(dest_buf);
+
+dst_alloc_failed:
+	kfree(src_buf);
+
+src_alloc_failed:
+	if (free_channel)
+		dmadev->device_free_chan_resources(dma_chanptr);
+
+	return err;
+}
+
+/*
+ * Perform a coherent transaction to verify the HW works.
+ */
+static int dma_selftest_one_coherent(struct dma_device *dmadev,
+			struct dma_chan *dma_chanptr, u64 size,
+			unsigned long flags)
+{
+	dma_addr_t src_dma, dest_dma;
+	u8 *dest_buf, *src_buf;
+	u32 i;
+	dma_cookie_t cookie;
+	struct dma_async_tx_descriptor *tx;
+	int err = 0;
+	int ret;
+	bool free_channel = true;
+	struct test_result result;
+
+	init_waitqueue_head(&result.wq);
+	atomic_set(&result.counter, 0);
+	result.dmadev = dmadev;
+
+	if (!dma_chanptr)
+		return -ENOMEM;
+
+	if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
+		return -ENODEV;
+
+	if (!dma_chanptr->device || !dmadev->dev) {
+		dmadev->device_free_chan_resources(dma_chanptr);
+		return -ENODEV;
+	}
+
+	src_buf = dma_alloc_coherent(dmadev->dev, size,
+				&src_dma, GFP_KERNEL);
+	if (!src_buf) {
+		err = -ENOMEM;
+		goto src_alloc_failed;
+	}
+
+	dest_buf = dma_alloc_coherent(dmadev->dev, size,
+				&dest_dma, GFP_KERNEL);
+	if (!dest_buf) {
+		err = -ENOMEM;
+		goto dst_alloc_failed;
+	}
+
+	dev_dbg(dmadev->dev, "src: %p dest:%p\n", src_buf, dest_buf);
+
+	/* Fill in src buffer */
+	for (i = 0; i < size; i++)
+		src_buf[i] = (u8)i;
+
+	dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n", &src_dma,
+		&dest_dma);
+	tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma, src_dma,
+					size,
+					flags);
+	if (!tx) {
+		dev_err(dmadev->dev,
+			"Self-test coherent failed, disabling\n");
+		err = -ENODEV;
+		goto prep_memcpy_failed;
+	}
+
+	tx->callback_param = &result;
+	tx->callback = dma_selftest_complete;
+	cookie = tx->tx_submit(tx);
+	dmadev->device_issue_pending(dma_chanptr);
+
+	/*
+	 * It is assumed that the hardware can move the data within 10s
+	 * and signal the OS of the completion
+	 */
+	ret = wait_event_timeout(result.wq,
+				atomic_read(&result.counter) == 1,
+				msecs_to_jiffies(10000));
+
+	if (ret <= 0) {
+		dev_err(dmadev->dev,
+			"Self-test copy timed out, disabling\n");
+		err = -ENODEV;
+		goto tx_status;
+	}
+	dev_dbg(dmadev->dev, "Self-test complete signal received\n");
+
+	if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
+				DMA_COMPLETE) {
+		dev_err(dmadev->dev,
+			"Self-test copy timed out, disabling\n");
+		err = -ENODEV;
+		goto tx_status;
+	}
+
+	if (memcmp(src_buf, dest_buf, size)) {
+		for (i = 0; i < size/4; i++) {
+			if (((u32 *)src_buf)[i] != ((u32 *)(dest_buf))[i]) {
+				dev_dbg(dmadev->dev,
+					"[%d] src data:%x dest data:%x\n",
+					i, ((u32 *)src_buf)[i],
+					((u32 *)(dest_buf))[i]);
+				break;
+			}
+		}
+		dev_err(dmadev->dev,
+			"Self-test copy failed compare, disabling\n");
+		err = -EFAULT;
+		goto compare_failed;
+	}
+
+	/*
+	 * do not release the channel
+	 * we want to consume all the channels on self test
+	 */
+	free_channel = 0;
+
+compare_failed:
+tx_status:
+prep_memcpy_failed:
+	dma_free_coherent(dmadev->dev, size, dest_buf, dest_dma);
+
+dst_alloc_failed:
+	dma_free_coherent(dmadev->dev, size, src_buf, src_dma);
+
+src_alloc_failed:
+	if (free_channel)
+		dmadev->device_free_chan_resources(dma_chanptr);
+
+	return err;
+}
+
+static int dma_selftest_all(struct dma_device *dmadev,
+				bool req_coherent, bool req_sg)
+{
+	int rc = -ENODEV, i = 0;
+	struct dma_chan **dmach_ptr = NULL;
+	u32 max_channels = 0;
+	u64 sizes[] = {PAGE_SIZE - 1, PAGE_SIZE, PAGE_SIZE + 1, 2801, 13295};
+	int count = 0;
+	u32 j;
+	u64 size;
+	int failed = 0;
+	struct dma_chan *dmach = NULL;
+
+	list_for_each_entry(dmach, &dmadev->channels,
+			device_node) {
+		max_channels++;
+	}
+
+	dmach_ptr = kcalloc(max_channels, sizeof(*dmach_ptr), GFP_KERNEL);
+	if (!dmach_ptr) {
+		rc = -ENOMEM;
+		goto failed_exit;
+	}
+
+	for (j = 0; j < ARRAY_SIZE(sizes); j++) {
+		size = sizes[j];
+		count = 0;
+		dev_dbg(dmadev->dev, "test start for size:%llx\n", size);
+		list_for_each_entry(dmach, &dmadev->channels,
+				device_node) {
+			dmach_ptr[count] = dmach;
+			if (req_coherent)
+				rc = dma_selftest_one_coherent(dmadev,
+					dmach, size,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+			else if (req_sg)
+				rc = dma_selftest_sg(dmadev,
+					dmach, size,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+			else
+				rc = dma_selftest_streaming(dmadev,
+					dmach, size,
+					DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+			if (rc) {
+				failed = 1;
+				break;
+			}
+			dev_dbg(dmadev->dev,
+				"self test passed for ch:%d\n", count);
+			count++;
+		}
+
+		/*
+		 * Free the channels where the test passed. A failing
+		 * test has already released its channel resources
+		 * inside the self-test function above.
+		 */
+		for (i = 0; i < count; i++)
+			dmadev->device_free_chan_resources(dmach_ptr[i]);
+
+		if (failed)
+			break;
+	}
+
+failed_exit:
+	kfree(dmach_ptr);
+
+	return rc;
+}
+
+static int dma_selftest_mapsingle(struct device *dev)
+{
+	u32 buf_size = 256;
+	char *src;
+	int ret = -ENOMEM;
+	dma_addr_t dma_src;
+
+	src = kmalloc(buf_size, GFP_KERNEL);
+	if (!src)
+		return -ENOMEM;
+
+	strcpy(src, "hello world");
+
+	dma_src = dma_map_single(dev, src, buf_size, DMA_TO_DEVICE);
+	dev_dbg(dev, "mapsingle: src:%p src_dma:%pad\n", src, &dma_src);
+
+	ret = dma_mapping_error(dev, dma_src);
+	if (ret) {
+		dev_err(dev, "dma_mapping_error with ret:%d\n", ret);
+		ret = -ENOMEM;
+	} else {
+		if (strcmp(src, "hello world") != 0) {
+			dev_err(dev, "memory content mismatch\n");
+			ret = -EINVAL;
+		} else
+			dev_dbg(dev, "mapsingle:dma_map_single works\n");
+
+		dma_unmap_single(dev, dma_src, buf_size, DMA_TO_DEVICE);
+	}
+	kfree(src);
+	return ret;
+}
+
+/*
+ * Self test all DMA channels.
+ */
+int dma_selftest_memcpy(struct dma_device *dmadev)
+{
+	int rc;
+
+	dma_selftest_mapsingle(dmadev->dev);
+
+	/* streaming test */
+	rc = dma_selftest_all(dmadev, false, false);
+	if (rc)
+		return rc;
+	dev_dbg(dmadev->dev, "streaming self test passed\n");
+
+	/* coherent test */
+	rc = dma_selftest_all(dmadev, true, false);
+	if (rc)
+		return rc;
+
+	dev_dbg(dmadev->dev, "coherent self test passed\n");
+
+	/* scatter gather test */
+	rc = dma_selftest_all(dmadev, false, true);
+	if (rc)
+		return rc;
+
+	dev_dbg(dmadev->dev, "scatter gather self test passed\n");
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dma_selftest_memcpy);
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08  4:52 ` Sinan Kaya
@ 2015-11-08  4:53   ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:53 UTC (permalink / raw)
  To: dmaengine, timur, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Sinan Kaya, Rob Herring,
	Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala, Vinod Koul,
	Dan Williams, devicetree, linux-kernel

This patch adds support for the HIDMA engine. The driver
consists of two logical blocks: the DMA engine interface
and the low-level interface. The hardware only supports
memcpy/memset, and this driver only exposes the memcpy
interface; neither the HW nor the driver supports the
slave interface.
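
A rough client-side sketch of driving one memcpy transfer through the
dmaengine API (dest_dma, src_dma and len are placeholders; completion
handling omitted):

	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	tx = chan->device->device_prep_dma_memcpy(chan, dest_dma, src_dma,
			len, DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);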

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 .../devicetree/bindings/dma/qcom_hidma.txt         |  18 +
 drivers/dma/qcom/Kconfig                           |   9 +
 drivers/dma/qcom/Makefile                          |   2 +
 drivers/dma/qcom/hidma.c                           | 743 ++++++++++++++++
 drivers/dma/qcom/hidma.h                           | 157 ++++
 drivers/dma/qcom/hidma_dbg.c                       | 225 +++++
 drivers/dma/qcom/hidma_ll.c                        | 944 +++++++++++++++++++++
 7 files changed, 2098 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
 create mode 100644 drivers/dma/qcom/hidma.c
 create mode 100644 drivers/dma/qcom/hidma.h
 create mode 100644 drivers/dma/qcom/hidma_dbg.c
 create mode 100644 drivers/dma/qcom/hidma_ll.c

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma.txt b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
new file mode 100644
index 0000000..c9fb2d44
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
@@ -0,0 +1,18 @@
+Qualcomm Technologies HIDMA Channel driver
+
+Required properties:
+- compatible: must contain "qcom,hidma"
+- reg: Addresses for the transfer and event channel
+- interrupts: Should contain the event interrupt
+- desc-count: Number of asynchronous requests this channel can handle
+- event-channel: The HW event channel on which completions will be delivered.
+Example:
+
+	hidma_24: dma-controller@0x5c050000 {
+		compatible = "qcom,hidma-1.0";
+		reg = <0 0x5c050000 0x0 0x1000>,
+		      <0 0x5c0b0000 0x0 0x1000>;
+		interrupts = <0 389 0>;
+		desc-count = <10>;
+		event-channel = <4>;
+	};
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index f3e2d4c..5588e1c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -18,3 +18,12 @@ config QCOM_HIDMA_MGMT
 	  the guest OS would run QCOM_HIDMA channel driver and the
 	  hypervisor would run the QCOM_HIDMA_MGMT management driver.
 
+config QCOM_HIDMA
+	tristate "Qualcomm Technologies HIDMA Channel support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA controller.
+	  The HIDMA controller supports optimized buffer copies
+	  (user to kernel, kernel to kernel, etc.).  It only supports
+	  memcpy interface. The core is not intended for general
+	  purpose slave DMA.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index 1a5a96d..2b68c9c 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1,2 +1,4 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
 obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
+obj-$(CONFIG_QCOM_HIDMA) +=  hdma.o
+hdma-objs        := hidma_ll.o hidma.o hidma_dbg.o ../dmaselftest.o
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
new file mode 100644
index 0000000..dadc289
--- /dev/null
+++ b/drivers/dma/qcom/hidma.c
@@ -0,0 +1,743 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008.
+ * Copyright (C) Semihalf 2009
+ * Copyright (C) Ilya Yanok, Emcraft Systems 2010
+ * Copyright (C) Alexander Popov, Promcontroller 2014
+ *
+ * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
+ * (defines, structures and comments) was taken from MPC5121 DMA driver
+ * written by Hongjun Chen <hong-jun.chen@freescale.com>.
+ *
+ * Approved as OSADL project by a majority of OSADL members and funded
+ * by OSADL membership fees in 2009;  for details see www.osadl.org.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/* Linux Foundation elects GPLv2 license only. */
+
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/of_dma.h>
+#include <linux/property.h>
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/io.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/acpi.h>
+#include <linux/irq.h>
+#include <linux/atomic.h>
+#include <linux/pm_runtime.h>
+
+#include "../dmaengine.h"
+#include "hidma.h"
+
+/*
+ * Default idle time is 2 seconds. This parameter can
+ * be overridden by changing the following
+ * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
+ * at runtime.
+ */
+#define AUTOSUSPEND_TIMEOUT		2000
+#define ERR_INFO_SW			0xFF
+#define ERR_CODE_UNEXPECTED_TERMINATE	0x0
+
+static inline
+struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
+{
+	return container_of(dmadev, struct hidma_dev, ddev);
+}
+
+static inline
+struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
+{
+	return container_of(_lldevp, struct hidma_dev, lldev);
+}
+
+static inline
+struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
+{
+	return container_of(dmach, struct hidma_chan, chan);
+}
+
+static inline struct hidma_desc *
+to_hidma_desc(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct hidma_desc, desc);
+}
+
+static void hidma_free(struct hidma_dev *dmadev)
+{
+	dev_dbg(dmadev->ddev.dev, "free dmadev\n");
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+}
+
+static unsigned int nr_desc_prm;
+module_param(nr_desc_prm, uint, 0644);
+MODULE_PARM_DESC(nr_desc_prm,
+		 "number of descriptors (default: 0)");
+
+#define MAX_HIDMA_CHANNELS	64
+static int event_channel_idx[MAX_HIDMA_CHANNELS] = {
+	[0 ... (MAX_HIDMA_CHANNELS - 1)] = -1};
+static unsigned int num_event_channel_idx;
+module_param_array_named(event_channel_idx, event_channel_idx, int,
+			&num_event_channel_idx, 0644);
+MODULE_PARM_DESC(event_channel_idx,
+		"event channel index array for the notifications");
+static atomic_t channel_ref_count;
+
+/* process completed descriptors */
+static void hidma_process_completed(struct hidma_dev *mdma)
+{
+	dma_cookie_t last_cookie = 0;
+	struct hidma_chan *mchan;
+	struct hidma_desc *mdesc;
+	struct dma_async_tx_descriptor *desc;
+	unsigned long irqflags;
+	struct list_head list;
+	struct dma_chan *dmach = NULL;
+
+	list_for_each_entry(dmach, &mdma->ddev.channels,
+			device_node) {
+		mchan = to_hidma_chan(dmach);
+		INIT_LIST_HEAD(&list);
+
+		/* Get all completed descriptors */
+		spin_lock_irqsave(&mchan->lock, irqflags);
+		list_splice_tail_init(&mchan->completed, &list);
+		spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+		/* Execute callbacks and run dependencies */
+		list_for_each_entry(mdesc, &list, node) {
+			desc = &mdesc->desc;
+
+			spin_lock_irqsave(&mchan->lock, irqflags);
+			dma_cookie_complete(desc);
+			spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+			if (desc->callback &&
+				(hidma_ll_status(mdma->lldev, mdesc->tre_ch)
+				== DMA_COMPLETE))
+				desc->callback(desc->callback_param);
+
+			last_cookie = desc->cookie;
+			dma_run_dependencies(desc);
+		}
+
+		/* Free descriptors */
+		spin_lock_irqsave(&mchan->lock, irqflags);
+		list_splice_tail_init(&list, &mchan->free);
+		spin_unlock_irqrestore(&mchan->lock, irqflags);
+	}
+}
+
+/*
+ * Called once for each submitted descriptor.
+ * PM is locked once for each descriptor that is currently
+ * in execution.
+ */
+static void hidma_callback(void *data)
+{
+	struct hidma_desc *mdesc = data;
+	struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
+	unsigned long irqflags;
+	struct dma_device *ddev = mchan->chan.device;
+	struct hidma_dev *dmadev = to_hidma_dev(ddev);
+	bool queued = false;
+
+	dev_dbg(dmadev->ddev.dev, "callback: data:0x%p\n", data);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	if (mdesc->node.next) {
+		/* Delete from the active list, add to completed list */
+		list_move_tail(&mdesc->node, &mchan->completed);
+		queued = true;
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	hidma_process_completed(dmadev);
+
+	if (queued) {
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+}
+
+static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
+{
+	struct hidma_chan *mchan;
+	struct dma_device *ddev;
+
+	mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
+	if (!mchan)
+		return -ENOMEM;
+
+	ddev = &dmadev->ddev;
+	mchan->dma_sig = dma_sig;
+	mchan->dmadev = dmadev;
+	mchan->chan.device = ddev;
+	dma_cookie_init(&mchan->chan);
+
+	INIT_LIST_HEAD(&mchan->free);
+	INIT_LIST_HEAD(&mchan->prepared);
+	INIT_LIST_HEAD(&mchan->active);
+	INIT_LIST_HEAD(&mchan->completed);
+
+	spin_lock_init(&mchan->lock);
+	list_add_tail(&mchan->chan.device_node, &ddev->channels);
+	dmadev->ddev.chancnt++;
+	return 0;
+}
+
+static void hidma_issue_pending(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+
+	/* PM will be released in hidma_callback function. */
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	hidma_ll_start(dmadev->lldev);
+}
+
+static enum dma_status hidma_tx_status(struct dma_chan *dmach,
+					dma_cookie_t cookie,
+					struct dma_tx_state *txstate)
+{
+	enum dma_status ret;
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+
+	if (mchan->paused)
+		ret = DMA_PAUSED;
+	else
+		ret = dma_cookie_status(dmach, cookie, txstate);
+
+	return ret;
+}
+
+/*
+ * Submit descriptor to hardware.
+ * Lock the PM for each descriptor we are sending.
+ */
+static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
+{
+	struct hidma_chan *mchan = to_hidma_chan(txd->chan);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	struct hidma_desc *mdesc;
+	unsigned long irqflags;
+	dma_cookie_t cookie;
+
+	if (!hidma_ll_isenabled(dmadev->lldev))
+		return -ENODEV;
+
+	mdesc = container_of(txd, struct hidma_desc, desc);
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	/* Move descriptor to active */
+	list_move_tail(&mdesc->node, &mchan->active);
+
+	/* Update cookie */
+	cookie = dma_cookie_assign(txd);
+
+	hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return cookie;
+}
+
+static int hidma_alloc_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	int rc = 0;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+	u32 i;
+
+	if (mchan->allocated)
+		return 0;
+
+	/* Alloc descriptors for this channel */
+	for (i = 0; i < dmadev->nr_descriptors; i++) {
+		mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
+		if (!mdesc) {
+			rc = -ENOMEM;
+			break;
+		}
+		dma_async_tx_descriptor_init(&mdesc->desc, dmach);
+		mdesc->desc.flags = DMA_CTRL_ACK;
+		mdesc->desc.tx_submit = hidma_tx_submit;
+
+		rc = hidma_ll_request(dmadev->lldev,
+				mchan->dma_sig, "DMA engine", hidma_callback,
+				mdesc, &mdesc->tre_ch);
+		if (rc) {
+			dev_err(dmach->device->dev,
+				"channel alloc failed at %u\n", i);
+			kfree(mdesc);
+			break;
+		}
+		list_add_tail(&mdesc->node, &descs);
+	}
+
+	if (rc) {
+		/* return the allocated descriptors */
+		list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+			hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
+			kfree(mdesc);
+		}
+		return rc;
+	}
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&descs, &mchan->free);
+	mchan->allocated = true;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+	dev_dbg(dmadev->ddev.dev,
+		"allocated channel for %u\n", mchan->dma_sig);
+	return 1;
+}
+
+static void hidma_free_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *mdma = mchan->dmadev;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+
+	if (!list_empty(&mchan->prepared) ||
+		!list_empty(&mchan->active) ||
+		!list_empty(&mchan->completed)) {
+		/*
+		 * We have unfinished requests waiting.
+		 * Terminate the request from the hardware.
+		 */
+		hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
+				ERR_CODE_UNEXPECTED_TERMINATE);
+
+		/* Give enough time for completions to be called. */
+		msleep(100);
+	}
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	/* Channel must be idle */
+	WARN_ON(!list_empty(&mchan->prepared));
+	WARN_ON(!list_empty(&mchan->active));
+	WARN_ON(!list_empty(&mchan->completed));
+
+	/* Move data */
+	list_splice_tail_init(&mchan->free, &descs);
+
+	/* Free descriptors */
+	list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+		hidma_ll_free(mdma->lldev, mdesc->tre_ch);
+		list_del(&mdesc->node);
+		kfree(mdesc);
+	}
+
+	mchan->allocated = 0;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+	dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
+}
+
+static struct dma_async_tx_descriptor *
+hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
+			dma_addr_t dma_src, size_t len, unsigned long flags)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_desc *mdesc = NULL;
+	struct hidma_dev *mdma = mchan->dmadev;
+	unsigned long irqflags;
+
+	dev_dbg(mdma->ddev.dev,
+		"memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
+		&dma_dest, &dma_src, len);
+
+	/* Get free descriptor */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (!list_empty(&mchan->free)) {
+		mdesc = list_first_entry(&mchan->free, struct hidma_desc,
+					node);
+		list_del(&mdesc->node);
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	if (!mdesc)
+		return NULL;
+
+	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+			dma_src, dma_dest, len, flags);
+
+	/* Place descriptor in prepared list */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_add_tail(&mdesc->node, &mchan->prepared);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return &mdesc->desc;
+}
+
+static int hidma_terminate_all(struct dma_chan *chan)
+{
+	struct hidma_dev *dmadev;
+	LIST_HEAD(head);
+	unsigned long irqflags;
+	LIST_HEAD(list);
+	struct hidma_desc *tmp, *mdesc = NULL;
+	int rc;
+	struct hidma_chan *mchan;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	dev_dbg(dmadev->ddev.dev, "terminateall: chan:0x%p\n", mchan);
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	/* give completed requests a chance to finish */
+	hidma_process_completed(dmadev);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_init(&mchan->active, &list);
+	list_splice_init(&mchan->prepared, &list);
+	list_splice_init(&mchan->completed, &list);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	/* this suspends the existing transfer */
+	rc = hidma_ll_pause(dmadev->lldev);
+	if (rc) {
+		dev_err(dmadev->ddev.dev, "channel did not pause\n");
+		goto out;
+	}
+
+	/* return all user requests */
+	list_for_each_entry_safe(mdesc, tmp, &list, node) {
+		struct dma_async_tx_descriptor	*txd = &mdesc->desc;
+		dma_async_tx_callback callback = mdesc->desc.callback;
+		void *param = mdesc->desc.callback_param;
+		enum dma_status status;
+
+		dma_descriptor_unmap(txd);
+
+		status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
+		/*
+		 * The API requires that no submissions are done from a
+		 * callback, so we don't need to drop the lock here
+		 */
+		if (callback && (status == DMA_COMPLETE))
+			callback(param);
+
+		dma_run_dependencies(txd);
+
+		/* move myself to free_list */
+		list_move(&mdesc->node, &mchan->free);
+	}
+
+	/* reinitialize the hardware */
+	rc = hidma_ll_setup(dmadev->lldev);
+
+out:
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return rc;
+}
+
+static int hidma_pause(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	dev_dbg(dmadev->ddev.dev, "pause: chan:0x%p\n", mchan);
+
+	if (!mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		if (hidma_ll_pause(dmadev->lldev))
+			dev_warn(dmadev->ddev.dev, "channel did not stop\n");
+		mchan->paused = true;
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return 0;
+}
+
+static int hidma_resume(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+	int rc = 0;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	dev_dbg(dmadev->ddev.dev, "resume: chan:0x%p\n", mchan);
+
+	if (mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		rc = hidma_ll_resume(dmadev->lldev);
+		if (!rc)
+			mchan->paused = false;
+		else
+			dev_err(dmadev->ddev.dev,
+				"failed to resume the channel\n");
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return rc;
+}
+
+static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
+{
+	struct hidma_lldev **lldev_ptr = arg;
+	irqreturn_t ret;
+	struct hidma_dev *dmadev = to_hidma_dev_from_lldev(lldev_ptr);
+
+	/*
+	 * All interrupts are request driven.
+	 * HW doesn't send an interrupt by itself.
+	 */
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	ret = hidma_ll_inthandler(chirq, *lldev_ptr);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return ret;
+}
+
+static int hidma_probe(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev;
+	int rc = 0;
+	struct resource *trca_resource;
+	struct resource *evca_resource;
+	int chirq;
+	int current_channel_index = atomic_read(&channel_ref_count);
+	void *evca;
+	void *trca;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
+	trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!trca_resource) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	trca = devm_ioremap_resource(&pdev->dev, trca_resource);
+	if (IS_ERR(trca)) {
+		rc = PTR_ERR(trca);
+		goto bailout;
+	}
+
+	evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	if (!evca_resource) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	evca = devm_ioremap_resource(&pdev->dev, evca_resource);
+	if (IS_ERR(evca)) {
+		rc = PTR_ERR(evca);
+		goto bailout;
+	}
+
+	/*
+	 * This driver only handles the channel IRQs.
+	 * Common IRQ is handled by the management driver.
+	 */
+	chirq = platform_get_irq(pdev, 0);
+	if (chirq < 0) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
+	if (!dmadev) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	spin_lock_init(&dmadev->lock);
+	dmadev->ddev.dev = &pdev->dev;
+	pm_runtime_get_sync(dmadev->ddev.dev);
+
+	dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
+	if (WARN_ON(!pdev->dev.dma_mask)) {
+		rc = -ENXIO;
+		goto dmafree;
+	}
+
+	dmadev->dev_evca = evca;
+	dmadev->evca_resource = evca_resource;
+	dmadev->dev_trca = trca;
+	dmadev->trca_resource = trca_resource;
+	dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
+	dmadev->ddev.device_alloc_chan_resources =
+		hidma_alloc_chan_resources;
+	dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
+	dmadev->ddev.device_tx_status = hidma_tx_status;
+	dmadev->ddev.device_issue_pending = hidma_issue_pending;
+	dmadev->ddev.device_pause = hidma_pause;
+	dmadev->ddev.device_resume = hidma_resume;
+	dmadev->ddev.device_terminate_all = hidma_terminate_all;
+	dmadev->ddev.copy_align = 8;
+
+	device_property_read_u32(&pdev->dev, "desc-count",
+				&dmadev->nr_descriptors);
+
+	if (!dmadev->nr_descriptors && nr_desc_prm)
+		dmadev->nr_descriptors = nr_desc_prm;
+
+	if (!dmadev->nr_descriptors) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	if (current_channel_index >= MAX_HIDMA_CHANNELS) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	dmadev->evridx = -1;
+	device_property_read_u32(&pdev->dev, "event-channel", &dmadev->evridx);
+
+	/* kernel command line override for the guest machine */
+	if (event_channel_idx[current_channel_index] != -1)
+		dmadev->evridx = event_channel_idx[current_channel_index];
+
+	if (dmadev->evridx == -1) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	/* Set DMA mask to 64 bits. */
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (rc) {
+		dev_warn(&pdev->dev, "unable to set coherent mask to 64\n");
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (rc)
+			goto dmafree;
+	}
+
+	dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
+				dmadev->nr_descriptors, dmadev->dev_trca,
+				dmadev->dev_evca, dmadev->evridx);
+	if (!dmadev->lldev) {
+		rc = -EPROBE_DEFER;
+		goto dmafree;
+	}
+
+	rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
+			      "qcom-hidma", &dmadev->lldev);
+	if (rc)
+		goto uninit;
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	rc = hidma_chan_init(dmadev, 0);
+	if (rc)
+		goto uninit;
+
+	rc = dma_selftest_memcpy(&dmadev->ddev);
+	if (rc)
+		goto uninit;
+
+	rc = dma_async_device_register(&dmadev->ddev);
+	if (rc)
+		goto uninit;
+
+	hidma_debug_init(dmadev);
+	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
+	platform_set_drvdata(pdev, dmadev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	atomic_inc(&channel_ref_count);
+	return 0;
+
+uninit:
+	hidma_debug_uninit(dmadev);
+	hidma_ll_uninit(dmadev->lldev);
+dmafree:
+	if (dmadev)
+		hidma_free(dmadev);
+bailout:
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	return rc;
+}
+
+static int hidma_remove(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev = platform_get_drvdata(pdev);
+
+	dev_dbg(&pdev->dev, "removing\n");
+	pm_runtime_get_sync(dmadev->ddev.dev);
+
+	dma_async_device_unregister(&dmadev->ddev);
+	hidma_debug_uninit(dmadev);
+	hidma_ll_uninit(dmadev->lldev);
+	hidma_free(dmadev);
+
+	dev_info(&pdev->dev, "HI-DMA engine removed\n");
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_acpi_ids[] = {
+	{"QCOM8061"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_match[] = {
+	{ .compatible = "qcom,hidma-1.0", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_match);
+
+static struct platform_driver hidma_driver = {
+	.probe = hidma_probe,
+	.remove = hidma_remove,
+	.driver = {
+		.name = "hidma",
+		.of_match_table = hidma_match,
+		.acpi_match_table = ACPI_PTR(hidma_acpi_ids),
+	},
+};
+module_platform_driver(hidma_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
new file mode 100644
index 0000000..195d6b5
--- /dev/null
+++ b/drivers/dma/qcom/hidma.h
@@ -0,0 +1,157 @@
+/*
+ * Qualcomm Technologies HIDMA data structures
+ *
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef QCOM_HIDMA_H
+#define QCOM_HIDMA_H
+
+#include <linux/kfifo.h>
+#include <linux/interrupt.h>
+#include <linux/dmaengine.h>
+
+#define TRE_SIZE			32 /* each TRE is 32 bytes  */
+#define TRE_CFG_IDX			0
+#define TRE_LEN_IDX			1
+#define TRE_SRC_LOW_IDX		2
+#define TRE_SRC_HI_IDX			3
+#define TRE_DEST_LOW_IDX		4
+#define TRE_DEST_HI_IDX		5
+
+struct hidma_tx_status {
+	u8 err_info;			/* error record in this transfer    */
+	u8 err_code;			/* completion code		    */
+};
+
+struct hidma_tre {
+	atomic_t allocated;		/* if this channel is allocated	    */
+	bool queued;			/* flag whether this is pending     */
+	u16 status;			/* status			    */
+	u32 chidx;			/* index of the tre	    */
+	u32 dma_sig;			/* signature of the tre	    */
+	const char *dev_name;		/* name of the device		    */
+	void (*callback)(void *data);	/* requester callback		    */
+	void *data;			/* Data associated with this channel*/
+	struct hidma_lldev *lldev;	/* lldma device pointer		    */
+	u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
+	u32 tre_index;			/* the offset where this was written*/
+	u32 int_flags;			/* interrupt flags*/
+};
+
+struct hidma_lldev {
+	bool initialized;		/* initialized flag               */
+	u8 trch_state;			/* trch_state of the device	  */
+	u8 evch_state;			/* evch_state of the device	  */
+	u8 evridx;			/* event channel to notify	  */
+	u32 nr_tres;			/* max number of configs          */
+	spinlock_t lock;		/* reentrancy                     */
+	struct hidma_tre *trepool;	/* trepool of user configs */
+	struct device *dev;		/* device			  */
+	void __iomem *trca;		/* Transfer Channel address       */
+	void __iomem *evca;		/* Event Channel address          */
+	struct hidma_tre
+		**pending_tre_list;	/* Pointers to pending TREs	  */
+	struct hidma_tx_status
+		*tx_status_list;	/* Pointers to pending TREs status*/
+	s32 pending_tre_count;		/* Number of TREs pending	  */
+
+	void *tre_ring;		/* TRE ring			  */
+	dma_addr_t tre_ring_handle;	/* TRE ring to be shared with HW  */
+	u32 tre_ring_size;		/* Byte size of the ring	  */
+	u32 tre_processed_off;		/* last processed TRE		   */
+
+	void *evre_ring;		/* EVRE ring			   */
+	dma_addr_t evre_ring_handle;	/* EVRE ring to be shared with HW  */
+	u32 evre_ring_size;		/* Byte size of the ring	  */
+	u32 evre_processed_off;	/* last processed EVRE		   */
+
+	u32 tre_write_offset;           /* TRE write location              */
+	struct tasklet_struct task;	/* task delivering notifications   */
+	DECLARE_KFIFO_PTR(handoff_fifo,
+		struct hidma_tre *);    /* pending TREs FIFO              */
+};
+
+struct hidma_desc {
+	struct dma_async_tx_descriptor	desc;
+	/* link list node for this channel*/
+	struct list_head		node;
+	u32				tre_ch;
+};
+
+struct hidma_chan {
+	bool				paused;
+	bool				allocated;
+	char				dbg_name[16];
+	u32				dma_sig;
+
+	/*
+	 * active descriptor on this channel
+	 * It is used by the DMA complete notification to
+	 * locate the descriptor that initiated the transfer.
+	 */
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+	struct hidma_dev		*dmadev;
+
+	struct dma_chan			chan;
+	struct list_head		free;
+	struct list_head		prepared;
+	struct list_head		active;
+	struct list_head		completed;
+
+	/* Lock for this structure */
+	spinlock_t			lock;
+};
+
+struct hidma_dev {
+	int				evridx;
+	u32				nr_descriptors;
+
+	struct hidma_lldev		*lldev;
+	void				__iomem *dev_trca;
+	struct resource			*trca_resource;
+	void				__iomem *dev_evca;
+	struct resource			*evca_resource;
+
+	/* used to protect the pending channel list*/
+	spinlock_t			lock;
+	struct dma_device		ddev;
+
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+};
+
+int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
+			const char *dev_name,
+			void (*callback)(void *data), void *data, u32 *tre_ch);
+
+void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
+enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
+bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
+int hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
+int hidma_ll_start(struct hidma_lldev *llhndl);
+int hidma_ll_pause(struct hidma_lldev *llhndl);
+int hidma_ll_resume(struct hidma_lldev *llhndl);
+void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
+	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
+int hidma_ll_setup(struct hidma_lldev *lldev);
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
+			void __iomem *trca, void __iomem *evca,
+			u8 evridx);
+int hidma_ll_uninit(struct hidma_lldev *llhndl);
+irqreturn_t hidma_ll_inthandler(int irq, void *arg);
+void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
+				u8 err_code);
+int hidma_debug_init(struct hidma_dev *dmadev);
+void hidma_debug_uninit(struct hidma_dev *dmadev);
+#endif
diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
new file mode 100644
index 0000000..e0e6711
--- /dev/null
+++ b/drivers/dma/qcom/hidma_dbg.c
@@ -0,0 +1,225 @@
+/*
+ * Qualcomm Technologies HIDMA debug file
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma.h"
+
+void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
+{
+	struct hidma_lldev *lldev = llhndl;
+	struct hidma_tre *tre;
+	u32 length;
+	dma_addr_t src_start;
+	dma_addr_t dest_start;
+	u32 *tre_local;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev, "invalid TRE number in chstats:%d",
+			tre_ch);
+		return;
+	}
+	tre = &lldev->trepool[tre_ch];
+	seq_printf(s, "------Channel %d -----\n", tre_ch);
+	seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
+	seq_printf(s, "queued=0x%x\n", tre->queued);
+	seq_printf(s, "err_info=0x%x\n",
+		   lldev->tx_status_list[tre->chidx].err_info);
+	seq_printf(s, "err_code=0x%x\n",
+		   lldev->tx_status_list[tre->chidx].err_code);
+	seq_printf(s, "status=0x%x\n", tre->status);
+	seq_printf(s, "chidx=0x%x\n", tre->chidx);
+	seq_printf(s, "dma_sig=0x%x\n", tre->dma_sig);
+	seq_printf(s, "dev_name=%s\n", tre->dev_name);
+	seq_printf(s, "callback=%p\n", tre->callback);
+	seq_printf(s, "data=%p\n", tre->data);
+	seq_printf(s, "tre_index=0x%x\n", tre->tre_index);
+
+	tre_local = &tre->tre_local[0];
+	src_start = tre_local[TRE_SRC_LOW_IDX];
+	src_start = ((u64)(tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
+	dest_start = tre_local[TRE_DEST_LOW_IDX];
+	dest_start += ((u64)(tre_local[TRE_DEST_HI_IDX]) << 32);
+	length = tre_local[TRE_LEN_IDX];
+
+	seq_printf(s, "src=%pap\n", &src_start);
+	seq_printf(s, "dest=%pap\n", &dest_start);
+	seq_printf(s, "length=0x%x\n", length);
+}
+
+void hidma_ll_devstats(struct seq_file *s, void *llhndl)
+{
+	struct hidma_lldev *lldev = llhndl;
+
+	seq_puts(s, "------Device -----\n");
+	seq_printf(s, "lldev init=0x%x\n", lldev->initialized);
+	seq_printf(s, "trch_state=0x%x\n", lldev->trch_state);
+	seq_printf(s, "evch_state=0x%x\n", lldev->evch_state);
+	seq_printf(s, "evridx=0x%x\n", lldev->evridx);
+	seq_printf(s, "nr_tres=0x%x\n", lldev->nr_tres);
+	seq_printf(s, "trca=%p\n", lldev->trca);
+	seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
+	seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
+	seq_printf(s, "tre_ring_size=0x%x\n", lldev->tre_ring_size);
+	seq_printf(s, "tre_processed_off=0x%x\n", lldev->tre_processed_off);
+	seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
+	seq_printf(s, "evca=%p\n", lldev->evca);
+	seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
+	seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
+	seq_printf(s, "evre_ring_size=0x%x\n", lldev->evre_ring_size);
+	seq_printf(s, "evre_processed_off=0x%x\n", lldev->evre_processed_off);
+	seq_printf(s, "tre_write_offset=0x%x\n", lldev->tre_write_offset);
+}
+
+/**
+ * hidma_chan_stats: display HIDMA channel statistics
+ *
+ * Display the statistics for the current HIDMA virtual channel device.
+ */
+static int hidma_chan_stats(struct seq_file *s, void *unused)
+{
+	struct hidma_chan *mchan = s->private;
+	struct hidma_desc *mdesc;
+	struct hidma_dev *dmadev = mchan->dmadev;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	seq_printf(s, "paused=%u\n", mchan->paused);
+	seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
+	seq_puts(s, "prepared\n");
+	list_for_each_entry(mdesc, &mchan->prepared, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	seq_puts(s, "active\n");
+	list_for_each_entry(mdesc, &mchan->active, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	seq_puts(s, "completed\n");
+	list_for_each_entry(mdesc, &mchan->completed, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	hidma_ll_devstats(s, mchan->dmadev->lldev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return 0;
+}
+
+/**
+ * hidma_dma_info: display HIDMA device info
+ *
+ * Display the info for the current HIDMA device.
+ */
+static int hidma_dma_info(struct seq_file *s, void *unused)
+{
+	struct hidma_dev *dmadev = s->private;
+	resource_size_t sz;
+
+	seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
+	seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
+	seq_printf(s, "dev_trca_phys=%pa\n",
+		&dmadev->trca_resource->start);
+	sz = resource_size(dmadev->trca_resource);
+	seq_printf(s, "dev_trca_size=%pa\n", &sz);
+	seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
+	seq_printf(s, "dev_evca_phys=%pa\n",
+		&dmadev->evca_resource->start);
+	sz = resource_size(dmadev->evca_resource);
+	seq_printf(s, "dev_evca_size=%pa\n", &sz);
+	return 0;
+}
+
+static int hidma_chan_stats_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, hidma_chan_stats, inode->i_private);
+}
+
+static int hidma_dma_info_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, hidma_dma_info, inode->i_private);
+}
+
+static const struct file_operations hidma_chan_fops = {
+	.open = hidma_chan_stats_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static const struct file_operations hidma_dma_fops = {
+	.open = hidma_dma_info_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+void hidma_debug_uninit(struct hidma_dev *dmadev)
+{
+	debugfs_remove_recursive(dmadev->stats);
+	debugfs_remove_recursive(dmadev->debugfs);
+}
+
+int hidma_debug_init(struct hidma_dev *dmadev)
+{
+	int rc = 0;
+	int chidx = 0;
+	struct list_head *position = NULL;
+
+	dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev),
+						NULL);
+	if (!dmadev->debugfs) {
+		rc = -ENODEV;
+		return rc;
+	}
+
+	/* walk through the virtual channel list */
+	list_for_each(position, &dmadev->ddev.channels) {
+		struct hidma_chan *chan;
+
+		chan = list_entry(position, struct hidma_chan,
+				chan.device_node);
+		sprintf(chan->dbg_name, "chan%d", chidx);
+		chan->debugfs = debugfs_create_dir(chan->dbg_name,
+						dmadev->debugfs);
+		if (!chan->debugfs) {
+			rc = -ENOMEM;
+			goto cleanup;
+		}
+		chan->stats = debugfs_create_file("stats", S_IRUGO,
+				chan->debugfs, chan,
+				&hidma_chan_fops);
+		if (!chan->stats) {
+			rc = -ENOMEM;
+			goto cleanup;
+		}
+		chidx++;
+	}
+
+	dmadev->stats = debugfs_create_file("stats", S_IRUGO,
+			dmadev->debugfs, dmadev,
+			&hidma_dma_fops);
+	if (!dmadev->stats) {
+		rc = -ENOMEM;
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	hidma_debug_uninit(dmadev);
+	return rc;
+}
diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
new file mode 100644
index 0000000..f5c0b8b
--- /dev/null
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -0,0 +1,944 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine low level code
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/atomic.h>
+#include <linux/iopoll.h>
+#include <linux/kfifo.h>
+
+#include "hidma.h"
+
+#define EVRE_SIZE			16 /* each EVRE is 16 bytes */
+
+#define TRCA_CTRLSTS_OFFSET		0x0
+#define TRCA_RING_LOW_OFFSET		0x8
+#define TRCA_RING_HIGH_OFFSET		0xC
+#define TRCA_RING_LEN_OFFSET		0x10
+#define TRCA_READ_PTR_OFFSET		0x18
+#define TRCA_WRITE_PTR_OFFSET		0x20
+#define TRCA_DOORBELL_OFFSET		0x400
+
+#define EVCA_CTRLSTS_OFFSET		0x0
+#define EVCA_INTCTRL_OFFSET		0x4
+#define EVCA_RING_LOW_OFFSET		0x8
+#define EVCA_RING_HIGH_OFFSET		0xC
+#define EVCA_RING_LEN_OFFSET		0x10
+#define EVCA_READ_PTR_OFFSET		0x18
+#define EVCA_WRITE_PTR_OFFSET		0x20
+#define EVCA_DOORBELL_OFFSET		0x400
+
+#define EVCA_IRQ_STAT_OFFSET		0x100
+#define EVCA_IRQ_CLR_OFFSET		0x108
+#define EVCA_IRQ_EN_OFFSET		0x110
+
+#define EVRE_CFG_IDX			0
+#define EVRE_LEN_IDX			1
+#define EVRE_DEST_LOW_IDX		2
+#define EVRE_DEST_HI_IDX		3
+
+#define EVRE_ERRINFO_BIT_POS		24
+#define EVRE_CODE_BIT_POS		28
+
+#define EVRE_ERRINFO_MASK		0xF
+#define EVRE_CODE_MASK			0xF
+
+#define CH_CONTROL_MASK		0xFF
+#define CH_STATE_MASK			0xFF
+#define CH_STATE_BIT_POS		0x8
+
+#define MAKE64(high, low) (((u64)(high) << 32) | (low))
+
+#define IRQ_EV_CH_EOB_IRQ_BIT_POS	0
+#define IRQ_EV_CH_WR_RESP_BIT_POS	1
+#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
+#define IRQ_TR_CH_DATA_RD_ER_BIT_POS	10
+#define IRQ_TR_CH_DATA_WR_ER_BIT_POS	11
+#define IRQ_TR_CH_INVALID_TRE_BIT_POS	14
+
+#define	ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
+		BIT(IRQ_EV_CH_WR_RESP_BIT_POS) | \
+		BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) |	 \
+		BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS) |		 \
+		BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS) |		 \
+		BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
+
+enum ch_command {
+	CH_DISABLE = 0,
+	CH_ENABLE = 1,
+	CH_SUSPEND = 2,
+	CH_RESET = 9,
+};
+
+enum ch_state {
+	CH_DISABLED = 0,
+	CH_ENABLED = 1,
+	CH_RUNNING = 2,
+	CH_SUSPENDED = 3,
+	CH_STOPPED = 4,
+	CH_ERROR = 5,
+	CH_IN_RESET = 9,
+};
+
+enum tre_type {
+	TRE_MEMCPY = 3,
+	TRE_MEMSET = 4,
+};
+
+enum evre_type {
+	EVRE_DMA_COMPLETE = 0x23,
+	EVRE_IMM_DATA = 0x24,
+};
+
+enum err_code {
+	EVRE_STATUS_COMPLETE = 1,
+	EVRE_STATUS_ERROR = 4,
+};
+
+void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	struct hidma_tre *tre;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
+		return;
+	}
+
+	tre = &lldev->trepool[tre_ch];
+	if (atomic_read(&tre->allocated) != true) {
+		dev_err(lldev->dev, "trying to free an unused TRE:%d",
+			tre_ch);
+		return;
+	}
+
+	atomic_set(&tre->allocated, 0);
+	dev_dbg(lldev->dev, "free_dma: allocated:%d tre_ch:%d\n",
+		atomic_read(&tre->allocated), tre_ch);
+}
+
+int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
+			const char *dev_name,
+			void (*callback)(void *data), void *data, u32 *tre_ch)
+{
+	u32 i;
+	struct hidma_tre *tre = NULL;
+	u32 *tre_local;
+
+	if (!tre_ch || !lldev)
+		return -EINVAL;
+
+	/* need to have at least one empty spot in the queue */
+	for (i = 0; i < lldev->nr_tres - 1; i++) {
+		if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
+			break;
+	}
+
+	if (i == (lldev->nr_tres - 1))
+		return -ENOMEM;
+
+	tre = &lldev->trepool[i];
+	tre->dma_sig = dma_sig;
+	tre->dev_name = dev_name;
+	tre->callback = callback;
+	tre->data = data;
+	tre->chidx = i;
+	tre->status = 0;
+	tre->queued = 0;
+	lldev->tx_status_list[i].err_code = 0;
+	tre->lldev = lldev;
+	tre_local = &tre->tre_local[0];
+	tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
+	tre_local[TRE_CFG_IDX] |= ((lldev->evridx & 0xFF) << 8);
+	tre_local[TRE_CFG_IDX] |= BIT(16);	/* set IEOB */
+	*tre_ch = i;
+	if (callback)
+		callback(data);
+	return 0;
+}
+
+/*
+ * Multiple TREs may be queued and waiting in the
+ * pending queue.
+ */
+static void hidma_ll_tre_complete(unsigned long arg)
+{
+	struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
+	struct hidma_tre *tre;
+
+	while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
+		/* call the user callback once the HW has consumed the TRE */
+		if (tre->callback)
+			tre->callback(tre->data);
+	}
+}
+
+/*
+ * Called from the interrupt handler for the channel.
+ * Walks the event ring and completes the TREs for every EVRE that the
+ * hardware has delivered since the last run.
+ * Returns the number of completions consumed, or 0 if there were none.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
+{
+	struct hidma_tre *tre;
+	u32 evre_write_off;
+	u32 evre_ring_size = lldev->evre_ring_size;
+	u32 tre_ring_size = lldev->tre_ring_size;
+	u32 num_completed = 0, tre_iterator, evre_iterator;
+	unsigned long flags;
+
+	evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+	tre_iterator = lldev->tre_processed_off;
+	evre_iterator = lldev->evre_processed_off;
+
+	if ((evre_write_off > evre_ring_size) ||
+		((evre_write_off % EVRE_SIZE) != 0)) {
+		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+		return 0;
+	}
+
+	/*
+	 * By the time control reaches here the number of EVREs and TREs
+	 * may not match. Only consume the ones that hardware told us.
+	 */
+	while ((evre_iterator != evre_write_off)) {
+		u32 *current_evre = lldev->evre_ring + evre_iterator;
+		u32 cfg;
+		u8 err_info;
+
+		spin_lock_irqsave(&lldev->lock, flags);
+		tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
+		if (!tre) {
+			spin_unlock_irqrestore(&lldev->lock, flags);
+			dev_warn(lldev->dev,
+				"tre_index [%d] and tre out of sync\n",
+				tre_iterator / TRE_SIZE);
+			tre_iterator += TRE_SIZE;
+			if (tre_iterator >= tre_ring_size)
+				tre_iterator -= tre_ring_size;
+			evre_iterator += EVRE_SIZE;
+			if (evre_iterator >= evre_ring_size)
+				evre_iterator -= evre_ring_size;
+
+			continue;
+		}
+		lldev->pending_tre_list[tre->tre_index] = NULL;
+
+		/*
+		 * Keep track of pending TREs that SW is expecting to receive
+		 * from HW. We got one now. Decrement our counter.
+		 */
+		lldev->pending_tre_count--;
+		if (lldev->pending_tre_count < 0) {
+			dev_warn(lldev->dev,
+				"tre count mismatch on completion");
+			lldev->pending_tre_count = 0;
+		}
+
+		spin_unlock_irqrestore(&lldev->lock, flags);
+
+		cfg = current_evre[EVRE_CFG_IDX];
+		err_info = (cfg >> EVRE_ERRINFO_BIT_POS);
+		err_info = err_info & EVRE_ERRINFO_MASK;
+		lldev->tx_status_list[tre->chidx].err_info = err_info;
+		lldev->tx_status_list[tre->chidx].err_code =
+			(cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
+		tre->queued = 0;
+
+		kfifo_put(&lldev->handoff_fifo, tre);
+		tasklet_schedule(&lldev->task);
+
+		tre_iterator += TRE_SIZE;
+		if (tre_iterator >= tre_ring_size)
+			tre_iterator -= tre_ring_size;
+		evre_iterator += EVRE_SIZE;
+		if (evre_iterator >= evre_ring_size)
+			evre_iterator -= evre_ring_size;
+
+		/*
+		 * Read the new event descriptor written by the HW.
+		 * As we are processing the delivered events, other events
+		 * get queued to the SW for processing.
+		 */
+		evre_write_off =
+			readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+		num_completed++;
+	}
+
+	if (num_completed) {
+		u32 evre_read_off = (lldev->evre_processed_off +
+				EVRE_SIZE * num_completed);
+		u32 tre_read_off = (lldev->tre_processed_off +
+				TRE_SIZE * num_completed);
+
+		evre_read_off = evre_read_off % evre_ring_size;
+		tre_read_off = tre_read_off % tre_ring_size;
+
+		writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
+
+		/* record the last processed tre offset */
+		lldev->tre_processed_off = tre_read_off;
+		lldev->evre_processed_off = evre_read_off;
+	}
+
+	return num_completed;
+}
+
+void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
+				u8 err_code)
+{
+	u32 tre_iterator;
+	struct hidma_tre *tre;
+	u32 tre_ring_size = lldev->tre_ring_size;
+	int num_completed = 0;
+	u32 tre_read_off;
+	unsigned long flags;
+
+	tre_iterator = lldev->tre_processed_off;
+	while (lldev->pending_tre_count) {
+		int tre_index = tre_iterator / TRE_SIZE;
+
+		spin_lock_irqsave(&lldev->lock, flags);
+		tre = lldev->pending_tre_list[tre_index];
+		if (!tre) {
+			spin_unlock_irqrestore(&lldev->lock, flags);
+			tre_iterator += TRE_SIZE;
+			if (tre_iterator >= tre_ring_size)
+				tre_iterator -= tre_ring_size;
+			continue;
+		}
+		lldev->pending_tre_list[tre_index] = NULL;
+		lldev->pending_tre_count--;
+		if (lldev->pending_tre_count < 0) {
+			dev_warn(lldev->dev,
+				"tre count mismatch on completion");
+			lldev->pending_tre_count = 0;
+		}
+		spin_unlock_irqrestore(&lldev->lock, flags);
+
+		lldev->tx_status_list[tre->chidx].err_info = err_info;
+		lldev->tx_status_list[tre->chidx].err_code = err_code;
+		tre->queued = 0;
+
+		kfifo_put(&lldev->handoff_fifo, tre);
+		tasklet_schedule(&lldev->task);
+
+		tre_iterator += TRE_SIZE;
+		if (tre_iterator >= tre_ring_size)
+			tre_iterator -= tre_ring_size;
+
+		num_completed++;
+	}
+	tre_read_off = (lldev->tre_processed_off +
+			TRE_SIZE * num_completed);
+
+	tre_read_off = tre_read_off % tre_ring_size;
+
+	/* record the last processed tre offset */
+	lldev->tre_processed_off = tre_read_off;
+}
+
+static int hidma_ll_reset(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_RESET << 16);
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Allow the DMA logic to quiesce after the reset request:
+	 * poll the channel state every 1 ms, for up to 10 ms.
+	 */
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
+		1000, 10000);
+	if (ret) {
+		dev_err(lldev->dev,
+			"transfer channel did not reset\n");
+		return ret;
+	}
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_RESET << 16);
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Allow the DMA logic to quiesce after the reset request:
+	 * poll the channel state every 1 ms, for up to 10 ms.
+	 */
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
+		1000, 10000);
+	if (ret)
+		return ret;
+
+	lldev->trch_state = CH_DISABLED;
+	lldev->evch_state = CH_DISABLED;
+	return 0;
+}
+
+static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
+{
+	writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
+	dev_dbg(lldev->dev, "enableirq\n");
+}
+
+/*
+ * The interrupt handler for HIDMA will try to consume as many pending
+ * EVRE from the event queue as possible. Each EVRE has an associated
+ * TRE that holds the user interface parameters. EVRE reports the
+ * result of the transaction. Hardware guarantees ordering between EVREs
+ * and TREs. We use last processed offset to figure out which TRE is
+ * associated with which EVRE. If two TREs are consumed by HW, the EVREs
+ * are in order in the event ring.
+ *
+ * This handler will do a one pass for consuming EVREs. Other EVREs may
+ * be delivered while we are working. It will try to consume incoming
+ * EVREs one more time and return.
+ *
+ * For unprocessed EVREs, hardware will trigger another interrupt until
+ * all the interrupt bits are cleared.
+ *
+ * Hardware guarantees that by the time interrupt is observed, all data
+ * transactions in flight are delivered to their respective places and
+ * are visible to the CPU.
+ *
+ * On demand paging for IOMMU is only supported for PCIe via PRI
+ * (Page Request Interface) not for HIDMA. All other hardware instances
+ * including HIDMA work on pinned DMA addresses.
+ *
+ * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
+ * IOMMU latency will be built into the data movement time. By the time
+ * interrupt happens, IOMMU lookups + data movement has already taken place.
+ *
+ * While the first read in a typical PCI endpoint ISR flushes all outstanding
+ * requests traditionally to the destination, this concept does not apply
+ * here for this HW.
+ */
+static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
+{
+	u32 status;
+	u32 enable;
+	u32 cause;
+	int repeat = 2;
+	unsigned long timeout;
+
+	/*
+	 * Fine tuned for this HW...
+	 *
+	 * This ISR has been designed for this particular hardware. Relaxed read
+	 * and write accessors are used for performance reasons due to interrupt
+	 * delivery guarantees. Do not copy this code blindly and expect
+	 * that to work.
+	 */
+	status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+	cause = status & enable;
+
+	if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
+			(cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
+			(cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
+			(cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
+			(cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
+		u8 err_code = EVRE_STATUS_ERROR;
+		u8 err_info = 0xFF;
+
+		/* Clear out pending interrupts */
+		writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+		dev_err(lldev->dev,
+			"error 0x%x, resetting...\n", cause);
+
+		hidma_cleanup_pending_tre(lldev, err_info, err_code);
+
+		/* reset the channel for recovery */
+		if (hidma_ll_setup(lldev)) {
+			dev_err(lldev->dev,
+				"channel reinitialize failed after error\n");
+			return;
+		}
+		hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+		return;
+	}
+
+	/*
+	 * Try to consume as many EVREs as possible.
+	 * skip this loop if the interrupt is spurious.
+	 */
+	while (cause && repeat) {
+		unsigned long start = jiffies;
+
+		/* This timeout should be sufficient for the core to finish */
+		timeout = start + msecs_to_jiffies(500);
+
+		while (lldev->pending_tre_count) {
+			hidma_handle_tre_completion(lldev);
+			if (time_is_before_jiffies(timeout)) {
+				dev_warn(lldev->dev,
+					"ISR timeout %lx-%lx from %lx [%d]\n",
+					jiffies, timeout, start,
+					lldev->pending_tre_count);
+				break;
+			}
+		}
+
+		/* We consumed TREs or there are pending TREs or EVREs. */
+		writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+		/*
+		 * Another interrupt might have arrived while we are
+		 * processing this one. Read the new cause.
+		 */
+		status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+		enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+		cause = status & enable;
+
+		repeat--;
+	}
+}
+
+
+static int hidma_ll_enable(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= (CH_ENABLE << 16);
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+		((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
+		1000, 10000);
+	if (ret) {
+		dev_err(lldev->dev,
+			"event channel did not get enabled\n");
+		return ret;
+	}
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_ENABLE << 16);
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+		((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
+		1000, 10000);
+	if (ret) {
+		dev_err(lldev->dev,
+			"transfer channel did not get enabled\n");
+		return ret;
+	}
+
+	lldev->trch_state = CH_ENABLED;
+	lldev->evch_state = CH_ENABLED;
+
+	return 0;
+}
+
+int hidma_ll_resume(struct hidma_lldev *lldev)
+{
+	return hidma_ll_enable(lldev);
+}
+
+static int hidma_ll_hw_start(struct hidma_lldev *lldev)
+{
+	int rc = 0;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&lldev->lock, irqflags);
+	writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
+	spin_unlock_irqrestore(&lldev->lock, irqflags);
+
+	return rc;
+}
+
+bool hidma_ll_isenabled(struct hidma_lldev *lldev)
+{
+	u32 val;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+
+	/* both the transfer and event channels have to be enabled */
+	if (((lldev->trch_state == CH_ENABLED) ||
+		(lldev->trch_state == CH_RUNNING)) &&
+		((lldev->evch_state == CH_ENABLED) ||
+			(lldev->evch_state == CH_RUNNING)))
+		return true;
+
+	dev_dbg(lldev->dev, "channels are not enabled or are in error state");
+	return false;
+}
+
+int hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	struct hidma_tre *tre;
+	int rc = 0;
+	unsigned long flags;
+
+	tre = &lldev->trepool[tre_ch];
+
+	/* copy the TRE into its location in the TRE ring */
+	spin_lock_irqsave(&lldev->lock, flags);
+	tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
+	lldev->pending_tre_list[tre->tre_index] = tre;
+	memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
+		TRE_SIZE);
+	lldev->tx_status_list[tre->chidx].err_code = 0;
+	lldev->tx_status_list[tre->chidx].err_info = 0;
+	tre->queued = 1;
+	lldev->pending_tre_count++;
+	lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
+				% lldev->tre_ring_size;
+	spin_unlock_irqrestore(&lldev->lock, flags);
+	return rc;
+}
+
+int hidma_ll_start(struct hidma_lldev *lldev)
+{
+	return hidma_ll_hw_start(lldev);
+}
+
+/*
+ * Note that even though we stop this channel
+ * if there is a pending transaction in flight
+ * it will complete and follow the callback.
+ * This request will prevent further requests
+ * to be made.
+ */
+int hidma_ll_pause(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+
+	/* already suspended by this OS */
+	if ((lldev->trch_state == CH_SUSPENDED) ||
+		(lldev->evch_state == CH_SUSPENDED))
+		return 0;
+
+	/* already stopped by the manager */
+	if ((lldev->trch_state == CH_STOPPED) ||
+		(lldev->evch_state == CH_STOPPED))
+		return 0;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_SUSPEND << 16);
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Wait for the suspend request to take effect:
+	 * poll the channel state every 1 ms, for up to 10 ms.
+	 */
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
+		1000, 10000);
+	if (ret)
+		return ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_SUSPEND << 16);
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Wait for the suspend request to take effect:
+	 * poll the channel state every 1 ms, for up to 10 ms.
+	 */
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
+		1000, 10000);
+	if (ret)
+		return ret;
+
+	lldev->trch_state = CH_SUSPENDED;
+	lldev->evch_state = CH_SUSPENDED;
+	dev_dbg(lldev->dev, "stop\n");
+
+	return 0;
+}
+
+void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
+	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags)
+{
+	struct hidma_tre *tre;
+	u32 *tre_local;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev,
+			"invalid TRE number in transfer params:%d", tre_ch);
+		return;
+	}
+
+	tre = &lldev->trepool[tre_ch];
+	if (atomic_read(&tre->allocated) != true) {
+		dev_err(lldev->dev,
+			"trying to set params on an unused TRE:%d", tre_ch);
+		return;
+	}
+
+	tre_local = &tre->tre_local[0];
+	tre_local[TRE_LEN_IDX] = len;
+	tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
+	tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
+	tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
+	tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
+	tre->int_flags = flags;
+
+	dev_dbg(lldev->dev, "transferparams: tre_ch:%d %pap->%pap len:%u\n",
+		tre_ch, &src, &dest, len);
+}
+
+/*
+ * Called during initialization and after an error condition
+ * to restore hardware state.
+ */
+int hidma_ll_setup(struct hidma_lldev *lldev)
+{
+	int rc;
+	u64 addr;
+	u32 val;
+	u32 nr_tres = lldev->nr_tres;
+
+	lldev->pending_tre_count = 0;
+	lldev->tre_processed_off = 0;
+	lldev->evre_processed_off = 0;
+	lldev->tre_write_offset = 0;
+
+	/* disable interrupts */
+	hidma_ll_enable_irq(lldev, 0);
+
+	/* clear all pending interrupts */
+	val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+	rc = hidma_ll_reset(lldev);
+	if (rc)
+		return rc;
+
+	/*
+	 * Clear all pending interrupts again.
+	 * Otherwise, we observe reset complete interrupts.
+	 */
+	val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+	/* disable interrupts again after reset */
+	hidma_ll_enable_irq(lldev, 0);
+
+	addr = lldev->tre_ring_handle;
+	writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
+	writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
+	writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
+
+	addr = lldev->evre_ring_handle;
+	writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
+	writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
+	writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
+
+	/* support IRQ only for now */
+	val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
+	val = val & ~(0xF);
+	val = val | 0x1;
+	writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
+
+	/* clear all pending interrupts and enable them*/
+	writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+	hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+
+	rc = hidma_ll_enable(lldev);
+	if (rc)
+		return rc;
+
+	return rc;
+}
+
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
+			void __iomem *trca, void __iomem *evca,
+			u8 evridx)
+{
+	u32 required_bytes;
+	struct hidma_lldev *lldev;
+	int rc;
+
+	if (!trca || !evca || !dev || !nr_tres)
+		return NULL;
+
+	/* need at least four TREs */
+	if (nr_tres < 4)
+		return NULL;
+
+	/* need an extra space */
+	nr_tres += 1;
+
+	lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
+	if (!lldev)
+		return NULL;
+
+	lldev->evca = evca;
+	lldev->trca = trca;
+	lldev->dev = dev;
+	required_bytes = sizeof(struct hidma_tre) * nr_tres;
+	lldev->trepool = devm_kzalloc(lldev->dev, required_bytes, GFP_KERNEL);
+	if (!lldev->trepool)
+		return NULL;
+
+	required_bytes = sizeof(lldev->pending_tre_list[0]) * nr_tres;
+	lldev->pending_tre_list = devm_kzalloc(dev, required_bytes,
+					GFP_KERNEL);
+	if (!lldev->pending_tre_list)
+		return NULL;
+
+	required_bytes = sizeof(lldev->tx_status_list[0]) * nr_tres;
+	lldev->tx_status_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
+	if (!lldev->tx_status_list)
+		return NULL;
+
+	lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
+					&lldev->tre_ring_handle, GFP_KERNEL);
+	if (!lldev->tre_ring)
+		return NULL;
+
+	memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
+	lldev->tre_ring_size = TRE_SIZE * nr_tres;
+	lldev->nr_tres = nr_tres;
+
+	/* the TRE ring has to be TRE_SIZE aligned */
+	if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
+		u8  tre_ring_shift;
+
+		tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
+		tre_ring_shift = TRE_SIZE - tre_ring_shift;
+		lldev->tre_ring_handle += tre_ring_shift;
+		lldev->tre_ring += tre_ring_shift;
+	}
+
+	lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
+					&lldev->evre_ring_handle, GFP_KERNEL);
+	if (!lldev->evre_ring)
+		return NULL;
+
+	memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
+	lldev->evre_ring_size = EVRE_SIZE * nr_tres;
+
+	/* the EVRE ring has to be EVRE_SIZE aligned */
+	if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
+		u8  evre_ring_shift;
+
+		evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
+		evre_ring_shift = EVRE_SIZE - evre_ring_shift;
+		lldev->evre_ring_handle += evre_ring_shift;
+		lldev->evre_ring += evre_ring_shift;
+	}
+	lldev->nr_tres = nr_tres;
+	lldev->evridx = evridx;
+
+	rc = kfifo_alloc(&lldev->handoff_fifo,
+		nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
+	if (rc)
+		return NULL;
+
+	rc = hidma_ll_setup(lldev);
+	if (rc)
+		return NULL;
+
+	spin_lock_init(&lldev->lock);
+	tasklet_init(&lldev->task, hidma_ll_tre_complete,
+			(unsigned long)lldev);
+	lldev->initialized = 1;
+	hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+	return lldev;
+}
+
+int hidma_ll_uninit(struct hidma_lldev *lldev)
+{
+	int rc = 0;
+	u32 val;
+
+	if (!lldev)
+		return -ENODEV;
+
+	if (lldev->initialized) {
+		u32 required_bytes;
+
+		lldev->initialized = 0;
+
+		required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
+		tasklet_kill(&lldev->task);
+		memset(lldev->trepool, 0, required_bytes);
+		lldev->trepool = NULL;
+		lldev->pending_tre_count = 0;
+		lldev->tre_write_offset = 0;
+
+		rc = hidma_ll_reset(lldev);
+
+		/*
+		 * Clear all pending interrupts again.
+		 * Otherwise, we observe reset complete interrupts.
+		 */
+		val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+		writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+		hidma_ll_enable_irq(lldev, 0);
+	}
+	return rc;
+}
+
+irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
+{
+	struct hidma_lldev *lldev = arg;
+
+	hidma_ll_int_handler_internal(lldev);
+	return IRQ_HANDLED;
+}
+
+enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	enum dma_status ret = DMA_ERROR;
+	unsigned long flags;
+	u8 err_code;
+
+	spin_lock_irqsave(&lldev->lock, flags);
+	err_code = lldev->tx_status_list[tre_ch].err_code;
+
+	if (err_code & EVRE_STATUS_COMPLETE)
+		ret = DMA_COMPLETE;
+	else if (err_code & EVRE_STATUS_ERROR)
+		ret = DMA_ERROR;
+	else
+		ret = DMA_IN_PROGRESS;
+	spin_unlock_irqrestore(&lldev->lock, flags);
+
+	return ret;
+}
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
@ 2015-11-08  4:53   ` Sinan Kaya
  0 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08  4:53 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds support for the HIDMA engine. The driver
consists of two logical blocks: the DMA engine interface
and the low-level interface. The hardware only supports
memcpy/memset, and this driver only supports the memcpy
interface; neither the HW nor the driver supports slave mode.
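
To illustrate how a consumer is expected to drive the engine, here is a
minimal sketch (not part of this patch) that requests a DMA_MEMCPY
channel through the generic dmaengine API and runs a single copy. The
helper name and the assumption that the caller already holds DMA-mapped
source/destination addresses are illustrative only:

	#include <linux/dmaengine.h>

	/* hypothetical helper: dst/src are DMA addresses mapped by the caller */
	static int hidma_memcpy_example(dma_addr_t dst, dma_addr_t src, size_t len)
	{
		dma_cap_mask_t mask;
		struct dma_chan *chan;
		struct dma_async_tx_descriptor *tx;
		dma_cookie_t cookie;

		dma_cap_zero(mask);
		dma_cap_set(DMA_MEMCPY, mask);

		/* any channel advertising DMA_MEMCPY will do; HIDMA registers one */
		chan = dma_request_channel(mask, NULL, NULL);
		if (!chan)
			return -ENODEV;

		tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
							   DMA_PREP_INTERRUPT |
							   DMA_CTRL_ACK);
		if (!tx) {
			dma_release_channel(chan);
			return -ENOMEM;
		}

		cookie = dmaengine_submit(tx);
		dma_async_issue_pending(chan);

		/* polling here only for brevity; real users rely on the callback */
		if (dma_sync_wait(chan, cookie) != DMA_COMPLETE) {
			dma_release_channel(chan);
			return -EIO;
		}

		dma_release_channel(chan);
		return 0;
	}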

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 .../devicetree/bindings/dma/qcom_hidma.txt         |  18 +
 drivers/dma/qcom/Kconfig                           |   9 +
 drivers/dma/qcom/Makefile                          |   2 +
 drivers/dma/qcom/hidma.c                           | 743 ++++++++++++++++
 drivers/dma/qcom/hidma.h                           | 157 ++++
 drivers/dma/qcom/hidma_dbg.c                       | 225 +++++
 drivers/dma/qcom/hidma_ll.c                        | 944 +++++++++++++++++++++
 7 files changed, 2098 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
 create mode 100644 drivers/dma/qcom/hidma.c
 create mode 100644 drivers/dma/qcom/hidma.h
 create mode 100644 drivers/dma/qcom/hidma_dbg.c
 create mode 100644 drivers/dma/qcom/hidma_ll.c

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma.txt b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
new file mode 100644
index 0000000..c9fb2d44
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
@@ -0,0 +1,18 @@
+Qualcomm Technologies HIDMA Channel driver
+
+Required properties:
+- compatible: must contain "qcom,hidma"
+- reg: Addresses for the transfer and event channel
+- interrupts: Should contain the event interrupt
+- desc-count: Number of asynchronous requests this channel can handle
+- event-channel: The HW event channel on which completions will be delivered.
+Example:
+
+	hidma_24: dma-controller@0x5c050000 {
+		compatible = "qcom,hidma-1.0";
+		reg = <0 0x5c050000 0x0 0x1000>,
+		      <0 0x5c0b0000 0x0 0x1000>;
+		interrupts = <0 389 0>;
+		desc-count = <10>;
+		event-channel = <4>;
+	};
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index f3e2d4c..5588e1c 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -18,3 +18,12 @@ config QCOM_HIDMA_MGMT
 	  the guest OS would run QCOM_HIDMA channel driver and the
 	  hypervisor would run the QCOM_HIDMA_MGMT management driver.
 
+config QCOM_HIDMA
+	tristate "Qualcomm Technologies HIDMA Channel support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA controller.
+	  The HIDMA controller supports optimized buffer copies
+	  (user to kernel, kernel to kernel, etc.).  It only supports
+	  memcpy interface. The core is not intended for general
+	  purpose slave DMA.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index 1a5a96d..2b68c9c 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1,2 +1,4 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
 obj-$(CONFIG_QCOM_HIDMA_MGMT) += hidma_mgmt.o hidma_mgmt_sys.o
+obj-$(CONFIG_QCOM_HIDMA) +=  hdma.o
+hdma-objs        := hidma_ll.o hidma.o hidma_dbg.o ../dmaselftest.o
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
new file mode 100644
index 0000000..dadc289
--- /dev/null
+++ b/drivers/dma/qcom/hidma.c
@@ -0,0 +1,743 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008.
+ * Copyright (C) Semihalf 2009
+ * Copyright (C) Ilya Yanok, Emcraft Systems 2010
+ * Copyright (C) Alexander Popov, Promcontroller 2014
+ *
+ * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
+ * (defines, structures and comments) was taken from MPC5121 DMA driver
+ * written by Hongjun Chen <hong-jun.chen@freescale.com>.
+ *
+ * Approved as OSADL project by a majority of OSADL members and funded
+ * by OSADL membership fees in 2009;  for details see www.osadl.org.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/* Linux Foundation elects GPLv2 license only. */
+
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/of_dma.h>
+#include <linux/property.h>
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/io.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/acpi.h>
+#include <linux/irq.h>
+#include <linux/atomic.h>
+#include <linux/pm_runtime.h>
+
+#include "../dmaengine.h"
+#include "hidma.h"
+
+/*
+ * Default idle time is 2 seconds. This parameter can
+ * be overridden by changing the following
+ * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
+ * at runtime.
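+ * (For example, writing 3000 to that file selects a 3 second idle
+ * timeout.)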
+ */
+#define AUTOSUSPEND_TIMEOUT		2000
+#define ERR_INFO_SW			0xFF
+#define ERR_CODE_UNEXPECTED_TERMINATE	0x0
+
+static inline
+struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
+{
+	return container_of(dmadev, struct hidma_dev, ddev);
+}
+
+static inline
+struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
+{
+	return container_of(_lldevp, struct hidma_dev, lldev);
+}
+
+static inline
+struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
+{
+	return container_of(dmach, struct hidma_chan, chan);
+}
+
+static inline struct hidma_desc *
+to_hidma_desc(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct hidma_desc, desc);
+}
+
+static void hidma_free(struct hidma_dev *dmadev)
+{
+	dev_dbg(dmadev->ddev.dev, "free dmadev\n");
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+}
+
+static unsigned int nr_desc_prm;
+module_param(nr_desc_prm, uint, 0644);
+MODULE_PARM_DESC(nr_desc_prm,
+		 "number of descriptors (default: 0)");
+
+#define MAX_HIDMA_CHANNELS	64
+static int event_channel_idx[MAX_HIDMA_CHANNELS] = {
+	[0 ... (MAX_HIDMA_CHANNELS - 1)] = -1};
+static unsigned int num_event_channel_idx;
+module_param_array_named(event_channel_idx, event_channel_idx, int,
+			&num_event_channel_idx, 0644);
+MODULE_PARM_DESC(event_channel_idx,
+		"event channel index array for the notifications");
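+/* counts successfully probed channel instances; indexes event_channel_idx */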
+static atomic_t channel_ref_count;
+
+/* process completed descriptors */
+static void hidma_process_completed(struct hidma_dev *mdma)
+{
+	dma_cookie_t last_cookie = 0;
+	struct hidma_chan *mchan;
+	struct hidma_desc *mdesc;
+	struct dma_async_tx_descriptor *desc;
+	unsigned long irqflags;
+	struct list_head list;
+	struct dma_chan *dmach = NULL;
+
+	list_for_each_entry(dmach, &mdma->ddev.channels,
+			device_node) {
+		mchan = to_hidma_chan(dmach);
+		INIT_LIST_HEAD(&list);
+
+		/* Get all completed descriptors */
+		spin_lock_irqsave(&mchan->lock, irqflags);
+		list_splice_tail_init(&mchan->completed, &list);
+		spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+		/* Execute callbacks and run dependencies */
+		list_for_each_entry(mdesc, &list, node) {
+			desc = &mdesc->desc;
+
+			spin_lock_irqsave(&mchan->lock, irqflags);
+			dma_cookie_complete(desc);
+			spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+			if (desc->callback &&
+				(hidma_ll_status(mdma->lldev, mdesc->tre_ch)
+				== DMA_COMPLETE))
+				desc->callback(desc->callback_param);
+
+			last_cookie = desc->cookie;
+			dma_run_dependencies(desc);
+		}
+
+		/* Free descriptors */
+		spin_lock_irqsave(&mchan->lock, irqflags);
+		list_splice_tail_init(&list, &mchan->free);
+		spin_unlock_irqrestore(&mchan->lock, irqflags);
+	}
+}
+
+/*
+ * Called once for each submitted descriptor.
+ * PM is locked once for each descriptor that is currently
+ * in execution.
+ */
+static void hidma_callback(void *data)
+{
+	struct hidma_desc *mdesc = data;
+	struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
+	unsigned long irqflags;
+	struct dma_device *ddev = mchan->chan.device;
+	struct hidma_dev *dmadev = to_hidma_dev(ddev);
+	bool queued = false;
+
+	dev_dbg(dmadev->ddev.dev, "callback: data:0x%p\n", data);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	if (mdesc->node.next) {
+		/* Delete from the active list, add to completed list */
+		list_move_tail(&mdesc->node, &mchan->completed);
+		queued = true;
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	hidma_process_completed(dmadev);
+
+	if (queued) {
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+}
+
+static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
+{
+	struct hidma_chan *mchan;
+	struct dma_device *ddev;
+
+	mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
+	if (!mchan)
+		return -ENOMEM;
+
+	ddev = &dmadev->ddev;
+	mchan->dma_sig = dma_sig;
+	mchan->dmadev = dmadev;
+	mchan->chan.device = ddev;
+	dma_cookie_init(&mchan->chan);
+
+	INIT_LIST_HEAD(&mchan->free);
+	INIT_LIST_HEAD(&mchan->prepared);
+	INIT_LIST_HEAD(&mchan->active);
+	INIT_LIST_HEAD(&mchan->completed);
+
+	spin_lock_init(&mchan->lock);
+	list_add_tail(&mchan->chan.device_node, &ddev->channels);
+	dmadev->ddev.chancnt++;
+	return 0;
+}
+
+static void hidma_issue_pending(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+
+	/* PM will be released in hidma_callback function. */
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	hidma_ll_start(dmadev->lldev);
+}
+
+static enum dma_status hidma_tx_status(struct dma_chan *dmach,
+					dma_cookie_t cookie,
+					struct dma_tx_state *txstate)
+{
+	enum dma_status ret;
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+
+	if (mchan->paused)
+		ret = DMA_PAUSED;
+	else
+		ret = dma_cookie_status(dmach, cookie, txstate);
+
+	return ret;
+}
+
+/*
+ * Submit descriptor to hardware.
+ * Lock the PM for each descriptor we are sending.
+ */
+static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
+{
+	struct hidma_chan *mchan = to_hidma_chan(txd->chan);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	struct hidma_desc *mdesc;
+	unsigned long irqflags;
+	dma_cookie_t cookie;
+
+	if (!hidma_ll_isenabled(dmadev->lldev))
+		return -ENODEV;
+
+	mdesc = container_of(txd, struct hidma_desc, desc);
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	/* Move descriptor to active */
+	list_move_tail(&mdesc->node, &mchan->active);
+
+	/* Update cookie */
+	cookie = dma_cookie_assign(txd);
+
+	hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return cookie;
+}
+
+static int hidma_alloc_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	int rc = 0;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+	u32 i;
+
+	if (mchan->allocated)
+		return 0;
+
+	/* Alloc descriptors for this channel */
+	for (i = 0; i < dmadev->nr_descriptors; i++) {
+		mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
+		if (!mdesc) {
+			rc = -ENOMEM;
+			break;
+		}
+		dma_async_tx_descriptor_init(&mdesc->desc, dmach);
+		mdesc->desc.flags = DMA_CTRL_ACK;
+		mdesc->desc.tx_submit = hidma_tx_submit;
+
+		rc = hidma_ll_request(dmadev->lldev,
+				mchan->dma_sig, "DMA engine", hidma_callback,
+				mdesc, &mdesc->tre_ch);
+		if (rc) {
+			dev_err(dmach->device->dev,
+				"channel alloc failed at %u\n", i);
+			kfree(mdesc);
+			break;
+		}
+		list_add_tail(&mdesc->node, &descs);
+	}
+
+	if (rc) {
+		/* return the allocated descriptors */
+		list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+			hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
+			kfree(mdesc);
+		}
+		return rc;
+	}
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&descs, &mchan->free);
+	mchan->allocated = true;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+	dev_dbg(dmadev->ddev.dev,
+		"allocated channel for %u\n", mchan->dma_sig);
+	return 1;
+}
+
+static void hidma_free_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *mdma = mchan->dmadev;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+
+	if (!list_empty(&mchan->prepared) ||
+		!list_empty(&mchan->active) ||
+		!list_empty(&mchan->completed)) {
+		/*
+		 * We have unfinished requests waiting.
+		 * Terminate the request from the hardware.
+		 */
+		hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
+				ERR_CODE_UNEXPECTED_TERMINATE);
+
+		/* Give enough time for completions to be called. */
+		msleep(100);
+	}
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	/* Channel must be idle */
+	WARN_ON(!list_empty(&mchan->prepared));
+	WARN_ON(!list_empty(&mchan->active));
+	WARN_ON(!list_empty(&mchan->completed));
+
+	/* Move data */
+	list_splice_tail_init(&mchan->free, &descs);
+
+	/* Free descriptors */
+	list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+		hidma_ll_free(mdma->lldev, mdesc->tre_ch);
+		list_del(&mdesc->node);
+		kfree(mdesc);
+	}
+
+	mchan->allocated = false;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+	dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
+}
+
+static struct dma_async_tx_descriptor *
+hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
+			dma_addr_t dma_src, size_t len, unsigned long flags)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_desc *mdesc = NULL;
+	struct hidma_dev *mdma = mchan->dmadev;
+	unsigned long irqflags;
+
+	dev_dbg(mdma->ddev.dev,
+		"memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
+		&dma_dest, &dma_src, len);
+
+	/* Get free descriptor */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (!list_empty(&mchan->free)) {
+		mdesc = list_first_entry(&mchan->free, struct hidma_desc,
+					node);
+		list_del(&mdesc->node);
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	if (!mdesc)
+		return NULL;
+
+	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+			dma_src, dma_dest, len, flags);
+
+	/* Place descriptor in prepared list */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_add_tail(&mdesc->node, &mchan->prepared);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return &mdesc->desc;
+}
+
+static int hidma_terminate_all(struct dma_chan *chan)
+{
+	struct hidma_dev *dmadev;
+	LIST_HEAD(head);
+	unsigned long irqflags;
+	LIST_HEAD(list);
+	struct hidma_desc *tmp, *mdesc = NULL;
+	int rc;
+	struct hidma_chan *mchan;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	dev_dbg(dmadev->ddev.dev, "terminateall: chan:0x%p\n", mchan);
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	/* give completed requests a chance to finish */
+	hidma_process_completed(dmadev);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_init(&mchan->active, &list);
+	list_splice_init(&mchan->prepared, &list);
+	list_splice_init(&mchan->completed, &list);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	/* this suspends the existing transfer */
+	rc = hidma_ll_pause(dmadev->lldev);
+	if (rc) {
+		dev_err(dmadev->ddev.dev, "channel did not pause\n");
+		goto out;
+	}
+
+	/* return all user requests */
+	list_for_each_entry_safe(mdesc, tmp, &list, node) {
+		struct dma_async_tx_descriptor	*txd = &mdesc->desc;
+		dma_async_tx_callback callback = mdesc->desc.callback;
+		void *param = mdesc->desc.callback_param;
+		enum dma_status status;
+
+		dma_descriptor_unmap(txd);
+
+		status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
+		/*
+		 * The API requires that no submissions are done from a
+		 * callback, so we don't need to drop the lock here
+		 */
+		if (callback && (status == DMA_COMPLETE))
+			callback(param);
+
+		dma_run_dependencies(txd);
+
+		/* move myself to free_list */
+		list_move(&mdesc->node, &mchan->free);
+	}
+
+	/* reinitialize the hardware */
+	rc = hidma_ll_setup(dmadev->lldev);
+
+out:
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return rc;
+}
+
+static int hidma_pause(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	dev_dbg(dmadev->ddev.dev, "pause: chan:0x%p\n", mchan);
+
+	if (!mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		if (hidma_ll_pause(dmadev->lldev))
+			dev_warn(dmadev->ddev.dev, "channel did not stop\n");
+		mchan->paused = true;
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return 0;
+}
+
+static int hidma_resume(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+	int rc = 0;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	dev_dbg(dmadev->ddev.dev, "resume: chan:0x%p\n", mchan);
+
+	if (mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		rc = hidma_ll_resume(dmadev->lldev);
+		if (!rc)
+			mchan->paused = false;
+		else
+			dev_err(dmadev->ddev.dev,
+				"failed to resume the channel\n");
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return rc;
+}
+
+static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
+{
+	struct hidma_lldev **lldev_ptr = arg;
+	irqreturn_t ret;
+	struct hidma_dev *dmadev = to_hidma_dev_from_lldev(lldev_ptr);
+
+	/*
+	 * All interrupts are request driven.
+	 * HW doesn't send an interrupt by itself.
+	 */
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	ret = hidma_ll_inthandler(chirq, *lldev_ptr);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return ret;
+}
+
+static int hidma_probe(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev;
+	int rc = 0;
+	struct resource *trca_resource;
+	struct resource *evca_resource;
+	int chirq;
+	int current_channel_index = atomic_read(&channel_ref_count);
+	void __iomem *evca;
+	void __iomem *trca;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
+	trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!trca_resource) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	trca = devm_ioremap_resource(&pdev->dev, trca_resource);
+	if (IS_ERR(trca)) {
+		rc = PTR_ERR(trca);
+		goto bailout;
+	}
+
+	evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	if (!evca_resource) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	evca = devm_ioremap_resource(&pdev->dev, evca_resource);
+	if (IS_ERR(evca)) {
+		rc = PTR_ERR(evca);
+		goto bailout;
+	}
+
+	/*
+	 * This driver only handles the channel IRQs.
+	 * Common IRQ is handled by the management driver.
+	 */
+	chirq = platform_get_irq(pdev, 0);
+	if (chirq < 0) {
+		rc = chirq;
+		goto bailout;
+	}
+
+	dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
+	if (!dmadev) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	spin_lock_init(&dmadev->lock);
+	dmadev->ddev.dev = &pdev->dev;
+	pm_runtime_get_sync(dmadev->ddev.dev);
+
+	dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
+	if (WARN_ON(!pdev->dev.dma_mask)) {
+		rc = -ENXIO;
+		goto dmafree;
+	}
+
+	dmadev->dev_evca = evca;
+	dmadev->evca_resource = evca_resource;
+	dmadev->dev_trca = trca;
+	dmadev->trca_resource = trca_resource;
+	dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
+	dmadev->ddev.device_alloc_chan_resources =
+		hidma_alloc_chan_resources;
+	dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
+	dmadev->ddev.device_tx_status = hidma_tx_status;
+	dmadev->ddev.device_issue_pending = hidma_issue_pending;
+	dmadev->ddev.device_pause = hidma_pause;
+	dmadev->ddev.device_resume = hidma_resume;
+	dmadev->ddev.device_terminate_all = hidma_terminate_all;
+	dmadev->ddev.copy_align = 8;
+
+	device_property_read_u32(&pdev->dev, "desc-count",
+				&dmadev->nr_descriptors);
+
+	if (!dmadev->nr_descriptors && nr_desc_prm)
+		dmadev->nr_descriptors = nr_desc_prm;
+
+	if (!dmadev->nr_descriptors) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	if (current_channel_index > MAX_HIDMA_CHANNELS) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
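+	/* -1 means unset; filled in from DT/ACPI or the command line below */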
+	dmadev->evridx = -1;
+	device_property_read_u32(&pdev->dev, "event-channel", &dmadev->evridx);
+
+	/* kernel command line override for the guest machine */
+	if (event_channel_idx[current_channel_index] != -1)
+		dmadev->evridx = event_channel_idx[current_channel_index];
+
+	if (dmadev->evridx == -1) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	/* Set DMA mask to 64 bits. */
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (rc) {
+		dev_warn(&pdev->dev, "unable to set coherent mask to 64 bits\n");
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (rc)
+			goto dmafree;
+	}
+
+	dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
+				dmadev->nr_descriptors, dmadev->dev_trca,
+				dmadev->dev_evca, dmadev->evridx);
+	if (!dmadev->lldev) {
+		rc = -EPROBE_DEFER;
+		goto dmafree;
+	}
+
+	rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
+			      "qcom-hidma", &dmadev->lldev);
+	if (rc)
+		goto uninit;
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	rc = hidma_chan_init(dmadev, 0);
+	if (rc)
+		goto uninit;
+
+	rc = dma_selftest_memcpy(&dmadev->ddev);
+	if (rc)
+		goto uninit;
+
+	rc = dma_async_device_register(&dmadev->ddev);
+	if (rc)
+		goto uninit;
+
+	hidma_debug_init(dmadev);
+	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
+	platform_set_drvdata(pdev, dmadev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	atomic_inc(&channel_ref_count);
+	return 0;
+
+uninit:
+	hidma_debug_uninit(dmadev);
+	hidma_ll_uninit(dmadev->lldev);
+dmafree:
+	if (dmadev)
+		hidma_free(dmadev);
+bailout:
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	return rc;
+}
+
+static int hidma_remove(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev = platform_get_drvdata(pdev);
+
+	dev_dbg(&pdev->dev, "removing\n");
+	pm_runtime_get_sync(dmadev->ddev.dev);
+
+	dma_async_device_unregister(&dmadev->ddev);
+	hidma_debug_uninit(dmadev);
+	hidma_ll_uninit(dmadev->lldev);
+	hidma_free(dmadev);
+
+	dev_info(&pdev->dev, "HI-DMA engine removed\n");
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_acpi_ids[] = {
+	{"QCOM8061"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_match[] = {
+	{ .compatible = "qcom,hidma-1.0", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_match);
+
+static struct platform_driver hidma_driver = {
+	.probe = hidma_probe,
+	.remove = hidma_remove,
+	.driver = {
+		.name = "hidma",
+		.of_match_table = hidma_match,
+		.acpi_match_table = ACPI_PTR(hidma_acpi_ids),
+	},
+};
+module_platform_driver(hidma_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
new file mode 100644
index 0000000..195d6b5
--- /dev/null
+++ b/drivers/dma/qcom/hidma.h
@@ -0,0 +1,157 @@
+/*
+ * Qualcomm Technologies HIDMA data structures
+ *
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef QCOM_HIDMA_H
+#define QCOM_HIDMA_H
+
+#include <linux/kfifo.h>
+#include <linux/interrupt.h>
+#include <linux/dmaengine.h>
+
+#define TRE_SIZE			32 /* each TRE is 32 bytes  */
+#define TRE_CFG_IDX			0
+#define TRE_LEN_IDX			1
+#define TRE_SRC_LOW_IDX		2
+#define TRE_SRC_HI_IDX			3
+#define TRE_DEST_LOW_IDX		4
+#define TRE_DEST_HI_IDX		5
+
+struct hidma_tx_status {
+	u8 err_info;			/* error record in this transfer    */
+	u8 err_code;			/* completion code		    */
+};
+
+struct hidma_tre {
+	atomic_t allocated;		/* if this channel is allocated	    */
+	bool queued;			/* flag whether this is pending     */
+	u16 status;			/* status			    */
+	u32 chidx;			/* index of the tre	    */
+	u32 dma_sig;			/* signature of the tre	    */
+	const char *dev_name;		/* name of the device		    */
+	void (*callback)(void *data);	/* requester callback		    */
+	void *data;			/* Data associated with this channel*/
+	struct hidma_lldev *lldev;	/* lldma device pointer		    */
+	u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
+	u32 tre_index;			/* the offset where this was written*/
+	u32 int_flags;			/* interrupt flags*/
+};
+
+struct hidma_lldev {
+	bool initialized;		/* initialized flag               */
+	u8 trch_state;			/* trch_state of the device	  */
+	u8 evch_state;			/* evch_state of the device	  */
+	u8 evridx;			/* event channel to notify	  */
+	u32 nr_tres;			/* max number of configs          */
+	spinlock_t lock;		/* reentrancy                     */
+	struct hidma_tre *trepool;	/* trepool of user configs */
+	struct device *dev;		/* device			  */
+	void __iomem *trca;		/* Transfer Channel address       */
+	void __iomem *evca;		/* Event Channel address          */
+	struct hidma_tre
+		**pending_tre_list;	/* Pointers to pending TREs	  */
+	struct hidma_tx_status
+		*tx_status_list;	/* Pointers to pending TREs status*/
+	s32 pending_tre_count;		/* Number of TREs pending	  */
+
+	void *tre_ring;		/* TRE ring			  */
+	dma_addr_t tre_ring_handle;	/* TRE ring to be shared with HW  */
+	u32 tre_ring_size;		/* Byte size of the ring	  */
+	u32 tre_processed_off;		/* last processed TRE		   */
+
+	void *evre_ring;		/* EVRE ring			   */
+	dma_addr_t evre_ring_handle;	/* EVRE ring to be shared with HW  */
+	u32 evre_ring_size;		/* Byte size of the ring	  */
+	u32 evre_processed_off;	/* last processed EVRE		   */
+
+	u32 tre_write_offset;           /* TRE write location              */
+	struct tasklet_struct task;	/* task delivering notifications   */
+	DECLARE_KFIFO_PTR(handoff_fifo,
+		struct hidma_tre *);    /* pending TREs FIFO              */
+};
+
+struct hidma_desc {
+	struct dma_async_tx_descriptor	desc;
+	/* link list node for this channel*/
+	struct list_head		node;
+	u32				tre_ch;
+};
+
+struct hidma_chan {
+	bool				paused;
+	bool				allocated;
+	char				dbg_name[16];
+	u32				dma_sig;
+
+	/*
+	 * active descriptor on this channel
+	 * It is used by the DMA complete notification to
+	 * locate the descriptor that initiated the transfer.
+	 */
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+	struct hidma_dev		*dmadev;
+
+	struct dma_chan			chan;
+	struct list_head		free;
+	struct list_head		prepared;
+	struct list_head		active;
+	struct list_head		completed;
+
+	/* Lock for this structure */
+	spinlock_t			lock;
+};
+
+struct hidma_dev {
+	int				evridx;
+	u32				nr_descriptors;
+
+	struct hidma_lldev		*lldev;
+	void				__iomem *dev_trca;
+	struct resource			*trca_resource;
+	void				__iomem *dev_evca;
+	struct resource			*evca_resource;
+
+	/* used to protect the pending channel list*/
+	spinlock_t			lock;
+	struct dma_device		ddev;
+
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+};
+
+int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
+			const char *dev_name,
+			void (*callback)(void *data), void *data, u32 *tre_ch);
+
+void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
+enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
+bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
+int hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
+int hidma_ll_start(struct hidma_lldev *llhndl);
+int hidma_ll_pause(struct hidma_lldev *llhndl);
+int hidma_ll_resume(struct hidma_lldev *llhndl);
+void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
+	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
+int hidma_ll_setup(struct hidma_lldev *lldev);
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
+			void __iomem *trca, void __iomem *evca,
+			u8 evridx);
+int hidma_ll_uninit(struct hidma_lldev *llhndl);
+irqreturn_t hidma_ll_inthandler(int irq, void *arg);
+void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
+				u8 err_code);
+int hidma_debug_init(struct hidma_dev *dmadev);
+void hidma_debug_uninit(struct hidma_dev *dmadev);
+#endif
diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
new file mode 100644
index 0000000..e0e6711
--- /dev/null
+++ b/drivers/dma/qcom/hidma_dbg.c
@@ -0,0 +1,225 @@
+/*
+ * Qualcomm Technologies HIDMA debug file
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma.h"
+
+void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
+{
+	struct hidma_lldev *lldev = llhndl;
+	struct hidma_tre *tre;
+	u32 length;
+	dma_addr_t src_start;
+	dma_addr_t dest_start;
+	u32 *tre_local;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev, "invalid TRE number in chstats:%d",
+			tre_ch);
+		return;
+	}
+	tre = &lldev->trepool[tre_ch];
+	seq_printf(s, "------Channel %d -----\n", tre_ch);
+	seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
+	seq_printf(s, "queued=0x%x\n", tre->queued);
+	seq_printf(s, "err_info=0x%x\n",
+		   lldev->tx_status_list[tre->chidx].err_info);
+	seq_printf(s, "err_code=0x%x\n",
+		   lldev->tx_status_list[tre->chidx].err_code);
+	seq_printf(s, "status=0x%x\n", tre->status);
+	seq_printf(s, "chidx=0x%x\n", tre->chidx);
+	seq_printf(s, "dma_sig=0x%x\n", tre->dma_sig);
+	seq_printf(s, "dev_name=%s\n", tre->dev_name);
+	seq_printf(s, "callback=%p\n", tre->callback);
+	seq_printf(s, "data=%p\n", tre->data);
+	seq_printf(s, "tre_index=0x%x\n", tre->tre_index);
+
+	tre_local = &tre->tre_local[0];
+	src_start = tre_local[TRE_SRC_LOW_IDX];
+	src_start = ((u64)(tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
+	dest_start = tre_local[TRE_DEST_LOW_IDX];
+	dest_start += ((u64)(tre_local[TRE_DEST_HI_IDX]) << 32);
+	length = tre_local[TRE_LEN_IDX];
+
+	seq_printf(s, "src=%pap\n", &src_start);
+	seq_printf(s, "dest=%pap\n", &dest_start);
+	seq_printf(s, "length=0x%x\n", length);
+}
+
+void hidma_ll_devstats(struct seq_file *s, void *llhndl)
+{
+	struct hidma_lldev *lldev = llhndl;
+
+	seq_puts(s, "------Device -----\n");
+	seq_printf(s, "lldev init=0x%x\n", lldev->initialized);
+	seq_printf(s, "trch_state=0x%x\n", lldev->trch_state);
+	seq_printf(s, "evch_state=0x%x\n", lldev->evch_state);
+	seq_printf(s, "evridx=0x%x\n", lldev->evridx);
+	seq_printf(s, "nr_tres=0x%x\n", lldev->nr_tres);
+	seq_printf(s, "trca=%p\n", lldev->trca);
+	seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
+	seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
+	seq_printf(s, "tre_ring_size=0x%x\n", lldev->tre_ring_size);
+	seq_printf(s, "tre_processed_off=0x%x\n", lldev->tre_processed_off);
+	seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
+	seq_printf(s, "evca=%p\n", lldev->evca);
+	seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
+	seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
+	seq_printf(s, "evre_ring_size=0x%x\n", lldev->evre_ring_size);
+	seq_printf(s, "evre_processed_off=0x%x\n", lldev->evre_processed_off);
+	seq_printf(s, "tre_write_offset=0x%x\n", lldev->tre_write_offset);
+}
+
+/**
+ * hidma_chan_stats: display HIDMA channel statistics
+ *
+ * Display the statistics for the current HIDMA virtual channel device.
+ */
+static int hidma_chan_stats(struct seq_file *s, void *unused)
+{
+	struct hidma_chan *mchan = s->private;
+	struct hidma_desc *mdesc;
+	struct hidma_dev *dmadev = mchan->dmadev;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	seq_printf(s, "paused=%u\n", mchan->paused);
+	seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
+	seq_puts(s, "prepared\n");
+	list_for_each_entry(mdesc, &mchan->prepared, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	seq_puts(s, "active\n");
+	list_for_each_entry(mdesc, &mchan->active, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	seq_puts(s, "completed\n");
+	list_for_each_entry(mdesc, &mchan->completed, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	hidma_ll_devstats(s, mchan->dmadev->lldev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return 0;
+}
+
+/**
+ * hidma_dma_info: display HIDMA device info
+ *
+ * Display the info for the current HIDMA device.
+ */
+static int hidma_dma_info(struct seq_file *s, void *unused)
+{
+	struct hidma_dev *dmadev = s->private;
+	resource_size_t sz;
+
+	seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
+	seq_printf(s, "dev_trca=%p\n", dmadev->dev_trca);
+	seq_printf(s, "dev_trca_phys=%pa\n",
+		&dmadev->trca_resource->start);
+	sz = resource_size(dmadev->trca_resource);
+	seq_printf(s, "dev_trca_size=%pa\n", &sz);
+	seq_printf(s, "dev_evca=%p\n", dmadev->dev_evca);
+	seq_printf(s, "dev_evca_phys=%pa\n",
+		&dmadev->evca_resource->start);
+	sz = resource_size(dmadev->evca_resource);
+	seq_printf(s, "dev_evca_size=%pa\n", &sz);
+	return 0;
+}
+
+static int hidma_chan_stats_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, hidma_chan_stats, inode->i_private);
+}
+
+static int hidma_dma_info_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, hidma_dma_info, inode->i_private);
+}
+
+static const struct file_operations hidma_chan_fops = {
+	.open = hidma_chan_stats_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static const struct file_operations hidma_dma_fops = {
+	.open = hidma_dma_info_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+void hidma_debug_uninit(struct hidma_dev *dmadev)
+{
+	debugfs_remove_recursive(dmadev->debugfs);
+	debugfs_remove_recursive(dmadev->stats);
+}
+
+int hidma_debug_init(struct hidma_dev *dmadev)
+{
+	int rc = 0;
+	int chidx = 0;
+	struct list_head *position = NULL;
+
+	dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev),
+						NULL);
+	if (!dmadev->debugfs) {
+		rc = -ENODEV;
+		return rc;
+	}
+
+	/* walk through the virtual channel list */
+	list_for_each(position, &dmadev->ddev.channels) {
+		struct hidma_chan *chan;
+
+		chan = list_entry(position, struct hidma_chan,
+				chan.device_node);
+		sprintf(chan->dbg_name, "chan%d", chidx);
+		chan->debugfs = debugfs_create_dir(chan->dbg_name,
+						dmadev->debugfs);
+		if (!chan->debugfs) {
+			rc = -ENOMEM;
+			goto cleanup;
+		}
+		chan->stats = debugfs_create_file("stats", S_IRUGO,
+				chan->debugfs, chan,
+				&hidma_chan_fops);
+		if (!chan->stats) {
+			rc = -ENOMEM;
+			goto cleanup;
+		}
+		chidx++;
+	}
+
+	dmadev->stats = debugfs_create_file("stats", S_IRUGO,
+			dmadev->debugfs, dmadev,
+			&hidma_dma_fops);
+	if (!dmadev->stats) {
+		rc = -ENOMEM;
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	hidma_debug_uninit(dmadev);
+	return rc;
+}
diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
new file mode 100644
index 0000000..f5c0b8b
--- /dev/null
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -0,0 +1,944 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine low level code
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/atomic.h>
+#include <linux/iopoll.h>
+#include <linux/kfifo.h>
+
+#include "hidma.h"
+
+#define EVRE_SIZE			16 /* each EVRE is 16 bytes */
+
+#define TRCA_CTRLSTS_OFFSET		0x0
+#define TRCA_RING_LOW_OFFSET		0x8
+#define TRCA_RING_HIGH_OFFSET		0xC
+#define TRCA_RING_LEN_OFFSET		0x10
+#define TRCA_READ_PTR_OFFSET		0x18
+#define TRCA_WRITE_PTR_OFFSET		0x20
+#define TRCA_DOORBELL_OFFSET		0x400
+
+#define EVCA_CTRLSTS_OFFSET		0x0
+#define EVCA_INTCTRL_OFFSET		0x4
+#define EVCA_RING_LOW_OFFSET		0x8
+#define EVCA_RING_HIGH_OFFSET		0xC
+#define EVCA_RING_LEN_OFFSET		0x10
+#define EVCA_READ_PTR_OFFSET		0x18
+#define EVCA_WRITE_PTR_OFFSET		0x20
+#define EVCA_DOORBELL_OFFSET		0x400
+
+#define EVCA_IRQ_STAT_OFFSET		0x100
+#define EVCA_IRQ_CLR_OFFSET		0x108
+#define EVCA_IRQ_EN_OFFSET		0x110
+
+#define EVRE_CFG_IDX			0
+#define EVRE_LEN_IDX			1
+#define EVRE_DEST_LOW_IDX		2
+#define EVRE_DEST_HI_IDX		3
+
+#define EVRE_ERRINFO_BIT_POS		24
+#define EVRE_CODE_BIT_POS		28
+
+#define EVRE_ERRINFO_MASK		0xF
+#define EVRE_CODE_MASK			0xF
+
+#define CH_CONTROL_MASK		0xFF
+#define CH_STATE_MASK			0xFF
+#define CH_STATE_BIT_POS		0x8
+
+#define MAKE64(high, low) (((u64)(high) << 32) | (low))
+
+#define IRQ_EV_CH_EOB_IRQ_BIT_POS	0
+#define IRQ_EV_CH_WR_RESP_BIT_POS	1
+#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
+#define IRQ_TR_CH_DATA_RD_ER_BIT_POS	10
+#define IRQ_TR_CH_DATA_WR_ER_BIT_POS	11
+#define IRQ_TR_CH_INVALID_TRE_BIT_POS	14
+
+#define	ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
+		BIT(IRQ_EV_CH_WR_RESP_BIT_POS) | \
+		BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) |	 \
+		BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS) |		 \
+		BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS) |		 \
+		BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
+
+enum ch_command {
+	CH_DISABLE = 0,
+	CH_ENABLE = 1,
+	CH_SUSPEND = 2,
+	CH_RESET = 9,
+};
+
+enum ch_state {
+	CH_DISABLED = 0,
+	CH_ENABLED = 1,
+	CH_RUNNING = 2,
+	CH_SUSPENDED = 3,
+	CH_STOPPED = 4,
+	CH_ERROR = 5,
+	CH_IN_RESET = 9,
+};
+
+enum tre_type {
+	TRE_MEMCPY = 3,
+	TRE_MEMSET = 4,
+};
+
+enum evre_type {
+	EVRE_DMA_COMPLETE = 0x23,
+	EVRE_IMM_DATA = 0x24,
+};
+
+enum err_code {
+	EVRE_STATUS_COMPLETE = 1,
+	EVRE_STATUS_ERROR = 4,
+};
+
+void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	struct hidma_tre *tre;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
+		return;
+	}
+
+	tre = &lldev->trepool[tre_ch];
+	if (!atomic_read(&tre->allocated)) {
+		dev_err(lldev->dev, "trying to free an unused TRE:%d",
+			tre_ch);
+		return;
+	}
+
+	atomic_set(&tre->allocated, 0);
+	dev_dbg(lldev->dev, "free_dma: allocated:%d tre_ch:%d\n",
+		atomic_read(&tre->allocated), tre_ch);
+}
+
+int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
+			const char *dev_name,
+			void (*callback)(void *data), void *data, u32 *tre_ch)
+{
+	u32 i;
+	struct hidma_tre *tre = NULL;
+	u32 *tre_local;
+
+	if (!tre_ch || !lldev)
+		return -EINVAL;
+
+	/* need to have at least one empty spot in the queue */
+	for (i = 0; i < lldev->nr_tres - 1; i++) {
+		if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
+			break;
+	}
+
+	if (i == (lldev->nr_tres - 1))
+		return -ENOMEM;
+
+	tre = &lldev->trepool[i];
+	tre->dma_sig = dma_sig;
+	tre->dev_name = dev_name;
+	tre->callback = callback;
+	tre->data = data;
+	tre->chidx = i;
+	tre->status = 0;
+	tre->queued = 0;
+	lldev->tx_status_list[i].err_code = 0;
+	tre->lldev = lldev;
+	tre_local = &tre->tre_local[0];
+	tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
+	tre_local[TRE_CFG_IDX] |= ((lldev->evridx & 0xFF) << 8);
+	tre_local[TRE_CFG_IDX] |= BIT(16);	/* set IEOB */
+	*tre_ch = i;
+	if (callback)
+		callback(data);
+	return 0;
+}
+
+/*
+ * Multiple TREs may be queued and waiting in the
+ * pending queue.
+ */
+static void hidma_ll_tre_complete(unsigned long arg)
+{
+	struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
+	struct hidma_tre *tre;
+
+	while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
+		/* call the user if it has been read by the hardware*/
+		if (tre->callback)
+			tre->callback(tre->data);
+	}
+}
+
+/*
+ * Called to handle the interrupt for the channel.
+ * Return the number of EVREs consumed on this run, or 0 if there was
+ * nothing to consume.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
+{
+	struct hidma_tre *tre;
+	u32 evre_write_off;
+	u32 evre_ring_size = lldev->evre_ring_size;
+	u32 tre_ring_size = lldev->tre_ring_size;
+	u32 num_completed = 0, tre_iterator, evre_iterator;
+	unsigned long flags;
+
+	evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+	tre_iterator = lldev->tre_processed_off;
+	evre_iterator = lldev->evre_processed_off;
+
+	if ((evre_write_off > evre_ring_size) ||
+		((evre_write_off % EVRE_SIZE) != 0)) {
+		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+		return 0;
+	}
+
+	/*
+	 * By the time control reaches here the number of EVREs and TREs
+	 * may not match. Only consume the ones that hardware told us.
+	 */
+	while ((evre_iterator != evre_write_off)) {
+		u32 *current_evre = lldev->evre_ring + evre_iterator;
+		u32 cfg;
+		u8 err_info;
+
+		spin_lock_irqsave(&lldev->lock, flags);
+		tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
+		if (!tre) {
+			spin_unlock_irqrestore(&lldev->lock, flags);
+			dev_warn(lldev->dev,
+				"tre_index [%d] and tre out of sync\n",
+				tre_iterator / TRE_SIZE);
+			tre_iterator += TRE_SIZE;
+			if (tre_iterator >= tre_ring_size)
+				tre_iterator -= tre_ring_size;
+			evre_iterator += EVRE_SIZE;
+			if (evre_iterator >= evre_ring_size)
+				evre_iterator -= evre_ring_size;
+
+			continue;
+		}
+		lldev->pending_tre_list[tre->tre_index] = NULL;
+
+		/*
+		 * Keep track of pending TREs that SW is expecting to receive
+		 * from HW. We got one now. Decrement our counter.
+		 */
+		lldev->pending_tre_count--;
+		if (lldev->pending_tre_count < 0) {
+			dev_warn(lldev->dev,
+				"tre count mismatch on completion");
+			lldev->pending_tre_count = 0;
+		}
+
+		spin_unlock_irqrestore(&lldev->lock, flags);
+
+		cfg = current_evre[EVRE_CFG_IDX];
+		err_info = (cfg >> EVRE_ERRINFO_BIT_POS);
+		err_info = err_info & EVRE_ERRINFO_MASK;
+		lldev->tx_status_list[tre->chidx].err_info = err_info;
+		lldev->tx_status_list[tre->chidx].err_code =
+			(cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
+		tre->queued = 0;
+
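+		/* hand off to the tasklet, which runs the user callback */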
+		kfifo_put(&lldev->handoff_fifo, tre);
+		tasklet_schedule(&lldev->task);
+
+		tre_iterator += TRE_SIZE;
+		if (tre_iterator >= tre_ring_size)
+			tre_iterator -= tre_ring_size;
+		evre_iterator += EVRE_SIZE;
+		if (evre_iterator >= evre_ring_size)
+			evre_iterator -= evre_ring_size;
+
+		/*
+		 * Read the new event descriptor written by the HW.
+		 * As we are processing the delivered events, other events
+		 * get queued to the SW for processing.
+		 */
+		evre_write_off =
+			readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+		num_completed++;
+	}
+
+	if (num_completed) {
+		u32 evre_read_off = (lldev->evre_processed_off +
+				EVRE_SIZE * num_completed);
+		u32 tre_read_off = (lldev->tre_processed_off +
+				TRE_SIZE * num_completed);
+
+		evre_read_off = evre_read_off % evre_ring_size;
+		tre_read_off = tre_read_off % tre_ring_size;
+
+		writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
+
+		/* record the last processed tre offset */
+		lldev->tre_processed_off = tre_read_off;
+		lldev->evre_processed_off = evre_read_off;
+	}
+
+	return num_completed;
+}
+
+void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
+				u8 err_code)
+{
+	u32 tre_iterator;
+	struct hidma_tre *tre;
+	u32 tre_ring_size = lldev->tre_ring_size;
+	int num_completed = 0;
+	u32 tre_read_off;
+	unsigned long flags;
+
+	tre_iterator = lldev->tre_processed_off;
+	while (lldev->pending_tre_count) {
+		int tre_index = tre_iterator / TRE_SIZE;
+
+		spin_lock_irqsave(&lldev->lock, flags);
+		tre = lldev->pending_tre_list[tre_index];
+		if (!tre) {
+			spin_unlock_irqrestore(&lldev->lock, flags);
+			tre_iterator += TRE_SIZE;
+			if (tre_iterator >= tre_ring_size)
+				tre_iterator -= tre_ring_size;
+			continue;
+		}
+		lldev->pending_tre_list[tre_index] = NULL;
+		lldev->pending_tre_count--;
+		if (lldev->pending_tre_count < 0) {
+			dev_warn(lldev->dev,
+				"tre count mismatch on completion");
+			lldev->pending_tre_count = 0;
+		}
+		spin_unlock_irqrestore(&lldev->lock, flags);
+
+		lldev->tx_status_list[tre->chidx].err_info = err_info;
+		lldev->tx_status_list[tre->chidx].err_code = err_code;
+		tre->queued = 0;
+
+		kfifo_put(&lldev->handoff_fifo, tre);
+		tasklet_schedule(&lldev->task);
+
+		tre_iterator += TRE_SIZE;
+		if (tre_iterator >= tre_ring_size)
+			tre_iterator -= tre_ring_size;
+
+		num_completed++;
+	}
+	tre_read_off = (lldev->tre_processed_off +
+			TRE_SIZE * num_completed);
+
+	tre_read_off = tre_read_off % tre_ring_size;
+
+	/* record the last processed tre offset */
+	lldev->tre_processed_off = tre_read_off;
+}
+
+static int hidma_ll_reset(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_RESET << 16);
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Allow the DMA logic to quiesce after reset:
+	 * poll the channel state every 1ms, timing out after 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
+		1000, 10000);
+	if (ret) {
+		dev_err(lldev->dev,
+			"transfer channel did not reset\n");
+		return ret;
+	}
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_RESET << 16);
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Allow the DMA logic to quiesce after reset:
+	 * poll the channel state every 1ms, timing out after 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
+		1000, 10000);
+	if (ret)
+		return ret;
+
+	lldev->trch_state = CH_DISABLED;
+	lldev->evch_state = CH_DISABLED;
+	return 0;
+}
+
+static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
+{
+	writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
+	dev_dbg(lldev->dev, "enableirq\n");
+}
+
+/*
+ * The interrupt handler for HIDMA will try to consume as many pending
+ * EVRE from the event queue as possible. Each EVRE has an associated
+ * TRE that holds the user interface parameters. EVRE reports the
+ * result of the transaction. Hardware guarantees ordering between EVREs
+ * and TREs. We use last processed offset to figure out which TRE is
+ * associated with which EVRE. If two TREs are consumed by HW, the EVREs
+ * are in order in the event ring.
+ *
+ * This handler will do a one pass for consuming EVREs. Other EVREs may
+ * be delivered while we are working. It will try to consume incoming
+ * EVREs one more time and return.
+ *
+ * For unprocessed EVREs, hardware will trigger another interrupt until
+ * all the interrupt bits are cleared.
+ *
+ * Hardware guarantees that by the time interrupt is observed, all data
+ * transactions in flight are delivered to their respective places and
+ * are visible to the CPU.
+ *
+ * On demand paging for IOMMU is only supported for PCIe via PRI
+ * (Page Request Interface) not for HIDMA. All other hardware instances
+ * including HIDMA work on pinned DMA addresses.
+ *
+ * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
+ * IOMMU latency will be built into the data movement time. By the time
+ * interrupt happens, IOMMU lookups + data movement has already taken place.
+ *
+ * In a typical PCI endpoint ISR, the first register read traditionally
+ * flushes all outstanding requests to their destinations; that concept
+ * does not apply to this hardware.
+ */
+static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
+{
+	u32 status;
+	u32 enable;
+	u32 cause;
+	int repeat = 2;
+	unsigned long timeout;
+
+	/*
+	 * Fine tuned for this HW...
+	 *
+	 * This ISR has been designed for this particular hardware. Relaxed read
+	 * and write accessors are used for performance reasons due to interrupt
+	 * delivery guarantees. Do not copy this code blindly and expect
+	 * that to work.
+	 */
+	status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+	cause = status & enable;
+
+	if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
+			(cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
+			(cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
+			(cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
+			(cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
+		u8 err_code = EVRE_STATUS_ERROR;
+		u8 err_info = 0xFF;
+
+		/* Clear out pending interrupts */
+		writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+		dev_err(lldev->dev,
+			"error 0x%x, resetting...\n", cause);
+
+		hidma_cleanup_pending_tre(lldev, err_info, err_code);
+
+		/* reset the channel for recovery */
+		if (hidma_ll_setup(lldev)) {
+			dev_err(lldev->dev,
+				"channel reinitialize failed after error\n");
+			return;
+		}
+		hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+		return;
+	}
+
+	/*
+	 * Try to consume as many EVREs as possible.
+	 * skip this loop if the interrupt is spurious.
+	 */
+	while (cause && repeat) {
+		unsigned long start = jiffies;
+
+		/* This timeout should be sufficient for the core to finish */
+		timeout = start + msecs_to_jiffies(500);
+
+		while (lldev->pending_tre_count) {
+			hidma_handle_tre_completion(lldev);
+			if (time_is_before_jiffies(timeout)) {
+				dev_warn(lldev->dev,
+					"ISR timeout %lx-%lx from %lx [%d]\n",
+					jiffies, timeout, start,
+					lldev->pending_tre_count);
+				break;
+			}
+		}
+
+		/* We consumed TREs or there are pending TREs or EVREs. */
+		writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+		/*
+		 * Another interrupt might have arrived while we are
+		 * processing this one. Read the new cause.
+		 */
+		status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+		enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+		cause = status & enable;
+
+		repeat--;
+	}
+}
+
+
+static int hidma_ll_enable(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= (CH_ENABLE << 16);
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+		((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
+		1000, 10000);
+	if (ret) {
+		dev_err(lldev->dev,
+			"event channel did not get enabled\n");
+		return ret;
+	}
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_ENABLE << 16);
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+		((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
+		1000, 10000);
+	if (ret) {
+		dev_err(lldev->dev,
+			"transfer channel did not get enabled\n");
+		return ret;
+	}
+
+	lldev->trch_state = CH_ENABLED;
+	lldev->evch_state = CH_ENABLED;
+
+	return 0;
+}
+
+int hidma_ll_resume(struct hidma_lldev *lldev)
+{
+	return hidma_ll_enable(lldev);
+}
+
+static int hidma_ll_hw_start(struct hidma_lldev *lldev)
+{
+	int rc = 0;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&lldev->lock, irqflags);
+	writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
+	spin_unlock_irqrestore(&lldev->lock, irqflags);
+
+	return rc;
+}
+
+bool hidma_ll_isenabled(struct hidma_lldev *lldev)
+{
+	u32 val;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+
+	/* both channels have to be enabled before calling this function*/
+	if (((lldev->trch_state == CH_ENABLED) ||
+		(lldev->trch_state == CH_RUNNING)) &&
+		((lldev->evch_state == CH_ENABLED) ||
+			(lldev->evch_state == CH_RUNNING)))
+		return true;
+
+	dev_dbg(lldev->dev, "channels are not enabled or are in error state");
+	return false;
+}
+
+int hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	struct hidma_tre *tre;
+	int rc = 0;
+	unsigned long flags;
+
+	tre = &lldev->trepool[tre_ch];
+
+	/* copy the TRE into its location in the TRE ring */
+	spin_lock_irqsave(&lldev->lock, flags);
+	tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
+	lldev->pending_tre_list[tre->tre_index] = tre;
+	memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
+		TRE_SIZE);
+	lldev->tx_status_list[tre->chidx].err_code = 0;
+	lldev->tx_status_list[tre->chidx].err_info = 0;
+	tre->queued = 1;
+	lldev->pending_tre_count++;
+	lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
+				% lldev->tre_ring_size;
+	spin_unlock_irqrestore(&lldev->lock, flags);
+	return rc;
+}
+
+int hidma_ll_start(struct hidma_lldev *lldev)
+{
+	return hidma_ll_hw_start(lldev);
+}
+
+/*
+ * Note that even though we stop this channel, a transaction already
+ * in flight will still complete and invoke its callback.
+ * Pausing only prevents further requests from being issued.
+ */
+int hidma_ll_pause(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
+
+	/* already suspended by this OS */
+	if ((lldev->trch_state == CH_SUSPENDED) ||
+		(lldev->evch_state == CH_SUSPENDED))
+		return 0;
+
+	/* already stopped by the manager */
+	if ((lldev->trch_state == CH_STOPPED) ||
+		(lldev->evch_state == CH_STOPPED))
+		return 0;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_SUSPEND << 16);
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Wait for the transfer channel to confirm the suspend:
+	 * poll every 1ms, timing out after 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
+		1000, 10000);
+	if (ret)
+		return ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val = val & ~(CH_CONTROL_MASK << 16);
+	val = val | (CH_SUSPEND << 16);
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Wait for the event channel to confirm the suspend:
+	 * poll every 1ms, timing out after 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+		(((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
+		1000, 10000);
+	if (ret)
+		return ret;
+
+	lldev->trch_state = CH_SUSPENDED;
+	lldev->evch_state = CH_SUSPENDED;
+	dev_dbg(lldev->dev, "stop\n");
+
+	return 0;
+}
+
+void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
+	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags)
+{
+	struct hidma_tre *tre;
+	u32 *tre_local;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev,
+			"invalid TRE number in transfer params:%d", tre_ch);
+		return;
+	}
+
+	tre = &lldev->trepool[tre_ch];
+	if (!atomic_read(&tre->allocated)) {
+		dev_err(lldev->dev,
+			"trying to set params on an unused TRE:%d", tre_ch);
+		return;
+	}
+
+	tre_local = &tre->tre_local[0];
+	tre_local[TRE_LEN_IDX] = len;
+	tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
+	tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
+	tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
+	tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
+	tre->int_flags = flags;
+
+	dev_dbg(lldev->dev, "transferparams: tre_ch:%d %pap->%pap len:%u\n",
+		tre_ch, &src, &dest, len);
+}
+
+/*
+ * Called during initialization and after an error condition
+ * to restore hardware state.
+ */
+int hidma_ll_setup(struct hidma_lldev *lldev)
+{
+	int rc;
+	u64 addr;
+	u32 val;
+	u32 nr_tres = lldev->nr_tres;
+
+	lldev->pending_tre_count = 0;
+	lldev->tre_processed_off = 0;
+	lldev->evre_processed_off = 0;
+	lldev->tre_write_offset = 0;
+
+	/* disable interrupts */
+	hidma_ll_enable_irq(lldev, 0);
+
+	/* clear all pending interrupts */
+	val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+	rc = hidma_ll_reset(lldev);
+	if (rc)
+		return rc;
+
+	/*
+	 * Clear all pending interrupts again.
+	 * Otherwise, we observe reset complete interrupts.
+	 */
+	val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+	/* disable interrupts again after reset */
+	hidma_ll_enable_irq(lldev, 0);
+
+	addr = lldev->tre_ring_handle;
+	writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
+	writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
+	writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
+
+	addr = lldev->evre_ring_handle;
+	writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
+	writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
+	writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
+
+	/* support IRQ only for now */
+	val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
+	val = val & ~(0xF);
+	val = val | 0x1;
+	writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
+
+	/* clear all pending interrupts and enable them*/
+	writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+	hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+
+	rc = hidma_ll_enable(lldev);
+	if (rc)
+		return rc;
+
+	return rc;
+}
+
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
+			void __iomem *trca, void __iomem *evca,
+			u8 evridx)
+{
+	u32 required_bytes;
+	struct hidma_lldev *lldev;
+	int rc;
+
+	if (!trca || !evca || !dev || !nr_tres)
+		return NULL;
+
+	/* need at least four TREs */
+	if (nr_tres < 4)
+		return NULL;
+
+	/* reserve one extra TRE so the ring never becomes completely full */
+	nr_tres += 1;
+
+	lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
+	if (!lldev)
+		return NULL;
+
+	lldev->evca = evca;
+	lldev->trca = trca;
+	lldev->dev = dev;
+	required_bytes = sizeof(struct hidma_tre) * nr_tres;
+	lldev->trepool = devm_kzalloc(lldev->dev, required_bytes, GFP_KERNEL);
+	if (!lldev->trepool)
+		return NULL;
+
+	required_bytes = sizeof(lldev->pending_tre_list[0]) * nr_tres;
+	lldev->pending_tre_list = devm_kzalloc(dev, required_bytes,
+					GFP_KERNEL);
+	if (!lldev->pending_tre_list)
+		return NULL;
+
+	required_bytes = sizeof(lldev->tx_status_list[0]) * nr_tres;
+	lldev->tx_status_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
+	if (!lldev->tx_status_list)
+		return NULL;
+
+	lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
+					&lldev->tre_ring_handle, GFP_KERNEL);
+	if (!lldev->tre_ring)
+		return NULL;
+
+	memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
+	lldev->tre_ring_size = TRE_SIZE * nr_tres;
+	lldev->nr_tres = nr_tres;
+
+	/* the TRE ring has to be TRE_SIZE aligned */
+	if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
+		u8  tre_ring_shift;
+
+		tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
+		tre_ring_shift = TRE_SIZE - tre_ring_shift;
+		lldev->tre_ring_handle += tre_ring_shift;
+		lldev->tre_ring += tre_ring_shift;
+	}
+
+	lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
+					&lldev->evre_ring_handle, GFP_KERNEL);
+	if (!lldev->evre_ring)
+		return NULL;
+
+	memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
+	lldev->evre_ring_size = EVRE_SIZE * nr_tres;
+
+	/* the EVRE ring has to be EVRE_SIZE aligned */
+	if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
+		u8  evre_ring_shift;
+
+		evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
+		evre_ring_shift = EVRE_SIZE - evre_ring_shift;
+		lldev->evre_ring_handle += evre_ring_shift;
+		lldev->evre_ring += evre_ring_shift;
+	}
+	lldev->nr_tres = nr_tres;
+	lldev->evridx = evridx;
+
+	rc = kfifo_alloc(&lldev->handoff_fifo,
+		nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
+	if (rc)
+		return NULL;
+
+	rc = hidma_ll_setup(lldev);
+	if (rc)
+		return NULL;
+
+	spin_lock_init(&lldev->lock);
+	tasklet_init(&lldev->task, hidma_ll_tre_complete,
+			(unsigned long)lldev);
+	lldev->initialized = true;
+	hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+	return lldev;
+}
+
+int hidma_ll_uninit(struct hidma_lldev *lldev)
+{
+	int rc = 0;
+	u32 val;
+
+	if (!lldev)
+		return -ENODEV;
+
+	if (lldev->initialized) {
+		u32 required_bytes;
+
+		lldev->initialized = false;
+
+		required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
+		tasklet_kill(&lldev->task);
+		memset(lldev->trepool, 0, required_bytes);
+		lldev->trepool = NULL;
+		lldev->pending_tre_count = 0;
+		lldev->tre_write_offset = 0;
+
+		rc = hidma_ll_reset(lldev);
+
+		/*
+		 * Clear all pending interrupts again.
+		 * Otherwise, we observe reset complete interrupts.
+		 */
+		val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+		writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+		hidma_ll_enable_irq(lldev, 0);
+	}
+	return rc;
+}
+
+irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
+{
+	struct hidma_lldev *lldev = arg;
+
+	hidma_ll_int_handler_internal(lldev);
+	return IRQ_HANDLED;
+}
+
+enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	enum dma_status ret = DMA_ERROR;
+	unsigned long flags;
+	u8 err_code;
+
+	spin_lock_irqsave(&lldev->lock, flags);
+	err_code = lldev->tx_status_list[tre_ch].err_code;
+
+	if (err_code & EVRE_STATUS_COMPLETE)
+		ret = DMA_COMPLETE;
+	else if (err_code & EVRE_STATUS_ERROR)
+		ret = DMA_ERROR;
+	else
+		ret = DMA_IN_PROGRESS;
+	spin_unlock_irqrestore(&lldev->lock, flags);
+
+	return ret;
+}
-- 
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 1/4] dma: qcom_bam_dma: move to qcom directory
  2015-11-08  4:52   ` Sinan Kaya
@ 2015-11-08  5:02     ` Timur Tabi
  -1 siblings, 0 replies; 71+ messages in thread
From: Timur Tabi @ 2015-11-08  5:02 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Vinod Koul,
	Dan Williams, Archit Taneja, Stanimir Varbanov, Kumar Gala,
	Pramod Gurav, Maxime Ripard, linux-kernel

Sinan Kaya wrote:
> Creating a QCOM directory for all QCOM DMA
> source files.
>
> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>   drivers/dma/Kconfig        |   13 +-
>   drivers/dma/Makefile       |    2 +-
>   drivers/dma/qcom/Kconfig   |    9 +
>   drivers/dma/qcom/Makefile  |    1 +
>   drivers/dma/qcom/bam_dma.c | 1259 ++++++++++++++++++++++++++++++++++++++++++++
>   drivers/dma/qcom_bam_dma.c | 1259 --------------------------------------------
>   6 files changed, 1273 insertions(+), 1270 deletions(-)
>   create mode 100644 drivers/dma/qcom/Kconfig
>   create mode 100644 drivers/dma/qcom/Makefile
>   create mode 100644 drivers/dma/qcom/bam_dma.c
>   delete mode 100644 drivers/dma/qcom_bam_dma.c
>
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index b458475..d17d9ec 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -320,7 +320,7 @@ config MOXART_DMA
>   	select DMA_VIRTUAL_CHANNELS
>   	help
>   	  Enable support for the MOXA ART SoC DMA controller.
> -
> +

Unrelated whitespace change.

> diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
> new file mode 100644
> index 0000000..f612ae3
> --- /dev/null
> +++ b/drivers/dma/qcom/Makefile
> @@ -0,0 +1 @@
> +obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
> diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
> new file mode 100644
> index 0000000..5359234

Please use "git format-patch -M" when creating patches for files that 
have moved.
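
For example (the revision range here is only an illustration; use
whatever your series is based on):

	git format-patch -M -o outgoing/ v4.3..HEAD

With -M, git detects the move, so bam_dma.c shows up as a rename
instead of a ~1259-line delete plus a ~1259-line create.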



-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
  2015-11-08  4:52     ` Sinan Kaya
@ 2015-11-08  5:08       ` Timur Tabi
  -1 siblings, 0 replies; 71+ messages in thread
From: Timur Tabi @ 2015-11-08  5:08 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams,
	devicetree, linux-kernel

Sinan Kaya wrote:
> +	val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
> +	val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
> +	val = val & ~(MAX_BUS_REQ_LEN_MASK);
> +	val = val | (mgmtdev->max_read_request);

val &= ~MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS;
val |= mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS;
val &= ~MAX_BUS_REQ_LEN_MASK;
val |= mgmtdev->max_read_request;

> +static const struct of_device_id hidma_mgmt_match[] = {
> +	{ .compatible = "qcom,hidma-mgmt", },
> +	{ .compatible = "qcom,hidma-mgmt-1.0", },
> +	{ .compatible = "qcom,hidma-mgmt-1.1", },
> +	{},
> +};

I thought Rob said that he did NOT want to use version numbers in 
compatible strings.  And what's the difference between these three 
versions anyway?
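
If the three versions really do differ, one common pattern (a sketch
only; struct hidma_mgmt_caps and its fields are hypothetical, not taken
from the patch) is to hang per-version data off the match table and let
probe pick it up via of_match_device():

	/* hypothetical capability data, purely for illustration */
	struct hidma_mgmt_caps {
		bool has_wide_requests;
	};

	static const struct hidma_mgmt_caps caps_v10 = { .has_wide_requests = false };
	static const struct hidma_mgmt_caps caps_v11 = { .has_wide_requests = true };

	static const struct of_device_id hidma_mgmt_match[] = {
		{ .compatible = "qcom,hidma-mgmt-1.1", .data = &caps_v11 },
		{ .compatible = "qcom,hidma-mgmt-1.0", .data = &caps_v10 },
		{ .compatible = "qcom,hidma-mgmt",     .data = &caps_v10 },
		{},
	};

That at least documents what the extra entries buy; if they buy
nothing, a single "qcom,hidma-mgmt" entry would do.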

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-08  4:52   ` Sinan Kaya
@ 2015-11-08  5:13     ` Timur Tabi
  -1 siblings, 0 replies; 71+ messages in thread
From: Timur Tabi @ 2015-11-08  5:13 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Vinod Koul,
	Dan Williams, linux-kernel

Sinan Kaya wrote:

> +static int dma_selftest_sg(struct dma_device *dmadev,
> +			struct dma_chan *dma_chanptr, u64 size,
> +			unsigned long flags)
> +{
> +	dma_addr_t src_dma, dest_dma, dest_dma_it;
> +	u8 *dest_buf;
> +	u32 i, j = 0;
> +	dma_cookie_t cookie;
> +	struct dma_async_tx_descriptor *tx;
> +	int err = 0;
> +	int ret;
> +	struct sg_table sg_table;
> +	struct scatterlist	*sg;
> +	int nents = 10, count;
> +	bool free_channel = 1;

Booleans are either 'true' or 'false'.
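
i.e. something like:

	bool free_channel = true;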

> +static int dma_selftest_mapsngle(struct device *dev)
> +{
> +	u32 buf_size = 256;
> +	char *src;
> +	int ret = -ENOMEM;
> +	dma_addr_t dma_src;
> +
> +	src = kmalloc(buf_size, GFP_KERNEL);
> +	if (!src)
> +		return -ENOMEM;
> +
> +	strcpy(src, "hello world");

kstrdup()?

And why kmalloc anyway?  Why not leave it on the stack?

	char src[] = "hello world";

?
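
A minimal sketch of the kstrdup() variant (illustrative only; the
mapped length is trimmed to what was actually allocated):

	src = kstrdup("hello world", GFP_KERNEL);
	if (!src)
		return -ENOMEM;
	buf_size = strlen(src) + 1;

	dma_src = dma_map_single(dev, src, buf_size, DMA_TO_DEVICE);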


-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
  2015-11-08  4:52     ` Sinan Kaya
  (?)
@ 2015-11-08  9:32       ` kbuild test robot
  -1 siblings, 0 replies; 71+ messages in thread
From: kbuild test robot @ 2015-11-08  9:32 UTC (permalink / raw)
  Cc: kbuild-all, dmaengine, timur, cov, jcm, agross, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams,
	devicetree, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 718 bytes --]

Hi Sinan,

[auto build test ERROR on: robh/for-next]
[also build test ERROR on: v4.3 next-20151106]

url:    https://github.com/0day-ci/linux/commits/Sinan-Kaya/ma-add-Qualcomm-Technologies-HIDMA-driver/20151108-125824
base:   https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux for-next
config: i386-allmodconfig (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

>> ERROR: "hidma_mgmt_setup" undefined!
>> ERROR: "hidma_mgmt_init_sys" undefined!
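
These look like modpost errors: the built module references
hidma_mgmt_setup() and hidma_mgmt_init_sys() but never gets a
definition linked in or exported to it. Two usual remedies, depending
on how the objects are meant to be organized (sketches only, not
checked against the posted Makefile): link hidma_mgmt.o and
hidma_mgmt_sys.o into a single module, or export the symbols where
they are defined, e.g.

	/* in the file that defines them (assumption) */
	EXPORT_SYMBOL_GPL(hidma_mgmt_setup);
	EXPORT_SYMBOL_GPL(hidma_mgmt_init_sys);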

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 51612 bytes --]

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08  4:53   ` Sinan Kaya
  (?)
@ 2015-11-08 19:13       ` kbuild test robot
  -1 siblings, 0 replies; 71+ messages in thread
From: kbuild test robot @ 2015-11-08 19:13 UTC (permalink / raw)
  Cc: kbuild-all-JC7UmRfGjtg, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	timur-sgV2jX0FEOL9JmXXK+q4OQ, cov-sgV2jX0FEOL9JmXXK+q4OQ,
	jcm-H+wXaHxf7aLQT0dZR+AlfA, agross-sgV2jX0FEOL9JmXXK+q4OQ,
	linux-arm-msm-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Sinan Kaya,
	Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell, Kumar Gala,
	Vinod Koul, Dan Williams, devicetree-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA

[-- Attachment #1: Type: text/plain, Size: 2836 bytes --]

Hi Sinan,

[auto build test WARNING on: robh/for-next]
[also build test WARNING on: v4.3 next-20151106]

url:    https://github.com/0day-ci/linux/commits/Sinan-Kaya/ma-add-Qualcomm-Technologies-HIDMA-driver/20151108-125824
base:   https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux for-next
config: mn10300-allyesconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=mn10300 

All warnings (new ones prefixed by >>):

   In file included from include/linux/printk.h:277:0,
                    from include/linux/kernel.h:13,
                    from include/linux/list.h:8,
                    from include/linux/kobject.h:20,
                    from include/linux/device.h:17,
                    from include/linux/dmaengine.h:20,
                    from drivers/dma/qcom/hidma.c:45:
   drivers/dma/qcom/hidma.c: In function 'hidma_prep_dma_memcpy':
   include/linux/dynamic_debug.h:64:16: warning: format '%zu' expects argument of type 'size_t', but argument 7 has type 'unsigned int' [-Wformat=]
     static struct _ddebug  __aligned(8)   \
                   ^
   include/linux/dynamic_debug.h:84:2: note: in expansion of macro 'DEFINE_DYNAMIC_DEBUG_METADATA'
     DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);  \
     ^
   include/linux/device.h:1171:2: note: in expansion of macro 'dynamic_dev_dbg'
     dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
     ^
>> drivers/dma/qcom/hidma.c:391:2: note: in expansion of macro 'dev_dbg'
     dev_dbg(mdma->ddev.dev,
     ^

vim +/dev_dbg +391 drivers/dma/qcom/hidma.c

   375	
   376		mchan->allocated = 0;
   377		spin_unlock_irqrestore(&mchan->lock, irqflags);
   378		dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
   379	}
   380	
   381	
   382	static struct dma_async_tx_descriptor *
   383	hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
   384				dma_addr_t dma_src, size_t len, unsigned long flags)
   385	{
   386		struct hidma_chan *mchan = to_hidma_chan(dmach);
   387		struct hidma_desc *mdesc = NULL;
   388		struct hidma_dev *mdma = mchan->dmadev;
   389		unsigned long irqflags;
   390	
 > 391		dev_dbg(mdma->ddev.dev,
   392			"memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
   393			&dma_dest, &dma_src, len);
   394	
   395		/* Get free descriptor */
   396		spin_lock_irqsave(&mchan->lock, irqflags);
   397		if (!list_empty(&mchan->free)) {
   398			mdesc = list_first_entry(&mchan->free, struct hidma_desc,
   399						node);

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 36268 bytes --]

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-08  4:52   ` Sinan Kaya
@ 2015-11-08 20:09     ` Andy Shevchenko
  -1 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-08 20:09 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Vinod Koul, Dan Williams, linux-kernel

On Sun, Nov 8, 2015 at 6:52 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
> This patch adds supporting utility functions
> for selftest. The intention is to share the self
> test code between different drivers.
>
> Supported test cases include:
> 1. dma_map_single
> 2. streaming DMA
> 3. coherent DMA
> 4. scatter-gather DMA

All of the comments below apply to the entire file; please check and update it throughout.

> +struct test_result {
> +       atomic_t counter;
> +       wait_queue_head_t wq;
> +       struct dma_device *dmadev;

dmadev -> dma.

> +};
> +
> +static void dma_selftest_complete(void *arg)
> +{
> +       struct test_result *result = arg;
> +       struct dma_device *dmadev = result->dmadev;
> +
> +       atomic_inc(&result->counter);
> +       wake_up(&result->wq);
> +       dev_dbg(dmadev->dev, "self test transfer complete :%d\n",
> +               atomic_read(&result->counter));
> +}
> +
> +/*
> + * Perform a transaction to verify the HW works.
> + */
> +static int dma_selftest_sg(struct dma_device *dmadev,

dmadev -> dma

> +                       struct dma_chan *dma_chanptr, u64 size,

dma_chanptr -> chan

> +                       unsigned long flags)
> +{
> +       dma_addr_t src_dma, dest_dma, dest_dma_it;

src_dma -> src, dest_dma_it -> dst ?

> +       u8 *dest_buf;

Perhaps put this next to the src_buf definition?

> +       u32 i, j = 0;

unsigned int

> +       dma_cookie_t cookie;
> +       struct dma_async_tx_descriptor *tx;

> +       int err = 0;
> +       int ret;

Any reason to have two variables of similar meaning instead of one?

> +       struct sg_table sg_table;
> +       struct scatterlist      *sg;
> +       int nents = 10, count;
> +       bool free_channel = 1;
> +       u8 *src_buf;
> +       int map_count;
> +       struct test_result result;

Hmm… Maybe make names shorter?
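
e.g. (purely illustrative):

	struct sg_table sgt;
	struct test_result res;
	dma_addr_t dst_it;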

> +
> +       init_waitqueue_head(&result.wq);
> +       atomic_set(&result.counter, 0);
> +       result.dmadev = dmadev;
> +
> +       if (!dma_chanptr)
> +               return -ENOMEM;
> +
> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)


> +               return -ENODEV;
> +
> +       if (!dma_chanptr->device || !dmadev->dev) {
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +               return -ENODEV;
> +       }
> +
> +       ret = sg_alloc_table(&sg_table, nents, GFP_KERNEL);
> +       if (ret) {
> +               err = ret;
> +               goto sg_table_alloc_failed;
> +       }
> +
> +       for_each_sg(sg_table.sgl, sg, nents, i) {
> +               u64 alloc_sz;
> +               void *cpu_addr;
> +
> +               alloc_sz = round_up(size, nents);
> +               do_div(alloc_sz, nents);
> +               cpu_addr = kmalloc(alloc_sz, GFP_KERNEL);
> +
> +               if (!cpu_addr) {
> +                       err = -ENOMEM;
> +                       goto sg_buf_alloc_failed;
> +               }
> +
> +               dev_dbg(dmadev->dev, "set sg buf[%d] :%p\n", i, cpu_addr);
> +               sg_set_buf(sg, cpu_addr, alloc_sz);
> +       }
> +
> +       dest_buf = kmalloc(round_up(size, nents), GFP_KERNEL);
> +       if (!dest_buf) {
> +               err = -ENOMEM;
> +               goto dst_alloc_failed;
> +       }
> +       dev_dbg(dmadev->dev, "dest:%p\n", dest_buf);
> +
> +       /* Fill in src buffer */
> +       count = 0;
> +       for_each_sg(sg_table.sgl, sg, nents, i) {
> +               src_buf = sg_virt(sg);
> +               dev_dbg(dmadev->dev,
> +                       "set src[%d, %d, %p] = %d\n", i, j, src_buf, count);
> +
> +               for (j = 0; j < sg_dma_len(sg); j++)
> +                       src_buf[j] = count++;
> +       }
> +
> +       /* dma_map_sg cleans and invalidates the cache in arm64 when
> +        * DMA_TO_DEVICE is selected for src. That's why, we need to do
> +        * the mapping after the data is copied.
> +        */
> +       map_count = dma_map_sg(dmadev->dev, sg_table.sgl, nents,
> +                               DMA_TO_DEVICE);
> +       if (!map_count) {
> +               err =  -EINVAL;
> +               goto src_map_failed;
> +       }
> +
> +       dest_dma = dma_map_single(dmadev->dev, dest_buf,
> +                               size, DMA_FROM_DEVICE);
> +
> +       err = dma_mapping_error(dmadev->dev, dest_dma);
> +       if (err)
> +               goto dest_map_failed;
> +
> +       /* check scatter gather list contents */
> +       for_each_sg(sg_table.sgl, sg, map_count, i)
> +               dev_dbg(dmadev->dev,
> +                       "[%d/%d] src va=%p, iova = %pa len:%d\n",
> +                       i, map_count, sg_virt(sg), &sg_dma_address(sg),
> +                       sg_dma_len(sg));
> +
> +       dest_dma_it = dest_dma;
> +       for_each_sg(sg_table.sgl, sg, map_count, i) {
> +               src_buf = sg_virt(sg);
> +               src_dma = sg_dma_address(sg);
> +               dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n",
> +                       &src_dma, &dest_dma_it);
> +
> +               tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma_it,
> +                               src_dma, sg_dma_len(sg), flags);
> +               if (!tx) {
> +                       dev_err(dmadev->dev,
> +                               "Self-test sg failed, disabling\n");
> +                       err = -ENODEV;
> +                       goto prep_memcpy_failed;
> +               }
> +
> +               tx->callback_param = &result;
> +               tx->callback = dma_selftest_complete;
> +               cookie = tx->tx_submit(tx);
> +               dest_dma_it += sg_dma_len(sg);
> +       }
> +
> +       dmadev->device_issue_pending(dma_chanptr);
> +
> +       /*
> +        * It is assumed that the hardware can move the data within 1s
> +        * and signal the OS of the completion
> +        */
> +       ret = wait_event_timeout(result.wq,
> +               atomic_read(&result.counter) == (map_count),
> +                               msecs_to_jiffies(10000));
> +
> +       if (ret <= 0) {
> +               dev_err(dmadev->dev,
> +                       "Self-test sg copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +       dev_dbg(dmadev->dev,
> +               "Self-test complete signal received\n");
> +
> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
> +                               DMA_COMPLETE) {
> +               dev_err(dmadev->dev,
> +                       "Self-test sg status not complete, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +
> +       dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
> +                               DMA_FROM_DEVICE);
> +
> +       count = 0;
> +       for_each_sg(sg_table.sgl, sg, map_count, i) {
> +               src_buf = sg_virt(sg);
> +               if (memcmp(src_buf, &dest_buf[count], sg_dma_len(sg)) == 0) {
> +                       count += sg_dma_len(sg);
> +                       continue;
> +               }
> +
> +               for (j = 0; j < sg_dma_len(sg); j++) {
> +                       if (src_buf[j] != dest_buf[count]) {
> +                               dev_dbg(dmadev->dev,
> +                               "[%d, %d] (%p) src :%x dest (%p):%x cnt:%d\n",
> +                                       i, j, &src_buf[j], src_buf[j],
> +                                       &dest_buf[count], dest_buf[count],
> +                                       count);
> +                               dev_err(dmadev->dev,
> +                                "Self-test copy failed compare, disabling\n");
> +                               err = -EFAULT;
> +                               return err;
> +                               goto compare_failed;

Something is wrong here: the early return makes the following goto
unreachable, so none of the cleanup labels run and the DMA mappings,
buffers and channel are leaked.
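
i.e. drop the early return so the cleanup path actually runs (sketch):

	err = -EFAULT;
	goto compare_failed;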

> +                       }
> +                       count++;
> +               }
> +       }
> +
> +       /*
> +        * do not release the channel
> +        * we want to consume all the channels on self test
> +        */
> +       free_channel = 0;
> +
> +compare_failed:
> +tx_status:
> +prep_memcpy_failed:
> +       dma_unmap_single(dmadev->dev, dest_dma, size,
> +                        DMA_FROM_DEVICE);
> +dest_map_failed:
> +       dma_unmap_sg(dmadev->dev, sg_table.sgl, nents,
> +                       DMA_TO_DEVICE);
> +
> +src_map_failed:
> +       kfree(dest_buf);
> +
> +dst_alloc_failed:
> +sg_buf_alloc_failed:
> +       for_each_sg(sg_table.sgl, sg, nents, i) {
> +               if (sg_virt(sg))
> +                       kfree(sg_virt(sg));
> +       }
> +       sg_free_table(&sg_table);
> +sg_table_alloc_failed:
> +       if (free_channel)
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +
> +       return err;
> +}
> +
> +/*
> + * Perform a streaming transaction to verify the HW works.
> + */
> +static int dma_selftest_streaming(struct dma_device *dmadev,
> +                       struct dma_chan *dma_chanptr, u64 size,
> +                       unsigned long flags)
> +{
> +       dma_addr_t src_dma, dest_dma;
> +       u8 *dest_buf, *src_buf;
> +       u32 i;
> +       dma_cookie_t cookie;
> +       struct dma_async_tx_descriptor *tx;
> +       int err = 0;
> +       int ret;
> +       bool free_channel = 1;
> +       struct test_result result;
> +
> +       init_waitqueue_head(&result.wq);
> +       atomic_set(&result.counter, 0);
> +       result.dmadev = dmadev;
> +
> +       if (!dma_chanptr)
> +               return -ENOMEM;
> +
> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
> +               return -ENODEV;
> +
> +       if (!dma_chanptr->device || !dmadev->dev) {
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +               return -ENODEV;
> +       }
> +
> +       src_buf = kmalloc(size, GFP_KERNEL);
> +       if (!src_buf) {
> +               err = -ENOMEM;
> +               goto src_alloc_failed;
> +       }
> +
> +       dest_buf = kmalloc(size, GFP_KERNEL);
> +       if (!dest_buf) {
> +               err = -ENOMEM;
> +               goto dst_alloc_failed;
> +       }
> +
> +       dev_dbg(dmadev->dev, "src: %p dest:%p\n", src_buf, dest_buf);
> +
> +       /* Fill in src buffer */
> +       for (i = 0; i < size; i++)
> +               src_buf[i] = (u8)i;
> +
> +       /* dma_map_single cleans and invalidates the cache in arm64 when
> +        * DMA_TO_DEVICE is selected for src. That's why, we need to do
> +        * the mapping after the data is copied.
> +        */
> +       src_dma = dma_map_single(dmadev->dev, src_buf,
> +                                size, DMA_TO_DEVICE);
> +
> +       err = dma_mapping_error(dmadev->dev, src_dma);
> +       if (err)
> +               goto src_map_failed;
> +
> +       dest_dma = dma_map_single(dmadev->dev, dest_buf,
> +                               size, DMA_FROM_DEVICE);
> +
> +       err = dma_mapping_error(dmadev->dev, dest_dma);
> +       if (err)
> +               goto dest_map_failed;
> +       dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n", &src_dma,
> +               &dest_dma);
> +       tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma, src_dma,
> +                                       size, flags);
> +       if (!tx) {
> +               dev_err(dmadev->dev,
> +                       "Self-test streaming failed, disabling\n");
> +               err = -ENODEV;
> +               goto prep_memcpy_failed;
> +       }
> +
> +       tx->callback_param = &result;
> +       tx->callback = dma_selftest_complete;
> +       cookie = tx->tx_submit(tx);
> +       dmadev->device_issue_pending(dma_chanptr);
> +
> +       /*
> +        * It is assumed that the hardware can move the data within 1s
> +        * and signal the OS of the completion
> +        */
> +       ret = wait_event_timeout(result.wq,
> +                               atomic_read(&result.counter) == 1,
> +                               msecs_to_jiffies(10000));
> +
> +       if (ret <= 0) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +       dev_dbg(dmadev->dev, "Self-test complete signal received\n");
> +
> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
> +                               DMA_COMPLETE) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +
> +       dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
> +                               DMA_FROM_DEVICE);
> +
> +       if (memcmp(src_buf, dest_buf, size)) {
> +               for (i = 0; i < size/4; i++) {
> +                       if (((u32 *)src_buf)[i] != ((u32 *)(dest_buf))[i]) {
> +                               dev_dbg(dmadev->dev,
> +                                       "[%d] src data:%x dest data:%x\n",
> +                                       i, ((u32 *)src_buf)[i],
> +                                       ((u32 *)(dest_buf))[i]);
> +                               break;
> +                       }
> +               }
> +               dev_err(dmadev->dev,
> +                       "Self-test copy failed compare, disabling\n");
> +               err = -EFAULT;
> +               goto compare_failed;
> +       }
> +
> +       /*
> +        * do not release the channel
> +        * we want to consume all the channels on self test
> +        */
> +       free_channel = 0;
> +
> +compare_failed:
> +tx_status:
> +prep_memcpy_failed:
> +       dma_unmap_single(dmadev->dev, dest_dma, size,
> +                        DMA_FROM_DEVICE);
> +dest_map_failed:
> +       dma_unmap_single(dmadev->dev, src_dma, size,
> +                       DMA_TO_DEVICE);
> +
> +src_map_failed:
> +       kfree(dest_buf);
> +
> +dst_alloc_failed:
> +       kfree(src_buf);
> +
> +src_alloc_failed:
> +       if (free_channel)
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +
> +       return err;
> +}
> +
> +/*
> + * Perform a coherent transaction to verify the HW works.
> + */
> +static int dma_selftest_one_coherent(struct dma_device *dmadev,
> +                       struct dma_chan *dma_chanptr, u64 size,
> +                       unsigned long flags)
> +{
> +       dma_addr_t src_dma, dest_dma;
> +       u8 *dest_buf, *src_buf;
> +       u32 i;
> +       dma_cookie_t cookie;
> +       struct dma_async_tx_descriptor *tx;
> +       int err = 0;
> +       int ret;
> +       bool free_channel = true;
> +       struct test_result result;
> +
> +       init_waitqueue_head(&result.wq);
> +       atomic_set(&result.counter, 0);
> +       result.dmadev = dmadev;
> +
> +       if (!dma_chanptr)
> +               return -ENOMEM;
> +
> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
> +               return -ENODEV;
> +
> +       if (!dma_chanptr->device || !dmadev->dev) {
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +               return -ENODEV;
> +       }
> +
> +       src_buf = dma_alloc_coherent(dmadev->dev, size,
> +                               &src_dma, GFP_KERNEL);
> +       if (!src_buf) {
> +               err = -ENOMEM;
> +               goto src_alloc_failed;
> +       }
> +
> +       dest_buf = dma_alloc_coherent(dmadev->dev, size,
> +                               &dest_dma, GFP_KERNEL);
> +       if (!dest_buf) {
> +               err = -ENOMEM;
> +               goto dst_alloc_failed;
> +       }
> +
> +       dev_dbg(dmadev->dev, "src: %p dest:%p\n", src_buf, dest_buf);
> +
> +       /* Fill in src buffer */
> +       for (i = 0; i < size; i++)
> +               src_buf[i] = (u8)i;
> +
> +       dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n", &src_dma,
> +               &dest_dma);
> +       tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma, src_dma,
> +                                       size,
> +                                       flags);
> +       if (!tx) {
> +               dev_err(dmadev->dev,
> +                       "Self-test coherent failed, disabling\n");
> +               err = -ENODEV;
> +               goto prep_memcpy_failed;
> +       }
> +
> +       tx->callback_param = &result;
> +       tx->callback = dma_selftest_complete;
> +       cookie = tx->tx_submit(tx);
> +       dmadev->device_issue_pending(dma_chanptr);
> +
> +       /*
> +        * It is assumed that the hardware can move the data within 1s
> +        * and signal the OS of the completion
> +        */
> +       ret = wait_event_timeout(result.wq,
> +                               atomic_read(&result.counter) == 1,
> +                               msecs_to_jiffies(10000));
> +
> +       if (ret <= 0) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +       dev_dbg(dmadev->dev, "Self-test complete signal received\n");
> +
> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
> +                               DMA_COMPLETE) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +
> +       if (memcmp(src_buf, dest_buf, size)) {
> +               for (i = 0; i < size/4; i++) {
> +                       if (((u32 *)src_buf)[i] != ((u32 *)(dest_buf))[i]) {
> +                               dev_dbg(dmadev->dev,
> +                                       "[%d] src data:%x dest data:%x\n",
> +                                       i, ((u32 *)src_buf)[i],
> +                                       ((u32 *)(dest_buf))[i]);
> +                               break;
> +                       }
> +               }
> +               dev_err(dmadev->dev,
> +                       "Self-test copy failed compare, disabling\n");
> +               err = -EFAULT;
> +               goto compare_failed;
> +       }
> +
> +       /*
> +        * do not release the channel
> +        * we want to consume all the channels on self test
> +        */
> +       free_channel = 0;
> +
> +compare_failed:
> +tx_status:
> +prep_memcpy_failed:
> +       dma_free_coherent(dmadev->dev, size, dest_buf, dest_dma);
> +
> +dst_alloc_failed:
> +       dma_free_coherent(dmadev->dev, size, src_buf, src_dma);
> +
> +src_alloc_failed:
> +       if (free_channel)
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +
> +       return err;
> +}
> +
> +static int dma_selftest_all(struct dma_device *dmadev,
> +                               bool req_coherent, bool req_sg)
> +{
> +       int rc = -ENODEV, i = 0;
> +       struct dma_chan **dmach_ptr = NULL;
> +       u32 max_channels = 0;
> +       u64 sizes[] = {PAGE_SIZE - 1, PAGE_SIZE, PAGE_SIZE + 1, 2801, 13295};
> +       int count = 0;
> +       u32 j;
> +       u64 size;
> +       int failed = 0;
> +       struct dma_chan *dmach = NULL;
> +
> +       list_for_each_entry(dmach, &dmadev->channels,
> +                       device_node) {
> +               max_channels++;
> +       }
> +
> +       dmach_ptr = kcalloc(max_channels, sizeof(*dmach_ptr), GFP_KERNEL);
> +       if (!dmach_ptr) {
> +               rc = -ENOMEM;
> +               goto failed_exit;
> +       }
> +
> +       for (j = 0; j < ARRAY_SIZE(sizes); j++) {
> +               size = sizes[j];
> +               count = 0;
> +               dev_dbg(dmadev->dev, "test start for size:%llx\n", size);
> +               list_for_each_entry(dmach, &dmadev->channels,
> +                               device_node) {
> +                       dmach_ptr[count] = dmach;
> +                       if (req_coherent)
> +                               rc = dma_selftest_one_coherent(dmadev,
> +                                       dmach, size,
> +                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> +                       else if (req_sg)
> +                               rc = dma_selftest_sg(dmadev,
> +                                       dmach, size,
> +                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> +                       else
> +                               rc = dma_selftest_streaming(dmadev,
> +                                       dmach, size,
> +                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> +                       if (rc) {
> +                               failed = 1;
> +                               break;
> +                       }
> +                       dev_dbg(dmadev->dev,
> +                               "self test passed for ch:%d\n", count);
> +                       count++;
> +               }
> +
> +               /*
> +                * free the channels where the test passed
> +                * Channel resources are freed for a test that fails.
> +                */
> +               for (i = 0; i < count; i++)
> +                       dmadev->device_free_chan_resources(dmach_ptr[i]);
> +
> +               if (failed)
> +                       break;
> +       }
> +
> +failed_exit:
> +       kfree(dmach_ptr);
> +
> +       return rc;
> +}
> +
> +static int dma_selftest_mapsngle(struct device *dev)
> +{
> +       u32 buf_size = 256;
> +       char *src;
> +       int ret = -ENOMEM;
> +       dma_addr_t dma_src;
> +
> +       src = kmalloc(buf_size, GFP_KERNEL);
> +       if (!src)
> +               return -ENOMEM;
> +
> +       strcpy(src, "hello world");
> +
> +       dma_src = dma_map_single(dev, src, buf_size, DMA_TO_DEVICE);
> +       dev_dbg(dev, "mapsingle: src:%p src_dma:%pad\n", src, &dma_src);
> +
> +       ret = dma_mapping_error(dev, dma_src);
> +       if (ret) {
> +               dev_err(dev, "dma_mapping_error with ret:%d\n", ret);
> +               ret = -ENOMEM;
> +       } else {
> +               if (strcmp(src, "hello world") != 0) {
> +                       dev_err(dev, "memory content mismatch\n");
> +                       ret = -EINVAL;
> +               } else
> +                       dev_dbg(dev, "mapsingle:dma_map_single works\n");
> +
> +               dma_unmap_single(dev, dma_src, buf_size, DMA_TO_DEVICE);
> +       }
> +       kfree(src);
> +       return ret;
> +}
> +
> +/*
> + * Self test all DMA channels.
> + */
> +int dma_selftest_memcpy(struct dma_device *dmadev)
> +{
> +       int rc;
> +
> +       dma_selftest_mapsngle(dmadev->dev);
> +
> +       /* streaming test */
> +       rc = dma_selftest_all(dmadev, false, false);
> +       if (rc)
> +               return rc;
> +       dev_dbg(dmadev->dev, "streaming self test passed\n");
> +
> +       /* coherent test */
> +       rc = dma_selftest_all(dmadev, true, false);
> +       if (rc)
> +               return rc;
> +
> +       dev_dbg(dmadev->dev, "coherent self test passed\n");
> +
> +       /* scatter gather test */
> +       rc = dma_selftest_all(dmadev, false, true);
> +       if (rc)
> +               return rc;
> +
> +       dev_dbg(dmadev->dev, "scatter gather self test passed\n");
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(dma_selftest_memcpy);
> --
> Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/



-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 71+ messages in thread

* [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
@ 2015-11-08 20:09     ` Andy Shevchenko
  0 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-08 20:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Nov 8, 2015 at 6:52 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
> This patch adds supporting utility functions
> for selftest. The intention is to share the self
> test code between different drivers.
>
> Supported test cases include:
> 1. dma_map_single
> 2. streaming DMA
> 3. coherent DMA
> 4. scatter-gather DMA

All below comments about entire file, please check and update.

> +struct test_result {
> +       atomic_t counter;
> +       wait_queue_head_t wq;
> +       struct dma_device *dmadev;

dmadev -> dma.

> +};
> +
> +static void dma_selftest_complete(void *arg)
> +{
> +       struct test_result *result = arg;
> +       struct dma_device *dmadev = result->dmadev;
> +
> +       atomic_inc(&result->counter);
> +       wake_up(&result->wq);
> +       dev_dbg(dmadev->dev, "self test transfer complete :%d\n",
> +               atomic_read(&result->counter));
> +}
> +
> +/*
> + * Perform a transaction to verify the HW works.
> + */
> +static int dma_selftest_sg(struct dma_device *dmadev,

dmdev -> dma

> +                       struct dma_chan *dma_chanptr, u64 size,

dma_chanptr -> chan

> +                       unsigned long flags)
> +{
> +       dma_addr_t src_dma, dest_dma, dest_dma_it;

src_dma -> src, dest_dma_it -> dst ?

> +       u8 *dest_buf;

Perhaps put nearby src_buf definition?

> +       u32 i, j = 0;

unsigned int

> +       dma_cookie_t cookie;
> +       struct dma_async_tx_descriptor *tx;

> +       int err = 0;
> +       int ret;

Any reason to have two instead of one of similar meaning?

> +       struct sg_table sg_table;
> +       struct scatterlist      *sg;
> +       int nents = 10, count;
> +       bool free_channel = 1;
> +       u8 *src_buf;
> +       int map_count;
> +       struct test_result result;

Hmm? Maybe make names shorter?

> +
> +       init_waitqueue_head(&result.wq);
> +       atomic_set(&result.counter, 0);
> +       result.dmadev = dmadev;
> +
> +       if (!dma_chanptr)
> +               return -ENOMEM;
> +
> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)


> +               return -ENODEV;
> +
> +       if (!dma_chanptr->device || !dmadev->dev) {
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +               return -ENODEV;
> +       }
> +
> +       ret = sg_alloc_table(&sg_table, nents, GFP_KERNEL);
> +       if (ret) {
> +               err = ret;
> +               goto sg_table_alloc_failed;
> +       }
> +
> +       for_each_sg(sg_table.sgl, sg, nents, i) {
> +               u64 alloc_sz;
> +               void *cpu_addr;
> +
> +               alloc_sz = round_up(size, nents);
> +               do_div(alloc_sz, nents);
> +               cpu_addr = kmalloc(alloc_sz, GFP_KERNEL);
> +
> +               if (!cpu_addr) {
> +                       err = -ENOMEM;
> +                       goto sg_buf_alloc_failed;
> +               }
> +
> +               dev_dbg(dmadev->dev, "set sg buf[%d] :%p\n", i, cpu_addr);
> +               sg_set_buf(sg, cpu_addr, alloc_sz);
> +       }
> +
> +       dest_buf = kmalloc(round_up(size, nents), GFP_KERNEL);
> +       if (!dest_buf) {
> +               err = -ENOMEM;
> +               goto dst_alloc_failed;
> +       }
> +       dev_dbg(dmadev->dev, "dest:%p\n", dest_buf);
> +
> +       /* Fill in src buffer */
> +       count = 0;
> +       for_each_sg(sg_table.sgl, sg, nents, i) {
> +               src_buf = sg_virt(sg);
> +               dev_dbg(dmadev->dev,
> +                       "set src[%d, %d, %p] = %d\n", i, j, src_buf, count);
> +
> +               for (j = 0; j < sg_dma_len(sg); j++)
> +                       src_buf[j] = count++;
> +       }
> +
> +       /* dma_map_sg cleans and invalidates the cache in arm64 when
> +        * DMA_TO_DEVICE is selected for src. That's why, we need to do
> +        * the mapping after the data is copied.
> +        */
> +       map_count = dma_map_sg(dmadev->dev, sg_table.sgl, nents,
> +                               DMA_TO_DEVICE);
> +       if (!map_count) {
> +               err =  -EINVAL;
> +               goto src_map_failed;
> +       }
> +
> +       dest_dma = dma_map_single(dmadev->dev, dest_buf,
> +                               size, DMA_FROM_DEVICE);
> +
> +       err = dma_mapping_error(dmadev->dev, dest_dma);
> +       if (err)
> +               goto dest_map_failed;
> +
> +       /* check scatter gather list contents */
> +       for_each_sg(sg_table.sgl, sg, map_count, i)
> +               dev_dbg(dmadev->dev,
> +                       "[%d/%d] src va=%p, iova = %pa len:%d\n",
> +                       i, map_count, sg_virt(sg), &sg_dma_address(sg),
> +                       sg_dma_len(sg));
> +
> +       dest_dma_it = dest_dma;
> +       for_each_sg(sg_table.sgl, sg, map_count, i) {
> +               src_buf = sg_virt(sg);
> +               src_dma = sg_dma_address(sg);
> +               dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n",
> +                       &src_dma, &dest_dma_it);
> +
> +               tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma_it,
> +                               src_dma, sg_dma_len(sg), flags);
> +               if (!tx) {
> +                       dev_err(dmadev->dev,
> +                               "Self-test sg failed, disabling\n");
> +                       err = -ENODEV;
> +                       goto prep_memcpy_failed;
> +               }
> +
> +               tx->callback_param = &result;
> +               tx->callback = dma_selftest_complete;
> +               cookie = tx->tx_submit(tx);
> +               dest_dma_it += sg_dma_len(sg);
> +       }
> +
> +       dmadev->device_issue_pending(dma_chanptr);
> +
> +       /*
> +        * It is assumed that the hardware can move the data within 1s
> +        * and signal the OS of the completion
> +        */
> +       ret = wait_event_timeout(result.wq,
> +               atomic_read(&result.counter) == (map_count),
> +                               msecs_to_jiffies(10000));
> +
> +       if (ret <= 0) {
> +               dev_err(dmadev->dev,
> +                       "Self-test sg copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +       dev_dbg(dmadev->dev,
> +               "Self-test complete signal received\n");
> +
> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
> +                               DMA_COMPLETE) {
> +               dev_err(dmadev->dev,
> +                       "Self-test sg status not complete, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +
> +       dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
> +                               DMA_FROM_DEVICE);
> +
> +       count = 0;
> +       for_each_sg(sg_table.sgl, sg, map_count, i) {
> +               src_buf = sg_virt(sg);
> +               if (memcmp(src_buf, &dest_buf[count], sg_dma_len(sg)) == 0) {
> +                       count += sg_dma_len(sg);
> +                       continue;
> +               }
> +
> +               for (j = 0; j < sg_dma_len(sg); j++) {
> +                       if (src_buf[j] != dest_buf[count]) {
> +                               dev_dbg(dmadev->dev,
> +                               "[%d, %d] (%p) src :%x dest (%p):%x cnt:%d\n",
> +                                       i, j, &src_buf[j], src_buf[j],
> +                                       &dest_buf[count], dest_buf[count],
> +                                       count);
> +                               dev_err(dmadev->dev,
> +                                "Self-test copy failed compare, disabling\n");
> +                               err = -EFAULT;
> +                               return err;
> +                               goto compare_failed;

Here something wrong.

> +                       }
> +                       count++;
> +               }
> +       }
> +
> +       /*
> +        * do not release the channel
> +        * we want to consume all the channels on self test
> +        */
> +       free_channel = 0;
> +
> +compare_failed:
> +tx_status:
> +prep_memcpy_failed:
> +       dma_unmap_single(dmadev->dev, dest_dma, size,
> +                        DMA_FROM_DEVICE);
> +dest_map_failed:
> +       dma_unmap_sg(dmadev->dev, sg_table.sgl, nents,
> +                       DMA_TO_DEVICE);
> +
> +src_map_failed:
> +       kfree(dest_buf);
> +
> +dst_alloc_failed:
> +sg_buf_alloc_failed:
> +       for_each_sg(sg_table.sgl, sg, nents, i) {
> +               if (sg_virt(sg))
> +                       kfree(sg_virt(sg));
> +       }
> +       sg_free_table(&sg_table);
> +sg_table_alloc_failed:
> +       if (free_channel)
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +
> +       return err;
> +}
> +
> +/*
> + * Perform a streaming transaction to verify the HW works.
> + */
> +static int dma_selftest_streaming(struct dma_device *dmadev,
> +                       struct dma_chan *dma_chanptr, u64 size,
> +                       unsigned long flags)
> +{
> +       dma_addr_t src_dma, dest_dma;
> +       u8 *dest_buf, *src_buf;
> +       u32 i;
> +       dma_cookie_t cookie;
> +       struct dma_async_tx_descriptor *tx;
> +       int err = 0;
> +       int ret;
> +       bool free_channel = 1;
> +       struct test_result result;
> +
> +       init_waitqueue_head(&result.wq);
> +       atomic_set(&result.counter, 0);
> +       result.dmadev = dmadev;
> +
> +       if (!dma_chanptr)
> +               return -ENOMEM;
> +
> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
> +               return -ENODEV;
> +
> +       if (!dma_chanptr->device || !dmadev->dev) {
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +               return -ENODEV;
> +       }
> +
> +       src_buf = kmalloc(size, GFP_KERNEL);
> +       if (!src_buf) {
> +               err = -ENOMEM;
> +               goto src_alloc_failed;
> +       }
> +
> +       dest_buf = kmalloc(size, GFP_KERNEL);
> +       if (!dest_buf) {
> +               err = -ENOMEM;
> +               goto dst_alloc_failed;
> +       }
> +
> +       dev_dbg(dmadev->dev, "src: %p dest:%p\n", src_buf, dest_buf);
> +
> +       /* Fill in src buffer */
> +       for (i = 0; i < size; i++)
> +               src_buf[i] = (u8)i;
> +
> +       /* dma_map_single cleans and invalidates the cache in arm64 when
> +        * DMA_TO_DEVICE is selected for src. That's why, we need to do
> +        * the mapping after the data is copied.
> +        */
> +       src_dma = dma_map_single(dmadev->dev, src_buf,
> +                                size, DMA_TO_DEVICE);
> +
> +       err = dma_mapping_error(dmadev->dev, src_dma);
> +       if (err)
> +               goto src_map_failed;
> +
> +       dest_dma = dma_map_single(dmadev->dev, dest_buf,
> +                               size, DMA_FROM_DEVICE);
> +
> +       err = dma_mapping_error(dmadev->dev, dest_dma);
> +       if (err)
> +               goto dest_map_failed;
> +       dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n", &src_dma,
> +               &dest_dma);
> +       tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma, src_dma,
> +                                       size, flags);
> +       if (!tx) {
> +               dev_err(dmadev->dev,
> +                       "Self-test streaming failed, disabling\n");
> +               err = -ENODEV;
> +               goto prep_memcpy_failed;
> +       }
> +
> +       tx->callback_param = &result;
> +       tx->callback = dma_selftest_complete;
> +       cookie = tx->tx_submit(tx);
> +       dmadev->device_issue_pending(dma_chanptr);
> +
> +       /*
> +        * It is assumed that the hardware can move the data within 1s
> +        * and signal the OS of the completion
> +        */
> +       ret = wait_event_timeout(result.wq,
> +                               atomic_read(&result.counter) == 1,
> +                               msecs_to_jiffies(10000));
> +
> +       if (ret <= 0) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +       dev_dbg(dmadev->dev, "Self-test complete signal received\n");
> +
> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
> +                               DMA_COMPLETE) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +
> +       dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
> +                               DMA_FROM_DEVICE);
> +
> +       if (memcmp(src_buf, dest_buf, size)) {
> +               for (i = 0; i < size/4; i++) {
> +                       if (((u32 *)src_buf)[i] != ((u32 *)(dest_buf))[i]) {
> +                               dev_dbg(dmadev->dev,
> +                                       "[%d] src data:%x dest data:%x\n",
> +                                       i, ((u32 *)src_buf)[i],
> +                                       ((u32 *)(dest_buf))[i]);
> +                               break;
> +                       }
> +               }
> +               dev_err(dmadev->dev,
> +                       "Self-test copy failed compare, disabling\n");
> +               err = -EFAULT;
> +               goto compare_failed;
> +       }
> +
> +       /*
> +        * do not release the channel
> +        * we want to consume all the channels on self test
> +        */
> +       free_channel = false;
> +
> +compare_failed:
> +tx_status:
> +prep_memcpy_failed:
> +       dma_unmap_single(dmadev->dev, dest_dma, size,
> +                        DMA_FROM_DEVICE);
> +dest_map_failed:
> +       dma_unmap_single(dmadev->dev, src_dma, size,
> +                       DMA_TO_DEVICE);
> +
> +src_map_failed:
> +       kfree(dest_buf);
> +
> +dst_alloc_failed:
> +       kfree(src_buf);
> +
> +src_alloc_failed:
> +       if (free_channel)
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +
> +       return err;
> +}
> +
> +/*
> + * Perform a coherent transaction to verify the HW works.
> + */
> +static int dma_selftest_one_coherent(struct dma_device *dmadev,
> +                       struct dma_chan *dma_chanptr, u64 size,
> +                       unsigned long flags)
> +{
> +       dma_addr_t src_dma, dest_dma;
> +       u8 *dest_buf, *src_buf;
> +       u32 i;
> +       dma_cookie_t cookie;
> +       struct dma_async_tx_descriptor *tx;
> +       int err = 0;
> +       int ret;
> +       bool free_channel = true;
> +       struct test_result result;
> +
> +       init_waitqueue_head(&result.wq);
> +       atomic_set(&result.counter, 0);
> +       result.dmadev = dmadev;
> +
> +       if (!dma_chanptr)
> +               return -ENOMEM;
> +
> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
> +               return -ENODEV;
> +
> +       if (!dma_chanptr->device || !dmadev->dev) {
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +               return -ENODEV;
> +       }
> +
> +       src_buf = dma_alloc_coherent(dmadev->dev, size,
> +                               &src_dma, GFP_KERNEL);
> +       if (!src_buf) {
> +               err = -ENOMEM;
> +               goto src_alloc_failed;
> +       }
> +
> +       dest_buf = dma_alloc_coherent(dmadev->dev, size,
> +                               &dest_dma, GFP_KERNEL);
> +       if (!dest_buf) {
> +               err = -ENOMEM;
> +               goto dst_alloc_failed;
> +       }
> +
> +       dev_dbg(dmadev->dev, "src: %p dest:%p\n", src_buf, dest_buf);
> +
> +       /* Fill in src buffer */
> +       for (i = 0; i < size; i++)
> +               src_buf[i] = (u8)i;
> +
> +       dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n", &src_dma,
> +               &dest_dma);
> +       tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma, src_dma,
> +                                       size,
> +                                       flags);
> +       if (!tx) {
> +               dev_err(dmadev->dev,
> +                       "Self-test coherent failed, disabling\n");
> +               err = -ENODEV;
> +               goto prep_memcpy_failed;
> +       }
> +
> +       tx->callback_param = &result;
> +       tx->callback = dma_selftest_complete;
> +       cookie = tx->tx_submit(tx);
> +       dmadev->device_issue_pending(dma_chanptr);
> +
> +       /*
> +        * It is assumed that the hardware can move the data and signal
> +        * the OS of the completion within the 10 second timeout below.
> +        */
> +       ret = wait_event_timeout(result.wq,
> +                               atomic_read(&result.counter) == 1,
> +                               msecs_to_jiffies(10000));
> +
> +       if (ret <= 0) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +       dev_dbg(dmadev->dev, "Self-test complete signal received\n");
> +
> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
> +                               DMA_COMPLETE) {
> +               dev_err(dmadev->dev,
> +                       "Self-test copy timed out, disabling\n");
> +               err = -ENODEV;
> +               goto tx_status;
> +       }
> +
> +       if (memcmp(src_buf, dest_buf, size)) {
> +               for (i = 0; i < size/4; i++) {
> +                       if (((u32 *)src_buf)[i] != ((u32 *)(dest_buf))[i]) {
> +                               dev_dbg(dmadev->dev,
> +                                       "[%d] src data:%x dest data:%x\n",
> +                                       i, ((u32 *)src_buf)[i],
> +                                       ((u32 *)(dest_buf))[i]);
> +                               break;
> +                       }
> +               }
> +               dev_err(dmadev->dev,
> +                       "Self-test copy failed compare, disabling\n");
> +               err = -EFAULT;
> +               goto compare_failed;
> +       }
> +
> +       /*
> +        * do not release the channel
> +        * we want to consume all the channels on self test
> +        */
> +       free_channel = false;
> +
> +compare_failed:
> +tx_status:
> +prep_memcpy_failed:
> +       dma_free_coherent(dmadev->dev, size, dest_buf, dest_dma);
> +
> +dst_alloc_failed:
> +       dma_free_coherent(dmadev->dev, size, src_buf, src_dma);
> +
> +src_alloc_failed:
> +       if (free_channel)
> +               dmadev->device_free_chan_resources(dma_chanptr);
> +
> +       return err;
> +}
> +
> +static int dma_selftest_all(struct dma_device *dmadev,
> +                               bool req_coherent, bool req_sg)
> +{
> +       int rc = -ENODEV, i = 0;
> +       struct dma_chan **dmach_ptr = NULL;
> +       u32 max_channels = 0;
> +       u64 sizes[] = {PAGE_SIZE - 1, PAGE_SIZE, PAGE_SIZE + 1, 2801, 13295};
> +       int count = 0;
> +       u32 j;
> +       u64 size;
> +       int failed = 0;
> +       struct dma_chan *dmach = NULL;
> +
> +       list_for_each_entry(dmach, &dmadev->channels,
> +                       device_node) {
> +               max_channels++;
> +       }
> +
> +       dmach_ptr = kcalloc(max_channels, sizeof(*dmach_ptr), GFP_KERNEL);
> +       if (!dmach_ptr) {
> +               rc = -ENOMEM;
> +               goto failed_exit;
> +       }
> +
> +       for (j = 0; j < ARRAY_SIZE(sizes); j++) {
> +               size = sizes[j];
> +               count = 0;
> +               dev_dbg(dmadev->dev, "test start for size:%llx\n", size);
> +               list_for_each_entry(dmach, &dmadev->channels,
> +                               device_node) {
> +                       dmach_ptr[count] = dmach;
> +                       if (req_coherent)
> +                               rc = dma_selftest_one_coherent(dmadev,
> +                                       dmach, size,
> +                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> +                       else if (req_sg)
> +                               rc = dma_selftest_sg(dmadev,
> +                                       dmach, size,
> +                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> +                       else
> +                               rc = dma_selftest_streaming(dmadev,
> +                                       dmach, size,
> +                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
> +                       if (rc) {
> +                               failed = 1;
> +                               break;
> +                       }
> +                       dev_dbg(dmadev->dev,
> +                               "self test passed for ch:%d\n", count);
> +                       count++;
> +               }
> +
> +               /*
> +                * Free the channels where the test passed.
> +                * A failing test releases its own channel before returning.
> +                */
> +               for (i = 0; i < count; i++)
> +                       dmadev->device_free_chan_resources(dmach_ptr[i]);
> +
> +               if (failed)
> +                       break;
> +       }
> +
> +failed_exit:
> +       kfree(dmach_ptr);
> +
> +       return rc;
> +}
> +
> +static int dma_selftest_mapsingle(struct device *dev)
> +{
> +       u32 buf_size = 256;
> +       char *src;
> +       int ret = -ENOMEM;
> +       dma_addr_t dma_src;
> +
> +       src = kmalloc(buf_size, GFP_KERNEL);
> +       if (!src)
> +               return -ENOMEM;
> +
> +       strcpy(src, "hello world");
> +
> +       dma_src = dma_map_single(dev, src, buf_size, DMA_TO_DEVICE);
> +       dev_dbg(dev, "mapsingle: src:%p src_dma:%pad\n", src, &dma_src);
> +
> +       ret = dma_mapping_error(dev, dma_src);
> +       if (ret) {
> +               dev_err(dev, "dma_mapping_error with ret:%d\n", ret);
> +               ret = -ENOMEM;
> +       } else {
> +               if (strcmp(src, "hello world") != 0) {
> +                       dev_err(dev, "memory content mismatch\n");
> +                       ret = -EINVAL;
> +               } else {
> +                       dev_dbg(dev, "mapsingle:dma_map_single works\n");
> +               }
> +
> +               dma_unmap_single(dev, dma_src, buf_size, DMA_TO_DEVICE);
> +       }
> +       kfree(src);
> +       return ret;
> +}
> +
> +/*
> + * Self test all DMA channels.
> + */
> +int dma_selftest_memcpy(struct dma_device *dmadev)
> +{
> +       int rc;
> +
> +       dma_selftest_mapsingle(dmadev->dev);
> +
> +       /* streaming test */
> +       rc = dma_selftest_all(dmadev, false, false);
> +       if (rc)
> +               return rc;
> +       dev_dbg(dmadev->dev, "streaming self test passed\n");
> +
> +       /* coherent test */
> +       rc = dma_selftest_all(dmadev, true, false);
> +       if (rc)
> +               return rc;
> +
> +       dev_dbg(dmadev->dev, "coherent self test passed\n");
> +
> +       /* scatter gather test */
> +       rc = dma_selftest_all(dmadev, false, true);
> +       if (rc)
> +               return rc;
> +
> +       dev_dbg(dmadev->dev, "scatter gather self test passed\n");
> +       return 0;
> +}
> +EXPORT_SYMBOL_GPL(dma_selftest_memcpy);
> --
> Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/



-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08  4:53   ` Sinan Kaya
  (?)
@ 2015-11-08 20:47       ` Andy Shevchenko
  -1 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-08 20:47 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, timur-sgV2jX0FEOL9JmXXK+q4OQ,
	cov-sgV2jX0FEOL9JmXXK+q4OQ, jcm-H+wXaHxf7aLQT0dZR+AlfA,
	Andy Gross, linux-arm-msm-u79uwXL29TY76Z2rM5mHXA,
	linux-arm Mailing List, Rob Herring, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams, devicetree,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA

On Sun, Nov 8, 2015 at 6:53 AM, Sinan Kaya <okaya-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org> wrote:
> This patch adds support for the hidma engine. The driver
> consists of two logical blocks: the DMA engine interface
> and the low-level interface. The hardware only supports
> memcpy/memset and this driver only supports the memcpy
> interface. Neither the HW nor the driver supports the slave interface.

Make the commit message lines a bit longer.

> +/*
> + * Qualcomm Technologies HIDMA DMA engine interface
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +/*
> + * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008.
> + * Copyright (C) Semihalf 2009
> + * Copyright (C) Ilya Yanok, Emcraft Systems 2010
> + * Copyright (C) Alexander Popov, Promcontroller 2014
> + *
> + * Written by Piotr Ziecik <kosmo-nYOzD4b6Jr9Wk0Htik3J/w@public.gmane.org>. Hardware description
> + * (defines, structures and comments) was taken from MPC5121 DMA driver
> + * written by Hongjun Chen <hong-jun.chen-KZfg59tc24xl57MIdRCFDg@public.gmane.org>.
> + *
> + * Approved as OSADL project by a majority of OSADL members and funded
> + * by OSADL membership fees in 2009;  for details see www.osadl.org.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License as published by the Free
> + * Software Foundation; either version 2 of the License, or (at your option)
> + * any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * The full GNU General Public License is included in this distribution in the
> + * file called COPYING.
> + */
> +
> +/* Linux Foundation elects GPLv2 license only. */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/of_dma.h>
> +#include <linux/property.h>
> +#include <linux/delay.h>
> +#include <linux/highmem.h>
> +#include <linux/io.h>
> +#include <linux/sched.h>
> +#include <linux/wait.h>
> +#include <linux/acpi.h>
> +#include <linux/irq.h>
> +#include <linux/atomic.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "../dmaengine.h"
> +#include "hidma.h"
> +
> +/*
> + * Default idle time is 2 seconds. This parameter can
> + * be overridden by changing the following
> + * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
> + * during kernel boot.
> + */
> +#define AUTOSUSPEND_TIMEOUT            2000
> +#define ERR_INFO_SW                    0xFF
> +#define ERR_CODE_UNEXPECTED_TERMINATE  0x0
> +
> +static inline
> +struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
> +{
> +       return container_of(dmadev, struct hidma_dev, ddev);
> +}
> +
> +static inline
> +struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
> +{
> +       return container_of(_lldevp, struct hidma_dev, lldev);
> +}
> +
> +static inline
> +struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
> +{
> +       return container_of(dmach, struct hidma_chan, chan);
> +}
> +
> +static inline struct hidma_desc *
> +to_hidma_desc(struct dma_async_tx_descriptor *t)
> +{
> +       return container_of(t, struct hidma_desc, desc);
> +}
> +
> +static void hidma_free(struct hidma_dev *dmadev)
> +{
> +       dev_dbg(dmadev->ddev.dev, "free dmadev\n");
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +}
> +
> +static unsigned int nr_desc_prm;
> +module_param(nr_desc_prm, uint, 0644);
> +MODULE_PARM_DESC(nr_desc_prm,
> +                "number of descriptors (default: 0)");
> +
> +#define MAX_HIDMA_CHANNELS     64
> +static int event_channel_idx[MAX_HIDMA_CHANNELS] = {
> +       [0 ... (MAX_HIDMA_CHANNELS - 1)] = -1};
> +static unsigned int num_event_channel_idx;
> +module_param_array_named(event_channel_idx, event_channel_idx, int,
> +                       &num_event_channel_idx, 0644);
> +MODULE_PARM_DESC(event_channel_idx,
> +               "event channel index array for the notifications");
> +static atomic_t channel_ref_count;
> +
> +/* process completed descriptors */
> +static void hidma_process_completed(struct hidma_dev *mdma)
> +{
> +       dma_cookie_t last_cookie = 0;
> +       struct hidma_chan *mchan;
> +       struct hidma_desc *mdesc;
> +       struct dma_async_tx_descriptor *desc;
> +       unsigned long irqflags;
> +       struct list_head list;
> +       struct dma_chan *dmach = NULL;

Redundant assignment.
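
A one-line fix is to drop the initializer, i.e.

struct dma_chan *dmach;

list_for_each_entry() assigns it before it is ever dereferenced.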

> +
> +       list_for_each_entry(dmach, &mdma->ddev.channels,
> +                       device_node) {
> +               mchan = to_hidma_chan(dmach);
> +               INIT_LIST_HEAD(&list);
> +
> +               /* Get all completed descriptors */
> +               spin_lock_irqsave(&mchan->lock, irqflags);
> +               list_splice_tail_init(&mchan->completed, &list);
> +               spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +               /* Execute callbacks and run dependencies */
> +               list_for_each_entry(mdesc, &list, node) {
> +                       desc = &mdesc->desc;
> +
> +                       spin_lock_irqsave(&mchan->lock, irqflags);
> +                       dma_cookie_complete(desc);
> +                       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +                       if (desc->callback &&
> +                               (hidma_ll_status(mdma->lldev, mdesc->tre_ch)
> +                               == DMA_COMPLETE))
> +                               desc->callback(desc->callback_param);
> +
> +                       last_cookie = desc->cookie;
> +                       dma_run_dependencies(desc);
> +               }
> +
> +               /* Free descriptors */
> +               spin_lock_irqsave(&mchan->lock, irqflags);
> +               list_splice_tail_init(&list, &mchan->free);
> +               spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       }
> +}
> +
> +/*
> + * Called once for each submitted descriptor.
> + * PM is locked once for each descriptor that is currently
> + * in execution.
> + */
> +static void hidma_callback(void *data)
> +{
> +       struct hidma_desc *mdesc = data;
> +       struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
> +       unsigned long irqflags;
> +       struct dma_device *ddev = mchan->chan.device;
> +       struct hidma_dev *dmadev = to_hidma_dev(ddev);
> +       bool queued = false;
> +
> +       dev_dbg(dmadev->ddev.dev, "callback: data:0x%p\n", data);
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +
> +       if (mdesc->node.next) {
> +               /* Delete from the active list, add to completed list */
> +               list_move_tail(&mdesc->node, &mchan->completed);
> +               queued = true;
> +       }
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       hidma_process_completed(dmadev);
> +
> +       if (queued) {
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +}
> +
> +static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
> +{
> +       struct hidma_chan *mchan;
> +       struct dma_device *ddev;
> +
> +       mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
> +       if (!mchan)
> +               return -ENOMEM;
> +
> +       ddev = &dmadev->ddev;
> +       mchan->dma_sig = dma_sig;
> +       mchan->dmadev = dmadev;
> +       mchan->chan.device = ddev;
> +       dma_cookie_init(&mchan->chan);
> +
> +       INIT_LIST_HEAD(&mchan->free);
> +       INIT_LIST_HEAD(&mchan->prepared);
> +       INIT_LIST_HEAD(&mchan->active);
> +       INIT_LIST_HEAD(&mchan->completed);
> +
> +       spin_lock_init(&mchan->lock);
> +       list_add_tail(&mchan->chan.device_node, &ddev->channels);
> +       dmadev->ddev.chancnt++;
> +       return 0;
> +}
> +
> +static void hidma_issue_pending(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +
> +       /* PM will be released in hidma_callback function. */
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       hidma_ll_start(dmadev->lldev);
> +}
> +
> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
> +                                       dma_cookie_t cookie,
> +                                       struct dma_tx_state *txstate)
> +{
> +       enum dma_status ret;
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +
> +       if (mchan->paused)
> +               ret = DMA_PAUSED;
> +       else
> +               ret = dma_cookie_status(dmach, cookie, txstate);
> +
> +       return ret;
> +}
> +
> +/*
> + * Submit descriptor to hardware.
> + * Lock the PM for each descriptor we are sending.
> + */
> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(txd->chan);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +       struct hidma_desc *mdesc;
> +       unsigned long irqflags;
> +       dma_cookie_t cookie;
> +
> +       if (!hidma_ll_isenabled(dmadev->lldev))
> +               return -ENODEV;
> +
> +       mdesc = container_of(txd, struct hidma_desc, desc);
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +
> +       /* Move descriptor to active */
> +       list_move_tail(&mdesc->node, &mchan->active);
> +
> +       /* Update cookie */
> +       cookie = dma_cookie_assign(txd);
> +
> +       hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       return cookie;
> +}
> +
> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +       int rc = 0;
> +       struct hidma_desc *mdesc, *tmp;
> +       unsigned long irqflags;
> +       LIST_HEAD(descs);
> +       u32 i;
> +
> +       if (mchan->allocated)
> +               return 0;
> +
> +       /* Alloc descriptors for this channel */
> +       for (i = 0; i < dmadev->nr_descriptors; i++) {
> +               mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
> +               if (!mdesc) {
> +                       rc = -ENOMEM;
> +                       break;
> +               }
> +               dma_async_tx_descriptor_init(&mdesc->desc, dmach);
> +               mdesc->desc.flags = DMA_CTRL_ACK;
> +               mdesc->desc.tx_submit = hidma_tx_submit;
> +
> +               rc = hidma_ll_request(dmadev->lldev,
> +                               mchan->dma_sig, "DMA engine", hidma_callback,
> +                               mdesc, &mdesc->tre_ch);
> +               if (rc) {
> +                       dev_err(dmach->device->dev,
> +                               "channel alloc failed at %u\n", i);
> +                       kfree(mdesc);
> +                       break;
> +               }
> +               list_add_tail(&mdesc->node, &descs);
> +       }
> +
> +       if (rc) {
> +               /* return the allocated descriptors */
> +               list_for_each_entry_safe(mdesc, tmp, &descs, node) {
> +                       hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
> +                       kfree(mdesc);
> +               }
> +               return rc;
> +       }
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_splice_tail_init(&descs, &mchan->free);
> +       mchan->allocated = true;
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       dev_dbg(dmadev->ddev.dev,
> +               "allocated channel for %u\n", mchan->dma_sig);
> +       return 1;
> +}
> +
> +static void hidma_free_chan_resources(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *mdma = mchan->dmadev;
> +       struct hidma_desc *mdesc, *tmp;
> +       unsigned long irqflags;
> +       LIST_HEAD(descs);
> +
> +       if (!list_empty(&mchan->prepared) ||
> +               !list_empty(&mchan->active) ||
> +               !list_empty(&mchan->completed)) {
> +               /*
> +                * We have unfinished requests waiting.
> +                * Terminate the request from the hardware.
> +                */
> +               hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
> +                               ERR_CODE_UNEXPECTED_TERMINATE);
> +
> +               /* Give enough time for completions to be called. */
> +               msleep(100);
> +       }
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       /* Channel must be idle */
> +       WARN_ON(!list_empty(&mchan->prepared));
> +       WARN_ON(!list_empty(&mchan->active));
> +       WARN_ON(!list_empty(&mchan->completed));
> +
> +       /* Move data */
> +       list_splice_tail_init(&mchan->free, &descs);
> +
> +       /* Free descriptors */
> +       list_for_each_entry_safe(mdesc, tmp, &descs, node) {
> +               hidma_ll_free(mdma->lldev, mdesc->tre_ch);
> +               list_del(&mdesc->node);
> +               kfree(mdesc);
> +       }
> +
> +       mchan->allocated = false;
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
> +}
> +
> +static struct dma_async_tx_descriptor *
> +hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
> +                       dma_addr_t dma_src, size_t len, unsigned long flags)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_desc *mdesc = NULL;
> +       struct hidma_dev *mdma = mchan->dmadev;
> +       unsigned long irqflags;
> +
> +       dev_dbg(mdma->ddev.dev,
> +               "memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
> +               &dma_dest, &dma_src, len);
> +
> +       /* Get free descriptor */
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       if (!list_empty(&mchan->free)) {
> +               mdesc = list_first_entry(&mchan->free, struct hidma_desc,
> +                                       node);
> +               list_del(&mdesc->node);
> +       }
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       if (!mdesc)
> +               return NULL;
> +
> +       hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
> +                       dma_src, dma_dest, len, flags);
> +
> +       /* Place descriptor in prepared list */
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_add_tail(&mdesc->node, &mchan->prepared);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       return &mdesc->desc;
> +}
> +
> +static int hidma_terminate_all(struct dma_chan *chan)
> +{
> +       struct hidma_dev *dmadev;
> +       LIST_HEAD(head);
> +       unsigned long irqflags;
> +       LIST_HEAD(list);
> +       struct hidma_desc *tmp, *mdesc = NULL;
> +       int rc;
> +       struct hidma_chan *mchan;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "terminateall: chan:0x%p\n", mchan);
> +
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       /* give completed requests a chance to finish */
> +       hidma_process_completed(dmadev);
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_splice_init(&mchan->active, &list);
> +       list_splice_init(&mchan->prepared, &list);
> +       list_splice_init(&mchan->completed, &list);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       /* this suspends the existing transfer */
> +       rc = hidma_ll_pause(dmadev->lldev);
> +       if (rc) {
> +               dev_err(dmadev->ddev.dev, "channel did not pause\n");
> +               goto out;
> +       }
> +
> +       /* return all user requests */
> +       list_for_each_entry_safe(mdesc, tmp, &list, node) {
> +               struct dma_async_tx_descriptor  *txd = &mdesc->desc;
> +               dma_async_tx_callback callback = mdesc->desc.callback;
> +               void *param = mdesc->desc.callback_param;
> +               enum dma_status status;
> +
> +               dma_descriptor_unmap(txd);
> +
> +               status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
> +               /*
> +                * The API requires that no submissions are done from a
> +                * callback, so we don't need to drop the lock here
> +                */
> +               if (callback && (status == DMA_COMPLETE))
> +                       callback(param);
> +
> +               dma_run_dependencies(txd);
> +
> +               /* move myself to free_list */
> +               list_move(&mdesc->node, &mchan->free);
> +       }
> +
> +       /* reinitialize the hardware */
> +       rc = hidma_ll_setup(dmadev->lldev);
> +
> +out:
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return rc;
> +}
> +
> +static int hidma_pause(struct dma_chan *chan)
> +{
> +       struct hidma_chan *mchan;
> +       struct hidma_dev *dmadev;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "pause: chan:0x%p\n", mchan);
> +
> +       if (!mchan->paused) {
> +               pm_runtime_get_sync(dmadev->ddev.dev);
> +               if (hidma_ll_pause(dmadev->lldev))
> +                       dev_warn(dmadev->ddev.dev, "channel did not stop\n");
> +               mchan->paused = true;
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +       return 0;
> +}
> +
> +static int hidma_resume(struct dma_chan *chan)
> +{
> +       struct hidma_chan *mchan;
> +       struct hidma_dev *dmadev;
> +       int rc = 0;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "resume: chan:0x%p\n", mchan);
> +
> +       if (mchan->paused) {
> +               pm_runtime_get_sync(dmadev->ddev.dev);
> +               rc = hidma_ll_resume(dmadev->lldev);
> +               if (!rc)
> +                       mchan->paused = false;
> +               else
> +                       dev_err(dmadev->ddev.dev,
> +                                       "failed to resume the channel");
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +       return rc;
> +}
> +
> +static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
> +{
> +       struct hidma_lldev **lldev_ptr = arg;
> +       irqreturn_t ret;
> +       struct hidma_dev *dmadev = to_hidma_dev_from_lldev(lldev_ptr);
> +
> +       /*
> +        * All interrupts are request driven.
> +        * HW doesn't send an interrupt by itself.
> +        */
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       ret = hidma_ll_inthandler(chirq, *lldev_ptr);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return ret;
> +}
> +
> +static int hidma_probe(struct platform_device *pdev)
> +{
> +       struct hidma_dev *dmadev;
> +       int rc = 0;
> +       struct resource *trca_resource;
> +       struct resource *evca_resource;
> +       int chirq;
> +       int current_channel_index = atomic_read(&channel_ref_count);
> +       void *evca;
> +       void *trca;
> +
> +       pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> +       pm_runtime_use_autosuspend(&pdev->dev);
> +       pm_runtime_set_active(&pdev->dev);
> +       pm_runtime_enable(&pdev->dev);
> +
> +       trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);

> +       if (!trca_resource) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }

Why did you ignore my comment about this block?
Remove that condition entirely.
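
Something like this (just a sketch, untested) should be enough, since
devm_ioremap_resource() already rejects a NULL resource and prints an
error on its own:

trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
trca = devm_ioremap_resource(&pdev->dev, trca_resource);
if (IS_ERR(trca)) {
	rc = PTR_ERR(trca);
	goto bailout;
}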

> +
> +       trca = devm_ioremap_resource(&pdev->dev, trca_resource);
> +       if (IS_ERR(trca)) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> +       if (!evca_resource) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }

Ditto.

> +
> +       evca = devm_ioremap_resource(&pdev->dev, evca_resource);
> +       if (IS_ERR(evca)) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       /*
> +        * This driver only handles the channel IRQs.
> +        * Common IRQ is handled by the management driver.
> +        */
> +       chirq = platform_get_irq(pdev, 0);
> +       if (chirq < 0) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }
> +
> +       dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
> +       if (!dmadev) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +       spin_lock_init(&dmadev->lock);
> +       dmadev->ddev.dev = &pdev->dev;
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +
> +       dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
> +       if (WARN_ON(!pdev->dev.dma_mask)) {
> +               rc = -ENXIO;
> +               goto dmafree;
> +       }
> +
> +       dmadev->dev_evca = evca;
> +       dmadev->evca_resource = evca_resource;
> +       dmadev->dev_trca = trca;
> +       dmadev->trca_resource = trca_resource;
> +       dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
> +       dmadev->ddev.device_alloc_chan_resources =
> +               hidma_alloc_chan_resources;
> +       dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
> +       dmadev->ddev.device_tx_status = hidma_tx_status;
> +       dmadev->ddev.device_issue_pending = hidma_issue_pending;
> +       dmadev->ddev.device_pause = hidma_pause;
> +       dmadev->ddev.device_resume = hidma_resume;
> +       dmadev->ddev.device_terminate_all = hidma_terminate_all;
> +       dmadev->ddev.copy_align = 8;
> +
> +       device_property_read_u32(&pdev->dev, "desc-count",
> +                               &dmadev->nr_descriptors);
> +
> +       if (!dmadev->nr_descriptors && nr_desc_prm)
> +               dmadev->nr_descriptors = nr_desc_prm;
> +
> +       if (!dmadev->nr_descriptors) {
> +               rc = -EINVAL;
> +               goto dmafree;
> +       }
> +
> +       if (current_channel_index >= MAX_HIDMA_CHANNELS) {
> +               rc = -EINVAL;
> +               goto dmafree;
> +       }
> +
> +       dmadev->evridx = -1;
> +       device_property_read_u32(&pdev->dev, "event-channel", &dmadev->evridx);
> +
> +       /* kernel command line override for the guest machine */
> +       if (event_channel_idx[current_channel_index] != -1)
> +               dmadev->evridx = event_channel_idx[current_channel_index];
> +
> +       if (dmadev->evridx == -1) {
> +               rc = -EINVAL;
> +               goto dmafree;
> +       }
> +
> +       /* Set DMA mask to 64 bits. */
> +       rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> +       if (rc) {
> +               dev_warn(&pdev->dev, "unable to set coherent mask to 64");
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> +               if (rc)
> +                       goto dmafree;
> +       }
> +
> +       dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
> +                               dmadev->nr_descriptors, dmadev->dev_trca,
> +                               dmadev->dev_evca, dmadev->evridx);
> +       if (!dmadev->lldev) {
> +               rc = -EPROBE_DEFER;
> +               goto dmafree;
> +       }
> +
> +       rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
> +                             "qcom-hidma", &dmadev->lldev);
> +       if (rc)
> +               goto uninit;
> +
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +       rc = hidma_chan_init(dmadev, 0);
> +       if (rc)
> +               goto uninit;
> +
> +       rc = dma_selftest_memcpy(&dmadev->ddev);
> +       if (rc)
> +               goto uninit;
> +
> +       rc = dma_async_device_register(&dmadev->ddev);
> +       if (rc)
> +               goto uninit;
> +
> +       hidma_debug_init(dmadev);
> +       dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
> +       platform_set_drvdata(pdev, dmadev);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       atomic_inc(&channel_ref_count);
> +       return 0;
> +
> +uninit:
> +       hidma_debug_uninit(dmadev);
> +       hidma_ll_uninit(dmadev->lldev);
> +dmafree:
> +       if (dmadev)
> +               hidma_free(dmadev);
> +bailout:
> +       pm_runtime_disable(&pdev->dev);
> +       pm_runtime_put_sync_suspend(&pdev->dev);

Are you sure this is appropriate sequence?

I think

pm_runtime_put();
pm_runtime_disable();

will do the job.
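
I.e. the error path would look roughly like this (untested sketch):

bailout:
	pm_runtime_put(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return rc;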

> +       return rc;
> +}
> +
> +static int hidma_remove(struct platform_device *pdev)
> +{
> +       struct hidma_dev *dmadev = platform_get_drvdata(pdev);
> +
> +       dev_dbg(&pdev->dev, "removing\n");

Useless message.

> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +
> +       dma_async_device_unregister(&dmadev->ddev);
> +       hidma_debug_uninit(dmadev);
> +       hidma_ll_uninit(dmadev->lldev);
> +       hidma_free(dmadev);
> +
> +       dev_info(&pdev->dev, "HI-DMA engine removed\n");
> +       pm_runtime_put_sync_suspend(&pdev->dev);
> +       pm_runtime_disable(&pdev->dev);
> +
> +       return 0;
> +}
> +
> +#if IS_ENABLED(CONFIG_ACPI)
> +static const struct acpi_device_id hidma_acpi_ids[] = {
> +       {"QCOM8061"},
> +       {},
> +};
> +#endif
> +
> +static const struct of_device_id hidma_match[] = {
> +       { .compatible = "qcom,hidma-1.0", },
> +       {},
> +};
> +MODULE_DEVICE_TABLE(of, hidma_match);
> +
> +static struct platform_driver hidma_driver = {
> +       .probe = hidma_probe,
> +       .remove = hidma_remove,
> +       .driver = {
> +               .name = "hidma",
> +               .of_match_table = hidma_match,
> +               .acpi_match_table = ACPI_PTR(hidma_acpi_ids),
> +       },
> +};
> +module_platform_driver(hidma_driver);
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
> new file mode 100644
> index 0000000..195d6b5
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma.h
> @@ -0,0 +1,157 @@
> +/*
> + * Qualcomm Technologies HIDMA data structures
> + *
> + * Copyright (c) 2014, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef QCOM_HIDMA_H
> +#define QCOM_HIDMA_H
> +
> +#include <linux/kfifo.h>
> +#include <linux/interrupt.h>
> +#include <linux/dmaengine.h>
> +
> +#define TRE_SIZE                       32 /* each TRE is 32 bytes  */
> +#define TRE_CFG_IDX                    0
> +#define TRE_LEN_IDX                    1
> +#define TRE_SRC_LOW_IDX                2
> +#define TRE_SRC_HI_IDX                 3
> +#define TRE_DEST_LOW_IDX               4
> +#define TRE_DEST_HI_IDX                5
> +
> +struct hidma_tx_status {
> +       u8 err_info;                    /* error record in this transfer    */
> +       u8 err_code;                    /* completion code                  */
> +};
> +
> +struct hidma_tre {
> +       atomic_t allocated;             /* if this channel is allocated     */
> +       bool queued;                    /* flag whether this is pending     */
> +       u16 status;                     /* status                           */
> +       u32 chidx;                      /* index of the tre         */
> +       u32 dma_sig;                    /* signature of the tre     */
> +       const char *dev_name;           /* name of the device               */
> +       void (*callback)(void *data);   /* requester callback               */
> +       void *data;                     /* Data associated with this channel*/
> +       struct hidma_lldev *lldev;      /* lldma device pointer             */
> +       u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
> +       u32 tre_index;                  /* the offset where this was written*/
> +       u32 int_flags;                  /* interrupt flags*/
> +};
> +
> +struct hidma_lldev {
> +       bool initialized;               /* initialized flag               */
> +       u8 trch_state;                  /* trch_state of the device       */
> +       u8 evch_state;                  /* evch_state of the device       */
> +       u8 evridx;                      /* event channel to notify        */
> +       u32 nr_tres;                    /* max number of configs          */
> +       spinlock_t lock;                /* reentrancy                     */
> +       struct hidma_tre *trepool;      /* trepool of user configs */
> +       struct device *dev;             /* device                         */
> +       void __iomem *trca;             /* Transfer Channel address       */
> +       void __iomem *evca;             /* Event Channel address          */
> +       struct hidma_tre
> +               **pending_tre_list;     /* Pointers to pending TREs       */
> +       struct hidma_tx_status
> +               *tx_status_list;        /* Pointers to pending TREs status*/
> +       s32 pending_tre_count;          /* Number of TREs pending         */
> +
> +       void *tre_ring;         /* TRE ring                       */
> +       dma_addr_t tre_ring_handle;     /* TRE ring to be shared with HW  */
> +       u32 tre_ring_size;              /* Byte size of the ring          */
> +       u32 tre_processed_off;          /* last processed TRE              */
> +
> +       void *evre_ring;                /* EVRE ring                       */
> +       dma_addr_t evre_ring_handle;    /* EVRE ring to be shared with HW  */
> +       u32 evre_ring_size;             /* Byte size of the ring          */
> +       u32 evre_processed_off; /* last processed EVRE             */
> +
> +       u32 tre_write_offset;           /* TRE write location              */
> +       struct tasklet_struct task;     /* task delivering notifications   */
> +       DECLARE_KFIFO_PTR(handoff_fifo,
> +               struct hidma_tre *);    /* pending TREs FIFO              */
> +};
> +
> +struct hidma_desc {
> +       struct dma_async_tx_descriptor  desc;
> +       /* link list node for this channel*/
> +       struct list_head                node;
> +       u32                             tre_ch;
> +};
> +
> +struct hidma_chan {
> +       bool                            paused;
> +       bool                            allocated;
> +       char                            dbg_name[16];
> +       u32                             dma_sig;
> +
> +       /*
> +        * active descriptor on this channel
> +        * It is used by the DMA complete notification to
> +        * locate the descriptor that initiated the transfer.
> +        */
> +       struct dentry                   *debugfs;
> +       struct dentry                   *stats;
> +       struct hidma_dev                *dmadev;
> +
> +       struct dma_chan                 chan;
> +       struct list_head                free;
> +       struct list_head                prepared;
> +       struct list_head                active;
> +       struct list_head                completed;
> +
> +       /* Lock for this structure */
> +       spinlock_t                      lock;
> +};
> +
> +struct hidma_dev {
> +       int                             evridx;
> +       u32                             nr_descriptors;
> +
> +       struct hidma_lldev              *lldev;
> +       void                            __iomem *dev_trca;
> +       struct resource                 *trca_resource;
> +       void                            __iomem *dev_evca;
> +       struct resource                 *evca_resource;
> +
> +       /* used to protect the pending channel list*/
> +       spinlock_t                      lock;
> +       struct dma_device               ddev;
> +
> +       struct dentry                   *debugfs;
> +       struct dentry                   *stats;
> +};
> +
> +int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
> +                       const char *dev_name,
> +                       void (*callback)(void *data), void *data, u32 *tre_ch);
> +
> +void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
> +enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
> +bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
> +int hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
> +int hidma_ll_start(struct hidma_lldev *llhndl);
> +int hidma_ll_pause(struct hidma_lldev *llhndl);
> +int hidma_ll_resume(struct hidma_lldev *llhndl);
> +void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
> +       dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
> +int hidma_ll_setup(struct hidma_lldev *lldev);
> +struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
> +                       void __iomem *trca, void __iomem *evca,
> +                       u8 evridx);
> +int hidma_ll_uninit(struct hidma_lldev *llhndl);
> +irqreturn_t hidma_ll_inthandler(int irq, void *arg);
> +void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
> +                               u8 err_code);
> +int hidma_debug_init(struct hidma_dev *dmadev);
> +void hidma_debug_uninit(struct hidma_dev *dmadev);
> +#endif
> diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
> new file mode 100644
> index 0000000..e0e6711
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_dbg.c
> @@ -0,0 +1,225 @@
> +/*
> + * Qualcomm Technologies HIDMA debug file
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/list.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "hidma.h"
> +
> +void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
> +{
> +       struct hidma_lldev *lldev = llhndl;
> +       struct hidma_tre *tre;
> +       u32 length;
> +       dma_addr_t src_start;
> +       dma_addr_t dest_start;
> +       u32 *tre_local;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev, "invalid TRE number in chstats:%d",
> +                       tre_ch);
> +               return;
> +       }
> +       tre = &lldev->trepool[tre_ch];
> +       seq_printf(s, "------Channel %d -----\n", tre_ch);
> +       seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
> +       seq_printf(s, "queued=0x%x\n", tre->queued);
> +       seq_printf(s, "err_info=0x%x\n",
> +                  lldev->tx_status_list[tre->chidx].err_info);
> +       seq_printf(s, "err_code=0x%x\n",
> +                  lldev->tx_status_list[tre->chidx].err_code);
> +       seq_printf(s, "status=0x%x\n", tre->status);
> +       seq_printf(s, "chidx=0x%x\n", tre->chidx);
> +       seq_printf(s, "dma_sig=0x%x\n", tre->dma_sig);
> +       seq_printf(s, "dev_name=%s\n", tre->dev_name);
> +       seq_printf(s, "callback=%p\n", tre->callback);
> +       seq_printf(s, "data=%p\n", tre->data);
> +       seq_printf(s, "tre_index=0x%x\n", tre->tre_index);
> +
> +       tre_local = &tre->tre_local[0];
> +       src_start = tre_local[TRE_SRC_LOW_IDX];
> +       src_start = ((u64)(tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
> +       dest_start = tre_local[TRE_DEST_LOW_IDX];
> +       dest_start += ((u64)(tre_local[TRE_DEST_HI_IDX]) << 32);
> +       length = tre_local[TRE_LEN_IDX];
> +
> +       seq_printf(s, "src=%pap\n", &src_start);
> +       seq_printf(s, "dest=%pap\n", &dest_start);
> +       seq_printf(s, "length=0x%x\n", length);
> +}
> +
> +void hidma_ll_devstats(struct seq_file *s, void *llhndl)
> +{
> +       struct hidma_lldev *lldev = llhndl;
> +
> +       seq_puts(s, "------Device -----\n");
> +       seq_printf(s, "lldev init=0x%x\n", lldev->initialized);
> +       seq_printf(s, "trch_state=0x%x\n", lldev->trch_state);
> +       seq_printf(s, "evch_state=0x%x\n", lldev->evch_state);
> +       seq_printf(s, "evridx=0x%x\n", lldev->evridx);
> +       seq_printf(s, "nr_tres=0x%x\n", lldev->nr_tres);
> +       seq_printf(s, "trca=%p\n", lldev->trca);
> +       seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
> +       seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
> +       seq_printf(s, "tre_ring_size=0x%x\n", lldev->tre_ring_size);
> +       seq_printf(s, "tre_processed_off=0x%x\n", lldev->tre_processed_off);
> +       seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
> +       seq_printf(s, "evca=%p\n", lldev->evca);
> +       seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
> +       seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
> +       seq_printf(s, "evre_ring_size=0x%x\n", lldev->evre_ring_size);
> +       seq_printf(s, "evre_processed_off=0x%x\n", lldev->evre_processed_off);
> +       seq_printf(s, "tre_write_offset=0x%x\n", lldev->tre_write_offset);
> +}
> +
> +/**
> + * hidma_chan_stats: display HIDMA channel statistics
> + *
> + * Display the statistics for the current HIDMA virtual channel device.
> + */
> +static int hidma_chan_stats(struct seq_file *s, void *unused)
> +{
> +       struct hidma_chan *mchan = s->private;
> +       struct hidma_desc *mdesc;
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       seq_printf(s, "paused=%u\n", mchan->paused);
> +       seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
> +       seq_puts(s, "prepared\n");
> +       list_for_each_entry(mdesc, &mchan->prepared, node)
> +               hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
> +
> +       seq_puts(s, "active\n");
> +               list_for_each_entry(mdesc, &mchan->active, node)
> +                       hidma_ll_chstats(s, mchan->dmadev->lldev,
> +                               mdesc->tre_ch);
> +
> +       seq_puts(s, "completed\n");
> +               list_for_each_entry(mdesc, &mchan->completed, node)
> +                       hidma_ll_chstats(s, mchan->dmadev->lldev,
> +                               mdesc->tre_ch);
> +
> +       hidma_ll_devstats(s, mchan->dmadev->lldev);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return 0;
> +}
> +
> +/**
> + * hidma_dma_info: display HIDMA device info
> + *
> + * Display the info for the current HIDMA device.
> + */
> +static int hidma_dma_info(struct seq_file *s, void *unused)
> +{
> +       struct hidma_dev *dmadev = s->private;
> +       resource_size_t sz;
> +
> +       seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
> +       seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
> +       seq_printf(s, "dev_trca_phys=%pa\n",
> +               &dmadev->trca_resource->start);
> +       sz = resource_size(dmadev->trca_resource);
> +       seq_printf(s, "dev_trca_size=%pa\n", &sz);
> +       seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
> +       seq_printf(s, "dev_evca_phys=%pa\n",
> +               &dmadev->evca_resource->start);
> +       sz = resource_size(dmadev->evca_resource);
> +       seq_printf(s, "dev_evca_size=%pa\n", &sz);
> +       return 0;
> +}
> +
> +static int hidma_chan_stats_open(struct inode *inode, struct file *file)
> +{
> +       return single_open(file, hidma_chan_stats, inode->i_private);
> +}
> +
> +static int hidma_dma_info_open(struct inode *inode, struct file *file)
> +{
> +       return single_open(file, hidma_dma_info, inode->i_private);
> +}
> +
> +static const struct file_operations hidma_chan_fops = {
> +       .open = hidma_chan_stats_open,
> +       .read = seq_read,
> +       .llseek = seq_lseek,
> +       .release = single_release,
> +};
> +
> +static const struct file_operations hidma_dma_fops = {
> +       .open = hidma_dma_info_open,
> +       .read = seq_read,
> +       .llseek = seq_lseek,
> +       .release = single_release,
> +};
> +
> +void hidma_debug_uninit(struct hidma_dev *dmadev)
> +{
> +       debugfs_remove_recursive(dmadev->debugfs);
> +       debugfs_remove_recursive(dmadev->stats);
> +}
> +
> +int hidma_debug_init(struct hidma_dev *dmadev)
> +{
> +       int rc = 0;
> +       int chidx = 0;
> +       struct list_head *position = NULL;
> +
> +       dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev),
> +                                               NULL);
> +       if (!dmadev->debugfs) {
> +               rc = -ENODEV;
> +               return rc;
> +       }
> +
> +       /* walk through the virtual channel list */
> +       list_for_each(position, &dmadev->ddev.channels) {
> +               struct hidma_chan *chan;
> +
> +               chan = list_entry(position, struct hidma_chan,
> +                               chan.device_node);
> +               sprintf(chan->dbg_name, "chan%d", chidx);
> +               chan->debugfs = debugfs_create_dir(chan->dbg_name,
> +                                               dmadev->debugfs);
> +               if (!chan->debugfs) {
> +                       rc = -ENOMEM;
> +                       goto cleanup;
> +               }
> +               chan->stats = debugfs_create_file("stats", S_IRUGO,
> +                               chan->debugfs, chan,
> +                               &hidma_chan_fops);
> +               if (!chan->stats) {
> +                       rc = -ENOMEM;
> +                       goto cleanup;
> +               }
> +               chidx++;
> +       }
> +
> +       dmadev->stats = debugfs_create_file("stats", S_IRUGO,
> +                       dmadev->debugfs, dmadev,
> +                       &hidma_dma_fops);
> +       if (!dmadev->stats) {
> +               rc = -ENOMEM;
> +               goto cleanup;
> +       }
> +
> +       return 0;
> +cleanup:
> +       hidma_debug_uninit(dmadev);
> +       return rc;
> +}
> diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
> new file mode 100644
> index 0000000..f5c0b8b
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_ll.c
> @@ -0,0 +1,944 @@
> +/*
> + * Qualcomm Technologies HIDMA DMA engine low level code
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/slab.h>
> +#include <linux/interrupt.h>
> +#include <linux/mm.h>
> +#include <linux/highmem.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/delay.h>
> +#include <linux/atomic.h>
> +#include <linux/iopoll.h>
> +#include <linux/kfifo.h>
> +
> +#include "hidma.h"
> +
> +#define EVRE_SIZE                      16 /* each EVRE is 16 bytes */
> +
> +#define TRCA_CTRLSTS_OFFSET            0x0
> +#define TRCA_RING_LOW_OFFSET           0x8
> +#define TRCA_RING_HIGH_OFFSET          0xC
> +#define TRCA_RING_LEN_OFFSET           0x10
> +#define TRCA_READ_PTR_OFFSET           0x18
> +#define TRCA_WRITE_PTR_OFFSET          0x20
> +#define TRCA_DOORBELL_OFFSET           0x400
> +
> +#define EVCA_CTRLSTS_OFFSET            0x0
> +#define EVCA_INTCTRL_OFFSET            0x4
> +#define EVCA_RING_LOW_OFFSET           0x8
> +#define EVCA_RING_HIGH_OFFSET          0xC
> +#define EVCA_RING_LEN_OFFSET           0x10
> +#define EVCA_READ_PTR_OFFSET           0x18
> +#define EVCA_WRITE_PTR_OFFSET          0x20
> +#define EVCA_DOORBELL_OFFSET           0x400
> +
> +#define EVCA_IRQ_STAT_OFFSET           0x100
> +#define EVCA_IRQ_CLR_OFFSET            0x108
> +#define EVCA_IRQ_EN_OFFSET             0x110
> +
> +#define EVRE_CFG_IDX                   0
> +#define EVRE_LEN_IDX                   1
> +#define EVRE_DEST_LOW_IDX              2
> +#define EVRE_DEST_HI_IDX               3
> +
> +#define EVRE_ERRINFO_BIT_POS           24
> +#define EVRE_CODE_BIT_POS              28
> +
> +#define EVRE_ERRINFO_MASK              0xF
> +#define EVRE_CODE_MASK                 0xF
> +
> +#define CH_CONTROL_MASK                0xFF
> +#define CH_STATE_MASK                  0xFF
> +#define CH_STATE_BIT_POS               0x8
> +
> +#define MAKE64(high, low) (((u64)(high) << 32) | (low))
> +
> +#define IRQ_EV_CH_EOB_IRQ_BIT_POS      0
> +#define IRQ_EV_CH_WR_RESP_BIT_POS      1
> +#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
> +#define IRQ_TR_CH_DATA_RD_ER_BIT_POS   10
> +#define IRQ_TR_CH_DATA_WR_ER_BIT_POS   11
> +#define IRQ_TR_CH_INVALID_TRE_BIT_POS  14
> +
> +#define        ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
> +               BIT(IRQ_EV_CH_WR_RESP_BIT_POS) | \
> +               BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) |   \
> +               BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS) |              \
> +               BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS) |              \
> +               BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
> +
> +enum ch_command {
> +       CH_DISABLE = 0,
> +       CH_ENABLE = 1,
> +       CH_SUSPEND = 2,
> +       CH_RESET = 9,
> +};
> +
> +enum ch_state {
> +       CH_DISABLED = 0,
> +       CH_ENABLED = 1,
> +       CH_RUNNING = 2,
> +       CH_SUSPENDED = 3,
> +       CH_STOPPED = 4,
> +       CH_ERROR = 5,
> +       CH_IN_RESET = 9,
> +};
> +
> +enum tre_type {
> +       TRE_MEMCPY = 3,
> +       TRE_MEMSET = 4,
> +};
> +
> +enum evre_type {
> +       EVRE_DMA_COMPLETE = 0x23,
> +       EVRE_IMM_DATA = 0x24,
> +};
> +
> +enum err_code {
> +       EVRE_STATUS_COMPLETE = 1,
> +       EVRE_STATUS_ERROR = 4,
> +};
> +
> +void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       struct hidma_tre *tre;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre = &lldev->trepool[tre_ch];
> +       if (atomic_read(&tre->allocated) != true) {
> +               dev_err(lldev->dev, "trying to free an unused TRE:%d",
> +                       tre_ch);
> +               return;
> +       }
> +
> +       atomic_set(&tre->allocated, 0);
> +       dev_dbg(lldev->dev, "free_dma: allocated:%d tre_ch:%d\n",
> +               atomic_read(&tre->allocated), tre_ch);
> +}
> +
> +int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
> +                       const char *dev_name,
> +                       void (*callback)(void *data), void *data, u32 *tre_ch)
> +{
> +       u32 i;
> +       struct hidma_tre *tre = NULL;
> +       u32 *tre_local;
> +
> +       if (!tre_ch || !lldev)
> +               return -EINVAL;
> +
> +       /* need to have at least one empty spot in the queue */
> +       for (i = 0; i < lldev->nr_tres - 1; i++) {
> +               if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
> +                       break;
> +       }
> +
> +       if (i == (lldev->nr_tres - 1))
> +               return -ENOMEM;
> +
> +       tre = &lldev->trepool[i];
> +       tre->dma_sig = dma_sig;
> +       tre->dev_name = dev_name;
> +       tre->callback = callback;
> +       tre->data = data;
> +       tre->chidx = i;
> +       tre->status = 0;
> +       tre->queued = 0;
> +       lldev->tx_status_list[i].err_code = 0;
> +       tre->lldev = lldev;
> +       tre_local = &tre->tre_local[0];
> +       tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
> +       tre_local[TRE_CFG_IDX] |= ((lldev->evridx & 0xFF) << 8);
> +       tre_local[TRE_CFG_IDX] |= BIT(16);      /* set IEOB */
> +       *tre_ch = i;
> +       if (callback)
> +               callback(data);
> +       return 0;
> +}
> +
> +/*
> + * Multiple TREs may be queued and waiting in the
> + * pending queue.
> + */
> +static void hidma_ll_tre_complete(unsigned long arg)
> +{
> +       struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
> +       struct hidma_tre *tre;
> +
> +       while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
> +               /* call the user if it has been read by the hardware*/
> +               if (tre->callback)
> +                       tre->callback(tre->data);
> +       }
> +}
> +
> +/*
> + * Called to handle the interrupt for the channel.
> + * Return the number of TRE/EVRE pairs consumed on this run.
> + * Return 0 if there was nothing to consume.
> + */
> +static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
> +{
> +       struct hidma_tre *tre;
> +       u32 evre_write_off;
> +       u32 evre_ring_size = lldev->evre_ring_size;
> +       u32 tre_ring_size = lldev->tre_ring_size;
> +       u32 num_completed = 0, tre_iterator, evre_iterator;
> +       unsigned long flags;
> +
> +       evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
> +       tre_iterator = lldev->tre_processed_off;
> +       evre_iterator = lldev->evre_processed_off;
> +
> +       if ((evre_write_off > evre_ring_size) ||
> +               ((evre_write_off % EVRE_SIZE) != 0)) {
> +               dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
> +               return 0;
> +       }
> +
> +       /*
> +        * By the time control reaches here, the number of EVREs and TREs
> +        * may not match. Only consume the ones that the hardware has
> +        * reported as complete.
> +        */
> +       while ((evre_iterator != evre_write_off)) {
> +               u32 *current_evre = lldev->evre_ring + evre_iterator;
> +               u32 cfg;
> +               u8 err_info;
> +
> +               spin_lock_irqsave(&lldev->lock, flags);
> +               tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
> +               if (!tre) {
> +                       spin_unlock_irqrestore(&lldev->lock, flags);
> +                       dev_warn(lldev->dev,
> +                               "tre_index [%d] and tre out of sync\n",
> +                               tre_iterator / TRE_SIZE);
> +                       tre_iterator += TRE_SIZE;
> +                       if (tre_iterator >= tre_ring_size)
> +                               tre_iterator -= tre_ring_size;
> +                       evre_iterator += EVRE_SIZE;
> +                       if (evre_iterator >= evre_ring_size)
> +                               evre_iterator -= evre_ring_size;
> +
> +                       continue;
> +               }
> +               lldev->pending_tre_list[tre->tre_index] = NULL;
> +
> +               /*
> +                * Keep track of pending TREs that SW is expecting to receive
> +                * from HW. We got one now. Decrement our counter.
> +                */
> +               lldev->pending_tre_count--;
> +               if (lldev->pending_tre_count < 0) {
> +                       dev_warn(lldev->dev,
> +                               "tre count mismatch on completion");
> +                       lldev->pending_tre_count = 0;
> +               }
> +
> +               spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +               cfg = current_evre[EVRE_CFG_IDX];
> +               err_info = (cfg >> EVRE_ERRINFO_BIT_POS);
> +               err_info = err_info & EVRE_ERRINFO_MASK;
> +               lldev->tx_status_list[tre->chidx].err_info = err_info;
> +               lldev->tx_status_list[tre->chidx].err_code =
> +                       (cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
> +               tre->queued = 0;
> +
> +               kfifo_put(&lldev->handoff_fifo, tre);
> +               tasklet_schedule(&lldev->task);
> +
> +               tre_iterator += TRE_SIZE;
> +               if (tre_iterator >= tre_ring_size)
> +                       tre_iterator -= tre_ring_size;
> +               evre_iterator += EVRE_SIZE;
> +               if (evre_iterator >= evre_ring_size)
> +                       evre_iterator -= evre_ring_size;
> +
> +               /*
> +                * Re-read the event ring write offset; the HW may have
> +                * queued more EVREs while we were processing the ones
> +                * already delivered.
> +                */
> +               evre_write_off =
> +                       readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
> +               num_completed++;
> +       }
> +
> +       if (num_completed) {
> +               u32 evre_read_off = (lldev->evre_processed_off +
> +                               EVRE_SIZE * num_completed);
> +               u32 tre_read_off = (lldev->tre_processed_off +
> +                               TRE_SIZE * num_completed);
> +
> +               evre_read_off = evre_read_off % evre_ring_size;
> +               tre_read_off = tre_read_off % tre_ring_size;
> +
> +               writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
> +
> +               /* record the last processed tre offset */
> +               lldev->tre_processed_off = tre_read_off;
> +               lldev->evre_processed_off = evre_read_off;
> +       }
> +
> +       return num_completed;
> +}
> +
> +void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
> +                               u8 err_code)
> +{
> +       u32 tre_iterator;
> +       struct hidma_tre *tre;
> +       u32 tre_ring_size = lldev->tre_ring_size;
> +       int num_completed = 0;
> +       u32 tre_read_off;
> +       unsigned long flags;
> +
> +       tre_iterator = lldev->tre_processed_off;
> +       while (lldev->pending_tre_count) {
> +               int tre_index = tre_iterator / TRE_SIZE;
> +
> +               spin_lock_irqsave(&lldev->lock, flags);
> +               tre = lldev->pending_tre_list[tre_index];
> +               if (!tre) {
> +                       spin_unlock_irqrestore(&lldev->lock, flags);
> +                       tre_iterator += TRE_SIZE;
> +                       if (tre_iterator >= tre_ring_size)
> +                               tre_iterator -= tre_ring_size;
> +                       continue;
> +               }
> +               lldev->pending_tre_list[tre_index] = NULL;
> +               lldev->pending_tre_count--;
> +               if (lldev->pending_tre_count < 0) {
> +                       dev_warn(lldev->dev,
> +                               "tre count mismatch on completion");
> +                       lldev->pending_tre_count = 0;
> +               }
> +               spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +               lldev->tx_status_list[tre->chidx].err_info = err_info;
> +               lldev->tx_status_list[tre->chidx].err_code = err_code;
> +               tre->queued = 0;
> +
> +               kfifo_put(&lldev->handoff_fifo, tre);
> +               tasklet_schedule(&lldev->task);
> +
> +               tre_iterator += TRE_SIZE;
> +               if (tre_iterator >= tre_ring_size)
> +                       tre_iterator -= tre_ring_size;
> +
> +               num_completed++;
> +       }
> +       tre_read_off = (lldev->tre_processed_off +
> +                       TRE_SIZE * num_completed);
> +
> +       tre_read_off = tre_read_off % tre_ring_size;
> +
> +       /* record the last processed tre offset */
> +       lldev->tre_processed_off = tre_read_off;
> +}
> +
> +static int hidma_ll_reset(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_RESET << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Poll the channel state every 1 ms, for up to 10 ms, to give
> +        * the DMA logic time to quiesce after the reset.
> +        */
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "transfer channel did not reset\n");
> +               return ret;
> +       }
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_RESET << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Poll the channel state every 1 ms, for up to 10 ms, to give
> +        * the DMA logic time to quiesce after the reset.
> +        */
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       lldev->trch_state = CH_DISABLED;
> +       lldev->evch_state = CH_DISABLED;
> +       return 0;
> +}
> +
> +static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
> +{
> +       writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
> +       dev_dbg(lldev->dev, "enableirq\n");
> +}
> +
> +/*
> + * The interrupt handler for HIDMA will try to consume as many pending
> + * EVRE from the event queue as possible. Each EVRE has an associated
> + * TRE that holds the user interface parameters. EVRE reports the
> + * result of the transaction. Hardware guarantees ordering between EVREs
> + * and TREs. We use last processed offset to figure out which TRE is
> + * associated with which EVRE. If two TREs are consumed by HW, the EVREs
> + * are in order in the event ring.
> + *
> + * This handler will do a one pass for consuming EVREs. Other EVREs may
> + * be delivered while we are working. It will try to consume incoming
> + * EVREs one more time and return.
> + *
> + * For unprocessed EVREs, hardware will trigger another interrupt until
> + * all the interrupt bits are cleared.
> + *
> + * Hardware guarantees that by the time interrupt is observed, all data
> + * transactions in flight are delivered to their respective places and
> + * are visible to the CPU.
> + *
> + * On demand paging for IOMMU is only supported for PCIe via PRI
> + * (Page Request Interface) not for HIDMA. All other hardware instances
> + * including HIDMA work on pinned DMA addresses.
> + *
> + * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
> + * IOMMU latency will be built into the data movement time. By the time
> + * interrupt happens, IOMMU lookups + data movement has already taken place.
> + *
> + * While the first read in a typical PCI endpoint ISR flushes all outstanding
> + * requests traditionally to the destination, this concept does not apply
> + * here for this HW.
> + */
> +static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
> +{
> +       u32 status;
> +       u32 enable;
> +       u32 cause;
> +       int repeat = 2;
> +       unsigned long timeout;
> +
> +       /*
> +        * Fine tuned for this HW...
> +        *
> +        * This ISR has been designed for this particular hardware. Relaxed read
> +        * and write accessors are used for performance reasons due to interrupt
> +        * delivery guarantees. Do not copy this code blindly and expect
> +        * that to work.
> +        */
> +       status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
> +       cause = status & enable;
> +
> +       if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
> +                       (cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
> +                       (cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
> +                       (cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
> +                       (cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
> +               u8 err_code = EVRE_STATUS_ERROR;
> +               u8 err_info = 0xFF;
> +
> +               /* Clear out pending interrupts */
> +               writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +               dev_err(lldev->dev,
> +                       "error 0x%x, resetting...\n", cause);
> +
> +               hidma_cleanup_pending_tre(lldev, err_info, err_code);
> +
> +               /* reset the channel for recovery */
> +               if (hidma_ll_setup(lldev)) {
> +                       dev_err(lldev->dev,
> +                               "channel reinitialize failed after error\n");
> +                       return;
> +               }
> +               hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +               return;
> +       }
> +
> +       /*
> +        * Try to consume as many EVREs as possible.
> +        * skip this loop if the interrupt is spurious.
> +        */
> +       while (cause && repeat) {
> +               unsigned long start = jiffies;
> +
> +               /* This timeout should be sufficient for the core to finish */
> +               timeout = start + msecs_to_jiffies(500);
> +
> +               while (lldev->pending_tre_count) {
> +                       hidma_handle_tre_completion(lldev);
> +                       if (time_is_before_jiffies(timeout)) {
> +                               dev_warn(lldev->dev,
> +                                       "ISR timeout %lx-%lx from %lx [%d]\n",
> +                                       jiffies, timeout, start,
> +                                       lldev->pending_tre_count);
> +                               break;
> +                       }
> +               }
> +
> +               /* We consumed TREs or there are pending TREs or EVREs. */
> +               writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +               /*
> +                * Another interrupt might have arrived while we are
> +                * processing this one. Read the new cause.
> +                */
> +               status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +               enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
> +               cause = status & enable;
> +
> +               repeat--;
> +       }
> +}
> +
> +
> +static int hidma_ll_enable(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val &= ~(CH_CONTROL_MASK << 16);
> +       val |= (CH_ENABLE << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "event channel did not get enabled\n");
> +               return ret;
> +       }
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_ENABLE << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "transfer channel did not get enabled\n");
> +               return ret;
> +       }
> +
> +       lldev->trch_state = CH_ENABLED;
> +       lldev->evch_state = CH_ENABLED;
> +
> +       return 0;
> +}
> +
> +int hidma_ll_resume(struct hidma_lldev *lldev)
> +{
> +       return hidma_ll_enable(lldev);
> +}
> +
> +static int hidma_ll_hw_start(struct hidma_lldev *lldev)
> +{
> +       int rc = 0;
> +       unsigned long irqflags;
> +
> +       spin_lock_irqsave(&lldev->lock, irqflags);
> +       writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
> +       spin_unlock_irqrestore(&lldev->lock, irqflags);
> +
> +       return rc;
> +}
> +
> +bool hidma_ll_isenabled(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +
> +       /* both channels have to be enabled before calling this function*/
> +       if (((lldev->trch_state == CH_ENABLED) ||
> +               (lldev->trch_state == CH_RUNNING)) &&
> +               ((lldev->evch_state == CH_ENABLED) ||
> +                       (lldev->evch_state == CH_RUNNING)))
> +               return true;
> +
> +       dev_dbg(lldev->dev, "channels are not enabled or are in error state");
> +       return false;
> +}
> +
> +int hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       struct hidma_tre *tre;
> +       int rc = 0;
> +       unsigned long flags;
> +
> +       tre = &lldev->trepool[tre_ch];
> +
> +       /* copy the TRE into its location in the TRE ring */
> +       spin_lock_irqsave(&lldev->lock, flags);
> +       tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
> +       lldev->pending_tre_list[tre->tre_index] = tre;
> +       memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
> +               TRE_SIZE);
> +       lldev->tx_status_list[tre->chidx].err_code = 0;
> +       lldev->tx_status_list[tre->chidx].err_info = 0;
> +       tre->queued = 1;
> +       lldev->pending_tre_count++;
> +       lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
> +                               % lldev->tre_ring_size;
> +       spin_unlock_irqrestore(&lldev->lock, flags);
> +       return rc;
> +}
> +
> +int hidma_ll_start(struct hidma_lldev *lldev)
> +{
> +       return hidma_ll_hw_start(lldev);
> +}
> +
> +/*
> + * Note that even though we stop this channel, any transaction
> + * already in flight will still complete and invoke its callback.
> + * Pausing only prevents further requests from being issued.
> + */
> +int hidma_ll_pause(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +
> +       /* already suspended by this OS */
> +       if ((lldev->trch_state == CH_SUSPENDED) ||
> +               (lldev->evch_state == CH_SUSPENDED))
> +               return 0;
> +
> +       /* already stopped by the manager */
> +       if ((lldev->trch_state == CH_STOPPED) ||
> +               (lldev->evch_state == CH_STOPPED))
> +               return 0;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_SUSPEND << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Poll every 1 ms, for up to 10 ms, for the transfer channel
> +        * to confirm the suspend.
> +        */
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_SUSPEND << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Poll every 1 ms, for up to 10 ms, for the event channel
> +        * to confirm the suspend.
> +        */
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       lldev->trch_state = CH_SUSPENDED;
> +       lldev->evch_state = CH_SUSPENDED;
> +       dev_dbg(lldev->dev, "stop\n");
> +
> +       return 0;
> +}
> +
> +void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
> +       dma_addr_t src, dma_addr_t dest, u32 len, u32 flags)
> +{
> +       struct hidma_tre *tre;
> +       u32 *tre_local;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev,
> +                       "invalid TRE number in transfer params:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre = &lldev->trepool[tre_ch];
> +       if (atomic_read(&tre->allocated) != true) {
> +               dev_err(lldev->dev,
> +                       "trying to set params on an unused TRE:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre_local = &tre->tre_local[0];
> +       tre_local[TRE_LEN_IDX] = len;
> +       tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
> +       tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
> +       tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
> +       tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
> +       tre->int_flags = flags;
> +
> +       dev_dbg(lldev->dev, "transferparams: tre_ch:%d %pap->%pap len:%u\n",
> +               tre_ch, &src, &dest, len);
> +}
> +
> +/*
> + * Called during initialization and after an error condition
> + * to restore hardware state.
> + */
> +int hidma_ll_setup(struct hidma_lldev *lldev)
> +{
> +       int rc;
> +       u64 addr;
> +       u32 val;
> +       u32 nr_tres = lldev->nr_tres;
> +
> +       lldev->pending_tre_count = 0;
> +       lldev->tre_processed_off = 0;
> +       lldev->evre_processed_off = 0;
> +       lldev->tre_write_offset = 0;
> +
> +       /* disable interrupts */
> +       hidma_ll_enable_irq(lldev, 0);
> +
> +       /* clear all pending interrupts */
> +       val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +       rc = hidma_ll_reset(lldev);
> +       if (rc)
> +               return rc;
> +
> +       /*
> +        * Clear all pending interrupts again.
> +        * Otherwise, we observe reset complete interrupts.
> +        */
> +       val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +       /* disable interrupts again after reset */
> +       hidma_ll_enable_irq(lldev, 0);
> +
> +       addr = lldev->tre_ring_handle;
> +       writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
> +       writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
> +       writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
> +
> +       addr = lldev->evre_ring_handle;
> +       writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
> +       writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
> +       writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
> +
> +       /* support IRQ only for now */
> +       val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
> +       val = val & ~(0xF);
> +       val = val | 0x1;
> +       writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
> +
> +       /* clear all pending interrupts and enable them*/
> +       writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +       hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +
> +       rc = hidma_ll_enable(lldev);
> +       if (rc)
> +               return rc;
> +
> +       return rc;
> +}
> +
> +struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
> +                       void __iomem *trca, void __iomem *evca,
> +                       u8 evridx)
> +{
> +       u32 required_bytes;
> +       struct hidma_lldev *lldev;
> +       int rc;
> +
> +       if (!trca || !evca || !dev || !nr_tres)
> +               return NULL;
> +
> +       /* need at least four TREs */
> +       if (nr_tres < 4)
> +               return NULL;
> +
> +       /* need an extra space */
> +       nr_tres += 1;
> +
> +       lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
> +       if (!lldev)
> +               return NULL;
> +
> +       lldev->evca = evca;
> +       lldev->trca = trca;
> +       lldev->dev = dev;
> +       required_bytes = sizeof(struct hidma_tre) * nr_tres;
> +       lldev->trepool = devm_kzalloc(lldev->dev, required_bytes, GFP_KERNEL);
> +       if (!lldev->trepool)
> +               return NULL;
> +
> +       required_bytes = sizeof(lldev->pending_tre_list[0]) * nr_tres;
> +       lldev->pending_tre_list = devm_kzalloc(dev, required_bytes,
> +                                       GFP_KERNEL);
> +       if (!lldev->pending_tre_list)
> +               return NULL;
> +
> +       required_bytes = sizeof(lldev->tx_status_list[0]) * nr_tres;
> +       lldev->tx_status_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
> +       if (!lldev->tx_status_list)
> +               return NULL;
> +
> +       lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
> +                                       &lldev->tre_ring_handle, GFP_KERNEL);
> +       if (!lldev->tre_ring)
> +               return NULL;
> +
> +       memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
> +       lldev->tre_ring_size = TRE_SIZE * nr_tres;
> +       lldev->nr_tres = nr_tres;
> +
> +       /* the TRE ring has to be TRE_SIZE aligned */
> +       if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
> +               u8  tre_ring_shift;
> +
> +               tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
> +               tre_ring_shift = TRE_SIZE - tre_ring_shift;
> +               lldev->tre_ring_handle += tre_ring_shift;
> +               lldev->tre_ring += tre_ring_shift;
> +       }
> +
> +       lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
> +                                       &lldev->evre_ring_handle, GFP_KERNEL);
> +       if (!lldev->evre_ring)
> +               return NULL;
> +
> +       memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
> +       lldev->evre_ring_size = EVRE_SIZE * nr_tres;
> +
> +       /* the EVRE ring has to be EVRE_SIZE aligned */
> +       if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
> +               u8  evre_ring_shift;
> +
> +               evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
> +               evre_ring_shift = EVRE_SIZE - evre_ring_shift;
> +               lldev->evre_ring_handle += evre_ring_shift;
> +               lldev->evre_ring += evre_ring_shift;
> +       }
> +       lldev->nr_tres = nr_tres;
> +       lldev->evridx = evridx;
> +
> +       rc = kfifo_alloc(&lldev->handoff_fifo,
> +               nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
> +       if (rc)
> +               return NULL;
> +
> +       rc = hidma_ll_setup(lldev);
> +       if (rc)
> +               return NULL;
> +
> +       spin_lock_init(&lldev->lock);
> +       tasklet_init(&lldev->task, hidma_ll_tre_complete,
> +                       (unsigned long)lldev);
> +       lldev->initialized = 1;
> +       hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +       return lldev;
> +}
> +
> +int hidma_ll_uninit(struct hidma_lldev *lldev)
> +{
> +       int rc = 0;
> +       u32 val;
> +
> +       if (!lldev)
> +               return -ENODEV;
> +
> +       if (lldev->initialized) {
> +               u32 required_bytes;
> +
> +               lldev->initialized = 0;
> +
> +               required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
> +               tasklet_kill(&lldev->task);
> +               memset(lldev->trepool, 0, required_bytes);
> +               lldev->trepool = NULL;
> +               lldev->pending_tre_count = 0;
> +               lldev->tre_write_offset = 0;
> +
> +               rc = hidma_ll_reset(lldev);
> +
> +               /*
> +                * Clear all pending interrupts again.
> +                * Otherwise, we observe reset complete interrupts.
> +                */
> +               val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +               writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +               hidma_ll_enable_irq(lldev, 0);
> +       }
> +       return rc;
> +}
> +
> +irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
> +{
> +       struct hidma_lldev *lldev = arg;
> +
> +       hidma_ll_int_handler_internal(lldev);
> +       return IRQ_HANDLED;
> +}
> +
> +enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       enum dma_status ret = DMA_ERROR;
> +       unsigned long flags;
> +       u8 err_code;
> +
> +       spin_lock_irqsave(&lldev->lock, flags);
> +       err_code = lldev->tx_status_list[tre_ch].err_code;
> +
> +       if (err_code & EVRE_STATUS_COMPLETE)
> +               ret = DMA_COMPLETE;
> +       else if (err_code & EVRE_STATUS_ERROR)
> +               ret = DMA_ERROR;
> +       else
> +               ret = DMA_IN_PROGRESS;
> +       spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +       return ret;
> +}
> --
> Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>



-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
@ 2015-11-08 20:47       ` Andy Shevchenko
  0 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-08 20:47 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Rob Herring, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams, devicetree,
	linux-kernel

On Sun, Nov 8, 2015 at 6:53 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
> This patch adds support for the hidma engine. The driver
> consists of two logical blocks: the DMA engine interface
> and the low-level interface. The hardware only supports
> memcpy/memset, and this driver only supports the memcpy
> interface. Neither the HW nor the driver supports the
> slave interface.

Make lines a bit longer.

> +/*
> + * Qualcomm Technologies HIDMA DMA engine interface
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +/*
> + * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008.
> + * Copyright (C) Semihalf 2009
> + * Copyright (C) Ilya Yanok, Emcraft Systems 2010
> + * Copyright (C) Alexander Popov, Promcontroller 2014
> + *
> + * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
> + * (defines, structures and comments) was taken from MPC5121 DMA driver
> + * written by Hongjun Chen <hong-jun.chen@freescale.com>.
> + *
> + * Approved as OSADL project by a majority of OSADL members and funded
> + * by OSADL membership fees in 2009;  for details see www.osadl.org.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License as published by the Free
> + * Software Foundation; either version 2 of the License, or (at your option)
> + * any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * The full GNU General Public License is included in this distribution in the
> + * file called COPYING.
> + */
> +
> +/* Linux Foundation elects GPLv2 license only. */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/of_dma.h>
> +#include <linux/property.h>
> +#include <linux/delay.h>
> +#include <linux/highmem.h>
> +#include <linux/io.h>
> +#include <linux/sched.h>
> +#include <linux/wait.h>
> +#include <linux/acpi.h>
> +#include <linux/irq.h>
> +#include <linux/atomic.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "../dmaengine.h"
> +#include "hidma.h"
> +
> +/*
> + * Default idle time is 2 seconds. This parameter can
> + * be overridden at runtime by writing to
> + * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
> + */
> +#define AUTOSUSPEND_TIMEOUT            2000
> +#define ERR_INFO_SW                    0xFF
> +#define ERR_CODE_UNEXPECTED_TERMINATE  0x0
> +
> +static inline
> +struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
> +{
> +       return container_of(dmadev, struct hidma_dev, ddev);
> +}
> +
> +static inline
> +struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
> +{
> +       return container_of(_lldevp, struct hidma_dev, lldev);
> +}
> +
> +static inline
> +struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
> +{
> +       return container_of(dmach, struct hidma_chan, chan);
> +}
> +
> +static inline struct hidma_desc *
> +to_hidma_desc(struct dma_async_tx_descriptor *t)
> +{
> +       return container_of(t, struct hidma_desc, desc);
> +}
> +
> +static void hidma_free(struct hidma_dev *dmadev)
> +{
> +       dev_dbg(dmadev->ddev.dev, "free dmadev\n");
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +}
> +
> +static unsigned int nr_desc_prm;
> +module_param(nr_desc_prm, uint, 0644);
> +MODULE_PARM_DESC(nr_desc_prm,
> +                "number of descriptors (default: 0)");
> +
> +#define MAX_HIDMA_CHANNELS     64
> +static int event_channel_idx[MAX_HIDMA_CHANNELS] = {
> +       [0 ... (MAX_HIDMA_CHANNELS - 1)] = -1};
> +static unsigned int num_event_channel_idx;
> +module_param_array_named(event_channel_idx, event_channel_idx, int,
> +                       &num_event_channel_idx, 0644);
> +MODULE_PARM_DESC(event_channel_idx,
> +               "event channel index array for the notifications");
> +static atomic_t channel_ref_count;
> +
> +/* process completed descriptors */
> +static void hidma_process_completed(struct hidma_dev *mdma)
> +{
> +       dma_cookie_t last_cookie = 0;
> +       struct hidma_chan *mchan;
> +       struct hidma_desc *mdesc;
> +       struct dma_async_tx_descriptor *desc;
> +       unsigned long irqflags;
> +       struct list_head list;
> +       struct dma_chan *dmach = NULL;

Redundant assignment.
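
list_for_each_entry() assigns dmach on every iteration before the body
runs, so a plain declaration should be enough here -- untested sketch:

        struct dma_chan *dmach;

        list_for_each_entry(dmach, &mdma->ddev.channels, device_node) {
                ...
        }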

> +
> +       list_for_each_entry(dmach, &mdma->ddev.channels,
> +                       device_node) {
> +               mchan = to_hidma_chan(dmach);
> +               INIT_LIST_HEAD(&list);
> +
> +               /* Get all completed descriptors */
> +               spin_lock_irqsave(&mchan->lock, irqflags);
> +               list_splice_tail_init(&mchan->completed, &list);
> +               spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +               /* Execute callbacks and run dependencies */
> +               list_for_each_entry(mdesc, &list, node) {
> +                       desc = &mdesc->desc;
> +
> +                       spin_lock_irqsave(&mchan->lock, irqflags);
> +                       dma_cookie_complete(desc);
> +                       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +                       if (desc->callback &&
> +                               (hidma_ll_status(mdma->lldev, mdesc->tre_ch)
> +                               == DMA_COMPLETE))
> +                               desc->callback(desc->callback_param);
> +
> +                       last_cookie = desc->cookie;
> +                       dma_run_dependencies(desc);
> +               }
> +
> +               /* Free descriptors */
> +               spin_lock_irqsave(&mchan->lock, irqflags);
> +               list_splice_tail_init(&list, &mchan->free);
> +               spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       }
> +}
> +
> +/*
> + * Called once for each submitted descriptor.
> + * PM is locked once for each descriptor that is currently
> + * in execution.
> + */
> +static void hidma_callback(void *data)
> +{
> +       struct hidma_desc *mdesc = data;
> +       struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
> +       unsigned long irqflags;
> +       struct dma_device *ddev = mchan->chan.device;
> +       struct hidma_dev *dmadev = to_hidma_dev(ddev);
> +       bool queued = false;
> +
> +       dev_dbg(dmadev->ddev.dev, "callback: data:0x%p\n", data);
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +
> +       if (mdesc->node.next) {
> +               /* Delete from the active list, add to completed list */
> +               list_move_tail(&mdesc->node, &mchan->completed);
> +               queued = true;
> +       }
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       hidma_process_completed(dmadev);
> +
> +       if (queued) {
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +}
> +
> +static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
> +{
> +       struct hidma_chan *mchan;
> +       struct dma_device *ddev;
> +
> +       mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
> +       if (!mchan)
> +               return -ENOMEM;
> +
> +       ddev = &dmadev->ddev;
> +       mchan->dma_sig = dma_sig;
> +       mchan->dmadev = dmadev;
> +       mchan->chan.device = ddev;
> +       dma_cookie_init(&mchan->chan);
> +
> +       INIT_LIST_HEAD(&mchan->free);
> +       INIT_LIST_HEAD(&mchan->prepared);
> +       INIT_LIST_HEAD(&mchan->active);
> +       INIT_LIST_HEAD(&mchan->completed);
> +
> +       spin_lock_init(&mchan->lock);
> +       list_add_tail(&mchan->chan.device_node, &ddev->channels);
> +       dmadev->ddev.chancnt++;
> +       return 0;
> +}
> +
> +static void hidma_issue_pending(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +
> +       /* PM will be released in hidma_callback function. */
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       hidma_ll_start(dmadev->lldev);
> +}
> +
> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
> +                                       dma_cookie_t cookie,
> +                                       struct dma_tx_state *txstate)
> +{
> +       enum dma_status ret;
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +
> +       if (mchan->paused)
> +               ret = DMA_PAUSED;
> +       else
> +               ret = dma_cookie_status(dmach, cookie, txstate);
> +
> +       return ret;
> +}
> +
> +/*
> + * Submit descriptor to hardware.
> + * Lock the PM for each descriptor we are sending.
> + */
> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(txd->chan);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +       struct hidma_desc *mdesc;
> +       unsigned long irqflags;
> +       dma_cookie_t cookie;
> +
> +       if (!hidma_ll_isenabled(dmadev->lldev))
> +               return -ENODEV;
> +
> +       mdesc = container_of(txd, struct hidma_desc, desc);
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +
> +       /* Move descriptor to active */
> +       list_move_tail(&mdesc->node, &mchan->active);
> +
> +       /* Update cookie */
> +       cookie = dma_cookie_assign(txd);
> +
> +       hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       return cookie;
> +}
> +
> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +       int rc = 0;
> +       struct hidma_desc *mdesc, *tmp;
> +       unsigned long irqflags;
> +       LIST_HEAD(descs);
> +       u32 i;
> +
> +       if (mchan->allocated)
> +               return 0;
> +
> +       /* Alloc descriptors for this channel */
> +       for (i = 0; i < dmadev->nr_descriptors; i++) {
> +               mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
> +               if (!mdesc) {
> +                       rc = -ENOMEM;
> +                       break;
> +               }
> +               dma_async_tx_descriptor_init(&mdesc->desc, dmach);
> +               mdesc->desc.flags = DMA_CTRL_ACK;
> +               mdesc->desc.tx_submit = hidma_tx_submit;
> +
> +               rc = hidma_ll_request(dmadev->lldev,
> +                               mchan->dma_sig, "DMA engine", hidma_callback,
> +                               mdesc, &mdesc->tre_ch);
> +               if (rc) {
> +                       dev_err(dmach->device->dev,
> +                               "channel alloc failed at %u\n", i);
> +                       kfree(mdesc);
> +                       break;
> +               }
> +               list_add_tail(&mdesc->node, &descs);
> +       }
> +
> +       if (rc) {
> +               /* return the allocated descriptors */
> +               list_for_each_entry_safe(mdesc, tmp, &descs, node) {
> +                       hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
> +                       kfree(mdesc);
> +               }
> +               return rc;
> +       }
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_splice_tail_init(&descs, &mchan->free);
> +       mchan->allocated = true;
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       dev_dbg(dmadev->ddev.dev,
> +               "allocated channel for %u\n", mchan->dma_sig);
> +       return 1;
> +}
> +
> +static void hidma_free_chan_resources(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *mdma = mchan->dmadev;
> +       struct hidma_desc *mdesc, *tmp;
> +       unsigned long irqflags;
> +       LIST_HEAD(descs);
> +
> +       if (!list_empty(&mchan->prepared) ||
> +               !list_empty(&mchan->active) ||
> +               !list_empty(&mchan->completed)) {
> +               /*
> +                * We have unfinished requests waiting.
> +                * Terminate the request from the hardware.
> +                */
> +               hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
> +                               ERR_CODE_UNEXPECTED_TERMINATE);
> +
> +               /* Give enough time for completions to be called. */
> +               msleep(100);
> +       }
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       /* Channel must be idle */
> +       WARN_ON(!list_empty(&mchan->prepared));
> +       WARN_ON(!list_empty(&mchan->active));
> +       WARN_ON(!list_empty(&mchan->completed));
> +
> +       /* Move data */
> +       list_splice_tail_init(&mchan->free, &descs);
> +
> +       /* Free descriptors */
> +       list_for_each_entry_safe(mdesc, tmp, &descs, node) {
> +               hidma_ll_free(mdma->lldev, mdesc->tre_ch);
> +               list_del(&mdesc->node);
> +               kfree(mdesc);
> +       }
> +
> +       mchan->allocated = 0;
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
> +}
> +
> +
> +static struct dma_async_tx_descriptor *
> +hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
> +                       dma_addr_t dma_src, size_t len, unsigned long flags)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_desc *mdesc = NULL;
> +       struct hidma_dev *mdma = mchan->dmadev;
> +       unsigned long irqflags;
> +
> +       dev_dbg(mdma->ddev.dev,
> +               "memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
> +               &dma_dest, &dma_src, len);
> +
> +       /* Get free descriptor */
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       if (!list_empty(&mchan->free)) {
> +               mdesc = list_first_entry(&mchan->free, struct hidma_desc,
> +                                       node);
> +               list_del(&mdesc->node);
> +       }
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       if (!mdesc)
> +               return NULL;
> +
> +       hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
> +                       dma_src, dma_dest, len, flags);
> +
> +       /* Place descriptor in prepared list */
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_add_tail(&mdesc->node, &mchan->prepared);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       return &mdesc->desc;
> +}
> +
> +static int hidma_terminate_all(struct dma_chan *chan)
> +{
> +       struct hidma_dev *dmadev;
> +       LIST_HEAD(head);
> +       unsigned long irqflags;
> +       LIST_HEAD(list);
> +       struct hidma_desc *tmp, *mdesc = NULL;
> +       int rc;
> +       struct hidma_chan *mchan;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "terminateall: chan:0x%p\n", mchan);
> +
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       /* give completed requests a chance to finish */
> +       hidma_process_completed(dmadev);
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_splice_init(&mchan->active, &list);
> +       list_splice_init(&mchan->prepared, &list);
> +       list_splice_init(&mchan->completed, &list);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       /* this suspends the existing transfer */
> +       rc = hidma_ll_pause(dmadev->lldev);
> +       if (rc) {
> +               dev_err(dmadev->ddev.dev, "channel did not pause\n");
> +               goto out;
> +       }
> +
> +       /* return all user requests */
> +       list_for_each_entry_safe(mdesc, tmp, &list, node) {
> +               struct dma_async_tx_descriptor  *txd = &mdesc->desc;
> +               dma_async_tx_callback callback = mdesc->desc.callback;
> +               void *param = mdesc->desc.callback_param;
> +               enum dma_status status;
> +
> +               dma_descriptor_unmap(txd);
> +
> +               status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
> +               /*
> +                * The API requires that no submissions are done from a
> +                * callback, so we don't need to drop the lock here
> +                */
> +               if (callback && (status == DMA_COMPLETE))
> +                       callback(param);
> +
> +               dma_run_dependencies(txd);
> +
> +               /* move myself to free_list */
> +               list_move(&mdesc->node, &mchan->free);
> +       }
> +
> +       /* reinitialize the hardware */
> +       rc = hidma_ll_setup(dmadev->lldev);
> +
> +out:
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return rc;
> +}
> +
> +static int hidma_pause(struct dma_chan *chan)
> +{
> +       struct hidma_chan *mchan;
> +       struct hidma_dev *dmadev;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "pause: chan:0x%p\n", mchan);
> +
> +       if (!mchan->paused) {
> +               pm_runtime_get_sync(dmadev->ddev.dev);
> +               if (hidma_ll_pause(dmadev->lldev))
> +                       dev_warn(dmadev->ddev.dev, "channel did not stop\n");
> +               mchan->paused = true;
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +       return 0;
> +}
> +
> +static int hidma_resume(struct dma_chan *chan)
> +{
> +       struct hidma_chan *mchan;
> +       struct hidma_dev *dmadev;
> +       int rc = 0;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "resume: chan:0x%p\n", mchan);
> +
> +       if (mchan->paused) {
> +               pm_runtime_get_sync(dmadev->ddev.dev);
> +               rc = hidma_ll_resume(dmadev->lldev);
> +               if (!rc)
> +                       mchan->paused = false;
> +               else
> +                       dev_err(dmadev->ddev.dev,
> +                                       "failed to resume the channel");
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +       return rc;
> +}
> +
> +static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
> +{
> +       struct hidma_lldev **lldev_ptr = arg;
> +       irqreturn_t ret;
> +       struct hidma_dev *dmadev = to_hidma_dev_from_lldev(lldev_ptr);
> +
> +       /*
> +        * All interrupts are request driven.
> +        * HW doesn't send an interrupt by itself.
> +        */
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       ret = hidma_ll_inthandler(chirq, *lldev_ptr);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return ret;
> +}
> +
> +static int hidma_probe(struct platform_device *pdev)
> +{
> +       struct hidma_dev *dmadev;
> +       int rc = 0;
> +       struct resource *trca_resource;
> +       struct resource *evca_resource;
> +       int chirq;
> +       int current_channel_index = atomic_read(&channel_ref_count);
> +       void *evca;
> +       void *trca;
> +
> +       pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> +       pm_runtime_use_autosuspend(&pdev->dev);
> +       pm_runtime_set_active(&pdev->dev);
> +       pm_runtime_enable(&pdev->dev);
> +
> +       trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);

> +       if (!trca_resource) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }

Why did you ignore my comment about this block?
Remove that condition entirely.
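
devm_ioremap_resource() already validates its resource argument (a NULL
or non-MEM resource comes back as an error pointer), so the two calls can
be collapsed. Untested sketch, with the error code taken from the helper
rather than hardcoding -ENOMEM:

        trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        trca = devm_ioremap_resource(&pdev->dev, trca_resource);
        if (IS_ERR(trca)) {
                rc = PTR_ERR(trca);
                goto bailout;
        }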

> +
> +       trca = devm_ioremap_resource(&pdev->dev, trca_resource);
> +       if (IS_ERR(trca)) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> +       if (!evca_resource) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }

Ditto.

> +
> +       evca = devm_ioremap_resource(&pdev->dev, evca_resource);
> +       if (IS_ERR(evca)) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       /*
> +        * This driver only handles the channel IRQs.
> +        * Common IRQ is handled by the management driver.
> +        */
> +       chirq = platform_get_irq(pdev, 0);
> +       if (chirq < 0) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }
> +
> +       dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
> +       if (!dmadev) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +       spin_lock_init(&dmadev->lock);
> +       dmadev->ddev.dev = &pdev->dev;
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +
> +       dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
> +       if (WARN_ON(!pdev->dev.dma_mask)) {
> +               rc = -ENXIO;
> +               goto dmafree;
> +       }
> +
> +       dmadev->dev_evca = evca;
> +       dmadev->evca_resource = evca_resource;
> +       dmadev->dev_trca = trca;
> +       dmadev->trca_resource = trca_resource;
> +       dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
> +       dmadev->ddev.device_alloc_chan_resources =
> +               hidma_alloc_chan_resources;
> +       dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
> +       dmadev->ddev.device_tx_status = hidma_tx_status;
> +       dmadev->ddev.device_issue_pending = hidma_issue_pending;
> +       dmadev->ddev.device_pause = hidma_pause;
> +       dmadev->ddev.device_resume = hidma_resume;
> +       dmadev->ddev.device_terminate_all = hidma_terminate_all;
> +       dmadev->ddev.copy_align = 8;
> +
> +       device_property_read_u32(&pdev->dev, "desc-count",
> +                               &dmadev->nr_descriptors);
> +
> +       if (!dmadev->nr_descriptors && nr_desc_prm)
> +               dmadev->nr_descriptors = nr_desc_prm;
> +
> +       if (!dmadev->nr_descriptors)
> +               goto dmafree;
> +
> +       if (current_channel_index > MAX_HIDMA_CHANNELS)
> +               goto dmafree;
> +
> +       dmadev->evridx = -1;
> +       device_property_read_u32(&pdev->dev, "event-channel", &dmadev->evridx);
> +
> +       /* kernel command line override for the guest machine */
> +       if (event_channel_idx[current_channel_index] != -1)
> +               dmadev->evridx = event_channel_idx[current_channel_index];
> +
> +       if (dmadev->evridx == -1)
> +               goto dmafree;
> +
> +       /* Set DMA mask to 64 bits. */
> +       rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> +       if (rc) {
> +               dev_warn(&pdev->dev, "unable to set coherent mask to 64");
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> +               if (rc)
> +                       goto dmafree;
> +       }
> +
> +       dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
> +                               dmadev->nr_descriptors, dmadev->dev_trca,
> +                               dmadev->dev_evca, dmadev->evridx);
> +       if (!dmadev->lldev) {
> +               rc = -EPROBE_DEFER;
> +               goto dmafree;
> +       }
> +
> +       rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
> +                             "qcom-hidma", &dmadev->lldev);
> +       if (rc)
> +               goto uninit;
> +
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +       rc = hidma_chan_init(dmadev, 0);
> +       if (rc)
> +               goto uninit;
> +
> +       rc = dma_selftest_memcpy(&dmadev->ddev);
> +       if (rc)
> +               goto uninit;
> +
> +       rc = dma_async_device_register(&dmadev->ddev);
> +       if (rc)
> +               goto uninit;
> +
> +       hidma_debug_init(dmadev);
> +       dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
> +       platform_set_drvdata(pdev, dmadev);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       atomic_inc(&channel_ref_count);
> +       return 0;
> +
> +uninit:
> +       hidma_debug_uninit(dmadev);
> +       hidma_ll_uninit(dmadev->lldev);
> +dmafree:
> +       if (dmadev)
> +               hidma_free(dmadev);
> +bailout:
> +       pm_runtime_disable(&pdev->dev);
> +       pm_runtime_put_sync_suspend(&pdev->dev);

Are you sure this is the appropriate sequence?

I think

pm_runtime_put();
pm_runtime_disable();

will do the job.
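
I.e. something like this at the end of the error path (untested):

bailout:
	pm_runtime_put(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return rc;

That drops the usage count while runtime PM is still enabled and only
then disables it.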

> +       return rc;
> +}
> +
> +static int hidma_remove(struct platform_device *pdev)
> +{
> +       struct hidma_dev *dmadev = platform_get_drvdata(pdev);
> +
> +       dev_dbg(&pdev->dev, "removing\n");

Useless message.

> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +
> +       dma_async_device_unregister(&dmadev->ddev);
> +       hidma_debug_uninit(dmadev);
> +       hidma_ll_uninit(dmadev->lldev);
> +       hidma_free(dmadev);
> +
> +       dev_info(&pdev->dev, "HI-DMA engine removed\n");
> +       pm_runtime_put_sync_suspend(&pdev->dev);
> +       pm_runtime_disable(&pdev->dev);
> +
> +       return 0;
> +}
> +
> +#if IS_ENABLED(CONFIG_ACPI)
> +static const struct acpi_device_id hidma_acpi_ids[] = {
> +       {"QCOM8061"},
> +       {},
> +};
> +#endif
> +
> +static const struct of_device_id hidma_match[] = {
> +       { .compatible = "qcom,hidma-1.0", },
> +       {},
> +};
> +MODULE_DEVICE_TABLE(of, hidma_match);
> +
> +static struct platform_driver hidma_driver = {
> +       .probe = hidma_probe,
> +       .remove = hidma_remove,
> +       .driver = {
> +               .name = "hidma",
> +               .of_match_table = hidma_match,
> +               .acpi_match_table = ACPI_PTR(hidma_acpi_ids),
> +       },
> +};
> +module_platform_driver(hidma_driver);
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
> new file mode 100644
> index 0000000..195d6b5
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma.h
> @@ -0,0 +1,157 @@
> +/*
> + * Qualcomm Technologies HIDMA data structures
> + *
> + * Copyright (c) 2014, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef QCOM_HIDMA_H
> +#define QCOM_HIDMA_H
> +
> +#include <linux/kfifo.h>
> +#include <linux/interrupt.h>
> +#include <linux/dmaengine.h>
> +
> +#define TRE_SIZE                       32 /* each TRE is 32 bytes  */
> +#define TRE_CFG_IDX                    0
> +#define TRE_LEN_IDX                    1
> +#define TRE_SRC_LOW_IDX                2
> +#define TRE_SRC_HI_IDX                 3
> +#define TRE_DEST_LOW_IDX               4
> +#define TRE_DEST_HI_IDX                5
> +
> +struct hidma_tx_status {
> +       u8 err_info;                    /* error record in this transfer    */
> +       u8 err_code;                    /* completion code                  */
> +};
> +
> +struct hidma_tre {
> +       atomic_t allocated;             /* if this channel is allocated     */
> +       bool queued;                    /* flag whether this is pending     */
> +       u16 status;                     /* status                           */
> +       u32 chidx;                      /* index of the tre         */
> +       u32 dma_sig;                    /* signature of the tre     */
> +       const char *dev_name;           /* name of the device               */
> +       void (*callback)(void *data);   /* requester callback               */
> +       void *data;                     /* Data associated with this channel*/
> +       struct hidma_lldev *lldev;      /* lldma device pointer             */
> +       u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
> +       u32 tre_index;                  /* the offset where this was written*/
> +       u32 int_flags;                  /* interrupt flags*/
> +};
> +
> +struct hidma_lldev {
> +       bool initialized;               /* initialized flag               */
> +       u8 trch_state;                  /* trch_state of the device       */
> +       u8 evch_state;                  /* evch_state of the device       */
> +       u8 evridx;                      /* event channel to notify        */
> +       u32 nr_tres;                    /* max number of configs          */
> +       spinlock_t lock;                /* reentrancy                     */
> +       struct hidma_tre *trepool;      /* trepool of user configs */
> +       struct device *dev;             /* device                         */
> +       void __iomem *trca;             /* Transfer Channel address       */
> +       void __iomem *evca;             /* Event Channel address          */
> +       struct hidma_tre
> +               **pending_tre_list;     /* Pointers to pending TREs       */
> +       struct hidma_tx_status
> +               *tx_status_list;        /* Pointers to pending TREs status*/
> +       s32 pending_tre_count;          /* Number of TREs pending         */
> +
> +       void *tre_ring;         /* TRE ring                       */
> +       dma_addr_t tre_ring_handle;     /* TRE ring to be shared with HW  */
> +       u32 tre_ring_size;              /* Byte size of the ring          */
> +       u32 tre_processed_off;          /* last processed TRE              */
> +
> +       void *evre_ring;                /* EVRE ring                       */
> +       dma_addr_t evre_ring_handle;    /* EVRE ring to be shared with HW  */
> +       u32 evre_ring_size;             /* Byte size of the ring          */
> +       u32 evre_processed_off; /* last processed EVRE             */
> +
> +       u32 tre_write_offset;           /* TRE write location              */
> +       struct tasklet_struct task;     /* task delivering notifications   */
> +       DECLARE_KFIFO_PTR(handoff_fifo,
> +               struct hidma_tre *);    /* pending TREs FIFO              */
> +};
> +
> +struct hidma_desc {
> +       struct dma_async_tx_descriptor  desc;
> +       /* link list node for this channel*/
> +       struct list_head                node;
> +       u32                             tre_ch;
> +};
> +
> +struct hidma_chan {
> +       bool                            paused;
> +       bool                            allocated;
> +       char                            dbg_name[16];
> +       u32                             dma_sig;
> +
> +       /*
> +        * active descriptor on this channel
> +        * It is used by the DMA complete notification to
> +        * locate the descriptor that initiated the transfer.
> +        */
> +       struct dentry                   *debugfs;
> +       struct dentry                   *stats;
> +       struct hidma_dev                *dmadev;
> +
> +       struct dma_chan                 chan;
> +       struct list_head                free;
> +       struct list_head                prepared;
> +       struct list_head                active;
> +       struct list_head                completed;
> +
> +       /* Lock for this structure */
> +       spinlock_t                      lock;
> +};
> +
> +struct hidma_dev {
> +       int                             evridx;
> +       u32                             nr_descriptors;
> +
> +       struct hidma_lldev              *lldev;
> +       void                            __iomem *dev_trca;
> +       struct resource                 *trca_resource;
> +       void                            __iomem *dev_evca;
> +       struct resource                 *evca_resource;
> +
> +       /* used to protect the pending channel list*/
> +       spinlock_t                      lock;
> +       struct dma_device               ddev;
> +
> +       struct dentry                   *debugfs;
> +       struct dentry                   *stats;
> +};
> +
> +int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
> +                       const char *dev_name,
> +                       void (*callback)(void *data), void *data, u32 *tre_ch);
> +
> +void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
> +enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
> +bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
> +int hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
> +int hidma_ll_start(struct hidma_lldev *llhndl);
> +int hidma_ll_pause(struct hidma_lldev *llhndl);
> +int hidma_ll_resume(struct hidma_lldev *llhndl);
> +void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
> +       dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
> +int hidma_ll_setup(struct hidma_lldev *lldev);
> +struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
> +                       void __iomem *trca, void __iomem *evca,
> +                       u8 evridx);
> +int hidma_ll_uninit(struct hidma_lldev *llhndl);
> +irqreturn_t hidma_ll_inthandler(int irq, void *arg);
> +void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
> +                               u8 err_code);
> +int hidma_debug_init(struct hidma_dev *dmadev);
> +void hidma_debug_uninit(struct hidma_dev *dmadev);
> +#endif
> diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
> new file mode 100644
> index 0000000..e0e6711
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_dbg.c
> @@ -0,0 +1,225 @@
> +/*
> + * Qualcomm Technologies HIDMA debug file
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/list.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "hidma.h"
> +
> +void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
> +{
> +       struct hidma_lldev *lldev = llhndl;
> +       struct hidma_tre *tre;
> +       u32 length;
> +       dma_addr_t src_start;
> +       dma_addr_t dest_start;
> +       u32 *tre_local;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev, "invalid TRE number in chstats:%d",
> +                       tre_ch);
> +               return;
> +       }
> +       tre = &lldev->trepool[tre_ch];
> +       seq_printf(s, "------Channel %d -----\n", tre_ch);
> +       seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
> +       seq_printf(s, "queued=0x%x\n", tre->queued);
> +       seq_printf(s, "err_info=0x%x\n",
> +                  lldev->tx_status_list[tre->chidx].err_info);
> +       seq_printf(s, "err_code=0x%x\n",
> +                  lldev->tx_status_list[tre->chidx].err_code);
> +       seq_printf(s, "status=0x%x\n", tre->status);
> +       seq_printf(s, "chidx=0x%x\n", tre->chidx);
> +       seq_printf(s, "dma_sig=0x%x\n", tre->dma_sig);
> +       seq_printf(s, "dev_name=%s\n", tre->dev_name);
> +       seq_printf(s, "callback=%p\n", tre->callback);
> +       seq_printf(s, "data=%p\n", tre->data);
> +       seq_printf(s, "tre_index=0x%x\n", tre->tre_index);
> +
> +       tre_local = &tre->tre_local[0];
> +       src_start = tre_local[TRE_SRC_LOW_IDX];
> +       src_start = ((u64)(tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
> +       dest_start = tre_local[TRE_DEST_LOW_IDX];
> +       dest_start += ((u64)(tre_local[TRE_DEST_HI_IDX]) << 32);
> +       length = tre_local[TRE_LEN_IDX];
> +
> +       seq_printf(s, "src=%pap\n", &src_start);
> +       seq_printf(s, "dest=%pap\n", &dest_start);
> +       seq_printf(s, "length=0x%x\n", length);
> +}
> +
> +void hidma_ll_devstats(struct seq_file *s, void *llhndl)
> +{
> +       struct hidma_lldev *lldev = llhndl;
> +
> +       seq_puts(s, "------Device -----\n");
> +       seq_printf(s, "lldev init=0x%x\n", lldev->initialized);
> +       seq_printf(s, "trch_state=0x%x\n", lldev->trch_state);
> +       seq_printf(s, "evch_state=0x%x\n", lldev->evch_state);
> +       seq_printf(s, "evridx=0x%x\n", lldev->evridx);
> +       seq_printf(s, "nr_tres=0x%x\n", lldev->nr_tres);
> +       seq_printf(s, "trca=%p\n", lldev->trca);
> +       seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
> +       seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
> +       seq_printf(s, "tre_ring_size=0x%x\n", lldev->tre_ring_size);
> +       seq_printf(s, "tre_processed_off=0x%x\n", lldev->tre_processed_off);
> +       seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
> +       seq_printf(s, "evca=%p\n", lldev->evca);
> +       seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
> +       seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
> +       seq_printf(s, "evre_ring_size=0x%x\n", lldev->evre_ring_size);
> +       seq_printf(s, "evre_processed_off=0x%x\n", lldev->evre_processed_off);
> +       seq_printf(s, "tre_write_offset=0x%x\n", lldev->tre_write_offset);
> +}
> +
> +/**
> + * hidma_chan_stats: display HIDMA channel statistics
> + *
> + * Display the statistics for the current HIDMA virtual channel device.
> + */
> +static int hidma_chan_stats(struct seq_file *s, void *unused)
> +{
> +       struct hidma_chan *mchan = s->private;
> +       struct hidma_desc *mdesc;
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       seq_printf(s, "paused=%u\n", mchan->paused);
> +       seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
> +       seq_puts(s, "prepared\n");
> +       list_for_each_entry(mdesc, &mchan->prepared, node)
> +               hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
> +
> +       seq_puts(s, "active\n");
> +               list_for_each_entry(mdesc, &mchan->active, node)
> +                       hidma_ll_chstats(s, mchan->dmadev->lldev,
> +                               mdesc->tre_ch);
> +
> +       seq_puts(s, "completed\n");
> +               list_for_each_entry(mdesc, &mchan->completed, node)
> +                       hidma_ll_chstats(s, mchan->dmadev->lldev,
> +                               mdesc->tre_ch);
> +
> +       hidma_ll_devstats(s, mchan->dmadev->lldev);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return 0;
> +}
> +
> +/**
> + * hidma_dma_info: display HIDMA device info
> + *
> + * Display the info for the current HIDMA device.
> + */
> +static int hidma_dma_info(struct seq_file *s, void *unused)
> +{
> +       struct hidma_dev *dmadev = s->private;
> +       resource_size_t sz;
> +
> +       seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
> +       seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
> +       seq_printf(s, "dev_trca_phys=%pa\n",
> +               &dmadev->trca_resource->start);
> +       sz = resource_size(dmadev->trca_resource);
> +       seq_printf(s, "dev_trca_size=%pa\n", &sz);
> +       seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
> +       seq_printf(s, "dev_evca_phys=%pa\n",
> +               &dmadev->evca_resource->start);
> +       sz = resource_size(dmadev->evca_resource);
> +       seq_printf(s, "dev_evca_size=%pa\n", &sz);
> +       return 0;
> +}
> +
> +static int hidma_chan_stats_open(struct inode *inode, struct file *file)
> +{
> +       return single_open(file, hidma_chan_stats, inode->i_private);
> +}
> +
> +static int hidma_dma_info_open(struct inode *inode, struct file *file)
> +{
> +       return single_open(file, hidma_dma_info, inode->i_private);
> +}
> +
> +static const struct file_operations hidma_chan_fops = {
> +       .open = hidma_chan_stats_open,
> +       .read = seq_read,
> +       .llseek = seq_lseek,
> +       .release = single_release,
> +};
> +
> +static const struct file_operations hidma_dma_fops = {
> +       .open = hidma_dma_info_open,
> +       .read = seq_read,
> +       .llseek = seq_lseek,
> +       .release = single_release,
> +};
> +
> +void hidma_debug_uninit(struct hidma_dev *dmadev)
> +{
> +       debugfs_remove_recursive(dmadev->debugfs);
> +       debugfs_remove_recursive(dmadev->stats);
> +}
> +
> +int hidma_debug_init(struct hidma_dev *dmadev)
> +{
> +       int rc = 0;
> +       int chidx = 0;
> +       struct list_head *position = NULL;
> +
> +       dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev),
> +                                               NULL);
> +       if (!dmadev->debugfs) {
> +               rc = -ENODEV;
> +               return rc;
> +       }
> +
> +       /* walk through the virtual channel list */
> +       list_for_each(position, &dmadev->ddev.channels) {
> +               struct hidma_chan *chan;
> +
> +               chan = list_entry(position, struct hidma_chan,
> +                               chan.device_node);
> +               sprintf(chan->dbg_name, "chan%d", chidx);
> +               chan->debugfs = debugfs_create_dir(chan->dbg_name,
> +                                               dmadev->debugfs);
> +               if (!chan->debugfs) {
> +                       rc = -ENOMEM;
> +                       goto cleanup;
> +               }
> +               chan->stats = debugfs_create_file("stats", S_IRUGO,
> +                               chan->debugfs, chan,
> +                               &hidma_chan_fops);
> +               if (!chan->stats) {
> +                       rc = -ENOMEM;
> +                       goto cleanup;
> +               }
> +               chidx++;
> +       }
> +
> +       dmadev->stats = debugfs_create_file("stats", S_IRUGO,
> +                       dmadev->debugfs, dmadev,
> +                       &hidma_dma_fops);
> +       if (!dmadev->stats) {
> +               rc = -ENOMEM;
> +               goto cleanup;
> +       }
> +
> +       return 0;
> +cleanup:
> +       hidma_debug_uninit(dmadev);
> +       return rc;
> +}
> diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
> new file mode 100644
> index 0000000..f5c0b8b
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_ll.c
> @@ -0,0 +1,944 @@
> +/*
> + * Qualcomm Technologies HIDMA DMA engine low level code
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/slab.h>
> +#include <linux/interrupt.h>
> +#include <linux/mm.h>
> +#include <linux/highmem.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/delay.h>
> +#include <linux/atomic.h>
> +#include <linux/iopoll.h>
> +#include <linux/kfifo.h>
> +
> +#include "hidma.h"
> +
> +#define EVRE_SIZE                      16 /* each EVRE is 16 bytes */
> +
> +#define TRCA_CTRLSTS_OFFSET            0x0
> +#define TRCA_RING_LOW_OFFSET           0x8
> +#define TRCA_RING_HIGH_OFFSET          0xC
> +#define TRCA_RING_LEN_OFFSET           0x10
> +#define TRCA_READ_PTR_OFFSET           0x18
> +#define TRCA_WRITE_PTR_OFFSET          0x20
> +#define TRCA_DOORBELL_OFFSET           0x400
> +
> +#define EVCA_CTRLSTS_OFFSET            0x0
> +#define EVCA_INTCTRL_OFFSET            0x4
> +#define EVCA_RING_LOW_OFFSET           0x8
> +#define EVCA_RING_HIGH_OFFSET          0xC
> +#define EVCA_RING_LEN_OFFSET           0x10
> +#define EVCA_READ_PTR_OFFSET           0x18
> +#define EVCA_WRITE_PTR_OFFSET          0x20
> +#define EVCA_DOORBELL_OFFSET           0x400
> +
> +#define EVCA_IRQ_STAT_OFFSET           0x100
> +#define EVCA_IRQ_CLR_OFFSET            0x108
> +#define EVCA_IRQ_EN_OFFSET             0x110
> +
> +#define EVRE_CFG_IDX                   0
> +#define EVRE_LEN_IDX                   1
> +#define EVRE_DEST_LOW_IDX              2
> +#define EVRE_DEST_HI_IDX               3
> +
> +#define EVRE_ERRINFO_BIT_POS           24
> +#define EVRE_CODE_BIT_POS              28
> +
> +#define EVRE_ERRINFO_MASK              0xF
> +#define EVRE_CODE_MASK                 0xF
> +
> +#define CH_CONTROL_MASK                0xFF
> +#define CH_STATE_MASK                  0xFF
> +#define CH_STATE_BIT_POS               0x8
> +
> +#define MAKE64(high, low) (((u64)(high) << 32) | (low))
> +
> +#define IRQ_EV_CH_EOB_IRQ_BIT_POS      0
> +#define IRQ_EV_CH_WR_RESP_BIT_POS      1
> +#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
> +#define IRQ_TR_CH_DATA_RD_ER_BIT_POS   10
> +#define IRQ_TR_CH_DATA_WR_ER_BIT_POS   11
> +#define IRQ_TR_CH_INVALID_TRE_BIT_POS  14
> +
> +#define        ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
> +               BIT(IRQ_EV_CH_WR_RESP_BIT_POS) | \
> +               BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) |   \
> +               BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS) |              \
> +               BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS) |              \
> +               BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
> +
> +enum ch_command {
> +       CH_DISABLE = 0,
> +       CH_ENABLE = 1,
> +       CH_SUSPEND = 2,
> +       CH_RESET = 9,
> +};
> +
> +enum ch_state {
> +       CH_DISABLED = 0,
> +       CH_ENABLED = 1,
> +       CH_RUNNING = 2,
> +       CH_SUSPENDED = 3,
> +       CH_STOPPED = 4,
> +       CH_ERROR = 5,
> +       CH_IN_RESET = 9,
> +};
> +
> +enum tre_type {
> +       TRE_MEMCPY = 3,
> +       TRE_MEMSET = 4,
> +};
> +
> +enum evre_type {
> +       EVRE_DMA_COMPLETE = 0x23,
> +       EVRE_IMM_DATA = 0x24,
> +};
> +
> +enum err_code {
> +       EVRE_STATUS_COMPLETE = 1,
> +       EVRE_STATUS_ERROR = 4,
> +};
> +
> +void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       struct hidma_tre *tre;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre = &lldev->trepool[tre_ch];
> +       if (atomic_read(&tre->allocated) != true) {
> +               dev_err(lldev->dev, "trying to free an unused TRE:%d",
> +                       tre_ch);
> +               return;
> +       }
> +
> +       atomic_set(&tre->allocated, 0);
> +       dev_dbg(lldev->dev, "free_dma: allocated:%d tre_ch:%d\n",
> +               atomic_read(&tre->allocated), tre_ch);
> +}
> +
> +int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
> +                       const char *dev_name,
> +                       void (*callback)(void *data), void *data, u32 *tre_ch)
> +{
> +       u32 i;
> +       struct hidma_tre *tre = NULL;
> +       u32 *tre_local;
> +
> +       if (!tre_ch || !lldev)
> +               return -EINVAL;
> +
> +       /* need to have at least one empty spot in the queue */
> +       for (i = 0; i < lldev->nr_tres - 1; i++) {
> +               if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
> +                       break;
> +       }
> +
> +       if (i == (lldev->nr_tres - 1))
> +               return -ENOMEM;
> +
> +       tre = &lldev->trepool[i];
> +       tre->dma_sig = dma_sig;
> +       tre->dev_name = dev_name;
> +       tre->callback = callback;
> +       tre->data = data;
> +       tre->chidx = i;
> +       tre->status = 0;
> +       tre->queued = 0;
> +       lldev->tx_status_list[i].err_code = 0;
> +       tre->lldev = lldev;
> +       tre_local = &tre->tre_local[0];
> +       tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
> +       tre_local[TRE_CFG_IDX] |= ((lldev->evridx & 0xFF) << 8);
> +       tre_local[TRE_CFG_IDX] |= BIT(16);      /* set IEOB */
> +       *tre_ch = i;
> +       if (callback)
> +               callback(data);
> +       return 0;
> +}
> +
> +/*
> + * Multiple TREs may be queued and waiting in the
> + * pending queue.
> + */
> +static void hidma_ll_tre_complete(unsigned long arg)
> +{
> +       struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
> +       struct hidma_tre *tre;
> +
> +       while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
> +               /* call the user if it has been read by the hardware*/
> +               if (tre->callback)
> +                       tre->callback(tre->data);
> +       }
> +}
> +
> +/*
> + * Called to handle the interrupt for the channel.
> + * Return a positive number if TRE or EVRE were consumed on this run.
> + * Return a positive number if there are pending TREs or EVREs.
> + * Return 0 if there is nothing to consume or no pending TREs/EVREs found.
> + */
> +static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
> +{
> +       struct hidma_tre *tre;
> +       u32 evre_write_off;
> +       u32 evre_ring_size = lldev->evre_ring_size;
> +       u32 tre_ring_size = lldev->tre_ring_size;
> +       u32 num_completed = 0, tre_iterator, evre_iterator;
> +       unsigned long flags;
> +
> +       evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
> +       tre_iterator = lldev->tre_processed_off;
> +       evre_iterator = lldev->evre_processed_off;
> +
> +       if ((evre_write_off > evre_ring_size) ||
> +               ((evre_write_off % EVRE_SIZE) != 0)) {
> +               dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
> +               return 0;
> +       }
> +
> +       /*
> +        * By the time control reaches here the number of EVREs and TREs
> +        * may not match. Only consume the ones that hardware told us.
> +        */
> +       while ((evre_iterator != evre_write_off)) {
> +               u32 *current_evre = lldev->evre_ring + evre_iterator;
> +               u32 cfg;
> +               u8 err_info;
> +
> +               spin_lock_irqsave(&lldev->lock, flags);
> +               tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
> +               if (!tre) {
> +                       spin_unlock_irqrestore(&lldev->lock, flags);
> +                       dev_warn(lldev->dev,
> +                               "tre_index [%d] and tre out of sync\n",
> +                               tre_iterator / TRE_SIZE);
> +                       tre_iterator += TRE_SIZE;
> +                       if (tre_iterator >= tre_ring_size)
> +                               tre_iterator -= tre_ring_size;
> +                       evre_iterator += EVRE_SIZE;
> +                       if (evre_iterator >= evre_ring_size)
> +                               evre_iterator -= evre_ring_size;
> +
> +                       continue;
> +               }
> +               lldev->pending_tre_list[tre->tre_index] = NULL;
> +
> +               /*
> +                * Keep track of pending TREs that SW is expecting to receive
> +                * from HW. We got one now. Decrement our counter.
> +                */
> +               lldev->pending_tre_count--;
> +               if (lldev->pending_tre_count < 0) {
> +                       dev_warn(lldev->dev,
> +                               "tre count mismatch on completion");
> +                       lldev->pending_tre_count = 0;
> +               }
> +
> +               spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +               cfg = current_evre[EVRE_CFG_IDX];
> +               err_info = (cfg >> EVRE_ERRINFO_BIT_POS);
> +               err_info = err_info & EVRE_ERRINFO_MASK;
> +               lldev->tx_status_list[tre->chidx].err_info = err_info;
> +               lldev->tx_status_list[tre->chidx].err_code =
> +                       (cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
> +               tre->queued = 0;
> +
> +               kfifo_put(&lldev->handoff_fifo, tre);
> +               tasklet_schedule(&lldev->task);
> +
> +               tre_iterator += TRE_SIZE;
> +               if (tre_iterator >= tre_ring_size)
> +                       tre_iterator -= tre_ring_size;
> +               evre_iterator += EVRE_SIZE;
> +               if (evre_iterator >= evre_ring_size)
> +                       evre_iterator -= evre_ring_size;
> +
> +               /*
> +                * Read the new event descriptor written by the HW.
> +                * As we are processing the delivered events, other events
> +                * get queued to the SW for processing.
> +                */
> +               evre_write_off =
> +                       readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
> +               num_completed++;
> +       }
> +
> +       if (num_completed) {
> +               u32 evre_read_off = (lldev->evre_processed_off +
> +                               EVRE_SIZE * num_completed);
> +               u32 tre_read_off = (lldev->tre_processed_off +
> +                               TRE_SIZE * num_completed);
> +
> +               evre_read_off = evre_read_off % evre_ring_size;
> +               tre_read_off = tre_read_off % tre_ring_size;
> +
> +               writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
> +
> +               /* record the last processed tre offset */
> +               lldev->tre_processed_off = tre_read_off;
> +               lldev->evre_processed_off = evre_read_off;
> +       }
> +
> +       return num_completed;
> +}
> +
> +void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
> +                               u8 err_code)
> +{
> +       u32 tre_iterator;
> +       struct hidma_tre *tre;
> +       u32 tre_ring_size = lldev->tre_ring_size;
> +       int num_completed = 0;
> +       u32 tre_read_off;
> +       unsigned long flags;
> +
> +       tre_iterator = lldev->tre_processed_off;
> +       while (lldev->pending_tre_count) {
> +               int tre_index = tre_iterator / TRE_SIZE;
> +
> +               spin_lock_irqsave(&lldev->lock, flags);
> +               tre = lldev->pending_tre_list[tre_index];
> +               if (!tre) {
> +                       spin_unlock_irqrestore(&lldev->lock, flags);
> +                       tre_iterator += TRE_SIZE;
> +                       if (tre_iterator >= tre_ring_size)
> +                               tre_iterator -= tre_ring_size;
> +                       continue;
> +               }
> +               lldev->pending_tre_list[tre_index] = NULL;
> +               lldev->pending_tre_count--;
> +               if (lldev->pending_tre_count < 0) {
> +                       dev_warn(lldev->dev,
> +                               "tre count mismatch on completion");
> +                       lldev->pending_tre_count = 0;
> +               }
> +               spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +               lldev->tx_status_list[tre->chidx].err_info = err_info;
> +               lldev->tx_status_list[tre->chidx].err_code = err_code;
> +               tre->queued = 0;
> +
> +               kfifo_put(&lldev->handoff_fifo, tre);
> +               tasklet_schedule(&lldev->task);
> +
> +               tre_iterator += TRE_SIZE;
> +               if (tre_iterator >= tre_ring_size)
> +                       tre_iterator -= tre_ring_size;
> +
> +               num_completed++;
> +       }
> +       tre_read_off = (lldev->tre_processed_off +
> +                       TRE_SIZE * num_completed);
> +
> +       tre_read_off = tre_read_off % tre_ring_size;
> +
> +       /* record the last processed tre offset */
> +       lldev->tre_processed_off = tre_read_off;
> +}
> +
> +static int hidma_ll_reset(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_RESET << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Delay 10ms after reset to allow DMA logic to quiesce.
> +        * Do a polled read up to 1ms and 10ms maximum.
> +        */
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "transfer channel did not reset\n");
> +               return ret;
> +       }
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_RESET << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Delay 10ms after reset to allow DMA logic to quiesce.
> +        * Do a polled read up to 1ms and 10ms maximum.
> +        */
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       lldev->trch_state = CH_DISABLED;
> +       lldev->evch_state = CH_DISABLED;
> +       return 0;
> +}
> +
> +static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
> +{
> +       writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
> +       dev_dbg(lldev->dev, "enableirq\n");
> +}
> +
> +/*
> + * The interrupt handler for HIDMA will try to consume as many pending
> + * EVRE from the event queue as possible. Each EVRE has an associated
> + * TRE that holds the user interface parameters. EVRE reports the
> + * result of the transaction. Hardware guarantees ordering between EVREs
> + * and TREs. We use last processed offset to figure out which TRE is
> + * associated with which EVRE. If two TREs are consumed by HW, the EVREs
> + * are in order in the event ring.
> + *
> + * This handler will do a one pass for consuming EVREs. Other EVREs may
> + * be delivered while we are working. It will try to consume incoming
> + * EVREs one more time and return.
> + *
> + * For unprocessed EVREs, hardware will trigger another interrupt until
> + * all the interrupt bits are cleared.
> + *
> + * Hardware guarantees that by the time interrupt is observed, all data
> + * transactions in flight are delivered to their respective places and
> + * are visible to the CPU.
> + *
> + * On demand paging for IOMMU is only supported for PCIe via PRI
> + * (Page Request Interface) not for HIDMA. All other hardware instances
> + * including HIDMA work on pinned DMA addresses.
> + *
> + * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
> + * IOMMU latency will be built into the data movement time. By the time
> + * interrupt happens, IOMMU lookups + data movement has already taken place.
> + *
> + * While the first read in a typical PCI endpoint ISR flushes all outstanding
> + * requests traditionally to the destination, this concept does not apply
> + * here for this HW.
> + */
> +static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
> +{
> +       u32 status;
> +       u32 enable;
> +       u32 cause;
> +       int repeat = 2;
> +       unsigned long timeout;
> +
> +       /*
> +        * Fine tuned for this HW...
> +        *
> +        * This ISR has been designed for this particular hardware. Relaxed read
> +        * and write accessors are used for performance reasons due to interrupt
> +        * delivery guarantees. Do not copy this code blindly and expect
> +        * that to work.
> +        */
> +       status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
> +       cause = status & enable;
> +
> +       if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
> +                       (cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
> +                       (cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
> +                       (cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
> +                       (cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
> +               u8 err_code = EVRE_STATUS_ERROR;
> +               u8 err_info = 0xFF;
> +
> +               /* Clear out pending interrupts */
> +               writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +               dev_err(lldev->dev,
> +                       "error 0x%x, resetting...\n", cause);
> +
> +               hidma_cleanup_pending_tre(lldev, err_info, err_code);
> +
> +               /* reset the channel for recovery */
> +               if (hidma_ll_setup(lldev)) {
> +                       dev_err(lldev->dev,
> +                               "channel reinitialize failed after error\n");
> +                       return;
> +               }
> +               hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +               return;
> +       }
> +
> +       /*
> +        * Try to consume as many EVREs as possible.
> +        * skip this loop if the interrupt is spurious.
> +        */
> +       while (cause && repeat) {
> +               unsigned long start = jiffies;
> +
> +               /* This timeout should be sufficient for the core to finish */
> +               timeout = start + msecs_to_jiffies(500);
> +
> +               while (lldev->pending_tre_count) {
> +                       hidma_handle_tre_completion(lldev);
> +                       if (time_is_before_jiffies(timeout)) {
> +                               dev_warn(lldev->dev,
> +                                       "ISR timeout %lx-%lx from %lx [%d]\n",
> +                                       jiffies, timeout, start,
> +                                       lldev->pending_tre_count);
> +                               break;
> +                       }
> +               }
> +
> +               /* We consumed TREs or there are pending TREs or EVREs. */
> +               writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +               /*
> +                * Another interrupt might have arrived while we are
> +                * processing this one. Read the new cause.
> +                */
> +               status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +               enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
> +               cause = status & enable;
> +
> +               repeat--;
> +       }
> +}
> +
> +
> +static int hidma_ll_enable(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val &= ~(CH_CONTROL_MASK << 16);
> +       val |= (CH_ENABLE << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "event channel did not get enabled\n");
> +               return ret;
> +       }
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_ENABLE << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "transfer channel did not get enabled\n");
> +               return ret;
> +       }
> +
> +       lldev->trch_state = CH_ENABLED;
> +       lldev->evch_state = CH_ENABLED;
> +
> +       return 0;
> +}
> +
> +int hidma_ll_resume(struct hidma_lldev *lldev)
> +{
> +       return hidma_ll_enable(lldev);
> +}
> +
> +static int hidma_ll_hw_start(struct hidma_lldev *lldev)
> +{
> +       int rc = 0;
> +       unsigned long irqflags;
> +
> +       spin_lock_irqsave(&lldev->lock, irqflags);
> +       writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
> +       spin_unlock_irqrestore(&lldev->lock, irqflags);
> +
> +       return rc;
> +}
> +
> +bool hidma_ll_isenabled(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +
> +       /* both channels have to be enabled before calling this function*/
> +       if (((lldev->trch_state == CH_ENABLED) ||
> +               (lldev->trch_state == CH_RUNNING)) &&
> +               ((lldev->evch_state == CH_ENABLED) ||
> +                       (lldev->evch_state == CH_RUNNING)))
> +               return true;
> +
> +       dev_dbg(lldev->dev, "channels are not enabled or are in error state");
> +       return false;
> +}
> +
> +int hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       struct hidma_tre *tre;
> +       int rc = 0;
> +       unsigned long flags;
> +
> +       tre = &lldev->trepool[tre_ch];
> +
> +       /* copy the TRE into its location in the TRE ring */
> +       spin_lock_irqsave(&lldev->lock, flags);
> +       tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
> +       lldev->pending_tre_list[tre->tre_index] = tre;
> +       memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
> +               TRE_SIZE);
> +       lldev->tx_status_list[tre->chidx].err_code = 0;
> +       lldev->tx_status_list[tre->chidx].err_info = 0;
> +       tre->queued = 1;
> +       lldev->pending_tre_count++;
> +       lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
> +                               % lldev->tre_ring_size;
> +       spin_unlock_irqrestore(&lldev->lock, flags);
> +       return rc;
> +}
> +
> +int hidma_ll_start(struct hidma_lldev *lldev)
> +{
> +       return hidma_ll_hw_start(lldev);
> +}
> +
> +/*
> + * Note that even though we stop this channel
> + * if there is a pending transaction in flight
> + * it will complete and follow the callback.
> + * This request will prevent further requests
> + * to be made.
> + */
> +int hidma_ll_pause(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +
> +       /* already suspended by this OS */
> +       if ((lldev->trch_state == CH_SUSPENDED) ||
> +               (lldev->evch_state == CH_SUSPENDED))
> +               return 0;
> +
> +       /* already stopped by the manager */
> +       if ((lldev->trch_state == CH_STOPPED) ||
> +               (lldev->evch_state == CH_STOPPED))
> +               return 0;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_SUSPEND << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Start the wait right after the suspend is confirmed.
> +        * Do a polled read up to 1ms and 10ms maximum.
> +        */
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_SUSPEND << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Start the wait right after the suspend is confirmed
> +        * Delay up to 10ms after reset to allow DMA logic to quiesce.
> +        */
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       lldev->trch_state = CH_SUSPENDED;
> +       lldev->evch_state = CH_SUSPENDED;
> +       dev_dbg(lldev->dev, "stop\n");
> +
> +       return 0;
> +}
> +
> +void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
> +       dma_addr_t src, dma_addr_t dest, u32 len, u32 flags)
> +{
> +       struct hidma_tre *tre;
> +       u32 *tre_local;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev,
> +                       "invalid TRE number in transfer params:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre = &lldev->trepool[tre_ch];
> +       if (atomic_read(&tre->allocated) != true) {
> +               dev_err(lldev->dev,
> +                       "trying to set params on an unused TRE:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre_local = &tre->tre_local[0];
> +       tre_local[TRE_LEN_IDX] = len;
> +       tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
> +       tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
> +       tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
> +       tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
> +       tre->int_flags = flags;
> +
> +       dev_dbg(lldev->dev, "transferparams: tre_ch:%d %pap->%pap len:%u\n",
> +               tre_ch, &src, &dest, len);
> +}
> +
> +/*
> + * Called during initialization and after an error condition
> + * to restore hardware state.
> + */
> +int hidma_ll_setup(struct hidma_lldev *lldev)
> +{
> +       int rc;
> +       u64 addr;
> +       u32 val;
> +       u32 nr_tres = lldev->nr_tres;
> +
> +       lldev->pending_tre_count = 0;
> +       lldev->tre_processed_off = 0;
> +       lldev->evre_processed_off = 0;
> +       lldev->tre_write_offset = 0;
> +
> +       /* disable interrupts */
> +       hidma_ll_enable_irq(lldev, 0);
> +
> +       /* clear all pending interrupts */
> +       val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +       rc = hidma_ll_reset(lldev);
> +       if (rc)
> +               return rc;
> +
> +       /*
> +        * Clear all pending interrupts again.
> +        * Otherwise, we observe reset complete interrupts.
> +        */
> +       val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +       /* disable interrupts again after reset */
> +       hidma_ll_enable_irq(lldev, 0);
> +
> +       addr = lldev->tre_ring_handle;
> +       writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
> +       writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
> +       writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
> +
> +       addr = lldev->evre_ring_handle;
> +       writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
> +       writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
> +       writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
> +
> +       /* support IRQ only for now */
> +       val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
> +       val = val & ~(0xF);
> +       val = val | 0x1;
> +       writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
> +
> +       /* clear all pending interrupts and enable them*/
> +       writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +       hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +
> +       rc = hidma_ll_enable(lldev);
> +       if (rc)
> +               return rc;
> +
> +       return rc;
> +}
> +
> +struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
> +                       void __iomem *trca, void __iomem *evca,
> +                       u8 evridx)
> +{
> +       u32 required_bytes;
> +       struct hidma_lldev *lldev;
> +       int rc;
> +
> +       if (!trca || !evca || !dev || !nr_tres)
> +               return NULL;
> +
> +       /* need at least four TREs */
> +       if (nr_tres < 4)
> +               return NULL;
> +
> +       /* need an extra space */
> +       nr_tres += 1;
> +
> +       lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
> +       if (!lldev)
> +               return NULL;
> +
> +       lldev->evca = evca;
> +       lldev->trca = trca;
> +       lldev->dev = dev;
> +       required_bytes = sizeof(struct hidma_tre) * nr_tres;
> +       lldev->trepool = devm_kzalloc(lldev->dev, required_bytes, GFP_KERNEL);
> +       if (!lldev->trepool)
> +               return NULL;
> +
> +       required_bytes = sizeof(lldev->pending_tre_list[0]) * nr_tres;
> +       lldev->pending_tre_list = devm_kzalloc(dev, required_bytes,
> +                                       GFP_KERNEL);
> +       if (!lldev->pending_tre_list)
> +               return NULL;
> +
> +       required_bytes = sizeof(lldev->tx_status_list[0]) * nr_tres;
> +       lldev->tx_status_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
> +       if (!lldev->tx_status_list)
> +               return NULL;
> +
> +       lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
> +                                       &lldev->tre_ring_handle, GFP_KERNEL);
> +       if (!lldev->tre_ring)
> +               return NULL;
> +
> +       memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
> +       lldev->tre_ring_size = TRE_SIZE * nr_tres;
> +       lldev->nr_tres = nr_tres;
> +
> +       /* the TRE ring has to be TRE_SIZE aligned */
> +       if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
> +               u8  tre_ring_shift;
> +
> +               tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
> +               tre_ring_shift = TRE_SIZE - tre_ring_shift;
> +               lldev->tre_ring_handle += tre_ring_shift;
> +               lldev->tre_ring += tre_ring_shift;
> +       }
> +
> +       lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
> +                                       &lldev->evre_ring_handle, GFP_KERNEL);
> +       if (!lldev->evre_ring)
> +               return NULL;
> +
> +       memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
> +       lldev->evre_ring_size = EVRE_SIZE * nr_tres;
> +
> +       /* the EVRE ring has to be EVRE_SIZE aligned */
> +       if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
> +               u8  evre_ring_shift;
> +
> +               evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
> +               evre_ring_shift = EVRE_SIZE - evre_ring_shift;
> +               lldev->evre_ring_handle += evre_ring_shift;
> +               lldev->evre_ring += evre_ring_shift;
> +       }
> +       lldev->nr_tres = nr_tres;
> +       lldev->evridx = evridx;
> +
> +       rc = kfifo_alloc(&lldev->handoff_fifo,
> +               nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
> +       if (rc)
> +               return NULL;
> +
> +       rc = hidma_ll_setup(lldev);
> +       if (rc)
> +               return NULL;
> +
> +       spin_lock_init(&lldev->lock);
> +       tasklet_init(&lldev->task, hidma_ll_tre_complete,
> +                       (unsigned long)lldev);
> +       lldev->initialized = 1;
> +       hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +       return lldev;
> +}
> +
> +int hidma_ll_uninit(struct hidma_lldev *lldev)
> +{
> +       int rc = 0;
> +       u32 val;
> +
> +       if (!lldev)
> +               return -ENODEV;
> +
> +       if (lldev->initialized) {
> +               u32 required_bytes;
> +
> +               lldev->initialized = 0;
> +
> +               required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
> +               tasklet_kill(&lldev->task);
> +               memset(lldev->trepool, 0, required_bytes);
> +               lldev->trepool = NULL;
> +               lldev->pending_tre_count = 0;
> +               lldev->tre_write_offset = 0;
> +
> +               rc = hidma_ll_reset(lldev);
> +
> +               /*
> +                * Clear all pending interrupts again.
> +                * Otherwise, we observe reset complete interrupts.
> +                */
> +               val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +               writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +               hidma_ll_enable_irq(lldev, 0);
> +       }
> +       return rc;
> +}
> +
> +irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
> +{
> +       struct hidma_lldev *lldev = arg;
> +
> +       hidma_ll_int_handler_internal(lldev);
> +       return IRQ_HANDLED;
> +}
> +
> +enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       enum dma_status ret = DMA_ERROR;
> +       unsigned long flags;
> +       u8 err_code;
> +
> +       spin_lock_irqsave(&lldev->lock, flags);
> +       err_code = lldev->tx_status_list[tre_ch].err_code;
> +
> +       if (err_code & EVRE_STATUS_COMPLETE)
> +               ret = DMA_COMPLETE;
> +       else if (err_code & EVRE_STATUS_ERROR)
> +               ret = DMA_ERROR;
> +       else
> +               ret = DMA_IN_PROGRESS;
> +       spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +       return ret;
> +}
> --
> Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>



-- 
With Best Regards,
Andy Shevchenko


* [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
@ 2015-11-08 20:47       ` Andy Shevchenko
  0 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-08 20:47 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Nov 8, 2015 at 6:53 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
> This patch adds support for the HIDMA engine. The driver
> consists of two logical blocks: the DMA engine interface
> and the low-level interface. The hardware only supports
> memcpy/memset, and this driver only supports the memcpy
> interface. Neither the HW nor the driver supports the
> slave interface.

Make the lines a bit longer; commit message text is typically wrapped at about 72-75 columns.

> +/*
> + * Qualcomm Technologies HIDMA DMA engine interface
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +/*
> + * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008.
> + * Copyright (C) Semihalf 2009
> + * Copyright (C) Ilya Yanok, Emcraft Systems 2010
> + * Copyright (C) Alexander Popov, Promcontroller 2014
> + *
> + * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
> + * (defines, structures and comments) was taken from MPC5121 DMA driver
> + * written by Hongjun Chen <hong-jun.chen@freescale.com>.
> + *
> + * Approved as OSADL project by a majority of OSADL members and funded
> + * by OSADL membership fees in 2009;  for details see www.osadl.org.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License as published by the Free
> + * Software Foundation; either version 2 of the License, or (at your option)
> + * any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * The full GNU General Public License is included in this distribution in the
> + * file called COPYING.
> + */
> +
> +/* Linux Foundation elects GPLv2 license only. */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/of_dma.h>
> +#include <linux/property.h>
> +#include <linux/delay.h>
> +#include <linux/highmem.h>
> +#include <linux/io.h>
> +#include <linux/sched.h>
> +#include <linux/wait.h>
> +#include <linux/acpi.h>
> +#include <linux/irq.h>
> +#include <linux/atomic.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "../dmaengine.h"
> +#include "hidma.h"
> +
> +/*
> + * Default idle time is 2 seconds. This parameter can
> + * be overridden by changing the following
> + * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
> + * during kernel boot.
> + */
> +#define AUTOSUSPEND_TIMEOUT            2000
> +#define ERR_INFO_SW                    0xFF
> +#define ERR_CODE_UNEXPECTED_TERMINATE  0x0
> +
> +static inline
> +struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
> +{
> +       return container_of(dmadev, struct hidma_dev, ddev);
> +}
> +
> +static inline
> +struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
> +{
> +       return container_of(_lldevp, struct hidma_dev, lldev);
> +}
> +
> +static inline
> +struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
> +{
> +       return container_of(dmach, struct hidma_chan, chan);
> +}
> +
> +static inline struct hidma_desc *
> +to_hidma_desc(struct dma_async_tx_descriptor *t)
> +{
> +       return container_of(t, struct hidma_desc, desc);
> +}
> +
> +static void hidma_free(struct hidma_dev *dmadev)
> +{
> +       dev_dbg(dmadev->ddev.dev, "free dmadev\n");
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +}
> +
> +static unsigned int nr_desc_prm;
> +module_param(nr_desc_prm, uint, 0644);
> +MODULE_PARM_DESC(nr_desc_prm,
> +                "number of descriptors (default: 0)");
> +
> +#define MAX_HIDMA_CHANNELS     64
> +static int event_channel_idx[MAX_HIDMA_CHANNELS] = {
> +       [0 ... (MAX_HIDMA_CHANNELS - 1)] = -1};
> +static unsigned int num_event_channel_idx;
> +module_param_array_named(event_channel_idx, event_channel_idx, int,
> +                       &num_event_channel_idx, 0644);
> +MODULE_PARM_DESC(event_channel_idx,
> +               "event channel index array for the notifications");
> +static atomic_t channel_ref_count;
> +
> +/* process completed descriptors */
> +static void hidma_process_completed(struct hidma_dev *mdma)
> +{
> +       dma_cookie_t last_cookie = 0;
> +       struct hidma_chan *mchan;
> +       struct hidma_desc *mdesc;
> +       struct dma_async_tx_descriptor *desc;
> +       unsigned long irqflags;
> +       struct list_head list;
> +       struct dma_chan *dmach = NULL;

Redundant assignment.
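
A minimal sketch of the change being asked for (nothing else in the
function needs to move): drop the initializer, since list_for_each_entry()
assigns dmach before it is ever read.

        struct dma_chan *dmach;

        list_for_each_entry(dmach, &mdma->ddev.channels, device_node) {
                mchan = to_hidma_chan(dmach);
                ...
        }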

> +
> +       list_for_each_entry(dmach, &mdma->ddev.channels,
> +                       device_node) {
> +               mchan = to_hidma_chan(dmach);
> +               INIT_LIST_HEAD(&list);
> +
> +               /* Get all completed descriptors */
> +               spin_lock_irqsave(&mchan->lock, irqflags);
> +               list_splice_tail_init(&mchan->completed, &list);
> +               spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +               /* Execute callbacks and run dependencies */
> +               list_for_each_entry(mdesc, &list, node) {
> +                       desc = &mdesc->desc;
> +
> +                       spin_lock_irqsave(&mchan->lock, irqflags);
> +                       dma_cookie_complete(desc);
> +                       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +                       if (desc->callback &&
> +                               (hidma_ll_status(mdma->lldev, mdesc->tre_ch)
> +                               == DMA_COMPLETE))
> +                               desc->callback(desc->callback_param);
> +
> +                       last_cookie = desc->cookie;
> +                       dma_run_dependencies(desc);
> +               }
> +
> +               /* Free descriptors */
> +               spin_lock_irqsave(&mchan->lock, irqflags);
> +               list_splice_tail_init(&list, &mchan->free);
> +               spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       }
> +}
> +
> +/*
> + * Called once for each submitted descriptor.
> + * PM is locked once for each descriptor that is currently
> + * in execution.
> + */
> +static void hidma_callback(void *data)
> +{
> +       struct hidma_desc *mdesc = data;
> +       struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
> +       unsigned long irqflags;
> +       struct dma_device *ddev = mchan->chan.device;
> +       struct hidma_dev *dmadev = to_hidma_dev(ddev);
> +       bool queued = false;
> +
> +       dev_dbg(dmadev->ddev.dev, "callback: data:0x%p\n", data);
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +
> +       if (mdesc->node.next) {
> +               /* Delete from the active list, add to completed list */
> +               list_move_tail(&mdesc->node, &mchan->completed);
> +               queued = true;
> +       }
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       hidma_process_completed(dmadev);
> +
> +       if (queued) {
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +}
> +
> +static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
> +{
> +       struct hidma_chan *mchan;
> +       struct dma_device *ddev;
> +
> +       mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
> +       if (!mchan)
> +               return -ENOMEM;
> +
> +       ddev = &dmadev->ddev;
> +       mchan->dma_sig = dma_sig;
> +       mchan->dmadev = dmadev;
> +       mchan->chan.device = ddev;
> +       dma_cookie_init(&mchan->chan);
> +
> +       INIT_LIST_HEAD(&mchan->free);
> +       INIT_LIST_HEAD(&mchan->prepared);
> +       INIT_LIST_HEAD(&mchan->active);
> +       INIT_LIST_HEAD(&mchan->completed);
> +
> +       spin_lock_init(&mchan->lock);
> +       list_add_tail(&mchan->chan.device_node, &ddev->channels);
> +       dmadev->ddev.chancnt++;
> +       return 0;
> +}
> +
> +static void hidma_issue_pending(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +
> +       /* PM will be released in hidma_callback function. */
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       hidma_ll_start(dmadev->lldev);
> +}
> +
> +static enum dma_status hidma_tx_status(struct dma_chan *dmach,
> +                                       dma_cookie_t cookie,
> +                                       struct dma_tx_state *txstate)
> +{
> +       enum dma_status ret;
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +
> +       if (mchan->paused)
> +               ret = DMA_PAUSED;
> +       else
> +               ret = dma_cookie_status(dmach, cookie, txstate);
> +
> +       return ret;
> +}
> +
> +/*
> + * Submit descriptor to hardware.
> + * Lock the PM for each descriptor we are sending.
> + */
> +static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(txd->chan);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +       struct hidma_desc *mdesc;
> +       unsigned long irqflags;
> +       dma_cookie_t cookie;
> +
> +       if (!hidma_ll_isenabled(dmadev->lldev))
> +               return -ENODEV;
> +
> +       mdesc = container_of(txd, struct hidma_desc, desc);
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +
> +       /* Move descriptor to active */
> +       list_move_tail(&mdesc->node, &mchan->active);
> +
> +       /* Update cookie */
> +       cookie = dma_cookie_assign(txd);
> +
> +       hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       return cookie;
> +}
> +
> +static int hidma_alloc_chan_resources(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +       int rc = 0;
> +       struct hidma_desc *mdesc, *tmp;
> +       unsigned long irqflags;
> +       LIST_HEAD(descs);
> +       u32 i;
> +
> +       if (mchan->allocated)
> +               return 0;
> +
> +       /* Alloc descriptors for this channel */
> +       for (i = 0; i < dmadev->nr_descriptors; i++) {
> +               mdesc = kzalloc(sizeof(struct hidma_desc), GFP_KERNEL);
> +               if (!mdesc) {
> +                       rc = -ENOMEM;
> +                       break;
> +               }
> +               dma_async_tx_descriptor_init(&mdesc->desc, dmach);
> +               mdesc->desc.flags = DMA_CTRL_ACK;
> +               mdesc->desc.tx_submit = hidma_tx_submit;
> +
> +               rc = hidma_ll_request(dmadev->lldev,
> +                               mchan->dma_sig, "DMA engine", hidma_callback,
> +                               mdesc, &mdesc->tre_ch);
> +               if (rc) {
> +                       dev_err(dmach->device->dev,
> +                               "channel alloc failed at %u\n", i);
> +                       kfree(mdesc);
> +                       break;
> +               }
> +               list_add_tail(&mdesc->node, &descs);
> +       }
> +
> +       if (rc) {
> +               /* return the allocated descriptors */
> +               list_for_each_entry_safe(mdesc, tmp, &descs, node) {
> +                       hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
> +                       kfree(mdesc);
> +               }
> +               return rc;
> +       }
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_splice_tail_init(&descs, &mchan->free);
> +       mchan->allocated = true;
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       dev_dbg(dmadev->ddev.dev,
> +               "allocated channel for %u\n", mchan->dma_sig);
> +       return 1;
> +}
> +
> +static void hidma_free_chan_resources(struct dma_chan *dmach)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_dev *mdma = mchan->dmadev;
> +       struct hidma_desc *mdesc, *tmp;
> +       unsigned long irqflags;
> +       LIST_HEAD(descs);
> +
> +       if (!list_empty(&mchan->prepared) ||
> +               !list_empty(&mchan->active) ||
> +               !list_empty(&mchan->completed)) {
> +               /*
> +                * We have unfinished requests waiting.
> +                * Terminate the request from the hardware.
> +                */
> +               hidma_cleanup_pending_tre(mdma->lldev, ERR_INFO_SW,
> +                               ERR_CODE_UNEXPECTED_TERMINATE);
> +
> +               /* Give enough time for completions to be called. */
> +               msleep(100);
> +       }
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       /* Channel must be idle */
> +       WARN_ON(!list_empty(&mchan->prepared));
> +       WARN_ON(!list_empty(&mchan->active));
> +       WARN_ON(!list_empty(&mchan->completed));
> +
> +       /* Move data */
> +       list_splice_tail_init(&mchan->free, &descs);
> +
> +       /* Free descriptors */
> +       list_for_each_entry_safe(mdesc, tmp, &descs, node) {
> +               hidma_ll_free(mdma->lldev, mdesc->tre_ch);
> +               list_del(&mdesc->node);
> +               kfree(mdesc);
> +       }
> +
> +       mchan->allocated = 0;
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +       dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
> +}
> +
> +
> +static struct dma_async_tx_descriptor *
> +hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
> +                       dma_addr_t dma_src, size_t len, unsigned long flags)
> +{
> +       struct hidma_chan *mchan = to_hidma_chan(dmach);
> +       struct hidma_desc *mdesc = NULL;
> +       struct hidma_dev *mdma = mchan->dmadev;
> +       unsigned long irqflags;
> +
> +       dev_dbg(mdma->ddev.dev,
> +               "memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
> +               &dma_dest, &dma_src, len);
> +
> +       /* Get free descriptor */
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       if (!list_empty(&mchan->free)) {
> +               mdesc = list_first_entry(&mchan->free, struct hidma_desc,
> +                                       node);
> +               list_del(&mdesc->node);
> +       }
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       if (!mdesc)
> +               return NULL;
> +
> +       hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
> +                       dma_src, dma_dest, len, flags);
> +
> +       /* Place descriptor in prepared list */
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_add_tail(&mdesc->node, &mchan->prepared);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       return &mdesc->desc;
> +}
> +
> +static int hidma_terminate_all(struct dma_chan *chan)
> +{
> +       struct hidma_dev *dmadev;
> +       LIST_HEAD(head);
> +       unsigned long irqflags;
> +       LIST_HEAD(list);
> +       struct hidma_desc *tmp, *mdesc = NULL;
> +       int rc;
> +       struct hidma_chan *mchan;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "terminateall: chan:0x%p\n", mchan);
> +
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       /* give completed requests a chance to finish */
> +       hidma_process_completed(dmadev);
> +
> +       spin_lock_irqsave(&mchan->lock, irqflags);
> +       list_splice_init(&mchan->active, &list);
> +       list_splice_init(&mchan->prepared, &list);
> +       list_splice_init(&mchan->completed, &list);
> +       spin_unlock_irqrestore(&mchan->lock, irqflags);
> +
> +       /* this suspends the existing transfer */
> +       rc = hidma_ll_pause(dmadev->lldev);
> +       if (rc) {
> +               dev_err(dmadev->ddev.dev, "channel did not pause\n");
> +               goto out;
> +       }
> +
> +       /* return all user requests */
> +       list_for_each_entry_safe(mdesc, tmp, &list, node) {
> +               struct dma_async_tx_descriptor  *txd = &mdesc->desc;
> +               dma_async_tx_callback callback = mdesc->desc.callback;
> +               void *param = mdesc->desc.callback_param;
> +               enum dma_status status;
> +
> +               dma_descriptor_unmap(txd);
> +
> +               status = hidma_ll_status(dmadev->lldev, mdesc->tre_ch);
> +               /*
> +                * The API requires that no submissions are done from a
> +                * callback, so we don't need to drop the lock here
> +                */
> +               if (callback && (status == DMA_COMPLETE))
> +                       callback(param);
> +
> +               dma_run_dependencies(txd);
> +
> +               /* move myself to free_list */
> +               list_move(&mdesc->node, &mchan->free);
> +       }
> +
> +       /* reinitialize the hardware */
> +       rc = hidma_ll_setup(dmadev->lldev);
> +
> +out:
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return rc;
> +}
> +
> +static int hidma_pause(struct dma_chan *chan)
> +{
> +       struct hidma_chan *mchan;
> +       struct hidma_dev *dmadev;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "pause: chan:0x%p\n", mchan);
> +
> +       if (!mchan->paused) {
> +               pm_runtime_get_sync(dmadev->ddev.dev);
> +               if (hidma_ll_pause(dmadev->lldev))
> +                       dev_warn(dmadev->ddev.dev, "channel did not stop\n");
> +               mchan->paused = true;
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +       return 0;
> +}
> +
> +static int hidma_resume(struct dma_chan *chan)
> +{
> +       struct hidma_chan *mchan;
> +       struct hidma_dev *dmadev;
> +       int rc = 0;
> +
> +       mchan = to_hidma_chan(chan);
> +       dmadev = to_hidma_dev(mchan->chan.device);
> +       dev_dbg(dmadev->ddev.dev, "resume: chan:0x%p\n", mchan);
> +
> +       if (mchan->paused) {
> +               pm_runtime_get_sync(dmadev->ddev.dev);
> +               rc = hidma_ll_resume(dmadev->lldev);
> +               if (!rc)
> +                       mchan->paused = false;
> +               else
> +                       dev_err(dmadev->ddev.dev,
> +                                       "failed to resume the channel");
> +               pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +               pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       }
> +       return rc;
> +}
> +
> +static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
> +{
> +       struct hidma_lldev **lldev_ptr = arg;
> +       irqreturn_t ret;
> +       struct hidma_dev *dmadev = to_hidma_dev_from_lldev(lldev_ptr);
> +
> +       /*
> +        * All interrupts are request driven.
> +        * HW doesn't send an interrupt by itself.
> +        */
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       ret = hidma_ll_inthandler(chirq, *lldev_ptr);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return ret;
> +}
> +
> +static int hidma_probe(struct platform_device *pdev)
> +{
> +       struct hidma_dev *dmadev;
> +       int rc = 0;
> +       struct resource *trca_resource;
> +       struct resource *evca_resource;
> +       int chirq;
> +       int current_channel_index = atomic_read(&channel_ref_count);
> +       void *evca;
> +       void *trca;
> +
> +       pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
> +       pm_runtime_use_autosuspend(&pdev->dev);
> +       pm_runtime_set_active(&pdev->dev);
> +       pm_runtime_enable(&pdev->dev);
> +
> +       trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);

> +       if (!trca_resource) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }

Why did you ignore my comment about this block?
Remove that condition entirely.
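
devm_ioremap_resource() already rejects a NULL resource and returns an
error pointer, so a minimal sketch of the simplified block (untested, and
using PTR_ERR() so the real error code is kept instead of -ENOMEM) would be:

        trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        trca = devm_ioremap_resource(&pdev->dev, trca_resource);
        if (IS_ERR(trca)) {
                rc = PTR_ERR(trca);
                goto bailout;
        }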

> +
> +       trca = devm_ioremap_resource(&pdev->dev, trca_resource);
> +       if (IS_ERR(trca)) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
> +       if (!evca_resource) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }

Ditto.

> +
> +       evca = devm_ioremap_resource(&pdev->dev, evca_resource);
> +       if (IS_ERR(evca)) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       /*
> +        * This driver only handles the channel IRQs.
> +        * Common IRQ is handled by the management driver.
> +        */
> +       chirq = platform_get_irq(pdev, 0);
> +       if (chirq < 0) {
> +               rc = -ENODEV;
> +               goto bailout;
> +       }
> +
> +       dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
> +       if (!dmadev) {
> +               rc = -ENOMEM;
> +               goto bailout;
> +       }
> +
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +       spin_lock_init(&dmadev->lock);
> +       dmadev->ddev.dev = &pdev->dev;
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +
> +       dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
> +       if (WARN_ON(!pdev->dev.dma_mask)) {
> +               rc = -ENXIO;
> +               goto dmafree;
> +       }
> +
> +       dmadev->dev_evca = evca;
> +       dmadev->evca_resource = evca_resource;
> +       dmadev->dev_trca = trca;
> +       dmadev->trca_resource = trca_resource;
> +       dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
> +       dmadev->ddev.device_alloc_chan_resources =
> +               hidma_alloc_chan_resources;
> +       dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
> +       dmadev->ddev.device_tx_status = hidma_tx_status;
> +       dmadev->ddev.device_issue_pending = hidma_issue_pending;
> +       dmadev->ddev.device_pause = hidma_pause;
> +       dmadev->ddev.device_resume = hidma_resume;
> +       dmadev->ddev.device_terminate_all = hidma_terminate_all;
> +       dmadev->ddev.copy_align = 8;
> +
> +       device_property_read_u32(&pdev->dev, "desc-count",
> +                               &dmadev->nr_descriptors);
> +
> +       if (!dmadev->nr_descriptors && nr_desc_prm)
> +               dmadev->nr_descriptors = nr_desc_prm;
> +
> +       if (!dmadev->nr_descriptors)
> +               goto dmafree;
> +
> +       if (current_channel_index > MAX_HIDMA_CHANNELS)
> +               goto dmafree;
> +
> +       dmadev->evridx = -1;
> +       device_property_read_u32(&pdev->dev, "event-channel", &dmadev->evridx);
> +
> +       /* kernel command line override for the guest machine */
> +       if (event_channel_idx[current_channel_index] != -1)
> +               dmadev->evridx = event_channel_idx[current_channel_index];
> +
> +       if (dmadev->evridx == -1)
> +               goto dmafree;
> +
> +       /* Set DMA mask to 64 bits. */
> +       rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> +       if (rc) {
> +               dev_warn(&pdev->dev, "unable to set coherent mask to 64");
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> +               if (rc)
> +                       goto dmafree;
> +       }
> +
> +       dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
> +                               dmadev->nr_descriptors, dmadev->dev_trca,
> +                               dmadev->dev_evca, dmadev->evridx);
> +       if (!dmadev->lldev) {
> +               rc = -EPROBE_DEFER;
> +               goto dmafree;
> +       }
> +
> +       rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
> +                             "qcom-hidma", &dmadev->lldev);
> +       if (rc)
> +               goto uninit;
> +
> +       INIT_LIST_HEAD(&dmadev->ddev.channels);
> +       rc = hidma_chan_init(dmadev, 0);
> +       if (rc)
> +               goto uninit;
> +
> +       rc = dma_selftest_memcpy(&dmadev->ddev);
> +       if (rc)
> +               goto uninit;
> +
> +       rc = dma_async_device_register(&dmadev->ddev);
> +       if (rc)
> +               goto uninit;
> +
> +       hidma_debug_init(dmadev);
> +       dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
> +       platform_set_drvdata(pdev, dmadev);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       atomic_inc(&channel_ref_count);
> +       return 0;
> +
> +uninit:
> +       hidma_debug_uninit(dmadev);
> +       hidma_ll_uninit(dmadev->lldev);
> +dmafree:
> +       if (dmadev)
> +               hidma_free(dmadev);
> +bailout:
> +       pm_runtime_disable(&pdev->dev);
> +       pm_runtime_put_sync_suspend(&pdev->dev);

Are you sure this is the appropriate sequence?

I think

pm_runtime_put();
pm_runtime_disable();

will do the job.
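
In other words, a sketch of the error path with the usual teardown order,
put first and then disable:

bailout:
        pm_runtime_put(&pdev->dev);
        pm_runtime_disable(&pdev->dev);
        return rc;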

> +       return rc;
> +}
> +
> +static int hidma_remove(struct platform_device *pdev)
> +{
> +       struct hidma_dev *dmadev = platform_get_drvdata(pdev);
> +
> +       dev_dbg(&pdev->dev, "removing\n");

Useless message.

> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +
> +       dma_async_device_unregister(&dmadev->ddev);
> +       hidma_debug_uninit(dmadev);
> +       hidma_ll_uninit(dmadev->lldev);
> +       hidma_free(dmadev);
> +
> +       dev_info(&pdev->dev, "HI-DMA engine removed\n");
> +       pm_runtime_put_sync_suspend(&pdev->dev);
> +       pm_runtime_disable(&pdev->dev);
> +
> +       return 0;
> +}
> +
> +#if IS_ENABLED(CONFIG_ACPI)
> +static const struct acpi_device_id hidma_acpi_ids[] = {
> +       {"QCOM8061"},
> +       {},
> +};
> +#endif
> +
> +static const struct of_device_id hidma_match[] = {
> +       { .compatible = "qcom,hidma-1.0", },
> +       {},
> +};
> +MODULE_DEVICE_TABLE(of, hidma_match);
> +
> +static struct platform_driver hidma_driver = {
> +       .probe = hidma_probe,
> +       .remove = hidma_remove,
> +       .driver = {
> +               .name = "hidma",
> +               .of_match_table = hidma_match,
> +               .acpi_match_table = ACPI_PTR(hidma_acpi_ids),
> +       },
> +};
> +module_platform_driver(hidma_driver);
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
> new file mode 100644
> index 0000000..195d6b5
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma.h
> @@ -0,0 +1,157 @@
> +/*
> + * Qualcomm Technologies HIDMA data structures
> + *
> + * Copyright (c) 2014, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef QCOM_HIDMA_H
> +#define QCOM_HIDMA_H
> +
> +#include <linux/kfifo.h>
> +#include <linux/interrupt.h>
> +#include <linux/dmaengine.h>
> +
> +#define TRE_SIZE                       32 /* each TRE is 32 bytes  */
> +#define TRE_CFG_IDX                    0
> +#define TRE_LEN_IDX                    1
> +#define TRE_SRC_LOW_IDX                2
> +#define TRE_SRC_HI_IDX                 3
> +#define TRE_DEST_LOW_IDX               4
> +#define TRE_DEST_HI_IDX                5
> +
> +struct hidma_tx_status {
> +       u8 err_info;                    /* error record in this transfer    */
> +       u8 err_code;                    /* completion code                  */
> +};
> +
> +struct hidma_tre {
> +       atomic_t allocated;             /* if this channel is allocated     */
> +       bool queued;                    /* flag whether this is pending     */
> +       u16 status;                     /* status                           */
> +       u32 chidx;                      /* index of the tre         */
> +       u32 dma_sig;                    /* signature of the tre     */
> +       const char *dev_name;           /* name of the device               */
> +       void (*callback)(void *data);   /* requester callback               */
> +       void *data;                     /* Data associated with this channel*/
> +       struct hidma_lldev *lldev;      /* lldma device pointer             */
> +       u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
> +       u32 tre_index;                  /* the offset where this was written*/
> +       u32 int_flags;                  /* interrupt flags*/
> +};
> +
> +struct hidma_lldev {
> +       bool initialized;               /* initialized flag               */
> +       u8 trch_state;                  /* trch_state of the device       */
> +       u8 evch_state;                  /* evch_state of the device       */
> +       u8 evridx;                      /* event channel to notify        */
> +       u32 nr_tres;                    /* max number of configs          */
> +       spinlock_t lock;                /* reentrancy                     */
> +       struct hidma_tre *trepool;      /* trepool of user configs */
> +       struct device *dev;             /* device                         */
> +       void __iomem *trca;             /* Transfer Channel address       */
> +       void __iomem *evca;             /* Event Channel address          */
> +       struct hidma_tre
> +               **pending_tre_list;     /* Pointers to pending TREs       */
> +       struct hidma_tx_status
> +               *tx_status_list;        /* Pointers to pending TREs status*/
> +       s32 pending_tre_count;          /* Number of TREs pending         */
> +
> +       void *tre_ring;         /* TRE ring                       */
> +       dma_addr_t tre_ring_handle;     /* TRE ring to be shared with HW  */
> +       u32 tre_ring_size;              /* Byte size of the ring          */
> +       u32 tre_processed_off;          /* last processed TRE              */
> +
> +       void *evre_ring;                /* EVRE ring                       */
> +       dma_addr_t evre_ring_handle;    /* EVRE ring to be shared with HW  */
> +       u32 evre_ring_size;             /* Byte size of the ring          */
> +       u32 evre_processed_off; /* last processed EVRE             */
> +
> +       u32 tre_write_offset;           /* TRE write location              */
> +       struct tasklet_struct task;     /* task delivering notifications   */
> +       DECLARE_KFIFO_PTR(handoff_fifo,
> +               struct hidma_tre *);    /* pending TREs FIFO              */
> +};
> +
> +struct hidma_desc {
> +       struct dma_async_tx_descriptor  desc;
> +       /* link list node for this channel*/
> +       struct list_head                node;
> +       u32                             tre_ch;
> +};
> +
> +struct hidma_chan {
> +       bool                            paused;
> +       bool                            allocated;
> +       char                            dbg_name[16];
> +       u32                             dma_sig;
> +
> +       /*
> +        * active descriptor on this channel
> +        * It is used by the DMA complete notification to
> +        * locate the descriptor that initiated the transfer.
> +        */
> +       struct dentry                   *debugfs;
> +       struct dentry                   *stats;
> +       struct hidma_dev                *dmadev;
> +
> +       struct dma_chan                 chan;
> +       struct list_head                free;
> +       struct list_head                prepared;
> +       struct list_head                active;
> +       struct list_head                completed;
> +
> +       /* Lock for this structure */
> +       spinlock_t                      lock;
> +};
> +
> +struct hidma_dev {
> +       int                             evridx;
> +       u32                             nr_descriptors;
> +
> +       struct hidma_lldev              *lldev;
> +       void                            __iomem *dev_trca;
> +       struct resource                 *trca_resource;
> +       void                            __iomem *dev_evca;
> +       struct resource                 *evca_resource;
> +
> +       /* used to protect the pending channel list*/
> +       spinlock_t                      lock;
> +       struct dma_device               ddev;
> +
> +       struct dentry                   *debugfs;
> +       struct dentry                   *stats;
> +};
> +
> +int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
> +                       const char *dev_name,
> +                       void (*callback)(void *data), void *data, u32 *tre_ch);
> +
> +void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
> +enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
> +bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
> +int hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
> +int hidma_ll_start(struct hidma_lldev *llhndl);
> +int hidma_ll_pause(struct hidma_lldev *llhndl);
> +int hidma_ll_resume(struct hidma_lldev *llhndl);
> +void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
> +       dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
> +int hidma_ll_setup(struct hidma_lldev *lldev);
> +struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
> +                       void __iomem *trca, void __iomem *evca,
> +                       u8 evridx);
> +int hidma_ll_uninit(struct hidma_lldev *llhndl);
> +irqreturn_t hidma_ll_inthandler(int irq, void *arg);
> +void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
> +                               u8 err_code);
> +int hidma_debug_init(struct hidma_dev *dmadev);
> +void hidma_debug_uninit(struct hidma_dev *dmadev);
> +#endif
> diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
> new file mode 100644
> index 0000000..e0e6711
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_dbg.c
> @@ -0,0 +1,225 @@
> +/*
> + * Qualcomm Technologies HIDMA debug file
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/list.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "hidma.h"
> +
> +void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
> +{
> +       struct hidma_lldev *lldev = llhndl;
> +       struct hidma_tre *tre;
> +       u32 length;
> +       dma_addr_t src_start;
> +       dma_addr_t dest_start;
> +       u32 *tre_local;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev, "invalid TRE number in chstats:%d",
> +                       tre_ch);
> +               return;
> +       }
> +       tre = &lldev->trepool[tre_ch];
> +       seq_printf(s, "------Channel %d -----\n", tre_ch);
> +       seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
> +       seq_printf(s, "queued=0x%x\n", tre->queued);
> +       seq_printf(s, "err_info=0x%x\n",
> +                  lldev->tx_status_list[tre->chidx].err_info);
> +       seq_printf(s, "err_code=0x%x\n",
> +                  lldev->tx_status_list[tre->chidx].err_code);
> +       seq_printf(s, "status=0x%x\n", tre->status);
> +       seq_printf(s, "chidx=0x%x\n", tre->chidx);
> +       seq_printf(s, "dma_sig=0x%x\n", tre->dma_sig);
> +       seq_printf(s, "dev_name=%s\n", tre->dev_name);
> +       seq_printf(s, "callback=%p\n", tre->callback);
> +       seq_printf(s, "data=%p\n", tre->data);
> +       seq_printf(s, "tre_index=0x%x\n", tre->tre_index);
> +
> +       tre_local = &tre->tre_local[0];
> +       src_start = tre_local[TRE_SRC_LOW_IDX];
> +       src_start = ((u64)(tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
> +       dest_start = tre_local[TRE_DEST_LOW_IDX];
> +       dest_start += ((u64)(tre_local[TRE_DEST_HI_IDX]) << 32);
> +       length = tre_local[TRE_LEN_IDX];
> +
> +       seq_printf(s, "src=%pap\n", &src_start);
> +       seq_printf(s, "dest=%pap\n", &dest_start);
> +       seq_printf(s, "length=0x%x\n", length);
> +}
> +
> +void hidma_ll_devstats(struct seq_file *s, void *llhndl)
> +{
> +       struct hidma_lldev *lldev = llhndl;
> +
> +       seq_puts(s, "------Device -----\n");
> +       seq_printf(s, "lldev init=0x%x\n", lldev->initialized);
> +       seq_printf(s, "trch_state=0x%x\n", lldev->trch_state);
> +       seq_printf(s, "evch_state=0x%x\n", lldev->evch_state);
> +       seq_printf(s, "evridx=0x%x\n", lldev->evridx);
> +       seq_printf(s, "nr_tres=0x%x\n", lldev->nr_tres);
> +       seq_printf(s, "trca=%p\n", lldev->trca);
> +       seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
> +       seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
> +       seq_printf(s, "tre_ring_size=0x%x\n", lldev->tre_ring_size);
> +       seq_printf(s, "tre_processed_off=0x%x\n", lldev->tre_processed_off);
> +       seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
> +       seq_printf(s, "evca=%p\n", lldev->evca);
> +       seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
> +       seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
> +       seq_printf(s, "evre_ring_size=0x%x\n", lldev->evre_ring_size);
> +       seq_printf(s, "evre_processed_off=0x%x\n", lldev->evre_processed_off);
> +       seq_printf(s, "tre_write_offset=0x%x\n", lldev->tre_write_offset);
> +}
> +
> +/**
> + * hidma_chan_stats: display HIDMA channel statistics
> + *
> + * Display the statistics for the current HIDMA virtual channel device.
> + */
> +static int hidma_chan_stats(struct seq_file *s, void *unused)
> +{
> +       struct hidma_chan *mchan = s->private;
> +       struct hidma_desc *mdesc;
> +       struct hidma_dev *dmadev = mchan->dmadev;
> +
> +       pm_runtime_get_sync(dmadev->ddev.dev);
> +       seq_printf(s, "paused=%u\n", mchan->paused);
> +       seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
> +       seq_puts(s, "prepared\n");
> +       list_for_each_entry(mdesc, &mchan->prepared, node)
> +               hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
> +
> +       seq_puts(s, "active\n");
> +               list_for_each_entry(mdesc, &mchan->active, node)
> +                       hidma_ll_chstats(s, mchan->dmadev->lldev,
> +                               mdesc->tre_ch);
> +
> +       seq_puts(s, "completed\n");
> +               list_for_each_entry(mdesc, &mchan->completed, node)
> +                       hidma_ll_chstats(s, mchan->dmadev->lldev,
> +                               mdesc->tre_ch);
> +
> +       hidma_ll_devstats(s, mchan->dmadev->lldev);
> +       pm_runtime_mark_last_busy(dmadev->ddev.dev);
> +       pm_runtime_put_autosuspend(dmadev->ddev.dev);
> +       return 0;
> +}
> +
> +/**
> + * hidma_dma_info: display HIDMA device info
> + *
> + * Display the info for the current HIDMA device.
> + */
> +static int hidma_dma_info(struct seq_file *s, void *unused)
> +{
> +       struct hidma_dev *dmadev = s->private;
> +       resource_size_t sz;
> +
> +       seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
> +       seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
> +       seq_printf(s, "dev_trca_phys=%pa\n",
> +               &dmadev->trca_resource->start);
> +       sz = resource_size(dmadev->trca_resource);
> +       seq_printf(s, "dev_trca_size=%pa\n", &sz);
> +       seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
> +       seq_printf(s, "dev_evca_phys=%pa\n",
> +               &dmadev->evca_resource->start);
> +       sz = resource_size(dmadev->evca_resource);
> +       seq_printf(s, "dev_evca_size=%pa\n", &sz);
> +       return 0;
> +}
> +
> +static int hidma_chan_stats_open(struct inode *inode, struct file *file)
> +{
> +       return single_open(file, hidma_chan_stats, inode->i_private);
> +}
> +
> +static int hidma_dma_info_open(struct inode *inode, struct file *file)
> +{
> +       return single_open(file, hidma_dma_info, inode->i_private);
> +}
> +
> +static const struct file_operations hidma_chan_fops = {
> +       .open = hidma_chan_stats_open,
> +       .read = seq_read,
> +       .llseek = seq_lseek,
> +       .release = single_release,
> +};
> +
> +static const struct file_operations hidma_dma_fops = {
> +       .open = hidma_dma_info_open,
> +       .read = seq_read,
> +       .llseek = seq_lseek,
> +       .release = single_release,
> +};
> +
> +void hidma_debug_uninit(struct hidma_dev *dmadev)
> +{
> +       debugfs_remove_recursive(dmadev->debugfs);
> +       debugfs_remove_recursive(dmadev->stats);
> +}
> +
> +int hidma_debug_init(struct hidma_dev *dmadev)
> +{
> +       int rc = 0;
> +       int chidx = 0;
> +       struct list_head *position = NULL;
> +
> +       dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev),
> +                                               NULL);
> +       if (!dmadev->debugfs) {
> +               rc = -ENODEV;
> +               return rc;
> +       }
> +
> +       /* walk through the virtual channel list */
> +       list_for_each(position, &dmadev->ddev.channels) {
> +               struct hidma_chan *chan;
> +
> +               chan = list_entry(position, struct hidma_chan,
> +                               chan.device_node);
> +               sprintf(chan->dbg_name, "chan%d", chidx);
> +               chan->debugfs = debugfs_create_dir(chan->dbg_name,
> +                                               dmadev->debugfs);
> +               if (!chan->debugfs) {
> +                       rc = -ENOMEM;
> +                       goto cleanup;
> +               }
> +               chan->stats = debugfs_create_file("stats", S_IRUGO,
> +                               chan->debugfs, chan,
> +                               &hidma_chan_fops);
> +               if (!chan->stats) {
> +                       rc = -ENOMEM;
> +                       goto cleanup;
> +               }
> +               chidx++;
> +       }
> +
> +       dmadev->stats = debugfs_create_file("stats", S_IRUGO,
> +                       dmadev->debugfs, dmadev,
> +                       &hidma_dma_fops);
> +       if (!dmadev->stats) {
> +               rc = -ENOMEM;
> +               goto cleanup;
> +       }
> +
> +       return 0;
> +cleanup:
> +       hidma_debug_uninit(dmadev);
> +       return rc;
> +}
> diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
> new file mode 100644
> index 0000000..f5c0b8b
> --- /dev/null
> +++ b/drivers/dma/qcom/hidma_ll.c
> @@ -0,0 +1,944 @@
> +/*
> + * Qualcomm Technologies HIDMA DMA engine low level code
> + *
> + * Copyright (c) 2015, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/dmaengine.h>
> +#include <linux/slab.h>
> +#include <linux/interrupt.h>
> +#include <linux/mm.h>
> +#include <linux/highmem.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/delay.h>
> +#include <linux/atomic.h>
> +#include <linux/iopoll.h>
> +#include <linux/kfifo.h>
> +
> +#include "hidma.h"
> +
> +#define EVRE_SIZE                      16 /* each EVRE is 16 bytes */
> +
> +#define TRCA_CTRLSTS_OFFSET            0x0
> +#define TRCA_RING_LOW_OFFSET           0x8
> +#define TRCA_RING_HIGH_OFFSET          0xC
> +#define TRCA_RING_LEN_OFFSET           0x10
> +#define TRCA_READ_PTR_OFFSET           0x18
> +#define TRCA_WRITE_PTR_OFFSET          0x20
> +#define TRCA_DOORBELL_OFFSET           0x400
> +
> +#define EVCA_CTRLSTS_OFFSET            0x0
> +#define EVCA_INTCTRL_OFFSET            0x4
> +#define EVCA_RING_LOW_OFFSET           0x8
> +#define EVCA_RING_HIGH_OFFSET          0xC
> +#define EVCA_RING_LEN_OFFSET           0x10
> +#define EVCA_READ_PTR_OFFSET           0x18
> +#define EVCA_WRITE_PTR_OFFSET          0x20
> +#define EVCA_DOORBELL_OFFSET           0x400
> +
> +#define EVCA_IRQ_STAT_OFFSET           0x100
> +#define EVCA_IRQ_CLR_OFFSET            0x108
> +#define EVCA_IRQ_EN_OFFSET             0x110
> +
> +#define EVRE_CFG_IDX                   0
> +#define EVRE_LEN_IDX                   1
> +#define EVRE_DEST_LOW_IDX              2
> +#define EVRE_DEST_HI_IDX               3
> +
> +#define EVRE_ERRINFO_BIT_POS           24
> +#define EVRE_CODE_BIT_POS              28
> +
> +#define EVRE_ERRINFO_MASK              0xF
> +#define EVRE_CODE_MASK                 0xF
> +
> +#define CH_CONTROL_MASK                0xFF
> +#define CH_STATE_MASK                  0xFF
> +#define CH_STATE_BIT_POS               0x8
> +
> +#define MAKE64(high, low) (((u64)(high) << 32) | (low))
> +
> +#define IRQ_EV_CH_EOB_IRQ_BIT_POS      0
> +#define IRQ_EV_CH_WR_RESP_BIT_POS      1
> +#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
> +#define IRQ_TR_CH_DATA_RD_ER_BIT_POS   10
> +#define IRQ_TR_CH_DATA_WR_ER_BIT_POS   11
> +#define IRQ_TR_CH_INVALID_TRE_BIT_POS  14
> +
> +#define        ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
> +               BIT(IRQ_EV_CH_WR_RESP_BIT_POS) | \
> +               BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) |   \
> +               BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS) |              \
> +               BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS) |              \
> +               BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
> +
> +enum ch_command {
> +       CH_DISABLE = 0,
> +       CH_ENABLE = 1,
> +       CH_SUSPEND = 2,
> +       CH_RESET = 9,
> +};
> +
> +enum ch_state {
> +       CH_DISABLED = 0,
> +       CH_ENABLED = 1,
> +       CH_RUNNING = 2,
> +       CH_SUSPENDED = 3,
> +       CH_STOPPED = 4,
> +       CH_ERROR = 5,
> +       CH_IN_RESET = 9,
> +};
> +
> +enum tre_type {
> +       TRE_MEMCPY = 3,
> +       TRE_MEMSET = 4,
> +};
> +
> +enum evre_type {
> +       EVRE_DMA_COMPLETE = 0x23,
> +       EVRE_IMM_DATA = 0x24,
> +};
> +
> +enum err_code {
> +       EVRE_STATUS_COMPLETE = 1,
> +       EVRE_STATUS_ERROR = 4,
> +};
> +
> +void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       struct hidma_tre *tre;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre = &lldev->trepool[tre_ch];
> +       if (atomic_read(&tre->allocated) != true) {
> +               dev_err(lldev->dev, "trying to free an unused TRE:%d",
> +                       tre_ch);
> +               return;
> +       }
> +
> +       atomic_set(&tre->allocated, 0);
> +       dev_dbg(lldev->dev, "free_dma: allocated:%d tre_ch:%d\n",
> +               atomic_read(&tre->allocated), tre_ch);
> +}
> +
> +int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
> +                       const char *dev_name,
> +                       void (*callback)(void *data), void *data, u32 *tre_ch)
> +{
> +       u32 i;
> +       struct hidma_tre *tre = NULL;
> +       u32 *tre_local;
> +
> +       if (!tre_ch || !lldev)
> +               return -EINVAL;
> +
> +       /* need to have at least one empty spot in the queue */
> +       for (i = 0; i < lldev->nr_tres - 1; i++) {
> +               if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
> +                       break;
> +       }
> +
> +       if (i == (lldev->nr_tres - 1))
> +               return -ENOMEM;
> +
> +       tre = &lldev->trepool[i];
> +       tre->dma_sig = dma_sig;
> +       tre->dev_name = dev_name;
> +       tre->callback = callback;
> +       tre->data = data;
> +       tre->chidx = i;
> +       tre->status = 0;
> +       tre->queued = 0;
> +       lldev->tx_status_list[i].err_code = 0;
> +       tre->lldev = lldev;
> +       tre_local = &tre->tre_local[0];
> +       tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
> +       tre_local[TRE_CFG_IDX] |= ((lldev->evridx & 0xFF) << 8);
> +       tre_local[TRE_CFG_IDX] |= BIT(16);      /* set IEOB */
> +       *tre_ch = i;
> +       if (callback)
> +               callback(data);
> +       return 0;
> +}
> +
> +/*
> + * Multiple TREs may be queued and waiting in the
> + * pending queue.
> + */
> +static void hidma_ll_tre_complete(unsigned long arg)
> +{
> +       struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
> +       struct hidma_tre *tre;
> +
> +       while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
> +               /* call the user if it has been read by the hardware*/
> +               if (tre->callback)
> +                       tre->callback(tre->data);
> +       }
> +}
> +
> +/*
> + * Called to handle the interrupt for the channel.
> + * Return a positive number if TRE or EVRE were consumed on this run.
> + * Return a positive number if there are pending TREs or EVREs.
> + * Return 0 if there is nothing to consume or no pending TREs/EVREs found.
> + */
> +static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
> +{
> +       struct hidma_tre *tre;
> +       u32 evre_write_off;
> +       u32 evre_ring_size = lldev->evre_ring_size;
> +       u32 tre_ring_size = lldev->tre_ring_size;
> +       u32 num_completed = 0, tre_iterator, evre_iterator;
> +       unsigned long flags;
> +
> +       evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
> +       tre_iterator = lldev->tre_processed_off;
> +       evre_iterator = lldev->evre_processed_off;
> +
> +       if ((evre_write_off > evre_ring_size) ||
> +               ((evre_write_off % EVRE_SIZE) != 0)) {
> +               dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
> +               return 0;
> +       }
> +
> +       /*
> +        * By the time control reaches here the number of EVREs and TREs
> +        * may not match. Only consume the ones that hardware told us.
> +        */
> +       while ((evre_iterator != evre_write_off)) {
> +               u32 *current_evre = lldev->evre_ring + evre_iterator;
> +               u32 cfg;
> +               u8 err_info;
> +
> +               spin_lock_irqsave(&lldev->lock, flags);
> +               tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
> +               if (!tre) {
> +                       spin_unlock_irqrestore(&lldev->lock, flags);
> +                       dev_warn(lldev->dev,
> +                               "tre_index [%d] and tre out of sync\n",
> +                               tre_iterator / TRE_SIZE);
> +                       tre_iterator += TRE_SIZE;
> +                       if (tre_iterator >= tre_ring_size)
> +                               tre_iterator -= tre_ring_size;
> +                       evre_iterator += EVRE_SIZE;
> +                       if (evre_iterator >= evre_ring_size)
> +                               evre_iterator -= evre_ring_size;
> +
> +                       continue;
> +               }
> +               lldev->pending_tre_list[tre->tre_index] = NULL;
> +
> +               /*
> +                * Keep track of pending TREs that SW is expecting to receive
> +                * from HW. We got one now. Decrement our counter.
> +                */
> +               lldev->pending_tre_count--;
> +               if (lldev->pending_tre_count < 0) {
> +                       dev_warn(lldev->dev,
> +                               "tre count mismatch on completion");
> +                       lldev->pending_tre_count = 0;
> +               }
> +
> +               spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +               cfg = current_evre[EVRE_CFG_IDX];
> +               err_info = (cfg >> EVRE_ERRINFO_BIT_POS);
> +               err_info = err_info & EVRE_ERRINFO_MASK;
> +               lldev->tx_status_list[tre->chidx].err_info = err_info;
> +               lldev->tx_status_list[tre->chidx].err_code =
> +                       (cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
> +               tre->queued = 0;
> +
> +               kfifo_put(&lldev->handoff_fifo, tre);
> +               tasklet_schedule(&lldev->task);
> +
> +               tre_iterator += TRE_SIZE;
> +               if (tre_iterator >= tre_ring_size)
> +                       tre_iterator -= tre_ring_size;
> +               evre_iterator += EVRE_SIZE;
> +               if (evre_iterator >= evre_ring_size)
> +                       evre_iterator -= evre_ring_size;
> +
> +               /*
> +                * Read the new event descriptor written by the HW.
> +                * As we are processing the delivered events, other events
> +                * get queued to the SW for processing.
> +                */
> +               evre_write_off =
> +                       readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
> +               num_completed++;
> +       }
> +
> +       if (num_completed) {
> +               u32 evre_read_off = (lldev->evre_processed_off +
> +                               EVRE_SIZE * num_completed);
> +               u32 tre_read_off = (lldev->tre_processed_off +
> +                               TRE_SIZE * num_completed);
> +
> +               evre_read_off = evre_read_off % evre_ring_size;
> +               tre_read_off = tre_read_off % tre_ring_size;
> +
> +               writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
> +
> +               /* record the last processed tre offset */
> +               lldev->tre_processed_off = tre_read_off;
> +               lldev->evre_processed_off = evre_read_off;
> +       }
> +
> +       return num_completed;
> +}
> +
> +void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
> +                               u8 err_code)
> +{
> +       u32 tre_iterator;
> +       struct hidma_tre *tre;
> +       u32 tre_ring_size = lldev->tre_ring_size;
> +       int num_completed = 0;
> +       u32 tre_read_off;
> +       unsigned long flags;
> +
> +       tre_iterator = lldev->tre_processed_off;
> +       while (lldev->pending_tre_count) {
> +               int tre_index = tre_iterator / TRE_SIZE;
> +
> +               spin_lock_irqsave(&lldev->lock, flags);
> +               tre = lldev->pending_tre_list[tre_index];
> +               if (!tre) {
> +                       spin_unlock_irqrestore(&lldev->lock, flags);
> +                       tre_iterator += TRE_SIZE;
> +                       if (tre_iterator >= tre_ring_size)
> +                               tre_iterator -= tre_ring_size;
> +                       continue;
> +               }
> +               lldev->pending_tre_list[tre_index] = NULL;
> +               lldev->pending_tre_count--;
> +               if (lldev->pending_tre_count < 0) {
> +                       dev_warn(lldev->dev,
> +                               "tre count mismatch on completion");
> +                       lldev->pending_tre_count = 0;
> +               }
> +               spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +               lldev->tx_status_list[tre->chidx].err_info = err_info;
> +               lldev->tx_status_list[tre->chidx].err_code = err_code;
> +               tre->queued = 0;
> +
> +               kfifo_put(&lldev->handoff_fifo, tre);
> +               tasklet_schedule(&lldev->task);
> +
> +               tre_iterator += TRE_SIZE;
> +               if (tre_iterator >= tre_ring_size)
> +                       tre_iterator -= tre_ring_size;
> +
> +               num_completed++;
> +       }
> +       tre_read_off = (lldev->tre_processed_off +
> +                       TRE_SIZE * num_completed);
> +
> +       tre_read_off = tre_read_off % tre_ring_size;
> +
> +       /* record the last processed tre offset */
> +       lldev->tre_processed_off = tre_read_off;
> +}
> +
> +static int hidma_ll_reset(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_RESET << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Allow the DMA logic to quiesce after reset.
> +        * Poll the channel state every 1ms, timing out after 10ms.
> +        */
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "transfer channel did not reset\n");
> +               return ret;
> +       }
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_RESET << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Allow the DMA logic to quiesce after reset.
> +        * Poll the channel state every 1ms, timing out after 10ms.
> +        */
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_DISABLED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       lldev->trch_state = CH_DISABLED;
> +       lldev->evch_state = CH_DISABLED;
> +       return 0;
> +}
> +
> +static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
> +{
> +       writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
> +       dev_dbg(lldev->dev, "enableirq\n");
> +}
> +
> +/*
> + * The interrupt handler for HIDMA will try to consume as many pending
> + * EVRE from the event queue as possible. Each EVRE has an associated
> + * TRE that holds the user interface parameters. EVRE reports the
> + * result of the transaction. Hardware guarantees ordering between EVREs
> + * and TREs. We use last processed offset to figure out which TRE is
> + * associated with which EVRE. If two TREs are consumed by HW, the EVREs
> + * are in order in the event ring.
> + *
> + * This handler makes a single pass to consume EVREs. Other EVREs may
> + * be delivered while it runs; it will try to consume any newly arrived
> + * EVREs one more time before returning.
> + *
> + * For unprocessed EVREs, hardware will trigger another interrupt until
> + * all the interrupt bits are cleared.
> + *
> + * Hardware guarantees that by the time interrupt is observed, all data
> + * transactions in flight are delivered to their respective places and
> + * are visible to the CPU.
> + *
> + * On demand paging for IOMMU is only supported for PCIe via PRI
> + * (Page Request Interface) not for HIDMA. All other hardware instances
> + * including HIDMA work on pinned DMA addresses.
> + *
> + * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
> + * IOMMU latency will be built into the data movement time. By the time
> + * interrupt happens, IOMMU lookups + data movement has already taken place.
> + *
> + * While the first read in a typical PCI endpoint ISR flushes all outstanding
> + * requests traditionally to the destination, this concept does not apply
> + * here for this HW.
> + */
> +static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
> +{
> +       u32 status;
> +       u32 enable;
> +       u32 cause;
> +       int repeat = 2;
> +       unsigned long timeout;
> +
> +       /*
> +        * Fine tuned for this HW...
> +        *
> +        * This ISR has been designed for this particular hardware. Relaxed read
> +        * and write accessors are used for performance reasons due to interrupt
> +        * delivery guarantees. Do not copy this code blindly and expect
> +        * that to work.
> +        */
> +       status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
> +       cause = status & enable;
> +
> +       if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
> +                       (cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
> +                       (cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
> +                       (cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
> +                       (cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
> +               u8 err_code = EVRE_STATUS_ERROR;
> +               u8 err_info = 0xFF;
> +
> +               /* Clear out pending interrupts */
> +               writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +               dev_err(lldev->dev,
> +                       "error 0x%x, resetting...\n", cause);
> +
> +               hidma_cleanup_pending_tre(lldev, err_info, err_code);
> +
> +               /* reset the channel for recovery */
> +               if (hidma_ll_setup(lldev)) {
> +                       dev_err(lldev->dev,
> +                               "channel reinitialize failed after error\n");
> +                       return;
> +               }
> +               hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +               return;
> +       }
> +
> +       /*
> +        * Try to consume as many EVREs as possible.
> +        * skip this loop if the interrupt is spurious.
> +        */
> +       while (cause && repeat) {
> +               unsigned long start = jiffies;
> +
> +               /* This timeout should be sufficient for the core to finish */
> +               timeout = start + msecs_to_jiffies(500);
> +
> +               while (lldev->pending_tre_count) {
> +                       hidma_handle_tre_completion(lldev);
> +                       if (time_is_before_jiffies(timeout)) {
> +                               dev_warn(lldev->dev,
> +                                       "ISR timeout %lx-%lx from %lx [%d]\n",
> +                                       jiffies, timeout, start,
> +                                       lldev->pending_tre_count);
> +                               break;
> +                       }
> +               }
> +
> +               /* We consumed TREs or there are pending TREs or EVREs. */
> +               writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +               /*
> +                * Another interrupt might have arrived while we are
> +                * processing this one. Read the new cause.
> +                */
> +               status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +               enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
> +               cause = status & enable;
> +
> +               repeat--;
> +       }
> +}
> +
> +
> +static int hidma_ll_enable(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val &= ~(CH_CONTROL_MASK << 16);
> +       val |= (CH_ENABLE << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "event channel did not get enabled\n");
> +               return ret;
> +       }
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_ENABLE << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               ((((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_ENABLED) ||
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_RUNNING)),
> +               1000, 10000);
> +       if (ret) {
> +               dev_err(lldev->dev,
> +                       "transfer channel did not get enabled\n");
> +               return ret;
> +       }
> +
> +       lldev->trch_state = CH_ENABLED;
> +       lldev->evch_state = CH_ENABLED;
> +
> +       return 0;
> +}
> +
> +int hidma_ll_resume(struct hidma_lldev *lldev)
> +{
> +       return hidma_ll_enable(lldev);
> +}
> +
> +static int hidma_ll_hw_start(struct hidma_lldev *lldev)
> +{
> +       int rc = 0;
> +       unsigned long irqflags;
> +
> +       spin_lock_irqsave(&lldev->lock, irqflags);
> +       writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
> +       spin_unlock_irqrestore(&lldev->lock, irqflags);
> +
> +       return rc;
> +}
> +
> +bool hidma_ll_isenabled(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +
> +       /* both channels have to be enabled before calling this function */
> +       if (((lldev->trch_state == CH_ENABLED) ||
> +               (lldev->trch_state == CH_RUNNING)) &&
> +               ((lldev->evch_state == CH_ENABLED) ||
> +                       (lldev->evch_state == CH_RUNNING)))
> +               return true;
> +
> +       dev_dbg(lldev->dev, "channels are not enabled or are in error state");
> +       return false;
> +}
> +
> +int hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       struct hidma_tre *tre;
> +       int rc = 0;
> +       unsigned long flags;
> +
> +       tre = &lldev->trepool[tre_ch];
> +
> +       /* copy the TRE into its location in the TRE ring */
> +       spin_lock_irqsave(&lldev->lock, flags);
> +       tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
> +       lldev->pending_tre_list[tre->tre_index] = tre;
> +       memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
> +               TRE_SIZE);
> +       lldev->tx_status_list[tre->chidx].err_code = 0;
> +       lldev->tx_status_list[tre->chidx].err_info = 0;
> +       tre->queued = 1;
> +       lldev->pending_tre_count++;
> +       lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
> +                               % lldev->tre_ring_size;
> +       spin_unlock_irqrestore(&lldev->lock, flags);
> +       return rc;
> +}
> +
> +int hidma_ll_start(struct hidma_lldev *lldev)
> +{
> +       return hidma_ll_hw_start(lldev);
> +}
> +
> +/*
> + * Note that even though we stop this channel, a transaction
> + * already in flight will still complete and invoke its
> + * callback. Stopping only prevents new requests from being
> + * issued.
> + */
> +int hidma_ll_pause(struct hidma_lldev *lldev)
> +{
> +       u32 val;
> +       int ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       lldev->evch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       lldev->trch_state = (val >> CH_STATE_BIT_POS) & CH_STATE_MASK;
> +
> +       /* already suspended by this OS */
> +       if ((lldev->trch_state == CH_SUSPENDED) ||
> +               (lldev->evch_state == CH_SUSPENDED))
> +               return 0;
> +
> +       /* already stopped by the manager */
> +       if ((lldev->trch_state == CH_STOPPED) ||
> +               (lldev->evch_state == CH_STOPPED))
> +               return 0;
> +
> +       val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_SUSPEND << 16);
> +       writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Wait for the suspend request to take effect on the transfer
> +        * channel. Poll the channel state every 1ms, timing out after 10ms.
> +        */
> +       ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
> +       val = val & ~(CH_CONTROL_MASK << 16);
> +       val = val | (CH_SUSPEND << 16);
> +       writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
> +
> +       /*
> +        * Wait for the suspend request to take effect on the event
> +        * channel. Poll the channel state every 1ms, timing out after 10ms.
> +        */
> +       ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
> +               (((val >> CH_STATE_BIT_POS) & CH_STATE_MASK) == CH_SUSPENDED),
> +               1000, 10000);
> +       if (ret)
> +               return ret;
> +
> +       lldev->trch_state = CH_SUSPENDED;
> +       lldev->evch_state = CH_SUSPENDED;
> +       dev_dbg(lldev->dev, "stop\n");
> +
> +       return 0;
> +}
> +
> +void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
> +       dma_addr_t src, dma_addr_t dest, u32 len, u32 flags)
> +{
> +       struct hidma_tre *tre;
> +       u32 *tre_local;
> +
> +       if (tre_ch >= lldev->nr_tres) {
> +               dev_err(lldev->dev,
> +                       "invalid TRE number in transfer params:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre = &lldev->trepool[tre_ch];
> +       if (atomic_read(&tre->allocated) != true) {
> +               dev_err(lldev->dev,
> +                       "trying to set params on an unused TRE:%d", tre_ch);
> +               return;
> +       }
> +
> +       tre_local = &tre->tre_local[0];
> +       tre_local[TRE_LEN_IDX] = len;
> +       tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
> +       tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
> +       tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
> +       tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
> +       tre->int_flags = flags;
> +
> +       dev_dbg(lldev->dev, "transferparams: tre_ch:%d %pap->%pap len:%u\n",
> +               tre_ch, &src, &dest, len);
> +}
> +
> +/*
> + * Called during initialization and after an error condition
> + * to restore hardware state.
> + */
> +int hidma_ll_setup(struct hidma_lldev *lldev)
> +{
> +       int rc;
> +       u64 addr;
> +       u32 val;
> +       u32 nr_tres = lldev->nr_tres;
> +
> +       lldev->pending_tre_count = 0;
> +       lldev->tre_processed_off = 0;
> +       lldev->evre_processed_off = 0;
> +       lldev->tre_write_offset = 0;
> +
> +       /* disable interrupts */
> +       hidma_ll_enable_irq(lldev, 0);
> +
> +       /* clear all pending interrupts */
> +       val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +       rc = hidma_ll_reset(lldev);
> +       if (rc)
> +               return rc;
> +
> +       /*
> +        * Clear all pending interrupts again.
> +        * Otherwise, we observe reset complete interrupts.
> +        */
> +       val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +       writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +
> +       /* disable interrupts again after reset */
> +       hidma_ll_enable_irq(lldev, 0);
> +
> +       addr = lldev->tre_ring_handle;
> +       writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
> +       writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
> +       writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
> +
> +       addr = lldev->evre_ring_handle;
> +       writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
> +       writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
> +       writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
> +
> +       /* support IRQ only for now */
> +       val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
> +       val = val & ~(0xF);
> +       val = val | 0x1;
> +       writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
> +
> +       /* clear all pending interrupts and enable them */
> +       writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +       hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +
> +       rc = hidma_ll_enable(lldev);
> +       if (rc)
> +               return rc;
> +
> +       return rc;
> +}
> +
> +struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
> +                       void __iomem *trca, void __iomem *evca,
> +                       u8 evridx)
> +{
> +       u32 required_bytes;
> +       struct hidma_lldev *lldev;
> +       int rc;
> +
> +       if (!trca || !evca || !dev || !nr_tres)
> +               return NULL;
> +
> +       /* need at least four TREs */
> +       if (nr_tres < 4)
> +               return NULL;
> +
> +       /* need an extra space */
> +       nr_tres += 1;
> +
> +       lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
> +       if (!lldev)
> +               return NULL;
> +
> +       lldev->evca = evca;
> +       lldev->trca = trca;
> +       lldev->dev = dev;
> +       required_bytes = sizeof(struct hidma_tre) * nr_tres;
> +       lldev->trepool = devm_kzalloc(lldev->dev, required_bytes, GFP_KERNEL);
> +       if (!lldev->trepool)
> +               return NULL;
> +
> +       required_bytes = sizeof(lldev->pending_tre_list[0]) * nr_tres;
> +       lldev->pending_tre_list = devm_kzalloc(dev, required_bytes,
> +                                       GFP_KERNEL);
> +       if (!lldev->pending_tre_list)
> +               return NULL;
> +
> +       required_bytes = sizeof(lldev->tx_status_list[0]) * nr_tres;
> +       lldev->tx_status_list = devm_kzalloc(dev, required_bytes, GFP_KERNEL);
> +       if (!lldev->tx_status_list)
> +               return NULL;
> +
> +       lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
> +                                       &lldev->tre_ring_handle, GFP_KERNEL);
> +       if (!lldev->tre_ring)
> +               return NULL;
> +
> +       memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
> +       lldev->tre_ring_size = TRE_SIZE * nr_tres;
> +       lldev->nr_tres = nr_tres;
> +
> +       /* the TRE ring has to be TRE_SIZE aligned */
> +       if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
> +               u8  tre_ring_shift;
> +
> +               tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
> +               tre_ring_shift = TRE_SIZE - tre_ring_shift;
> +               lldev->tre_ring_handle += tre_ring_shift;
> +               lldev->tre_ring += tre_ring_shift;
> +       }
> +
> +       lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
> +                                       &lldev->evre_ring_handle, GFP_KERNEL);
> +       if (!lldev->evre_ring)
> +               return NULL;
> +
> +       memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
> +       lldev->evre_ring_size = EVRE_SIZE * nr_tres;
> +
> +       /* the EVRE ring has to be EVRE_SIZE aligned */
> +       if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
> +               u8  evre_ring_shift;
> +
> +               evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
> +               evre_ring_shift = EVRE_SIZE - evre_ring_shift;
> +               lldev->evre_ring_handle += evre_ring_shift;
> +               lldev->evre_ring += evre_ring_shift;
> +       }
> +       lldev->nr_tres = nr_tres;
> +       lldev->evridx = evridx;
> +
> +       rc = kfifo_alloc(&lldev->handoff_fifo,
> +               nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
> +       if (rc)
> +               return NULL;
> +
> +       rc = hidma_ll_setup(lldev);
> +       if (rc)
> +               return NULL;
> +
> +       spin_lock_init(&lldev->lock);
> +       tasklet_init(&lldev->task, hidma_ll_tre_complete,
> +                       (unsigned long)lldev);
> +       lldev->initialized = 1;
> +       hidma_ll_enable_irq(lldev, ENABLE_IRQS);
> +       return lldev;
> +}
> +
> +int hidma_ll_uninit(struct hidma_lldev *lldev)
> +{
> +       int rc = 0;
> +       u32 val;
> +
> +       if (!lldev)
> +               return -ENODEV;
> +
> +       if (lldev->initialized) {
> +               u32 required_bytes;
> +
> +               lldev->initialized = 0;
> +
> +               required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
> +               tasklet_kill(&lldev->task);
> +               memset(lldev->trepool, 0, required_bytes);
> +               lldev->trepool = NULL;
> +               lldev->pending_tre_count = 0;
> +               lldev->tre_write_offset = 0;
> +
> +               rc = hidma_ll_reset(lldev);
> +
> +               /*
> +                * Clear all pending interrupts again.
> +                * Otherwise, we observe reset complete interrupts.
> +                */
> +               val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
> +               writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
> +               hidma_ll_enable_irq(lldev, 0);
> +       }
> +       return rc;
> +}
> +
> +irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
> +{
> +       struct hidma_lldev *lldev = arg;
> +
> +       hidma_ll_int_handler_internal(lldev);
> +       return IRQ_HANDLED;
> +}
> +
> +enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
> +{
> +       enum dma_status ret = DMA_ERROR;
> +       unsigned long flags;
> +       u8 err_code;
> +
> +       spin_lock_irqsave(&lldev->lock, flags);
> +       err_code = lldev->tx_status_list[tre_ch].err_code;
> +
> +       if (err_code & EVRE_STATUS_COMPLETE)
> +               ret = DMA_COMPLETE;
> +       else if (err_code & EVRE_STATUS_ERROR)
> +               ret = DMA_ERROR;
> +       else
> +               ret = DMA_IN_PROGRESS;
> +       spin_unlock_irqrestore(&lldev->lock, flags);
> +
> +       return ret;
> +}



-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08 20:47       ` Andy Shevchenko
@ 2015-11-08 21:51         ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-08 21:51 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Rob Herring, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams, devicetree,
	linux-kernel



On 11/8/2015 3:47 PM, Andy Shevchenko wrote:
>> +       trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>> >+       if (!trca_resource) {
>> >+               rc = -ENODEV;
>> >+               goto bailout;
>> >+       }
> Why did you ignore my comment about this block?
> Remove that condition entirely.
>
>> >+
>> >+       trca = devm_ioremap_resource(&pdev->dev, trca_resource);
>> >+       if (IS_ERR(trca)) {
>> >+               rc = -ENOMEM;
>> >+               goto bailout;
>> >+       }

Sorry, I didn't quite get your comment. I thought you wanted to see 
platform_get_resource and devm_ioremap_resource together.

Which one do you want me to remove?

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08 21:51         ` Sinan Kaya
@ 2015-11-08 22:00           ` Andy Shevchenko
  -1 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-08 22:00 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Rob Herring, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams, devicetree,
	linux-kernel

On Sun, Nov 8, 2015 at 11:51 PM, Sinan Kaya <okaya@codeaurora.org> wrote:
>
>
> On 11/8/2015 3:47 PM, Andy Shevchenko wrote:
>>>
>>> +       trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>> >+       if (!trca_resource) {
>>> >+               rc = -ENODEV;
>>> >+               goto bailout;
>>> >+       }
>>
>> Why did you ignore my comment about this block?
>> Remove that condition entirely.
>>
>>> >+
>>> >+       trca = devm_ioremap_resource(&pdev->dev, trca_resource);
>>> >+       if (IS_ERR(trca)) {
>>> >+               rc = -ENOMEM;
>>> >+               goto bailout;
>>> >+       }
>
>
> Sorry, I didn't quite get your comment. I thought you wanted to see
> platform_get_resource and devm_ioremap_resource together.
>
> Which one do you want me to remove?

At the end you would have something like

res = platform_get_resource();
addr = devm_ioremap_resource();
if (IS_ERR(addr)) {
…
}
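
With the names used in this patch that comes out roughly as the sketch
below (the PTR_ERR() return value is illustrative, not the final code):

	trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	trca = devm_ioremap_resource(&pdev->dev, trca_resource);
	if (IS_ERR(trca)) {
		rc = PTR_ERR(trca);
		goto bailout;
	}

devm_ioremap_resource() already validates the resource and returns an
ERR_PTR() when it is missing, so the separate !trca_resource check adds
nothing.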

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08 20:47       ` Andy Shevchenko
@ 2015-11-09  0:31         ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-09  0:31 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Rob Herring, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams, devicetree,
	linux-kernel



On 11/8/2015 3:47 PM, Andy Shevchenko wrote:
> On Sun, Nov 8, 2015 at 6:53 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
>> This patch adds support for hidma engine. The driver
>> consists of two logical blocks. The DMA engine interface
>> and the low-level interface. The hardware only supports
>> memcpy/memset and this driver only support memcpy
>> interface. HW and driver doesn't support slave interface.
>
> Make lines a bit longer.
>

OK

>> +       pm_runtime_set_autosuspend_delay(&pdev->dev, AUTOSUSPEND_TIMEOUT);
>> +       pm_runtime_use_autosuspend(&pdev->dev);
>> +       pm_runtime_set_active(&pdev->dev);
>> +       pm_runtime_enable(&pdev->dev);
>> +
>> +       trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>
>> +       if (!trca_resource) {
>> +               rc = -ENODEV;
>> +               goto bailout;
>> +       }
>
> Why did you ignore my comment about this block?
> Remove that condition entirely.
>
Removed these four lines above.

>> +
>> +       trca = devm_ioremap_resource(&pdev->dev, trca_resource);
>> +       if (IS_ERR(trca)) {
>> +               rc = -ENOMEM;
>> +               goto bailout;
>> +       }
>> +
>> +       evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
>> +       if (!evca_resource) {
>> +               rc = -ENODEV;
>> +               goto bailout;
>> +       }
>
> Ditto.
>

done


>> +uninit:
>> +       hidma_debug_uninit(dmadev);
>> +       hidma_ll_uninit(dmadev->lldev);
>> +dmafree:
>> +       if (dmadev)
>> +               hidma_free(dmadev);
>> +bailout:
>> +       pm_runtime_disable(&pdev->dev);
>> +       pm_runtime_put_sync_suspend(&pdev->dev);
>
> Are you sure this is appropriate sequence?
>
> I think
>
> pm_runtime_put();
> pm_runtime_disable();
>
corrected, reordered and used pm_runtime_put_sync() instead.
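
The reworked error path then looks roughly like this (a sketch reusing
the labels from the posted probe function):

bailout:
	/* drop the usage count first, then disable runtime PM */
	pm_runtime_put_sync(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return rc;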

> will do the job.
>
>> +       return rc;
>> +}
>> +
>> +static int hidma_remove(struct platform_device *pdev)
>> +{
>> +       struct hidma_dev *dmadev = platform_get_drvdata(pdev);
>> +
>> +       dev_dbg(&pdev->dev, "removing\n");
>
> Useless message.
>
Removed.

>> +       pm_runtime_get_sync(dmadev->ddev.dev);
>> +
>> +       dma_async_device_unregister(&dmadev->ddev);
>> +       hidma_debug_uninit(dmadev);
>> +       hidma_ll_uninit(dmadev->lldev);
>> +       hidma_free(dmadev);
>> +
>> +       dev_info(&pdev->dev, "HI-DMA engine removed\n");
>> +       pm_runtime_put_sync_suspend(&pdev->dev);
>> +       pm_runtime_disable(&pdev->dev);
>> +
>> +       return 0;
>> +}



-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08 19:13       ` kbuild test robot
@ 2015-11-09  0:43         ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-09  0:43 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, dmaengine, timur, cov, jcm, agross, linux-arm-msm,
	linux-arm-kernel, Rob Herring, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams, devicetree,
	linux-kernel



On 11/8/2015 2:13 PM, kbuild test robot wrote:
> Hi Sinan,
>
> [auto build test WARNING on: robh/for-next]
> [also build test WARNING on: v4.3 next-20151106]
>
> url:    https://github.com/0day-ci/linux/commits/Sinan-Kaya/ma-add-Qualcomm-Technologies-HIDMA-driver/20151108-125824
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux for-next
> config: mn10300-allyesconfig (attached as .config)
> reproduce:
>          wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
>          chmod +x ~/bin/make.cross
>          # save the attached .config to linux build tree
>          make.cross ARCH=mn10300
>
> All warnings (new ones prefixed by >>):
>
>     In file included from include/linux/printk.h:277:0,
>                      from include/linux/kernel.h:13,
>                      from include/linux/list.h:8,
>                      from include/linux/kobject.h:20,
>                      from include/linux/device.h:17,
>                      from include/linux/dmaengine.h:20,
>                      from drivers/dma/qcom/hidma.c:45:
>     drivers/dma/qcom/hidma.c: In function 'hidma_prep_dma_memcpy':
>     include/linux/dynamic_debug.h:64:16: warning: format '%zu' expects argument of type 'size_t', but argument 7 has type 'unsigned int' [-Wformat=]
>       static struct _ddebug  __aligned(8)   \
>                     ^
>     include/linux/dynamic_debug.h:84:2: note: in expansion of macro 'DEFINE_DYNAMIC_DEBUG_METADATA'
>       DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);  \
>       ^
>     include/linux/device.h:1171:2: note: in expansion of macro 'dynamic_dev_dbg'
>       dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
>       ^
>>> drivers/dma/qcom/hidma.c:391:2: note: in expansion of macro 'dev_dbg'
>       dev_dbg(mdma->ddev.dev,
>       ^
>
> vim +/dev_dbg +391 drivers/dma/qcom/hidma.c
>
>     375	
>     376		mchan->allocated = 0;
>     377		spin_unlock_irqrestore(&mchan->lock, irqflags);
>     378		dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
>     379	}
>     380	
>     381	
>     382	static struct dma_async_tx_descriptor *
>     383	hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
>     384				dma_addr_t dma_src, size_t len, unsigned long flags)
>     385	{
>     386		struct hidma_chan *mchan = to_hidma_chan(dmach);
>     387		struct hidma_desc *mdesc = NULL;
>     388		struct hidma_dev *mdma = mchan->dmadev;
>     389		unsigned long irqflags;
>     390	
>   > 391		dev_dbg(mdma->ddev.dev,
>     392			"memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
>     393			&dma_dest, &dma_src, len);
>     394	

What am I missing?

len is size_t. This page says use %zu for size_t.

https://www.kernel.org/doc/Documentation/printk-formats.txt



>     395		/* Get free descriptor */
>     396		spin_lock_irqsave(&mchan->lock, irqflags);
>     397		if (!list_empty(&mchan->free)) {
>     398			mdesc = list_first_entry(&mchan->free, struct hidma_desc,
>     399						node);
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
>



-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
  2015-11-08  5:08       ` Timur Tabi
@ 2015-11-09  2:17         ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-09  2:17 UTC (permalink / raw)
  To: Timur Tabi, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, Vinod Koul, Dan Williams,
	devicetree, linux-kernel

On 11/8/2015 12:08 AM, Timur Tabi wrote:
> Sinan Kaya wrote:
>> +    val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
>> +    val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
>> +    val = val & ~(MAX_BUS_REQ_LEN_MASK);
>> +    val = val | (mgmtdev->max_read_request);
>
> val &= ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
> val |= mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS;
> val &= ~MAX_BUS_REQ_LEN_MASK;
> val |= mgmtdev->max_read_request;
>
>> +static const struct of_device_id hidma_mgmt_match[] = {
>> +    { .compatible = "qcom,hidma-mgmt", },
>> +    { .compatible = "qcom,hidma-mgmt-1.0", },
>> +    { .compatible = "qcom,hidma-mgmt-1.1", },
>> +    {},
>> +};
>
> I thought Rob said that he did NOT want to use version numbers in
> compatible strings.  And what's the difference between these three
> versions anyway?
>

This was already discussed here.

https://lkml.org/lkml/2015/11/2/689

The agreement was to use

compatible = "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0", 
"qcom,hidma-mgmt";

I'll be adding code for v1.1 specifically in the future.


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-08  5:13     ` Timur Tabi
@ 2015-11-09  2:46       ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-09  2:46 UTC (permalink / raw)
  To: Timur Tabi, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Vinod Koul,
	Dan Williams, linux-kernel



On 11/8/2015 12:13 AM, Timur Tabi wrote:
> Sinan Kaya wrote:
>
>> +static int dma_selftest_sg(struct dma_device *dmadev,
>> +            struct dma_chan *dma_chanptr, u64 size,
>> +            unsigned long flags)
>> +{
>> +    dma_addr_t src_dma, dest_dma, dest_dma_it;
>> +    u8 *dest_buf;
>> +    u32 i, j = 0;
>> +    dma_cookie_t cookie;
>> +    struct dma_async_tx_descriptor *tx;
>> +    int err = 0;
>> +    int ret;
>> +    struct sg_table sg_table;
>> +    struct scatterlist    *sg;
>> +    int nents = 10, count;
>> +    bool free_channel = 1;
>
> Booleans are either 'true' or 'false'.
>

OK

>> +static int dma_selftest_mapsngle(struct device *dev)
>> +{
>> +    u32 buf_size = 256;
>> +    char *src;
>> +    int ret = -ENOMEM;
>> +    dma_addr_t dma_src;
>> +
>> +    src = kmalloc(buf_size, GFP_KERNEL);
>> +    if (!src)
>> +        return -ENOMEM;
>> +
>> +    strcpy(src, "hello world");
>
> kstrdup()?
>
> And why kmalloc anyway?  Why not leave it on the stack?
>
>      char src[] = "hello world";
>
> ?

I need to call dma_map_single() on this buffer to get a DMA address,
and the DMA API does not allow mapping stack memory. That's why it is
allocated with kmalloc.
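
For reference, the mapping step is roughly the sketch below (error
handling trimmed; DMA_TO_DEVICE because the buffer is the copy source):

	src = kmalloc(buf_size, GFP_KERNEL);
	if (!src)
		return -ENOMEM;
	strcpy(src, "hello world");

	dma_src = dma_map_single(dev, src, buf_size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_src)) {
		kfree(src);
		return -ENOMEM;
	}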

>
>

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-08 20:09     ` Andy Shevchenko
@ 2015-11-09  3:07       ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-09  3:07 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Vinod Koul, Dan Williams, linux-kernel



On 11/8/2015 3:09 PM, Andy Shevchenko wrote:
> On Sun, Nov 8, 2015 at 6:52 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
>> This patch adds supporting utility functions
>> for selftest. The intention is to share the self
>> test code between different drivers.
>>
>> Supported test cases include:
>> 1. dma_map_single
>> 2. streaming DMA
>> 3. coherent DMA
>> 4. scatter-gather DMA
>
> All below comments about entire file, please check and update.
>
>> +struct test_result {
>> +       atomic_t counter;
>> +       wait_queue_head_t wq;
>> +       struct dma_device *dmadev;
>
> dmadev -> dma.
>

Done.

>> +};
>> +
>> +static void dma_selftest_complete(void *arg)
>> +{
>> +       struct test_result *result = arg;
>> +       struct dma_device *dmadev = result->dmadev;
>> +
>> +       atomic_inc(&result->counter);
>> +       wake_up(&result->wq);
>> +       dev_dbg(dmadev->dev, "self test transfer complete :%d\n",
>> +               atomic_read(&result->counter));
>> +}
>> +
>> +/*
>> + * Perform a transaction to verify the HW works.
>> + */
>> +static int dma_selftest_sg(struct dma_device *dmadev,
>
> dmadev -> dma
>
ok

>> +                       struct dma_chan *dma_chanptr, u64 size,
>
> dma_chanptr -> chan

ok

>
>> +                       unsigned long flags)
>> +{
>> +       dma_addr_t src_dma, dest_dma, dest_dma_it;
>
> src_dma -> src, dest_dma_it -> dst ?

ok

>
>> +       u8 *dest_buf;
>
> Perhaps put nearby src_buf definition?

ok
>
>> +       u32 i, j = 0;
>
> unsigned int

why?

>
>> +       dma_cookie_t cookie;
>> +       struct dma_async_tx_descriptor *tx;
>
>> +       int err = 0;
>> +       int ret;
>
> Any reason to have two instead of one of similar meaning?
>

removed ret

>> +       struct sg_table sg_table;
>> +       struct scatterlist      *sg;
>> +       int nents = 10, count;
>> +       bool free_channel = 1;
>> +       u8 *src_buf;
>> +       int map_count;
>> +       struct test_result result;
>
> Hmm… Maybe make names shorter?
>
>> +
>> +       init_waitqueue_head(&result.wq);
>> +       atomic_set(&result.counter, 0);
>> +       result.dmadev = dmadev;
>> +
>> +       if (!dma_chanptr)
>> +               return -ENOMEM;
>> +
>> +       if (dmadev->device_alloc_chan_resources(dma_chanptr) < 1)
>
>
>> +               return -ENODEV;
>> +
>> +       if (!dma_chanptr->device || !dmadev->dev) {
>> +               dmadev->device_free_chan_resources(dma_chanptr);
>> +               return -ENODEV;
>> +       }
>> +
>> +       ret = sg_alloc_table(&sg_table, nents, GFP_KERNEL);
>> +       if (ret) {
>> +               err = ret;
>> +               goto sg_table_alloc_failed;
>> +       }
>> +
>> +       for_each_sg(sg_table.sgl, sg, nents, i) {
>> +               u64 alloc_sz;
>> +               void *cpu_addr;
>> +
>> +               alloc_sz = round_up(size, nents);
>> +               do_div(alloc_sz, nents);
>> +               cpu_addr = kmalloc(alloc_sz, GFP_KERNEL);
>> +
>> +               if (!cpu_addr) {
>> +                       err = -ENOMEM;
>> +                       goto sg_buf_alloc_failed;
>> +               }
>> +
>> +               dev_dbg(dmadev->dev, "set sg buf[%d] :%p\n", i, cpu_addr);
>> +               sg_set_buf(sg, cpu_addr, alloc_sz);
>> +       }
>> +
>> +       dest_buf = kmalloc(round_up(size, nents), GFP_KERNEL);
>> +       if (!dest_buf) {
>> +               err = -ENOMEM;
>> +               goto dst_alloc_failed;
>> +       }
>> +       dev_dbg(dmadev->dev, "dest:%p\n", dest_buf);
>> +
>> +       /* Fill in src buffer */
>> +       count = 0;
>> +       for_each_sg(sg_table.sgl, sg, nents, i) {
>> +               src_buf = sg_virt(sg);
>> +               dev_dbg(dmadev->dev,
>> +                       "set src[%d, %d, %p] = %d\n", i, j, src_buf, count);
>> +
>> +               for (j = 0; j < sg_dma_len(sg); j++)
>> +                       src_buf[j] = count++;
>> +       }
>> +
>> +       /* dma_map_sg cleans and invalidates the cache in arm64 when
>> +        * DMA_TO_DEVICE is selected for src. That's why, we need to do
>> +        * the mapping after the data is copied.
>> +        */
>> +       map_count = dma_map_sg(dmadev->dev, sg_table.sgl, nents,
>> +                               DMA_TO_DEVICE);
>> +       if (!map_count) {
>> +               err =  -EINVAL;
>> +               goto src_map_failed;
>> +       }
>> +
>> +       dest_dma = dma_map_single(dmadev->dev, dest_buf,
>> +                               size, DMA_FROM_DEVICE);
>> +
>> +       err = dma_mapping_error(dmadev->dev, dest_dma);
>> +       if (err)
>> +               goto dest_map_failed;
>> +
>> +       /* check scatter gather list contents */
>> +       for_each_sg(sg_table.sgl, sg, map_count, i)
>> +               dev_dbg(dmadev->dev,
>> +                       "[%d/%d] src va=%p, iova = %pa len:%d\n",
>> +                       i, map_count, sg_virt(sg), &sg_dma_address(sg),
>> +                       sg_dma_len(sg));
>> +
>> +       dest_dma_it = dest_dma;
>> +       for_each_sg(sg_table.sgl, sg, map_count, i) {
>> +               src_buf = sg_virt(sg);
>> +               src_dma = sg_dma_address(sg);
>> +               dev_dbg(dmadev->dev, "src_dma: %pad dest_dma:%pad\n",
>> +                       &src_dma, &dest_dma_it);
>> +
>> +               tx = dmadev->device_prep_dma_memcpy(dma_chanptr, dest_dma_it,
>> +                               src_dma, sg_dma_len(sg), flags);
>> +               if (!tx) {
>> +                       dev_err(dmadev->dev,
>> +                               "Self-test sg failed, disabling\n");
>> +                       err = -ENODEV;
>> +                       goto prep_memcpy_failed;
>> +               }
>> +
>> +               tx->callback_param = &result;
>> +               tx->callback = dma_selftest_complete;
>> +               cookie = tx->tx_submit(tx);
>> +               dest_dma_it += sg_dma_len(sg);
>> +       }
>> +
>> +       dmadev->device_issue_pending(dma_chanptr);
>> +
>> +       /*
>> +        * It is assumed that the hardware can move the data within 1s
>> +        * and signal the OS of the completion
>> +        */
>> +       ret = wait_event_timeout(result.wq,
>> +               atomic_read(&result.counter) == (map_count),
>> +                               msecs_to_jiffies(10000));
>> +
>> +       if (ret <= 0) {
>> +               dev_err(dmadev->dev,
>> +                       "Self-test sg copy timed out, disabling\n");
>> +               err = -ENODEV;
>> +               goto tx_status;
>> +       }
>> +       dev_dbg(dmadev->dev,
>> +               "Self-test complete signal received\n");
>> +
>> +       if (dmadev->device_tx_status(dma_chanptr, cookie, NULL) !=
>> +                               DMA_COMPLETE) {
>> +               dev_err(dmadev->dev,
>> +                       "Self-test sg status not complete, disabling\n");
>> +               err = -ENODEV;
>> +               goto tx_status;
>> +       }
>> +
>> +       dma_sync_single_for_cpu(dmadev->dev, dest_dma, size,
>> +                               DMA_FROM_DEVICE);
>> +
>> +       count = 0;
>> +       for_each_sg(sg_table.sgl, sg, map_count, i) {
>> +               src_buf = sg_virt(sg);
>> +               if (memcmp(src_buf, &dest_buf[count], sg_dma_len(sg)) == 0) {
>> +                       count += sg_dma_len(sg);
>> +                       continue;
>> +               }
>> +
>> +               for (j = 0; j < sg_dma_len(sg); j++) {
>> +                       if (src_buf[j] != dest_buf[count]) {
>> +                               dev_dbg(dmadev->dev,
>> +                               "[%d, %d] (%p) src :%x dest (%p):%x cnt:%d\n",
>> +                                       i, j, &src_buf[j], src_buf[j],
>> +                                       &dest_buf[count], dest_buf[count],
>> +                                       count);
>> +                               dev_err(dmadev->dev,
>> +                                "Self-test copy failed compare, disabling\n");
>> +                               err = -EFAULT;
>> +                               return err;
>> +                               goto compare_failed;
>
> Here something wrong.

removed the return.
>
>> +                       }
>> +                       count++;
>> +               }
>> +       }
>> +
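With the stray return dropped, the failing-compare branch would look roughly
like this (sketch only; compare_failed is assumed to unwind the mappings and
buffers set up earlier in the function):

		for (j = 0; j < sg_dma_len(sg); j++) {
			if (src_buf[j] != dest_buf[count]) {
				dev_err(dmadev->dev,
					"Self-test copy failed compare, disabling\n");
				err = -EFAULT;
				/* no early return, so the cleanup labels run */
				goto compare_failed;
			}
			count++;
		}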

thanks

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-09  3:07       ` Sinan Kaya
@ 2015-11-09  9:26         ` Andy Shevchenko
  -1 siblings, 0 replies; 71+ messages in thread
From: Andy Shevchenko @ 2015-11-09  9:26 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Vinod Koul, Dan Williams, linux-kernel

On Mon, Nov 9, 2015 at 5:07 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
>
>
> On 11/8/2015 3:09 PM, Andy Shevchenko wrote:
>>
>> On Sun, Nov 8, 2015 at 6:52 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
>>>
>>> This patch adds supporting utility functions
>>> for selftest. The intention is to share the self
>>> test code between different drivers.
>>>
>>> Supported test cases include:
>>> 1. dma_map_single
>>> 2. streaming DMA
>>> 3. coherent DMA
>>> 4. scatter-gather DMA
>>
>>

>>> +       u32 i, j = 0;
>>
>> unsigned int
>
> why?

Is i or j going to be used for HW communication? No? What about
assignment to values of type u32? No? Plain counters? Use plain
types.

It's actually a comment about all your patches I saw last week.

>>> +       int err = 0;
>>> +       int ret;
>>
>>
>> Any reason to have two instead of one of similar meaning?
>>
>
> removed ret

Don't forget to check if it's redundant assignment (check in all your
patches as well).
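Something along these lines, in other words (declarations only, as a sketch):

	/* instead of: u32 i, j = 0; int err = 0; int ret; */
	int i, j;
	int err;

Plain counters stay plain, there is a single error variable, and it is only
assigned where a real value is known.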

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-09  2:46       ` Sinan Kaya
@ 2015-11-09 13:48         ` Timur Tabi
  -1 siblings, 0 replies; 71+ messages in thread
From: Timur Tabi @ 2015-11-09 13:48 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Vinod Koul,
	Dan Williams, linux-kernel

Sinan Kaya wrote:
>>
>> And why kmalloc anyway?  Why not leave it on the stack?
>>
>>      char src[] = "hello world";
>>
>> ?
>
> I need to call dma_map_single on this address to convert it to a DMA
> address. That's why.

And you can't do that with an object that's on the stack?

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-08  4:53   ` Sinan Kaya
@ 2015-11-09 18:19     ` Rob Herring
  -1 siblings, 0 replies; 71+ messages in thread
From: Rob Herring @ 2015-11-09 18:19 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, timur, cov, jcm, agross, linux-arm-msm,
	linux-arm-kernel, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, Vinod Koul, Dan Williams, devicetree, linux-kernel

On Sat, Nov 07, 2015 at 11:53:00PM -0500, Sinan Kaya wrote:
> This patch adds support for hidma engine. The driver
> consists of two logical blocks. The DMA engine interface
> and the low-level interface. The hardware only supports
> memcpy/memset and this driver only support memcpy
> interface. HW and driver doesn't support slave interface.
> 
> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>  .../devicetree/bindings/dma/qcom_hidma.txt         |  18 +
>  drivers/dma/qcom/Kconfig                           |   9 +
>  drivers/dma/qcom/Makefile                          |   2 +
>  drivers/dma/qcom/hidma.c                           | 743 ++++++++++++++++
>  drivers/dma/qcom/hidma.h                           | 157 ++++
>  drivers/dma/qcom/hidma_dbg.c                       | 225 +++++
>  drivers/dma/qcom/hidma_ll.c                        | 944 +++++++++++++++++++++
>  7 files changed, 2098 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
>  create mode 100644 drivers/dma/qcom/hidma.c
>  create mode 100644 drivers/dma/qcom/hidma.h
>  create mode 100644 drivers/dma/qcom/hidma_dbg.c
>  create mode 100644 drivers/dma/qcom/hidma_ll.c
> 
> diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma.txt b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
> new file mode 100644
> index 0000000..c9fb2d44
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
> @@ -0,0 +1,18 @@
> +Qualcomm Technologies HIDMA Channel driver
> +
> +Required properties:
> +- compatible: must contain "qcom,hidma"

This should be "qcom,hidma-1.0" to match the example and driver. I 
would drop "qcom,hidma" altogether.

Rob

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
  2015-11-09  2:17         ` Sinan Kaya
@ 2015-11-09 18:25           ` Rob Herring
  -1 siblings, 0 replies; 71+ messages in thread
From: Rob Herring @ 2015-11-09 18:25 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: Timur Tabi, dmaengine, cov, jcm, agross, linux-arm-msm,
	linux-arm-kernel, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, Vinod Koul, Dan Williams, devicetree, linux-kernel

On Sun, Nov 08, 2015 at 09:17:20PM -0500, Sinan Kaya wrote:
> On 11/8/2015 12:08 AM, Timur Tabi wrote:
> 
> On 11/8/2015 12:08 AM, Timur Tabi wrote:
> >Sinan Kaya wrote:
> >>+    val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
> >>+    val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
> >>+    val = val & ~(MAX_BUS_REQ_LEN_MASK);
> >>+    val = val | (mgmtdev->max_read_request);
> >
> >val &= ~MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS;
> >val |= mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS;
> >val &= ~MAX_BUS_REQ_LEN_MASK;
> >val |= mgmtdev->max_read_request;
> >
> >>+static const struct of_device_id hidma_mgmt_match[] = {
> >>+    { .compatible = "qcom,hidma-mgmt", },
> >>+    { .compatible = "qcom,hidma-mgmt-1.0", },
> >>+    { .compatible = "qcom,hidma-mgmt-1.1", },
> >>+    {},
> >>+};
> >
> >I thought Rob said that he did NOT want to use version numbers in
> >compatible strings.  And what's the difference between these three
> >versions anyway?
> >
> 
> This was already discussed here.
> 
> https://lkml.org/lkml/2015/11/2/689
> 
> The agreement was to use

The suggestion...
 
> compatible = "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0",
> "qcom,hidma-mgmt";

I don't really want to see 3 generic-ish strings.
 
> I'll be adding code for v1.1 specifically in the future.

Please drop "qcom,hidma-mgmt" altogether. It is already meaningless. 
Then add the 1.1 compatible when you add the code for it. Hopefully you 
all can decide on part number(s) by then.

Rob

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-09 18:19     ` Rob Herring
@ 2015-11-10  4:44       ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-10  4:44 UTC (permalink / raw)
  To: Rob Herring
  Cc: dmaengine, timur, cov, jcm, agross, linux-arm-msm,
	linux-arm-kernel, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, Vinod Koul, Dan Williams, devicetree, linux-kernel



On 11/9/2015 1:19 PM, Rob Herring wrote:
> On Sat, Nov 07, 2015 at 11:53:00PM -0500, Sinan Kaya wrote:
>> This patch adds support for hidma engine. The driver
>> consists of two logical blocks. The DMA engine interface
>> and the low-level interface. The hardware only supports
>> memcpy/memset and this driver only support memcpy
>> interface. HW and driver doesn't support slave interface.
>>
>> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
>> ---
>>   .../devicetree/bindings/dma/qcom_hidma.txt         |  18 +
>>   drivers/dma/qcom/Kconfig                           |   9 +
>>   drivers/dma/qcom/Makefile                          |   2 +
>>   drivers/dma/qcom/hidma.c                           | 743 ++++++++++++++++
>>   drivers/dma/qcom/hidma.h                           | 157 ++++
>>   drivers/dma/qcom/hidma_dbg.c                       | 225 +++++
>>   drivers/dma/qcom/hidma_ll.c                        | 944 +++++++++++++++++++++
>>   7 files changed, 2098 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma.txt
>>   create mode 100644 drivers/dma/qcom/hidma.c
>>   create mode 100644 drivers/dma/qcom/hidma.h
>>   create mode 100644 drivers/dma/qcom/hidma_dbg.c
>>   create mode 100644 drivers/dma/qcom/hidma_ll.c
>>
>> diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma.txt b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
>> new file mode 100644
>> index 0000000..c9fb2d44
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/dma/qcom_hidma.txt
>> @@ -0,0 +1,18 @@
>> +Qualcomm Technologies HIDMA Channel driver
>> +
>> +Required properties:
>> +- compatible: must contain "qcom,hidma"
>
> This should be "qcom,hidma-1.0" to match the example and driver. I
> would drop "qcom,hidma" altogether.

I matched it.

>
> Rob
>

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-09 13:48         ` Timur Tabi
@ 2015-11-10  4:49           ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-10  4:49 UTC (permalink / raw)
  To: Timur Tabi, dmaengine, cov, jcm
  Cc: agross, linux-arm-msm, linux-arm-kernel, Vinod Koul,
	Dan Williams, linux-kernel



On 11/9/2015 8:48 AM, Timur Tabi wrote:
> Sinan Kaya wrote:
>>>
>>> And why kmalloc anyway?  Why not leave it on the stack?
>>>
>>>      char src[] = "hello world";
>>>
>>> ?
>>
>> I need to call dma_map_single on this address to convert it to a DMA
>> address. That's why.
>
> And you can't do that with an object that's on the stack?
>

No. Pasting from here:

https://www.kernel.org/doc/Documentation/DMA-API-HOWTO.txt

under 'What memory is DMA'able?'

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.
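
So the buffer has to come from the allocator. A minimal sketch of the
single-buffer case, assuming a valid struct device *dev is in scope (the
names are illustrative, not the actual selftest code):

	static const char msg[] = "hello world";
	char *src = kmalloc(sizeof(msg), GFP_KERNEL);
	dma_addr_t src_dma;

	if (!src)
		return -ENOMEM;
	memcpy(src, msg, sizeof(msg));

	src_dma = dma_map_single(dev, src, sizeof(msg), DMA_TO_DEVICE);
	if (dma_mapping_error(dev, src_dma)) {
		kfree(src);
		return -ENOMEM;
	}
	/* ... run the memcpy transfer and wait for its completion ... */
	dma_unmap_single(dev, src_dma, sizeof(msg), DMA_TO_DEVICE);
	kfree(src);

A char src[] on the stack could not be passed to dma_map_single() under the
rule quoted above.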

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-09  9:26         ` Andy Shevchenko
@ 2015-11-10  4:55           ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-10  4:55 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: dmaengine, timur, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Vinod Koul, Dan Williams, linux-kernel



On 11/9/2015 4:26 AM, Andy Shevchenko wrote:
> On Mon, Nov 9, 2015 at 5:07 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
>>
>>
>> On 11/8/2015 3:09 PM, Andy Shevchenko wrote:
>>>
>>> On Sun, Nov 8, 2015 at 6:52 AM, Sinan Kaya <okaya@codeaurora.org> wrote:
>>>>
>>>> This patch adds supporting utility functions
>>>> for selftest. The intention is to share the self
>>>> test code between different drivers.
>>>>
>>>> Supported test cases include:
>>>> 1. dma_map_single
>>>> 2. streaming DMA
>>>> 3. coherent DMA
>>>> 4. scatter-gather DMA
>>>
>>>
>
>>>> +       u32 i, j = 0;
>>>
>>> unsigned int
>>
>> why?
>
> Is i or j is going to be used for HW communication? No? What about
> assignment to a values of type u32? No? Plain counters? Use plain
> types.

OK. I did an internal code review before posting the patch, and nobody
complained about the iterator types. I am trying to work out what counts as
good practice vs. what is personal style.

>
> It's actually comment about your all patches I saw last week.
>
>>>> +       int err = 0;
>>>> +       int ret;
>>>
>>>
>>> Any reason to have two instead of one of similar meaning?
>>>
>>
>> removed ret
>
> Don't forget to check if it's redundant assignment (check in all your
> patches as well).
>

I'll look.

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver
  2015-11-09 18:25           ` Rob Herring
@ 2015-11-10  4:57             ` Sinan Kaya
  -1 siblings, 0 replies; 71+ messages in thread
From: Sinan Kaya @ 2015-11-10  4:57 UTC (permalink / raw)
  To: Rob Herring
  Cc: Timur Tabi, dmaengine, cov, jcm, agross, linux-arm-msm,
	linux-arm-kernel, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, Vinod Koul, Dan Williams, devicetree, linux-kernel



On 11/9/2015 1:25 PM, Rob Herring wrote:
> On Sun, Nov 08, 2015 at 09:17:20PM -0500, Sinan Kaya wrote:
>> On 11/8/2015 12:08 AM, Timur Tabi wrote:
>>
>> On 11/8/2015 12:08 AM, Timur Tabi wrote:
>>> Sinan Kaya wrote:
>>>> +    val = val & ~(MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS);
>>>> +    val = val | (mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS);
>>>> +    val = val & ~(MAX_BUS_REQ_LEN_MASK);
>>>> +    val = val | (mgmtdev->max_read_request);
>>>
>>> val &= ~MAX_BUS_REQ_LEN_MASK << MAX_BUS_WR_REQ_BIT_POS;
>>> val |= mgmtdev->max_write_request << MAX_BUS_WR_REQ_BIT_POS;
>>> val &= ~MAX_BUS_REQ_LEN_MASK;
>>> val |= mgmtdev->max_read_request;
>>>
>>>> +static const struct of_device_id hidma_mgmt_match[] = {
>>>> +    { .compatible = "qcom,hidma-mgmt", },
>>>> +    { .compatible = "qcom,hidma-mgmt-1.0", },
>>>> +    { .compatible = "qcom,hidma-mgmt-1.1", },
>>>> +    {},
>>>> +};
>>>
>>> I thought Rob said that he did NOT want to use version numbers in
>>> compatible strings.  And what's the difference between these three
>>> versions anyway?
>>>
>>
>> This was already discussed here.
>>
>> https://lkml.org/lkml/2015/11/2/689
>>
>> The agreement was to use
>
> The suggestion...
>
>> compatible = "qcom,hidma-mgmt-1.1", "qcom,hidma-mgmt-1.0",
>> "qcom,hidma-mgmt";
>
> I don't really want to see 3 generic-ish strings.
>
>> I'll be adding code for v1.1 specifically in the future.
>
> Please drop "qcom,hidma-mgmt" altogether. It is already meaningless.
> Then add the 1.1 compatible when you add the code for it. Hopefully you
> all can decide on part number(s) by then.
>
> Rob
>

OK. I'll only have "qcom,hidma-mgmt-1.0" for now.
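i.e. the match table shrinks to a single, specific entry (sketch):

static const struct of_device_id hidma_mgmt_match[] = {
	{ .compatible = "qcom,hidma-mgmt-1.0", },
	{},
};

and the device tree node carries only compatible = "qcom,hidma-mgmt-1.0";
until the 1.1-specific code lands.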


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a 
Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-10  4:55           ` Sinan Kaya
@ 2015-11-10  4:59             ` Timur Tabi
  -1 siblings, 0 replies; 71+ messages in thread
From: Timur Tabi @ 2015-11-10  4:59 UTC (permalink / raw)
  To: Sinan Kaya, Andy Shevchenko
  Cc: dmaengine, cov, jcm, Andy Gross, linux-arm-msm,
	linux-arm Mailing List, Vinod Koul, Dan Williams, linux-kernel

Sinan Kaya wrote:
>
> OK. I did an internal code review before posting the patch. Nobody
> complained about iterator types. I am trying to find what goes as a good
> practice vs. what is personal style.

I normally check for inappropriate usage of sized integers in my 
reviews, but I admit I'm inconsistent about that sort of thing for 
internal reviews.

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions
  2015-11-10  4:49           ` Sinan Kaya
@ 2015-11-10 10:13             ` Arnd Bergmann
  -1 siblings, 0 replies; 71+ messages in thread
From: Arnd Bergmann @ 2015-11-10 10:13 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Sinan Kaya, Timur Tabi, dmaengine, cov, jcm, Vinod Koul,
	linux-arm-msm, linux-kernel, agross, Dan Williams

On Monday 09 November 2015 23:49:54 Sinan Kaya wrote:
> On 11/9/2015 8:48 AM, Timur Tabi wrote:
> > Sinan Kaya wrote:
> >>>
> >>> And why kmalloc anyway?  Why not leave it on the stack?
> >>>
> >>>      char src[] = "hello world";
> >>>
> >>> ?
> >>
> >> I need to call dma_map_single on this address to convert it to a DMA
> >> address. That's why.
> >
> > And you can't do that with an object that's on the stack?
> >
> 
> no, pasting from here.
> 
> https://www.kernel.org/doc/Documentation/DMA-API-HOWTO.txt
> 
> under 'What memory is DMA'able?'
> 
> This rule also means that you may use neither kernel image addresses
> (items in data/text/bss segments), nor module image addresses, nor
> stack addresses for DMA.

Correct. I think this is just because of cache line alignment that
is guaranteed for kmalloc but not for anything on the stack.

	Arnd

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [kbuild-all] [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-09  0:43         ` Sinan Kaya
@ 2015-11-11  2:21           ` Fengguang Wu
  -1 siblings, 0 replies; 71+ messages in thread
From: Fengguang Wu @ 2015-11-11  2:21 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: linux-arm-kernel, Mark Rutland, Pawel Moll, Ian Campbell,
	Vinod Koul, jcm, timur, Kumar Gala, linux-kernel, devicetree,
	Rob Herring, kbuild-all, agross, dmaengine, Dan Williams,
	linux-arm-msm, cov

Hi Sinan,

Sorry, please ignore this warning -- it's actually a problem specific
to the mn10300 arch. I'll disable such warnings for mn10300 in the future.

Thanks,
Fengguang

On Sun, Nov 08, 2015 at 07:43:52PM -0500, Sinan Kaya wrote:
> 
> 
> On 11/8/2015 2:13 PM, kbuild test robot wrote:
> >Hi Sinan,
> >
> >[auto build test WARNING on: robh/for-next]
> >[also build test WARNING on: v4.3 next-20151106]
> >
> >url:    https://github.com/0day-ci/linux/commits/Sinan-Kaya/ma-add-Qualcomm-Technologies-HIDMA-driver/20151108-125824
> >base:   https://git.kernel.org/pub/scm/linux/kernel/git/robh/linux for-next
> >config: mn10300-allyesconfig (attached as .config)
> >reproduce:
> >         wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
> >         chmod +x ~/bin/make.cross
> >         # save the attached .config to linux build tree
> >         make.cross ARCH=mn10300
> >
> >All warnings (new ones prefixed by >>):
> >
> >    In file included from include/linux/printk.h:277:0,
> >                     from include/linux/kernel.h:13,
> >                     from include/linux/list.h:8,
> >                     from include/linux/kobject.h:20,
> >                     from include/linux/device.h:17,
> >                     from include/linux/dmaengine.h:20,
> >                     from drivers/dma/qcom/hidma.c:45:
> >    drivers/dma/qcom/hidma.c: In function 'hidma_prep_dma_memcpy':
> >    include/linux/dynamic_debug.h:64:16: warning: format '%zu' expects argument of type 'size_t', but argument 7 has type 'unsigned int' [-Wformat=]
> >      static struct _ddebug  __aligned(8)   \
> >                    ^
> >    include/linux/dynamic_debug.h:84:2: note: in expansion of macro 'DEFINE_DYNAMIC_DEBUG_METADATA'
> >      DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);  \
> >      ^
> >    include/linux/device.h:1171:2: note: in expansion of macro 'dynamic_dev_dbg'
> >      dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
> >      ^
> >>>drivers/dma/qcom/hidma.c:391:2: note: in expansion of macro 'dev_dbg'
> >      dev_dbg(mdma->ddev.dev,
> >      ^
> >
> >vim +/dev_dbg +391 drivers/dma/qcom/hidma.c
> >
> >    375	
> >    376		mchan->allocated = 0;
> >    377		spin_unlock_irqrestore(&mchan->lock, irqflags);
> >    378		dev_dbg(mdma->ddev.dev, "freed channel for %u\n", mchan->dma_sig);
> >    379	}
> >    380	
> >    381	
> >    382	static struct dma_async_tx_descriptor *
> >    383	hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dma_dest,
> >    384				dma_addr_t dma_src, size_t len, unsigned long flags)
> >    385	{
> >    386		struct hidma_chan *mchan = to_hidma_chan(dmach);
> >    387		struct hidma_desc *mdesc = NULL;
> >    388		struct hidma_dev *mdma = mchan->dmadev;
> >    389		unsigned long irqflags;
> >    390	
> >  > 391		dev_dbg(mdma->ddev.dev,
> >    392			"memcpy: chan:%p dest:%pad src:%pad len:%zu\n", mchan,
> >    393			&dma_dest, &dma_src, len);
> >    394	
> 
> What am I missing?
> 
> len is size_t. This page says use %zu for size_t.
> 
> https://www.kernel.org/doc/Documentation/printk-formats.txt
> 
> 
> 
> >    395		/* Get free descriptor */
> >    396		spin_lock_irqsave(&mchan->lock, irqflags);
> >    397		if (!list_empty(&mchan->free)) {
> >    398			mdesc = list_first_entry(&mchan->free, struct hidma_desc,
> >    399						node);
> >
> >---
> >0-DAY kernel test infrastructure                Open Source Technology Center
> >https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
> >
> 
> 
> 
> -- 
> Sinan Kaya
> Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux
> Foundation Collaborative Project
> _______________________________________________
> kbuild-all mailing list
> kbuild-all@lists.01.org
> https://lists.01.org/mailman/listinfo/kbuild-all

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [kbuild-all] [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-11  2:21           ` Fengguang Wu
@ 2015-11-11  8:42               ` Arnd Bergmann
  -1 siblings, 0 replies; 71+ messages in thread
From: Arnd Bergmann @ 2015-11-11  8:42 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Sinan Kaya, linux-arm-kernel, Mark Rutland, Pawel Moll, Ian Campbell,
	Vinod Koul, jcm, timur, Kumar Gala, linux-kernel, devicetree,
	Rob Herring, kbuild-all, agross, dmaengine, Dan Williams,
	linux-arm-msm, cov

On Wednesday 11 November 2015 10:21:03 Fengguang Wu wrote:
> Hi Sinan,
> 
> Sorry please ignore this warning -- it's actually a problem specific
> to the mn10300 arch. I'll disable such warning in mn10300 in future.

I just tried to find what happened here. mn10300 appears to define
the type based on the gcc version:

#if __GNUC__ == 4
typedef unsigned int    __kernel_size_t;
typedef signed int      __kernel_ssize_t;
#else
typedef unsigned long   __kernel_size_t;
typedef signed long     __kernel_ssize_t;
#endif

while gcc defines it based on whether you are using a Linux targetted
gcc or a bare-metal one:

gcc/config/mn10300/linux.h:#undef SIZE_TYPE
gcc/config/mn10300/mn10300.h:#undef  SIZE_TYPE
gcc/config/mn10300/mn10300.h:#define SIZE_TYPE "unsigned int"

I can think of two reasons why it went wrong here:

a) You are using gcc-5.x, and the check in the kernel should be ">="
   rather than "==". We should probably fix that regardless

b) You are using a bare-metal gcc rather than a Linux version.

I couldn't find an mn10300 gcc on kernel.org, which one do you use?
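
For completeness, option a) is a one-character change in the mn10300 header
(sketch only, not a tested patch):

#if __GNUC__ >= 4
typedef unsigned int    __kernel_size_t;
typedef signed int      __kernel_ssize_t;
#else
typedef unsigned long   __kernel_size_t;
typedef signed long     __kernel_ssize_t;
#endif

That keeps __kernel_size_t in sync with the 'unsigned int' SIZE_TYPE that
gcc-5 advertises for this target, so %zu warnings like the one above should
go away.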

	Arnd

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [kbuild-all] [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-11  8:42               ` Arnd Bergmann
@ 2015-11-12  8:20                 ` Fengguang Wu
  -1 siblings, 0 replies; 71+ messages in thread
From: Fengguang Wu @ 2015-11-12  8:20 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Mark Rutland, devicetree, Pawel Moll, Ian Campbell, Vinod Koul,
	jcm, timur, agross, linux-kernel, Sinan Kaya, Rob Herring,
	kbuild-all, Kumar Gala, dmaengine, linux-arm-msm, Dan Williams,
	linux-arm-kernel, cov

Hi Arnd,

On Wed, Nov 11, 2015 at 09:42:00AM +0100, Arnd Bergmann wrote:
> On Wednesday 11 November 2015 10:21:03 Fengguang Wu wrote:
> > Hi Sinan,
> > 
> > Sorry please ignore this warning -- it's actually a problem specific
> > to the mn10300 arch. I'll disable such warning in mn10300 in future.
> 
> I just tried to find what happened here. mn10300 appears to define
> the type based on the gcc version:
> 
> #if __GNUC__ == 4
> typedef unsigned int    __kernel_size_t;
> typedef signed int      __kernel_ssize_t;
> #else
> typedef unsigned long   __kernel_size_t;
> typedef signed long     __kernel_ssize_t;
> #endif
> 
> while gcc defines it based on whether you are using a Linux-targeted
> gcc or a bare-metal one:
> 
> gcc/config/mn10300/linux.h:#undef SIZE_TYPE
> gcc/config/mn10300/mn10300.h:#undef  SIZE_TYPE
> gcc/config/mn10300/mn10300.h:#define SIZE_TYPE "unsigned int"
> 
> I can think of two reasons why it went wrong here:
> 
> a) You are using gcc-5.x, and the check in the kernel should be ">="
>    rather than "==". We should probably fix that regardless.
> 
> b) You are using a bare-metal gcc rather than a Linux version.

> I couldn't find an mn10300 gcc on kernel.org; which one do you use?

I used this mn10300 compiler:

https://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_am33_2.0-linux.tar.xz

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 71+ messages in thread

* Re: [kbuild-all] [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver
  2015-11-12  8:20                 ` Fengguang Wu
@ 2015-11-12 13:49                   ` Arnd Bergmann
  -1 siblings, 0 replies; 71+ messages in thread
From: Arnd Bergmann @ 2015-11-12 13:49 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Mark Rutland, devicetree, Pawel Moll, Ian Campbell, Vinod Koul,
	jcm, timur, agross, linux-kernel, Sinan Kaya, Rob Herring,
	kbuild-all, Kumar Gala, dmaengine, linux-arm-msm, Dan Williams,
	linux-arm-kernel, cov

On Thursday 12 November 2015 16:20:15 Fengguang Wu wrote:
> Hi Arnd,
> 
> On Wed, Nov 11, 2015 at 09:42:00AM +0100, Arnd Bergmann wrote:
> > On Wednesday 11 November 2015 10:21:03 Fengguang Wu wrote:
> > > Hi Sinan,
> > > 
> > > Sorry, please ignore this warning -- it's actually a problem specific
> > > to the mn10300 arch. I'll disable such warnings for mn10300 in the future.
> > 
> > I just tried to find out what happened here. mn10300 appears to define
> > the type based on the gcc version:
> > 
> > #if __GNUC__ == 4
> > typedef unsigned int    __kernel_size_t;
> > typedef signed int      __kernel_ssize_t;
> > #else
> > typedef unsigned long   __kernel_size_t;
> > typedef signed long     __kernel_ssize_t;
> > #endif
> > 
> > while gcc defines it based on whether you are using a Linux-targeted
> > gcc or a bare-metal one:
> > 
> > gcc/config/mn10300/linux.h:#undef SIZE_TYPE
> > gcc/config/mn10300/mn10300.h:#undef  SIZE_TYPE
> > gcc/config/mn10300/mn10300.h:#define SIZE_TYPE "unsigned int"
> > 
> > I can think of two reasons why it went wrong here:
> > 
> > a) You are using gcc-5.x, and the check in the kernel should be ">="
> >    rather than "==". We should probably fix that regardless.
> > 
> > b) You are using a bare-metal gcc rather than a Linux version.
> 
> > I couldn't find an mn10300 gcc on kernel.org; which one do you use?
> 
> I used this mn10300 compiler:
> 
> https://www.kernel.org/pub/tools/crosstool/files/bin/x86_64/4.9.0/x86_64-gcc-4.9.0-nolibc_am33_2.0-linux.tar.xz

OK, so this is not gcc-5.x (i.e., we are not hitting the first problem), but it uses
this definition:

./lib/gcc/am33_2.0-linux/4.9.0/include/stddef.h:#define __SIZE_TYPE__ long unsigned int

which does not match what the kernel expects. I see I have the same thing in
my locally built am33_2.0-linux-gcc-4.9.3.
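
For reference, a minimal sketch (assuming _Static_assert and
__builtin_types_compatible_p are acceptable here; both are supported by
gcc-4.9) lets the compiler compare its own size_t against the type the
mn10300 kernel picks for __GNUC__ == 4:

/* size-check.c: fails to compile when the compiler's size_t and the
 * kernel's gcc-4.x choice disagree. Build with e.g.
 * am33_2.0-linux-gcc -std=gnu11 -c size-check.c */
#include <stddef.h>

typedef unsigned int mn10300_kernel_size_t;	/* kernel's pick for __GNUC__ == 4 */

_Static_assert(__builtin_types_compatible_p(size_t, mn10300_kernel_size_t),
	       "compiler size_t does not match the kernel __kernel_size_t");

With the crosstool 4.9.0 binary above this assertion should fire, which
is consistent with the __SIZE_TYPE__ definition it ships.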

I have just tried this again with a newly built am33_2.0-linux-gcc-5.2.1, and that
indeed avoids almost all warnings for the mn10300 kernel. I suspect this is
really a combination of two bugs that cancel each other out, but if you do the
same update on your system, you will get the results you want and will no longer
see the bogus warning.

	Arnd

^ permalink raw reply	[flat|nested] 71+ messages in thread

end of thread, other threads:[~2015-11-12 13:49 UTC | newest]

Thread overview: 71+ messages
2015-11-08  4:52 [PATCH V3 0/4] *ma: add Qualcomm Technologies HIDMA driver Sinan Kaya
2015-11-08  4:52 ` Sinan Kaya
2015-11-08  4:52 ` [PATCH V3 1/4] dma: qcom_bam_dma: move to qcom directory Sinan Kaya
2015-11-08  4:52   ` Sinan Kaya
2015-11-08  5:02   ` Timur Tabi
2015-11-08  5:02     ` Timur Tabi
     [not found] ` <1446958380-23298-1-git-send-email-okaya-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>
2015-11-08  4:52   ` [PATCH V3 2/4] dma: add Qualcomm Technologies HIDMA management driver Sinan Kaya
2015-11-08  4:52     ` Sinan Kaya
2015-11-08  4:52     ` Sinan Kaya
2015-11-08  5:08     ` Timur Tabi
2015-11-08  5:08       ` Timur Tabi
2015-11-09  2:17       ` Sinan Kaya
2015-11-09  2:17         ` Sinan Kaya
2015-11-09 18:25         ` Rob Herring
2015-11-09 18:25           ` Rob Herring
2015-11-10  4:57           ` Sinan Kaya
2015-11-10  4:57             ` Sinan Kaya
2015-11-08  9:32     ` kbuild test robot
2015-11-08  9:32       ` kbuild test robot
2015-11-08  9:32       ` kbuild test robot
2015-11-08  4:52 ` [PATCH V3 3/4] dmaselftest: add memcpy selftest support functions Sinan Kaya
2015-11-08  4:52   ` Sinan Kaya
2015-11-08  5:13   ` Timur Tabi
2015-11-08  5:13     ` Timur Tabi
2015-11-09  2:46     ` Sinan Kaya
2015-11-09  2:46       ` Sinan Kaya
2015-11-09 13:48       ` Timur Tabi
2015-11-09 13:48         ` Timur Tabi
2015-11-10  4:49         ` Sinan Kaya
2015-11-10  4:49           ` Sinan Kaya
2015-11-10 10:13           ` Arnd Bergmann
2015-11-10 10:13             ` Arnd Bergmann
2015-11-08 20:09   ` Andy Shevchenko
2015-11-08 20:09     ` Andy Shevchenko
2015-11-09  3:07     ` Sinan Kaya
2015-11-09  3:07       ` Sinan Kaya
2015-11-09  9:26       ` Andy Shevchenko
2015-11-09  9:26         ` Andy Shevchenko
2015-11-10  4:55         ` Sinan Kaya
2015-11-10  4:55           ` Sinan Kaya
2015-11-10  4:59           ` Timur Tabi
2015-11-10  4:59             ` Timur Tabi
2015-11-08  4:53 ` [PATCH V3 4/4] dma: add Qualcomm Technologies HIDMA channel driver Sinan Kaya
2015-11-08  4:53   ` Sinan Kaya
     [not found]   ` <1446958380-23298-5-git-send-email-okaya-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>
2015-11-08 19:13     ` kbuild test robot
2015-11-08 19:13       ` kbuild test robot
2015-11-08 19:13       ` kbuild test robot
2015-11-09  0:43       ` Sinan Kaya
2015-11-09  0:43         ` Sinan Kaya
2015-11-11  2:21         ` [kbuild-all] " Fengguang Wu
2015-11-11  2:21           ` Fengguang Wu
     [not found]           ` <20151111022103.GA29459-q6ZYBFIlbFFi0tQiZxhdj1DQ4js95KgL@public.gmane.org>
2015-11-11  8:42             ` Arnd Bergmann
2015-11-11  8:42               ` Arnd Bergmann
2015-11-11  8:42               ` Arnd Bergmann
2015-11-12  8:20               ` Fengguang Wu
2015-11-12  8:20                 ` Fengguang Wu
2015-11-12 13:49                 ` Arnd Bergmann
2015-11-12 13:49                   ` Arnd Bergmann
2015-11-08 20:47     ` Andy Shevchenko
2015-11-08 20:47       ` Andy Shevchenko
2015-11-08 20:47       ` Andy Shevchenko
2015-11-08 21:51       ` Sinan Kaya
2015-11-08 21:51         ` Sinan Kaya
2015-11-08 22:00         ` Andy Shevchenko
2015-11-08 22:00           ` Andy Shevchenko
2015-11-09  0:31       ` Sinan Kaya
2015-11-09  0:31         ` Sinan Kaya
2015-11-09 18:19   ` Rob Herring
2015-11-09 18:19     ` Rob Herring
2015-11-10  4:44     ` Sinan Kaya
2015-11-10  4:44       ` Sinan Kaya
