* [PATCH V13 00/10] dma: add Qualcomm Technologies HIDMA driver
@ 2016-01-29 22:35 ` Sinan Kaya
  0 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

The Qualcomm Technologies HIDMA device has been designed
to support virtualization technology. The driver has been
divided into two drivers to mirror the hardware design:

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels. These
channels share a set of common parameters, which are
initialized by the management driver during power up. The
same management driver also monitors the execution of the
channels. In the future, the management driver will be able
to change performance behavior dynamically, such as
bandwidth allocation and prioritization.

The management driver is executed in host context and
is the main management entity for all channels provided by
the device.
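
For context, a guest-side consumer is expected to drive a HIDMA
channel through the standard dmaengine API. The fragment below is an
illustrative sketch only (not part of this series) of how such a
consumer could request a memcpy-capable channel and submit a single
copy; the function name is made up for the example.

#include <linux/dmaengine.h>

/* Illustrative consumer: request any memcpy-capable channel (which
 * may be a HIDMA channel on this platform) and run one DMA copy.
 */
static int example_dma_copy(dma_addr_t dst, dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cookie_t cookie;
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -EIO;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* Poll for completion; a real consumer would use a callback. */
	while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) ==
	       DMA_IN_PROGRESS)
		cpu_relax();

	dma_release_channel(chan);
	return 0;
}

The management driver, by contrast, does not register a dmaengine
channel at all; it only programs the shared QoS/limit registers and
exports them through sysfs.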

------------------------
What's new
------------------------
- VFIO Reset driver
- More documentation in device-tree binding.
- Removed event-channel parameter

------------------------
Git repos
------------------------
QEMU Support
https://www.codeaurora.org/cgit/quic/qemu/qemu/log/?h=v2.5.0-sriov

------------------------
History of Changes
------------------------
Creating a QCOM directory for all QCOM DMA source files.
Changes from V12: (https://lkml.org/lkml/2016/1/11/525)
* none

dma: hidma: Add Device Tree Binding
Changes from V12: (https://lkml.org/lkml/2016/1/11/522)
* s/driver/binding/
* s/hypervisor/host/
* add iommu node as required.
* removed the flexibility to choose the event channel; HW defaults are
  used instead. The channel-index parameter is removed.

dma: add Qualcomm Technologies HIDMA management driver
Changes from V12: (https://lkml.org/lkml/2016/1/11/518)
* s/hypervisor/host/

dma: add Qualcomm Technologies HIDMA channel driver
Changes from V12: (https://lkml.org/lkml/2016/1/11/523)
* none

dma: qcom_hidma: implement lower level hardware interface
Changes from V12: (https://lkml.org/lkml/2016/1/11/519)
* none

dma: qcom_hidma: add debugfs hooks
Changes from V12: (https://lkml.org/lkml/2016/1/11/521)
* none

dma: qcom_hidma: add support for object hierarchy
Changes from V12: (https://lkml.org/lkml/2016/1/11/520)
* none

Sinan Kaya (10):
  dma: qcom_bam_dma: move to qcom directory
  dma: hidma: Add Device Tree binding
  dma: add Qualcomm Technologies HIDMA management driver
  dma: add Qualcomm Technologies HIDMA channel driver
  dma: qcom_hidma: implement lower level hardware interface
  dma: qcom_hidma: add debugfs hooks
  dma: qcom_hidma: add support for object hierarchy
  dma: qcom_hidma: read the channel id from HW
  vfio, platform: add support for ACPI while detecting the reset driver
  vfio, platform: add QTI HIDMA reset driver

 Documentation/ABI/testing/sysfs-platform-hidma     |   9 +
 .../ABI/testing/sysfs-platform-hidma-mgmt          |  97 +++
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  89 ++
 drivers/dma/Kconfig                                |  11 +-
 drivers/dma/Makefile                               |   2 +-
 drivers/dma/qcom/Kconfig                           |  29 +
 drivers/dma/qcom/Makefile                          |   5 +
 drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c}     |   4 +-
 drivers/dma/qcom/hidma.c                           | 747 +++++++++++++++++
 drivers/dma/qcom/hidma.h                           | 162 ++++
 drivers/dma/qcom/hidma_dbg.c                       | 219 +++++
 drivers/dma/qcom/hidma_ll.c                        | 927 +++++++++++++++++++++
 drivers/dma/qcom/hidma_mgmt.c                      | 400 +++++++++
 drivers/dma/qcom/hidma_mgmt.h                      |  39 +
 drivers/dma/qcom/hidma_mgmt_sys.c                  | 295 +++++++
 drivers/vfio/platform/reset/Kconfig                |   9 +
 drivers/vfio/platform/reset/Makefile               |   2 +
 .../vfio/platform/reset/vfio_platform_amdxgbe.c    |   3 +-
 .../platform/reset/vfio_platform_calxedaxgmac.c    |   3 +-
 .../vfio/platform/reset/vfio_platform_qcomhidma.c  |  99 +++
 drivers/vfio/platform/vfio_platform_common.c       |  80 +-
 drivers/vfio/platform/vfio_platform_private.h      |  41 +-
 22 files changed, 3229 insertions(+), 43 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma
 create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
 create mode 100644 drivers/dma/qcom/Kconfig
 create mode 100644 drivers/dma/qcom/Makefile
 rename drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} (99%)
 create mode 100644 drivers/dma/qcom/hidma.c
 create mode 100644 drivers/dma/qcom/hidma.h
 create mode 100644 drivers/dma/qcom/hidma_dbg.c
 create mode 100644 drivers/dma/qcom/hidma_ll.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c
 create mode 100644 drivers/vfio/platform/reset/vfio_platform_qcomhidma.c

-- 
1.8.2.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH V13 01/10] dma: qcom_bam_dma: move to qcom directory
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

Creating a QCOM directory for all QCOM DMA source files.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Gross <agross@codeaurora.org>
---
 drivers/dma/Kconfig                            | 11 ++---------
 drivers/dma/Makefile                           |  2 +-
 drivers/dma/qcom/Kconfig                       |  8 ++++++++
 drivers/dma/qcom/Makefile                      |  1 +
 drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} |  4 ++--
 5 files changed, 14 insertions(+), 12 deletions(-)
 create mode 100644 drivers/dma/qcom/Kconfig
 create mode 100644 drivers/dma/qcom/Makefile
 rename drivers/dma/{qcom_bam_dma.c => qcom/bam_dma.c} (99%)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 79b1390..245b061 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -408,15 +408,6 @@ config PXA_DMA
 	  16 to 32 channels for peripheral to memory or memory to memory
 	  transfers.
 
-config QCOM_BAM_DMA
-	tristate "QCOM BAM DMA support"
-	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
-	select DMA_ENGINE
-	select DMA_VIRTUAL_CHANNELS
-	---help---
-	  Enable support for the QCOM BAM DMA controller.  This controller
-	  provides DMA capabilities for a variety of on-chip devices.
-
 config SIRF_DMA
 	tristate "CSR SiRFprimaII/SiRFmarco DMA support"
 	depends on ARCH_SIRF
@@ -539,6 +530,8 @@ config ZX_DMA
 # driver files
 source "drivers/dma/bestcomm/Kconfig"
 
+source "drivers/dma/qcom/Kconfig"
+
 source "drivers/dma/dw/Kconfig"
 
 source "drivers/dma/hsu/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 2dd0a067..6084127 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -52,7 +52,6 @@ obj-$(CONFIG_PCH_DMA) += pch_dma.o
 obj-$(CONFIG_PL330_DMA) += pl330.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_PXA_DMA) += pxa_dma.o
-obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
 obj-$(CONFIG_RENESAS_DMA) += sh/
 obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
@@ -67,4 +66,5 @@ obj-$(CONFIG_TI_EDMA) += edma.o
 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx296702_dma.o
 
+obj-y += qcom/
 obj-y += xilinx/
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
new file mode 100644
index 0000000..f17c272
--- /dev/null
+++ b/drivers/dma/qcom/Kconfig
@@ -0,0 +1,8 @@
+config QCOM_BAM_DMA
+	tristate "QCOM BAM DMA support"
+	depends on ARCH_QCOM || (COMPILE_TEST && OF && ARM)
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	---help---
+	  Enable support for the QCOM BAM DMA controller.  This controller
+	  provides DMA capabilities for a variety of on-chip devices.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
new file mode 100644
index 0000000..f612ae3
--- /dev/null
+++ b/drivers/dma/qcom/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom/bam_dma.c
similarity index 99%
rename from drivers/dma/qcom_bam_dma.c
rename to drivers/dma/qcom/bam_dma.c
index 5a250cd..b6f053d 100644
--- a/drivers/dma/qcom_bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -49,8 +49,8 @@
 #include <linux/clk.h>
 #include <linux/dmaengine.h>
 
-#include "dmaengine.h"
-#include "virt-dma.h"
+#include "../dmaengine.h"
+#include "../virt-dma.h"
 
 struct bam_desc_hw {
 	u32 addr;		/* Buffer physical address */
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 02/10] dma: hidma: Add Device Tree binding
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: vikrams, arnd, eric.auger, linux-arm-msm, linux-kernel,
	Sinan Kaya, linux-arm-kernel, agross, shankerd

Add documentation for the Qualcomm Technologies HIDMA binding.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Rob Herring <robh@kernel.org>
---
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    | 92 ++++++++++++++++++++++
 1 file changed, 92 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
new file mode 100644
index 0000000..e3677a5
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
@@ -0,0 +1,92 @@
+Qualcomm Technologies HIDMA Management interface
+
+Qualcomm Technologies HIDMA is a high speed DMA device. It only supports
+memcpy and memset capabilities. It has been designed for virtualized
+environments.
+
+Each HIDMA HW instance consists of multiple DMA channels. These channels
+share the same bandwidth. The bandwidth utilization can be partitioned
+among channels based on the priority and weight assignments.
+
+There are only two priority levels and 15 weight assignments possible.
+
+Other parameters here determine how much of the system bus this HIDMA
+instance can use, such as the maximum read/write request size and the
+number of bytes to read/write in a single burst.
+
+Main node required properties:
+- compatible: "qcom,hidma-mgmt-1.0";
+- reg: Address range for DMA device
+- dma-channels: Number of channels supported by this DMA controller.
+- max-write-burst-bytes: Maximum write burst in bytes that HIDMA can
+  occupy the bus for in a single transaction. A memcpy request is
+  fragmented to multiples of this amount. This parameter is used while
+  writing into destination memory. Setting this value incorrectly can
+  starve other peripherals in the system.
+- max-read-burst-bytes: Maximum read burst in bytes that HIDMA can
+  occupy the bus for in a single transaction. A memcpy request is
+  fragmented to multiples of this amount. This parameter is used while
+  reading the source memory. Setting this value incorrectly can starve
+  other peripherals in the system.
+- max-write-transactions: This value is how many times a write burst is
+  applied back to back while writing to the destination before yielding
+  the bus.
+- max-read-transactions: This value is how many times a read burst is
+  applied back to back while reading the source before yielding the bus.
+- channel-reset-timeout-cycles: Channel reset timeout in cycles for this SoC.
+  Once a reset is applied to the HW, the HW starts a timer to confirm that
+  the reset operation completed. If the reset is not completed within this
+  time, the HW reports a reset failure.
+
+Sub-nodes:
+
+HIDMA has one or more DMA channels that are used to move data from one
+memory location to another.
+
+When the OS is not in control of the management interface (i.e. it's a guest),
+the channel nodes appear on their own, not under a management node.
+
+Required properties:
+- compatible: must contain "qcom,hidma-1.0"
+- reg: Addresses for the transfer and event channel
+- interrupts: Should contain the event interrupt
+- desc-count: Number of asynchronous requests this channel can handle
+- channel-index: The HW event channel on which completions will be delivered.
+- iommus: reference to the IOMMU node (required)
+
+Example:
+
+Hypervisor OS configuration:
+
+	hidma-mgmt@f9984000 {
+		compatible = "qcom,hidma-mgmt-1.0";
+		reg = <0xf9984000 0x15000>;
+		dma-channels = <6>;
+		max-write-burst-bytes = <1024>;
+		max-read-burst-bytes = <1024>;
+		max-write-transactions = <31>;
+		max-read-transactions = <31>;
+		channel-reset-timeout-cycles = <0x500>;
+
+		hidma_24: dma-controller@0x5c050000 {
+			compatible = "qcom,hidma-1.0";
+			reg = <0 0x5c050000 0x0 0x1000>,
+			      <0 0x5c0b0000 0x0 0x1000>;
+			interrupts = <0 389 0>;
+			desc-count = <10>;
+			iommus = <&system_mmu>;
+			channel-index = <4>;
+		};
+	};
+
+Guest OS configuration:
+
+	hidma_24: dma-controller@0x5c050000 {
+		compatible = "qcom,hidma-1.0";
+		reg = <0 0x5c050000 0x0 0x1000>,
+		      <0 0x5c0b0000 0x0 0x1000>;
+		interrupts = <0 389 0>;
+		desc-count = <10>;
+		channel-index = <4>;
+		iommus = <&system_mmu>;
+	};
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread
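
As a side note (not part of the posted patch), the per-channel
properties above are regular device properties, so a channel driver
probe would typically read them with the unified device-property
helpers, much like the management driver in the next patch reads its
own properties. A minimal, hypothetical sketch:

#include <linux/platform_device.h>
#include <linux/property.h>

/* Hypothetical probe fragment: read the "desc-count" property defined
 * by the binding above.
 */
static int hidma_chan_probe_sketch(struct platform_device *pdev)
{
	u32 desc_count;
	int rc;

	rc = device_property_read_u32(&pdev->dev, "desc-count", &desc_count);
	if (rc) {
		dev_err(&pdev->dev, "desc-count property is missing\n");
		return rc;
	}

	dev_info(&pdev->dev, "channel can queue %u async requests\n",
		 desc_count);
	return 0;
}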

* [PATCH V13 03/10] dma: add Qualcomm Technologies HIDMA management driver
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

The Qualcomm Technologies HIDMA device has been designed to support
virtualization technology. The driver has been divided into two drivers
to mirror the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels. These channels
share a set of common parameters, which are initialized by the
management driver during power up. The same management driver also
monitors the execution of the channels and can change performance
behavior dynamically, such as bandwidth allocation and prioritization.

The management driver is executed in host context and is the main
management entity for all channels provided by the device.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
---
 .../ABI/testing/sysfs-platform-hidma-mgmt          |  97 +++++++
 drivers/dma/qcom/Kconfig                           |  11 +
 drivers/dma/qcom/Makefile                          |   2 +
 drivers/dma/qcom/hidma_mgmt.c                      | 302 +++++++++++++++++++++
 drivers/dma/qcom/hidma_mgmt.h                      |  39 +++
 drivers/dma/qcom/hidma_mgmt_sys.c                  | 295 ++++++++++++++++++++
 6 files changed, 746 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

diff --git a/Documentation/ABI/testing/sysfs-platform-hidma-mgmt b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
new file mode 100644
index 0000000..c2fb5d0
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
@@ -0,0 +1,97 @@
+What:		/sys/devices/platform/hidma-mgmt*/chanops/chan*/priority
+		/sys/devices/platform/QCOM8060:*/chanops/chan*/priority
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains either 0 or 1 and indicates if the DMA channel is a
+		low priority (0) or high priority (1) channel.
+
+What:		/sys/devices/platform/hidma-mgmt*/chanops/chan*/weight
+		/sys/devices/platform/QCOM8060:*/chanops/chan*/weight
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains 0..15 and indicates the weight of the channel among
+		equal priority channels during round robin scheduling.
+
+What:		/sys/devices/platform/hidma-mgmt*/chreset_timeout_cycles
+		/sys/devices/platform/QCOM8060:*/chreset_timeout_cycles
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains the platform specific cycle value to wait after a
+		reset command is issued. If the value is chosen too short,
+		then the HW will issue a reset failure interrupt. The value
+		is platform specific and should not be changed without
+		consultation.
+
+What:		/sys/devices/platform/hidma-mgmt*/dma_channels
+		/sys/devices/platform/QCOM8060:*/dma_channels
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains the number of dma channels supported by one instance
+		of HIDMA hardware. The value may change from chip to chip.
+
+What:		/sys/devices/platform/hidma-mgmt*/hw_version_major
+		/sys/devices/platform/QCOM8060:*/hw_version_major
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Version number major for the hardware.
+
+What:		/sys/devices/platform/hidma-mgmt*/hw_version_minor
+		/sys/devices/platform/QCOM8060:*/hw_version_minor
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Version number minor for the hardware.
+
+What:		/sys/devices/platform/hidma-mgmt*/max_rd_xactions
+		/sys/devices/platform/QCOM8060:*/max_rd_xactions
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains a value between 0 and 31. Maximum number of
+		read transactions that can be issued back to back.
+		Choosing a higher number gives better performance but
+		can also cause performance reduction to other peripherals
+		sharing the same bus.
+
+What:		/sys/devices/platform/hidma-mgmt*/max_read_request
+		/sys/devices/platform/QCOM8060:*/max_read_request
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Size of each read request. The value needs to be a power
+		of two and can be between 128 and 1024.
+
+What:		/sys/devices/platform/hidma-mgmt*/max_wr_xactions
+		/sys/devices/platform/QCOM8060:*/max_wr_xactions
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains a value between 0 and 31. Maximum number of
+		write transactions that can be issued back to back.
+		Choosing a higher number gives better performance but
+		can also cause performance reduction to other peripherals
+		sharing the same bus.
+
+
+What:		/sys/devices/platform/hidma-mgmt*/max_write_request
+		/sys/devices/platform/QCOM8060:*/max_write_request
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Size of each write request. The value needs to be a power
+		of two and can be between 128 and 1024.
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index f17c272..c975b11 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -6,3 +6,14 @@ config QCOM_BAM_DMA
 	---help---
 	  Enable support for the QCOM BAM DMA controller.  This controller
 	  provides DMA capabilities for a variety of on-chip devices.
+
+config QCOM_HIDMA_MGMT
+	tristate "Qualcomm Technologies HIDMA Management support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA Management.
+	  Each DMA device requires one management interface driver
+	  for basic initialization before QCOM_HIDMA channel driver can
+	  start managing the channels. In a virtualized environment,
+	  the guest OS would run QCOM_HIDMA channel driver and the
+	  host would run the QCOM_HIDMA_MGMT management driver.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index f612ae3..bfea699 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1 +1,3 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
+hdma_mgmt-objs	 := hidma_mgmt.o hidma_mgmt_sys.o
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
new file mode 100644
index 0000000..ef491b8
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -0,0 +1,302 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine Management interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/acpi.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+#include <linux/bitops.h>
+
+#include "hidma_mgmt.h"
+
+#define HIDMA_QOS_N_OFFSET		0x300
+#define HIDMA_CFG_OFFSET		0x400
+#define HIDMA_MAX_BUS_REQ_LEN_OFFSET	0x41C
+#define HIDMA_MAX_XACTIONS_OFFSET	0x420
+#define HIDMA_HW_VERSION_OFFSET	0x424
+#define HIDMA_CHRESET_TIMEOUT_OFFSET	0x418
+
+#define HIDMA_MAX_WR_XACTIONS_MASK	GENMASK(4, 0)
+#define HIDMA_MAX_RD_XACTIONS_MASK	GENMASK(4, 0)
+#define HIDMA_WEIGHT_MASK		GENMASK(6, 0)
+#define HIDMA_MAX_BUS_REQ_LEN_MASK	GENMASK(15, 0)
+#define HIDMA_CHRESET_TIMEOUT_MASK	GENMASK(19, 0)
+
+#define HIDMA_MAX_WR_XACTIONS_BIT_POS	16
+#define HIDMA_MAX_BUS_WR_REQ_BIT_POS	16
+#define HIDMA_WRR_BIT_POS		8
+#define HIDMA_PRIORITY_BIT_POS		15
+
+#define HIDMA_AUTOSUSPEND_TIMEOUT	2000
+#define HIDMA_MAX_CHANNEL_WEIGHT	15
+
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
+{
+	unsigned int i;
+	u32 val;
+
+	if (!is_power_of_2(mgmtdev->max_write_request) ||
+	    (mgmtdev->max_write_request < 128) ||
+	    (mgmtdev->max_write_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
+			mgmtdev->max_write_request);
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(mgmtdev->max_read_request) ||
+	    (mgmtdev->max_read_request < 128) ||
+	    (mgmtdev->max_read_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
+			mgmtdev->max_read_request);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_wr_xactions > HIDMA_MAX_WR_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_wr_xactions cannot be bigger than %ld\n",
+			HIDMA_MAX_WR_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_rd_xactions > HIDMA_MAX_RD_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_rd_xactions cannot be bigger than %ld\n",
+			HIDMA_MAX_RD_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		if (mgmtdev->priority[i] > 1) {
+			dev_err(&mgmtdev->pdev->dev,
+				"priority can be 0 or 1\n");
+			return -EINVAL;
+		}
+
+		if (mgmtdev->weight[i] > HIDMA_MAX_CHANNEL_WEIGHT) {
+			dev_err(&mgmtdev->pdev->dev,
+				"max value of weight can be %d.\n",
+				HIDMA_MAX_CHANNEL_WEIGHT);
+			return -EINVAL;
+		}
+
+		/* weight needs to be at least one */
+		if (mgmtdev->weight[i] == 0)
+			mgmtdev->weight[i] = 1;
+	}
+
+	pm_runtime_get_sync(&mgmtdev->pdev->dev);
+	val = readl(mgmtdev->virtaddr + HIDMA_MAX_BUS_REQ_LEN_OFFSET);
+	val &= ~(HIDMA_MAX_BUS_REQ_LEN_MASK << HIDMA_MAX_BUS_WR_REQ_BIT_POS);
+	val |= mgmtdev->max_write_request << HIDMA_MAX_BUS_WR_REQ_BIT_POS;
+	val &= ~HIDMA_MAX_BUS_REQ_LEN_MASK;
+	val |= mgmtdev->max_read_request;
+	writel(val, mgmtdev->virtaddr + HIDMA_MAX_BUS_REQ_LEN_OFFSET);
+
+	val = readl(mgmtdev->virtaddr + HIDMA_MAX_XACTIONS_OFFSET);
+	val &= ~(HIDMA_MAX_WR_XACTIONS_MASK << HIDMA_MAX_WR_XACTIONS_BIT_POS);
+	val |= mgmtdev->max_wr_xactions << HIDMA_MAX_WR_XACTIONS_BIT_POS;
+	val &= ~HIDMA_MAX_RD_XACTIONS_MASK;
+	val |= mgmtdev->max_rd_xactions;
+	writel(val, mgmtdev->virtaddr + HIDMA_MAX_XACTIONS_OFFSET);
+
+	mgmtdev->hw_version =
+	    readl(mgmtdev->virtaddr + HIDMA_HW_VERSION_OFFSET);
+	mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
+	mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		u32 weight = mgmtdev->weight[i];
+		u32 priority = mgmtdev->priority[i];
+
+		val = readl(mgmtdev->virtaddr + HIDMA_QOS_N_OFFSET + (4 * i));
+		val &= ~(1 << HIDMA_PRIORITY_BIT_POS);
+		val |= (priority & 0x1) << HIDMA_PRIORITY_BIT_POS;
+		val &= ~(HIDMA_WEIGHT_MASK << HIDMA_WRR_BIT_POS);
+		val |= (weight & HIDMA_WEIGHT_MASK) << HIDMA_WRR_BIT_POS;
+		writel(val, mgmtdev->virtaddr + HIDMA_QOS_N_OFFSET + (4 * i));
+	}
+
+	val = readl(mgmtdev->virtaddr + HIDMA_CHRESET_TIMEOUT_OFFSET);
+	val &= ~HIDMA_CHRESET_TIMEOUT_MASK;
+	val |= mgmtdev->chreset_timeout_cycles & HIDMA_CHRESET_TIMEOUT_MASK;
+	writel(val, mgmtdev->virtaddr + HIDMA_CHRESET_TIMEOUT_OFFSET);
+
+	pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
+	pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hidma_mgmt_setup);
+
+static int hidma_mgmt_probe(struct platform_device *pdev)
+{
+	struct hidma_mgmt_dev *mgmtdev;
+	struct resource *res;
+	void __iomem *virtaddr;
+	int irq;
+	int rc;
+	u32 val;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, HIDMA_AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	virtaddr = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(virtaddr)) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev, "irq resources not found\n");
+		rc = irq;
+		goto out;
+	}
+
+	mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
+	if (!mgmtdev) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->pdev = pdev;
+	mgmtdev->addrsize = resource_size(res);
+	mgmtdev->virtaddr = virtaddr;
+
+	rc = device_property_read_u32(&pdev->dev, "dma-channels",
+				      &mgmtdev->dma_channels);
+	if (rc) {
+		dev_err(&pdev->dev, "number of channels missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev,
+				      "channel-reset-timeout-cycles",
+				      &mgmtdev->chreset_timeout_cycles);
+	if (rc) {
+		dev_err(&pdev->dev, "channel reset timeout missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
+				      &mgmtdev->max_write_request);
+	if (rc) {
+		dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
+				      &mgmtdev->max_read_request);
+	if (rc) {
+		dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-write-transactions",
+				      &mgmtdev->max_wr_xactions);
+	if (rc) {
+		dev_err(&pdev->dev, "max-write-transactions missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-read-transactions",
+				      &mgmtdev->max_rd_xactions);
+	if (rc) {
+		dev_err(&pdev->dev, "max-read-transactions missing\n");
+		goto out;
+	}
+
+	mgmtdev->priority = devm_kcalloc(&pdev->dev,
+					 mgmtdev->dma_channels,
+					 sizeof(*mgmtdev->priority),
+					 GFP_KERNEL);
+	if (!mgmtdev->priority) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->weight = devm_kcalloc(&pdev->dev,
+				       mgmtdev->dma_channels,
+				       sizeof(*mgmtdev->weight), GFP_KERNEL);
+	if (!mgmtdev->weight) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rc = hidma_mgmt_setup(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "setup failed\n");
+		goto out;
+	}
+
+	/* start the HW */
+	val = readl(mgmtdev->virtaddr + HIDMA_CFG_OFFSET);
+	val |= 1;
+	writel(val, mgmtdev->virtaddr + HIDMA_CFG_OFFSET);
+
+	rc = hidma_mgmt_init_sys(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "sysfs setup failed\n");
+		goto out;
+	}
+
+	dev_info(&pdev->dev,
+		 "HW rev: %d.%d @ %pa with %d physical channels\n",
+		 mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
+		 &res->start, mgmtdev->dma_channels);
+
+	platform_set_drvdata(pdev, mgmtdev);
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+	return 0;
+out:
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return rc;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
+	{"QCOM8060"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_mgmt_match[] = {
+	{.compatible = "qcom,hidma-mgmt-1.0",},
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
+
+static struct platform_driver hidma_mgmt_driver = {
+	.probe = hidma_mgmt_probe,
+	.driver = {
+		   .name = "hidma-mgmt",
+		   .of_match_table = hidma_mgmt_match,
+		   .acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
+	},
+};
+
+module_platform_driver(hidma_mgmt_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
new file mode 100644
index 0000000..f7daf33
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.h
@@ -0,0 +1,39 @@
+/*
+ * Qualcomm Technologies HIDMA Management common header
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+struct hidma_mgmt_dev {
+	u8 hw_version_major;
+	u8 hw_version_minor;
+
+	u32 max_wr_xactions;
+	u32 max_rd_xactions;
+	u32 max_write_request;
+	u32 max_read_request;
+	u32 dma_channels;
+	u32 chreset_timeout_cycles;
+	u32 hw_version;
+	u32 *priority;
+	u32 *weight;
+
+	/* Hardware device constants */
+	void __iomem *virtaddr;
+	resource_size_t addrsize;
+
+	struct kobject **chroots;
+	struct platform_device *pdev;
+};
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
new file mode 100644
index 0000000..d61f106
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt_sys.c
@@ -0,0 +1,295 @@
+/*
+ * Qualcomm Technologies HIDMA Management SYS interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/sysfs.h>
+#include <linux/platform_device.h>
+
+#include "hidma_mgmt.h"
+
+struct hidma_chan_attr {
+	struct hidma_mgmt_dev *mdev;
+	int index;
+	struct kobj_attribute attr;
+};
+
+struct hidma_mgmt_fileinfo {
+	char *name;
+	int mode;
+	int (*get)(struct hidma_mgmt_dev *mdev);
+	int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
+};
+
+#define IMPLEMENT_GETSET(name)					\
+static int get_##name(struct hidma_mgmt_dev *mdev)		\
+{								\
+	return mdev->name;					\
+}								\
+static int set_##name(struct hidma_mgmt_dev *mdev, u64 val)	\
+{								\
+	u64 tmp;						\
+	int rc;							\
+								\
+	tmp = mdev->name;					\
+	mdev->name = val;					\
+	rc = hidma_mgmt_setup(mdev);				\
+	if (rc)							\
+		mdev->name = tmp;				\
+	return rc;						\
+}
+
+#define DECLARE_ATTRIBUTE(name, mode)				\
+	{#name, mode, get_##name, set_##name}
+
+IMPLEMENT_GETSET(hw_version_major)
+IMPLEMENT_GETSET(hw_version_minor)
+IMPLEMENT_GETSET(max_wr_xactions)
+IMPLEMENT_GETSET(max_rd_xactions)
+IMPLEMENT_GETSET(max_write_request)
+IMPLEMENT_GETSET(max_read_request)
+IMPLEMENT_GETSET(dma_channels)
+IMPLEMENT_GETSET(chreset_timeout_cycles)
+
+static int set_priority(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->priority[i];
+	mdev->priority[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->priority[i] = tmp;
+	return rc;
+}
+
+static int set_weight(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->weight[i];
+	mdev->weight[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->weight[i] = tmp;
+	return rc;
+}
+
+static struct hidma_mgmt_fileinfo hidma_mgmt_files[] = {
+	DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
+	DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
+	DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
+	DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_wr_xactions, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_rd_xactions, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_write_request, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_read_request, S_IRUGO),
+};
+
+static ssize_t show_values(struct device *dev, struct device_attribute *attr,
+			   char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned int i;
+
+	buf[0] = 0;
+
+	for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+		if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
+			sprintf(buf, "%d\n", hidma_mgmt_files[i].get(mdev));
+			break;
+		}
+	}
+	return strlen(buf);
+}
+
+static ssize_t set_values(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned long tmp;
+	unsigned int i;
+	int rc;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+		if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
+			rc = hidma_mgmt_files[i].set(mdev, tmp);
+			if (rc)
+				return rc;
+
+			break;
+		}
+	}
+	return count;
+}
+
+static ssize_t show_values_channel(struct kobject *kobj,
+				   struct kobj_attribute *attr, char *buf)
+{
+	struct hidma_chan_attr *chattr;
+	struct hidma_mgmt_dev *mdev;
+
+	buf[0] = 0;
+	chattr = container_of(attr, struct hidma_chan_attr, attr);
+	mdev = chattr->mdev;
+	if (strcmp(attr->attr.name, "priority") == 0)
+		sprintf(buf, "%d\n", mdev->priority[chattr->index]);
+	else if (strcmp(attr->attr.name, "weight") == 0)
+		sprintf(buf, "%d\n", mdev->weight[chattr->index]);
+
+	return strlen(buf);
+}
+
+static ssize_t set_values_channel(struct kobject *kobj,
+				  struct kobj_attribute *attr, const char *buf,
+				  size_t count)
+{
+	struct hidma_chan_attr *chattr;
+	struct hidma_mgmt_dev *mdev;
+	unsigned long tmp;
+	int rc;
+
+	chattr = container_of(attr, struct hidma_chan_attr, attr);
+	mdev = chattr->mdev;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	if (strcmp(attr->attr.name, "priority") == 0) {
+		rc = set_priority(mdev, chattr->index, tmp);
+		if (rc)
+			return rc;
+	} else if (strcmp(attr->attr.name, "weight") == 0) {
+		rc = set_weight(mdev, chattr->index, tmp);
+		if (rc)
+			return rc;
+	}
+	return count;
+}
+
+static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
+{
+	struct device_attribute *attrs;
+	char *name_copy;
+
+	attrs = devm_kmalloc(&dev->pdev->dev,
+			     sizeof(struct device_attribute), GFP_KERNEL);
+	if (!attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(&dev->pdev->dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	attrs->attr.name = name_copy;
+	attrs->attr.mode = mode;
+	attrs->show = show_values;
+	attrs->store = set_values;
+	sysfs_attr_init(&attrs->attr);
+
+	return device_create_file(&dev->pdev->dev, attrs);
+}
+
+static int create_sysfs_entry_channel(struct hidma_mgmt_dev *mdev, char *name,
+				      int mode, int index,
+				      struct kobject *parent)
+{
+	struct hidma_chan_attr *chattr;
+	char *name_copy;
+
+	chattr = devm_kmalloc(&mdev->pdev->dev, sizeof(*chattr), GFP_KERNEL);
+	if (!chattr)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(&mdev->pdev->dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	chattr->mdev = mdev;
+	chattr->index = index;
+	chattr->attr.attr.name = name_copy;
+	chattr->attr.attr.mode = mode;
+	chattr->attr.show = show_values_channel;
+	chattr->attr.store = set_values_channel;
+	sysfs_attr_init(&chattr->attr.attr);
+
+	return sysfs_create_file(parent, &chattr->attr.attr);
+}
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *mdev)
+{
+	unsigned int i;
+	int rc;
+	int required;
+	struct kobject *chanops;
+
+	required = sizeof(*mdev->chroots) * mdev->dma_channels;
+	mdev->chroots = devm_kmalloc(&mdev->pdev->dev, required, GFP_KERNEL);
+	if (!mdev->chroots)
+		return -ENOMEM;
+
+	chanops = kobject_create_and_add("chanops", &mdev->pdev->dev.kobj);
+	if (!chanops)
+		return -ENOMEM;
+
+	/* create each channel directory here */
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[20];
+
+		snprintf(name, sizeof(name), "chan%d", i);
+		mdev->chroots[i] = kobject_create_and_add(name, chanops);
+		if (!mdev->chroots[i])
+			return -ENOMEM;
+	}
+
+	/* populate common parameters */
+	for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+		rc = create_sysfs_entry(mdev, hidma_mgmt_files[i].name,
+					hidma_mgmt_files[i].mode);
+		if (rc)
+			return rc;
+	}
+
+	/* populate parameters that are per channel */
+	for (i = 0; i < mdev->dma_channels; i++) {
+		rc = create_sysfs_entry_channel(mdev, "priority",
+						(S_IRUGO | S_IWUGO), i,
+						mdev->chroots[i]);
+		if (rc)
+			return rc;
+
+		rc = create_sysfs_entry_channel(mdev, "weight",
+						(S_IRUGO | S_IWUGO), i,
+						mdev->chroots[i]);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hidma_mgmt_init_sys);
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 03/10] dma: add Qualcomm Technologies HIDMA management driver
@ 2016-01-29 22:35   ` Sinan Kaya
  0 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: linux-arm-kernel

The Qualcomm Technologies HIDMA device has been designed to support
virtualization technology. The driver has been divided into two to follow
the hardware design.

1. HIDMA Management driver
2. HIDMA Channel driver

Each HIDMA HW instance consists of multiple channels. These channels share
a set of common parameters, which are initialized by the management driver
during power up. The same management driver is used for monitoring the
execution of the channels, and it can dynamically change performance
behavior such as bandwidth allocation and prioritization.

The management driver is executed in host context and is the main
management entity for all channels provided by the device.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
---
 .../ABI/testing/sysfs-platform-hidma-mgmt          |  97 +++++++
 drivers/dma/qcom/Kconfig                           |  11 +
 drivers/dma/qcom/Makefile                          |   2 +
 drivers/dma/qcom/hidma_mgmt.c                      | 302 +++++++++++++++++++++
 drivers/dma/qcom/hidma_mgmt.h                      |  39 +++
 drivers/dma/qcom/hidma_mgmt_sys.c                  | 295 ++++++++++++++++++++
 6 files changed, 746 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma-mgmt
 create mode 100644 drivers/dma/qcom/hidma_mgmt.c
 create mode 100644 drivers/dma/qcom/hidma_mgmt.h
 create mode 100644 drivers/dma/qcom/hidma_mgmt_sys.c

diff --git a/Documentation/ABI/testing/sysfs-platform-hidma-mgmt b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
new file mode 100644
index 0000000..c2fb5d0
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-platform-hidma-mgmt
@@ -0,0 +1,97 @@
+What:		/sys/devices/platform/hidma-mgmt*/chanops/chan*/priority
+		/sys/devices/platform/QCOM8060:*/chanops/chan*/priority
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains either 0 or 1 and indicates if the DMA channel is a
+		low priority (0) or high priority (1) channel.
+
+What:		/sys/devices/platform/hidma-mgmt*/chanops/chan*/weight
+		/sys/devices/platform/QCOM8060:*/chanops/chan*/weight
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains 0..15 and indicates the weight of the channel among
+		equal priority channels during round robin scheduling.
+
+What:		/sys/devices/platform/hidma-mgmt*/chreset_timeout_cycles
+		/sys/devices/platform/QCOM8060:*/chreset_timeout_cycles
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains the platform-specific cycle value to wait after a
+		reset command is issued. If the value is chosen too small,
+		the HW will issue a reset failure interrupt. The value
+		is platform specific and should not be changed without
+		consultation.
+
+What:		/sys/devices/platform/hidma-mgmt*/dma_channels
+		/sys/devices/platform/QCOM8060:*/dma_channels
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains the number of dma channels supported by one instance
+		of HIDMA hardware. The value may change from chip to chip.
+
+What:		/sys/devices/platform/hidma-mgmt*/hw_version_major
+		/sys/devices/platform/QCOM8060:*/hw_version_major
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Major version number of the hardware.
+
+What:		/sys/devices/platform/hidma-mgmt*/hw_version_minor
+		/sys/devices/platform/QCOM8060:*/hw_version_minor
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Minor version number of the hardware.
+
+What:		/sys/devices/platform/hidma-mgmt*/max_rd_xactions
+		/sys/devices/platform/QCOM8060:*/max_rd_xactions
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains a value between 0 and 31, the maximum number of
+		read transactions that can be issued back to back.
+		Choosing a higher number gives better performance but
+		can also degrade the performance of other peripherals
+		sharing the same bus.
+
+What:		/sys/devices/platform/hidma-mgmt*/max_read_request
+		/sys/devices/platform/QCOM8060:*/max_read_request
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Size of each read request. The value needs to be a power
+		of two and can be between 128 and 1024.
+
+What:		/sys/devices/platform/hidma-mgmt*/max_wr_xactions
+		/sys/devices/platform/QCOM8060:*/max_wr_xactions
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains a value between 0 and 31, the maximum number of
+		write transactions that can be issued back to back.
+		Choosing a higher number gives better performance but
+		can also degrade the performance of other peripherals
+		sharing the same bus.
+
+
+What:		/sys/devices/platform/hidma-mgmt*/max_write_request
+		/sys/devices/platform/QCOM8060:*/max_write_request
+Date:		Nov 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Size of each write request. The value needs to be a power
+		of two and can be between 128 and 1024.
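
To make the chanops interface above concrete, a small user-space sketch
such as the one below could set and read back the priority and weight of
one channel. The instance name QCOM8060:00 and the chan0 directory are
placeholders; the actual names depend on how firmware enumerates the
device.

#include <stdio.h>

/* Placeholder sysfs directory for channel 0 of one HIDMA instance. */
#define CHAN0 "/sys/devices/platform/QCOM8060:00/chanops/chan0/"

static int write_attr(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	char buf[16] = "";
	FILE *f;

	/* Make channel 0 high priority with the maximum round robin weight. */
	write_attr(CHAN0 "priority", "1");
	write_attr(CHAN0 "weight", "15");

	/* Read the priority back; the kernel formats the value as "%d\n". */
	f = fopen(CHAN0 "priority", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("chan0 priority: %s", buf);
		fclose(f);
	}
	return 0;
}
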
diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index f17c272..c975b11 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -6,3 +6,14 @@ config QCOM_BAM_DMA
 	---help---
 	  Enable support for the QCOM BAM DMA controller.  This controller
 	  provides DMA capabilities for a variety of on-chip devices.
+
+config QCOM_HIDMA_MGMT
+	tristate "Qualcomm Technologies HIDMA Management support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA Management.
+	  Each DMA device requires one management interface driver
+	  for basic initialization before QCOM_HIDMA channel driver can
+	  start managing the channels. In a virtualized environment,
+	  the guest OS would run QCOM_HIDMA channel driver and the
+	  host would run the QCOM_HIDMA_MGMT management driver.
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index f612ae3..bfea699 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1 +1,3 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
+obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
+hdma_mgmt-objs	 := hidma_mgmt.o hidma_mgmt_sys.o
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
new file mode 100644
index 0000000..ef491b8
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -0,0 +1,302 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine Management interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/acpi.h>
+#include <linux/of.h>
+#include <linux/property.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+#include <linux/bitops.h>
+
+#include "hidma_mgmt.h"
+
+#define HIDMA_QOS_N_OFFSET		0x300
+#define HIDMA_CFG_OFFSET		0x400
+#define HIDMA_MAX_BUS_REQ_LEN_OFFSET	0x41C
+#define HIDMA_MAX_XACTIONS_OFFSET	0x420
+#define HIDMA_HW_VERSION_OFFSET	0x424
+#define HIDMA_CHRESET_TIMEOUT_OFFSET	0x418
+
+#define HIDMA_MAX_WR_XACTIONS_MASK	GENMASK(4, 0)
+#define HIDMA_MAX_RD_XACTIONS_MASK	GENMASK(4, 0)
+#define HIDMA_WEIGHT_MASK		GENMASK(6, 0)
+#define HIDMA_MAX_BUS_REQ_LEN_MASK	GENMASK(15, 0)
+#define HIDMA_CHRESET_TIMEOUT_MASK	GENMASK(19, 0)
+
+#define HIDMA_MAX_WR_XACTIONS_BIT_POS	16
+#define HIDMA_MAX_BUS_WR_REQ_BIT_POS	16
+#define HIDMA_WRR_BIT_POS		8
+#define HIDMA_PRIORITY_BIT_POS		15
+
+#define HIDMA_AUTOSUSPEND_TIMEOUT	2000
+#define HIDMA_MAX_CHANNEL_WEIGHT	15
+
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev)
+{
+	unsigned int i;
+	u32 val;
+
+	if (!is_power_of_2(mgmtdev->max_write_request) ||
+	    (mgmtdev->max_write_request < 128) ||
+	    (mgmtdev->max_write_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid write request %d\n",
+			mgmtdev->max_write_request);
+		return -EINVAL;
+	}
+
+	if (!is_power_of_2(mgmtdev->max_read_request) ||
+	    (mgmtdev->max_read_request < 128) ||
+	    (mgmtdev->max_read_request > 1024)) {
+		dev_err(&mgmtdev->pdev->dev, "invalid read request %d\n",
+			mgmtdev->max_read_request);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_wr_xactions > HIDMA_MAX_WR_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_wr_xactions cannot be bigger than %ld\n",
+			HIDMA_MAX_WR_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	if (mgmtdev->max_rd_xactions > HIDMA_MAX_RD_XACTIONS_MASK) {
+		dev_err(&mgmtdev->pdev->dev,
+			"max_rd_xactions cannot be bigger than %ld\n",
+			HIDMA_MAX_RD_XACTIONS_MASK);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		if (mgmtdev->priority[i] > 1) {
+			dev_err(&mgmtdev->pdev->dev,
+				"priority can be 0 or 1\n");
+			return -EINVAL;
+		}
+
+		if (mgmtdev->weight[i] > HIDMA_MAX_CHANNEL_WEIGHT) {
+			dev_err(&mgmtdev->pdev->dev,
+				"max value of weight can be %d.\n",
+				HIDMA_MAX_CHANNEL_WEIGHT);
+			return -EINVAL;
+		}
+
+		/* weight needs to be at least one */
+		if (mgmtdev->weight[i] == 0)
+			mgmtdev->weight[i] = 1;
+	}
+
+	pm_runtime_get_sync(&mgmtdev->pdev->dev);
+	val = readl(mgmtdev->virtaddr + HIDMA_MAX_BUS_REQ_LEN_OFFSET);
+	val &= ~(HIDMA_MAX_BUS_REQ_LEN_MASK << HIDMA_MAX_BUS_WR_REQ_BIT_POS);
+	val |= mgmtdev->max_write_request << HIDMA_MAX_BUS_WR_REQ_BIT_POS;
+	val &= ~HIDMA_MAX_BUS_REQ_LEN_MASK;
+	val |= mgmtdev->max_read_request;
+	writel(val, mgmtdev->virtaddr + HIDMA_MAX_BUS_REQ_LEN_OFFSET);
+
+	val = readl(mgmtdev->virtaddr + HIDMA_MAX_XACTIONS_OFFSET);
+	val &= ~(HIDMA_MAX_WR_XACTIONS_MASK << HIDMA_MAX_WR_XACTIONS_BIT_POS);
+	val |= mgmtdev->max_wr_xactions << HIDMA_MAX_WR_XACTIONS_BIT_POS;
+	val &= ~HIDMA_MAX_RD_XACTIONS_MASK;
+	val |= mgmtdev->max_rd_xactions;
+	writel(val, mgmtdev->virtaddr + HIDMA_MAX_XACTIONS_OFFSET);
+
+	mgmtdev->hw_version =
+	    readl(mgmtdev->virtaddr + HIDMA_HW_VERSION_OFFSET);
+	mgmtdev->hw_version_major = (mgmtdev->hw_version >> 28) & 0xF;
+	mgmtdev->hw_version_minor = (mgmtdev->hw_version >> 16) & 0xF;
+
+	for (i = 0; i < mgmtdev->dma_channels; i++) {
+		u32 weight = mgmtdev->weight[i];
+		u32 priority = mgmtdev->priority[i];
+
+		val = readl(mgmtdev->virtaddr + HIDMA_QOS_N_OFFSET + (4 * i));
+		val &= ~(1 << HIDMA_PRIORITY_BIT_POS);
+		val |= (priority & 0x1) << HIDMA_PRIORITY_BIT_POS;
+		val &= ~(HIDMA_WEIGHT_MASK << HIDMA_WRR_BIT_POS);
+		val |= (weight & HIDMA_WEIGHT_MASK) << HIDMA_WRR_BIT_POS;
+		writel(val, mgmtdev->virtaddr + HIDMA_QOS_N_OFFSET + (4 * i));
+	}
+
+	val = readl(mgmtdev->virtaddr + HIDMA_CHRESET_TIMEOUT_OFFSET);
+	val &= ~HIDMA_CHRESET_TIMEOUT_MASK;
+	val |= mgmtdev->chreset_timeout_cycles & HIDMA_CHRESET_TIMEOUT_MASK;
+	writel(val, mgmtdev->virtaddr + HIDMA_CHRESET_TIMEOUT_OFFSET);
+
+	pm_runtime_mark_last_busy(&mgmtdev->pdev->dev);
+	pm_runtime_put_autosuspend(&mgmtdev->pdev->dev);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hidma_mgmt_setup);
+
+static int hidma_mgmt_probe(struct platform_device *pdev)
+{
+	struct hidma_mgmt_dev *mgmtdev;
+	struct resource *res;
+	void __iomem *virtaddr;
+	int irq;
+	int rc;
+	u32 val;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, HIDMA_AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	virtaddr = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(virtaddr)) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(&pdev->dev, "irq resources not found\n");
+		rc = irq;
+		goto out;
+	}
+
+	mgmtdev = devm_kzalloc(&pdev->dev, sizeof(*mgmtdev), GFP_KERNEL);
+	if (!mgmtdev) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->pdev = pdev;
+	mgmtdev->addrsize = resource_size(res);
+	mgmtdev->virtaddr = virtaddr;
+
+	rc = device_property_read_u32(&pdev->dev, "dma-channels",
+				      &mgmtdev->dma_channels);
+	if (rc) {
+		dev_err(&pdev->dev, "number of channels missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev,
+				      "channel-reset-timeout-cycles",
+				      &mgmtdev->chreset_timeout_cycles);
+	if (rc) {
+		dev_err(&pdev->dev, "channel reset timeout missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-write-burst-bytes",
+				      &mgmtdev->max_write_request);
+	if (rc) {
+		dev_err(&pdev->dev, "max-write-burst-bytes missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-read-burst-bytes",
+				      &mgmtdev->max_read_request);
+	if (rc) {
+		dev_err(&pdev->dev, "max-read-burst-bytes missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-write-transactions",
+				      &mgmtdev->max_wr_xactions);
+	if (rc) {
+		dev_err(&pdev->dev, "max-write-transactions missing\n");
+		goto out;
+	}
+
+	rc = device_property_read_u32(&pdev->dev, "max-read-transactions",
+				      &mgmtdev->max_rd_xactions);
+	if (rc) {
+		dev_err(&pdev->dev, "max-read-transactions missing\n");
+		goto out;
+	}
+
+	mgmtdev->priority = devm_kcalloc(&pdev->dev,
+					 mgmtdev->dma_channels,
+					 sizeof(*mgmtdev->priority),
+					 GFP_KERNEL);
+	if (!mgmtdev->priority) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	mgmtdev->weight = devm_kcalloc(&pdev->dev,
+				       mgmtdev->dma_channels,
+				       sizeof(*mgmtdev->weight), GFP_KERNEL);
+	if (!mgmtdev->weight) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	rc = hidma_mgmt_setup(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "setup failed\n");
+		goto out;
+	}
+
+	/* start the HW */
+	val = readl(mgmtdev->virtaddr + HIDMA_CFG_OFFSET);
+	val |= 1;
+	writel(val, mgmtdev->virtaddr + HIDMA_CFG_OFFSET);
+
+	rc = hidma_mgmt_init_sys(mgmtdev);
+	if (rc) {
+		dev_err(&pdev->dev, "sysfs setup failed\n");
+		goto out;
+	}
+
+	dev_info(&pdev->dev,
+		 "HW rev: %d.%d @ %pa with %d physical channels\n",
+		 mgmtdev->hw_version_major, mgmtdev->hw_version_minor,
+		 &res->start, mgmtdev->dma_channels);
+
+	platform_set_drvdata(pdev, mgmtdev);
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+	return 0;
+out:
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return rc;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_mgmt_acpi_ids[] = {
+	{"QCOM8060"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_mgmt_match[] = {
+	{.compatible = "qcom,hidma-mgmt-1.0",},
+	{},
+};
+MODULE_DEVICE_TABLE(of, hidma_mgmt_match);
+
+static struct platform_driver hidma_mgmt_driver = {
+	.probe = hidma_mgmt_probe,
+	.driver = {
+		   .name = "hidma-mgmt",
+		   .of_match_table = hidma_mgmt_match,
+		   .acpi_match_table = ACPI_PTR(hidma_mgmt_acpi_ids),
+	},
+};
+
+module_platform_driver(hidma_mgmt_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma_mgmt.h b/drivers/dma/qcom/hidma_mgmt.h
new file mode 100644
index 0000000..f7daf33
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt.h
@@ -0,0 +1,39 @@
+/*
+ * Qualcomm Technologies HIDMA Management common header
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+struct hidma_mgmt_dev {
+	u8 hw_version_major;
+	u8 hw_version_minor;
+
+	u32 max_wr_xactions;
+	u32 max_rd_xactions;
+	u32 max_write_request;
+	u32 max_read_request;
+	u32 dma_channels;
+	u32 chreset_timeout_cycles;
+	u32 hw_version;
+	u32 *priority;
+	u32 *weight;
+
+	/* Hardware device constants */
+	void __iomem *virtaddr;
+	resource_size_t addrsize;
+
+	struct kobject **chroots;
+	struct platform_device *pdev;
+};
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *dev);
+int hidma_mgmt_setup(struct hidma_mgmt_dev *mgmtdev);
diff --git a/drivers/dma/qcom/hidma_mgmt_sys.c b/drivers/dma/qcom/hidma_mgmt_sys.c
new file mode 100644
index 0000000..d61f106
--- /dev/null
+++ b/drivers/dma/qcom/hidma_mgmt_sys.c
@@ -0,0 +1,295 @@
+/*
+ * Qualcomm Technologies HIDMA Management SYS interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/sysfs.h>
+#include <linux/platform_device.h>
+
+#include "hidma_mgmt.h"
+
+struct hidma_chan_attr {
+	struct hidma_mgmt_dev *mdev;
+	int index;
+	struct kobj_attribute attr;
+};
+
+struct hidma_mgmt_fileinfo {
+	char *name;
+	int mode;
+	int (*get)(struct hidma_mgmt_dev *mdev);
+	int (*set)(struct hidma_mgmt_dev *mdev, u64 val);
+};
+
+#define IMPLEMENT_GETSET(name)					\
+static int get_##name(struct hidma_mgmt_dev *mdev)		\
+{								\
+	return mdev->name;					\
+}								\
+static int set_##name(struct hidma_mgmt_dev *mdev, u64 val)	\
+{								\
+	u64 tmp;						\
+	int rc;							\
+								\
+	tmp = mdev->name;					\
+	mdev->name = val;					\
+	rc = hidma_mgmt_setup(mdev);				\
+	if (rc)							\
+		mdev->name = tmp;				\
+	return rc;						\
+}
+
+#define DECLARE_ATTRIBUTE(name, mode)				\
+	{#name, mode, get_##name, set_##name}
+
+IMPLEMENT_GETSET(hw_version_major)
+IMPLEMENT_GETSET(hw_version_minor)
+IMPLEMENT_GETSET(max_wr_xactions)
+IMPLEMENT_GETSET(max_rd_xactions)
+IMPLEMENT_GETSET(max_write_request)
+IMPLEMENT_GETSET(max_read_request)
+IMPLEMENT_GETSET(dma_channels)
+IMPLEMENT_GETSET(chreset_timeout_cycles)
+
+static int set_priority(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->priority[i];
+	mdev->priority[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->priority[i] = tmp;
+	return rc;
+}
+
+static int set_weight(struct hidma_mgmt_dev *mdev, unsigned int i, u64 val)
+{
+	u64 tmp;
+	int rc;
+
+	if (i >= mdev->dma_channels)
+		return -EINVAL;
+
+	tmp = mdev->weight[i];
+	mdev->weight[i] = val;
+	rc = hidma_mgmt_setup(mdev);
+	if (rc)
+		mdev->weight[i] = tmp;
+	return rc;
+}
+
+static struct hidma_mgmt_fileinfo hidma_mgmt_files[] = {
+	DECLARE_ATTRIBUTE(hw_version_major, S_IRUGO),
+	DECLARE_ATTRIBUTE(hw_version_minor, S_IRUGO),
+	DECLARE_ATTRIBUTE(dma_channels, S_IRUGO),
+	DECLARE_ATTRIBUTE(chreset_timeout_cycles, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_wr_xactions, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_rd_xactions, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_write_request, S_IRUGO),
+	DECLARE_ATTRIBUTE(max_read_request, S_IRUGO),
+};
+
+static ssize_t show_values(struct device *dev, struct device_attribute *attr,
+			   char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned int i;
+
+	buf[0] = 0;
+
+	for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+		if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
+			sprintf(buf, "%d\n", hidma_mgmt_files[i].get(mdev));
+			break;
+		}
+	}
+	return strlen(buf);
+}
+
+static ssize_t set_values(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_mgmt_dev *mdev = platform_get_drvdata(pdev);
+	unsigned long tmp;
+	unsigned int i;
+	int rc;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+		if (strcmp(attr->attr.name, hidma_mgmt_files[i].name) == 0) {
+			rc = hidma_mgmt_files[i].set(mdev, tmp);
+			if (rc)
+				return rc;
+
+			break;
+		}
+	}
+	return count;
+}
+
+static ssize_t show_values_channel(struct kobject *kobj,
+				   struct kobj_attribute *attr, char *buf)
+{
+	struct hidma_chan_attr *chattr;
+	struct hidma_mgmt_dev *mdev;
+
+	buf[0] = 0;
+	chattr = container_of(attr, struct hidma_chan_attr, attr);
+	mdev = chattr->mdev;
+	if (strcmp(attr->attr.name, "priority") == 0)
+		sprintf(buf, "%d\n", mdev->priority[chattr->index]);
+	else if (strcmp(attr->attr.name, "weight") == 0)
+		sprintf(buf, "%d\n", mdev->weight[chattr->index]);
+
+	return strlen(buf);
+}
+
+static ssize_t set_values_channel(struct kobject *kobj,
+				  struct kobj_attribute *attr, const char *buf,
+				  size_t count)
+{
+	struct hidma_chan_attr *chattr;
+	struct hidma_mgmt_dev *mdev;
+	unsigned long tmp;
+	int rc;
+
+	chattr = container_of(attr, struct hidma_chan_attr, attr);
+	mdev = chattr->mdev;
+
+	rc = kstrtoul(buf, 0, &tmp);
+	if (rc)
+		return rc;
+
+	if (strcmp(attr->attr.name, "priority") == 0) {
+		rc = set_priority(mdev, chattr->index, tmp);
+		if (rc)
+			return rc;
+	} else if (strcmp(attr->attr.name, "weight") == 0) {
+		rc = set_weight(mdev, chattr->index, tmp);
+		if (rc)
+			return rc;
+	}
+	return count;
+}
+
+static int create_sysfs_entry(struct hidma_mgmt_dev *dev, char *name, int mode)
+{
+	struct device_attribute *attrs;
+	char *name_copy;
+
+	attrs = devm_kmalloc(&dev->pdev->dev,
+			     sizeof(struct device_attribute), GFP_KERNEL);
+	if (!attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(&dev->pdev->dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	attrs->attr.name = name_copy;
+	attrs->attr.mode = mode;
+	attrs->show = show_values;
+	attrs->store = set_values;
+	sysfs_attr_init(&attrs->attr);
+
+	return device_create_file(&dev->pdev->dev, attrs);
+}
+
+static int create_sysfs_entry_channel(struct hidma_mgmt_dev *mdev, char *name,
+				      int mode, int index,
+				      struct kobject *parent)
+{
+	struct hidma_chan_attr *chattr;
+	char *name_copy;
+
+	chattr = devm_kmalloc(&mdev->pdev->dev, sizeof(*chattr), GFP_KERNEL);
+	if (!chattr)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(&mdev->pdev->dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	chattr->mdev = mdev;
+	chattr->index = index;
+	chattr->attr.attr.name = name_copy;
+	chattr->attr.attr.mode = mode;
+	chattr->attr.show = show_values_channel;
+	chattr->attr.store = set_values_channel;
+	sysfs_attr_init(&chattr->attr.attr);
+
+	return sysfs_create_file(parent, &chattr->attr.attr);
+}
+
+int hidma_mgmt_init_sys(struct hidma_mgmt_dev *mdev)
+{
+	unsigned int i;
+	int rc;
+	int required;
+	struct kobject *chanops;
+
+	required = sizeof(*mdev->chroots) * mdev->dma_channels;
+	mdev->chroots = devm_kmalloc(&mdev->pdev->dev, required, GFP_KERNEL);
+	if (!mdev->chroots)
+		return -ENOMEM;
+
+	chanops = kobject_create_and_add("chanops", &mdev->pdev->dev.kobj);
+	if (!chanops)
+		return -ENOMEM;
+
+	/* create each channel directory here */
+	for (i = 0; i < mdev->dma_channels; i++) {
+		char name[20];
+
+		snprintf(name, sizeof(name), "chan%d", i);
+		mdev->chroots[i] = kobject_create_and_add(name, chanops);
+		if (!mdev->chroots[i])
+			return -ENOMEM;
+	}
+
+	/* populate common parameters */
+	for (i = 0; i < ARRAY_SIZE(hidma_mgmt_files); i++) {
+		rc = create_sysfs_entry(mdev, hidma_mgmt_files[i].name,
+					hidma_mgmt_files[i].mode);
+		if (rc)
+			return rc;
+	}
+
+	/* populate parameters that are per channel */
+	for (i = 0; i < mdev->dma_channels; i++) {
+		rc = create_sysfs_entry_channel(mdev, "priority",
+						(S_IRUGO | S_IWUGO), i,
+						mdev->chroots[i]);
+		if (rc)
+			return rc;
+
+		rc = create_sysfs_entry_channel(mdev, "weight",
+						(S_IRUGO | S_IWUGO), i,
+						mdev->chroots[i]);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hidma_mgmt_init_sys);
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 04/10] dma: add Qualcomm Technologies HIDMA channel driver
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: vikrams, arnd, eric.auger, linux-arm-msm, linux-kernel,
	Sinan Kaya, linux-arm-kernel, agross, shankerd

This patch adds support for the HIDMA engine. The driver consists of two
logical blocks: the DMA engine interface and the low-level interface.
The hardware only supports memcpy/memset, and this driver only supports
the memcpy interface. Neither the HW nor the driver supports the slave
interface.
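
For illustration, a kernel client could drive this memcpy-only interface
through the generic dmaengine API roughly as below. The function name,
the dev argument and the pre-mapped src/dst DMA addresses are
placeholders, and error handling is trimmed to the essentials.

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/errno.h>

/* Sketch: run one DMA_MEMCPY transfer on any memcpy-capable channel. */
static int hidma_memcpy_example(struct device *dev, dma_addr_t dst,
				dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	/* Grab any channel advertising DMA_MEMCPY (e.g. a HIDMA channel). */
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	/* Lands in hidma_prep_dma_memcpy() when a HIDMA channel is picked. */
	tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						   DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -ENOMEM;
	}

	cookie = dmaengine_submit(tx);	/* hidma_tx_submit() */
	dma_async_issue_pending(chan);	/* hidma_issue_pending() */

	/* Busy-wait helper; a real client would rely on the callback. */
	if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
		dev_err(dev, "memcpy transfer did not complete\n");

	dma_release_channel(chan);
	return 0;
}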

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
---
 drivers/dma/qcom/Kconfig |  10 +
 drivers/dma/qcom/hidma.c | 744 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/qcom/hidma.h | 160 ++++++++++
 3 files changed, 914 insertions(+)
 create mode 100644 drivers/dma/qcom/hidma.c
 create mode 100644 drivers/dma/qcom/hidma.h

diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index c975b11..a7761c4 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -17,3 +17,13 @@ config QCOM_HIDMA_MGMT
 	  start managing the channels. In a virtualized environment,
 	  the guest OS would run QCOM_HIDMA channel driver and the
 	  host would run the QCOM_HIDMA_MGMT management driver.
+
+config QCOM_HIDMA
+	tristate "Qualcomm Technologies HIDMA Channel support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA controller.
+	  The HIDMA controller supports optimized buffer copies
+	  (user to kernel, kernel to kernel, etc.).  It only supports
+	  the memcpy interface. The core is not intended for general
+	  purpose slave DMA.
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
new file mode 100644
index 0000000..f8960f1
--- /dev/null
+++ b/drivers/dma/qcom/hidma.c
@@ -0,0 +1,744 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Copyright (C) Freescale Semiconductor, Inc. 2007, 2008.
+ * Copyright (C) Semihalf 2009
+ * Copyright (C) Ilya Yanok, Emcraft Systems 2010
+ * Copyright (C) Alexander Popov, Promcontroller 2014
+ *
+ * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
+ * (defines, structures and comments) was taken from MPC5121 DMA driver
+ * written by Hongjun Chen <hong-jun.chen@freescale.com>.
+ *
+ * Approved as OSADL project by a majority of OSADL members and funded
+ * by OSADL membership fees in 2009;  for details see www.osadl.org.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/* Linux Foundation elects GPLv2 license only. */
+
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/of_dma.h>
+#include <linux/property.h>
+#include <linux/delay.h>
+#include <linux/acpi.h>
+#include <linux/irq.h>
+#include <linux/atomic.h>
+#include <linux/pm_runtime.h>
+
+#include "../dmaengine.h"
+#include "hidma.h"
+
+/*
+ * Default idle time is 2 seconds. This parameter can
+ * be overridden by changing the following
+ * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
+ * during kernel boot.
+ */
+#define HIDMA_AUTOSUSPEND_TIMEOUT		2000
+#define HIDMA_ERR_INFO_SW			0xFF
+#define HIDMA_ERR_CODE_UNEXPECTED_TERMINATE	0x0
+
+static inline struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
+{
+	return container_of(dmadev, struct hidma_dev, ddev);
+}
+
+static inline
+struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
+{
+	return container_of(_lldevp, struct hidma_dev, lldev);
+}
+
+static inline struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
+{
+	return container_of(dmach, struct hidma_chan, chan);
+}
+
+static inline
+struct hidma_desc *to_hidma_desc(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct hidma_desc, desc);
+}
+
+static void hidma_free(struct hidma_dev *dmadev)
+{
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+}
+
+static unsigned int nr_desc_prm;
+module_param(nr_desc_prm, uint, 0644);
+MODULE_PARM_DESC(nr_desc_prm, "number of descriptors (default: 0)");
+
+#define HIDMA_MAX_CHANNELS	64
+static int channel_idx[HIDMA_MAX_CHANNELS] = {
+	[0 ... (HIDMA_MAX_CHANNELS - 1)] = -1
+};
+
+/*
+ * Each DMA channel is associated with an event channel for interrupt
+ * delivery. The event channel index usually comes from the firmware through
+ * ACPI/DT. When a HIDMA channel is executed in the guest machine context (QEMU),
+ * the device tree gets auto-generated based on the memory and IRQ resources
+ * this driver uses on the host machine. Any device-specific parameter such as
+ * channel-index gets ignored by QEMU.
+ * We are using this command line parameter to pass the event channel index to
+ * the guest machine.
+ */
+static unsigned int num_channel_idx;
+module_param_array_named(channel_idx, channel_idx, int, &num_channel_idx,
+			 0644);
+MODULE_PARM_DESC(channel_idx, "channel index array for the notifications");
+static atomic_t channel_ref_count;
+
+/* process completed descriptors */
+static void hidma_process_completed(struct hidma_chan *mchan)
+{
+	struct dma_device *ddev = mchan->chan.device;
+	struct hidma_dev *mdma = to_hidma_dev(ddev);
+	struct dma_async_tx_descriptor *desc;
+	dma_cookie_t last_cookie;
+	struct hidma_desc *mdesc;
+	unsigned long irqflags;
+	struct list_head list;
+
+	INIT_LIST_HEAD(&list);
+
+	/* Get all completed descriptors */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&mchan->completed, &list);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	/* Execute callbacks and run dependencies */
+	list_for_each_entry(mdesc, &list, node) {
+		enum dma_status llstat;
+
+		desc = &mdesc->desc;
+
+		spin_lock_irqsave(&mchan->lock, irqflags);
+		dma_cookie_complete(desc);
+		spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+		if (desc->callback && (llstat == DMA_COMPLETE))
+			desc->callback(desc->callback_param);
+
+		last_cookie = desc->cookie;
+		dma_run_dependencies(desc);
+	}
+
+	/* Free descriptors */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&list, &mchan->free);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+}
+
+/*
+ * Called once for each submitted descriptor.
+ * PM is locked once for each descriptor that is currently
+ * in execution.
+ */
+static void hidma_callback(void *data)
+{
+	struct hidma_desc *mdesc = data;
+	struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
+	struct dma_device *ddev = mchan->chan.device;
+	struct hidma_dev *dmadev = to_hidma_dev(ddev);
+	unsigned long irqflags;
+	bool queued = false;
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (mdesc->node.next) {
+		/* Delete from the active list, add to completed list */
+		list_move_tail(&mdesc->node, &mchan->completed);
+		queued = true;
+
+		/* calculate the next running descriptor */
+		mchan->running = list_first_entry(&mchan->active,
+						  struct hidma_desc, node);
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	hidma_process_completed(mchan);
+
+	if (queued) {
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+}
+
+static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
+{
+	struct hidma_chan *mchan;
+	struct dma_device *ddev;
+
+	mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
+	if (!mchan)
+		return -ENOMEM;
+
+	ddev = &dmadev->ddev;
+	mchan->dma_sig = dma_sig;
+	mchan->dmadev = dmadev;
+	mchan->chan.device = ddev;
+	dma_cookie_init(&mchan->chan);
+
+	INIT_LIST_HEAD(&mchan->free);
+	INIT_LIST_HEAD(&mchan->prepared);
+	INIT_LIST_HEAD(&mchan->active);
+	INIT_LIST_HEAD(&mchan->completed);
+
+	spin_lock_init(&mchan->lock);
+	list_add_tail(&mchan->chan.device_node, &ddev->channels);
+	dmadev->ddev.chancnt++;
+	return 0;
+}
+
+static void hidma_issue_task(unsigned long arg)
+{
+	struct hidma_dev *dmadev = (struct hidma_dev *)arg;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	hidma_ll_start(dmadev->lldev);
+}
+
+static void hidma_issue_pending(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	unsigned long flags;
+	int status;
+
+	spin_lock_irqsave(&mchan->lock, flags);
+	if (!mchan->running) {
+		struct hidma_desc *desc = list_first_entry(&mchan->active,
+							   struct hidma_desc,
+							   node);
+		mchan->running = desc;
+	}
+	spin_unlock_irqrestore(&mchan->lock, flags);
+
+	/* PM will be released in hidma_callback function. */
+	status = pm_runtime_get(dmadev->ddev.dev);
+	if (status < 0)
+		tasklet_schedule(&dmadev->task);
+	else
+		hidma_ll_start(dmadev->lldev);
+}
+
+static enum dma_status hidma_tx_status(struct dma_chan *dmach,
+				       dma_cookie_t cookie,
+				       struct dma_tx_state *txstate)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	enum dma_status ret;
+
+	ret = dma_cookie_status(dmach, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	if (mchan->paused && (ret == DMA_IN_PROGRESS)) {
+		unsigned long flags;
+		dma_cookie_t runcookie;
+
+		spin_lock_irqsave(&mchan->lock, flags);
+		if (mchan->running)
+			runcookie = mchan->running->desc.cookie;
+		else
+			runcookie = -EINVAL;
+
+		if (runcookie == cookie)
+			ret = DMA_PAUSED;
+
+		spin_unlock_irqrestore(&mchan->lock, flags);
+	}
+
+	return ret;
+}
+
+/*
+ * Submit descriptor to hardware.
+ * Lock the PM for each descriptor we are sending.
+ */
+static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
+{
+	struct hidma_chan *mchan = to_hidma_chan(txd->chan);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	struct hidma_desc *mdesc;
+	unsigned long irqflags;
+	dma_cookie_t cookie;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	if (!hidma_ll_isenabled(dmadev->lldev)) {
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+		return -ENODEV;
+	}
+
+	mdesc = container_of(txd, struct hidma_desc, desc);
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	/* Move descriptor to active */
+	list_move_tail(&mdesc->node, &mchan->active);
+
+	/* Update cookie */
+	cookie = dma_cookie_assign(txd);
+
+	hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return cookie;
+}
+
+static int hidma_alloc_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+	unsigned int i;
+	int rc = 0;
+
+	if (mchan->allocated)
+		return 0;
+
+	/* Alloc descriptors for this channel */
+	for (i = 0; i < dmadev->nr_descriptors; i++) {
+		mdesc = kzalloc(sizeof(struct hidma_desc), GFP_NOWAIT);
+		if (!mdesc) {
+			rc = -ENOMEM;
+			break;
+		}
+		dma_async_tx_descriptor_init(&mdesc->desc, dmach);
+		mdesc->desc.tx_submit = hidma_tx_submit;
+
+		rc = hidma_ll_request(dmadev->lldev, mchan->dma_sig,
+				      "DMA engine", hidma_callback, mdesc,
+				      &mdesc->tre_ch);
+		if (rc) {
+			dev_err(dmach->device->dev,
+				"channel alloc failed at %u\n", i);
+			kfree(mdesc);
+			break;
+		}
+		list_add_tail(&mdesc->node, &descs);
+	}
+
+	if (rc) {
+		/* return the allocated descriptors */
+		list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+			hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
+			kfree(mdesc);
+		}
+		return rc;
+	}
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&descs, &mchan->free);
+	mchan->allocated = true;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+	return 1;
+}
+
+static struct dma_async_tx_descriptor *
+hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest, dma_addr_t src,
+		size_t len, unsigned long flags)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_desc *mdesc = NULL;
+	struct hidma_dev *mdma = mchan->dmadev;
+	unsigned long irqflags;
+
+	/* Get free descriptor */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (!list_empty(&mchan->free)) {
+		mdesc = list_first_entry(&mchan->free, struct hidma_desc, node);
+		list_del(&mdesc->node);
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	if (!mdesc)
+		return NULL;
+
+	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+				     src, dest, len, flags);
+
+	/* Place descriptor in prepared list */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_add_tail(&mdesc->node, &mchan->prepared);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return &mdesc->desc;
+}
+
+static int hidma_terminate_channel(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan = to_hidma_chan(chan);
+	struct hidma_dev *dmadev = to_hidma_dev(mchan->chan.device);
+	struct hidma_desc *tmp, *mdesc;
+	unsigned long irqflags;
+	LIST_HEAD(list);
+	int rc;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	/* give completed requests a chance to finish */
+	hidma_process_completed(mchan);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_init(&mchan->active, &list);
+	list_splice_init(&mchan->prepared, &list);
+	list_splice_init(&mchan->completed, &list);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	/* this suspends the existing transfer */
+	rc = hidma_ll_pause(dmadev->lldev);
+	if (rc) {
+		dev_err(dmadev->ddev.dev, "channel did not pause\n");
+		goto out;
+	}
+
+	/* return all user requests */
+	list_for_each_entry_safe(mdesc, tmp, &list, node) {
+		struct dma_async_tx_descriptor *txd = &mdesc->desc;
+		dma_async_tx_callback callback = mdesc->desc.callback;
+		void *param = mdesc->desc.callback_param;
+
+		dma_descriptor_unmap(txd);
+
+		if (callback)
+			callback(param);
+
+		dma_run_dependencies(txd);
+
+		/* move myself to free_list */
+		list_move(&mdesc->node, &mchan->free);
+	}
+
+	rc = hidma_ll_resume(dmadev->lldev);
+out:
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return rc;
+}
+
+static int hidma_terminate_all(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan = to_hidma_chan(chan);
+	struct hidma_dev *dmadev = to_hidma_dev(mchan->chan.device);
+	int rc;
+
+	rc = hidma_terminate_channel(chan);
+	if (rc)
+		return rc;
+
+	/* reinitialize the hardware */
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	rc = hidma_ll_setup(dmadev->lldev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return rc;
+}
+
+static void hidma_free_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *mdma = mchan->dmadev;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+
+	/* terminate running transactions and free descriptors */
+	hidma_terminate_channel(dmach);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	/* Move data */
+	list_splice_tail_init(&mchan->free, &descs);
+
+	/* Free descriptors */
+	list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+		hidma_ll_free(mdma->lldev, mdesc->tre_ch);
+		list_del(&mdesc->node);
+		kfree(mdesc);
+	}
+
+	mchan->allocated = 0;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+}
+
+static int hidma_pause(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	if (!mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		if (hidma_ll_pause(dmadev->lldev))
+			dev_warn(dmadev->ddev.dev, "channel did not stop\n");
+		mchan->paused = true;
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return 0;
+}
+
+static int hidma_resume(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+	int rc = 0;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	if (mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		rc = hidma_ll_resume(dmadev->lldev);
+		if (!rc)
+			mchan->paused = false;
+		else
+			dev_err(dmadev->ddev.dev,
+				"failed to resume the channel");
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return rc;
+}
+
+static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
+{
+	struct hidma_lldev *lldev = arg;
+
+	/*
+	 * All interrupts are request driven.
+	 * HW doesn't send an interrupt by itself.
+	 */
+	return hidma_ll_inthandler(chirq, lldev);
+}
+
+static int hidma_probe(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev;
+	struct resource *trca_resource;
+	struct resource *evca_resource;
+	int chirq;
+	int current_channel_index = atomic_read(&channel_ref_count);
+	void __iomem *evca;
+	void __iomem *trca;
+	int rc;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, HIDMA_AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
+	trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	trca = devm_ioremap_resource(&pdev->dev, trca_resource);
+	if (IS_ERR(trca)) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	evca = devm_ioremap_resource(&pdev->dev, evca_resource);
+	if (IS_ERR(evca)) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	/*
+	 * This driver only handles the channel IRQs.
+	 * Common IRQ is handled by the management driver.
+	 */
+	chirq = platform_get_irq(pdev, 0);
+	if (chirq < 0) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
+	if (!dmadev) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	spin_lock_init(&dmadev->lock);
+	dmadev->ddev.dev = &pdev->dev;
+	pm_runtime_get_sync(dmadev->ddev.dev);
+
+	dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
+	if (WARN_ON(!pdev->dev.dma_mask)) {
+		rc = -ENXIO;
+		goto dmafree;
+	}
+
+	dmadev->dev_evca = evca;
+	dmadev->evca_resource = evca_resource;
+	dmadev->dev_trca = trca;
+	dmadev->trca_resource = trca_resource;
+	dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
+	dmadev->ddev.device_alloc_chan_resources = hidma_alloc_chan_resources;
+	dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
+	dmadev->ddev.device_tx_status = hidma_tx_status;
+	dmadev->ddev.device_issue_pending = hidma_issue_pending;
+	dmadev->ddev.device_pause = hidma_pause;
+	dmadev->ddev.device_resume = hidma_resume;
+	dmadev->ddev.device_terminate_all = hidma_terminate_all;
+	dmadev->ddev.copy_align = 8;
+
+	device_property_read_u32(&pdev->dev, "desc-count",
+				 &dmadev->nr_descriptors);
+
+	if (!dmadev->nr_descriptors && nr_desc_prm)
+		dmadev->nr_descriptors = nr_desc_prm;
+
+	if (!dmadev->nr_descriptors) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	if (current_channel_index >= HIDMA_MAX_CHANNELS) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	dmadev->chidx = -1;
+	device_property_read_u32(&pdev->dev, "channel-index", &dmadev->chidx);
+
+	/* kernel command line override for the guest machine */
+	if (channel_idx[current_channel_index] != -1)
+		dmadev->chidx = channel_idx[current_channel_index];
+
+	if (dmadev->chidx == -1) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	/* Set DMA mask to 64 bits. */
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (rc) {
+		dev_warn(&pdev->dev, "unable to set coherent mask to 64 bits\n");
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (rc)
+			goto dmafree;
+	}
+
+	dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
+				      dmadev->nr_descriptors, dmadev->dev_trca,
+				      dmadev->dev_evca, dmadev->chidx);
+	if (!dmadev->lldev) {
+		rc = -EPROBE_DEFER;
+		goto dmafree;
+	}
+
+	rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
+			      "qcom-hidma", dmadev->lldev);
+	if (rc)
+		goto uninit;
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	rc = hidma_chan_init(dmadev, 0);
+	if (rc)
+		goto uninit;
+
+	rc = dma_async_device_register(&dmadev->ddev);
+	if (rc)
+		goto uninit;
+
+	dmadev->irq = chirq;
+	tasklet_init(&dmadev->task, hidma_issue_task, (unsigned long)dmadev);
+	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
+	platform_set_drvdata(pdev, dmadev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	atomic_inc(&channel_ref_count);
+	return 0;
+
+uninit:
+	hidma_ll_uninit(dmadev->lldev);
+dmafree:
+	if (dmadev)
+		hidma_free(dmadev);
+bailout:
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return rc;
+}
+
+static int hidma_remove(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev = platform_get_drvdata(pdev);
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	dma_async_device_unregister(&dmadev->ddev);
+	devm_free_irq(dmadev->ddev.dev, dmadev->irq, dmadev->lldev);
+	hidma_ll_uninit(dmadev->lldev);
+	hidma_free(dmadev);
+
+	dev_info(&pdev->dev, "HI-DMA engine removed\n");
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_acpi_ids[] = {
+	{"QCOM8061"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_match[] = {
+	{.compatible = "qcom,hidma-1.0",},
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, hidma_match);
+
+static struct platform_driver hidma_driver = {
+	.probe = hidma_probe,
+	.remove = hidma_remove,
+	.driver = {
+		   .name = "hidma",
+		   .of_match_table = hidma_match,
+		   .acpi_match_table = ACPI_PTR(hidma_acpi_ids),
+	},
+};
+
+module_platform_driver(hidma_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
new file mode 100644
index 0000000..231e306
--- /dev/null
+++ b/drivers/dma/qcom/hidma.h
@@ -0,0 +1,160 @@
+/*
+ * Qualcomm Technologies HIDMA data structures
+ *
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef QCOM_HIDMA_H
+#define QCOM_HIDMA_H
+
+#include <linux/kfifo.h>
+#include <linux/interrupt.h>
+#include <linux/dmaengine.h>
+
+#define TRE_SIZE			32 /* each TRE is 32 bytes  */
+#define TRE_CFG_IDX			0
+#define TRE_LEN_IDX			1
+#define TRE_SRC_LOW_IDX		2
+#define TRE_SRC_HI_IDX			3
+#define TRE_DEST_LOW_IDX		4
+#define TRE_DEST_HI_IDX		5
+
+struct hidma_tx_status {
+	u8 err_info;			/* error record in this transfer    */
+	u8 err_code;			/* completion code		    */
+};
+
+struct hidma_tre {
+	atomic_t allocated;		/* if this channel is allocated	    */
+	bool queued;			/* flag whether this is pending     */
+	u16 status;			/* status			    */
+	u32 chidx;			/* index of the tre		    */
+	u32 dma_sig;			/* signature of the tre		    */
+	const char *dev_name;		/* name of the device		    */
+	void (*callback)(void *data);	/* requester callback		    */
+	void *data;			/* Data associated with this channel*/
+	struct hidma_lldev *lldev;	/* lldma device pointer		    */
+	u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
+	u32 tre_index;			/* the offset where this was written*/
+	u32 int_flags;			/* interrupt flags		    */
+};
+
+struct hidma_lldev {
+	bool initialized;		/* initialized flag               */
+	u8 trch_state;			/* trch_state of the device	  */
+	u8 evch_state;			/* evch_state of the device	  */
+	u8 chidx;			/* channel index in the core	  */
+	u32 nr_tres;			/* max number of configs          */
+	spinlock_t lock;		/* reentrancy                     */
+	struct hidma_tre *trepool;	/* trepool of user configs */
+	struct device *dev;		/* device			  */
+	void __iomem *trca;		/* Transfer Channel address       */
+	void __iomem *evca;		/* Event Channel address          */
+	struct hidma_tre
+		**pending_tre_list;	/* Pointers to pending TREs	  */
+	struct hidma_tx_status
+		*tx_status_list;	/* Pointers to pending TREs status*/
+	s32 pending_tre_count;		/* Number of TREs pending	  */
+
+	void *tre_ring;			/* TRE ring			  */
+	dma_addr_t tre_ring_handle;	/* TRE ring to be shared with HW  */
+	u32 tre_ring_size;		/* Byte size of the ring	  */
+	u32 tre_processed_off;		/* last processed TRE		  */
+
+	void *evre_ring;		/* EVRE ring			   */
+	dma_addr_t evre_ring_handle;	/* EVRE ring to be shared with HW  */
+	u32 evre_ring_size;		/* Byte size of the ring	   */
+	u32 evre_processed_off;		/* last processed EVRE		   */
+
+	u32 tre_write_offset;           /* TRE write location              */
+	struct tasklet_struct task;	/* task delivering notifications   */
+	DECLARE_KFIFO_PTR(handoff_fifo,
+		struct hidma_tre *);    /* pending TREs FIFO               */
+};
+
+struct hidma_desc {
+	struct dma_async_tx_descriptor	desc;
+	/* link list node for this channel*/
+	struct list_head		node;
+	u32				tre_ch;
+};
+
+struct hidma_chan {
+	bool				paused;
+	bool				allocated;
+	char				dbg_name[16];
+	u32				dma_sig;
+
+	/*
+	 * active descriptor on this channel
+	 * It is used by the DMA complete notification to
+	 * locate the descriptor that initiated the transfer.
+	 */
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+	struct hidma_dev		*dmadev;
+	struct hidma_desc		*running;
+
+	struct dma_chan			chan;
+	struct list_head		free;
+	struct list_head		prepared;
+	struct list_head		active;
+	struct list_head		completed;
+
+	/* Lock for this structure */
+	spinlock_t			lock;
+};
+
+struct hidma_dev {
+	int				irq;
+	int				chidx;
+	u32				nr_descriptors;
+
+	struct hidma_lldev		*lldev;
+	void				__iomem *dev_trca;
+	struct resource			*trca_resource;
+	void				__iomem *dev_evca;
+	struct resource			*evca_resource;
+
+	/* used to protect the pending channel list*/
+	spinlock_t			lock;
+	struct dma_device		ddev;
+
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+
+	/* Task delivering issue_pending */
+	struct tasklet_struct		task;
+};
+
+int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
+			const char *dev_name,
+			void (*callback)(void *data), void *data, u32 *tre_ch);
+
+void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
+enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
+bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
+void hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
+void hidma_ll_start(struct hidma_lldev *llhndl);
+int hidma_ll_pause(struct hidma_lldev *llhndl);
+int hidma_ll_resume(struct hidma_lldev *llhndl);
+void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
+	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
+int hidma_ll_setup(struct hidma_lldev *lldev);
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
+			void __iomem *trca, void __iomem *evca,
+			u8 chidx);
+int hidma_ll_uninit(struct hidma_lldev *llhndl);
+irqreturn_t hidma_ll_inthandler(int irq, void *arg);
+void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
+				u8 err_code);
+#endif
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 04/10] dma: add Qualcomm Technologies HIDMA channel driver
@ 2016-01-29 22:35   ` Sinan Kaya
  0 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

This patch adds support for the HIDMA engine. The driver consists of two
logical blocks: the DMA engine interface and the low-level interface.
The hardware only supports memcpy/memset, and this driver only supports
the memcpy interface. Neither the HW nor the driver supports the slave
interface.

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
---
 drivers/dma/qcom/Kconfig |  10 +
 drivers/dma/qcom/hidma.c | 744 +++++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/qcom/hidma.h | 160 ++++++++++
 3 files changed, 914 insertions(+)
 create mode 100644 drivers/dma/qcom/hidma.c
 create mode 100644 drivers/dma/qcom/hidma.h

diff --git a/drivers/dma/qcom/Kconfig b/drivers/dma/qcom/Kconfig
index c975b11..a7761c4 100644
--- a/drivers/dma/qcom/Kconfig
+++ b/drivers/dma/qcom/Kconfig
@@ -17,3 +17,13 @@ config QCOM_HIDMA_MGMT
 	  start managing the channels. In a virtualized environment,
 	  the guest OS would run QCOM_HIDMA channel driver and the
 	  host would run the QCOM_HIDMA_MGMT management driver.
+
+config QCOM_HIDMA
+	tristate "Qualcomm Technologies HIDMA Channel support"
+	select DMA_ENGINE
+	help
+	  Enable support for the Qualcomm Technologies HIDMA controller.
+	  The HIDMA controller supports optimized buffer copies
+	  (user to kernel, kernel to kernel, etc.).  It only supports
+	  the memcpy interface. The core is not intended for general
+	  purpose slave DMA.
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
new file mode 100644
index 0000000..f8960f1
--- /dev/null
+++ b/drivers/dma/qcom/hidma.c
@@ -0,0 +1,744 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine interface
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * Copyright (C) Freescale Semiconductor, Inc. 2007, 2008.
+ * Copyright (C) Semihalf 2009
+ * Copyright (C) Ilya Yanok, Emcraft Systems 2010
+ * Copyright (C) Alexander Popov, Promcontroller 2014
+ *
+ * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
+ * (defines, structures and comments) was taken from MPC5121 DMA driver
+ * written by Hongjun Chen <hong-jun.chen@freescale.com>.
+ *
+ * Approved as OSADL project by a majority of OSADL members and funded
+ * by OSADL membership fees in 2009;  for details see www.osadl.org.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ */
+
+/* Linux Foundation elects GPLv2 license only. */
+
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/of_dma.h>
+#include <linux/property.h>
+#include <linux/delay.h>
+#include <linux/acpi.h>
+#include <linux/irq.h>
+#include <linux/atomic.h>
+#include <linux/pm_runtime.h>
+
+#include "../dmaengine.h"
+#include "hidma.h"
+
+/*
+ * Default idle time is 2 seconds. This parameter can
+ * be overridden by changing the following
+ * /sys/bus/platform/devices/QCOM8061:<xy>/power/autosuspend_delay_ms
+ * during kernel boot.
+ */
+#define HIDMA_AUTOSUSPEND_TIMEOUT		2000
+#define HIDMA_ERR_INFO_SW			0xFF
+#define HIDMA_ERR_CODE_UNEXPECTED_TERMINATE	0x0
+
+static inline struct hidma_dev *to_hidma_dev(struct dma_device *dmadev)
+{
+	return container_of(dmadev, struct hidma_dev, ddev);
+}
+
+static inline
+struct hidma_dev *to_hidma_dev_from_lldev(struct hidma_lldev **_lldevp)
+{
+	return container_of(_lldevp, struct hidma_dev, lldev);
+}
+
+static inline struct hidma_chan *to_hidma_chan(struct dma_chan *dmach)
+{
+	return container_of(dmach, struct hidma_chan, chan);
+}
+
+static inline
+struct hidma_desc *to_hidma_desc(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct hidma_desc, desc);
+}
+
+static void hidma_free(struct hidma_dev *dmadev)
+{
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+}
+
+static unsigned int nr_desc_prm;
+module_param(nr_desc_prm, uint, 0644);
+MODULE_PARM_DESC(nr_desc_prm, "number of descriptors (default: 0)");
+
+#define HIDMA_MAX_CHANNELS	64
+static int channel_idx[HIDMA_MAX_CHANNELS] = {
+	[0 ... (HIDMA_MAX_CHANNELS - 1)] = -1
+};
+
+/*
+ * Each DMA channel is associated with an event channel for interrupt
+ * delivery. The event channel index usually comes from the firmware through
+ * ACPI/DT. When a HIDMA channel is executed in the guest machine context (QEMU),
+ * the device tree gets auto-generated based on the memory and IRQ resources
+ * this driver uses on the host machine. Any device-specific parameter such as
+ * channel-index is ignored by QEMU.
+ * We are using this command line parameter to pass the event channel index to
+ * the guest machine.
+ */
+static unsigned int num_channel_idx;
+module_param_array_named(channel_idx, channel_idx, int, &num_channel_idx,
+			 0644);
+MODULE_PARM_DESC(channel_idx, "channel index array for the notifications");
+static atomic_t channel_ref_count;
+
+/* process completed descriptors */
+static void hidma_process_completed(struct hidma_chan *mchan)
+{
+	struct dma_device *ddev = mchan->chan.device;
+	struct hidma_dev *mdma = to_hidma_dev(ddev);
+	struct dma_async_tx_descriptor *desc;
+	dma_cookie_t last_cookie;
+	struct hidma_desc *mdesc;
+	unsigned long irqflags;
+	struct list_head list;
+
+	INIT_LIST_HEAD(&list);
+
+	/* Get all completed descriptors */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&mchan->completed, &list);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	/* Execute callbacks and run dependencies */
+	list_for_each_entry(mdesc, &list, node) {
+		enum dma_status llstat;
+
+		desc = &mdesc->desc;
+
+		spin_lock_irqsave(&mchan->lock, irqflags);
+		dma_cookie_complete(desc);
+		spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+		if (desc->callback && (llstat == DMA_COMPLETE))
+			desc->callback(desc->callback_param);
+
+		last_cookie = desc->cookie;
+		dma_run_dependencies(desc);
+	}
+
+	/* Free descriptors */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&list, &mchan->free);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+}
+
+/*
+ * Called once for each submitted descriptor.
+ * PM is locked once for each descriptor that is currently
+ * in execution.
+ */
+static void hidma_callback(void *data)
+{
+	struct hidma_desc *mdesc = data;
+	struct hidma_chan *mchan = to_hidma_chan(mdesc->desc.chan);
+	struct dma_device *ddev = mchan->chan.device;
+	struct hidma_dev *dmadev = to_hidma_dev(ddev);
+	unsigned long irqflags;
+	bool queued = false;
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (mdesc->node.next) {
+		/* Delete from the active list, add to completed list */
+		list_move_tail(&mdesc->node, &mchan->completed);
+		queued = true;
+
+		/* calculate the next running descriptor */
+		mchan->running = list_first_entry(&mchan->active,
+						  struct hidma_desc, node);
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	hidma_process_completed(mchan);
+
+	if (queued) {
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+}
+
+static int hidma_chan_init(struct hidma_dev *dmadev, u32 dma_sig)
+{
+	struct hidma_chan *mchan;
+	struct dma_device *ddev;
+
+	mchan = devm_kzalloc(dmadev->ddev.dev, sizeof(*mchan), GFP_KERNEL);
+	if (!mchan)
+		return -ENOMEM;
+
+	ddev = &dmadev->ddev;
+	mchan->dma_sig = dma_sig;
+	mchan->dmadev = dmadev;
+	mchan->chan.device = ddev;
+	dma_cookie_init(&mchan->chan);
+
+	INIT_LIST_HEAD(&mchan->free);
+	INIT_LIST_HEAD(&mchan->prepared);
+	INIT_LIST_HEAD(&mchan->active);
+	INIT_LIST_HEAD(&mchan->completed);
+
+	spin_lock_init(&mchan->lock);
+	list_add_tail(&mchan->chan.device_node, &ddev->channels);
+	dmadev->ddev.chancnt++;
+	return 0;
+}
+
+static void hidma_issue_task(unsigned long arg)
+{
+	struct hidma_dev *dmadev = (struct hidma_dev *)arg;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	hidma_ll_start(dmadev->lldev);
+}
+
+static void hidma_issue_pending(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	unsigned long flags;
+	int status;
+
+	spin_lock_irqsave(&mchan->lock, flags);
+	if (!mchan->running) {
+		struct hidma_desc *desc = list_first_entry(&mchan->active,
+							   struct hidma_desc,
+							   node);
+		mchan->running = desc;
+	}
+	spin_unlock_irqrestore(&mchan->lock, flags);
+
+	/* PM will be released in hidma_callback function. */
+	status = pm_runtime_get(dmadev->ddev.dev);
+	if (status < 0)
+		tasklet_schedule(&dmadev->task);
+	else
+		hidma_ll_start(dmadev->lldev);
+}
+
+static enum dma_status hidma_tx_status(struct dma_chan *dmach,
+				       dma_cookie_t cookie,
+				       struct dma_tx_state *txstate)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	enum dma_status ret;
+
+	ret = dma_cookie_status(dmach, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	if (mchan->paused && (ret == DMA_IN_PROGRESS)) {
+		unsigned long flags;
+		dma_cookie_t runcookie;
+
+		spin_lock_irqsave(&mchan->lock, flags);
+		if (mchan->running)
+			runcookie = mchan->running->desc.cookie;
+		else
+			runcookie = -EINVAL;
+
+		if (runcookie == cookie)
+			ret = DMA_PAUSED;
+
+		spin_unlock_irqrestore(&mchan->lock, flags);
+	}
+
+	return ret;
+}
+
+/*
+ * Submit descriptor to hardware.
+ * Lock the PM for each descriptor we are sending.
+ */
+static dma_cookie_t hidma_tx_submit(struct dma_async_tx_descriptor *txd)
+{
+	struct hidma_chan *mchan = to_hidma_chan(txd->chan);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	struct hidma_desc *mdesc;
+	unsigned long irqflags;
+	dma_cookie_t cookie;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	if (!hidma_ll_isenabled(dmadev->lldev)) {
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+		return -ENODEV;
+	}
+
+	mdesc = container_of(txd, struct hidma_desc, desc);
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	/* Move descriptor to active */
+	list_move_tail(&mdesc->node, &mchan->active);
+
+	/* Update cookie */
+	cookie = dma_cookie_assign(txd);
+
+	hidma_ll_queue_request(dmadev->lldev, mdesc->tre_ch);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return cookie;
+}
+
+static int hidma_alloc_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *dmadev = mchan->dmadev;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+	unsigned int i;
+	int rc = 0;
+
+	if (mchan->allocated)
+		return 0;
+
+	/* Alloc descriptors for this channel */
+	for (i = 0; i < dmadev->nr_descriptors; i++) {
+		mdesc = kzalloc(sizeof(struct hidma_desc), GFP_NOWAIT);
+		if (!mdesc) {
+			rc = -ENOMEM;
+			break;
+		}
+		dma_async_tx_descriptor_init(&mdesc->desc, dmach);
+		mdesc->desc.tx_submit = hidma_tx_submit;
+
+		rc = hidma_ll_request(dmadev->lldev, mchan->dma_sig,
+				      "DMA engine", hidma_callback, mdesc,
+				      &mdesc->tre_ch);
+		if (rc) {
+			dev_err(dmach->device->dev,
+				"channel alloc failed at %u\n", i);
+			kfree(mdesc);
+			break;
+		}
+		list_add_tail(&mdesc->node, &descs);
+	}
+
+	if (rc) {
+		/* return the allocated descriptors */
+		list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+			hidma_ll_free(dmadev->lldev, mdesc->tre_ch);
+			kfree(mdesc);
+		}
+		return rc;
+	}
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_tail_init(&descs, &mchan->free);
+	mchan->allocated = true;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+	return 1;
+}
+
+static struct dma_async_tx_descriptor *
+hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest, dma_addr_t src,
+		size_t len, unsigned long flags)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_desc *mdesc = NULL;
+	struct hidma_dev *mdma = mchan->dmadev;
+	unsigned long irqflags;
+
+	/* Get free descriptor */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	if (!list_empty(&mchan->free)) {
+		mdesc = list_first_entry(&mchan->free, struct hidma_desc, node);
+		list_del(&mdesc->node);
+	}
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	if (!mdesc)
+		return NULL;
+
+	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+				     src, dest, len, flags);
+
+	/* Place descriptor in prepared list */
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_add_tail(&mdesc->node, &mchan->prepared);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	return &mdesc->desc;
+}
+
+static int hidma_terminate_channel(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan = to_hidma_chan(chan);
+	struct hidma_dev *dmadev = to_hidma_dev(mchan->chan.device);
+	struct hidma_desc *tmp, *mdesc;
+	unsigned long irqflags;
+	LIST_HEAD(list);
+	int rc;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	/* give completed requests a chance to finish */
+	hidma_process_completed(mchan);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+	list_splice_init(&mchan->active, &list);
+	list_splice_init(&mchan->prepared, &list);
+	list_splice_init(&mchan->completed, &list);
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+	/* this suspends the existing transfer */
+	rc = hidma_ll_pause(dmadev->lldev);
+	if (rc) {
+		dev_err(dmadev->ddev.dev, "channel did not pause\n");
+		goto out;
+	}
+
+	/* return all user requests */
+	list_for_each_entry_safe(mdesc, tmp, &list, node) {
+		struct dma_async_tx_descriptor *txd = &mdesc->desc;
+		dma_async_tx_callback callback = mdesc->desc.callback;
+		void *param = mdesc->desc.callback_param;
+
+		dma_descriptor_unmap(txd);
+
+		if (callback)
+			callback(param);
+
+		dma_run_dependencies(txd);
+
+		/* move myself to free_list */
+		list_move(&mdesc->node, &mchan->free);
+	}
+
+	rc = hidma_ll_resume(dmadev->lldev);
+out:
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return rc;
+}
+
+static int hidma_terminate_all(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan = to_hidma_chan(chan);
+	struct hidma_dev *dmadev = to_hidma_dev(mchan->chan.device);
+	int rc;
+
+	rc = hidma_terminate_channel(chan);
+	if (rc)
+		return rc;
+
+	/* reinitialize the hardware */
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	rc = hidma_ll_setup(dmadev->lldev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return rc;
+}
+
+static void hidma_free_chan_resources(struct dma_chan *dmach)
+{
+	struct hidma_chan *mchan = to_hidma_chan(dmach);
+	struct hidma_dev *mdma = mchan->dmadev;
+	struct hidma_desc *mdesc, *tmp;
+	unsigned long irqflags;
+	LIST_HEAD(descs);
+
+	/* terminate running transactions and free descriptors */
+	hidma_terminate_channel(dmach);
+
+	spin_lock_irqsave(&mchan->lock, irqflags);
+
+	/* Move data */
+	list_splice_tail_init(&mchan->free, &descs);
+
+	/* Free descriptors */
+	list_for_each_entry_safe(mdesc, tmp, &descs, node) {
+		hidma_ll_free(mdma->lldev, mdesc->tre_ch);
+		list_del(&mdesc->node);
+		kfree(mdesc);
+	}
+
+	mchan->allocated = 0;
+	spin_unlock_irqrestore(&mchan->lock, irqflags);
+}
+
+static int hidma_pause(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	if (!mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		if (hidma_ll_pause(dmadev->lldev))
+			dev_warn(dmadev->ddev.dev, "channel did not stop\n");
+		mchan->paused = true;
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return 0;
+}
+
+static int hidma_resume(struct dma_chan *chan)
+{
+	struct hidma_chan *mchan;
+	struct hidma_dev *dmadev;
+	int rc = 0;
+
+	mchan = to_hidma_chan(chan);
+	dmadev = to_hidma_dev(mchan->chan.device);
+	if (mchan->paused) {
+		pm_runtime_get_sync(dmadev->ddev.dev);
+		rc = hidma_ll_resume(dmadev->lldev);
+		if (!rc)
+			mchan->paused = false;
+		else
+			dev_err(dmadev->ddev.dev,
+				"failed to resume the channel\n");
+		pm_runtime_mark_last_busy(dmadev->ddev.dev);
+		pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	}
+	return rc;
+}
+
+static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
+{
+	struct hidma_lldev *lldev = arg;
+
+	/*
+	 * All interrupts are request driven.
+	 * HW doesn't send an interrupt by itself.
+	 */
+	return hidma_ll_inthandler(chirq, lldev);
+}
+
+static int hidma_probe(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev;
+	struct resource *trca_resource;
+	struct resource *evca_resource;
+	int chirq;
+	int current_channel_index = atomic_read(&channel_ref_count);
+	void __iomem *evca;
+	void __iomem *trca;
+	int rc;
+
+	pm_runtime_set_autosuspend_delay(&pdev->dev, HIDMA_AUTOSUSPEND_TIMEOUT);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
+	trca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	trca = devm_ioremap_resource(&pdev->dev, trca_resource);
+	if (IS_ERR(trca)) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	evca_resource = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+	evca = devm_ioremap_resource(&pdev->dev, evca_resource);
+	if (IS_ERR(evca)) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	/*
+	 * This driver only handles the channel IRQs.
+	 * Common IRQ is handled by the management driver.
+	 */
+	chirq = platform_get_irq(pdev, 0);
+	if (chirq < 0) {
+		rc = -ENODEV;
+		goto bailout;
+	}
+
+	dmadev = devm_kzalloc(&pdev->dev, sizeof(*dmadev), GFP_KERNEL);
+	if (!dmadev) {
+		rc = -ENOMEM;
+		goto bailout;
+	}
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	spin_lock_init(&dmadev->lock);
+	dmadev->ddev.dev = &pdev->dev;
+	pm_runtime_get_sync(dmadev->ddev.dev);
+
+	dma_cap_set(DMA_MEMCPY, dmadev->ddev.cap_mask);
+	if (WARN_ON(!pdev->dev.dma_mask)) {
+		rc = -ENXIO;
+		goto dmafree;
+	}
+
+	dmadev->dev_evca = evca;
+	dmadev->evca_resource = evca_resource;
+	dmadev->dev_trca = trca;
+	dmadev->trca_resource = trca_resource;
+	dmadev->ddev.device_prep_dma_memcpy = hidma_prep_dma_memcpy;
+	dmadev->ddev.device_alloc_chan_resources = hidma_alloc_chan_resources;
+	dmadev->ddev.device_free_chan_resources = hidma_free_chan_resources;
+	dmadev->ddev.device_tx_status = hidma_tx_status;
+	dmadev->ddev.device_issue_pending = hidma_issue_pending;
+	dmadev->ddev.device_pause = hidma_pause;
+	dmadev->ddev.device_resume = hidma_resume;
+	dmadev->ddev.device_terminate_all = hidma_terminate_all;
+	dmadev->ddev.copy_align = 8;
+
+	device_property_read_u32(&pdev->dev, "desc-count",
+				 &dmadev->nr_descriptors);
+
+	if (!dmadev->nr_descriptors && nr_desc_prm)
+		dmadev->nr_descriptors = nr_desc_prm;
+
+	if (!dmadev->nr_descriptors) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	if (current_channel_index >= HIDMA_MAX_CHANNELS) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	dmadev->chidx = -1;
+	device_property_read_u32(&pdev->dev, "channel-index", &dmadev->chidx);
+
+	/* kernel command line override for the guest machine */
+	if (channel_idx[current_channel_index] != -1)
+		dmadev->chidx = channel_idx[current_channel_index];
+
+	if (dmadev->chidx == -1) {
+		rc = -EINVAL;
+		goto dmafree;
+	}
+
+	/* Set DMA mask to 64 bits. */
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (rc) {
+		dev_warn(&pdev->dev, "unable to set coherent mask to 64\n");
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (rc)
+			goto dmafree;
+	}
+
+	dmadev->lldev = hidma_ll_init(dmadev->ddev.dev,
+				      dmadev->nr_descriptors, dmadev->dev_trca,
+				      dmadev->dev_evca, dmadev->chidx);
+	if (!dmadev->lldev) {
+		rc = -EPROBE_DEFER;
+		goto dmafree;
+	}
+
+	rc = devm_request_irq(&pdev->dev, chirq, hidma_chirq_handler, 0,
+			      "qcom-hidma", dmadev->lldev);
+	if (rc)
+		goto uninit;
+
+	INIT_LIST_HEAD(&dmadev->ddev.channels);
+	rc = hidma_chan_init(dmadev, 0);
+	if (rc)
+		goto uninit;
+
+	rc = dma_async_device_register(&dmadev->ddev);
+	if (rc)
+		goto uninit;
+
+	dmadev->irq = chirq;
+	tasklet_init(&dmadev->task, hidma_issue_task, (unsigned long)dmadev);
+	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
+	platform_set_drvdata(pdev, dmadev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	atomic_inc(&channel_ref_count);
+	return 0;
+
+uninit:
+	hidma_ll_uninit(dmadev->lldev);
+dmafree:
+	if (dmadev)
+		hidma_free(dmadev);
+bailout:
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return rc;
+}
+
+static int hidma_remove(struct platform_device *pdev)
+{
+	struct hidma_dev *dmadev = platform_get_drvdata(pdev);
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	dma_async_device_unregister(&dmadev->ddev);
+	devm_free_irq(dmadev->ddev.dev, dmadev->irq, dmadev->lldev);
+	hidma_ll_uninit(dmadev->lldev);
+	hidma_free(dmadev);
+
+	dev_info(&pdev->dev, "HI-DMA engine removed\n");
+	pm_runtime_put_sync_suspend(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+#if IS_ENABLED(CONFIG_ACPI)
+static const struct acpi_device_id hidma_acpi_ids[] = {
+	{"QCOM8061"},
+	{},
+};
+#endif
+
+static const struct of_device_id hidma_match[] = {
+	{.compatible = "qcom,hidma-1.0",},
+	{},
+};
+
+MODULE_DEVICE_TABLE(of, hidma_match);
+
+static struct platform_driver hidma_driver = {
+	.probe = hidma_probe,
+	.remove = hidma_remove,
+	.driver = {
+		   .name = "hidma",
+		   .of_match_table = hidma_match,
+		   .acpi_match_table = ACPI_PTR(hidma_acpi_ids),
+	},
+};
+
+module_platform_driver(hidma_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
new file mode 100644
index 0000000..231e306
--- /dev/null
+++ b/drivers/dma/qcom/hidma.h
@@ -0,0 +1,160 @@
+/*
+ * Qualcomm Technologies HIDMA data structures
+ *
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef QCOM_HIDMA_H
+#define QCOM_HIDMA_H
+
+#include <linux/kfifo.h>
+#include <linux/interrupt.h>
+#include <linux/dmaengine.h>
+
+#define TRE_SIZE			32 /* each TRE is 32 bytes  */
+#define TRE_CFG_IDX			0
+#define TRE_LEN_IDX			1
+#define TRE_SRC_LOW_IDX		2
+#define TRE_SRC_HI_IDX			3
+#define TRE_DEST_LOW_IDX		4
+#define TRE_DEST_HI_IDX		5
+
+struct hidma_tx_status {
+	u8 err_info;			/* error record in this transfer    */
+	u8 err_code;			/* completion code		    */
+};
+
+struct hidma_tre {
+	atomic_t allocated;		/* if this channel is allocated	    */
+	bool queued;			/* flag whether this is pending     */
+	u16 status;			/* status			    */
+	u32 chidx;			/* index of the tre		    */
+	u32 dma_sig;			/* signature of the tre		    */
+	const char *dev_name;		/* name of the device		    */
+	void (*callback)(void *data);	/* requester callback		    */
+	void *data;			/* Data associated with this channel*/
+	struct hidma_lldev *lldev;	/* lldma device pointer		    */
+	u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy        */
+	u32 tre_index;			/* the offset where this was written*/
+	u32 int_flags;			/* interrupt flags		    */
+};
+
+struct hidma_lldev {
+	bool initialized;		/* initialized flag               */
+	u8 trch_state;			/* trch_state of the device	  */
+	u8 evch_state;			/* evch_state of the device	  */
+	u8 chidx;			/* channel index in the core	  */
+	u32 nr_tres;			/* max number of configs          */
+	spinlock_t lock;		/* reentrancy                     */
+	struct hidma_tre *trepool;	/* trepool of user configs */
+	struct device *dev;		/* device			  */
+	void __iomem *trca;		/* Transfer Channel address       */
+	void __iomem *evca;		/* Event Channel address          */
+	struct hidma_tre
+		**pending_tre_list;	/* Pointers to pending TREs	  */
+	struct hidma_tx_status
+		*tx_status_list;	/* Pointers to pending TREs status*/
+	s32 pending_tre_count;		/* Number of TREs pending	  */
+
+	void *tre_ring;			/* TRE ring			  */
+	dma_addr_t tre_ring_handle;	/* TRE ring to be shared with HW  */
+	u32 tre_ring_size;		/* Byte size of the ring	  */
+	u32 tre_processed_off;		/* last processed TRE		  */
+
+	void *evre_ring;		/* EVRE ring			   */
+	dma_addr_t evre_ring_handle;	/* EVRE ring to be shared with HW  */
+	u32 evre_ring_size;		/* Byte size of the ring	   */
+	u32 evre_processed_off;		/* last processed EVRE		   */
+
+	u32 tre_write_offset;           /* TRE write location              */
+	struct tasklet_struct task;	/* task delivering notifications   */
+	DECLARE_KFIFO_PTR(handoff_fifo,
+		struct hidma_tre *);    /* pending TREs FIFO               */
+};
+
+struct hidma_desc {
+	struct dma_async_tx_descriptor	desc;
+	/* linked list node for this channel */
+	struct list_head		node;
+	u32				tre_ch;
+};
+
+struct hidma_chan {
+	bool				paused;
+	bool				allocated;
+	char				dbg_name[16];
+	u32				dma_sig;
+
+	/*
+	 * active descriptor on this channel
+	 * It is used by the DMA complete notification to
+	 * locate the descriptor that initiated the transfer.
+	 */
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+	struct hidma_dev		*dmadev;
+	struct hidma_desc		*running;
+
+	struct dma_chan			chan;
+	struct list_head		free;
+	struct list_head		prepared;
+	struct list_head		active;
+	struct list_head		completed;
+
+	/* Lock for this structure */
+	spinlock_t			lock;
+};
+
+struct hidma_dev {
+	int				irq;
+	int				chidx;
+	u32				nr_descriptors;
+
+	struct hidma_lldev		*lldev;
+	void				__iomem *dev_trca;
+	struct resource			*trca_resource;
+	void				__iomem *dev_evca;
+	struct resource			*evca_resource;
+
+	/* used to protect the pending channel list*/
+	spinlock_t			lock;
+	struct dma_device		ddev;
+
+	struct dentry			*debugfs;
+	struct dentry			*stats;
+
+	/* Task delivering issue_pending */
+	struct tasklet_struct		task;
+};
+
+int hidma_ll_request(struct hidma_lldev *llhndl, u32 dev_id,
+			const char *dev_name,
+			void (*callback)(void *data), void *data, u32 *tre_ch);
+
+void hidma_ll_free(struct hidma_lldev *llhndl, u32 tre_ch);
+enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
+bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
+void hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
+void hidma_ll_start(struct hidma_lldev *llhndl);
+int hidma_ll_pause(struct hidma_lldev *llhndl);
+int hidma_ll_resume(struct hidma_lldev *llhndl);
+void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
+	dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
+int hidma_ll_setup(struct hidma_lldev *lldev);
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 max_channels,
+			void __iomem *trca, void __iomem *evca,
+			u8 chidx);
+int hidma_ll_uninit(struct hidma_lldev *llhndl);
+irqreturn_t hidma_ll_inthandler(int irq, void *arg);
+void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
+				u8 err_code);
+#endif
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 05/10] dma: qcom_hidma: implement lower level hardware interface
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

This patch implements the hardware hooks for the HIDMA channel driver.

The main functions of interest are:
- hidma_ll_init
- hidma_ll_request
- hidma_ll_queue_request
- hidma_ll_hw_start

The OS layer calls the hidma_ll_init function during probe to set up the
hardware. At this point, the number of supported descriptors is also
given. On each request, a descriptor is allocated from the free pool and
filled in with the transfer parameters. Multiple requests can be queued
into the hardware via the OS interface. When the client is ready for the
requests to be executed, the start method is called.

Completions are delivered through callbacks from a tasklet.
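
To make that flow concrete, here is a hypothetical caller-side sketch (not
part of the patch) built only from the prototypes declared in hidma.h,
using hidma_ll_start as the start method; the demo_* names, the descriptor
count of 128 and channel index 0 are assumptions, and error/teardown
handling is omitted:

    static void demo_done(void *data)
    {
        /* completion callback, delivered from the tasklet */
    }

    static int demo_xfer(struct device *dev, void __iomem *trca,
                         void __iomem *evca, dma_addr_t src,
                         dma_addr_t dst, u32 len)
    {
        struct hidma_lldev *lldev;
        u32 tre_ch;
        int rc;

        /* probe time: descriptor count and channel index come from FW in practice */
        lldev = hidma_ll_init(dev, 128, trca, evca, 0);
        if (!lldev)
            return -ENODEV;

        /* per request: take a descriptor from the pool and fill in the parameters */
        rc = hidma_ll_request(lldev, 0, "demo", demo_done, NULL, &tre_ch);
        if (rc)
            return rc;
        hidma_ll_set_transfer_params(lldev, tre_ch, src, dst, len, 0);

        /* several requests may be queued before the hardware is kicked */
        hidma_ll_queue_request(lldev, tre_ch);
        hidma_ll_start(lldev);
        return 0;
    }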

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 drivers/dma/qcom/Makefile   |   2 +
 drivers/dma/qcom/hidma.h    |   2 +-
 drivers/dma/qcom/hidma_ll.c | 927 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 930 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma/qcom/hidma_ll.c

diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index bfea699..6bf9267 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1,3 +1,5 @@
 obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
 obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
 hdma_mgmt-objs	 := hidma_mgmt.o hidma_mgmt_sys.o
+obj-$(CONFIG_QCOM_HIDMA) +=  hdma.o
+hdma-objs        := hidma_ll.o hidma.o
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
index 231e306..1e09d7c 100644
--- a/drivers/dma/qcom/hidma.h
+++ b/drivers/dma/qcom/hidma.h
@@ -37,7 +37,7 @@ struct hidma_tre {
 	atomic_t allocated;		/* if this channel is allocated	    */
 	bool queued;			/* flag whether this is pending     */
 	u16 status;			/* status			    */
-	u32 chidx;			/* index of the tre		    */
+	u32 idx;			/* index of the tre		    */
 	u32 dma_sig;			/* signature of the tre		    */
 	const char *dev_name;		/* name of the device		    */
 	void (*callback)(void *data);	/* requester callback		    */
diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
new file mode 100644
index 0000000..c343bf7
--- /dev/null
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -0,0 +1,927 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine low level code
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/atomic.h>
+#include <linux/iopoll.h>
+#include <linux/kfifo.h>
+#include <linux/bitops.h>
+
+#include "hidma.h"
+
+#define EVRE_SIZE			16	/* each EVRE is 16 bytes */
+
+#define TRCA_CTRLSTS_OFFSET		0x000
+#define TRCA_RING_LOW_OFFSET		0x008
+#define TRCA_RING_HIGH_OFFSET		0x00C
+#define TRCA_RING_LEN_OFFSET		0x010
+#define TRCA_READ_PTR_OFFSET		0x018
+#define TRCA_WRITE_PTR_OFFSET		0x020
+#define TRCA_DOORBELL_OFFSET		0x400
+
+#define EVCA_CTRLSTS_OFFSET		0x000
+#define EVCA_INTCTRL_OFFSET		0x004
+#define EVCA_RING_LOW_OFFSET		0x008
+#define EVCA_RING_HIGH_OFFSET		0x00C
+#define EVCA_RING_LEN_OFFSET		0x010
+#define EVCA_READ_PTR_OFFSET		0x018
+#define EVCA_WRITE_PTR_OFFSET		0x020
+#define EVCA_DOORBELL_OFFSET		0x400
+
+#define EVCA_IRQ_STAT_OFFSET		0x100
+#define EVCA_IRQ_CLR_OFFSET		0x108
+#define EVCA_IRQ_EN_OFFSET		0x110
+
+#define EVRE_CFG_IDX			0
+#define EVRE_LEN_IDX			1
+#define EVRE_DEST_LOW_IDX		2
+#define EVRE_DEST_HI_IDX		3
+
+#define EVRE_ERRINFO_BIT_POS		24
+#define EVRE_CODE_BIT_POS		28
+
+#define EVRE_ERRINFO_MASK		GENMASK(3, 0)
+#define EVRE_CODE_MASK			GENMASK(3, 0)
+
+#define CH_CONTROL_MASK		GENMASK(7, 0)
+#define CH_STATE_MASK			GENMASK(7, 0)
+#define CH_STATE_BIT_POS		0x8
+
+#define IRQ_EV_CH_EOB_IRQ_BIT_POS	0
+#define IRQ_EV_CH_WR_RESP_BIT_POS	1
+#define IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
+#define IRQ_TR_CH_DATA_RD_ER_BIT_POS	10
+#define IRQ_TR_CH_DATA_WR_ER_BIT_POS	11
+#define IRQ_TR_CH_INVALID_TRE_BIT_POS	14
+
+#define	ENABLE_IRQS (BIT(IRQ_EV_CH_EOB_IRQ_BIT_POS)	| \
+		BIT(IRQ_EV_CH_WR_RESP_BIT_POS)		| \
+		BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)	| \
+		BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)	| \
+		BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS)	| \
+		BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))
+
+#define HIDMA_INCREMENT_ITERATOR(iter, size, ring_size)	\
+do {								\
+	iter += size;						\
+	if (iter >= ring_size)					\
+		iter -= ring_size;				\
+} while (0)
+
+#define HIDMA_CH_STATE(val)	\
+	((val >> CH_STATE_BIT_POS) & CH_STATE_MASK)
+
+enum ch_command {
+	CH_DISABLE = 0,
+	CH_ENABLE = 1,
+	CH_SUSPEND = 2,
+	CH_RESET = 9,
+};
+
+enum ch_state {
+	CH_DISABLED = 0,
+	CH_ENABLED = 1,
+	CH_RUNNING = 2,
+	CH_SUSPENDED = 3,
+	CH_STOPPED = 4,
+	CH_ERROR = 5,
+	CH_IN_RESET = 9,
+};
+
+enum tre_type {
+	TRE_MEMCPY = 3,
+	TRE_MEMSET = 4,
+};
+
+enum evre_type {
+	EVRE_DMA_COMPLETE = 0x23,
+	EVRE_IMM_DATA = 0x24,
+};
+
+enum err_code {
+	EVRE_STATUS_COMPLETE = 1,
+	EVRE_STATUS_ERROR = 4,
+};
+
+void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	struct hidma_tre *tre;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
+		return;
+	}
+
+	tre = &lldev->trepool[tre_ch];
+	if (atomic_read(&tre->allocated) != true) {
+		dev_err(lldev->dev, "trying to free an unused TRE:%d", tre_ch);
+		return;
+	}
+
+	atomic_set(&tre->allocated, 0);
+}
+
+int hidma_ll_request(struct hidma_lldev *lldev, u32 dma_sig,
+		     const char *dev_name,
+		     void (*callback)(void *data), void *data, u32 *tre_ch)
+{
+	unsigned int i;
+	struct hidma_tre *tre;
+	u32 *tre_local;
+
+	if (!tre_ch || !lldev)
+		return -EINVAL;
+
+	/* need to have at least one empty spot in the queue */
+	for (i = 0; i < lldev->nr_tres - 1; i++) {
+		if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
+			break;
+	}
+
+	if (i == (lldev->nr_tres - 1))
+		return -ENOMEM;
+
+	tre = &lldev->trepool[i];
+	tre->dma_sig = dma_sig;
+	tre->dev_name = dev_name;
+	tre->callback = callback;
+	tre->data = data;
+	tre->idx = i;
+	tre->status = 0;
+	tre->queued = 0;
+	lldev->tx_status_list[i].err_code = 0;
+	tre->lldev = lldev;
+	tre_local = &tre->tre_local[0];
+	tre_local[TRE_CFG_IDX] = TRE_MEMCPY;
+	tre_local[TRE_CFG_IDX] |= (lldev->chidx & 0xFF) << 8;
+	tre_local[TRE_CFG_IDX] |= BIT(16);	/* set IEOB */
+	*tre_ch = i;
+	if (callback)
+		callback(data);
+	return 0;
+}
+
+/*
+ * Multiple TREs may be queued and waiting in the
+ * pending queue.
+ */
+static void hidma_ll_tre_complete(unsigned long arg)
+{
+	struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
+	struct hidma_tre *tre;
+
+	while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
+		/* call the user if it has been read by the hardware */
+		if (tre->callback)
+			tre->callback(tre->data);
+	}
+}
+
+/*
+ * Called to handle the interrupt for the channel.
+ * Returns the number of TRE/EVRE pairs consumed on this run; a positive
+ * value also indicates that more TREs or EVREs may still be pending.
+ * Returns 0 if there was nothing to consume.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
+{
+	struct hidma_tre *tre;
+	u32 evre_write_off;
+	u32 evre_ring_size = lldev->evre_ring_size;
+	u32 tre_ring_size = lldev->tre_ring_size;
+	u32 num_completed = 0, tre_iterator, evre_iterator;
+	unsigned long flags;
+
+	evre_write_off = readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+	tre_iterator = lldev->tre_processed_off;
+	evre_iterator = lldev->evre_processed_off;
+
+	if ((evre_write_off > evre_ring_size) ||
+	    ((evre_write_off % EVRE_SIZE) != 0)) {
+		dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+		return 0;
+	}
+
+	/*
+	 * By the time control reaches here the number of EVREs and TREs
+	 * may not match. Only consume the ones that hardware told us.
+	 */
+	while ((evre_iterator != evre_write_off)) {
+		u32 *current_evre = lldev->evre_ring + evre_iterator;
+		u32 cfg;
+		u8 err_info;
+
+		spin_lock_irqsave(&lldev->lock, flags);
+		tre = lldev->pending_tre_list[tre_iterator / TRE_SIZE];
+		if (!tre) {
+			spin_unlock_irqrestore(&lldev->lock, flags);
+			dev_warn(lldev->dev,
+				 "tre_index [%d] and tre out of sync\n",
+				 tre_iterator / TRE_SIZE);
+			HIDMA_INCREMENT_ITERATOR(tre_iterator, TRE_SIZE,
+						 tre_ring_size);
+			HIDMA_INCREMENT_ITERATOR(evre_iterator, EVRE_SIZE,
+						 evre_ring_size);
+			continue;
+		}
+		lldev->pending_tre_list[tre->tre_index] = NULL;
+
+		/*
+		 * Keep track of pending TREs that SW is expecting to receive
+		 * from HW. We got one now. Decrement our counter.
+		 */
+		lldev->pending_tre_count--;
+		if (lldev->pending_tre_count < 0) {
+			dev_warn(lldev->dev,
+				 "tre count mismatch on completion");
+			lldev->pending_tre_count = 0;
+		}
+
+		spin_unlock_irqrestore(&lldev->lock, flags);
+
+		cfg = current_evre[EVRE_CFG_IDX];
+		err_info = cfg >> EVRE_ERRINFO_BIT_POS;
+		err_info &= EVRE_ERRINFO_MASK;
+		lldev->tx_status_list[tre->idx].err_info = err_info;
+		lldev->tx_status_list[tre->idx].err_code =
+		    (cfg >> EVRE_CODE_BIT_POS) & EVRE_CODE_MASK;
+		tre->queued = 0;
+
+		kfifo_put(&lldev->handoff_fifo, tre);
+		tasklet_schedule(&lldev->task);
+
+		HIDMA_INCREMENT_ITERATOR(tre_iterator, TRE_SIZE,
+					 tre_ring_size);
+		HIDMA_INCREMENT_ITERATOR(evre_iterator, EVRE_SIZE,
+					 evre_ring_size);
+
+		/*
+		 * Read the new event descriptor written by the HW.
+		 * As we are processing the delivered events, other events
+		 * get queued to the SW for processing.
+		 */
+		evre_write_off =
+		    readl_relaxed(lldev->evca + EVCA_WRITE_PTR_OFFSET);
+		num_completed++;
+	}
+
+	if (num_completed) {
+		u32 evre_read_off = (lldev->evre_processed_off +
+				     EVRE_SIZE * num_completed);
+		u32 tre_read_off = (lldev->tre_processed_off +
+				    TRE_SIZE * num_completed);
+
+		evre_read_off = evre_read_off % evre_ring_size;
+		tre_read_off = tre_read_off % tre_ring_size;
+
+		writel(evre_read_off, lldev->evca + EVCA_DOORBELL_OFFSET);
+
+		/* record the last processed tre offset */
+		lldev->tre_processed_off = tre_read_off;
+		lldev->evre_processed_off = evre_read_off;
+	}
+
+	return num_completed;
+}
+
+void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
+			       u8 err_code)
+{
+	u32 tre_iterator;
+	struct hidma_tre *tre;
+	u32 tre_ring_size = lldev->tre_ring_size;
+	int num_completed = 0;
+	u32 tre_read_off;
+	unsigned long flags;
+
+	tre_iterator = lldev->tre_processed_off;
+	while (lldev->pending_tre_count) {
+		int tre_index = tre_iterator / TRE_SIZE;
+
+		spin_lock_irqsave(&lldev->lock, flags);
+		tre = lldev->pending_tre_list[tre_index];
+		if (!tre) {
+			spin_unlock_irqrestore(&lldev->lock, flags);
+			HIDMA_INCREMENT_ITERATOR(tre_iterator, TRE_SIZE,
+						 tre_ring_size);
+			continue;
+		}
+		lldev->pending_tre_list[tre_index] = NULL;
+		lldev->pending_tre_count--;
+		if (lldev->pending_tre_count < 0) {
+			dev_warn(lldev->dev,
+				 "tre count mismatch on completion");
+			lldev->pending_tre_count = 0;
+		}
+		spin_unlock_irqrestore(&lldev->lock, flags);
+
+		lldev->tx_status_list[tre->idx].err_info = err_info;
+		lldev->tx_status_list[tre->idx].err_code = err_code;
+		tre->queued = 0;
+
+		kfifo_put(&lldev->handoff_fifo, tre);
+		tasklet_schedule(&lldev->task);
+
+		HIDMA_INCREMENT_ITERATOR(tre_iterator, TRE_SIZE,
+					 tre_ring_size);
+		num_completed++;
+	}
+	tre_read_off = (lldev->tre_processed_off + TRE_SIZE * num_completed);
+
+	tre_read_off = tre_read_off % tre_ring_size;
+
+	/* record the last processed tre offset */
+	lldev->tre_processed_off = tre_read_off;
+}
+
+static int hidma_ll_reset(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_RESET << 16;
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Allow the DMA logic to quiesce after reset:
+	 * poll the channel state every 1ms, for up to 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
+				 10000);
+	if (ret) {
+		dev_err(lldev->dev, "transfer channel did not reset\n");
+		return ret;
+	}
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_RESET << 16;
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Allow the DMA logic to quiesce after reset:
+	 * poll the channel state every 1ms, for up to 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
+				 10000);
+	if (ret)
+		return ret;
+
+	lldev->trch_state = CH_DISABLED;
+	lldev->evch_state = CH_DISABLED;
+	return 0;
+}
+
+static void hidma_ll_enable_irq(struct hidma_lldev *lldev, u32 irq_bits)
+{
+	writel(irq_bits, lldev->evca + EVCA_IRQ_EN_OFFSET);
+}
+
+/*
+ * The interrupt handler for HIDMA will try to consume as many pending
+ * EVRE from the event queue as possible. Each EVRE has an associated
+ * TRE that holds the user interface parameters. EVRE reports the
+ * result of the transaction. Hardware guarantees ordering between EVREs
+ * and TREs. We use last processed offset to figure out which TRE is
+ * associated with which EVRE. If two TREs are consumed by HW, the EVREs
+ * are in order in the event ring.
+ *
+ * This handler will do a one pass for consuming EVREs. Other EVREs may
+ * be delivered while we are working. It will try to consume incoming
+ * EVREs one more time and return.
+ *
+ * For unprocessed EVREs, hardware will trigger another interrupt until
+ * all the interrupt bits are cleared.
+ *
+ * Hardware guarantees that by the time interrupt is observed, all data
+ * transactions in flight are delivered to their respective places and
+ * are visible to the CPU.
+ *
+ * On demand paging for IOMMU is only supported for PCIe via PRI
+ * (Page Request Interface) not for HIDMA. All other hardware instances
+ * including HIDMA work on pinned DMA addresses.
+ *
+ * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
+ * IOMMU latency will be built into the data movement time. By the time
+ * interrupt happens, IOMMU lookups + data movement has already taken place.
+ *
+ * While the first read in a typical PCI endpoint ISR traditionally flushes
+ * all outstanding requests to their destination, that concept does not
+ * apply to this hardware.
+ */
+static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev)
+{
+	u32 status;
+	u32 enable;
+	u32 cause;
+	int repeat = 2;
+	unsigned long timeout;
+
+	/*
+	 * Fine tuned for this HW...
+	 *
+	 * This ISR has been designed for this particular hardware. Relaxed
+	 * read and write accessors are used for performance reasons due to
+	 * interrupt delivery guarantees. Do not copy this code blindly and
+	 * expect that to work.
+	 */
+	status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+	cause = status & enable;
+
+	if ((cause & (BIT(IRQ_TR_CH_INVALID_TRE_BIT_POS))) ||
+	    (cause & BIT(IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS)) ||
+	    (cause & BIT(IRQ_EV_CH_WR_RESP_BIT_POS)) ||
+	    (cause & BIT(IRQ_TR_CH_DATA_RD_ER_BIT_POS)) ||
+	    (cause & BIT(IRQ_TR_CH_DATA_WR_ER_BIT_POS))) {
+		u8 err_code = EVRE_STATUS_ERROR;
+		u8 err_info = 0xFF;
+
+		/* Clear out pending interrupts */
+		writel(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+		dev_err(lldev->dev, "error 0x%x, resetting...\n", cause);
+
+		hidma_cleanup_pending_tre(lldev, err_info, err_code);
+
+		/* reset the channel for recovery */
+		if (hidma_ll_setup(lldev)) {
+			dev_err(lldev->dev,
+				"channel reinitialize failed after error\n");
+			return;
+		}
+		hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+		return;
+	}
+
+	/*
+	 * Try to consume as many EVREs as possible.
+	 * skip this loop if the interrupt is spurious.
+	 */
+	while (cause && repeat) {
+		unsigned long start = jiffies;
+
+		/* This timeout should be sufficient for the core to finish */
+		timeout = start + msecs_to_jiffies(500);
+
+		while (lldev->pending_tre_count) {
+			hidma_handle_tre_completion(lldev);
+			if (time_is_before_jiffies(timeout)) {
+				dev_warn(lldev->dev,
+					 "ISR timeout %lx-%lx from %lx [%d]\n",
+					 jiffies, timeout, start,
+					 lldev->pending_tre_count);
+				break;
+			}
+		}
+
+		/* We consumed TREs or there are pending TREs or EVREs. */
+		writel_relaxed(cause, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+		/*
+		 * Another interrupt might have arrived while we are
+		 * processing this one. Read the new cause.
+		 */
+		status = readl_relaxed(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+		enable = readl_relaxed(lldev->evca + EVCA_IRQ_EN_OFFSET);
+		cause = status & enable;
+
+		repeat--;
+	}
+}
+
+static int hidma_ll_enable(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_ENABLE << 16;
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+				 (HIDMA_CH_STATE(val) == CH_ENABLED) ||
+				 (HIDMA_CH_STATE(val) == CH_RUNNING), 1000,
+				 10000);
+	if (ret) {
+		dev_err(lldev->dev, "event channel did not get enabled\n");
+		return ret;
+	}
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_ENABLE << 16;
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+				 (HIDMA_CH_STATE(val) == CH_ENABLED) ||
+				 (HIDMA_CH_STATE(val) == CH_RUNNING), 1000,
+				 10000);
+	if (ret) {
+		dev_err(lldev->dev, "transfer channel did not get enabled\n");
+		return ret;
+	}
+
+	lldev->trch_state = CH_ENABLED;
+	lldev->evch_state = CH_ENABLED;
+
+	return 0;
+}
+
+int hidma_ll_resume(struct hidma_lldev *lldev)
+{
+	return hidma_ll_enable(lldev);
+}
+
+static void hidma_ll_hw_start(struct hidma_lldev *lldev)
+{
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&lldev->lock, irqflags);
+	writel(lldev->tre_write_offset, lldev->trca + TRCA_DOORBELL_OFFSET);
+	spin_unlock_irqrestore(&lldev->lock, irqflags);
+}
+
+bool hidma_ll_isenabled(struct hidma_lldev *lldev)
+{
+	u32 val;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	lldev->trch_state = HIDMA_CH_STATE(val);
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	lldev->evch_state = HIDMA_CH_STATE(val);
+
+	/* both channels have to be enabled before calling this function */
+	if (((lldev->trch_state == CH_ENABLED) ||
+	     (lldev->trch_state == CH_RUNNING)) &&
+	    ((lldev->evch_state == CH_ENABLED) ||
+	     (lldev->evch_state == CH_RUNNING)))
+		return true;
+
+	return false;
+}
+
+void hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	struct hidma_tre *tre;
+	unsigned long flags;
+
+	tre = &lldev->trepool[tre_ch];
+
+	/* copy the TRE into its location in the TRE ring */
+	spin_lock_irqsave(&lldev->lock, flags);
+	tre->tre_index = lldev->tre_write_offset / TRE_SIZE;
+	lldev->pending_tre_list[tre->tre_index] = tre;
+	memcpy(lldev->tre_ring + lldev->tre_write_offset, &tre->tre_local[0],
+	       TRE_SIZE);
+	lldev->tx_status_list[tre->idx].err_code = 0;
+	lldev->tx_status_list[tre->idx].err_info = 0;
+	tre->queued = 1;
+	lldev->pending_tre_count++;
+	lldev->tre_write_offset = (lldev->tre_write_offset + TRE_SIZE)
+	    % lldev->tre_ring_size;
+	spin_unlock_irqrestore(&lldev->lock, flags);
+}
+
+void hidma_ll_start(struct hidma_lldev *lldev)
+{
+	hidma_ll_hw_start(lldev);
+}
+
+/*
+ * Note that even though we stop this channel,
+ * any transaction that is already in flight
+ * will still complete and invoke its callback.
+ * Pausing only prevents further requests
+ * from being issued.
+ */
+int hidma_ll_pause(struct hidma_lldev *lldev)
+{
+	u32 val;
+	int ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	lldev->evch_state = HIDMA_CH_STATE(val);
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	lldev->trch_state = HIDMA_CH_STATE(val);
+
+	/* already suspended by this OS */
+	if ((lldev->trch_state == CH_SUSPENDED) ||
+	    (lldev->evch_state == CH_SUSPENDED))
+		return 0;
+
+	/* already stopped by the manager */
+	if ((lldev->trch_state == CH_STOPPED) ||
+	    (lldev->evch_state == CH_STOPPED))
+		return 0;
+
+	val = readl(lldev->trca + TRCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_SUSPEND << 16;
+	writel(val, lldev->trca + TRCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Wait for the suspend to take effect:
+	 * poll the channel state every 1ms, for up to 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->trca + TRCA_CTRLSTS_OFFSET, val,
+				 HIDMA_CH_STATE(val) == CH_SUSPENDED, 1000,
+				 10000);
+	if (ret)
+		return ret;
+
+	val = readl(lldev->evca + EVCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_SUSPEND << 16;
+	writel(val, lldev->evca + EVCA_CTRLSTS_OFFSET);
+
+	/*
+	 * Wait for the suspend to take effect:
+	 * poll the channel state every 1ms, for up to 10ms.
+	 */
+	ret = readl_poll_timeout(lldev->evca + EVCA_CTRLSTS_OFFSET, val,
+				 HIDMA_CH_STATE(val) == CH_SUSPENDED, 1000,
+				 10000);
+	if (ret)
+		return ret;
+
+	lldev->trch_state = CH_SUSPENDED;
+	lldev->evch_state = CH_SUSPENDED;
+	return 0;
+}
+
+void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
+				  dma_addr_t src, dma_addr_t dest, u32 len,
+				  u32 flags)
+{
+	struct hidma_tre *tre;
+	u32 *tre_local;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev,
+			"invalid TRE number in transfer params:%d", tre_ch);
+		return;
+	}
+
+	tre = &lldev->trepool[tre_ch];
+	if (atomic_read(&tre->allocated) != true) {
+		dev_err(lldev->dev,
+			"trying to set params on an unused TRE:%d", tre_ch);
+		return;
+	}
+
+	tre_local = &tre->tre_local[0];
+	tre_local[TRE_LEN_IDX] = len;
+	tre_local[TRE_SRC_LOW_IDX] = lower_32_bits(src);
+	tre_local[TRE_SRC_HI_IDX] = upper_32_bits(src);
+	tre_local[TRE_DEST_LOW_IDX] = lower_32_bits(dest);
+	tre_local[TRE_DEST_HI_IDX] = upper_32_bits(dest);
+	tre->int_flags = flags;
+}
+
+/*
+ * Called during initialization and after an error condition
+ * to restore hardware state.
+ */
+int hidma_ll_setup(struct hidma_lldev *lldev)
+{
+	int rc;
+	u64 addr;
+	u32 val;
+	u32 nr_tres = lldev->nr_tres;
+
+	lldev->pending_tre_count = 0;
+	lldev->tre_processed_off = 0;
+	lldev->evre_processed_off = 0;
+	lldev->tre_write_offset = 0;
+
+	/* disable interrupts */
+	hidma_ll_enable_irq(lldev, 0);
+
+	/* clear all pending interrupts */
+	val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+	rc = hidma_ll_reset(lldev);
+	if (rc)
+		return rc;
+
+	/*
+	 * Clear all pending interrupts again.
+	 * Otherwise, we observe reset complete interrupts.
+	 */
+	val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+	writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+
+	/* disable interrupts again after reset */
+	hidma_ll_enable_irq(lldev, 0);
+
+	addr = lldev->tre_ring_handle;
+	writel(lower_32_bits(addr), lldev->trca + TRCA_RING_LOW_OFFSET);
+	writel(upper_32_bits(addr), lldev->trca + TRCA_RING_HIGH_OFFSET);
+	writel(lldev->tre_ring_size, lldev->trca + TRCA_RING_LEN_OFFSET);
+
+	addr = lldev->evre_ring_handle;
+	writel(lower_32_bits(addr), lldev->evca + EVCA_RING_LOW_OFFSET);
+	writel(upper_32_bits(addr), lldev->evca + EVCA_RING_HIGH_OFFSET);
+	writel(EVRE_SIZE * nr_tres, lldev->evca + EVCA_RING_LEN_OFFSET);
+
+	/* support IRQ only for now */
+	val = readl(lldev->evca + EVCA_INTCTRL_OFFSET);
+	val &= ~0xF;
+	val |= 0x1;
+	writel(val, lldev->evca + EVCA_INTCTRL_OFFSET);
+
+	/* clear all pending interrupts and enable them */
+	writel(ENABLE_IRQS, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+	hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+
+	rc = hidma_ll_enable(lldev);
+	if (rc)
+		return rc;
+
+	return rc;
+}
+
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
+				  void __iomem *trca, void __iomem *evca,
+				  u8 chidx)
+{
+	u32 required_bytes;
+	struct hidma_lldev *lldev;
+	int rc;
+
+	if (!trca || !evca || !dev || !nr_tres)
+		return NULL;
+
+	/* need at least four TREs */
+	if (nr_tres < 4)
+		return NULL;
+
+	/* need an extra space */
+	nr_tres += 1;
+
+	lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
+	if (!lldev)
+		return NULL;
+
+	lldev->evca = evca;
+	lldev->trca = trca;
+	lldev->dev = dev;
+	lldev->trepool = devm_kcalloc(lldev->dev, nr_tres,
+				      sizeof(struct hidma_tre), GFP_KERNEL);
+	if (!lldev->trepool)
+		return NULL;
+
+	required_bytes = sizeof(lldev->pending_tre_list[0]);
+	lldev->pending_tre_list = devm_kcalloc(dev, nr_tres, required_bytes,
+					       GFP_KERNEL);
+	if (!lldev->pending_tre_list)
+		return NULL;
+
+	lldev->tx_status_list = devm_kcalloc(dev, nr_tres,
+					     sizeof(lldev->tx_status_list[0]),
+					     GFP_KERNEL);
+	if (!lldev->tx_status_list)
+		return NULL;
+
+	lldev->tre_ring = dmam_alloc_coherent(dev, (TRE_SIZE + 1) * nr_tres,
+					      &lldev->tre_ring_handle,
+					      GFP_KERNEL);
+	if (!lldev->tre_ring)
+		return NULL;
+
+	memset(lldev->tre_ring, 0, (TRE_SIZE + 1) * nr_tres);
+	lldev->tre_ring_size = TRE_SIZE * nr_tres;
+	lldev->nr_tres = nr_tres;
+
+	/* the TRE ring has to be TRE_SIZE aligned */
+	if (!IS_ALIGNED(lldev->tre_ring_handle, TRE_SIZE)) {
+		u8 tre_ring_shift;
+
+		tre_ring_shift = lldev->tre_ring_handle % TRE_SIZE;
+		tre_ring_shift = TRE_SIZE - tre_ring_shift;
+		lldev->tre_ring_handle += tre_ring_shift;
+		lldev->tre_ring += tre_ring_shift;
+	}
+
+	lldev->evre_ring = dmam_alloc_coherent(dev, (EVRE_SIZE + 1) * nr_tres,
+					       &lldev->evre_ring_handle,
+					       GFP_KERNEL);
+	if (!lldev->evre_ring)
+		return NULL;
+
+	memset(lldev->evre_ring, 0, (EVRE_SIZE + 1) * nr_tres);
+	lldev->evre_ring_size = EVRE_SIZE * nr_tres;
+
+	/* the EVRE ring has to be EVRE_SIZE aligned */
+	if (!IS_ALIGNED(lldev->evre_ring_handle, EVRE_SIZE)) {
+		u8 evre_ring_shift;
+
+		evre_ring_shift = lldev->evre_ring_handle % EVRE_SIZE;
+		evre_ring_shift = EVRE_SIZE - evre_ring_shift;
+		lldev->evre_ring_handle += evre_ring_shift;
+		lldev->evre_ring += evre_ring_shift;
+	}
+	lldev->nr_tres = nr_tres;
+	lldev->chidx = chidx;
+
+	rc = kfifo_alloc(&lldev->handoff_fifo,
+			 nr_tres * sizeof(struct hidma_tre *), GFP_KERNEL);
+	if (rc)
+		return NULL;
+
+	rc = hidma_ll_setup(lldev);
+	if (rc)
+		return NULL;
+
+	spin_lock_init(&lldev->lock);
+	tasklet_init(&lldev->task, hidma_ll_tre_complete, (unsigned long)lldev);
+	lldev->initialized = 1;
+	hidma_ll_enable_irq(lldev, ENABLE_IRQS);
+	return lldev;
+}
+
+int hidma_ll_uninit(struct hidma_lldev *lldev)
+{
+	int rc = 0;
+	u32 val;
+
+	if (!lldev)
+		return -ENODEV;
+
+	if (lldev->initialized) {
+		u32 required_bytes;
+
+		lldev->initialized = 0;
+
+		required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
+		tasklet_kill(&lldev->task);
+		memset(lldev->trepool, 0, required_bytes);
+		lldev->trepool = NULL;
+		lldev->pending_tre_count = 0;
+		lldev->tre_write_offset = 0;
+
+		rc = hidma_ll_reset(lldev);
+
+		/*
+		 * Clear all pending interrupts again.
+		 * Otherwise, we observe reset complete interrupts.
+		 */
+		val = readl(lldev->evca + EVCA_IRQ_STAT_OFFSET);
+		writel(val, lldev->evca + EVCA_IRQ_CLR_OFFSET);
+		hidma_ll_enable_irq(lldev, 0);
+	}
+	return rc;
+}
+
+irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
+{
+	struct hidma_lldev *lldev = arg;
+
+	hidma_ll_int_handler_internal(lldev);
+	return IRQ_HANDLED;
+}
+
+enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
+{
+	enum dma_status ret = DMA_ERROR;
+	unsigned long flags;
+	u8 err_code;
+
+	spin_lock_irqsave(&lldev->lock, flags);
+	err_code = lldev->tx_status_list[tre_ch].err_code;
+
+	if (err_code & EVRE_STATUS_COMPLETE)
+		ret = DMA_COMPLETE;
+	else if (err_code & EVRE_STATUS_ERROR)
+		ret = DMA_ERROR;
+	else
+		ret = DMA_IN_PROGRESS;
+	spin_unlock_irqrestore(&lldev->lock, flags);
+
+	return ret;
+}
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 06/10] dma: qcom_hidma: add debugfs hooks
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

Add debugfs hooks for debugging the execution behavior of the DMA
channel. The debugfs hooks are initialized by the probe function and
torn down by the remove function.

A stats file is created in debugfs. It shows information about each
HIDMA channel as well as each asynchronous job that is queued and
completed at a given time.
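
For orientation, the usual shape of such a seq_file-backed debugfs hook is
sketched below. This is a generic illustration, not the exact code in
hidma_dbg.c: sample_stats_show(), sample_debug_init(), sample_debug_uninit()
and the sample_dbg_dir storage are made-up names used only to show the
pattern of registering a "stats" file from probe and removing it in remove.

	#include <linux/debugfs.h>
	#include <linux/seq_file.h>

	static struct dentry *sample_dbg_dir;

	static int sample_stats_show(struct seq_file *s, void *unused)
	{
		struct hidma_dev *dmadev = s->private;

		/* dump per-device and per-channel state here */
		seq_printf(s, "device %s\n", dev_name(dmadev->ddev.dev));
		return 0;
	}

	static int sample_stats_open(struct inode *inode, struct file *file)
	{
		return single_open(file, sample_stats_show, inode->i_private);
	}

	static const struct file_operations sample_stats_fops = {
		.open    = sample_stats_open,
		.read    = seq_read,
		.llseek  = seq_lseek,
		.release = single_release,
	};

	/* called from the probe path */
	static int sample_debug_init(struct hidma_dev *dmadev)
	{
		sample_dbg_dir = debugfs_create_dir(dev_name(dmadev->ddev.dev),
						    NULL);
		if (!sample_dbg_dir)
			return -ENODEV;
		debugfs_create_file("stats", 0444, sample_dbg_dir, dmadev,
				    &sample_stats_fops);
		return 0;
	}

	/* called from the remove path */
	static void sample_debug_uninit(struct hidma_dev *dmadev)
	{
		debugfs_remove_recursive(sample_dbg_dir);
	}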

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 drivers/dma/qcom/Makefile    |   2 +-
 drivers/dma/qcom/hidma.c     |   3 +
 drivers/dma/qcom/hidma.h     |   2 +
 drivers/dma/qcom/hidma_dbg.c | 219 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 225 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma/qcom/hidma_dbg.c

diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index 6bf9267..4bfc38b 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -2,4 +2,4 @@ obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
 obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
 hdma_mgmt-objs	 := hidma_mgmt.o hidma_mgmt_sys.o
 obj-$(CONFIG_QCOM_HIDMA) +=  hdma.o
-hdma-objs        := hidma_ll.o hidma.o
+hdma-objs        := hidma_ll.o hidma.o hidma_dbg.o
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
index f8960f1..833db9e 100644
--- a/drivers/dma/qcom/hidma.c
+++ b/drivers/dma/qcom/hidma.c
@@ -681,6 +681,7 @@ static int hidma_probe(struct platform_device *pdev)
 
 	dmadev->irq = chirq;
 	tasklet_init(&dmadev->task, hidma_issue_task, (unsigned long)dmadev);
+	hidma_debug_init(dmadev);
 	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
 	platform_set_drvdata(pdev, dmadev);
 	pm_runtime_mark_last_busy(dmadev->ddev.dev);
@@ -689,6 +690,7 @@ static int hidma_probe(struct platform_device *pdev)
 	return 0;
 
 uninit:
+	hidma_debug_uninit(dmadev);
 	hidma_ll_uninit(dmadev->lldev);
 dmafree:
 	if (dmadev)
@@ -706,6 +708,7 @@ static int hidma_remove(struct platform_device *pdev)
 	pm_runtime_get_sync(dmadev->ddev.dev);
 	dma_async_device_unregister(&dmadev->ddev);
 	devm_free_irq(dmadev->ddev.dev, dmadev->irq, dmadev->lldev);
+	hidma_debug_uninit(dmadev);
 	hidma_ll_uninit(dmadev->lldev);
 	hidma_free(dmadev);
 
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
index 1e09d7c..c6b3ae0 100644
--- a/drivers/dma/qcom/hidma.h
+++ b/drivers/dma/qcom/hidma.h
@@ -157,4 +157,6 @@ int hidma_ll_uninit(struct hidma_lldev *llhndl);
 irqreturn_t hidma_ll_inthandler(int irq, void *arg);
 void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
 				u8 err_code);
+int hidma_debug_init(struct hidma_dev *dmadev);
+void hidma_debug_uninit(struct hidma_dev *dmadev);
 #endif
diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
new file mode 100644
index 0000000..fe35002
--- /dev/null
+++ b/drivers/dma/qcom/hidma_dbg.c
@@ -0,0 +1,219 @@
+/*
+ * Qualcomm Technologies HIDMA debug file
+ *
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma.h"
+
+static void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
+{
+	struct hidma_lldev *lldev = llhndl;
+	struct hidma_tre *tre;
+	u32 length;
+	dma_addr_t src_start;
+	dma_addr_t dest_start;
+	u32 *tre_local;
+
+	if (tre_ch >= lldev->nr_tres) {
+		dev_err(lldev->dev, "invalid TRE number in chstats:%d", tre_ch);
+		return;
+	}
+	tre = &lldev->trepool[tre_ch];
+	seq_printf(s, "------Channel %d -----\n", tre_ch);
+	seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
+	seq_printf(s, "queued = 0x%x\n", tre->queued);
+	seq_printf(s, "err_info = 0x%x\n",
+		   lldev->tx_status_list[tre->idx].err_info);
+	seq_printf(s, "err_code = 0x%x\n",
+		   lldev->tx_status_list[tre->idx].err_code);
+	seq_printf(s, "status = 0x%x\n", tre->status);
+	seq_printf(s, "idx = 0x%x\n", tre->idx);
+	seq_printf(s, "dma_sig = 0x%x\n", tre->dma_sig);
+	seq_printf(s, "dev_name=%s\n", tre->dev_name);
+	seq_printf(s, "callback=%p\n", tre->callback);
+	seq_printf(s, "data=%p\n", tre->data);
+	seq_printf(s, "tre_index = 0x%x\n", tre->tre_index);
+
+	tre_local = &tre->tre_local[0];
+	src_start = tre_local[TRE_SRC_LOW_IDX];
+	src_start = ((u64) (tre_local[TRE_SRC_HI_IDX]) << 32) + src_start;
+	dest_start = tre_local[TRE_DEST_LOW_IDX];
+	dest_start += ((u64) (tre_local[TRE_DEST_HI_IDX]) << 32);
+	length = tre_local[TRE_LEN_IDX];
+
+	seq_printf(s, "src=%pap\n", &src_start);
+	seq_printf(s, "dest=%pap\n", &dest_start);
+	seq_printf(s, "length = 0x%x\n", length);
+}
+
+static void hidma_ll_devstats(struct seq_file *s, void *llhndl)
+{
+	struct hidma_lldev *lldev = llhndl;
+
+	seq_puts(s, "------Device -----\n");
+	seq_printf(s, "lldev init = 0x%x\n", lldev->initialized);
+	seq_printf(s, "trch_state = 0x%x\n", lldev->trch_state);
+	seq_printf(s, "evch_state = 0x%x\n", lldev->evch_state);
+	seq_printf(s, "chidx = 0x%x\n", lldev->chidx);
+	seq_printf(s, "nr_tres = 0x%x\n", lldev->nr_tres);
+	seq_printf(s, "trca=%p\n", lldev->trca);
+	seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
+	seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_ring_handle);
+	seq_printf(s, "tre_ring_size = 0x%x\n", lldev->tre_ring_size);
+	seq_printf(s, "tre_processed_off = 0x%x\n", lldev->tre_processed_off);
+	seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
+	seq_printf(s, "evca=%p\n", lldev->evca);
+	seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
+	seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_ring_handle);
+	seq_printf(s, "evre_ring_size = 0x%x\n", lldev->evre_ring_size);
+	seq_printf(s, "evre_processed_off = 0x%x\n", lldev->evre_processed_off);
+	seq_printf(s, "tre_write_offset = 0x%x\n", lldev->tre_write_offset);
+}
+
+/*
+ * hidma_chan_stats: display HIDMA channel statistics
+ *
+ * Display the statistics for the current HIDMA virtual channel device.
+ */
+static int hidma_chan_stats(struct seq_file *s, void *unused)
+{
+	struct hidma_chan *mchan = s->private;
+	struct hidma_desc *mdesc;
+	struct hidma_dev *dmadev = mchan->dmadev;
+
+	pm_runtime_get_sync(dmadev->ddev.dev);
+	seq_printf(s, "paused=%u\n", mchan->paused);
+	seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
+	seq_puts(s, "prepared\n");
+	list_for_each_entry(mdesc, &mchan->prepared, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	seq_puts(s, "active\n");
+	list_for_each_entry(mdesc, &mchan->active, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	seq_puts(s, "completed\n");
+	list_for_each_entry(mdesc, &mchan->completed, node)
+		hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+	hidma_ll_devstats(s, mchan->dmadev->lldev);
+	pm_runtime_mark_last_busy(dmadev->ddev.dev);
+	pm_runtime_put_autosuspend(dmadev->ddev.dev);
+	return 0;
+}
+
+/*
+ * hidma_dma_info: display HIDMA device info
+ *
+ * Display the info for the current HIDMA device.
+ */
+static int hidma_dma_info(struct seq_file *s, void *unused)
+{
+	struct hidma_dev *dmadev = s->private;
+	resource_size_t sz;
+
+	seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
+	seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
+	seq_printf(s, "dev_trca_phys=%pa\n", &dmadev->trca_resource->start);
+	sz = resource_size(dmadev->trca_resource);
+	seq_printf(s, "dev_trca_size=%pa\n", &sz);
+	seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
+	seq_printf(s, "dev_evca_phys=%pa\n", &dmadev->evca_resource->start);
+	sz = resource_size(dmadev->evca_resource);
+	seq_printf(s, "dev_evca_size=%pa\n", &sz);
+	return 0;
+}
+
+static int hidma_chan_stats_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, hidma_chan_stats, inode->i_private);
+}
+
+static int hidma_dma_info_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, hidma_dma_info, inode->i_private);
+}
+
+static const struct file_operations hidma_chan_fops = {
+	.open = hidma_chan_stats_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+static const struct file_operations hidma_dma_fops = {
+	.open = hidma_dma_info_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
+void hidma_debug_uninit(struct hidma_dev *dmadev)
+{
+	debugfs_remove_recursive(dmadev->debugfs);
+	debugfs_remove_recursive(dmadev->stats);
+}
+
+int hidma_debug_init(struct hidma_dev *dmadev)
+{
+	int rc = 0;
+	int chidx = 0;
+	struct list_head *position = NULL;
+
+	dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev), NULL);
+	if (!dmadev->debugfs) {
+		rc = -ENODEV;
+		return rc;
+	}
+
+	/* walk through the virtual channel list */
+	list_for_each(position, &dmadev->ddev.channels) {
+		struct hidma_chan *chan;
+
+		chan = list_entry(position, struct hidma_chan,
+				  chan.device_node);
+		sprintf(chan->dbg_name, "chan%d", chidx);
+		chan->debugfs = debugfs_create_dir(chan->dbg_name,
+						   dmadev->debugfs);
+		if (!chan->debugfs) {
+			rc = -ENOMEM;
+			goto cleanup;
+		}
+		chan->stats = debugfs_create_file("stats", S_IRUGO,
+						  chan->debugfs, chan,
+						  &hidma_chan_fops);
+		if (!chan->stats) {
+			rc = -ENOMEM;
+			goto cleanup;
+		}
+		chidx++;
+	}
+
+	dmadev->stats = debugfs_create_file("stats", S_IRUGO,
+					    dmadev->debugfs, dmadev,
+					    &hidma_dma_fops);
+	if (!dmadev->stats) {
+		rc = -ENOMEM;
+		goto cleanup;
+	}
+
+	return 0;
+cleanup:
+	hidma_debug_uninit(dmadev);
+	return rc;
+}
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 07/10] dma: qcom_hidma: add support for object hierarchy
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

In order to create a relationship model between the channels and the
management object, we are adding support for object hierarchy to the
drivers. This simplifies userspace application development: userspace no
longer has to traverse different firmware paths depending on whether the
kernel is device tree or ACPI based.

No matter what flavor of kernel is used, objects will be represented as
platform devices.

The new layout is as follows:

hidmam_10: hidma-mgmt@0x5A000000 {
	compatible = "qcom,hidma-mgmt-1.0";
	...

	hidma_10: hidma@0x5a010000 {
			compatible = "qcom,hidma-1.0";
			...
	}
}

hidma_mgmt_init detects each instance of a hidma-mgmt-1.0 object in the
device tree and calls into the channel driver to create a platform device
for each child of the management object.
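
Once the channel platform devices exist, the chid attribute added in this
patch can be read from userspace, for example (path as described in the
ABI document below; the value shown is illustrative):

	# cat /sys/devices/platform/hidma-*/chid
	2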

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 Documentation/ABI/testing/sysfs-platform-hidma |   9 +++
 drivers/dma/qcom/hidma.c                       |  39 +++++++++-
 drivers/dma/qcom/hidma_mgmt.c                  | 104 ++++++++++++++++++++++++-
 3 files changed, 148 insertions(+), 4 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-platform-hidma

diff --git a/Documentation/ABI/testing/sysfs-platform-hidma b/Documentation/ABI/testing/sysfs-platform-hidma
new file mode 100644
index 0000000..d364415
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-platform-hidma
@@ -0,0 +1,9 @@
+What:		/sys/devices/platform/hidma-*/chid
+		/sys/devices/platform/QCOM8061:*/chid
+Date:		Dec 2015
+KernelVersion:	4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+		Contains the ID of the channel within the HIDMA instance.
+		It is used to associate a given HIDMA channel with the
+		priority and weight calls in the management interface.
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
index 833db9e..ac20bdb 100644
--- a/drivers/dma/qcom/hidma.c
+++ b/drivers/dma/qcom/hidma.c
@@ -549,6 +549,43 @@ static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
 	return hidma_ll_inthandler(chirq, lldev);
 }
 
+static ssize_t hidma_show_values(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct hidma_dev *mdev = platform_get_drvdata(pdev);
+
+	buf[0] = 0;
+
+	if (strcmp(attr->attr.name, "chid") == 0)
+		sprintf(buf, "%d\n", mdev->chidx);
+
+	return strlen(buf);
+}
+
+static int hidma_create_sysfs_entry(struct hidma_dev *dev, char *name,
+				    int mode)
+{
+	struct device_attribute *attrs;
+	char *name_copy;
+
+	attrs = devm_kmalloc(dev->ddev.dev, sizeof(struct device_attribute),
+			     GFP_KERNEL);
+	if (!attrs)
+		return -ENOMEM;
+
+	name_copy = devm_kstrdup(dev->ddev.dev, name, GFP_KERNEL);
+	if (!name_copy)
+		return -ENOMEM;
+
+	attrs->attr.name = name_copy;
+	attrs->attr.mode = mode;
+	attrs->show = hidma_show_values;
+	sysfs_attr_init(&attrs->attr);
+
+	return device_create_file(dev->ddev.dev, attrs);
+}
+
 static int hidma_probe(struct platform_device *pdev)
 {
 	struct hidma_dev *dmadev;
@@ -682,6 +719,7 @@ static int hidma_probe(struct platform_device *pdev)
 	dmadev->irq = chirq;
 	tasklet_init(&dmadev->task, hidma_issue_task, (unsigned long)dmadev);
 	hidma_debug_init(dmadev);
+	hidma_create_sysfs_entry(dmadev, "chid", S_IRUGO);
 	dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
 	platform_set_drvdata(pdev, dmadev);
 	pm_runtime_mark_last_busy(dmadev->ddev.dev);
@@ -730,7 +768,6 @@ static const struct of_device_id hidma_match[] = {
 	{.compatible = "qcom,hidma-1.0",},
 	{},
 };
-
 MODULE_DEVICE_TABLE(of, hidma_match);
 
 static struct platform_driver hidma_driver = {
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
index ef491b8..61fe6b7 100644
--- a/drivers/dma/qcom/hidma_mgmt.c
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -17,13 +17,14 @@
 #include <linux/acpi.h>
 #include <linux/of.h>
 #include <linux/property.h>
-#include <linux/interrupt.h>
-#include <linux/platform_device.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
 #include <linux/module.h>
 #include <linux/uaccess.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
 #include <linux/bitops.h>
+#include <linux/dma-mapping.h>
 
 #include "hidma_mgmt.h"
 
@@ -298,5 +299,102 @@ static struct platform_driver hidma_mgmt_driver = {
 	},
 };
 
-module_platform_driver(hidma_mgmt_driver);
+#if defined(CONFIG_OF) && defined(CONFIG_OF_IRQ)
+static int object_counter;
+
+static int __init hidma_mgmt_of_populate_channels(struct device_node *np)
+{
+	struct platform_device *pdev_parent = of_find_device_by_node(np);
+	struct platform_device_info pdevinfo;
+	struct of_phandle_args out_irq;
+	struct device_node *child;
+	struct resource *res;
+	const __be32 *cell;
+	int ret = 0, size, i, num;
+	u64 addr, addr_size;
+
+	for_each_available_child_of_node(np, child) {
+		struct resource *res_iter;
+
+		cell = of_get_property(child, "reg", &size);
+		if (!cell) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		size /= sizeof(*cell);
+		num = size /
+			(of_n_addr_cells(child) + of_n_size_cells(child)) + 1;
+
+		/* allocate a resource array */
+		res = kcalloc(num, sizeof(*res), GFP_KERNEL);
+		if (!res) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		/* read each reg value */
+		i = 0;
+		res_iter = res;
+		while (i < size) {
+			addr = of_read_number(&cell[i],
+					      of_n_addr_cells(child));
+			i += of_n_addr_cells(child);
+
+			addr_size = of_read_number(&cell[i],
+						   of_n_size_cells(child));
+			i += of_n_size_cells(child);
+
+			res_iter->start = addr;
+			res_iter->end = res_iter->start + addr_size - 1;
+			res_iter->flags = IORESOURCE_MEM;
+			res_iter++;
+		}
+
+		ret = of_irq_parse_one(child, 0, &out_irq);
+		if (ret)
+			goto out;
+
+		res_iter->start = irq_create_of_mapping(&out_irq);
+		res_iter->name = "hidma event irq";
+		res_iter->flags = IORESOURCE_IRQ;
+
+		pdevinfo.fwnode = &child->fwnode;
+		pdevinfo.parent = pdev_parent ? &pdev_parent->dev : NULL;
+		pdevinfo.name = child->name;
+		pdevinfo.id = object_counter++;
+		pdevinfo.res = res;
+		pdevinfo.num_res = num;
+		pdevinfo.data = NULL;
+		pdevinfo.size_data = 0;
+		pdevinfo.dma_mask = DMA_BIT_MASK(64);
+		platform_device_register_full(&pdevinfo);
+
+		kfree(res);
+		res = NULL;
+	}
+out:
+	kfree(res);
+
+	return ret;
+}
+#endif
+
+static int __init hidma_mgmt_init(void)
+{
+#if defined(CONFIG_OF) && defined(CONFIG_OF_IRQ)
+	struct device_node *child;
+
+	for (child = of_find_matching_node(NULL, hidma_mgmt_match); child;
+	     child = of_find_matching_node(child, hidma_mgmt_match)) {
+		/* device tree based firmware here */
+		hidma_mgmt_of_populate_channels(child);
+		of_node_put(child);
+	}
+#endif
+	platform_driver_register(&hidma_mgmt_driver);
+
+	return 0;
+}
+module_init(hidma_mgmt_init);
 MODULE_LICENSE("GPL v2");
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

Remove the flexibility to choose the event channel, as there is no real
use case for it right now. We have been using values in ACPI that match
the HW defaults. The OS now reads the event channel from the HW register
instead.
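
A minimal sketch of the new probe-time read; the symbolic name below is
an assumption added for readability, the patch itself uses the literal
offset:

	/*
	 * Channel id register in the TRCA region (offset 0x28). The name
	 * HIDMA_CHID_REG_OFFSET is illustrative and not taken from the
	 * driver.
	 */
	#define HIDMA_CHID_REG_OFFSET	0x28

	dmadev->chidx = readl(dmadev->dev_trca + HIDMA_CHID_REG_OFFSET);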

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  3 --
 drivers/dma/qcom/hidma.c                           | 39 +---------------------
 2 files changed, 1 insertion(+), 41 deletions(-)

diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
index e3677a5..fd5618b 100644
--- a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
+++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
@@ -51,7 +51,6 @@ Required properties:
 - reg: Addresses for the transfer and event channel
 - interrupts: Should contain the event interrupt
 - desc-count: Number of asynchronous requests this channel can handle
-- channel-index: The HW event channel completions will be delivered.
 - iommus: required a iommu node
 
 Example:
@@ -75,7 +74,6 @@ Hypervisor OS configuration:
 			interrupts = <0 389 0>;
 			desc-count = <10>;
 			iommus = <&system_mmu>;
-			channel-index = <4>;
 		};
 	};
 
@@ -87,6 +85,5 @@ Guest OS configuration:
 		      <0 0x5c0b0000 0x0 0x1000>;
 		interrupts = <0 389 0>;
 		desc-count = <10>;
-		channel-index = <4>;
 		iommus = <&system_mmu>;
 	};
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
index ac20bdb..7180367 100644
--- a/drivers/dma/qcom/hidma.c
+++ b/drivers/dma/qcom/hidma.c
@@ -101,26 +101,6 @@ static unsigned int nr_desc_prm;
 module_param(nr_desc_prm, uint, 0644);
 MODULE_PARM_DESC(nr_desc_prm, "number of descriptors (default: 0)");
 
-#define HIDMA_MAX_CHANNELS	64
-static int channel_idx[HIDMA_MAX_CHANNELS] = {
-	[0 ... (HIDMA_MAX_CHANNELS - 1)] = -1
-};
-
-/*
- * Each DMA channel is associated with an event channel for interrupt
- * delivery. The event channel index usually comes from the firmware through
- * ACPI/DT. When a HIDMA channel is executed in the guest machine context (QEMU)
- * the device tree gets auto-generated based on the memory and IRQ resources
- * this driver uses on the host machine. Any device specific paraemeter such as
- * channel-index gets ignored by the QEMU.
- * We are using this command line parameter to pass the event channel index to
- * the guest machine.
- */
-static unsigned int num_channel_idx;
-module_param_array_named(channel_idx, channel_idx, int, &num_channel_idx,
-			 0644);
-MODULE_PARM_DESC(channel_idx, "channel index array for the notifications");
-static atomic_t channel_ref_count;
 
 /* process completed descriptors */
 static void hidma_process_completed(struct hidma_chan *mchan)
@@ -592,7 +572,6 @@ static int hidma_probe(struct platform_device *pdev)
 	struct resource *trca_resource;
 	struct resource *evca_resource;
 	int chirq;
-	int current_channel_index = atomic_read(&channel_ref_count);
 	void __iomem *evca;
 	void __iomem *trca;
 	int rc;
@@ -668,22 +647,7 @@ static int hidma_probe(struct platform_device *pdev)
 		goto dmafree;
 	}
 
-	if (current_channel_index > HIDMA_MAX_CHANNELS) {
-		rc = -EINVAL;
-		goto dmafree;
-	}
-
-	dmadev->chidx = -1;
-	device_property_read_u32(&pdev->dev, "channel-index", &dmadev->chidx);
-
-	/* kernel command line override for the guest machine */
-	if (channel_idx[current_channel_index] != -1)
-		dmadev->chidx = channel_idx[current_channel_index];
-
-	if (dmadev->chidx == -1) {
-		rc = -EINVAL;
-		goto dmafree;
-	}
+	dmadev->chidx = readl(dmadev->dev_trca + 0x28);
 
 	/* Set DMA mask to 64 bits. */
 	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
@@ -724,7 +688,6 @@ static int hidma_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, dmadev);
 	pm_runtime_mark_last_busy(dmadev->ddev.dev);
 	pm_runtime_put_autosuspend(dmadev->ddev.dev);
-	atomic_inc(&channel_ref_count);
 	return 0;
 
 uninit:
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

The code currently uses the compatible DT string to associate a reset
driver with the actual device itself. The compatible string does not
exist on ACPI based systems; there, the ACPI HID is the unique identifier
for a device instead. This change allows a reset driver to register with
a DT compatible string and/or an ACPI HID, and the device is then matched
against either of these.

For ACPI systems, the ACPI HID of the device needs to match the acpihid
in the registered reset driver for reset driver loading to work.

For OF based systems, the DT compatible string of the device needs to
match the compat in the registered reset driver for reset driver loading
to work.
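
As an illustration of the extended interface, a reset driver supporting
both firmware types would register roughly as follows ("QCOM8061" is the
HIDMA channel ACPI HID used elsewhere in this series, the compatible
string comes from the channel driver, and the handler name is
hypothetical):

	/* match by DT compatible string on OF systems, by ACPI HID otherwise */
	module_vfio_reset_handler("qcom,hidma-1.0", "QCOM8061",
				  vfio_platform_hidma_reset);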

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 .../vfio/platform/reset/vfio_platform_amdxgbe.c    |  3 +-
 .../platform/reset/vfio_platform_calxedaxgmac.c    |  3 +-
 drivers/vfio/platform/vfio_platform_common.c       | 80 +++++++++++++++++++---
 drivers/vfio/platform/vfio_platform_private.h      | 41 ++++++-----
 4 files changed, 96 insertions(+), 31 deletions(-)

diff --git a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
index d4030d0..cc5b4fa 100644
--- a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
+++ b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
@@ -119,7 +119,8 @@ int vfio_platform_amdxgbe_reset(struct vfio_platform_device *vdev)
 	return 0;
 }
 
-module_vfio_reset_handler("amd,xgbe-seattle-v1a", vfio_platform_amdxgbe_reset);
+module_vfio_reset_handler("amd,xgbe-seattle-v1a", NULL,
+			  vfio_platform_amdxgbe_reset);
 
 MODULE_VERSION("0.1");
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c b/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
index e3d3d94..0e57529 100644
--- a/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
+++ b/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
@@ -77,7 +77,8 @@ int vfio_platform_calxedaxgmac_reset(struct vfio_platform_device *vdev)
 	return 0;
 }
 
-module_vfio_reset_handler("calxeda,hb-xgmac", vfio_platform_calxedaxgmac_reset);
+module_vfio_reset_handler("calxeda,hb-xgmac", NULL,
+			  vfio_platform_calxedaxgmac_reset);
 
 MODULE_VERSION(DRIVER_VERSION);
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
index 418cdd9..6927e05 100644
--- a/drivers/vfio/platform/vfio_platform_common.c
+++ b/drivers/vfio/platform/vfio_platform_common.c
@@ -13,6 +13,7 @@
  */
 
 #include <linux/device.h>
+#include <linux/acpi.h>
 #include <linux/iommu.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -31,14 +32,22 @@ static LIST_HEAD(reset_list);
 static DEFINE_MUTEX(driver_lock);
 
 static vfio_platform_reset_fn_t vfio_platform_lookup_reset(const char *compat,
-					struct module **module)
+				  const char *acpihid, struct module **module)
 {
 	struct vfio_platform_reset_node *iter;
 	vfio_platform_reset_fn_t reset_fn = NULL;
 
 	mutex_lock(&driver_lock);
 	list_for_each_entry(iter, &reset_list, link) {
-		if (!strcmp(iter->compat, compat) &&
+		if (acpihid && iter->acpihid &&
+		    !strcmp(iter->acpihid, acpihid) &&
+			try_module_get(iter->owner)) {
+			*module = iter->owner;
+			reset_fn = iter->reset;
+			break;
+		}
+		if (compat && iter->compat &&
+		    !strcmp(iter->compat, compat) &&
 			try_module_get(iter->owner)) {
 			*module = iter->owner;
 			reset_fn = iter->reset;
@@ -51,11 +60,12 @@ static vfio_platform_reset_fn_t vfio_platform_lookup_reset(const char *compat,
 
 static void vfio_platform_get_reset(struct vfio_platform_device *vdev)
 {
-	vdev->reset = vfio_platform_lookup_reset(vdev->compat,
-						&vdev->reset_module);
+	vdev->reset = vfio_platform_lookup_reset(vdev->compat, vdev->acpihid,
+						 &vdev->reset_module);
 	if (!vdev->reset) {
 		request_module("vfio-reset:%s", vdev->compat);
 		vdev->reset = vfio_platform_lookup_reset(vdev->compat,
+							 vdev->acpihid,
 							 &vdev->reset_module);
 	}
 }
@@ -541,6 +551,46 @@ static const struct vfio_device_ops vfio_platform_ops = {
 	.mmap		= vfio_platform_mmap,
 };
 
+#ifdef CONFIG_ACPI
+int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
+			     struct device *dev)
+{
+	struct acpi_device *adev = ACPI_COMPANION(dev);
+
+	if (!adev)
+		return -EINVAL;
+
+	vdev->acpihid = acpi_device_hid(adev);
+	if (!vdev->acpihid) {
+		pr_err("VFIO: cannot find ACPI HID for %s\n",
+		       vdev->name);
+		return -EINVAL;
+	}
+	return 0;
+}
+#else
+int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
+			     struct device *dev)
+{
+	return -EINVAL;
+}
+#endif
+
+int vfio_platform_probe_of(struct vfio_platform_device *vdev,
+			   struct device *dev)
+{
+	int ret;
+
+	ret = device_property_read_string(dev, "compatible",
+					  &vdev->compat);
+	if (ret) {
+		pr_err("VFIO: cannot retrieve compat for %s\n",
+			vdev->name);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 int vfio_platform_probe_common(struct vfio_platform_device *vdev,
 			       struct device *dev)
 {
@@ -550,14 +600,14 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
 	if (!vdev)
 		return -EINVAL;
 
-	ret = device_property_read_string(dev, "compatible", &vdev->compat);
-	if (ret) {
-		pr_err("VFIO: cannot retrieve compat for %s\n", vdev->name);
-		return -EINVAL;
-	}
+	ret = vfio_platform_probe_acpi(vdev, dev);
+	if (ret)
+		ret = vfio_platform_probe_of(vdev, dev);
 
-	vdev->device = dev;
+	if (ret)
+		return ret;
 
+	vdev->device = dev;
 	group = iommu_group_get(dev);
 	if (!group) {
 		pr_err("VFIO: No IOMMU group for device %s\n", vdev->name);
@@ -602,13 +652,21 @@ void __vfio_platform_register_reset(struct vfio_platform_reset_node *node)
 EXPORT_SYMBOL_GPL(__vfio_platform_register_reset);
 
 void vfio_platform_unregister_reset(const char *compat,
+				    const char *acpihid,
 				    vfio_platform_reset_fn_t fn)
 {
 	struct vfio_platform_reset_node *iter, *temp;
 
 	mutex_lock(&driver_lock);
 	list_for_each_entry_safe(iter, temp, &reset_list, link) {
-		if (!strcmp(iter->compat, compat) && (iter->reset == fn)) {
+		if (acpihid && iter->acpihid &&
+		    !strcmp(iter->acpihid, acpihid) && (iter->reset == fn)) {
+			list_del(&iter->link);
+			break;
+		}
+
+		if (compat && iter->compat &&
+		    !strcmp(iter->compat, compat) && (iter->reset == fn)) {
 			list_del(&iter->link);
 			break;
 		}
diff --git a/drivers/vfio/platform/vfio_platform_private.h b/drivers/vfio/platform/vfio_platform_private.h
index 42816dd..32feba3 100644
--- a/drivers/vfio/platform/vfio_platform_private.h
+++ b/drivers/vfio/platform/vfio_platform_private.h
@@ -58,6 +58,7 @@ struct vfio_platform_device {
 	struct mutex			igate;
 	struct module			*parent_module;
 	const char			*compat;
+	const char			*acpihid;
 	struct module			*reset_module;
 	struct device			*device;
 
@@ -79,6 +80,7 @@ typedef int (*vfio_platform_reset_fn_t)(struct vfio_platform_device *vdev);
 struct vfio_platform_reset_node {
 	struct list_head link;
 	char *compat;
+	char *acpihid;
 	struct module *owner;
 	vfio_platform_reset_fn_t reset;
 };
@@ -98,27 +100,30 @@ extern int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
 
 extern void __vfio_platform_register_reset(struct vfio_platform_reset_node *n);
 extern void vfio_platform_unregister_reset(const char *compat,
+					   const char *acpihid,
 					   vfio_platform_reset_fn_t fn);
-#define vfio_platform_register_reset(__compat, __reset)		\
-static struct vfio_platform_reset_node __reset ## _node = {	\
-	.owner = THIS_MODULE,					\
-	.compat = __compat,					\
-	.reset = __reset,					\
-};								\
+
+#define vfio_platform_register_reset(__compat, __acpihid, __reset)	\
+static struct vfio_platform_reset_node __reset ## _node = {		\
+	.owner = THIS_MODULE,						\
+	.compat = __compat,						\
+	.acpihid = __acpihid,						\
+	.reset = __reset,						\
+};									\
 __vfio_platform_register_reset(&__reset ## _node)
 
-#define module_vfio_reset_handler(compat, reset)		\
-MODULE_ALIAS("vfio-reset:" compat);				\
-static int __init reset ## _module_init(void)			\
-{								\
-	vfio_platform_register_reset(compat, reset);		\
-	return 0;						\
-};								\
-static void __exit reset ## _module_exit(void)			\
-{								\
-	vfio_platform_unregister_reset(compat, reset);		\
-};								\
-module_init(reset ## _module_init);				\
+#define module_vfio_reset_handler(compat, acpihid, reset)		\
+MODULE_ALIAS("vfio-reset:" compat);					\
+static int __init reset ## _module_init(void)				\
+{									\
+	vfio_platform_register_reset(compat, acpihid, reset);		\
+	return 0;							\
+};									\
+static void __exit reset ## _module_exit(void)				\
+{									\
+	vfio_platform_unregister_reset(compat, acpihid, reset);		\
+};									\
+module_init(reset ## _module_init);					\
 module_exit(reset ## _module_exit)
 
 #endif /* VFIO_PLATFORM_PRIVATE_H */
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH V13 10/10] vfio, platform: add QTI HIDMA reset driver
  2016-01-29 22:35 ` Sinan Kaya
@ 2016-01-29 22:35   ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-29 22:35 UTC (permalink / raw)
  To: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm
  Cc: shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, Sinan Kaya, linux-kernel

In situations where the userspace driver is stopped abnormally and the
VFIO platform device is released, the assigned HW device is currently
left running. As a consequence, the HW device might continue issuing IRQs
and performing DMA accesses.

This patch implements a reset driver for the HIDMA platform device. It is
called by the VFIO platform reset interface when the device is released.
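
For reference, a minimal sketch of how the handler is invoked, simplified
from the common code and assuming only the vdev->reset hook that
vfio_platform_get_reset() populates in the previous patch:

/*
 * Simplified sketch, not the exact upstream code: when the last user of the
 * VFIO platform device goes away, the core calls whatever reset handler
 * vfio_platform_get_reset() matched (by compatible string, or by ACPI HID
 * once patch 09/10 is applied), e.g. vfio_platform_qcomhidma_reset() below.
 */
static void vfio_platform_reset_sketch(struct vfio_platform_device *vdev)
{
	if (vdev->reset)
		vdev->reset(vdev);	/* quiesce the HW before handing it back */
}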

Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
---
 drivers/vfio/platform/reset/Kconfig                |  9 ++
 drivers/vfio/platform/reset/Makefile               |  2 +
 .../vfio/platform/reset/vfio_platform_qcomhidma.c  | 99 ++++++++++++++++++++++
 3 files changed, 110 insertions(+)
 create mode 100644 drivers/vfio/platform/reset/vfio_platform_qcomhidma.c

diff --git a/drivers/vfio/platform/reset/Kconfig b/drivers/vfio/platform/reset/Kconfig
index 70cccc5..d02b3b5 100644
--- a/drivers/vfio/platform/reset/Kconfig
+++ b/drivers/vfio/platform/reset/Kconfig
@@ -13,3 +13,12 @@ config VFIO_PLATFORM_AMDXGBE_RESET
 	  Enables the VFIO platform driver to handle reset for AMD XGBE
 
 	  If you don't know what to do here, say N.
+
+config VFIO_PLATFORM_QCOMHIDMA_RESET
+	tristate "VFIO support for Qualcomm Technologies HIDMA reset"
+	depends on VFIO_PLATFORM
+	help
+	  Enables the VFIO platform driver to handle reset for Qualcomm Technologies
+	  HIDMA Channel.
+
+	  If you don't know what to do here, say N.
diff --git a/drivers/vfio/platform/reset/Makefile b/drivers/vfio/platform/reset/Makefile
index 93f4e23..ec7748a 100644
--- a/drivers/vfio/platform/reset/Makefile
+++ b/drivers/vfio/platform/reset/Makefile
@@ -1,7 +1,9 @@
 vfio-platform-calxedaxgmac-y := vfio_platform_calxedaxgmac.o
 vfio-platform-amdxgbe-y := vfio_platform_amdxgbe.o
+vfio-platform-qcomhidma-y := vfio_platform_qcomhidma.o
 
 ccflags-y += -Idrivers/vfio/platform
 
 obj-$(CONFIG_VFIO_PLATFORM_CALXEDAXGMAC_RESET) += vfio-platform-calxedaxgmac.o
+obj-$(CONFIG_VFIO_PLATFORM_QCOMHIDMA_RESET) += vfio-platform-qcomhidma.o
 obj-$(CONFIG_VFIO_PLATFORM_AMDXGBE_RESET) += vfio-platform-amdxgbe.o
diff --git a/drivers/vfio/platform/reset/vfio_platform_qcomhidma.c b/drivers/vfio/platform/reset/vfio_platform_qcomhidma.c
new file mode 100644
index 0000000..4e7a59c
--- /dev/null
+++ b/drivers/vfio/platform/reset/vfio_platform_qcomhidma.c
@@ -0,0 +1,99 @@
+/*
+ * Qualcomm Technologies HIDMA VFIO Reset Driver
+ *
+ * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/device.h>
+#include <linux/iopoll.h>
+
+#include "vfio_platform_private.h"
+
+#define TRCA_CTRLSTS_OFFSET		0x000
+#define EVCA_CTRLSTS_OFFSET		0x000
+
+#define CH_CONTROL_MASK		GENMASK(7, 0)
+#define CH_STATE_MASK			GENMASK(7, 0)
+#define CH_STATE_BIT_POS		0x8
+
+#define HIDMA_CH_STATE(val)	\
+	((val >> CH_STATE_BIT_POS) & CH_STATE_MASK)
+
+#define EVCA_IRQ_EN_OFFSET		0x110
+
+#define CH_RESET			9
+#define CH_DISABLED			0
+
+int vfio_platform_qcomhidma_reset(struct vfio_platform_device *vdev)
+{
+	struct vfio_platform_region trreg;
+	struct vfio_platform_region evreg;
+	u32 val;
+	int ret;
+
+	if (vdev->num_regions != 2)
+		return -ENODEV;
+
+	trreg = vdev->regions[0];
+	if (!trreg.ioaddr) {
+		trreg.ioaddr =
+			ioremap_nocache(trreg.addr, trreg.size);
+		if (!trreg.ioaddr)
+			return -ENOMEM;
+	}
+
+	evreg = vdev->regions[1];
+	if (!evreg.ioaddr) {
+		evreg.ioaddr =
+			ioremap_nocache(evreg.addr, evreg.size);
+		if (!evreg.ioaddr)
+			return -ENOMEM;
+	}
+
+	/* disable IRQ */
+	writel(0, evreg.ioaddr + EVCA_IRQ_EN_OFFSET);
+
+	/* reset both transfer and event channels */
+	val = readl(trreg.ioaddr + TRCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_RESET << 16;
+	writel(val, trreg.ioaddr + TRCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(trreg.ioaddr + TRCA_CTRLSTS_OFFSET, val,
+				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
+				 10000);
+	if (ret)
+		return ret;
+
+	val = readl(evreg.ioaddr + EVCA_CTRLSTS_OFFSET);
+	val &= ~(CH_CONTROL_MASK << 16);
+	val |= CH_RESET << 16;
+	writel(val, evreg.ioaddr + EVCA_CTRLSTS_OFFSET);
+
+	ret = readl_poll_timeout(evreg.ioaddr + EVCA_CTRLSTS_OFFSET, val,
+				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
+				 10000);
+	if (ret)
+		return ret;
+
+	pr_info("HIDMA channel reset\n");
+	return 0;
+}
+module_vfio_reset_handler("qcom,hidma", NULL,
+			  vfio_platform_qcomhidma_reset);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Reset support for Qualcomm Technologies HIDMA device");
-- 
1.8.2.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-01-29 22:35   ` Sinan Kaya
  (?)
@ 2016-01-30 12:52     ` kbuild test robot
  -1 siblings, 0 replies; 69+ messages in thread
From: kbuild test robot @ 2016-01-30 12:52 UTC (permalink / raw)
  Cc: kbuild-all, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm, shankerd, vikrams, eric.auger,
	agross, arnd, linux-arm-msm, linux-arm-kernel, Sinan Kaya,
	linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2245 bytes --]

Hi Sinan,

[auto build test ERROR on vfio/next]
[also build test ERROR on v4.5-rc1 next-20160129]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Sinan-Kaya/dma-add-Qualcomm-Technologies-HIDMA-driver/20160130-064551
base:   https://github.com/awilliam/linux-vfio.git next
config: arm64-allmodconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

   drivers/vfio/platform/vfio_platform_common.c: In function 'vfio_platform_probe_acpi':
>> drivers/vfio/platform/vfio_platform_common.c:558:9: error: invalid initializer
     struct acpi_device adev = ACPI_COMPANION(dev);
            ^
>> drivers/vfio/platform/vfio_platform_common.c:560:6: error: wrong type argument to unary exclamation mark
     if (!adev)
         ^
>> drivers/vfio/platform/vfio_platform_common.c:563:18: error: incompatible type for argument 1 of 'acpi_device_hid'
     vdev->acpihid = acpi_device_hid(adev);
                     ^
   In file included from include/linux/acpi.h:41:0,
                    from drivers/vfio/platform/vfio_platform_common.c:16:
   include/acpi/acpi_bus.h:253:13: note: expected 'struct acpi_device *' but argument is of type 'struct acpi_device'
    const char *acpi_device_hid(struct acpi_device *device);
                ^

vim +558 drivers/vfio/platform/vfio_platform_common.c

   552	};
   553	
   554	#ifdef CONFIG_ACPI
   555	int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
   556				     struct device *dev)
   557	{
 > 558		struct acpi_device adev = ACPI_COMPANION(dev);
   559	
 > 560		if (!adev)
   561			return -EINVAL;
   562	
 > 563		vdev->acpihid = acpi_device_hid(adev);
   564		if (!vdev->acpihid) {
   565			pr_err("VFIO: cannot find ACPI HID for %s\n",
   566			       vdev->name);
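
All three errors share one root cause: ACPI_COMPANION() returns a pointer
(struct acpi_device *), but the patch declares adev as a plain struct. A
minimal corrected sketch of the helper, keeping the field names used in the
patch, would be:

#ifdef CONFIG_ACPI
int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
			     struct device *dev)
{
	/* ACPI_COMPANION() yields a struct acpi_device *, which may be NULL */
	struct acpi_device *adev = ACPI_COMPANION(dev);

	if (!adev)
		return -EINVAL;

	/* acpi_device_hid() also expects a pointer, per acpi_bus.h above */
	vdev->acpihid = acpi_device_hid(adev);
	if (!vdev->acpihid) {
		pr_err("VFIO: cannot find ACPI HID for %s\n", vdev->name);
		return -EINVAL;
	}
	return 0;
}
#endif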

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 46685 bytes --]

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-01-29 22:35   ` Sinan Kaya
  (?)
@ 2016-01-31 13:53     ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-01-31 13:53 UTC (permalink / raw)
  To: marc.zyngier, mark.rutland, timur, a.motakis, alex.williamson
  Cc: dmaengine, devicetree, cov, vinod.koul, jcm, shankerd, vikrams,
	eric.auger, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel

Hi Eric, Alex, Antonios,

On 1/29/2016 5:35 PM, Sinan Kaya wrote:
> The code is using the compatible DT string to associate a reset driver with
> the actual device itself. The compatible string does not exist on ACPI
> based systems. HID is the unique identifier for a device driver instead.
> The change allows a driver to register with DT compatible string or ACPI
> HID and then match the object with one of these conditions.
> 
> For ACPI systems, ACPI HID needs to match and compat in the registered
> reset
> driver needs to match for ACPI reset driver loading to work.
> 
> For OF based systems, DT compatible string needs to match and compat in the
> registered reset driver needs to match for DT reset driver loading to work.
> 
> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>  .../vfio/platform/reset/vfio_platform_amdxgbe.c    |  3 +-
>  .../platform/reset/vfio_platform_calxedaxgmac.c    |  3 +-
>  drivers/vfio/platform/vfio_platform_common.c       | 80 +++++++++++++++++++---
>  drivers/vfio/platform/vfio_platform_private.h      | 41 ++++++-----
>  4 files changed, 96 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
> index d4030d0..cc5b4fa 100644
> --- a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
> +++ b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
> @@ -119,7 +119,8 @@ int vfio_platform_amdxgbe_reset(struct vfio_platform_device *vdev)
>  	return 0;
>  }
>  
> -module_vfio_reset_handler("amd,xgbe-seattle-v1a", vfio_platform_amdxgbe_reset);
> +module_vfio_reset_handler("amd,xgbe-seattle-v1a", NULL,
> +			  vfio_platform_amdxgbe_reset);
>  
>  MODULE_VERSION("0.1");
>  MODULE_LICENSE("GPL v2");
> diff --git a/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c b/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
> index e3d3d94..0e57529 100644
> --- a/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
> +++ b/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
> @@ -77,7 +77,8 @@ int vfio_platform_calxedaxgmac_reset(struct vfio_platform_device *vdev)
>  	return 0;
>  }
>  
> -module_vfio_reset_handler("calxeda,hb-xgmac", vfio_platform_calxedaxgmac_reset);
> +module_vfio_reset_handler("calxeda,hb-xgmac", NULL,
> +			  vfio_platform_calxedaxgmac_reset);
>  
>  MODULE_VERSION(DRIVER_VERSION);
>  MODULE_LICENSE("GPL v2");
> diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
> index 418cdd9..6927e05 100644
> --- a/drivers/vfio/platform/vfio_platform_common.c
> +++ b/drivers/vfio/platform/vfio_platform_common.c
> @@ -13,6 +13,7 @@
>   */
>  
>  #include <linux/device.h>
> +#include <linux/acpi.h>
>  #include <linux/iommu.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> @@ -31,14 +32,22 @@ static LIST_HEAD(reset_list);
>  static DEFINE_MUTEX(driver_lock);
>  
>  static vfio_platform_reset_fn_t vfio_platform_lookup_reset(const char *compat,
> -					struct module **module)
> +				  const char *acpihid, struct module **module)
>  {
>  	struct vfio_platform_reset_node *iter;
>  	vfio_platform_reset_fn_t reset_fn = NULL;
>  
>  	mutex_lock(&driver_lock);
>  	list_for_each_entry(iter, &reset_list, link) {
> -		if (!strcmp(iter->compat, compat) &&
> +		if (acpihid && iter->acpihid &&
> +		    !strcmp(iter->acpihid, acpihid) &&
> +			try_module_get(iter->owner)) {
> +			*module = iter->owner;
> +			reset_fn = iter->reset;
> +			break;
> +		}
> +		if (compat && iter->compat &&
> +		    !strcmp(iter->compat, compat) &&
>  			try_module_get(iter->owner)) {
>  			*module = iter->owner;
>  			reset_fn = iter->reset;
> @@ -51,11 +60,12 @@ static vfio_platform_reset_fn_t vfio_platform_lookup_reset(const char *compat,
>  
>  static void vfio_platform_get_reset(struct vfio_platform_device *vdev)
>  {
> -	vdev->reset = vfio_platform_lookup_reset(vdev->compat,
> -						&vdev->reset_module);
> +	vdev->reset = vfio_platform_lookup_reset(vdev->compat, vdev->acpihid,
> +						 &vdev->reset_module);
>  	if (!vdev->reset) {
>  		request_module("vfio-reset:%s", vdev->compat);
>  		vdev->reset = vfio_platform_lookup_reset(vdev->compat,
> +							 vdev->acpihid,
>  							 &vdev->reset_module);
>  	}
>  }
> @@ -541,6 +551,46 @@ static const struct vfio_device_ops vfio_platform_ops = {
>  	.mmap		= vfio_platform_mmap,
>  };
>  
> +#ifdef CONFIG_ACPI
> +int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
> +			     struct device *dev)
> +{
> +	struct acpi_device adev = ACPI_COMPANION(dev);
> +
> +	if (!adev)
> +		return -EINVAL;
> +
> +	vdev->acpihid = acpi_device_hid(adev);
> +	if (!vdev->acpihid) {
> +		pr_err("VFIO: cannot find ACPI HID for %s\n",
> +		       vdev->name);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +#else
> +int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
> +			     struct device *dev)
> +{
> +	return -EINVAL;
> +}
> +#endif
> +
> +int vfio_platform_probe_of(struct vfio_platform_device *vdev,
> +			   struct device *dev)
> +{
> +	int ret;
> +
> +	ret = device_property_read_string(dev, "compatible",
> +					  &vdev->compat);
> +	if (ret) {
> +		pr_err("VFIO: cannot retrieve compat for %s\n",
> +			vdev->name);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
>  int vfio_platform_probe_common(struct vfio_platform_device *vdev,
>  			       struct device *dev)
>  {
> @@ -550,14 +600,14 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
>  	if (!vdev)
>  		return -EINVAL;
>  
> -	ret = device_property_read_string(dev, "compatible", &vdev->compat);
> -	if (ret) {
> -		pr_err("VFIO: cannot retrieve compat for %s\n", vdev->name);
> -		return -EINVAL;
> -	}
> +	ret = vfio_platform_probe_acpi(vdev, dev);
> +	if (ret)
> +		ret = vfio_platform_probe_of(vdev, dev);
>  
> -	vdev->device = dev;
> +	if (ret)
> +		return ret;
>  
> +	vdev->device = dev;
>  	group = iommu_group_get(dev);
>  	if (!group) {
>  		pr_err("VFIO: No IOMMU group for device %s\n", vdev->name);
> @@ -602,13 +652,21 @@ void __vfio_platform_register_reset(struct vfio_platform_reset_node *node)
>  EXPORT_SYMBOL_GPL(__vfio_platform_register_reset);
>  
>  void vfio_platform_unregister_reset(const char *compat,
> +				    const char *acpihid,
>  				    vfio_platform_reset_fn_t fn)
>  {
>  	struct vfio_platform_reset_node *iter, *temp;
>  
>  	mutex_lock(&driver_lock);
>  	list_for_each_entry_safe(iter, temp, &reset_list, link) {
> -		if (!strcmp(iter->compat, compat) && (iter->reset == fn)) {
> +		if (acpihid && iter->acpihid &&
> +		    !strcmp(iter->acpihid, acpihid) && (iter->reset == fn)) {
> +			list_del(&iter->link);
> +			break;
> +		}
> +
> +		if (compat && iter->compat &&
> +		    !strcmp(iter->compat, compat) && (iter->reset == fn)) {
>  			list_del(&iter->link);
>  			break;
>  		}
> diff --git a/drivers/vfio/platform/vfio_platform_private.h b/drivers/vfio/platform/vfio_platform_private.h
> index 42816dd..32feba3 100644
> --- a/drivers/vfio/platform/vfio_platform_private.h
> +++ b/drivers/vfio/platform/vfio_platform_private.h
> @@ -58,6 +58,7 @@ struct vfio_platform_device {
>  	struct mutex			igate;
>  	struct module			*parent_module;
>  	const char			*compat;
> +	const char			*acpihid;
>  	struct module			*reset_module;
>  	struct device			*device;
>  
> @@ -79,6 +80,7 @@ typedef int (*vfio_platform_reset_fn_t)(struct vfio_platform_device *vdev);
>  struct vfio_platform_reset_node {
>  	struct list_head link;
>  	char *compat;
> +	char *acpihid;
>  	struct module *owner;
>  	vfio_platform_reset_fn_t reset;
>  };
> @@ -98,27 +100,30 @@ extern int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
>  
>  extern void __vfio_platform_register_reset(struct vfio_platform_reset_node *n);
>  extern void vfio_platform_unregister_reset(const char *compat,
> +					   const char *acpihid,
>  					   vfio_platform_reset_fn_t fn);
> -#define vfio_platform_register_reset(__compat, __reset)		\
> -static struct vfio_platform_reset_node __reset ## _node = {	\
> -	.owner = THIS_MODULE,					\
> -	.compat = __compat,					\
> -	.reset = __reset,					\
> -};								\
> +
> +#define vfio_platform_register_reset(__compat, __acpihid, __reset)	\
> +static struct vfio_platform_reset_node __reset ## _node = {		\
> +	.owner = THIS_MODULE,						\
> +	.compat = __compat,						\
> +	.acpihid = __acpihid,						\
> +	.reset = __reset,						\
> +};									\
>  __vfio_platform_register_reset(&__reset ## _node)
>  
> -#define module_vfio_reset_handler(compat, reset)		\
> -MODULE_ALIAS("vfio-reset:" compat);				\
> -static int __init reset ## _module_init(void)			\
> -{								\
> -	vfio_platform_register_reset(compat, reset);		\
> -	return 0;						\
> -};								\
> -static void __exit reset ## _module_exit(void)			\
> -{								\
> -	vfio_platform_unregister_reset(compat, reset);		\
> -};								\
> -module_init(reset ## _module_init);				\
> +#define module_vfio_reset_handler(compat, acpihid, reset)		\
> +MODULE_ALIAS("vfio-reset:" compat);					\
> +static int __init reset ## _module_init(void)				\
> +{									\
> +	vfio_platform_register_reset(compat, acpihid, reset);		\
> +	return 0;							\
> +};									\
> +static void __exit reset ## _module_exit(void)				\
> +{									\
> +	vfio_platform_unregister_reset(compat, acpihid, reset);		\
> +};									\
> +module_init(reset ## _module_init);					\
>  module_exit(reset ## _module_exit)
>  
>  #endif /* VFIO_PLATFORM_PRIVATE_H */
> 

If we put aside the forgotten pointer in the acpi_device assignment (I had a merge
conflict and screwed it up at the last moment), can you review the following two
patches?

https://lkml.org/lkml/2016/1/29/679
https://lkml.org/lkml/2016/1/29/677

These patches seem to be in your area. I was relying on the maintainer list to pull
you into the review, but for some reason your emails didn't show up.

Sinan



-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-01-29 22:35   ` Sinan Kaya
@ 2016-02-01 15:14     ` Rob Herring
  -1 siblings, 0 replies; 69+ messages in thread
From: Rob Herring @ 2016-02-01 15:14 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm, shankerd, vikrams, eric.auger, agross, arnd,
	linux-arm-msm, linux-arm-kernel, linux-kernel

On Fri, Jan 29, 2016 at 05:35:11PM -0500, Sinan Kaya wrote:
> Removing the flexibility to choose the event channel as there is no real
> use case right now. We have been using the values in ACPI that match the HW
> defaults. OS is reading the event-channel from the HW register now.
> 
> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>  .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  3 --
>  drivers/dma/qcom/hidma.c                           | 39 +---------------------
>  2 files changed, 1 insertion(+), 41 deletions(-)

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-01-29 22:35   ` Sinan Kaya
@ 2016-02-01 15:35     ` Mark Rutland
  -1 siblings, 0 replies; 69+ messages in thread
From: Mark Rutland @ 2016-02-01 15:35 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

On Fri, Jan 29, 2016 at 05:35:11PM -0500, Sinan Kaya wrote:
> Removing the flexibility to choose the event channel as there is no real
> use case right now. We have been using the values in ACPI that match the HW
> defaults. OS is reading the event-channel from the HW register now.

Fold this into the patches adding the binding and the driver.

There is no reason to add this then remove it within the same series.

Mark.

> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>  .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  3 --
>  drivers/dma/qcom/hidma.c                           | 39 +---------------------
>  2 files changed, 1 insertion(+), 41 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
> index e3677a5..fd5618b 100644
> --- a/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
> +++ b/Documentation/devicetree/bindings/dma/qcom_hidma_mgmt.txt
> @@ -51,7 +51,6 @@ Required properties:
>  - reg: Addresses for the transfer and event channel
>  - interrupts: Should contain the event interrupt
>  - desc-count: Number of asynchronous requests this channel can handle
> -- channel-index: The HW event channel completions will be delivered.
>  - iommus: required a iommu node
>  
>  Example:
> @@ -75,7 +74,6 @@ Hypervisor OS configuration:
>  			interrupts = <0 389 0>;
>  			desc-count = <10>;
>  			iommus = <&system_mmu>;
> -			channel-index = <4>;
>  		};
>  	};
>  
> @@ -87,6 +85,5 @@ Guest OS configuration:
>  		      <0 0x5c0b0000 0x0 0x1000>;
>  		interrupts = <0 389 0>;
>  		desc-count = <10>;
> -		channel-index = <4>;
>  		iommus = <&system_mmu>;
>  	};
> diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
> index ac20bdb..7180367 100644
> --- a/drivers/dma/qcom/hidma.c
> +++ b/drivers/dma/qcom/hidma.c
> @@ -101,26 +101,6 @@ static unsigned int nr_desc_prm;
>  module_param(nr_desc_prm, uint, 0644);
>  MODULE_PARM_DESC(nr_desc_prm, "number of descriptors (default: 0)");
>  
> -#define HIDMA_MAX_CHANNELS	64
> -static int channel_idx[HIDMA_MAX_CHANNELS] = {
> -	[0 ... (HIDMA_MAX_CHANNELS - 1)] = -1
> -};
> -
> -/*
> - * Each DMA channel is associated with an event channel for interrupt
> - * delivery. The event channel index usually comes from the firmware through
> - * ACPI/DT. When a HIDMA channel is executed in the guest machine context (QEMU)
> - * the device tree gets auto-generated based on the memory and IRQ resources
> - * this driver uses on the host machine. Any device specific paraemeter such as
> - * channel-index gets ignored by the QEMU.
> - * We are using this command line parameter to pass the event channel index to
> - * the guest machine.
> - */
> -static unsigned int num_channel_idx;
> -module_param_array_named(channel_idx, channel_idx, int, &num_channel_idx,
> -			 0644);
> -MODULE_PARM_DESC(channel_idx, "channel index array for the notifications");
> -static atomic_t channel_ref_count;
>  
>  /* process completed descriptors */
>  static void hidma_process_completed(struct hidma_chan *mchan)
> @@ -592,7 +572,6 @@ static int hidma_probe(struct platform_device *pdev)
>  	struct resource *trca_resource;
>  	struct resource *evca_resource;
>  	int chirq;
> -	int current_channel_index = atomic_read(&channel_ref_count);
>  	void __iomem *evca;
>  	void __iomem *trca;
>  	int rc;
> @@ -668,22 +647,7 @@ static int hidma_probe(struct platform_device *pdev)
>  		goto dmafree;
>  	}
>  
> -	if (current_channel_index > HIDMA_MAX_CHANNELS) {
> -		rc = -EINVAL;
> -		goto dmafree;
> -	}
> -
> -	dmadev->chidx = -1;
> -	device_property_read_u32(&pdev->dev, "channel-index", &dmadev->chidx);
> -
> -	/* kernel command line override for the guest machine */
> -	if (channel_idx[current_channel_index] != -1)
> -		dmadev->chidx = channel_idx[current_channel_index];
> -
> -	if (dmadev->chidx == -1) {
> -		rc = -EINVAL;
> -		goto dmafree;
> -	}
> +	dmadev->chidx = readl(dmadev->dev_trca + 0x28);
>  
>  	/* Set DMA mask to 64 bits. */
>  	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> @@ -724,7 +688,6 @@ static int hidma_probe(struct platform_device *pdev)
>  	platform_set_drvdata(pdev, dmadev);
>  	pm_runtime_mark_last_busy(dmadev->ddev.dev);
>  	pm_runtime_put_autosuspend(dmadev->ddev.dev);
> -	atomic_inc(&channel_ref_count);
>  	return 0;
>  
>  uninit:
> -- 
> 1.8.2.1
> 
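
In short, the hunk above drops the firmware-provided channel-index (and its
module-parameter override) in favour of a single register read at probe time.
A condensed sketch (HIDMA_TRCA_CHID_OFFSET is a made-up name here; the patch
itself uses the raw 0x28 offset):

	#define HIDMA_TRCA_CHID_OFFSET	0x28	/* hypothetical name for the offset */

	/* the channel id now comes straight from the transfer channel registers */
	static u32 hidma_read_chid(void __iomem *trca)
	{
		return readl(trca + HIDMA_TRCA_CHID_OFFSET);
	}

	/* in hidma_probe(): dmadev->chidx = hidma_read_chid(dmadev->dev_trca); */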

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 10/10] vfio, platform: add QTI HIDMA reset driver
  2016-01-29 22:35   ` Sinan Kaya
  (?)
@ 2016-02-01 15:41     ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2016-02-01 15:41 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm
  Cc: vikrams, arnd, linux-arm-msm, linux-kernel, shankerd, agross,
	linux-arm-kernel

Hi Sinan,
On 01/29/2016 11:35 PM, Sinan Kaya wrote:
> In situations where the userspace driver is stopped abnormally and the
> VFIO platform device is released, the assigned HW device currently is
> left running. As a consequence the HW device might continue issuing IRQs
> and performing DMA accesses.
> 
> This patch is implementing a reset driver for HIDMA platform driver.
> This gets called by the VFIO platform reset interface.
> 
> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>  drivers/vfio/platform/reset/Kconfig                |  9 ++
>  drivers/vfio/platform/reset/Makefile               |  2 +
>  .../vfio/platform/reset/vfio_platform_qcomhidma.c  | 99 ++++++++++++++++++++++
>  3 files changed, 110 insertions(+)
>  create mode 100644 drivers/vfio/platform/reset/vfio_platform_qcomhidma.c
> 
> diff --git a/drivers/vfio/platform/reset/Kconfig b/drivers/vfio/platform/reset/Kconfig
> index 70cccc5..d02b3b5 100644
> --- a/drivers/vfio/platform/reset/Kconfig
> +++ b/drivers/vfio/platform/reset/Kconfig
> @@ -13,3 +13,12 @@ config VFIO_PLATFORM_AMDXGBE_RESET
>  	  Enables the VFIO platform driver to handle reset for AMD XGBE
>  
>  	  If you don't know what to do here, say N.
> +
> +config VFIO_PLATFORM_QCOMHIDMA_RESET
> +	tristate "VFIO support for Qualcomm Technologies HIDMA reset"
> +	depends on VFIO_PLATFORM
> +	help
> +	  Enables the VFIO platform driver to handle reset for Qualcomm Technologies
> +	  HIDMA Channel.
> +
> +	  If you don't know what to do here, say N.
> diff --git a/drivers/vfio/platform/reset/Makefile b/drivers/vfio/platform/reset/Makefile
> index 93f4e23..ec7748a 100644
> --- a/drivers/vfio/platform/reset/Makefile
> +++ b/drivers/vfio/platform/reset/Makefile
> @@ -1,7 +1,9 @@
>  vfio-platform-calxedaxgmac-y := vfio_platform_calxedaxgmac.o
>  vfio-platform-amdxgbe-y := vfio_platform_amdxgbe.o
> +vfio-platform-qcomhidma-y := vfio_platform_qcomhidma.o
>  
>  ccflags-y += -Idrivers/vfio/platform
>  
>  obj-$(CONFIG_VFIO_PLATFORM_CALXEDAXGMAC_RESET) += vfio-platform-calxedaxgmac.o
> +obj-$(CONFIG_VFIO_PLATFORM_QCOMHIDMA_RESET) += vfio-platform-qcomhidma.o
>  obj-$(CONFIG_VFIO_PLATFORM_AMDXGBE_RESET) += vfio-platform-amdxgbe.o
> diff --git a/drivers/vfio/platform/reset/vfio_platform_qcomhidma.c b/drivers/vfio/platform/reset/vfio_platform_qcomhidma.c
> new file mode 100644
> index 0000000..4e7a59c
> --- /dev/null
> +++ b/drivers/vfio/platform/reset/vfio_platform_qcomhidma.c
> @@ -0,0 +1,99 @@
> +/*
> + * Qualcomm Technologies HIDMA VFIO Reset Driver
> + *
> + * Copyright (c) 2016, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/init.h>
> +#include <linux/io.h>
> +#include <linux/device.h>
> +#include <linux/iopoll.h>
> +
> +#include "vfio_platform_private.h"
> +
> +#define TRCA_CTRLSTS_OFFSET		0x000
> +#define EVCA_CTRLSTS_OFFSET		0x000
> +
> +#define CH_CONTROL_MASK		GENMASK(7, 0)
> +#define CH_STATE_MASK			GENMASK(7, 0)
> +#define CH_STATE_BIT_POS		0x8
> +
> +#define HIDMA_CH_STATE(val)	\
> +	((val >> CH_STATE_BIT_POS) & CH_STATE_MASK)
> +
> +#define EVCA_IRQ_EN_OFFSET		0x110
> +
> +#define CH_RESET			9
> +#define CH_DISABLED			0
> +
> +int vfio_platform_qcomhidma_reset(struct vfio_platform_device *vdev)
> +{
> +	struct vfio_platform_region trreg;
> +	struct vfio_platform_region evreg;
> +	u32 val;
> +	int ret;
> +
> +	if (vdev->num_regions != 2)
> +		return -ENODEV;
> +
> +	trreg = vdev->regions[0];
> +	if (!trreg.ioaddr) {
> +		trreg.ioaddr =
> +			ioremap_nocache(trreg.addr, trreg.size);
this is going to leak. See "vfio: platform: reset: calxedaxgmac: fix
ioaddr leak".
> +		if (!trreg.ioaddr)
> +			return -ENOMEM;
> +	}
> +
> +	evreg = vdev->regions[1];
> +	if (!evreg.ioaddr) {
> +		evreg.ioaddr =
> +			ioremap_nocache(evreg.addr, evreg.size);
same here
> +		if (!evreg.ioaddr)
> +			return -ENOMEM;
> +	}
> +
I understood the device is a kind of SR-IOV platform device. When the VF
gets reset, are there any interactions with the management driver? Is it
handled by HW?

Best Regards

Eric
> +	/* disable IRQ */
> +	writel(0, evreg.ioaddr + EVCA_IRQ_EN_OFFSET);
> +
> +	/* reset both transfer and event channels */
> +	val = readl(trreg.ioaddr + TRCA_CTRLSTS_OFFSET);
> +	val &= ~(CH_CONTROL_MASK << 16);
> +	val |= CH_RESET << 16;
> +	writel(val, trreg.ioaddr + TRCA_CTRLSTS_OFFSET);
> +
> +	ret = readl_poll_timeout(trreg.ioaddr + TRCA_CTRLSTS_OFFSET, val,
> +				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
> +				 10000);
> +	if (ret)
> +		return ret;
> +
> +	val = readl(evreg.ioaddr + EVCA_CTRLSTS_OFFSET);
> +	val &= ~(CH_CONTROL_MASK << 16);
> +	val |= CH_RESET << 16;
> +	writel(val, evreg.ioaddr + EVCA_CTRLSTS_OFFSET);
> +
> +	ret = readl_poll_timeout(evreg.ioaddr + EVCA_CTRLSTS_OFFSET, val,
> +				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
> +				 10000);
> +	if (ret)
> +		return ret;
> +
> +	pr_info("HIDMA channel reset\n");
> +	return 0;
> +}
> +module_vfio_reset_handler("qcom,hidma", NULL,
> +			  vfio_platform_qcomhidma_reset);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("Reset support for Qualcomm Technologies HIDMA device");
> 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-02-01 15:35     ` Mark Rutland
@ 2016-02-01 15:46       ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-01 15:46 UTC (permalink / raw)
  To: Mark Rutland
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

On 2/1/2016 10:35 AM, Mark Rutland wrote:
> Fold this in to the patches adding the binding and the driver.
> 
> There is no reason to add this then remove it within the same series.
> 
> Mark.

Will do. I was trying to keep changes to a minimum on the other patches to
help the reviewers.

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 10/10] vfio, platform: add QTI HIDMA reset driver
  2016-02-01 15:41     ` Eric Auger
@ 2016-02-01 15:51       ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-01 15:51 UTC (permalink / raw)
  To: Eric Auger, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel

On 2/1/2016 10:41 AM, Eric Auger wrote:
> Hi Sinan,

>> +
>> +	trreg = vdev->regions[0];
>> +	if (!trreg.ioaddr) {
>> +		trreg.ioaddr =
>> +			ioremap_nocache(trreg.addr, trreg.size);
> this is going to leak. See "vfio: platform: reset: calxedaxgmac: fix
> ioaddr leak".

Thanks, I was following what other drivers were doing and got hit by the same problem :)
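
For reference, the fix Eric points at keeps the mapping in the region array
itself rather than in a local copy of the struct, so the VFIO platform core can
unmap it later. A minimal sketch of that pattern, using the same fields as the
quoted code (not the exact calxedaxgmac diff):

	/*
	 * map into vdev->regions[0] directly; a local copy of the struct would
	 * drop the ioremapped address on return and leak the mapping
	 */
	if (!vdev->regions[0].ioaddr) {
		vdev->regions[0].ioaddr =
			ioremap_nocache(vdev->regions[0].addr,
					vdev->regions[0].size);
		if (!vdev->regions[0].ioaddr)
			return -ENOMEM;
	}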

>> +		if (!trreg.ioaddr)
>> +			return -ENOMEM;
>> +	}
>> +
>> +	evreg = vdev->regions[1];
>> +	if (!evreg.ioaddr) {
>> +		evreg.ioaddr =
>> +			ioremap_nocache(evreg.addr, evreg.size);
> same here
I'll take care of it.

>> +		if (!evreg.ioaddr)
>> +			return -ENOMEM;
>> +	}
>> +
> I understood the device is a kind of SR-IOV platform device. When the VF
> gets reset are there any interactions with the management driver? Is it
> handled by HW?

There is no communication between the management interface and the DMA channels.
The management interface is just a runtime platform resource allocator.

"if you pay more, you get more DMA bandwidth. Otherwise, you get to share the channel
with other users"

A DMA channel can be reset independently of the other DMA channels and of the
management HW. This is a HW feature.

> 
> Best Regards
> 
> Eric
>> +	/* disable IRQ */
>> +	writel(0, evreg.ioaddr + EVCA_IRQ_EN_OFFSET);
>> +
>> +	/* reset both transfer and event channels */
>> +	val = readl(trreg.ioaddr + TRCA_CTRLSTS_OFFSET);
>> +	val &= ~(CH_CONTROL_MASK << 16);
>> +	val |= CH_RESET << 16;
>> +	writel(val, trreg.ioaddr + TRCA_CTRLSTS_OFFSET);
>> +
>> +	ret = readl_poll_timeout(trreg.ioaddr + TRCA_CTRLSTS_OFFSET, val,
>> +				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
>> +				 10000);
>> +	if (ret)
>> +		return ret;
>> +
>> +	val = readl(evreg.ioaddr + EVCA_CTRLSTS_OFFSET);
>> +	val &= ~(CH_CONTROL_MASK << 16);
>> +	val |= CH_RESET << 16;
>> +	writel(val, evreg.ioaddr + EVCA_CTRLSTS_OFFSET);
>> +
>> +	ret = readl_poll_timeout(evreg.ioaddr + EVCA_CTRLSTS_OFFSET, val,
>> +				 HIDMA_CH_STATE(val) == CH_DISABLED, 1000,
>> +				 10000);
>> +	if (ret)
>> +		return ret;
>> +
>> +	pr_info("HIDMA channel reset\n");
>> +	return 0;
>> +}
>> +module_vfio_reset_handler("qcom,hidma", NULL,
>> +			  vfio_platform_qcomhidma_reset);
>> +
>> +MODULE_LICENSE("GPL v2");
>> +MODULE_DESCRIPTION("Reset support for Qualcomm Technologies HIDMA device");
>>
> 


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-01-29 22:35   ` Sinan Kaya
@ 2016-02-01 16:08     ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2016-02-01 16:08 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel, Baptiste Reynal

Hi Sinan,
On 01/29/2016 11:35 PM, Sinan Kaya wrote:
> The code is using the compatible DT string to associate a reset driver with
> the actual device itself. The compatible string does not exist on ACPI
> based systems. HID is the unique identifier for a device driver instead.
> The change allows a driver to register with DT compatible string or ACPI
> HID and then match the object with one of these conditions.
> 
> For ACPI systems, ACPI HID needs to match and compat in the registered
> reset
> driver needs to match for ACPI reset driver loading to work.

I don't really get the sentence. For ACPI systems, a registered reset
function is selected if its associated ACPI HID matches the device's ACPI HID?
> 
> For OF based systems, DT compatible string needs to match and compat in the
> registered reset driver needs to match for DT reset driver loading to work.
same here

I added Baptiste, who is the vfio platform driver sub-system maintainer.

On my side I tested with the OF-based AMD XGBE and did not observe any regression.

Best Regards

Eric
> 
> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
> ---
>  .../vfio/platform/reset/vfio_platform_amdxgbe.c    |  3 +-
>  .../platform/reset/vfio_platform_calxedaxgmac.c    |  3 +-
>  drivers/vfio/platform/vfio_platform_common.c       | 80 +++++++++++++++++++---
>  drivers/vfio/platform/vfio_platform_private.h      | 41 ++++++-----
>  4 files changed, 96 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
> index d4030d0..cc5b4fa 100644
> --- a/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
> +++ b/drivers/vfio/platform/reset/vfio_platform_amdxgbe.c
> @@ -119,7 +119,8 @@ int vfio_platform_amdxgbe_reset(struct vfio_platform_device *vdev)
>  	return 0;
>  }
>  
> -module_vfio_reset_handler("amd,xgbe-seattle-v1a", vfio_platform_amdxgbe_reset);
> +module_vfio_reset_handler("amd,xgbe-seattle-v1a", NULL,
> +			  vfio_platform_amdxgbe_reset);
>  
>  MODULE_VERSION("0.1");
>  MODULE_LICENSE("GPL v2");
> diff --git a/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c b/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
> index e3d3d94..0e57529 100644
> --- a/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
> +++ b/drivers/vfio/platform/reset/vfio_platform_calxedaxgmac.c
> @@ -77,7 +77,8 @@ int vfio_platform_calxedaxgmac_reset(struct vfio_platform_device *vdev)
>  	return 0;
>  }
>  
> -module_vfio_reset_handler("calxeda,hb-xgmac", vfio_platform_calxedaxgmac_reset);
> +module_vfio_reset_handler("calxeda,hb-xgmac", NULL,
> +			  vfio_platform_calxedaxgmac_reset);
>  
>  MODULE_VERSION(DRIVER_VERSION);
>  MODULE_LICENSE("GPL v2");
> diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
> index 418cdd9..6927e05 100644
> --- a/drivers/vfio/platform/vfio_platform_common.c
> +++ b/drivers/vfio/platform/vfio_platform_common.c
> @@ -13,6 +13,7 @@
>   */
>  
>  #include <linux/device.h>
> +#include <linux/acpi.h>
>  #include <linux/iommu.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> @@ -31,14 +32,22 @@ static LIST_HEAD(reset_list);
>  static DEFINE_MUTEX(driver_lock);
>  
>  static vfio_platform_reset_fn_t vfio_platform_lookup_reset(const char *compat,
> -					struct module **module)
> +				  const char *acpihid, struct module **module)
>  {
>  	struct vfio_platform_reset_node *iter;
>  	vfio_platform_reset_fn_t reset_fn = NULL;
>  
>  	mutex_lock(&driver_lock);
>  	list_for_each_entry(iter, &reset_list, link) {
> -		if (!strcmp(iter->compat, compat) &&
> +		if (acpihid && iter->acpihid &&
> +		    !strcmp(iter->acpihid, acpihid) &&
> +			try_module_get(iter->owner)) {
> +			*module = iter->owner;
> +			reset_fn = iter->reset;
> +			break;
> +		}
> +		if (compat && iter->compat &&
> +		    !strcmp(iter->compat, compat) &&
>  			try_module_get(iter->owner)) {
>  			*module = iter->owner;
>  			reset_fn = iter->reset;
> @@ -51,11 +60,12 @@ static vfio_platform_reset_fn_t vfio_platform_lookup_reset(const char *compat,
>  
>  static void vfio_platform_get_reset(struct vfio_platform_device *vdev)
>  {
> -	vdev->reset = vfio_platform_lookup_reset(vdev->compat,
> -						&vdev->reset_module);
> +	vdev->reset = vfio_platform_lookup_reset(vdev->compat, vdev->acpihid,
> +						 &vdev->reset_module);
>  	if (!vdev->reset) {
>  		request_module("vfio-reset:%s", vdev->compat);
>  		vdev->reset = vfio_platform_lookup_reset(vdev->compat,
> +							 vdev->acpihid,
>  							 &vdev->reset_module);
>  	}
>  }
> @@ -541,6 +551,46 @@ static const struct vfio_device_ops vfio_platform_ops = {
>  	.mmap		= vfio_platform_mmap,
>  };
>  
> +#ifdef CONFIG_ACPI
> +int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
> +			     struct device *dev)
> +{
> +	struct acpi_device adev = ACPI_COMPANION(dev);
> +
> +	if (!adev)
> +		return -EINVAL;
> +
> +	vdev->acpihid = acpi_device_hid(adev);
> +	if (!vdev->acpihid) {
> +		pr_err("VFIO: cannot find ACPI HID for %s\n",
> +		       vdev->name);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +#else
> +int vfio_platform_probe_acpi(struct vfio_platform_device *vdev,
> +			     struct device *dev)
> +{
> +	return -EINVAL;
> +}
> +#endif
> +
> +int vfio_platform_probe_of(struct vfio_platform_device *vdev,
> +			   struct device *dev)
> +{
> +	int ret;
> +
> +	ret = device_property_read_string(dev, "compatible",
> +					  &vdev->compat);
> +	if (ret) {
> +		pr_err("VFIO: cannot retrieve compat for %s\n",
> +			vdev->name);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
>  int vfio_platform_probe_common(struct vfio_platform_device *vdev,
>  			       struct device *dev)
>  {
> @@ -550,14 +600,14 @@ int vfio_platform_probe_common(struct vfio_platform_device *vdev,
>  	if (!vdev)
>  		return -EINVAL;
>  
> -	ret = device_property_read_string(dev, "compatible", &vdev->compat);
> -	if (ret) {
> -		pr_err("VFIO: cannot retrieve compat for %s\n", vdev->name);
> -		return -EINVAL;
> -	}
> +	ret = vfio_platform_probe_acpi(vdev, dev);
> +	if (ret)
> +		ret = vfio_platform_probe_of(vdev, dev);
>  
> -	vdev->device = dev;
> +	if (ret)
> +		return ret;
>  
> +	vdev->device = dev;
>  	group = iommu_group_get(dev);
>  	if (!group) {
>  		pr_err("VFIO: No IOMMU group for device %s\n", vdev->name);
> @@ -602,13 +652,21 @@ void __vfio_platform_register_reset(struct vfio_platform_reset_node *node)
>  EXPORT_SYMBOL_GPL(__vfio_platform_register_reset);
>  
>  void vfio_platform_unregister_reset(const char *compat,
> +				    const char *acpihid,
>  				    vfio_platform_reset_fn_t fn)
>  {
>  	struct vfio_platform_reset_node *iter, *temp;
>  
>  	mutex_lock(&driver_lock);
>  	list_for_each_entry_safe(iter, temp, &reset_list, link) {
> -		if (!strcmp(iter->compat, compat) && (iter->reset == fn)) {
> +		if (acpihid && iter->acpihid &&
> +		    !strcmp(iter->acpihid, acpihid) && (iter->reset == fn)) {
> +			list_del(&iter->link);
> +			break;
> +		}
> +
> +		if (compat && iter->compat &&
> +		    !strcmp(iter->compat, compat) && (iter->reset == fn)) {
>  			list_del(&iter->link);
>  			break;
>  		}
> diff --git a/drivers/vfio/platform/vfio_platform_private.h b/drivers/vfio/platform/vfio_platform_private.h
> index 42816dd..32feba3 100644
> --- a/drivers/vfio/platform/vfio_platform_private.h
> +++ b/drivers/vfio/platform/vfio_platform_private.h
> @@ -58,6 +58,7 @@ struct vfio_platform_device {
>  	struct mutex			igate;
>  	struct module			*parent_module;
>  	const char			*compat;
> +	const char			*acpihid;
>  	struct module			*reset_module;
>  	struct device			*device;
>  
> @@ -79,6 +80,7 @@ typedef int (*vfio_platform_reset_fn_t)(struct vfio_platform_device *vdev);
>  struct vfio_platform_reset_node {
>  	struct list_head link;
>  	char *compat;
> +	char *acpihid;
>  	struct module *owner;
>  	vfio_platform_reset_fn_t reset;
>  };
> @@ -98,27 +100,30 @@ extern int vfio_platform_set_irqs_ioctl(struct vfio_platform_device *vdev,
>  
>  extern void __vfio_platform_register_reset(struct vfio_platform_reset_node *n);
>  extern void vfio_platform_unregister_reset(const char *compat,
> +					   const char *acpihid,
>  					   vfio_platform_reset_fn_t fn);
> -#define vfio_platform_register_reset(__compat, __reset)		\
> -static struct vfio_platform_reset_node __reset ## _node = {	\
> -	.owner = THIS_MODULE,					\
> -	.compat = __compat,					\
> -	.reset = __reset,					\
> -};								\
> +
> +#define vfio_platform_register_reset(__compat, __acpihid, __reset)	\
> +static struct vfio_platform_reset_node __reset ## _node = {		\
> +	.owner = THIS_MODULE,						\
> +	.compat = __compat,						\
> +	.acpihid = __acpihid,						\
> +	.reset = __reset,						\
> +};									\
>  __vfio_platform_register_reset(&__reset ## _node)
>  
> -#define module_vfio_reset_handler(compat, reset)		\
> -MODULE_ALIAS("vfio-reset:" compat);				\
> -static int __init reset ## _module_init(void)			\
> -{								\
> -	vfio_platform_register_reset(compat, reset);		\
> -	return 0;						\
> -};								\
> -static void __exit reset ## _module_exit(void)			\
> -{								\
> -	vfio_platform_unregister_reset(compat, reset);		\
> -};								\
> -module_init(reset ## _module_init);				\
> +#define module_vfio_reset_handler(compat, acpihid, reset)		\
> +MODULE_ALIAS("vfio-reset:" compat);					\
> +static int __init reset ## _module_init(void)				\
> +{									\
> +	vfio_platform_register_reset(compat, acpihid, reset);		\
> +	return 0;							\
> +};									\
> +static void __exit reset ## _module_exit(void)				\
> +{									\
> +	vfio_platform_unregister_reset(compat, acpihid, reset);		\
> +};									\
> +module_init(reset ## _module_init);					\
>  module_exit(reset ## _module_exit)
>  
>  #endif /* VFIO_PLATFORM_PRIVATE_H */
> 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-02-01 16:08     ` Eric Auger
@ 2016-02-01 16:16       ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-01 16:16 UTC (permalink / raw)
  To: Eric Auger, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel, Baptiste Reynal

On 2/1/2016 11:08 AM, Eric Auger wrote:
>> For ACPI systems, ACPI HID needs to match and compat in the registered
>> > reset
>> > driver needs to match for ACPI reset driver loading to work.
> Don't really get the sentence. For ACPI systems, a registered reset
> function is selected if its associated ACPI HID matches the device ACPI HID?

Old commit message. I did an internal review before posting the patch. The first
version of the patch was a hack. I simplified the code but forgot to update the 
commit message.

Now, the rule is simple.

- ACPI HID needs to match for ACPI systems
- DT compat needs to match for OF systems

as expected (a rough sketch of that rule is below). I'll rephrase for the next version.
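
In code terms the lookup boils down to something like this; a minimal sketch
with a made-up helper name, not the exact vfio_platform_lookup_reset() body
(the module refcounting is left out):

	static bool reset_node_matches(const struct vfio_platform_reset_node *node,
				       const char *compat, const char *acpihid)
	{
		/* ACPI system: the registered ACPI HID must match the device HID */
		if (acpihid && node->acpihid && !strcmp(node->acpihid, acpihid))
			return true;
		/* OF system: the registered compatible must match the DT compatible */
		return compat && node->compat && !strcmp(node->compat, compat);
	}

A reset module can then advertise both identifiers with the new three-argument
form, for example (the HID string here is purely illustrative, not the real
HIDMA HID):

	module_vfio_reset_handler("qcom,hidma", "QCOM1234",
				  vfio_platform_qcomhidma_reset);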

>> > 
>> > For OF based systems, DT compatible string needs to match and compat in the
>> > registered reset driver needs to match for DT reset driver loading to work.
> same here
> 
> I added Baptiste who is vfio platform driver sub-system maintainer.

Thanks, we should ask Baptiste to get his email into the MAINTAINERS file.

> 
> On my side I tested with of amd xgbe and I don't observe any regression.
> 
> Best Regards

Can I add your Tested-by?

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-02-01 16:16       ` Sinan Kaya
@ 2016-02-01 16:29         ` Eric Auger
  -1 siblings, 0 replies; 69+ messages in thread
From: Eric Auger @ 2016-02-01 16:29 UTC (permalink / raw)
  To: Sinan Kaya, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel, Baptiste Reynal

On 02/01/2016 05:16 PM, Sinan Kaya wrote:
> On 2/1/2016 11:08 AM, Eric Auger wrote:
>>> For ACPI systems, ACPI HID needs to match and compat in the registered
>>>> reset
>>>> driver needs to match for ACPI reset driver loading to work.
>> Don't really get the sentence. For ACPI systems, a registered reset
>> function is selected if its associated ACPI HID matches the device ACPI HID?
> 
> Old commit message. I did an internal review before posting the patch. The first
> version of the patch was a hack. I simplified the code but forgot to update the 
> commit message.
> 
> Now, the rule is simple.
> 
> - ACPI HID needs match for ACPI systems
> - DT compat needs to match for OF systems
> 
> as expected. I'll rephrase for the next version.
> 
>>>>
>>>> For OF based systems, DT compatible string needs to match and compat in the
>>>> registered reset driver needs to match for DT reset driver loading to work.
>> same here
>>
>> I added Baptiste who is vfio platform driver sub-system maintainer.
> 
> Thanks, we should ask Baptiste to get his email into the Maintainer file list.
I think it is:
./scripts/get_maintainer.pl
0001-vfio-platform-add-support-for-ACPI-while-detecting-t.patch

Baptiste Reynal <b.reynal@virtualopensystems.com> (maintainer:VFIO
PLATFORM DRIVER,commit_signer:1/5=20%)
Alex Williamson <alex.williamson@redhat.com> (maintainer:VFIO
DRIVER,commit_signer:2/3=67%,commit_signer:4/5=80%)
../..
> 
>>
>> On my side I tested with of amd xgbe and I don't observe any regression.
>>
>> Best Regards
> 
> Can I add your Tested-by?
Well, to make things clear, I did not test the ACPI part. I can only say that
this does not bring any regression on the OF part. But I am not against it if
you don't find anybody else and you have tested the ACPI part ;-)

Best Regards

Eric
> 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-02-01 16:29         ` Eric Auger
@ 2016-02-01 16:44           ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-01 16:44 UTC (permalink / raw)
  To: Eric Auger, dmaengine, marc.zyngier, mark.rutland, timur,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel, Baptiste Reynal

On 2/1/2016 11:29 AM, Eric Auger wrote:
>> Thanks, we should ask Baptiste to get his email into the Maintainer file list.
> I think it is:
> ./scripts/get_maintainer.pl
> 0001-vfio-platform-add-support-for-ACPI-while-detecting-t.patch
> 
> Baptiste Reynal <b.reynal@virtualopensystems.com> (maintainer:VFIO
> PLATFORM DRIVER,commit_signer:1/5=20%)
> Alex Williamson <alex.williamson@redhat.com> (maintainer:VFIO
> DRIVER,commit_signer:2/3=67%,commit_signer:4/5=80%)
> ../..

Strange, I'll reset my tree. 

>> > 
>>> >>
>>> >> On my side I tested with of amd xgbe and I don't observe any regression.
>>> >>
>>> >> Best Regards
>> > 
>> > Can I add your Tested-by?
> Well, to make things clear, I did not test the ACPI part. I can only say that
> this does not bring any regression on the OF part. But I am not against it if
> you don't find anybody else and you have tested the ACPI part ;-)
> 

How about:

of-tested-by: ...

Shanker and I tested ACPI before pushing the patches to the list, and we also
posted the corresponding QEMU patches to the qemu-devel mailing list.

http://patchwork.ozlabs.org/patch/575878/

I'll ask him to add his acpi-tested-by for the ACPI part.

> Best Regards


-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-02-01 16:44           ` Sinan Kaya
@ 2016-02-01 16:49             ` Timur Tabi
  -1 siblings, 0 replies; 69+ messages in thread
From: Timur Tabi @ 2016-02-01 16:49 UTC (permalink / raw)
  To: Sinan Kaya, Eric Auger, dmaengine, marc.zyngier, mark.rutland,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel, Baptiste Reynal

Sinan Kaya wrote:
> of-tested-by: ...
>
> Shanker and I tested ACPI before pushing the patches to the list, and we also
> posted the corresponding QEMU patches to the qemu-devel mailing list.
>
> http://patchwork.ozlabs.org/patch/575878/
>
> I'll ask him to add his acpi-tested-by for the ACPI part.

I would rather not see a tested-by variant, and instead add comments to 
the line, like this:

	Tested-by: Eric Auger <eric.auger@linaro.org> (device tree only)
	Tested-by: Shanker Donthineni <shankerd@codeaurora.org> (ACPI only)

That way, people can still grep for "^Tested-by:".

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the
Code Aurora Forum, hosted by The Linux Foundation.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-02-01 16:49             ` Timur Tabi
@ 2016-02-01 16:56               ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-01 16:56 UTC (permalink / raw)
  To: Timur Tabi, Eric Auger, dmaengine, marc.zyngier, mark.rutland,
	devicetree, cov, vinod.koul, jcm
  Cc: shankerd, vikrams, agross, arnd, linux-arm-msm, linux-arm-kernel,
	linux-kernel, Baptiste Reynal

On 2/1/2016 11:49 AM, Timur Tabi wrote:
> Sinan Kaya wrote:
>> of-tested-by: ...
>>
>> Shanker and I tested ACPI before pushing the patches to the list, and we also
>> posted the corresponding QEMU patches to the qemu-devel mailing list.
>>
>> http://patchwork.ozlabs.org/patch/575878/
>>
>> I'll ask him to add his acpi-tested-by for the ACPI part.
> 
> I would rather not see a tested-by variant, and instead add comments to the line, like this:
> 
>     Tested-by: Eric Auger <eric.auger@linaro.org> (device tree only)
>     Tested-by: Shanker Donthineni <shankerd@codeaurora.org> (ACPI only)
> 
> That way, people can still grep for "^Tested-by:".
> 

will do.

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
@ 2016-02-03 18:32       ` Sinan Kaya
  0 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-03 18:32 UTC (permalink / raw)
  To: Mark Rutland
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

Hi Mark,

On 2/1/2016 10:35 AM, Mark Rutland wrote:
>> Removing the flexibility to choose the event channel as there is no real
>> > use case right now. We have been using the values in ACPI that match the HW
>> > defaults. OS is reading the event-channel from the HW register now.
> Fold this in to the patches adding the binding and the driver.
> 
> There is no reason to add this then remove it within the same series.
> 
> Mark.
> 
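
For context, "reading the event-channel from the HW register" amounts to
something like the sketch below; the register offset and field width are
assumptions for illustration, not the actual HIDMA register layout:

#include <linux/io.h>
#include <linux/types.h>

/* Assumed offset and field width, for illustration only. */
#define EXAMPLE_EVT_CH_REG      0x28
#define EXAMPLE_EVT_CH_MASK     0xff

/* Reads the event channel index the hardware was configured with. */
static unsigned int example_read_event_channel(void __iomem *base)
{
        u32 val = readl(base + EXAMPLE_EVT_CH_REG);

        return val & EXAMPLE_EVT_CH_MASK;
}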

Before I post the next rev, I'd like to see if I missed something in the series when 
it comes to your comments. 

Can you please take a look and let me know?

Sinan

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-02-01 15:14     ` Rob Herring
@ 2016-02-03 18:32       ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-03 18:32 UTC (permalink / raw)
  To: Rob Herring
  Cc: dmaengine, marc.zyngier, mark.rutland, timur, devicetree, cov,
	vinod.koul, jcm, shankerd, vikrams, eric.auger, agross, arnd,
	linux-arm-msm, linux-arm-kernel, linux-kernel

On 2/1/2016 10:14 AM, Rob Herring wrote:
> On Fri, Jan 29, 2016 at 05:35:11PM -0500, Sinan Kaya wrote:
>> Removing the flexibility to choose the event channel as there is no real
>> use case right now. We have been using the values in ACPI that match the HW
>> defaults. OS is reading the event-channel from the HW register now.
>>
>> Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
>> ---
>>  .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  3 --
>>  drivers/dma/qcom/hidma.c                           | 39 +---------------------
>>  2 files changed, 1 insertion(+), 41 deletions(-)
> 
> Acked-by: Rob Herring <robh@kernel.org>
> 


Thanks Rob,
I'll merge this patch into the existing documentation and code in the next rev.

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-02-03 18:32       ` Sinan Kaya
@ 2016-02-03 18:36         ` Mark Rutland
  -1 siblings, 0 replies; 69+ messages in thread
From: Mark Rutland @ 2016-02-03 18:36 UTC (permalink / raw)
  To: Sinan Kaya
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

On Wed, Feb 03, 2016 at 01:32:01PM -0500, Sinan Kaya wrote:
> Hi Mark,
> 
> On 2/1/2016 10:35 AM, Mark Rutland wrote:
> >> Removing the flexibility to choose the event channel as there is no real
> >> > use case right now. We have been using the values in ACPI that match the HW
> >> > defaults. OS is reading the event-channel from the HW register now.
> > Fold this in to the patches adding the binding and the driver.
> > 
> > There is no reason to add this then remove it within the same series.
> > 
> > Mark.
> > 
> 
> Before I post the next rev, I'd like to see if I missed something in the series when 
> it comes to your comments. 
> 
> Can you please take a look and let me know?

Sorry, but it's going to be far easier to review with useless code
removed, so I will wait until the next posting where this patch is
folded in.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver
  2016-02-01 16:29         ` Eric Auger
@ 2016-02-03 18:38           ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-03 18:38 UTC (permalink / raw)
  To: devicetree, Baptiste Reynal, alex.williamson
  Cc: Eric Auger, dmaengine, marc.zyngier, mark.rutland, timur, cov,
	vinod.koul, jcm, shankerd, vikrams, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

Hi Baptiste, Alex;

On 2/1/2016 11:29 AM, Eric Auger wrote:
>>> I added Baptiste who is vfio platform driver sub-system maintainer.
>> > 
>> > Thanks, we should ask Baptiste to get his email into the Maintainer file list.
> I think it is:
> ./scripts/get_maintainer.pl
> 0001-vfio-platform-add-support-for-ACPI-while-detecting-t.patch

Any comments on these two patches?

https://lkml.org/lkml/2016/1/29/679
https://lkml.org/lkml/2016/1/29/677

Sinan

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-02-03 18:36         ` Mark Rutland
@ 2016-02-03 18:46           ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-03 18:46 UTC (permalink / raw)
  To: Mark Rutland
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

On 2/3/2016 1:36 PM, Mark Rutland wrote:
>> Can you please take a look and let me know?
> Sorry, but it's going to be far easier to review with useless code
> removed, so I will wait until the next posting where this patch is
> folded in.
> 
> Thanks,
> Mark.

OK, I'll post the next rev by the end of this week. 

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
@ 2016-02-07 15:04           ` Sinan Kaya
  0 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-07 15:04 UTC (permalink / raw)
  To: Mark Rutland
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

Hi Mark,

On 2/3/2016 1:36 PM, Mark Rutland wrote:
> Sorry, but it's going to be far easier to review with useless code
> removed, so I will wait until the next posting where this patch is
> folded in.

V14 of the patch series has been posted to the mailing list, where this patch
has been folded into the existing code.

https://lkml.org/lkml/2016/2/4/1086

I have also captured the information you previously asked for in the cover
letter.

I'm looking forward to hearing your other comments.
Sinan

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW
  2016-02-07 15:04           ` Sinan Kaya
@ 2016-02-09 21:01             ` Sinan Kaya
  -1 siblings, 0 replies; 69+ messages in thread
From: Sinan Kaya @ 2016-02-09 21:01 UTC (permalink / raw)
  To: Mark Rutland
  Cc: dmaengine, marc.zyngier, timur, devicetree, cov, vinod.koul, jcm,
	shankerd, vikrams, eric.auger, agross, arnd, linux-arm-msm,
	linux-arm-kernel, linux-kernel

On 2/7/2016 10:04 AM, Sinan Kaya wrote:
> Hi Mark,
> 
> On 2/3/2016 1:36 PM, Mark Rutland wrote:
>> > Sorry, but it's going to be far easier to review with useless code
>> > removed, so I will wait until the next posting where this patch is
>> > folded in.
> V14 of the patch series has been posted to the mailing list, where this patch
> has been folded into the existing code.
> 
> https://lkml.org/lkml/2016/2/4/1086
> 
> I have also captured the information you previously asked for in the cover
> letter.
> 
> I'm looking forward to hearing your other comments.
> Sinan

ping

-- 
Sinan Kaya
Qualcomm Technologies, Inc. on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 69+ messages in thread

end of thread, other threads:[~2016-02-09 21:01 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-29 22:35 [PATCH V13 00/10] dma: add Qualcomm Technologies HIDMA driver Sinan Kaya
2016-01-29 22:35 ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 01/10] dma: qcom_bam_dma: move to qcom directory Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 02/10] dma: hidma: Add Device Tree binding Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 03/10] dma: add Qualcomm Technologies HIDMA management driver Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 04/10] dma: add Qualcomm Technologies HIDMA channel driver Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 05/10] dma: qcom_hidma: implement lower level hardware interface Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 06/10] dma: qcom_hidma: add debugfs hooks Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 07/10] dma: qcom_hidma: add support for object hierarchy Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 08/10] dma: qcom_hidma: read the channel id from HW Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-02-01 15:14   ` Rob Herring
2016-02-01 15:14     ` Rob Herring
2016-02-03 18:32     ` Sinan Kaya
2016-02-03 18:32       ` Sinan Kaya
2016-02-01 15:35   ` Mark Rutland
2016-02-01 15:35     ` Mark Rutland
2016-02-01 15:46     ` Sinan Kaya
2016-02-01 15:46       ` Sinan Kaya
2016-02-03 18:32     ` Sinan Kaya
2016-02-03 18:32       ` Sinan Kaya
2016-02-03 18:32       ` Sinan Kaya
2016-02-03 18:36       ` Mark Rutland
2016-02-03 18:36         ` Mark Rutland
2016-02-03 18:46         ` Sinan Kaya
2016-02-03 18:46           ` Sinan Kaya
2016-02-07 15:04         ` Sinan Kaya
2016-02-07 15:04           ` Sinan Kaya
2016-02-07 15:04           ` Sinan Kaya
2016-02-09 21:01           ` Sinan Kaya
2016-02-09 21:01             ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 09/10] vfio, platform: add support for ACPI while detecting the reset driver Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-01-30 12:52   ` kbuild test robot
2016-01-30 12:52     ` kbuild test robot
2016-01-30 12:52     ` kbuild test robot
2016-01-31 13:53   ` Sinan Kaya
2016-01-31 13:53     ` Sinan Kaya
2016-01-31 13:53     ` Sinan Kaya
2016-02-01 16:08   ` Eric Auger
2016-02-01 16:08     ` Eric Auger
2016-02-01 16:16     ` Sinan Kaya
2016-02-01 16:16       ` Sinan Kaya
2016-02-01 16:29       ` Eric Auger
2016-02-01 16:29         ` Eric Auger
2016-02-01 16:44         ` Sinan Kaya
2016-02-01 16:44           ` Sinan Kaya
2016-02-01 16:49           ` Timur Tabi
2016-02-01 16:49             ` Timur Tabi
2016-02-01 16:56             ` Sinan Kaya
2016-02-01 16:56               ` Sinan Kaya
2016-02-03 18:38         ` Sinan Kaya
2016-02-03 18:38           ` Sinan Kaya
2016-01-29 22:35 ` [PATCH V13 10/10] vfio, platform: add QTI HIDMA " Sinan Kaya
2016-01-29 22:35   ` Sinan Kaya
2016-02-01 15:41   ` Eric Auger
2016-02-01 15:41     ` Eric Auger
2016-02-01 15:41     ` Eric Auger
2016-02-01 15:51     ` Sinan Kaya
2016-02-01 15:51       ` Sinan Kaya
