linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 0/2] Add Intel LGM soc DMA support
@ 2020-08-14  5:26 Amireddy Mallikarjuna reddy
  2020-08-14  5:26 ` [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC Amireddy Mallikarjuna reddy
  2020-08-14  5:26 ` [PATCH v5 2/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
  0 siblings, 2 replies; 23+ messages in thread
From: Amireddy Mallikarjuna reddy @ 2020-08-14  5:26 UTC (permalink / raw)
  To: dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, mallikarjunax.reddy, malliamireddy009

Add a DMA controller driver for the Lightning Mountain (LGM) family of SoCs.

The main function of the DMA controller is to transfer data between memory
and any DPlus compliant peripheral. A memory-to-memory copy capability can
also be configured.
This ldma driver is used to configure the device and channels for the data
and control paths.

These controllers provide DMA capabilities for a variety of on-chip
devices such as SSC, HSNAND and GSWIP.
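
For reference, these channels are consumed through the standard dmaengine
client API. Below is a minimal sketch of a hypothetical slave RX client;
the "rx" channel name, the burst of 8 and the error handling are
illustrative only and not part of this series:

	/* needs <linux/dmaengine.h>; all names here are hypothetical */
	static int client_start_rx(struct device *dev, struct scatterlist *sgl,
				   unsigned int sg_len)
	{
		struct dma_slave_config cfg = {
			.direction = DMA_DEV_TO_MEM,
			/* the ldma driver only accepts a 4-byte bus width */
			.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
			.src_maxburst = 8,
		};
		struct dma_async_tx_descriptor *tx;
		struct dma_chan *chan;
		int ret;

		chan = dma_request_chan(dev, "rx");
		if (IS_ERR(chan))
			return PTR_ERR(chan);

		ret = dmaengine_slave_config(chan, &cfg);
		if (ret)
			goto out_release;

		tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
					     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
		if (!tx) {
			ret = -EBUSY;
			goto out_release;
		}

		dmaengine_submit(tx);
		dma_async_issue_pending(chan);
		return 0;

	out_release:
		dma_release_channel(chan);
		return ret;
	}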

-------------
Future Plans:
-------------
The LGM SoC also supports a hardware memory copy engine.
The role of the HW memory copy engine is to offload memory copy operations
from the CPU.
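
When that support lands, clients would be expected to reach it through the
generic dmaengine memcpy API, roughly as follows (an illustrative fragment,
assuming dst/src are DMA addresses and len the copy length):

	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_chan_by_mask(&mask);
	if (IS_ERR(chan))
		return PTR_ERR(chan);
	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_CTRL_ACK);
	dmaengine_submit(tx);
	dma_async_issue_pending(chan);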

Amireddy Mallikarjuna reddy (2):
  dt-bindings: dma: Add bindings for intel LGM SOC
  Add Intel LGM soc DMA support.

 .../devicetree/bindings/dma/intel,ldma.yaml        |  319 ++++
 drivers/dma/Kconfig                                |    2 +
 drivers/dma/Makefile                               |    1 +
 drivers/dma/lgm/Kconfig                            |    9 +
 drivers/dma/lgm/Makefile                           |    2 +
 drivers/dma/lgm/lgm-dma.c                          | 1944 ++++++++++++++++++++
 include/linux/dma/lgm_dma.h                        |   27 +
 7 files changed, 2304 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml
 create mode 100644 drivers/dma/lgm/Kconfig
 create mode 100644 drivers/dma/lgm/Makefile
 create mode 100644 drivers/dma/lgm/lgm-dma.c
 create mode 100644 include/linux/dma/lgm_dma.h

-- 
2.11.0


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-14  5:26 [PATCH v5 0/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
@ 2020-08-14  5:26 ` Amireddy Mallikarjuna reddy
  2020-08-14 20:32   ` Rob Herring
  2020-08-14  5:26 ` [PATCH v5 2/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
  1 sibling, 1 reply; 23+ messages in thread
From: Amireddy Mallikarjuna reddy @ 2020-08-14  5:26 UTC (permalink / raw)
  To: dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, mallikarjunax.reddy, malliamireddy009

Add a DT binding YAML schema for the DMA controller driver
of the Lightning Mountain (LGM) SoC.

Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
---
v1:
- Initial version.

v2:
- Fix bot errors.

v3:
- No change.

v4:
- Address Thomas Langer's comments:
  - Use the node name pattern 'dma-controller' as in the common binding.
  - Remove "_" (underscore) in the instance name.
  - Remove the "port-" and "chan-" prefixes in the attribute names for both 'dma-ports' & 'dma-channels' child nodes.

v5:
- Moved some of the attributes in the 'dma-ports' & 'dma-channels' child nodes to the DMA client/consumer side, as cells in the 'dmas' properties (see the example below).
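
  For example, with a controller that has #dma-cells = <3> (such as dma0
  in the example below), a hypothetical client would encode the channel id,
  port id and burst length directly in its 'dmas' specifier:

    dmas = <&dma0 0 0 8>, <&dma0 1 0 8>; /* channel, port, burst */
    dma-names = "rx", "tx";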
---
 .../devicetree/bindings/dma/intel,ldma.yaml        | 319 +++++++++++++++++++++
 1 file changed, 319 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml

diff --git a/Documentation/devicetree/bindings/dma/intel,ldma.yaml b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
new file mode 100644
index 000000000000..9beaf191a6de
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
@@ -0,0 +1,319 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/intel,ldma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Lightning Mountain centralized low speed DMA and high speed DMA controllers
+
+maintainers:
+  - Chuanhua Lei <chuanhua.lei@intel.com>
+  - Amireddy Mallikarjuna reddy <mallikarjunax.reddy@intel.com>
+
+allOf:
+  - $ref: "dma-controller.yaml#"
+
+properties:
+ $nodename:
+   pattern: "^dma-controller(@.*)?$"
+
+ "#dma-cells":
+   const: 1
+
+ compatible:
+  enum:
+   - intel,lgm-cdma
+   - intel,lgm-dma2tx
+   - intel,lgm-dma1rx
+   - intel,lgm-dma1tx
+   - intel,lgm-dma0tx
+   - intel,lgm-dma3
+   - intel,lgm-toe-dma30
+   - intel,lgm-toe-dma31
+
+ reg:
+  maxItems: 1
+
+ clocks:
+  maxItems: 1
+
+ resets:
+  maxItems: 1
+
+ interrupts:
+  maxItems: 1
+
+ intel,dma-poll-cnt:
+   $ref: /schemas/types.yaml#/definitions/uint32
+   description:
+     DMA descriptor polling counter. It may need fine-tuning according
+     to the system application scenario.
+
+ intel,dma-byte-en:
+   type: boolean
+   description:
+     DMA byte enable is only valid for DMA write (RX).
+     Byte enable (1) means the DMA write will be based on the number of
+     dwords instead of the whole burst.
+
+ intel,dma-drb:
+    type: boolean
+    description:
+      DMA descriptor read-back to ensure data and descriptor synchronization.
+
+ intel,dma-burst:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+       Specify the DMA burst size (in dwords); valid values are 8, 16 and 32.
+       The default is 16 for data path DMA and 32 for memcopy DMA.
+
+ intel,dma-polling-cnt:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+       DMA descriptor polling counter. It may need fine-tuning according to
+       the system application scenario.
+
+ intel,dma-desc-in-sram:
+    type: boolean
+    description:
+       Whether DMA descriptors are in SRAM. On some old controllers the
+       descriptors can be in DRAM or SRAM; on the new ones they are all in SRAM.
+
+ intel,dma-orrc:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+       DMA outstanding read counter. The maximum value is 16, and it may
+       need fine-tuning according to the system application scenarios.
+
+ intel,dma-dburst-wr:
+    type: boolean
+    description:
+       Enable RX dynamic burst write. It only applies to RX DMA and memcopy DMA.
+
+
+ dma-ports:
+    type: object
+    description:
+       This sub-node must contain a child node for each DMA port.
+    properties:
+      '#address-cells':
+        const: 1
+      '#size-cells':
+        const: 0
+
+    patternProperties:
+      "^dma-ports@[0-9]+$":
+          type: object
+
+          properties:
+            reg:
+              items:
+                - enum: [0, 1, 2, 3, 4, 5]
+              description:
+                 Which port this node refers to.
+
+            intel,name:
+              $ref: /schemas/types.yaml#/definitions/string-array
+              description:
+                 Port name of the DMA port.
+
+            intel,chans:
+              $ref: /schemas/types.yaml#/definitions/uint32-array
+              description:
+                 The channels included in this port. The format is the start
+                 channel number followed by the number of channels in this port.
+
+          required:
+            - reg
+            - intel,name
+            - intel,chans
+
+
+ ldma-channels:
+    type: object
+    description:
+       This sub-node must contain a child node for each DMA channel.
+    properties:
+      '#address-cells':
+        const: 1
+      '#size-cells':
+        const: 0
+
+    patternProperties:
+      "^ldma-channels@[0-15]+$":
+          type: object
+
+          properties:
+            reg:
+              items:
+                - enum: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
+              description:
+                 Which channel this node refers to.
+
+            intel,desc_num:
+              $ref: /schemas/types.yaml#/definitions/uint32
+              description:
+                 Per channel maximum descriptor number. The max value is 255.
+
+            intel,hdr-mode:
+              $ref: /schemas/types.yaml#/definitions/uint32-array
+              description:
+                 The first parameter is the header mode size; the second
+                 parameter enables or disables checksum. If checksum is
+                 enabled, the header mode size is ignored. If it is disabled,
+                 the header mode size must be provided.
+
+            intel,hw-desc:
+              $ref: /schemas/types.yaml#/definitions/uint32-array
+              description:
+                 Per channel DMA hardware descriptor configuration.
+                 The first parameter is the descriptor physical address and
+                 the second parameter is the hardware descriptor number.
+
+          required:
+            - reg
+
+required:
+ - compatible
+ - reg
+ - '#dma-cells'
+
+examples:
+ - |
+   dma0: dma-controller@e0e00000 {
+     compatible = "intel,lgm-cdma";
+     reg = <0xe0e00000 0x1000>;
+     #dma-cells = <3>;
+     interrupt-parent = <&ioapic1>;
+     interrupts = <82 1>;
+     resets = <&rcu0 0x30 0>;
+     reset-names = "ctrl";
+     clocks = <&cgu0 80>;
+     intel,dma-poll-cnt = <4>;
+     intel,dma-byte-en;
+     intel,dma-drb;
+     dma-ports {
+       #address-cells = <1>;
+       #size-cells = <0>;
+
+       dma-ports@0 {
+           reg = <0>;
+           intel,name = "SPI0";
+           intel,chans = <0 2>;
+       };
+       dma-ports@1 {
+           reg = <1>;
+           intel,name = "SPI1";
+           intel,chans = <2 2>;
+       };
+       dma-ports@2 {
+           reg = <2>;
+           intel,name = "SPI2";
+           intel,chans = <4 2>;
+       };
+       dma-ports@3 {
+           reg = <3>;
+           intel,name = "SPI3";
+           intel,chans = <6 2>;
+       };
+       dma-ports@4 {
+           reg = <4>;
+           intel,name = "HSNAND";
+           intel,chans = <8 2>;
+       };
+       dma-ports@5 {
+           reg = <5>;
+           intel,name = "PCM";
+           intel,chans = <10 6>;
+       };
+     };
+     ldma-channels {
+       #address-cells = <1>;
+       #size-cells = <0>;
+
+       ldma-channels@0 {
+           reg = <0>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@1 {
+           reg = <1>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@2 {
+           reg = <2>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@3 {
+           reg = <3>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@4 {
+           reg = <4>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@5 {
+           reg = <5>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@6 {
+           reg = <6>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@7 {
+           reg = <7>;
+           intel,desc_num = <1>;
+       };
+       ldma-channels@8 {
+           reg = <8>;
+       };
+       ldma-channels@9 {
+           reg = <9>;
+       };
+       ldma-channels@10 {
+           reg = <10>;
+       };
+       ldma-channels@11 {
+           reg = <11>;
+       };
+       ldma-channels@12 {
+           reg = <12>;
+       };
+       ldma-channels@13 {
+           reg = <13>;
+       };
+       ldma-channels@14 {
+           reg = <14>;
+       };
+       ldma-channels@15 {
+           reg = <15>;
+       };
+     };
+   };
+ - |
+   dma3: dma-controller@ec800000 {
+     compatible = "intel,lgm-dma3";
+     reg = <0xec800000 0x1000>;
+     clocks = <&cgu0 71>;
+     resets = <&rcu0 0x10 9>;
+     #dma-cells = <7>;
+     intel,dma-burst = <32>;
+     intel,dma-polling-cnt = <16>;
+     intel,dma-desc-in-sram;
+     intel,dma-orrc = <16>;
+     intel,dma-byte-en;
+     intel,dma-dburst-wr;
+     ldma-channels {
+         #address-cells = <1>;
+         #size-cells = <0>;
+
+         ldma-channels@12 {
+             reg = <12>;
+             intel,hdr-mode = <128 0>;
+             intel,hw-desc = <0x20000000 8>;
+         };
+         ldma-channels@13 {
+             reg = <13>;
+             intel,hdr-mode = <128 0>;
+         };
+     };
+   };
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-14  5:26 [PATCH v5 0/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
  2020-08-14  5:26 ` [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC Amireddy Mallikarjuna reddy
@ 2020-08-14  5:26 ` Amireddy Mallikarjuna reddy
  2020-08-14  8:47   ` kernel test robot
  2020-08-18 10:16   ` Peter Ujfalusi
  1 sibling, 2 replies; 23+ messages in thread
From: Amireddy Mallikarjuna reddy @ 2020-08-14  5:26 UTC (permalink / raw)
  To: dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, mallikarjunax.reddy, malliamireddy009

Add a DMA controller driver for the Lightning Mountain (LGM) family of SoCs.

The main function of the DMA controller is to transfer data between memory
and any DPlus compliant peripheral. A memory-to-memory copy capability can
also be configured.

This ldma driver is used to configure the device and channels for the data
and control paths.
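
Besides the standard dmaengine interface, the driver exports
intel_dma_chan_desc_cfg() so that a block that manages its own hardware
descriptor ring (e.g. CBM) can attach that ring to a channel. A minimal
sketch, assuming the caller has already requested the channel and has a
struct device at hand (the ring size and variable names are illustrative):

	dma_addr_t ring_phys;
	void *ring;
	int ret;

	/* each hardware descriptor (struct dw2_desc) is 8 bytes */
	ring = dma_alloc_coherent(dev, 8 * 8, &ring_phys, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* hand the 8-entry hardware ring over to the channel */
	ret = intel_dma_chan_desc_cfg(chan, ring_phys, 8);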

Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
---
v1:
- Initial version.

v2:
- Fix device tree bot issues; corresponding driver changes done.
- Fix kernel test robot warnings.
  --------------------------------------------------------
  >> drivers/dma/lgm/lgm-dma.c:729:5: warning: no previous prototype for function 'intel_dma_chan_desc_cfg' [-Wmissing-prototypes]
  int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
  ^
  drivers/dma/lgm/lgm-dma.c:729:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
  int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
  ^
  static
  1 warning generated.

  vim +/intel_dma_chan_desc_cfg +729 drivers/dma/lgm/lgm-dma.c

    728
  > 729 int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
    730                             int desc_num)
    731 {
    732         return ldma_chan_desc_cfg(to_ldma_chan(chan), desc_base, desc_num);
    733 }
    734 EXPORT_SYMBOL_GPL(intel_dma_chan_desc_cfg);
    735

   Reported-by: kernel test robot <lkp@intel.com>
   ---------------------------------------------------------------

v3:
- Fix smatch warning.
  ----------------------------------------------------------------
  smatch warnings:
  drivers/dma/lgm/lgm-dma.c:1306 ldma_cfg_init() error: uninitialized symbol 'ret'.

  Reported-by: kernel test robot <lkp@intel.com>
  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  ----------------------------------------------------------------

v4:
- Address Thomas Langer's comments on the DT binding.
- Make the corresponding driver-side changes for the device tree changes.

v5:
- Add changes to read the 'dmas' properties and update the config properties on the driver side.
- Add virt_dma_desc, utilizing the virt-dma API.
---
 drivers/dma/Kconfig         |    2 +
 drivers/dma/Makefile        |    1 +
 drivers/dma/lgm/Kconfig     |    9 +
 drivers/dma/lgm/Makefile    |    2 +
 drivers/dma/lgm/lgm-dma.c   | 1944 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma/lgm_dma.h |   27 +
 6 files changed, 1985 insertions(+)
 create mode 100644 drivers/dma/lgm/Kconfig
 create mode 100644 drivers/dma/lgm/Makefile
 create mode 100644 drivers/dma/lgm/lgm-dma.c
 create mode 100644 include/linux/dma/lgm_dma.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index de41d7928bff..caeaf12fd524 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -737,6 +737,8 @@ source "drivers/dma/ti/Kconfig"
 
 source "drivers/dma/fsl-dpaa2-qdma/Kconfig"
 
+source "drivers/dma/lgm/Kconfig"
+
 # clients
 comment "DMA Clients"
 	depends on DMA_ENGINE
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index e60f81331d4c..0b899b076f4e 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx_dma.o
 obj-$(CONFIG_ST_FDMA) += st_fdma.o
 obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
+obj-$(CONFIG_INTEL_LDMA) += lgm/
 
 obj-y += mediatek/
 obj-y += qcom/
diff --git a/drivers/dma/lgm/Kconfig b/drivers/dma/lgm/Kconfig
new file mode 100644
index 000000000000..bdb5b0d91afb
--- /dev/null
+++ b/drivers/dma/lgm/Kconfig
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config INTEL_LDMA
+	bool "Lightning Mountain centralized low speed DMA and high speed DMA controllers"
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for Intel Lightning Mountain SoC DMA controllers.
+	  These controllers provide DMA capabilities for a variety of on-chip
+	  devices such as SSC, HSNAND and GSWIP.
diff --git a/drivers/dma/lgm/Makefile b/drivers/dma/lgm/Makefile
new file mode 100644
index 000000000000..f318a8eff464
--- /dev/null
+++ b/drivers/dma/lgm/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_INTEL_LDMA)	+= lgm-dma.o
diff --git a/drivers/dma/lgm/lgm-dma.c b/drivers/dma/lgm/lgm-dma.c
new file mode 100644
index 000000000000..4bd6ef9a2656
--- /dev/null
+++ b/drivers/dma/lgm/lgm-dma.c
@@ -0,0 +1,1944 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Lightning Mountain centralized low speed and high speed DMA controller driver
+ *
+ * Copyright (c) 2016 ~ 2020 Intel Corporation.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/clk.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/dma/lgm_dma.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/of_dma.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+#include "../dmaengine.h"
+#include "../virt-dma.h"
+
+#define DRIVER_NAME			"lgm-ldma"
+
+#define DMA_ID				0x0008
+#define DMA_ID_REV			GENMASK(7, 0)
+#define DMA_ID_PNR			GENMASK(19, 16)
+#define DMA_ID_CHNR			GENMASK(26, 20)
+#define DMA_ID_DW_128B			BIT(27)
+#define DMA_ID_AW_36B			BIT(28)
+#define DMA_VER32			0x32
+#define DMA_VER31			0x31
+#define DMA_VER22			0x0A
+
+#define DMA_CTRL			0x0010
+#define DMA_CTRL_RST			BIT(0)
+#define DMA_CTRL_DSRAM_PATH		BIT(1)
+#define DMA_CTRL_DBURST_WR		BIT(3)
+#define DMA_CTRL_VLD_DF_ACK		BIT(4)
+#define DMA_CTRL_CH_FL			BIT(6)
+#define DMA_CTRL_DS_FOD			BIT(7)
+#define DMA_CTRL_DRB			BIT(8)
+#define DMA_CTRL_ENBE			BIT(9)
+#define DMA_CTRL_DESC_TMOUT_CNT_V31	GENMASK(27, 16)
+#define DMA_CTRL_DESC_TMOUT_EN_V31	BIT(30)
+#define DMA_CTRL_PKTARB			BIT(31)
+
+#define DMA_CPOLL			0x0014
+#define DMA_CPOLL_CNT			GENMASK(15, 4)
+#define DMA_CPOLL_EN			BIT(31)
+
+#define DMA_CS				0x0018
+#define DMA_CS_MASK			GENMASK(5, 0)
+
+#define DMA_CCTRL			0x001C
+#define DMA_CCTRL_ON			BIT(0)
+#define DMA_CCTRL_RST			BIT(1)
+#define DMA_CCTRL_CH_POLL_EN		BIT(2)
+#define DMA_CCTRL_CH_ABC		BIT(3) /* Adaptive Burst Chop */
+#define DMA_CDBA_MSB			GENMASK(7, 4)
+#define DMA_CCTRL_DIR_TX		BIT(8)
+#define DMA_CCTRL_CLASS			GENMASK(11, 9)
+#define DMA_CCTRL_CLASSH		GENMASK(19, 18)
+#define DMA_CCTRL_WR_NP_EN		BIT(21)
+#define DMA_CCTRL_PDEN			BIT(23)
+#define DMA_MAX_CLASS			(SZ_32 - 1)
+
+#define DMA_CDBA			0x0020
+#define DMA_CDLEN			0x0024
+#define DMA_CIS				0x0028
+#define DMA_CIE				0x002C
+#define DMA_CI_EOP			BIT(1)
+#define DMA_CI_DUR			BIT(2)
+#define DMA_CI_DESCPT			BIT(3)
+#define DMA_CI_CHOFF			BIT(4)
+#define DMA_CI_RDERR			BIT(5)
+#define DMA_CI_ALL							\
+	(DMA_CI_EOP | DMA_CI_DUR | DMA_CI_DESCPT | DMA_CI_CHOFF | DMA_CI_RDERR)
+
+#define DMA_PS				0x0040
+#define DMA_PCTRL			0x0044
+#define DMA_PCTRL_RXBL16		BIT(0)
+#define DMA_PCTRL_TXBL16		BIT(1)
+#define DMA_PCTRL_RXBL			GENMASK(3, 2)
+#define DMA_PCTRL_RXBL_8		3
+#define DMA_PCTRL_TXBL			GENMASK(5, 4)
+#define DMA_PCTRL_TXBL_8		3
+#define DMA_PCTRL_PDEN			BIT(6)
+#define DMA_PCTRL_RXBL32		BIT(7)
+#define DMA_PCTRL_RXENDI		GENMASK(9, 8)
+#define DMA_PCTRL_TXENDI		GENMASK(11, 10)
+#define DMA_PCTRL_TXBL32		BIT(15)
+#define DMA_PCTRL_MEM_FLUSH		BIT(16)
+
+#define DMA_IRNEN1			0x00E8
+#define DMA_IRNCR1			0x00EC
+#define DMA_IRNEN			0x00F4
+#define DMA_IRNCR			0x00F8
+#define DMA_C_DP_TICK			0x100
+#define DMA_C_DP_TICK_TIKNARB		GENMASK(15, 0)
+#define DMA_C_DP_TICK_TIKARB		GENMASK(31, 16)
+
+#define DMA_C_HDRM			0x110
+/*
+ * If header mode is set in the DMA descriptor:
+ *   If bit 30 is disabled, HDR_LEN must be configured according to the
+ *     channel requirement.
+ *   If bit 30 is enabled (checksum with header mode), HDR_LEN need not be
+ *     configured; it enables checksum for the switch.
+ * If header mode is not set in the DMA descriptor, this register setting
+ *   doesn't matter.
+ */
+#define DMA_C_HDRM_HDR_SUM		BIT(30)
+
+#define DMA_C_BOFF			0x120
+#define DMA_C_BOFF_BOF_LEN		GENMASK(7, 0)
+#define DMA_C_BOFF_EN			BIT(31)
+
+#define DMA_ORRC			0x190
+#define DMA_ORRC_ORRCNT			GENMASK(8, 4)
+#define DMA_ORRC_EN			BIT(31)
+
+#define DMA_C_ENDIAN			0x200
+#define DMA_C_END_DATAENDI		GENMASK(1, 0)
+#define DMA_C_END_DE_EN			BIT(7)
+#define DMA_C_END_DESENDI		GENMASK(9, 8)
+#define DMA_C_END_DES_EN		BIT(16)
+
+/* DMA controller capability */
+#define DMA_ADDR_36BIT			BIT(0)
+#define DMA_DATA_128BIT			BIT(1)
+#define DMA_CHAN_FLOW_CTL		BIT(2)
+#define DMA_DESC_FTOD			BIT(3)
+#define DMA_DESC_IN_SRAM		BIT(4)
+#define DMA_EN_BYTE_EN			BIT(5)
+#define DMA_DBURST_WR			BIT(6)
+#define DMA_VLD_FETCH_ACK		BIT(7)
+#define DMA_DFT_DRB			BIT(8)
+
+#define DMA_ORRC_MAX_CNT		(SZ_32 - 1)
+#define DMA_DFT_POLL_CNT		SZ_4
+
+#define DMA_DFT_BURST_V22		2
+#define DMA_BURSTL_8DW			8
+#define DMA_BURSTL_16DW			16
+#define DMA_BURSTL_32DW			32
+#define DMA_DFT_BURST			DMA_BURSTL_16DW
+
+#define DMA_MAX_DESC_NUM		(SZ_8K - 1)
+#define DMA_MAX_PKT_SZ			(SZ_16K - 1)
+#define DMA_PKT_SZ_DFT			SZ_2K
+#define DMA_CHAN_BOFF_MAX		(SZ_256 - 1)
+
+#define DMA_DFT_ENDIAN			DMA_ENDIAN_TYPE0
+#define DMA_ENDIAN_MAX			DMA_ENDIAN_TYPE3
+
+#define DMA_DFT_DESC_TCNT		50
+#define DMA_HDR_LEN_MAX			(SZ_16K - 1)
+
+/* DMA flags */
+#define DMA_TX_CH			BIT(0)
+#define DMA_RX_CH			BIT(1)
+#define DEVICE_ALLOC_DESC		BIT(2)
+#define CHAN_IN_USE			BIT(3)
+#define DMA_HW_DESC			BIT(4)
+
+#define DMA_CHAN_RST			1
+#define DMA_TX_PORT_DFT_WGT		1
+#define DMA_DFT_DESC_NUM		1
+#define DMA_MAX_SIZE			(BIT(16) - 1)
+#define MAX_LOWER_CHANS			32
+#define MASK_LOWER_CHANS		GENMASK(4, 0)
+#define DMA_OWN				1
+
+enum ldma_chan_on_off {
+	DMA_CH_OFF = 0,
+	DMA_CH_ON = 1,
+};
+
+enum ldma_pkt_drop {
+	DMA_PKT_DROP_DIS = 0,
+	DMA_PKT_DROP_EN,
+};
+
+enum ldma_endian {
+	DMA_ENDIAN_TYPE0 = 0,
+	DMA_ENDIAN_TYPE1,
+	DMA_ENDIAN_TYPE2,
+	DMA_ENDIAN_TYPE3,
+};
+
+enum {
+	DMA_TYPE_TX = 0,
+	DMA_TYPE_RX,
+	DMA_TYPE_MCPY,
+};
+
+struct ldma_dev;
+struct ldma_port;
+struct ldma_chan {
+	struct ldma_port	*port; /* back pointer */
+	char			name[8]; /* Channel name */
+	struct virt_dma_chan	vchan;
+	int			nr; /* Channel id in hardware */
+	u32			flags; /* central way or channel based way */
+	enum ldma_chan_on_off	onoff;
+	dma_addr_t		desc_phys;
+	void			*desc_base; /* Virtual address */
+	u32			desc_cnt; /* Number of descriptors */
+	int			rst;
+	u32			hdrm_len;
+	bool			hdrm_csum;
+	u32			boff_len;
+	u32			data_endian;
+	u32			desc_endian;
+	bool			pden;
+	bool			desc_rx_np;
+	bool			data_endian_en;
+	bool			desc_endian_en;
+	bool			abc_en;
+	bool			desc_init;
+	struct dma_pool		*desc_pool; /* Descriptors pool */
+	u32			desc_num;
+	struct dw2_desc_sw	*ds;
+	struct work_struct	work;
+};
+
+struct ldma_port {
+	struct ldma_dev		*ldev; /* back pointer */
+	const char		*name;
+	u32			portid;
+	u32			rxbl;
+	u32			txbl;
+	u32			chan_start;
+	u32			chan_sz;
+	enum ldma_endian	rxendi;
+	enum ldma_endian	txendi;
+	enum ldma_pkt_drop	pkt_drop;
+	int			flush_memcpy;
+};
+
+/* Instance specific data */
+struct ldma_inst_data {
+	struct dma_dev_ops	*ops;
+	const char	*name;
+	u32 type;
+};
+
+struct ldma_dev {
+	struct device		*dev;
+	void __iomem		*base;
+	struct reset_control	*rst;
+	struct clk		*core_clk;
+	struct dma_device	dma_dev;
+	u32			ver;
+	int			irq;
+	struct ldma_port	*ports;
+	struct ldma_chan	*chans; /* channel list on this DMA or port */
+	spinlock_t		dev_lock; /* Exclusive access to controller registers */
+	u32			chan_nrs;
+	u32			port_nrs;
+	u32			flags;
+	u32			pollcnt;
+	u32			orrc; /* Outstanding read count */
+	const struct ldma_inst_data *inst;
+	struct workqueue_struct *wq;
+};
+
+struct dw2_desc {
+	union {
+		struct {
+			u32 len		:16;
+			u32 res0	:7;
+			u32 bofs	:2;
+			u32 res1	:3;
+			u32 eop		:1;
+			u32 sop		:1;
+			u32 c		:1;
+			u32 own		:1;
+		} __packed field;
+		u32 word;
+	} __packed status;
+	u32 addr;
+} __packed __aligned(8);
+
+struct dw2_desc_sw {
+	struct ldma_chan	*chan;
+	struct virt_dma_desc	vdesc;
+	dma_addr_t		desc_phys;
+	size_t			desc_cnt;
+	size_t			size;
+	struct dw2_desc		*desc_hw;
+};
+
+struct dma_dev_ops {
+	int (*device_alloc_chan_resources)(struct dma_chan *chan);
+	void (*device_free_chan_resources)(struct dma_chan *chan);
+	int (*device_config)(struct dma_chan *chan,
+			     struct dma_slave_config *config);
+	int (*device_pause)(struct dma_chan *chan);
+	int (*device_resume)(struct dma_chan *chan);
+	int (*device_terminate_all)(struct dma_chan *chan);
+	void (*device_synchronize)(struct dma_chan *chan);
+	enum dma_status (*device_tx_status)(struct dma_chan *chan,
+					    dma_cookie_t cookie,
+					    struct dma_tx_state *txstate);
+	struct dma_async_tx_descriptor *(*device_prep_slave_sg)
+		(struct dma_chan *chan, struct scatterlist *sgl,
+		unsigned int sg_len, enum dma_transfer_direction direction,
+		unsigned long flags, void *context);
+	void (*device_issue_pending)(struct dma_chan *chan);
+};
+
+static inline void
+ldma_update_bits(struct ldma_dev *d, u32 mask, u32 val, u32 ofs)
+{
+	u32 old_val, new_val;
+
+	old_val = readl(d->base +  ofs);
+	new_val = (old_val & ~mask) | (val & mask);
+
+	if (new_val != old_val)
+		writel(new_val, d->base + ofs);
+}
+
+static inline struct ldma_chan *to_ldma_chan(struct dma_chan *chan)
+{
+	return container_of(chan, struct ldma_chan, vchan.chan);
+}
+
+static inline struct ldma_dev *to_ldma_dev(struct dma_device *dma_dev)
+{
+	return container_of(dma_dev, struct ldma_dev, dma_dev);
+}
+
+static inline struct dw2_desc_sw *to_lgm_dma_desc(struct virt_dma_desc *vdesc)
+{
+	return container_of(vdesc, struct dw2_desc_sw, vdesc);
+}
+
+static inline bool ldma_chan_tx(struct ldma_chan *c)
+{
+	return !!(c->flags & DMA_TX_CH);
+}
+
+static inline bool ldma_chan_is_hw_desc(struct ldma_chan *c)
+{
+	return !!(c->flags & DMA_HW_DESC);
+}
+
+static void ldma_dev_reset(struct ldma_dev *d)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CTRL_RST, DMA_CTRL_RST, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_pkt_arb_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask = DMA_CTRL_PKTARB;
+	u32 val = enable ? DMA_CTRL_PKTARB : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_sram_desc_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask = DMA_CTRL_DSRAM_PATH;
+	u32 val = enable ? DMA_CTRL_DSRAM_PATH : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_chan_flow_ctl_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask, val;
+
+	if (d->inst->type != DMA_TYPE_TX)
+		return;
+
+	mask = DMA_CTRL_CH_FL;
+	val = enable ? DMA_CTRL_CH_FL : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_global_polling_enable(struct ldma_dev *d)
+{
+	unsigned long flags;
+	u32 mask = DMA_CPOLL_EN | DMA_CPOLL_CNT;
+	u32 val = DMA_CPOLL_EN;
+
+	val |= FIELD_PREP(DMA_CPOLL_CNT, d->pollcnt);
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CPOLL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_desc_fetch_on_demand_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask, val;
+
+	if (d->inst->type == DMA_TYPE_MCPY)
+		return;
+
+	mask = DMA_CTRL_DS_FOD;
+	val = enable ? DMA_CTRL_DS_FOD : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_byte_enable_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask = DMA_CTRL_ENBE;
+	u32 val = enable ? DMA_CTRL_ENBE : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+/*
+ * orr_cnt >= 16: it will be 16
+ * 4 <= orr_cnt < 16: it will be orr_cnt
+ * orr_cnt < 4: it will be 3. A minimum of 3 outstanding reads is supported.
+ */
+static void ldma_dev_orrc_cfg(struct ldma_dev *d)
+{
+	unsigned long flags;
+	u32 val = 0;
+	u32 mask;
+
+	if (d->inst->type == DMA_TYPE_RX)
+		return;
+
+	mask = DMA_ORRC_EN | DMA_ORRC_ORRCNT;
+	if (d->orrc > 0 && d->orrc <= DMA_ORRC_MAX_CNT)
+		val = DMA_ORRC_EN | FIELD_PREP(DMA_ORRC_ORRCNT, d->orrc);
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_ORRC);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_df_tout_cfg(struct ldma_dev *d, bool enable, int tcnt)
+{
+	u32 mask = DMA_CTRL_DESC_TMOUT_CNT_V31;
+	unsigned long flags;
+	u32 val;
+
+	if (enable)
+		val = DMA_CTRL_DESC_TMOUT_EN_V31 | FIELD_PREP(DMA_CTRL_DESC_TMOUT_CNT_V31, tcnt);
+	else
+		val = 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_dburst_wr_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask, val;
+
+	if (d->inst->type != DMA_TYPE_RX && d->inst->type != DMA_TYPE_MCPY)
+		return;
+
+	mask = DMA_CTRL_DBURST_WR;
+	val = enable ? DMA_CTRL_DBURST_WR : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_vld_fetch_ack_cfg(struct ldma_dev *d, bool enable)
+{
+	unsigned long flags;
+	u32 mask, val;
+
+	if (d->inst->type != DMA_TYPE_TX)
+		return;
+
+	mask = DMA_CTRL_VLD_DF_ACK;
+	val = enable ? DMA_CTRL_VLD_DF_ACK : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_dev_drb_cfg(struct ldma_dev *d, int enable)
+{
+	unsigned long flags;
+	u32 mask = DMA_CTRL_DRB;
+	u32 val = enable ? DMA_CTRL_DRB : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, mask, val, DMA_CTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static int ldma_dev_cfg(struct ldma_dev *d)
+{
+	bool enable;
+
+	ldma_dev_pkt_arb_cfg(d, true);
+	ldma_dev_global_polling_enable(d);
+
+	enable = !!(d->flags & DMA_DFT_DRB);
+	ldma_dev_drb_cfg(d, enable);
+
+	enable = !!(d->flags & DMA_EN_BYTE_EN);
+	ldma_dev_byte_enable_cfg(d, enable);
+
+	enable = !!(d->flags & DMA_CHAN_FLOW_CTL);
+	ldma_dev_chan_flow_ctl_cfg(d, enable);
+
+	enable = !!(d->flags & DMA_DESC_FTOD);
+	ldma_dev_desc_fetch_on_demand_cfg(d, enable);
+
+	enable = !!(d->flags & DMA_DESC_IN_SRAM);
+	ldma_dev_sram_desc_cfg(d, enable);
+
+	enable = !!(d->flags & DMA_DBURST_WR);
+	ldma_dev_dburst_wr_cfg(d, enable);
+
+	enable = !!(d->flags & DMA_VLD_FETCH_ACK);
+	ldma_dev_vld_fetch_ack_cfg(d, enable);
+
+	if (d->ver > DMA_VER22) {
+		ldma_dev_orrc_cfg(d);
+		ldma_dev_df_tout_cfg(d, true, DMA_DFT_DESC_TCNT);
+	}
+
+	dev_dbg(d->dev, "%s Controller 0x%08x configuration done\n",
+		d->inst->name, readl(d->base + DMA_CTRL));
+
+	return 0;
+}
+
+static int ldma_chan_cctrl_cfg(struct ldma_chan *c, u32 val)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 reg;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	reg = readl(d->base + DMA_CCTRL);
+	/* Read from hardware */
+	if (reg & DMA_CCTRL_DIR_TX)
+		c->flags |= DMA_TX_CH;
+	else
+		c->flags |= DMA_RX_CH;
+
+	/* Keep the class value unchanged */
+	reg &= DMA_CCTRL_CLASS | DMA_CCTRL_CLASSH;
+	reg |= val;
+	writel(reg, d->base + DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	return 0;
+}
+
+static void ldma_chan_irq_init(struct ldma_chan *c)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 enofs, crofs;
+	u32 cn_bit;
+
+	if (c->nr < MAX_LOWER_CHANS) {
+		enofs = DMA_IRNEN;
+		crofs = DMA_IRNCR;
+	} else {
+		enofs = DMA_IRNEN1;
+		crofs = DMA_IRNCR1;
+	}
+
+	cn_bit = BIT(c->nr & MASK_LOWER_CHANS);
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+
+	/* Clear all interrupts and disable them */
+	writel(0, d->base + DMA_CIE);
+	writel(DMA_CI_ALL, d->base + DMA_CIS);
+
+	ldma_update_bits(d, cn_bit, 0, enofs);
+	writel(cn_bit, d->base + crofs);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_chan_set_class(struct ldma_chan *c, u32 val)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 class_val;
+
+	if (d->inst->type == DMA_TYPE_MCPY || val > DMA_MAX_CLASS)
+		return;
+
+	/* 3 bits low */
+	class_val = FIELD_PREP(DMA_CCTRL_CLASS, val & 0x7);
+	/* 2 bits high */
+	class_val |= FIELD_PREP(DMA_CCTRL_CLASSH, (val >> 3) & 0x3);
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, DMA_CCTRL_CLASS | DMA_CCTRL_CLASSH, class_val,
+			 DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static int ldma_chan_on(struct ldma_chan *c)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+
+	/* If descriptors are not configured, don't allow the channel to turn on */
+	if (WARN_ON(!c->desc_init))
+		return -EINVAL;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, DMA_CCTRL_ON, DMA_CCTRL_ON, DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	c->onoff = DMA_CH_ON;
+
+	return 0;
+}
+
+static int ldma_chan_off(struct ldma_chan *c)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 val;
+	int ret;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, DMA_CCTRL_ON, 0, DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	ret = readl_poll_timeout_atomic(d->base + DMA_CCTRL, val,
+					!(val & DMA_CCTRL_ON), 0, 10000);
+	if (ret)
+		return ret;
+
+	c->onoff = DMA_CH_OFF;
+
+	return 0;
+}
+
+static void ldma_chan_desc_hw_cfg(struct ldma_chan *c, dma_addr_t desc_base,
+				  int desc_num)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	writel(lower_32_bits(desc_base), d->base + DMA_CDBA);
+	/* High 4 bits */
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		u32 hi = upper_32_bits(desc_base) & 0xF;
+
+		ldma_update_bits(d, DMA_CDBA_MSB,
+				 FIELD_PREP(DMA_CDBA_MSB, hi), DMA_CCTRL);
+	}
+	writel(desc_num, d->base + DMA_CDLEN);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	c->desc_init = true;
+}
+
+/*
+ * The descriptor base address and data pointer must be physical addresses
+ * when written to the register.
+ * This API will be used by the CBM, which configures hardware descriptors.
+ */
+static int ldma_chan_desc_cfg(struct ldma_chan *c, dma_addr_t desc_base,
+			      int desc_num)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+
+	if (!desc_num) {
+		dev_err(d->dev, "Channel %d must allocate descriptor first\n",
+			c->nr);
+		return -EINVAL;
+	}
+
+	if (desc_num > DMA_MAX_DESC_NUM) {
+		dev_err(d->dev, "Channel %d descriptor number out of range %d\n",
+			c->nr, desc_num);
+		return -EINVAL;
+	}
+
+	ldma_chan_desc_hw_cfg(c, desc_base, desc_num);
+
+	c->flags |= DMA_HW_DESC;
+	c->desc_cnt = desc_num;
+	c->desc_phys = desc_base;
+
+	return 0;
+}
+
+int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
+			    int desc_num)
+{
+	return ldma_chan_desc_cfg(to_ldma_chan(chan), desc_base, desc_num);
+}
+EXPORT_SYMBOL_GPL(intel_dma_chan_desc_cfg);
+
+static int ldma_chan_reset(struct ldma_chan *c)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 val;
+	int ret;
+
+	ret = ldma_chan_off(c);
+	if (ret)
+		return ret;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, DMA_CCTRL_RST, DMA_CCTRL_RST, DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	ret = readl_poll_timeout_atomic(d->base + DMA_CCTRL, val,
+					!(val & DMA_CCTRL_RST), 0, 10000);
+	if (ret)
+		return ret;
+
+	c->rst = 1;
+	c->desc_init = false;
+
+	return 0;
+}
+
+static void ldma_chan_byte_offset_cfg(struct ldma_chan *c, u32 boff_len)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	u32 mask = DMA_C_BOFF_EN | DMA_C_BOFF_BOF_LEN;
+	unsigned long flags;
+	u32 val;
+
+	if (boff_len > 0 && boff_len <= DMA_CHAN_BOFF_MAX)
+		val = FIELD_PREP(DMA_C_BOFF_BOF_LEN, boff_len) | DMA_C_BOFF_EN;
+	else
+		val = 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, mask, val, DMA_C_BOFF);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_chan_data_endian_cfg(struct ldma_chan *c, bool enable,
+				      u32 endian_type)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	u32 mask = DMA_C_END_DE_EN | DMA_C_END_DATAENDI;
+	unsigned long flags;
+	u32 val;
+
+	if (enable)
+		val = DMA_C_END_DE_EN | FIELD_PREP(DMA_C_END_DATAENDI, endian_type);
+	else
+		val = 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, mask, val, DMA_C_ENDIAN);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_chan_desc_endian_cfg(struct ldma_chan *c, bool enable,
+				      u32 endian_type)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	u32 mask = DMA_C_END_DES_EN | DMA_C_END_DESENDI;
+	unsigned long flags;
+	u32 val;
+
+	if (enable)
+		val = DMA_C_END_DES_EN | FIELD_PREP(DMA_C_END_DESENDI, endian_type);
+	else
+		val = 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, mask, val, DMA_C_ENDIAN);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_chan_hdr_mode_cfg(struct ldma_chan *c, u32 hdr_len, bool csum)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+
+	unsigned long flags;
+	u32 mask, val;
+
+	/* NB: when csum is disabled, the header length must be provided */
+	if (!csum && (!hdr_len || hdr_len > DMA_HDR_LEN_MAX))
+		return;
+
+	mask = DMA_C_HDRM_HDR_SUM;
+	val = DMA_C_HDRM_HDR_SUM;
+
+	if (!csum && hdr_len)
+		val = hdr_len;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, mask, val, DMA_C_HDRM);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_chan_rxwr_np_cfg(struct ldma_chan *c, bool enable)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 mask, val;
+
+	/* Only valid for RX channel */
+	if (ldma_chan_tx(c))
+		return;
+
+	mask = DMA_CCTRL_WR_NP_EN;
+	val = enable ? DMA_CCTRL_WR_NP_EN : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, mask, val, DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void ldma_chan_abc_cfg(struct ldma_chan *c, bool enable)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+	u32 mask, val;
+
+	if (d->ver < DMA_VER32 || ldma_chan_tx(c))
+		return;
+
+	mask = DMA_CCTRL_CH_ABC;
+	val = enable ? DMA_CCTRL_CH_ABC : 0;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	ldma_update_bits(d, DMA_CS_MASK, c->nr, DMA_CS);
+	ldma_update_bits(d, mask, val, DMA_CCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static int ldma_port_cfg(struct ldma_port *p)
+{
+	unsigned long flags;
+	struct ldma_dev *d;
+	u32 reg;
+
+	d = p->ldev;
+	reg = p->flush_memcpy ? DMA_PCTRL_MEM_FLUSH : 0;
+	reg |= FIELD_PREP(DMA_PCTRL_TXENDI, p->txendi);
+	reg |= FIELD_PREP(DMA_PCTRL_RXENDI, p->rxendi);
+
+	if (d->ver == DMA_VER22) {
+		reg |= FIELD_PREP(DMA_PCTRL_TXBL, p->txbl);
+		reg |= FIELD_PREP(DMA_PCTRL_RXBL, p->rxbl);
+	} else {
+		reg |= FIELD_PREP(DMA_PCTRL_PDEN, p->pkt_drop);
+
+		if (p->txbl == DMA_BURSTL_32DW)
+			reg |= DMA_PCTRL_TXBL32;
+		else if (p->txbl == DMA_BURSTL_16DW)
+			reg |= DMA_PCTRL_TXBL16;
+		else
+			reg |= FIELD_PREP(DMA_PCTRL_TXBL, DMA_PCTRL_TXBL_8);
+
+		if (p->rxbl == DMA_BURSTL_32DW)
+			reg |= DMA_PCTRL_RXBL32;
+		else if (p->rxbl == DMA_BURSTL_16DW)
+			reg |= DMA_PCTRL_RXBL16;
+		else
+			reg |= FIELD_PREP(DMA_PCTRL_RXBL, DMA_PCTRL_RXBL_8);
+	}
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	writel(p->portid, d->base + DMA_PS);
+	writel(reg, d->base + DMA_PCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	dev_dbg(d->dev, "%s Port Control 0x%08x configuration done\n",
+		p->name, readl(d->base + DMA_PCTRL));
+
+	return 0;
+}
+
+static int ldma_chan_cfg(struct ldma_chan *c)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	u32 reg;
+
+	reg = c->pden ? DMA_CCTRL_PDEN : 0;
+	reg |= c->onoff ? DMA_CCTRL_ON : 0;
+	reg |= c->rst ? DMA_CCTRL_RST : 0;
+
+	ldma_chan_cctrl_cfg(c, reg);
+	ldma_chan_irq_init(c);
+
+	if (d->ver > DMA_VER22) {
+		ldma_chan_set_class(c, c->nr);
+		ldma_chan_byte_offset_cfg(c, c->boff_len);
+		ldma_chan_data_endian_cfg(c, c->data_endian_en, c->data_endian);
+		ldma_chan_desc_endian_cfg(c, c->desc_endian_en, c->desc_endian);
+		ldma_chan_hdr_mode_cfg(c, c->hdrm_len, c->hdrm_csum);
+		ldma_chan_rxwr_np_cfg(c, c->desc_rx_np);
+		ldma_chan_abc_cfg(c, c->abc_en);
+
+		if (ldma_chan_is_hw_desc(c))
+			ldma_chan_desc_hw_cfg(c, c->desc_phys, c->desc_cnt);
+	}
+
+	return 0;
+}
+
+static void ldma_dev_init(struct ldma_dev *d)
+{
+	struct ldma_port *p;
+	struct ldma_chan *c;
+	int i;
+
+	spin_lock_init(&d->dev_lock);
+	ldma_dev_reset(d);
+	ldma_dev_cfg(d);
+
+	/* DMA port initialization */
+	for (i = 0; i < d->port_nrs; i++) {
+		p = &d->ports[i];
+		ldma_port_cfg(p);
+	}
+
+	/* DMA channel initialization */
+	for (i = 0; i < d->chan_nrs; i++) {
+		c = &d->chans[i];
+		ldma_chan_cfg(c);
+	}
+}
+
+/*
+ * The configuration stored in the devicetree matches the configuration
+ * parameters of the peripheral instance and allows the driver to know which
+ * features are implemented and how it should behave. Users only configure
+ * what is necessary. All other settings will fall back to defaults.
+ */
+static int dma_parse_chan_dt(struct fwnode_handle *fw_chan, struct ldma_dev *d)
+{
+	struct ldma_chan *c;
+	u32 v[2], val;
+	int ret;
+
+	ret = fwnode_property_read_u32(fw_chan, "reg", &val);
+	if (ret)
+		return ret;
+
+	/* Sanity check for channel range */
+	if (val >= d->chan_nrs)
+		return -ENODEV;
+
+	c = &d->chans[val];
+
+	ret = fwnode_property_read_u32(fw_chan, "intel,desc_num", &c->desc_num);
+	if (ret || c->desc_num > 255)
+		c->desc_num = DMA_DFT_DESC_NUM;
+
+	if (!fwnode_property_read_u32(fw_chan, "intel,data-endian", &val)) {
+		if (val > DMA_ENDIAN_MAX)
+			return -EINVAL;
+		c->data_endian = val;
+	}
+
+	if (!fwnode_property_read_u32(fw_chan, "intel,desc-endian", &val)) {
+		if (val > DMA_ENDIAN_MAX)
+			return -EINVAL;
+		c->desc_endian = val;
+	}
+
+	if (fwnode_property_read_u32(fw_chan, "intel,byte-offset",
+				     &c->boff_len))
+		c->boff_len = 0;
+
+	if (!fwnode_property_read_u32_array(fw_chan, "intel,hdr-mode", v,
+					    ARRAY_SIZE(v))) {
+		c->hdrm_csum = !!v[1];
+		if (!c->hdrm_csum) {
+			if (!v[0] || v[0] > DMA_HDR_LEN_MAX)
+				return -EINVAL;
+		}
+		c->hdrm_len = v[0];
+	}
+
+	if (!fwnode_property_read_u32_array(fw_chan, "intel,chan-hw-desc", v,
+					    ARRAY_SIZE(v))) {
+		u32 cnt = v[1];
+
+		if (!cnt) {
+			dev_err(d->dev,
+				"Channel %d must allocate descriptor first\n",
+				c->nr);
+			return -EINVAL;
+		}
+
+		if (cnt > DMA_MAX_DESC_NUM) {
+			dev_err(d->dev,
+				"Channel %d descriptor number out of range %d\n",
+				c->nr, cnt);
+			return -EINVAL;
+		}
+		c->desc_phys = v[0];
+		c->desc_cnt = cnt;
+		c->flags |= DMA_HW_DESC;
+	}
+
+	return 0;
+}
+
+static unsigned int dma_get_channel_node_count(struct ldma_dev *d)
+{
+	struct fwnode_handle *fwnode = dev_fwnode(d->dev);
+	struct fwnode_handle *fw_chans;
+	struct fwnode_handle *child;
+	unsigned int count = 0;
+
+	fw_chans = fwnode_get_named_child_node(fwnode, "dma-channels");
+	fwnode_for_each_child_node(fw_chans, child)
+		count++;
+
+	return count;
+}
+
+static int dma_parse_port_dt(struct fwnode_handle *fw_port, struct ldma_dev *d)
+{
+	struct ldma_port *p;
+	u32 val, v[2];
+	int ret;
+
+	ret = fwnode_property_read_u32(fw_port, "reg", &val);
+	if (ret)
+		return ret;
+
+	/* Sanity check */
+	if (val >= d->port_nrs)
+		return -ENODEV;
+
+	p = &d->ports[val];
+
+	ret = fwnode_property_read_string(fw_port, "intel,name", &p->name);
+	if (ret) {
+		dev_err(d->dev, "Failed to get port name ret=%d\n", ret);
+		return ret;
+	}
+
+	ret = fwnode_property_read_u32_array(fw_port, "intel,chans", v,
+					     ARRAY_SIZE(v));
+	if (ret) {
+		dev_err(d->dev, "Failed to get port chan mapping ret=%d\n",
+			ret);
+		return ret;
+	}
+	p->chan_start = v[0];
+	p->chan_sz = v[1];
+
+	p->txbl = DMA_DFT_BURST_V22;
+
+	/* TX and RX have the same burst length */
+	p->txbl = ilog2(p->txbl);
+	p->rxbl = p->txbl;
+
+	if (!strncmp(p->name, "MEMCPY", 6))
+		p->flush_memcpy = 1;
+
+	/* Get max available channels instead of reading from registers */
+	d->chan_nrs = dma_get_channel_node_count(d);
+
+	dev_dbg(d->dev, "Port %s burst %d endian %d\n",
+		p->name, p->rxbl, p->rxendi);
+
+	return 0;
+}
+
+static int ldma_cfg_init(struct ldma_dev *d)
+{
+	struct fwnode_handle *fwnode = dev_fwnode(d->dev);
+	struct fwnode_handle *fw_chans, *fw_chan;
+	struct fwnode_handle *fw_ports, *fw_port;
+	struct ldma_chan *c;
+	struct ldma_port *p;
+	u32 prop, val;
+	int ret, i;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-chan-fc"))
+		d->flags |= DMA_CHAN_FLOW_CTL;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-desc-fod"))
+		d->flags |= DMA_DESC_FTOD;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-desc-in-sram"))
+		d->flags |= DMA_DESC_IN_SRAM;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-byte-en"))
+		d->flags |= DMA_EN_BYTE_EN;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-dfetch-ack"))
+		d->flags |= DMA_VLD_FETCH_ACK;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-dburst-wr"))
+		d->flags |= DMA_DBURST_WR;
+
+	if (fwnode_property_read_bool(fwnode, "intel,dma-drb"))
+		d->flags |= DMA_DFT_DRB;
+
+	if (fwnode_property_read_u32(fwnode, "intel,dma-polling-cnt",
+				     &d->pollcnt))
+		d->pollcnt = DMA_DFT_POLL_CNT;
+
+	if (!fwnode_property_read_u32(fwnode, "intel,dma-orrc", &val)) {
+		if (val > DMA_ORRC_MAX_CNT)
+			return -EINVAL;
+		d->orrc = val;
+	}
+
+	if (d->ver > DMA_VER22) {
+		if (!d->port_nrs)
+			return -EINVAL;
+
+		for (i = 0; i < d->port_nrs; i++) {
+			p = &d->ports[i];
+			p->rxendi = DMA_DFT_ENDIAN;
+			p->txendi = DMA_DFT_ENDIAN;
+
+			if (!fwnode_property_read_u32(fwnode, "intel,dma-burst",
+						      &prop)) {
+				p->rxbl = prop;
+				p->txbl = prop;
+			} else {
+				p->rxbl = DMA_DFT_BURST;
+				p->txbl = DMA_DFT_BURST;
+			}
+
+			p->pkt_drop = DMA_PKT_DROP_DIS;
+			p->flush_memcpy = 0;
+		}
+	}
+
+	/* Port specific, required for dma0 */
+	fw_ports = fwnode_get_named_child_node(fwnode, "dma-ports");
+	if (!fw_ports && d->ver == DMA_VER22) {
+		dev_err(d->dev, "Failed to get ports settings\n");
+		return -ENODEV;
+	}
+	if (fw_ports) {
+		fwnode_for_each_child_node(fw_ports, fw_port) {
+			ret = dma_parse_port_dt(fw_port, d);
+			if (ret) {
+				fwnode_handle_put(fw_port);
+				fwnode_handle_put(fw_ports);
+				return -EINVAL;
+			}
+		}
+		fwnode_handle_put(fw_ports);
+	}
+
+	d->chans = devm_kcalloc(d->dev, d->chan_nrs, sizeof(*c), GFP_KERNEL);
+	if (!d->chans)
+		return -ENOMEM;
+
+	/* Channel based configuration if available, optional */
+	fw_chans = fwnode_get_named_child_node(fwnode, "dma-channels");
+	if (fw_chans) {
+		fwnode_for_each_child_node(fw_chans, fw_chan) {
+			if (dma_parse_chan_dt(fw_chan, d)) {
+				fwnode_handle_put(fw_chan);
+				fwnode_handle_put(fw_chans);
+				return -EINVAL;
+			}
+		}
+		fwnode_handle_put(fw_chans);
+	}
+
+	return 0;
+}
+
+static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
+{
+	struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);
+	struct ldma_chan *c = ds->chan;
+
+	dma_pool_free(c->desc_pool, ds->desc_hw, ds->desc_phys);
+	kfree(ds);
+	c->ds = NULL;
+}
+
+static struct dw2_desc_sw *
+dma_alloc_desc_resource(int num, struct ldma_chan *c)
+{
+	struct device *dev = c->vchan.chan.device->dev;
+	struct dw2_desc_sw *ds;
+
+	if (num > c->desc_num) {
+		dev_err(dev, "sg num %d exceed max %d\n", num, c->desc_num);
+		return NULL;
+	}
+
+	ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
+	if (!ds)
+		return NULL;
+
+	ds->chan = c;
+
+	ds->desc_hw = dma_pool_zalloc(c->desc_pool, GFP_ATOMIC,
+				      &ds->desc_phys);
+	if (!ds->desc_hw) {
+		dev_dbg(dev, "out of memory for link descriptor\n");
+		kfree(ds);
+		return NULL;
+	}
+	ds->desc_cnt = num;
+
+	return ds;
+}
+
+static void ldma_chan_irq_en(struct ldma_chan *c)
+{
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	writel(c->nr, d->base + DMA_CS);
+	writel(DMA_CI_EOP, d->base + DMA_CIE);
+	writel(BIT(c->nr), d->base + DMA_IRNEN);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+}
+
+static void dma_issue_pending(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	unsigned long flags;
+
+	if (d->ver == DMA_VER22) {
+		spin_lock_irqsave(&c->vchan.lock, flags);
+		if (vchan_issue_pending(&c->vchan)) {
+			struct virt_dma_desc *vdesc;
+
+			/* Get the next descriptor */
+			vdesc = vchan_next_desc(&c->vchan);
+			if (!vdesc) {
+				c->ds = NULL;
+				spin_unlock_irqrestore(&c->vchan.lock, flags);
+				return;
+			}
+			list_del(&vdesc->node);
+			c->ds = to_lgm_dma_desc(vdesc);
+			spin_unlock_irqrestore(&c->vchan.lock, flags);
+			ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
+			ldma_chan_irq_en(c);
+		} else {
+			spin_unlock_irqrestore(&c->vchan.lock, flags);
+		}
+	}
+	ldma_chan_on(c);
+}
+
+static void dma_synchronize(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+
+	/*
+	 * Clear any pending work, if any. In that
+	 * case the resource needs to be freed here.
+	 */
+	cancel_work_sync(&c->work);
+	vchan_synchronize(&c->vchan);
+	if (c->ds)
+		dma_free_desc_resource(&c->ds->vdesc);
+}
+
+static int dma_terminate_all(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&c->vchan.lock, flags);
+	vchan_get_all_descriptors(&c->vchan, &head);
+	spin_unlock_irqrestore(&c->vchan.lock, flags);
+	vchan_dma_desc_free_list(&c->vchan, &head);
+
+	return ldma_chan_reset(c);
+}
+
+static int dma_resume_chan(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+
+	ldma_chan_on(c);
+
+	return 0;
+}
+
+static int dma_pause_chan(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+
+	return ldma_chan_off(c);
+}
+
+static enum dma_status
+dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+	      struct dma_tx_state *txstate)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	enum dma_status status = DMA_COMPLETE;
+
+	if (d->ver == DMA_VER22)
+		status = dma_cookie_status(chan, cookie, txstate);
+
+	return status;
+}
+
+static void dma_chan_irq(int irq, void *data)
+{
+	struct ldma_chan *c = data;
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	u32 stat;
+
+	/* Disable channel interrupts */
+	writel(c->nr, d->base + DMA_CS);
+	stat = readl(d->base + DMA_CIS);
+	if (!stat)
+		return;
+
+	writel(readl(d->base + DMA_CIE) & ~DMA_CI_ALL, d->base + DMA_CIE);
+	writel(stat, d->base + DMA_CIS);
+	queue_work(d->wq, &c->work);
+}
+
+static irqreturn_t dma_interrupt(int irq, void *dev_id)
+{
+	struct ldma_dev *d = dev_id;
+	struct ldma_chan *c;
+	unsigned long irncr;
+	u32 cid;
+
+	irncr = readl(d->base + DMA_IRNCR);
+	if (!irncr) {
+		dev_err(d->dev, "dummy interrupt\n");
+		return IRQ_NONE;
+	}
+
+	for_each_set_bit(cid, &irncr, d->chan_nrs) {
+		/* Mask */
+		writel(readl(d->base + DMA_IRNEN) & ~BIT(cid), d->base + DMA_IRNEN);
+		/* Ack */
+		writel(readl(d->base + DMA_IRNCR) | BIT(cid), d->base + DMA_IRNCR);
+
+		c = &d->chans[cid];
+		dma_chan_irq(irq, c);
+	}
+
+	return IRQ_HANDLED;
+}
+
+static struct dma_async_tx_descriptor *
+dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+		  unsigned int sglen, enum dma_transfer_direction dir,
+		  unsigned long flags, void *context)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	size_t len, avail, total = 0;
+	struct dw2_desc *hw_ds;
+	struct dw2_desc_sw *ds;
+	struct scatterlist *sg;
+	int num = sglen, i;
+	dma_addr_t addr;
+
+	if (!sgl)
+		return NULL;
+
+	for_each_sg(sgl, sg, sglen, i) {
+		avail = sg_dma_len(sg);
+		if (avail > DMA_MAX_SIZE)
+			num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
+	}
+
+	ds = dma_alloc_desc_resource(num, c);
+	if (!ds)
+		return NULL;
+
+	c->ds = ds;
+
+	num = 0;
+	/* sop and eop have to be handled carefully */
+	for_each_sg(sgl, sg, sglen, i) {
+		addr = sg_dma_address(sg);
+		avail = sg_dma_len(sg);
+		total += avail;
+
+		do {
+			len = min_t(size_t, avail, DMA_MAX_SIZE);
+
+			hw_ds = &ds->desc_hw[num];
+			switch (sglen) {
+			case 1:
+				hw_ds->status.field.sop = 1;
+				hw_ds->status.field.eop = 1;
+				break;
+			default:
+				if (num == 0) {
+					hw_ds->status.field.sop = 1;
+					hw_ds->status.field.eop = 0;
+				} else if (num == (sglen - 1)) {
+					hw_ds->status.field.sop = 0;
+					hw_ds->status.field.eop = 1;
+				} else {
+					hw_ds->status.field.sop = 0;
+					hw_ds->status.field.eop = 0;
+				}
+				break;
+			}
+
+			/* Only 32-bit addresses are supported */
+			hw_ds->addr = (u32)addr;
+			hw_ds->status.field.len = len;
+			hw_ds->status.field.c = 0;
+			hw_ds->status.field.bofs = addr & 0x3;
+			/* Ensure data ready before ownership change */
+			wmb();
+			hw_ds->status.field.own = DMA_OWN;
+			/* Ensure ownership changed before moving forward */
+			wmb();
+			num++;
+			addr += len;
+			avail -= len;
+		} while (avail);
+	}
+
+	ds->size = total;
+
+	return vchan_tx_prep(&c->vchan, &ds->vdesc, DMA_CTRL_ACK);
+}
+
+static int
+dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	struct ldma_port *p = c->port;
+	unsigned long flags;
+	u32 bl;
+
+	if ((cfg->direction == DMA_DEV_TO_MEM &&
+	     cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
+	    (cfg->direction == DMA_MEM_TO_DEV &&
+	     cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
+	    !is_slave_direction(cfg->direction))
+		return -EINVAL;
+
+	/* Default setting will be used */
+	if (!cfg->src_maxburst && !cfg->dst_maxburst)
+		return 0;
+
+	/* Must be the same */
+	if (cfg->src_maxburst && cfg->dst_maxburst &&
+	    cfg->src_maxburst != cfg->dst_maxburst)
+		return -EINVAL;
+
+	if (cfg->dst_maxburst)
+		cfg->src_maxburst = cfg->dst_maxburst;
+
+	bl = ilog2(cfg->src_maxburst);
+
+	spin_lock_irqsave(&d->dev_lock, flags);
+	writel(p->portid, d->base + DMA_PS);
+	ldma_update_bits(d, DMA_PCTRL_RXBL | DMA_PCTRL_TXBL,
+			 FIELD_PREP(DMA_PCTRL_RXBL, bl) |
+			 FIELD_PREP(DMA_PCTRL_TXBL, bl), DMA_PCTRL);
+	spin_unlock_irqrestore(&d->dev_lock, flags);
+
+	return 0;
+}
+
+static int dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+	struct device *dev = c->vchan.chan.device->dev;
+	size_t	desc_sz;
+
+	if (d->ver > DMA_VER22) {
+		c->flags |= CHAN_IN_USE;
+		return 0;
+	}
+
+	if (c->desc_pool)
+		return c->desc_num;
+
+	desc_sz = c->desc_num * sizeof(struct dw2_desc);
+	c->desc_pool = dma_pool_create(c->name, dev, desc_sz,
+				       __alignof__(struct dw2_desc), 0);
+
+	if (!c->desc_pool) {
+		dev_err(dev, "unable to allocate descriptor pool\n");
+		return -ENOMEM;
+	}
+
+	return c->desc_num;
+}
+
+static void dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct ldma_chan *c = to_ldma_chan(chan);
+	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
+
+	if (d->ver == DMA_VER22) {
+		dma_pool_destroy(c->desc_pool);
+		c->desc_pool = NULL;
+		vchan_free_chan_resources(to_virt_chan(chan));
+		ldma_chan_reset(c);
+	} else {
+		c->flags &= ~CHAN_IN_USE;
+	}
+}
+
+static void dma_work(struct work_struct *work)
+{
+	struct ldma_chan *c = container_of(work, struct ldma_chan, work);
+	struct dma_async_tx_descriptor *tx = &c->ds->vdesc.tx;
+	struct dmaengine_desc_callback cb;
+
+	dmaengine_desc_get_callback(tx, &cb);
+	dma_cookie_complete(tx);
+	dmaengine_desc_callback_invoke(&cb, NULL);
+}
+
+int update_client_configs(struct of_dma *ofdma, struct of_phandle_args *spec)
+{
+	struct ldma_dev *d = ofdma->of_dma_data;
+	struct ldma_port *p;
+	struct ldma_chan *c;
+	u32 chan_id =  spec->args[0];
+	u32 port_id =  spec->args[1];
+
+	if (chan_id >= d->chan_nrs || port_id >= d->port_nrs)
+		return 0;
+
+	p = &d->ports[port_id];
+	c = &d->chans[chan_id];
+
+	if (d->ver == DMA_VER22) {
+		u32 burst = spec->args[2];
+
+		if (burst != 2 && burst != 4 && burst != 8)
+			return 0;
+
+		/* TX and RX have the same burst length */
+		p->txbl = ilog2(burst);
+		p->rxbl = p->txbl;
+
+		ldma_port_cfg(p);
+	} else {
+		if (spec->args[2] > 0 && spec->args[2] <= DMA_ENDIAN_TYPE3) {
+			c->data_endian = spec->args[2];
+			c->data_endian_en = true;
+		}
+
+		if (spec->args[3] > 0 && spec->args[3] <= DMA_ENDIAN_TYPE3) {
+			c->desc_endian = spec->args[3];
+			c->desc_endian_en = true;
+		}
+
+		if (spec->args[4] > 0 && spec->args[4] < 128)
+			c->boff_len = spec->args[4];
+
+		if (spec->args[5])
+			c->desc_rx_np = true;
+
+		/*
+		 * If channel packet drop enabled, port packet drop should
+		 * be enabled
+		 */
+		if (spec->args[6]) {
+			c->pden = true;
+			p->pkt_drop = DMA_PKT_DROP_EN;
+		}
+		ldma_port_cfg(p);
+		ldma_chan_cfg(c);
+	}
+
+	return 1;
+}
+
+static struct dma_chan *ldma_xlate(struct of_phandle_args *spec,
+				   struct of_dma *ofdma)
+{
+	struct ldma_dev *d = ofdma->of_dma_data;
+	u32 chan_id =  spec->args[0];
+	int ret;
+
+	if (!spec->args_count)
+		return NULL;
+
+	/* if args_count is 1 use driver default config settings */
+	if (spec->args_count > 1) {
+		ret = update_client_configs(ofdma, spec);
+		if (!ret)
+			return NULL;
+	}
+
+	return dma_get_slave_channel(&d->chans[chan_id].vchan.chan);
+}
+
+static void ldma_clk_disable(void *data)
+{
+	struct ldma_dev *d = data;
+
+	clk_disable_unprepare(d->core_clk);
+}
+
+static struct dma_dev_ops dma0_ops = {
+	.device_alloc_chan_resources = dma_alloc_chan_resources,
+	.device_free_chan_resources = dma_free_chan_resources,
+	.device_config = dma_slave_config,
+	.device_prep_slave_sg = dma_prep_slave_sg,
+	.device_tx_status = dma_tx_status,
+	.device_pause = dma_pause_chan,
+	.device_resume = dma_resume_chan,
+	.device_terminate_all = dma_terminate_all,
+	.device_synchronize = dma_synchronize,
+	.device_issue_pending = dma_issue_pending,
+};
+
+static struct dma_dev_ops hdma_ops = {
+	.device_alloc_chan_resources = dma_alloc_chan_resources,
+	.device_free_chan_resources = dma_free_chan_resources,
+	.device_terminate_all = dma_terminate_all,
+	.device_issue_pending = dma_issue_pending,
+	.device_tx_status = dma_tx_status,
+	.device_resume = dma_resume_chan,
+	.device_pause = dma_pause_chan,
+};
+
+static const struct ldma_inst_data dma0 = {
+	.name = "dma0",
+	.ops = &dma0_ops,
+};
+
+static const struct ldma_inst_data dma2tx = {
+	.name = "dma2tx",
+	.type = DMA_TYPE_TX,
+	.ops = &hdma_ops,
+};
+
+static const struct ldma_inst_data dma1rx = {
+	.name = "dma1rx",
+	.type = DMA_TYPE_RX,
+	.ops = &hdma_ops,
+};
+
+static const struct ldma_inst_data dma1tx = {
+	.name = "dma1tx",
+	.type = DMA_TYPE_TX,
+	.ops = &hdma_ops,
+};
+
+static const struct ldma_inst_data dma0tx = {
+	.name = "dma0tx",
+	.type = DMA_TYPE_TX,
+	.ops = &hdma_ops,
+};
+
+static const struct ldma_inst_data dma3 = {
+	.name = "dma3",
+	.type = DMA_TYPE_MCPY,
+	.ops = &hdma_ops,
+};
+
+static const struct ldma_inst_data toe_dma30 = {
+	.name = "toe_dma30",
+	.type = DMA_TYPE_MCPY,
+	.ops = &hdma_ops,
+};
+
+static const struct ldma_inst_data toe_dma31 = {
+	.name = "toe_dma31",
+	.type = DMA_TYPE_MCPY,
+	.ops = &hdma_ops,
+};
+
+static const struct of_device_id intel_ldma_match[] = {
+	{ .compatible = "intel,lgm-cdma", .data = &dma0},
+	{ .compatible = "intel,lgm-dma2tx", .data = &dma2tx},
+	{ .compatible = "intel,lgm-dma1rx", .data = &dma1rx},
+	{ .compatible = "intel,lgm-dma1tx", .data = &dma1tx},
+	{ .compatible = "intel,lgm-dma0tx", .data = &dma0tx},
+	{ .compatible = "intel,lgm-dma3", .data = &dma3},
+	{ .compatible = "intel,lgm-toe-dma30", .data = &toe_dma30},
+	{ .compatible = "intel,lgm-toe-dma31", .data = &toe_dma31},
+	{}
+};
+
+static int intel_ldma_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct dma_device *dma_dev;
+	struct ldma_chan *c;
+	struct ldma_port *p;
+	struct ldma_dev *d;
+	u32 id, bitn = 32;
+	int i, j, k, ret;
+
+	d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
+	if (!d)
+		return -ENOMEM;
+
+	/* Link controller to platform device */
+	d->dev = &pdev->dev;
+
+	d->inst = device_get_match_data(dev);
+	if (!d->inst) {
+		dev_err(dev, "No device match found\n");
+		return -ENODEV;
+	}
+
+	d->base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(d->base))
+		return PTR_ERR(d->base);
+
+	/* Power up and reset the dma engine, some DMAs always on?? */
+	d->core_clk = devm_clk_get_optional(dev, NULL);
+	if (IS_ERR(d->core_clk))
+		return PTR_ERR(d->core_clk);
+	clk_prepare_enable(d->core_clk);
+
+	ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
+	if (ret) {
+		dev_err(dev, "Failed to devm_add_action_or_reset, %d\n", ret);
+		return ret;
+	}
+
+	d->rst = devm_reset_control_get_optional(dev, NULL);
+	if (IS_ERR(d->rst))
+		return PTR_ERR(d->rst);
+	reset_control_deassert(d->rst);
+
+	id = readl(d->base + DMA_ID);
+	d->chan_nrs = FIELD_GET(DMA_ID_CHNR, id);
+	d->port_nrs = FIELD_GET(DMA_ID_PNR, id);
+	d->ver = FIELD_GET(DMA_ID_REV, id);
+
+	if (id & DMA_ID_AW_36B)
+		d->flags |= DMA_ADDR_36BIT;
+
+	if (IS_ENABLED(CONFIG_64BIT)) {
+		if (id & DMA_ID_AW_36B)
+			bitn = 36;
+	}
+
+	if (id & DMA_ID_DW_128B)
+		d->flags |= DMA_DATA_128BIT;
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(bitn));
+	if (ret) {
+		dev_err(dev, "No usable DMA configuration\n");
+		return ret;
+	}
+
+	if (d->ver == DMA_VER22) {
+		d->irq = platform_get_irq(pdev, 0);
+		if (d->irq < 0)
+			return d->irq;
+
+		ret = devm_request_irq(&pdev->dev, d->irq, dma_interrupt,
+				       0, DRIVER_NAME, d);
+		if (ret)
+			return ret;
+
+		d->wq = alloc_ordered_workqueue("dma_wq", WQ_MEM_RECLAIM |
+						WQ_HIGHPRI);
+		if (!d->wq)
+			return -ENOMEM;
+	}
+
+	dma_dev = &d->dma_dev;
+	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
+
+	/* Channel initializations */
+	INIT_LIST_HEAD(&dma_dev->channels);
+
+	/* Port Initializations */
+	d->ports = devm_kcalloc(dev, d->port_nrs, sizeof(*p), GFP_KERNEL);
+	if (!d->ports)
+		return -ENOMEM;
+
+	for (i = 0; i < d->port_nrs; i++) {
+		p = &d->ports[i];
+		p->portid = i;
+		p->ldev = d;
+	}
+
+	ret = ldma_cfg_init(d);
+	if (ret)
+		return ret;
+
+	dma_dev->dev = &pdev->dev;
+	/*
+	 * Link channel id to channel index and link to dma channel list
+	 * It also back points to controller and its port
+	 */
+	for (i = 0, k = 0; i < d->port_nrs; i++) {
+		if (d->ver == DMA_VER22) {
+			u32 chan_end;
+
+			p = &d->ports[i];
+			chan_end = p->chan_start + p->chan_sz;
+			for (j = p->chan_start; j < chan_end; j++) {
+				c = &d->chans[k];
+				c->port = p;
+				c->nr = j; /* Real channel number */
+				c->rst = DMA_CHAN_RST;
+				snprintf(c->name, sizeof(c->name), "chan%d",
+					 c->nr);
+				INIT_WORK(&c->work, dma_work);
+				c->vchan.desc_free = dma_free_desc_resource;
+				vchan_init(&c->vchan, dma_dev);
+				k++;
+			}
+		} else {
+			p = &d->ports[i];
+			for (i = 0; i < d->chan_nrs; i++) {
+				c = &d->chans[i];
+				c->port = p;
+				c->data_endian = DMA_DFT_ENDIAN;
+				c->desc_endian = DMA_DFT_ENDIAN;
+				c->data_endian_en = false;
+				c->desc_endian_en = false;
+				c->desc_rx_np = false;
+				c->flags |= DEVICE_ALLOC_DESC;
+				c->onoff = DMA_CH_OFF;
+				c->rst = DMA_CHAN_RST;
+				c->abc_en = true;
+				c->nr = i;
+				c->vchan.desc_free = dma_free_desc_resource;
+				vchan_init(&c->vchan, dma_dev);
+			}
+		}
+	}
+
+	/* Set DMA capabilities */
+	dma_cap_zero(dma_dev->cap_mask);
+
+	dma_dev->device_alloc_chan_resources =
+		d->inst->ops->device_alloc_chan_resources;
+	dma_dev->device_free_chan_resources =
+		d->inst->ops->device_free_chan_resources;
+	dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
+	dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
+	dma_dev->device_tx_status = d->inst->ops->device_tx_status;
+	dma_dev->device_resume = d->inst->ops->device_resume;
+	dma_dev->device_pause = d->inst->ops->device_pause;
+	dma_dev->device_config = d->inst->ops->device_config;
+	dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
+	dma_dev->device_synchronize = d->inst->ops->device_synchronize;
+
+	if (d->ver == DMA_VER22) {
+		dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+		dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+		dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
+				      BIT(DMA_DEV_TO_MEM);
+		dma_dev->residue_granularity =
+					DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+	}
+
+	platform_set_drvdata(pdev, d);
+
+	ldma_dev_init(d);
+
+	ret = dma_async_device_register(dma_dev);
+	if (ret) {
+		dev_err(dev, "Failed to register slave DMA engine device\n");
+		return ret;
+	}
+
+	ret = of_dma_controller_register(pdev->dev.of_node, ldma_xlate, d);
+	if (ret) {
+		dev_err(dev, "Failed to register of DMA controller\n");
+		dma_async_device_unregister(dma_dev);
+		return ret;
+	}
+
+	dev_info(dev, "Init done - rev: %x, ports: %d, channels: %d\n", d->ver,
+		 d->port_nrs, d->chan_nrs);
+
+	return 0;
+}
+
+static struct platform_driver intel_ldma_driver = {
+	.probe = intel_ldma_probe,
+	.driver = {
+		.name = DRIVER_NAME,
+		.of_match_table = intel_ldma_match,
+	},
+};
+
+static int __init intel_ldma_init(void)
+{
+	return platform_driver_register(&intel_ldma_driver);
+}
+
+device_initcall(intel_ldma_init);
diff --git a/include/linux/dma/lgm_dma.h b/include/linux/dma/lgm_dma.h
new file mode 100644
index 000000000000..3a2ee6ad0710
--- /dev/null
+++ b/include/linux/dma/lgm_dma.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2016 ~ 2019 Intel Corporation.
+ */
+#ifndef LGM_DMA_H
+#define LGM_DMA_H
+
+#include <linux/types.h>
+#include <linux/dmaengine.h>
+
+/*!
+ * \fn int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
+ *                                 int desc_num)
+ * \brief Configure low level channel descriptors
+ * \param[in] chan   pointer to DMA channel that the client is using
+ * \param[in] desc_base   descriptor base physical address
+ * \param[in] desc_num   number of descriptors
+ * \return   0 on success
+ * \return   kernel bug reported on failure
+ *
+ * This function configures the low-level channel descriptors. It is used
+ * by CBM, whose descriptors are not in DDR but are actually registers.
+ */
+int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
+			    int desc_num);
+
+#endif /* LGM_DMA_H */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-14  5:26 ` [PATCH v5 2/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
@ 2020-08-14  8:47   ` kernel test robot
  2020-08-18 10:16   ` Peter Ujfalusi
  1 sibling, 0 replies; 23+ messages in thread
From: kernel test robot @ 2020-08-14  8:47 UTC (permalink / raw)
  To: Amireddy Mallikarjuna reddy, dmaengine, vkoul, devicetree, robh+dt
  Cc: kbuild-all, linux-kernel, andriy.shevchenko, cheol.yong.kim,
	qi-ming.wu, chuanhua.lei, mallikarjunax.reddy

[-- Attachment #1: Type: text/plain, Size: 3146 bytes --]

Hi Amireddy,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on vkoul-dmaengine/next]
[also build test WARNING on v5.8 next-20200814]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Amireddy-Mallikarjuna-reddy/Add-Intel-LGM-soc-DMA-support/20200814-134726
base:   https://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine.git next
config: nios2-allyesconfig (attached as .config)
compiler: nios2-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=nios2 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/dma/lgm/lgm-dma.c:1570:5: warning: no previous prototype for 'update_client_configs' [-Wmissing-prototypes]
    1570 | int update_client_configs(struct of_dma *ofdma, struct of_phandle_args *spec)
         |     ^~~~~~~~~~~~~~~~~~~~~

vim +/update_client_configs +1570 drivers/dma/lgm/lgm-dma.c

  1569	
> 1570	int update_client_configs(struct of_dma *ofdma, struct of_phandle_args *spec)
  1571	{
  1572		struct ldma_dev *d = ofdma->of_dma_data;
  1573		struct ldma_port *p;
  1574		struct ldma_chan *c;
  1575		u32 chan_id =  spec->args[0];
  1576		u32 port_id =  spec->args[1];
  1577	
  1578		if (chan_id >= d->chan_nrs || port_id >= d->port_nrs)
  1579			return 0;
  1580	
  1581		p = &d->ports[port_id];
  1582		c = &d->chans[chan_id];
  1583	
  1584		if (d->ver == DMA_VER22) {
  1585			u32 burst = spec->args[2];
  1586	
  1587			if (burst != 2 && burst != 4 && burst != 8)
  1588				return 0;
  1589	
  1590			/* TX and RX have the same burst length */
  1591			p->txbl = ilog2(burst);
  1592			p->rxbl = p->txbl;
  1593	
  1594			ldma_port_cfg(p);
  1595		} else {
  1596			if (spec->args[2] > 0 && spec->args[2] <= DMA_ENDIAN_TYPE3) {
  1597				c->data_endian = spec->args[2];
  1598				c->data_endian_en = true;
  1599			}
  1600	
  1601			if (spec->args[3] > 0 && spec->args[3] <= DMA_ENDIAN_TYPE3) {
  1602				c->desc_endian = spec->args[3];
  1603				c->desc_endian_en = true;
  1604			}
  1605	
  1606			if (spec->args[4] > 0 && spec->args[4] < 128)
  1607				c->boff_len = spec->args[4];
  1608	
  1609			if (spec->args[5])
  1610				c->desc_rx_np = true;
  1611	
  1612			/*
  1613			 * If channel packet drop enabled, port packet drop should
  1614			 * be enabled
  1615			 */
  1616			if (spec->args[6]) {
  1617				c->pden = true;
  1618				p->pkt_drop = DMA_PKT_DROP_EN;
  1619			}
  1620			ldma_port_cfg(p);
  1621			ldma_chan_cfg(c);
  1622		}
  1623	
  1624		return 1;
  1625	}
  1626	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 56601 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-14  5:26 ` [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC Amireddy Mallikarjuna reddy
@ 2020-08-14 20:32   ` Rob Herring
  2020-08-18  7:00     ` Reddy, MallikarjunaX
  0 siblings, 1 reply; 23+ messages in thread
From: Rob Herring @ 2020-08-14 20:32 UTC (permalink / raw)
  To: Amireddy Mallikarjuna reddy
  Cc: dmaengine, vkoul, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, qi-ming.wu, chuanhua.lei, malliamireddy009

On Fri, Aug 14, 2020 at 01:26:09PM +0800, Amireddy Mallikarjuna reddy wrote:
> Add DT bindings YAML schema for DMA controller driver
> of Lightning Mountain(LGM) SoC.
> 
> Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
> ---
> v1:
> - Initial version.
> 
> v2:
> - Fix bot errors.
> 
> v3:
> - No change.
> 
> v4:
> - Address Thomas langer comments
>   - use node name pattern as dma-controller as in common binding.
>   - Remove "_" (underscore) in instance name.
>   - Remove "port-" and "chan-" in attribute name for both 'dma-ports' & 'dma-channels' child nodes.
> 
> v5:
> - Moved some of the attributes in 'dma-ports' & 'dma-channels' child nodes to dma client/consumer side as cells in 'dmas' properties.
> ---
>  .../devicetree/bindings/dma/intel,ldma.yaml        | 319 +++++++++++++++++++++
>  1 file changed, 319 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml
> 
> diff --git a/Documentation/devicetree/bindings/dma/intel,ldma.yaml b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
> new file mode 100644
> index 000000000000..9beaf191a6de
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
> @@ -0,0 +1,319 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/dma/intel,ldma.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Lightning Mountain centralized low speed DMA and high speed DMA controllers.
> +
> +maintainers:
> +  - chuanhua.lei@intel.com
> +  - mallikarjunax.reddy@intel.com
> +
> +allOf:
> +  - $ref: "dma-controller.yaml#"
> +
> +properties:
> + $nodename:
> +   pattern: "^dma-controller(@.*)?$"
> +
> + "#dma-cells":
> +   const: 1

Example says 3.

> +
> + compatible:
> +  anyOf:
> +   - const: intel,lgm-cdma
> +   - const: intel,lgm-dma2tx
> +   - const: intel,lgm-dma1rx
> +   - const: intel,lgm-dma1tx
> +   - const: intel,lgm-dma0tx
> +   - const: intel,lgm-dma3
> +   - const: intel,lgm-toe-dma30
> +   - const: intel,lgm-toe-dma31

'anyOf' doesn't make sense here. This should be a single 'enum'.

> +
> + reg:
> +  maxItems: 1
> +
> + clocks:
> +  maxItems: 1
> +
> + resets:
> +  maxItems: 1
> +
> + interrupts:
> +  maxItems: 1
> +
> + intel,dma-poll-cnt:
> +   $ref: /schemas/types.yaml#definitions/uint32
> +   description:
> +     DMA descriptor polling counter. It may need fine tuning according
> +     to the system application scenario.
> +
> + intel,dma-byte-en:
> +   type: boolean
> +   description:
> +     DMA byte enable is only valid for DMA write(RX).
> +     Byte enable(1) means DMA write will be based on the number of dwords
> +     instead of the whole burst.
> +
> + intel,dma-drb:
> +    type: boolean
> +    description:
> +      DMA descriptor read back to make sure data and desc synchronization.
> +
> + intel,dma-burst:
> +    $ref: /schemas/types.yaml#definitions/uint32
> +    description:
> +       Specify the DMA burst size (in dwords); the valid values are 8, 16, 32.
> +       Default is 16 for data path dma, 32 is for memcopy DMA.
> +
> + intel,dma-polling-cnt:
> +    $ref: /schemas/types.yaml#definitions/uint32
> +    description:
> +       DMA descriptor polling counter. It may need fine tuning according to
> +       the system application scenario.
> +
> + intel,dma-desc-in-sram:
> +    type: boolean
> +    description:
> +       DMA descriptors in SRAM or not. Some old controllers' descriptors
> +       can be in DRAM or SRAM. The new ones are all in SRAM.
> +
> + intel,dma-orrc:
> +    $ref: /schemas/types.yaml#definitions/uint32
> +    description:
> +       DMA outstanding read counter. The maximum value is 16, and it may
> +       need fine tuning according to the system application scenarios.
> +
> + intel,dma-dburst-wr:
> +    type: boolean
> +    description:
> +       Enable RX dynamic burst write. It only applies to RX DMA and memcopy DMA.
> +
> +
> + dma-ports:
> +    type: object
> +    description:
> +       This sub-node must contain a sub-node for each DMA port.
> +    properties:
> +      '#address-cells':
> +        const: 1
> +      '#size-cells':
> +        const: 0
> +
> +    patternProperties:
> +      "^dma-ports@[0-9]+$":
> +          type: object
> +
> +          properties:
> +            reg:
> +              items:
> +                - enum: [0, 1, 2, 3, 4, 5]
> +              description:
> +                 Which port this node refers to.
> +
> +            intel,name:
> +              $ref: /schemas/types.yaml#definitions/string-array
> +              description:
> +                 Port name of each DMA port.

No other DMA controller needs this, why do you?

> +
> +            intel,chans:
> +              $ref: /schemas/types.yaml#/definitions/uint32-array
> +              description:
> +                 The channels included on this port. Format is channel start
> +                 number and how many channels on this port.

Why does this need to be in DT? This all seems like it can be in the dma 
cells for each client.

> +
> +          required:
> +            - reg
> +            - intel,name
> +            - intel,chans
> +
> +
> + ldma-channels:
> +    type: object
> +    description:
> +       This sub-node must contain a sub-node for each DMA channel.
> +    properties:
> +      '#address-cells':
> +        const: 1
> +      '#size-cells':
> +        const: 0
> +
> +    patternProperties:
> +      "^ldma-channels@[0-15]+$":
> +          type: object
> +
> +          properties:
> +            reg:
> +              items:
> +                - enum: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
> +              description:
> +                 Which channel this node refers to.
> +
> +            intel,desc_num:
> +              $ref: /schemas/types.yaml#/definitions/uint32
> +              description:
> +                 Per channel maximum descriptor number. The max value is 255.
> +
> +            intel,hdr-mode:
> +              $ref: /schemas/types.yaml#/definitions/uint32-array
> +              description:
> +                 The first parameter is header mode size, the second
> +                 parameter is checksum enable or disable. If enabled,
> +                 header mode size is ignored. If disabled, header mode
> +                 size must be provided.
> +
> +            intel,hw-desc:
> +              $ref: /schemas/types.yaml#/definitions/uint32-array
> +              description:
> +                 Per channel dma hardware descriptor configuration.
> +                 The first parameter is the descriptor physical address and the
> +                 second parameter is the hardware descriptor number.

Again, this all seems like per client information for dma cells.

> +
> +          required:
> +            - reg
> +
> +required:
> + - compatible
> + - reg
> + - '#dma-cells'

Add:

additionalProperties: false

> +
> +examples:
> + - |
> +   dma0: dma-controller@e0e00000 {
> +     compatible = "intel,lgm-cdma";
> +     reg = <0xe0e00000 0x1000>;
> +     #dma-cells = <3>;
> +     interrupt-parent = <&ioapic1>;
> +     interrupts = <82 1>;
> +     resets = <&rcu0 0x30 0>;
> +     reset-names = "ctrl";

Not documented.

> +     clocks = <&cgu0 80>;
> +     intel,dma-poll-cnt = <4>;
> +     intel,dma-byte-en;
> +     intel,dma-drb;
> +     dma-ports {
> +       #address-cells = <1>;
> +       #size-cells = <0>;
> +
> +       dma-ports@0 {
> +           reg = <0>;
> +           intel,name = "SPI0";
> +           intel,chans = <0 2>;
> +       };
> +       dma-ports@1 {
> +           reg = <1>;
> +           intel,name = "SPI1";
> +           intel,chans = <2 2>;
> +       };
> +       dma-ports@2 {
> +           reg = <2>;
> +           intel,name = "SPI2";
> +           intel,chans = <4 2>;
> +       };
> +       dma-ports@3 {
> +           reg = <3>;
> +           intel,name = "SPI3";
> +           intel,chans = <6 2>;
> +       };
> +       dma-ports@4 {
> +           reg = <4>;
> +           intel,name = "HSNAND";
> +           intel,chans = <8 2>;
> +       };
> +       dma-ports@5 {
> +           reg = <5>;
> +           intel,name = "PCM";
> +           intel,chans = <10 6>;
> +       };
> +     };
> +     ldma-channels {
> +       #address-cells = <1>;
> +       #size-cells = <0>;
> +
> +       ldma-channels@0 {
> +           reg = <0>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@1 {
> +           reg = <1>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@2 {
> +           reg = <2>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@3 {
> +           reg = <3>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@4 {
> +           reg = <4>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@5 {
> +           reg = <5>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@6 {
> +           reg = <6>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@7 {
> +           reg = <7>;
> +           intel,desc_num = <1>;
> +       };
> +       ldma-channels@8 {
> +           reg = <8>;
> +       };
> +       ldma-channels@9 {
> +           reg = <9>;
> +       };
> +       ldma-channels@10 {
> +           reg = <10>;
> +       };
> +       ldma-channels@11 {
> +           reg = <11>;
> +       };
> +       ldma-channels@12 {
> +           reg = <12>;
> +       };
> +       ldma-channels@13 {
> +           reg = <13>;
> +       };
> +       ldma-channels@14 {
> +           reg = <14>;
> +       };
> +       ldma-channels@15 {
> +           reg = <15>;
> +       };
> +     };
> +   };
> + - |
> +   dma3: dma-controller@ec800000 {
> +     compatible = "intel,lgm-dma3";
> +     reg = <0xec800000 0x1000>;
> +     clocks = <&cgu0 71>;
> +     resets = <&rcu0 0x10 9>;
> +     #dma-cells = <7>;
> +     intel,dma-burst = <32>;
> +     intel,dma-polling-cnt = <16>;
> +     intel,dma-desc-in-sram;
> +     intel,dma-orrc = <16>;
> +     intel,dma-byte-en;
> +     intel,dma-dburst-wr;
> +     ldma-channels {
> +         #address-cells = <1>;
> +         #size-cells = <0>;
> +
> +         ldma-channels@12 {
> +             reg = <12>;
> +             intel,hdr-mode = <128 0>;
> +             intel,hw-desc = <0x20000000 8>;
> +         };
> +         ldma-channels@13 {
> +             reg = <13>;
> +             intel,hdr-mode = <128 0>;
> +         };
> +     };
> +   };
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-14 20:32   ` Rob Herring
@ 2020-08-18  7:00     ` Reddy, MallikarjunaX
  2020-08-25 11:21       ` Vinod Koul
  2020-09-04  6:31       ` Peter Ujfalusi
  0 siblings, 2 replies; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-08-18  7:00 UTC (permalink / raw)
  To: Rob Herring
  Cc: dmaengine, vkoul, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, qi-ming.wu, chuanhua.lei, malliamireddy009

Hi Rob,
Thanks for your valuable comments. Please see my comments inline.

On 8/15/2020 4:32 AM, Rob Herring wrote:
> On Fri, Aug 14, 2020 at 01:26:09PM +0800, Amireddy Mallikarjuna reddy wrote:
>> Add DT bindings YAML schema for DMA controller driver
>> of Lightning Mountain(LGM) SoC.
>>
>> Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
>> ---
>> v1:
>> - Initial version.
>>
>> v2:
>> - Fix bot errors.
>>
>> v3:
>> - No change.
>>
>> v4:
>> - Address Thomas langer comments
>>    - use node name pattern as dma-controller as in common binding.
>>    - Remove "_" (underscore) in instance name.
>>    - Remove "port-" and "chan-" in attribute name for both 'dma-ports' & 'dma-channels' child nodes.
>>
>> v5:
>> - Moved some of the attributes in 'dma-ports' & 'dma-channels' child nodes to dma client/consumer side as cells in 'dmas' properties.
>> ---
>>   .../devicetree/bindings/dma/intel,ldma.yaml        | 319 +++++++++++++++++++++
>>   1 file changed, 319 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/dma/intel,ldma.yaml b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
>> new file mode 100644
>> index 000000000000..9beaf191a6de
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
>> @@ -0,0 +1,319 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/dma/intel,ldma.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Lightning Mountain centralized low speed DMA and high speed DMA controllers.
>> +
>> +maintainers:
>> +  - chuanhua.lei@intel.com
>> +  - mallikarjunax.reddy@intel.com
>> +
>> +allOf:
>> +  - $ref: "dma-controller.yaml#"
>> +
>> +properties:
>> + $nodename:
>> +   pattern: "^dma-controller(@.*)?$"
>> +
>> + "#dma-cells":
>> +   const: 1
> Example says 3.
OK, I will fix it.
>
>> +
>> + compatible:
>> +  anyOf:
>> +   - const: intel,lgm-cdma
>> +   - const: intel,lgm-dma2tx
>> +   - const: intel,lgm-dma1rx
>> +   - const: intel,lgm-dma1tx
>> +   - const: intel,lgm-dma0tx
>> +   - const: intel,lgm-dma3
>> +   - const: intel,lgm-toe-dma30
>> +   - const: intel,lgm-toe-dma31
> 'anyOf' doesn't make sense here. This should be a single 'enum'.
OK, I will update it.
>
>> +
>> + reg:
>> +  maxItems: 1
>> +
>> + clocks:
>> +  maxItems: 1
>> +
>> + resets:
>> +  maxItems: 1
>> +
>> + interrupts:
>> +  maxItems: 1
>> +
>> + intel,dma-poll-cnt:
>> +   $ref: /schemas/types.yaml#definitions/uint32
>> +   description:
>> +     DMA descriptor polling counter. It may need fine tuning according
>> +     to the system application scenario.
>> +
>> + intel,dma-byte-en:
>> +   type: boolean
>> +   description:
>> +     DMA byte enable is only valid for DMA write(RX).
>> +     Byte enable(1) means DMA write will be based on the number of dwords
>> +     instead of the whole burst.
>> +
>> + intel,dma-drb:
>> +    type: boolean
>> +    description:
>> +      DMA descriptor read back to make sure data and desc synchronization.
>> +
>> + intel,dma-burst:
>> +    $ref: /schemas/types.yaml#definitions/uint32
>> +    description:
>> +       Specify the DMA burst size (in dwords); the valid values are 8, 16, 32.
>> +       Default is 16 for data path dma, 32 is for memcopy DMA.
>> +
>> + intel,dma-polling-cnt:
>> +    $ref: /schemas/types.yaml#definitions/uint32
>> +    description:
>> +       DMA descriptor polling counter. It may need fine tuning according to
>> +       the system application scenario.
>> +
>> + intel,dma-desc-in-sram:
>> +    type: boolean
>> +    description:
>> +       DMA descriptors in SRAM or not. Some old controllers' descriptors
>> +       can be in DRAM or SRAM. The new ones are all in SRAM.
>> +
>> + intel,dma-orrc:
>> +    $ref: /schemas/types.yaml#definitions/uint32
>> +    description:
>> +       DMA outstanding read counter. The maximum value is 16, and it may
>> +       need fine tuning according to the system application scenarios.
>> +
>> + intel,dma-dburst-wr:
>> +    type: boolean
>> +    description:
>> +       Enable RX dynamic burst write. It only applies to RX DMA and memcopy DMA.
>> +
>> +
>> + dma-ports:
>> +    type: object
>> +    description:
>> +       This sub-node must contain a sub-node for each DMA port.
>> +    properties:
>> +      '#address-cells':
>> +        const: 1
>> +      '#size-cells':
>> +        const: 0
>> +
>> +    patternProperties:
>> +      "^dma-ports@[0-9]+$":
>> +          type: object
>> +
>> +          properties:
>> +            reg:
>> +              items:
>> +                - enum: [0, 1, 2, 3, 4, 5]
>> +              description:
>> +                 Which port this node refers to.
>> +
>> +            intel,name:
>> +              $ref: /schemas/types.yaml#definitions/string-array
>> +              description:
>> +                 Port name of each DMA port.
> No other DMA controller needs this, why do you?
  Answered below. (*ABC)
>
>> +
>> +            intel,chans:
>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>> +              description:
>> +                 The channels included on this port. Format is channel start
>> +                 number and how many channels on this port.
> Why does this need to be in DT? This all seems like it can be in the dma
> cells for each client.
(*ABC)
Yes, we need this.
For dma0 (lgm-cdma), the old SoC supports 16 channels and the new SoC
supports 22 channels, and the logical channel mapping for the
peripherals also differs between the old and new SoCs.

Because of this hardware limitation we configure the total number of
channels and the port-channel mapping dynamically from the device tree.

Based on the port name we configure the default values for the
different peripherals (ports).
For example, the burst length is not the same for all ports, so we use
the port name to apply the default configuration.
>
>> +
>> +          required:
>> +            - reg
>> +            - intel,name
>> +            - intel,chans
>> +
>> +
>> + ldma-channels:
>> +    type: object
>> +    description:
>> +       This sub-node must contain a sub-node for each DMA channel.
>> +    properties:
>> +      '#address-cells':
>> +        const: 1
>> +      '#size-cells':
>> +        const: 0
>> +
>> +    patternProperties:
>> +      "^ldma-channels@[0-15]+$":
>> +          type: object
>> +
>> +          properties:
>> +            reg:
>> +              items:
>> +                - enum: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
>> +              description:
>> +                 Which channel this node refers to.
>> +
>> +            intel,desc_num:
>> +              $ref: /schemas/types.yaml#/definitions/uint32
>> +              description:
>> +                 Per channel maximum descriptor number. The max value is 255.
>> +
>> +            intel,hdr-mode:
>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>> +              description:
>> +                 The first parameter is header mode size, the second
>> +                 parameter is checksum enable or disable. If enabled,
>> +                 header mode size is ignored. If disabled, header mode
>> +                 size must be provided.
>> +
>> +            intel,hw-desc:
>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>> +              description:
>> +                 Per channel dma hardware descriptor configuration.
>> +                 The first parameter is the descriptor physical address and the
>> +                 second parameter is the hardware descriptor number.
> Again, this all seems like per client information for dma cells.
 OK, if we move all these attributes to 'dmas' then the 'dma-channels'
child node is not needed in the dtsi.
For #dma-cells I am already using 7; if we move all these attributes
to 'dmas' the number of integer cells will increase further.

Is there any limitation on the number of integer cells used, as
determined by the #dma-cells property?

>
>> +
>> +          required:
>> +            - reg
>> +
>> +required:
>> + - compatible
>> + - reg
>> + - '#dma-cells'
> Add:
>
> additionalProperties: false
Sure, I will update it in the next patch.

>
>> +
>> +examples:
>> + - |
>> +   dma0: dma-controller@e0e00000 {
>> +     compatible = "intel,lgm-cdma";
>> +     reg = <0xe0e00000 0x1000>;
>> +     #dma-cells = <3>;
>> +     interrupt-parent = <&ioapic1>;
>> +     interrupts = <82 1>;
>> +     resets = <&rcu0 0x30 0>;
>> +     reset-names = "ctrl";
> Not documented.
OK, I will document it in the next patch.
>
>> +     clocks = <&cgu0 80>;
>> +     intel,dma-poll-cnt = <4>;
>> +     intel,dma-byte-en;
>> +     intel,dma-drb;
>> +     dma-ports {
>> +       #address-cells = <1>;
>> +       #size-cells = <0>;
>> +
>> +       dma-ports@0 {
>> +           reg = <0>;
>> +           intel,name = "SPI0";
>> +           intel,chans = <0 2>;
>> +       };
>> +       dma-ports@1 {
>> +           reg = <1>;
>> +           intel,name = "SPI1";
>> +           intel,chans = <2 2>;
>> +       };
>> +       dma-ports@2 {
>> +           reg = <2>;
>> +           intel,name = "SPI2";
>> +           intel,chans = <4 2>;
>> +       };
>> +       dma-ports@3 {
>> +           reg = <3>;
>> +           intel,name = "SPI3";
>> +           intel,chans = <6 2>;
>> +       };
>> +       dma-ports@4 {
>> +           reg = <4>;
>> +           intel,name = "HSNAND";
>> +           intel,chans = <8 2>;
>> +       };
>> +       dma-ports@5 {
>> +           reg = <5>;
>> +           intel,name = "PCM";
>> +           intel,chans = <10 6>;
>> +       };
>> +     };
>> +     ldma-channels {
>> +       #address-cells = <1>;
>> +       #size-cells = <0>;
>> +
>> +       ldma-channels@0 {
>> +           reg = <0>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@1 {
>> +           reg = <1>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@2 {
>> +           reg = <2>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@3 {
>> +           reg = <3>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@4 {
>> +           reg = <4>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@5 {
>> +           reg = <5>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@6 {
>> +           reg = <6>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@7 {
>> +           reg = <7>;
>> +           intel,desc_num = <1>;
>> +       };
>> +       ldma-channels@8 {
>> +           reg = <8>;
>> +       };
>> +       ldma-channels@9 {
>> +           reg = <9>;
>> +       };
>> +       ldma-channels@10 {
>> +           reg = <10>;
>> +       };
>> +       ldma-channels@11 {
>> +           reg = <11>;
>> +       };
>> +       ldma-channels@12 {
>> +           reg = <12>;
>> +       };
>> +       ldma-channels@13 {
>> +           reg = <13>;
>> +       };
>> +       ldma-channels@14 {
>> +           reg = <14>;
>> +       };
>> +       ldma-channels@15 {
>> +           reg = <15>;
>> +       };
>> +     };
>> +   };
>> + - |
>> +   dma3: dma-controller@ec800000 {
>> +     compatible = "intel,lgm-dma3";
>> +     reg = <0xec800000 0x1000>;
>> +     clocks = <&cgu0 71>;
>> +     resets = <&rcu0 0x10 9>;
>> +     #dma-cells = <7>;
>> +     intel,dma-burst = <32>;
>> +     intel,dma-polling-cnt = <16>;
>> +     intel,dma-desc-in-sram;
>> +     intel,dma-orrc = <16>;
>> +     intel,dma-byte-en;
>> +     intel,dma-dburst-wr;
>> +     ldma-channels {
>> +         #address-cells = <1>;
>> +         #size-cells = <0>;
>> +
>> +         ldma-channels@12 {
>> +             reg = <12>;
>> +             intel,hdr-mode = <128 0>;
>> +             intel,hw-desc = <0x20000000 8>;
>> +         };
>> +         ldma-channels@13 {
>> +             reg = <13>;
>> +             intel,hdr-mode = <128 0>;
>> +         };
>> +     };
>> +   };
>> -- 
>> 2.11.0
>>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-14  5:26 ` [PATCH v5 2/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
  2020-08-14  8:47   ` kernel test robot
@ 2020-08-18 10:16   ` Peter Ujfalusi
  2020-08-18 10:29     ` Peter Ujfalusi
  2020-08-24  2:30     ` Reddy, MallikarjunaX
  1 sibling, 2 replies; 23+ messages in thread
From: Peter Ujfalusi @ 2020-08-18 10:16 UTC (permalink / raw)
  To: Amireddy Mallikarjuna reddy, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009

Hi,

On 14/08/2020 8.26, Amireddy Mallikarjuna reddy wrote:
> Add DMA controller driver for Lightning Mountain(LGM) family of SoCs.
> 
> The main function of the DMA controller is the transfer of data from/to any
> DPlus compliant peripheral to/from the memory. A memory to memory copy
> capability can also be configured.
> 
> This ldma driver is used to configure the device and channels for data
> and control paths.
> 
> Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>

...

> +static int ldma_chan_cfg(struct ldma_chan *c)
> +{
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	u32 reg;
> +
> +	reg = c->pden ? DMA_CCTRL_PDEN : 0;
> +	reg |= c->onoff ? DMA_CCTRL_ON : 0;
> +	reg |= c->rst ? DMA_CCTRL_RST : 0;
> +
> +	ldma_chan_cctrl_cfg(c, reg);
> +	ldma_chan_irq_init(c);
> +
> +	if (d->ver > DMA_VER22) {
> +		ldma_chan_set_class(c, c->nr);
> +		ldma_chan_byte_offset_cfg(c, c->boff_len);
> +		ldma_chan_data_endian_cfg(c, c->data_endian_en, c->data_endian);
> +		ldma_chan_desc_endian_cfg(c, c->desc_endian_en, c->desc_endian);
> +		ldma_chan_hdr_mode_cfg(c, c->hdrm_len, c->hdrm_csum);
> +		ldma_chan_rxwr_np_cfg(c, c->desc_rx_np);
> +		ldma_chan_abc_cfg(c, c->abc_en);

Each of these functions will lock and unlock the same lock; would it
make sense to restructure things so there is less activity on the spinlock?
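
For example (untested sketch; the __ldma_chan_*_cfg() names are
hypothetical unlocked variants of the existing helpers):

	unsigned long flags;

	spin_lock_irqsave(&d->dev_lock, flags);
	writel(c->nr, d->base + DMA_CS);
	__ldma_chan_byte_offset_cfg(c, c->boff_len);
	__ldma_chan_data_endian_cfg(c, c->data_endian_en, c->data_endian);
	__ldma_chan_desc_endian_cfg(c, c->desc_endian_en, c->desc_endian);
	/* ... program the remaining per-channel fields ... */
	spin_unlock_irqrestore(&d->dev_lock, flags);

i.e. select the channel once and program all of its fields under a
single lock/unlock pair.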

> +
> +		if (ldma_chan_is_hw_desc(c))
> +			ldma_chan_desc_hw_cfg(c, c->desc_phys, c->desc_cnt);
> +	}
> +
> +	return 0;
> +}

...

> +static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
> +{
> +	struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);
> +	struct ldma_chan *c = ds->chan;
> +
> +	dma_pool_free(c->desc_pool, ds->desc_hw, ds->desc_phys);
> +	kfree(ds);
> +	c->ds = NULL;

Is there a chance that c->ds != ds?
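
If so, an untested guard like

	if (c->ds == ds)
		c->ds = NULL;

would avoid clearing a pointer that already refers to a different
descriptor.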

> +}
> +
> +static struct dw2_desc_sw *
> +dma_alloc_desc_resource(int num, struct ldma_chan *c)
> +{
> +	struct device *dev = c->vchan.chan.device->dev;
> +	struct dw2_desc_sw *ds;
> +
> +	if (num > c->desc_num) {
> +		dev_err(dev, "sg num %d exceed max %d\n", num, c->desc_num);
> +		return NULL;
> +	}
> +
> +	ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
> +	if (!ds)
> +		return NULL;
> +
> +	ds->chan = c;
> +
> +	ds->desc_hw = dma_pool_zalloc(c->desc_pool, GFP_ATOMIC,
> +				      &ds->desc_phys);
> +	if (!ds->desc_hw) {
> +		dev_dbg(dev, "out of memory for link descriptor\n");
> +		kfree(ds);
> +		return NULL;
> +	}
> +	ds->desc_cnt = num;
> +
> +	return ds;
> +}
> +
> +static void ldma_chan_irq_en(struct ldma_chan *c)
> +{
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&d->dev_lock, flags);
> +	writel(c->nr, d->base + DMA_CS);
> +	writel(DMA_CI_EOP, d->base + DMA_CIE);
> +	writel(BIT(c->nr), d->base + DMA_IRNEN);
> +	spin_unlock_irqrestore(&d->dev_lock, flags);
> +}
> +
> +static void dma_issue_pending(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	unsigned long flags;
> +
> +	if (d->ver == DMA_VER22) {
> +		spin_lock_irqsave(&c->vchan.lock, flags);
> +		if (vchan_issue_pending(&c->vchan)) {
> +			struct virt_dma_desc *vdesc;
> +
> +			/* Get the next descriptor */
> +			vdesc = vchan_next_desc(&c->vchan);
> +			if (!vdesc) {
> +				c->ds = NULL;
> +				return;
> +			}
> +			list_del(&vdesc->node);
> +			c->ds = to_lgm_dma_desc(vdesc);

you have set c->ds in dma_prep_slave_sg and the only way I can see that
you will not leak memory is that the client must terminate_sync() after
each transfer so that the synchronize callback is invoked between each
prep_sg/issue_pending/completion.

> +			spin_unlock_irqrestore(&c->vchan.lock, flags);
> +			ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
> +			ldma_chan_irq_en(c);
> +		}

If there is nothing pending, you will return with the spinlock still held...
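
Something along these lines (untested) keeps the locking balanced on
all paths:

	struct virt_dma_desc *vdesc = NULL;

	spin_lock_irqsave(&c->vchan.lock, flags);
	if (vchan_issue_pending(&c->vchan)) {
		vdesc = vchan_next_desc(&c->vchan);
		if (vdesc)
			list_del(&vdesc->node);
	}
	spin_unlock_irqrestore(&c->vchan.lock, flags);

	if (vdesc) {
		c->ds = to_lgm_dma_desc(vdesc);
		ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
		ldma_chan_irq_en(c);
	}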

> +	}
> +	ldma_chan_on(c);
> +}
> +
> +static void dma_synchronize(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +
> +	/*
> +	 * clear any pending work if any. In that
> +	 * case the resource needs to be free here.
> +	 */
> +	cancel_work_sync(&c->work);
> +	vchan_synchronize(&c->vchan);
> +	if (c->ds)
> +		dma_free_desc_resource(&c->ds->vdesc);
> +}
> +
> +static int dma_terminate_all(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +
> +	spin_lock_irqsave(&c->vchan.lock, flags);
> +	vchan_get_all_descriptors(&c->vchan, &head);
> +	spin_unlock_irqrestore(&c->vchan.lock, flags);
> +	vchan_dma_desc_free_list(&c->vchan, &head);
> +
> +	return ldma_chan_reset(c);
> +}
> +
> +static int dma_resume_chan(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +
> +	ldma_chan_on(c);
> +
> +	return 0;
> +}
> +
> +static int dma_pause_chan(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +
> +	return ldma_chan_off(c);
> +}
> +
> +static enum dma_status
> +dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
> +	      struct dma_tx_state *txstate)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	enum dma_status status = DMA_COMPLETE;
> +
> +	if (d->ver == DMA_VER22)
> +		status = dma_cookie_status(chan, cookie, txstate);
> +
> +	return status;
> +}
> +
> +static void dma_chan_irq(int irq, void *data)
> +{
> +	struct ldma_chan *c = data;
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	u32 stat;
> +
> +	/* Disable channel interrupts  */
> +	writel(c->nr, d->base + DMA_CS);
> +	stat = readl(d->base + DMA_CIS);
> +	if (!stat)
> +		return;
> +
> +	writel(readl(d->base + DMA_CIE) & ~DMA_CI_ALL, d->base + DMA_CIE);
> +	writel(stat, d->base + DMA_CIS);
> +	queue_work(d->wq, &c->work);
> +}
> +
> +static irqreturn_t dma_interrupt(int irq, void *dev_id)
> +{
> +	struct ldma_dev *d = dev_id;
> +	struct ldma_chan *c;
> +	unsigned long irncr;
> +	u32 cid;
> +
> +	irncr = readl(d->base + DMA_IRNCR);
> +	if (!irncr) {
> +		dev_err(d->dev, "dummy interrupt\n");
> +		return IRQ_NONE;
> +	}
> +
> +	for_each_set_bit(cid, &irncr, d->chan_nrs) {
> +		/* Mask */
> +		writel(readl(d->base + DMA_IRNEN) & ~BIT(cid), d->base + DMA_IRNEN);
> +		/* Ack */
> +		writel(readl(d->base + DMA_IRNCR) | BIT(cid), d->base + DMA_IRNCR);
> +
> +		c = &d->chans[cid];
> +		dma_chan_irq(irq, c);
> +	}
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static struct dma_async_tx_descriptor *
> +dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
> +		  unsigned int sglen, enum dma_transfer_direction dir,
> +		  unsigned long flags, void *context)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	size_t len, avail, total = 0;
> +	struct dw2_desc *hw_ds;
> +	struct dw2_desc_sw *ds;
> +	struct scatterlist *sg;
> +	int num = sglen, i;
> +	dma_addr_t addr;
> +
> +	if (!sgl)
> +		return NULL;
> +
> +	for_each_sg(sgl, sg, sglen, i) {
> +		avail = sg_dma_len(sg);
> +		if (avail > DMA_MAX_SIZE)
> +			num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
> +	}
> +
> +	ds = dma_alloc_desc_resource(num, c);
> +	if (!ds)
> +		return NULL;
> +
> +	c->ds = ds;

If you still have a transfer running then you are going to lose that
descriptor?
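
An untested guard like

	if (c->ds)
		return NULL;

would at least make the overwrite visible, but the cleaner fix is to
not touch c->ds in prepare at all and only pick it up in
issue_pending() from the vchan list.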

> +
> +	num = 0;
> +	/* sop and eop have to be handled nicely */
> +	for_each_sg(sgl, sg, sglen, i) {
> +		addr = sg_dma_address(sg);
> +		avail = sg_dma_len(sg);
> +		total += avail;
> +
> +		do {
> +			len = min_t(size_t, avail, DMA_MAX_SIZE);
> +
> +			hw_ds = &ds->desc_hw[num];
> +			switch (sglen) {
> +			case 1:
> +				hw_ds->status.field.sop = 1;
> +				hw_ds->status.field.eop = 1;
> +				break;
> +			default:
> +				if (num == 0) {
> +					hw_ds->status.field.sop = 1;
> +					hw_ds->status.field.eop = 0;
> +				} else if (num == (sglen - 1)) {
> +					hw_ds->status.field.sop = 0;
> +					hw_ds->status.field.eop = 1;
> +				} else {
> +					hw_ds->status.field.sop = 0;
> +					hw_ds->status.field.eop = 0;
> +				}
> +				break;
> +			}
> +
> +			/* Only 32-bit addresses supported */
> +			hw_ds->addr = (u32)addr;
> +			hw_ds->status.field.len = len;
> +			hw_ds->status.field.c = 0;
> +			hw_ds->status.field.bofs = addr & 0x3;
> +			/* Ensure data ready before ownership change */
> +			wmb();
> +			hw_ds->status.field.own = DMA_OWN;
> +			/* Ensure ownership changed before moving forward */
> +			wmb();
> +			num++;
> +			addr += len;
> +			avail -= len;
> +		} while (avail);
> +	}
> +
> +	ds->size = total;
> +
> +	return vchan_tx_prep(&c->vchan, &ds->vdesc, DMA_CTRL_ACK);
> +}
> +
> +static int
> +dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	struct ldma_port *p = c->port;
> +	unsigned long flags;
> +	u32 bl;
> +
> +	if ((cfg->direction == DMA_DEV_TO_MEM &&
> +	     cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
> +	    (cfg->direction == DMA_MEM_TO_DEV &&
> +	     cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||

According to the probe function these width restrictions are only valid
for DMA_VER22?

> +	    !is_slave_direction(cfg->direction))
> +		return -EINVAL;
> +
> +	/* Default setting will be used */
> +	if (!cfg->src_maxburst && !cfg->dst_maxburst)
> +		return 0;

maxburst == 0 is identical to maxburst == 1, it is just not set
explicitly. IOW, 1 word per DMA request.

> +
> +	/* Must be the same */
> +	if (cfg->src_maxburst && cfg->dst_maxburst &&
> +	    cfg->src_maxburst != cfg->dst_maxburst)
> +		return -EINVAL;
> +
> +	if (cfg->dst_maxburst)
> +		cfg->src_maxburst = cfg->dst_maxburst;
> +
> +	bl = ilog2(cfg->src_maxburst);
> +
> +	spin_lock_irqsave(&d->dev_lock, flags);
> +	writel(p->portid, d->base + DMA_PS);
> +	ldma_update_bits(d, DMA_PCTRL_RXBL | DMA_PCTRL_TXBL,
> +			 FIELD_PREP(DMA_PCTRL_RXBL, bl) |
> +			 FIELD_PREP(DMA_PCTRL_TXBL, bl), DMA_PCTRL);
> +	spin_unlock_irqrestore(&d->dev_lock, flags);

What drivers usually do is save the cfg and at prepare time take it
into account when setting up the transfer.
Write the change to the HW before the transfer is started (if it has
changed from the previous settings).

Client drivers usually set the slave config once, in most cases during
probe, so the slave config rarely changes at runtime, but there are cases
for that.
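
An untested sketch of that pattern (it assumes a new 'struct
dma_slave_config config' member in struct ldma_chan):

static int dma_slave_config(struct dma_chan *chan,
			    struct dma_slave_config *cfg)
{
	struct ldma_chan *c = to_ldma_chan(chan);

	/* validate as above, then just store the config */
	memcpy(&c->config, cfg, sizeof(*cfg));

	return 0;
}

and then program DMA_PCTRL from c->config when the transfer is set up,
and only if the burst length actually changed.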

> +
> +	return 0;
> +}
> +
> +static int dma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +	struct device *dev = c->vchan.chan.device->dev;
> +	size_t	desc_sz;
> +
> +	if (d->ver > DMA_VER22) {
> +		c->flags |= CHAN_IN_USE;
> +		return 0;
> +	}
> +
> +	if (c->desc_pool)
> +		return c->desc_num;
> +
> +	desc_sz = c->desc_num * sizeof(struct dw2_desc);
> +	c->desc_pool = dma_pool_create(c->name, dev, desc_sz,
> +				       __alignof__(struct dw2_desc), 0);
> +
> +	if (!c->desc_pool) {
> +		dev_err(dev, "unable to allocate descriptor pool\n");
> +		return -ENOMEM;
> +	}
> +
> +	return c->desc_num;
> +}
> +
> +static void dma_free_chan_resources(struct dma_chan *chan)
> +{
> +	struct ldma_chan *c = to_ldma_chan(chan);
> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
> +
> +	if (d->ver == DMA_VER22) {
> +		dma_pool_destroy(c->desc_pool);
> +		c->desc_pool = NULL;
> +		vchan_free_chan_resources(to_virt_chan(chan));
> +		ldma_chan_reset(c);
> +	} else {
> +		c->flags &= ~CHAN_IN_USE;
> +	}
> +}
> +
> +static void dma_work(struct work_struct *work)
> +{
> +	struct ldma_chan *c = container_of(work, struct ldma_chan, work);
> +	struct dma_async_tx_descriptor *tx = &c->ds->vdesc.tx;
> +	struct dmaengine_desc_callback cb;
> +
> +	dmaengine_desc_get_callback(tx, &cb);
> +	dma_cookie_complete(tx);
> +	dmaengine_desc_callback_invoke(&cb, NULL);

When are you going to free up the descriptor?
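
vchan_cookie_complete() completes the cookie, invokes the callback from
the vchan tasklet and queues the descriptor for desc_free, so an
untested sketch like this would also release it:

static void dma_work(struct work_struct *work)
{
	struct ldma_chan *c = container_of(work, struct ldma_chan, work);
	struct dw2_desc_sw *ds = c->ds;
	unsigned long flags;

	spin_lock_irqsave(&c->vchan.lock, flags);
	c->ds = NULL;
	vchan_cookie_complete(&ds->vdesc);
	spin_unlock_irqrestore(&c->vchan.lock, flags);
}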

> +}
> +
> +int update_client_configs(struct of_dma *ofdma, struct of_phandle_args *spec)
> +{
> +	struct ldma_dev *d = ofdma->of_dma_data;
> +	struct ldma_port *p;
> +	struct ldma_chan *c;
> +	u32 chan_id =  spec->args[0];
> +	u32 port_id =  spec->args[1];
> +
> +	if (chan_id >= d->chan_nrs || port_id >= d->port_nrs)
> +		return 0;
> +
> +	p = &d->ports[port_id];
> +	c = &d->chans[chan_id];
> +
> +	if (d->ver == DMA_VER22) {
> +		u32 burst = spec->args[2];
> +
> +		if (burst != 2 && burst != 4 && burst != 8)
> +			return 0;
> +
> +		/* TX and RX have the same burst length */
> +		p->txbl = ilog2(burst);
> +		p->rxbl = p->txbl;
> +
> +		ldma_port_cfg(p);
> +	} else {
> +		if (spec->args[2] > 0 && spec->args[2] <= DMA_ENDIAN_TYPE3) {
> +			c->data_endian = spec->args[2];
> +			c->data_endian_en = true;
> +		}
> +
> +		if (spec->args[3] > 0 && spec->args[3] <= DMA_ENDIAN_TYPE3) {
> +			c->desc_endian = spec->args[3];
> +			c->desc_endian_en = true;
> +		}
> +
> +		if (spec->args[4] > 0 && spec->args[4] < 128)
> +			c->boff_len = spec->args[4];
> +
> +		if (spec->args[5])
> +			c->desc_rx_np = true;
> +
> +		/*
> +		 * If channel packet drop enabled, port packet drop should
> +		 * be enabled
> +		 */
> +		if (spec->args[6]) {
> +			c->pden = true;
> +			p->pkt_drop = DMA_PKT_DROP_EN;
> +		}
> +		ldma_port_cfg(p);
> +		ldma_chan_cfg(c);
> +	}
> +
> +	return 1;
> +}
> +
> +static struct dma_chan *ldma_xlate(struct of_phandle_args *spec,
> +				   struct of_dma *ofdma)
> +{
> +	struct ldma_dev *d = ofdma->of_dma_data;
> +	u32 chan_id =  spec->args[0];
> +	int ret;
> +
> +	if (!spec->args_count)
> +		return NULL;
> +
> +	/* if args_count is 1 use driver default config settings */
> +	if (spec->args_count > 1) {
> +		ret = update_client_configs(ofdma, spec);
> +		if (!ret)
> +			return NULL;
> +	}
> +
> +	return dma_get_slave_channel(&d->chans[chan_id].vchan.chan);
> +}
> +
> +static void ldma_clk_disable(void *data)
> +{
> +	struct ldma_dev *d = data;
> +
> +	clk_disable_unprepare(d->core_clk);
> +}
> +
> +static struct dma_dev_ops dma0_ops = {
> +	.device_alloc_chan_resources = dma_alloc_chan_resources,
> +	.device_free_chan_resources = dma_free_chan_resources,
> +	.device_config = dma_slave_config,
> +	.device_prep_slave_sg = dma_prep_slave_sg,
> +	.device_tx_status = dma_tx_status,
> +	.device_pause = dma_pause_chan,
> +	.device_resume = dma_resume_chan,
> +	.device_terminate_all = dma_terminate_all,
> +	.device_synchronize = dma_synchronize,
> +	.device_issue_pending = dma_issue_pending,
> +};
> +
> +static struct dma_dev_ops hdma_ops = {
> +	.device_alloc_chan_resources = dma_alloc_chan_resources,
> +	.device_free_chan_resources = dma_free_chan_resources,
> +	.device_terminate_all = dma_terminate_all,
> +	.device_issue_pending = dma_issue_pending,
> +	.device_tx_status = dma_tx_status,
> +	.device_resume = dma_resume_chan,
> +	.device_pause = dma_pause_chan,
> +};
> +
> +static const struct ldma_inst_data dma0 = {
> +	.name = "dma0",
> +	.ops = &dma0_ops,
> +};
> +
> +static const struct ldma_inst_data dma2tx = {
> +	.name = "dma2tx",
> +	.type = DMA_TYPE_TX,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct ldma_inst_data dma1rx = {
> +	.name = "dma1rx",
> +	.type = DMA_TYPE_RX,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct ldma_inst_data dma1tx = {
> +	.name = "dma1tx",
> +	.type = DMA_TYPE_TX,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct ldma_inst_data dma0tx = {
> +	.name = "dma0tx",
> +	.type = DMA_TYPE_TX,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct ldma_inst_data dma3 = {
> +	.name = "dma3",
> +	.type = DMA_TYPE_MCPY,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct ldma_inst_data toe_dma30 = {
> +	.name = "toe_dma30",
> +	.type = DMA_TYPE_MCPY,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct ldma_inst_data toe_dma31 = {
> +	.name = "toe_dma31",
> +	.type = DMA_TYPE_MCPY,
> +	.ops = &hdma_ops,
> +};
> +
> +static const struct of_device_id intel_ldma_match[] = {
> +	{ .compatible = "intel,lgm-cdma", .data = &dma0},
> +	{ .compatible = "intel,lgm-dma2tx", .data = &dma2tx},
> +	{ .compatible = "intel,lgm-dma1rx", .data = &dma1rx},
> +	{ .compatible = "intel,lgm-dma1tx", .data = &dma1tx},
> +	{ .compatible = "intel,lgm-dma0tx", .data = &dma0tx},
> +	{ .compatible = "intel,lgm-dma3", .data = &dma3},
> +	{ .compatible = "intel,lgm-toe-dma30", .data = &toe_dma30},
> +	{ .compatible = "intel,lgm-toe-dma31", .data = &toe_dma31},
> +	{}
> +};
> +
> +static int intel_ldma_probe(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct dma_device *dma_dev;
> +	struct ldma_chan *c;
> +	struct ldma_port *p;
> +	struct ldma_dev *d;
> +	u32 id, bitn = 32;
> +	int i, j, k, ret;
> +
> +	d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
> +	if (!d)
> +		return -ENOMEM;
> +
> +	/* Link controller to platform device */
> +	d->dev = &pdev->dev;
> +
> +	d->inst = device_get_match_data(dev);
> +	if (!d->inst) {
> +		dev_err(dev, "No device match found\n");
> +		return -ENODEV;
> +	}
> +
> +	d->base = devm_platform_ioremap_resource(pdev, 0);
> +	if (IS_ERR(d->base))
> +		return PTR_ERR(d->base);
> +
> +	/* Power up and reset the dma engine, some DMAs always on?? */
> +	d->core_clk = devm_clk_get_optional(dev, NULL);
> +	if (IS_ERR(d->core_clk))
> +		return PTR_ERR(d->core_clk);
> +	clk_prepare_enable(d->core_clk);
> +
> +	ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
> +	if (ret) {
> +		dev_err(dev, "Failed to devm_add_action_or_reset, %d\n", ret);
> +		return ret;
> +	}
> +
> +	d->rst = devm_reset_control_get_optional(dev, NULL);
> +	if (IS_ERR(d->rst))
> +		return PTR_ERR(d->rst);
> +	reset_control_deassert(d->rst);
> +
> +	id = readl(d->base + DMA_ID);
> +	d->chan_nrs = FIELD_GET(DMA_ID_CHNR, id);
> +	d->port_nrs = FIELD_GET(DMA_ID_PNR, id);
> +	d->ver = FIELD_GET(DMA_ID_REV, id);
> +
> +	if (id & DMA_ID_AW_36B)
> +		d->flags |= DMA_ADDR_36BIT;
> +
> +	if (IS_ENABLED(CONFIG_64BIT)) {
> +		if (id & DMA_ID_AW_36B)
> +			bitn = 36;
> +	}
> +
> +	if (id & DMA_ID_DW_128B)
> +		d->flags |= DMA_DATA_128BIT;
> +
> +	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(bitn));
> +	if (ret) {
> +		dev_err(dev, "No usable DMA configuration\n");
> +		return ret;
> +	}
> +
> +	if (d->ver == DMA_VER22) {
> +		d->irq = platform_get_irq(pdev, 0);
> +		if (d->irq < 0)
> +			return d->irq;
> +
> +		ret = devm_request_irq(&pdev->dev, d->irq, dma_interrupt,
> +				       0, DRIVER_NAME, d);
> +		if (ret)
> +			return ret;
> +
> +		d->wq = alloc_ordered_workqueue("dma_wq", WQ_MEM_RECLAIM |
> +						WQ_HIGHPRI);
> +		if (!d->wq)
> +			return -ENOMEM;
> +	}
> +
> +	dma_dev = &d->dma_dev;
> +	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
> +
> +	/* Channel initializations */
> +	INIT_LIST_HEAD(&dma_dev->channels);
> +
> +	/* Port Initializations */
> +	d->ports = devm_kcalloc(dev, d->port_nrs, sizeof(*p), GFP_KERNEL);
> +	if (!d->ports)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < d->port_nrs; i++) {
> +		p = &d->ports[i];
> +		p->portid = i;
> +		p->ldev = d;
> +	}
> +
> +	ret = ldma_cfg_init(d);
> +	if (ret)
> +		return ret;
> +
> +	dma_dev->dev = &pdev->dev;
> +	/*
> +	 * Link channel id to channel index and link to dma channel list
> +	 * It also back points to controller and its port
> +	 */
> +	for (i = 0, k = 0; i < d->port_nrs; i++) {
> +		if (d->ver == DMA_VER22) {
> +			u32 chan_end;
> +
> +			p = &d->ports[i];
> +			chan_end = p->chan_start + p->chan_sz;
> +			for (j = p->chan_start; j < chan_end; j++) {
> +				c = &d->chans[k];
> +				c->port = p;
> +				c->nr = j; /* Real channel number */
> +				c->rst = DMA_CHAN_RST;
> +				snprintf(c->name, sizeof(c->name), "chan%d",
> +					 c->nr);
> +				INIT_WORK(&c->work, dma_work);
> +				c->vchan.desc_free = dma_free_desc_resource;
> +				vchan_init(&c->vchan, dma_dev);
> +				k++;
> +			}
> +		} else {
> +			p = &d->ports[i];
> +			for (i = 0; i < d->chan_nrs; i++) {
> +				c = &d->chans[i];
> +				c->port = p;
> +				c->data_endian = DMA_DFT_ENDIAN;
> +				c->desc_endian = DMA_DFT_ENDIAN;
> +				c->data_endian_en = false;
> +				c->desc_endian_en = false;
> +				c->desc_rx_np = false;
> +				c->flags |= DEVICE_ALLOC_DESC;
> +				c->onoff = DMA_CH_OFF;
> +				c->rst = DMA_CHAN_RST;
> +				c->abc_en = true;
> +				c->nr = i;
> +				c->vchan.desc_free = dma_free_desc_resource;
> +				vchan_init(&c->vchan, dma_dev);
> +			}
> +		}
> +	}
> +
> +	/* Set DMA capabilities */
> +	dma_cap_zero(dma_dev->cap_mask);

You just cleared the DMA_SLAVE capability you set earlier...
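
A minimal sketch of the intended ordering (assuming DMA_SLAVE is the
only capability you want to advertise):

	/* Clear the mask first, then set what this instance supports */
	dma_cap_zero(dma_dev->cap_mask);
	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);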

> +
> +	dma_dev->device_alloc_chan_resources =
> +		d->inst->ops->device_alloc_chan_resources;
> +	dma_dev->device_free_chan_resources =
> +		d->inst->ops->device_free_chan_resources;
> +	dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
> +	dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
> +	dma_dev->device_tx_status = d->inst->ops->device_tx_status;
> +	dma_dev->device_resume = d->inst->ops->device_resume;
> +	dma_dev->device_pause = d->inst->ops->device_pause;
> +	dma_dev->device_config = d->inst->ops->device_config;
> +	dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
> +	dma_dev->device_synchronize = d->inst->ops->device_synchronize;
> +
> +	if (d->ver == DMA_VER22) {
> +		dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
> +		dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
> +		dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
> +				      BIT(DMA_DEV_TO_MEM);
> +		dma_dev->residue_granularity =
> +					DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
> +	}

So, if version is != DMA_VER22, then you don't support any direction?
Why register the DMA device if it can not do any transfer?

> +
> +	platform_set_drvdata(pdev, d);
> +
> +	ldma_dev_init(d);
> +
> +	ret = dma_async_device_register(dma_dev);
> +	if (ret) {
> +		dev_err(dev, "Failed to register slave DMA engine device\n");
> +		return ret;
> +	}
> +
> +	ret = of_dma_controller_register(pdev->dev.of_node, ldma_xlate, d);
> +	if (ret) {
> +		dev_err(dev, "Failed to register of DMA controller\n");
> +		dma_async_device_unregister(dma_dev);
> +		return ret;
> +	}
> +
> +	dev_info(dev, "Init done - rev: %x, ports: %d channels: %d\n", d->ver,
> +		 d->port_nrs, d->chan_nrs);
> +
> +	return 0;
> +}
> +
> +static struct platform_driver intel_ldma_driver = {
> +	.probe = intel_ldma_probe,
> +	.driver = {
> +		.name = DRIVER_NAME,
> +		.of_match_table = intel_ldma_match,
> +	},
> +};
> +
> +static int __init intel_ldma_init(void)
> +{
> +	return platform_driver_register(&intel_ldma_driver);
> +}
> +
> +device_initcall(intel_ldma_init);
> diff --git a/include/linux/dma/lgm_dma.h b/include/linux/dma/lgm_dma.h
> new file mode 100644
> index 000000000000..3a2ee6ad0710
> --- /dev/null
> +++ b/include/linux/dma/lgm_dma.h
> @@ -0,0 +1,27 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2016 ~ 2019 Intel Corporation.
> + */
> +#ifndef LGM_DMA_H
> +#define LGM_DMA_H
> +
> +#include <linux/types.h>
> +#include <linux/dmaengine.h>
> +
> +/*!
> + * \fn int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
> + *                                 int desc_num)
> + * \brief Configure low level channel descriptors
> + * \param[in] chan   pointer to DMA channel that the client is using
> + * \param[in] desc_base   descriptor base physical address
> + * \param[in] desc_num   number of descriptors
> + * \return   0 on success
> + * \return   kernel bug reported on failure
> + *
> + * This function configures the low level channel descriptors. It will be
> + * used by CBM, whose descriptors are not in DDR but actually in some registers.
> + */
> +int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
> +			    int desc_num);
> +
> +#endif /* LGM_DMA_H */
> 

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-18 10:16   ` Peter Ujfalusi
@ 2020-08-18 10:29     ` Peter Ujfalusi
  2020-08-24  2:30     ` Reddy, MallikarjunaX
  1 sibling, 0 replies; 23+ messages in thread
From: Peter Ujfalusi @ 2020-08-18 10:29 UTC (permalink / raw)
  To: Amireddy Mallikarjuna reddy, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009



On 18/08/2020 13.16, Peter Ujfalusi wrote:

...

>> +static void dma_issue_pending(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	unsigned long flags;
>> +
>> +	if (d->ver == DMA_VER22) {
>> +		spin_lock_irqsave(&c->vchan.lock, flags);
>> +		if (vchan_issue_pending(&c->vchan)) {
>> +			struct virt_dma_desc *vdesc;
>> +
>> +			/* Get the next descriptor */
>> +			vdesc = vchan_next_desc(&c->vchan);
>> +			if (!vdesc) {
>> +				c->ds = NULL;
>> +				return;
>> +			}
>> +			list_del(&vdesc->node);
>> +			c->ds = to_lgm_dma_desc(vdesc);
> 
> you have set c->ds in dma_prep_slave_sg and the only way I can see that
> you will not leak memory is that the client must terminate_sync() after
> each transfer so that the synchronize callback is invoked between each
> prep_sg/issue_pending/completion.
> 
>> +			spin_unlock_irqrestore(&c->vchan.lock, flags);
>> +			ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
>> +			ldma_chan_irq_en(c);
>> +		}
> 
> If there is nothing pending, you will leave the spinlock wide open...

you leave it locked...
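
Something along these lines would keep the lock balanced (untested
sketch of the VER22 path, names taken from the patch):

	struct virt_dma_desc *vdesc;
	unsigned long flags;

	spin_lock_irqsave(&c->vchan.lock, flags);
	if (vchan_issue_pending(&c->vchan)) {
		vdesc = vchan_next_desc(&c->vchan);
		if (vdesc) {
			list_del(&vdesc->node);
			c->ds = to_lgm_dma_desc(vdesc);
		} else {
			c->ds = NULL;
		}
	}
	spin_unlock_irqrestore(&c->vchan.lock, flags);

	if (c->ds) {
		ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
		ldma_chan_irq_en(c);
	}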

> 
>> +	}
>> +	ldma_chan_on(c);
>> +}

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-18 10:16   ` Peter Ujfalusi
  2020-08-18 10:29     ` Peter Ujfalusi
@ 2020-08-24  2:30     ` Reddy, MallikarjunaX
  2020-08-24 11:24       ` Peter Ujfalusi
  1 sibling, 1 reply; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-08-24  2:30 UTC (permalink / raw)
  To: Peter Ujfalusi, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009

Hi Peter,
Thanks for the review comments. Please see my comments inline.

On 8/18/2020 6:16 PM, Peter Ujfalusi wrote:
> Hi,
>
> On 14/08/2020 8.26, Amireddy Mallikarjuna reddy wrote:
>> Add DMA controller driver for Lightning Mountain(LGM) family of SoCs.
>>
>> The main function of the DMA controller is the transfer of data from/to any
>> DPlus compliant peripheral to/from the memory. A memory to memory copy
>> capability can also be configured.
>>
>> This ldma driver is used to configure the device and channels for data
>> and control paths.
>>
>> Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
> ...
>
>> +static int ldma_chan_cfg(struct ldma_chan *c)
>> +{
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	u32 reg;
>> +
>> +	reg = c->pden ? DMA_CCTRL_PDEN : 0;
>> +	reg |= c->onoff ? DMA_CCTRL_ON : 0;
>> +	reg |= c->rst ? DMA_CCTRL_RST : 0;
>> +
>> +	ldma_chan_cctrl_cfg(c, reg);
>> +	ldma_chan_irq_init(c);
>> +
>> +	if (d->ver > DMA_VER22) {
>> +		ldma_chan_set_class(c, c->nr);
>> +		ldma_chan_byte_offset_cfg(c, c->boff_len);
>> +		ldma_chan_data_endian_cfg(c, c->data_endian_en, c->data_endian);
>> +		ldma_chan_desc_endian_cfg(c, c->desc_endian_en, c->desc_endian);
>> +		ldma_chan_hdr_mode_cfg(c, c->hdrm_len, c->hdrm_csum);
>> +		ldma_chan_rxwr_np_cfg(c, c->desc_rx_np);
>> +		ldma_chan_abc_cfg(c, c->abc_en);
> Each of these functions will lock and unlock the same lock, would it
> make sense to restructure things to have less activity with the spinlock?
OK. Instead of locking & unlocking in each function I will try to lock & 
unlock only once from here.
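
E.g. something like this is what I have in mind (sketch; the unlocked
__ldma_* helper variants are hypothetical, they do not exist yet):

	spin_lock_irqsave(&d->dev_lock, flags);
	__ldma_chan_set_class(c, c->nr);
	__ldma_chan_byte_offset_cfg(c, c->boff_len);
	__ldma_chan_data_endian_cfg(c, c->data_endian_en, c->data_endian);
	__ldma_chan_desc_endian_cfg(c, c->desc_endian_en, c->desc_endian);
	spin_unlock_irqrestore(&d->dev_lock, flags);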
>
>> +
>> +		if (ldma_chan_is_hw_desc(c))
>> +			ldma_chan_desc_hw_cfg(c, c->desc_phys, c->desc_cnt);
>> +	}
>> +
>> +	return 0;
>> +}
> ...
>
>> +static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
>> +{
>> +	struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);
>> +	struct ldma_chan *c = ds->chan;
>> +
>> +	dma_pool_free(c->desc_pool, ds->desc_hw, ds->desc_phys);
>> +	kfree(ds);
>> +	c->ds = NULL;
> Is there a chance that c->ds != ds?
No, from the code I don't see any such scenario, let me know if you find 
any corner case.
>
>> +}
>> +
>> +static struct dw2_desc_sw *
>> +dma_alloc_desc_resource(int num, struct ldma_chan *c)
>> +{
>> +	struct device *dev = c->vchan.chan.device->dev;
>> +	struct dw2_desc_sw *ds;
>> +
>> +	if (num > c->desc_num) {
>> +		dev_err(dev, "sg num %d exceed max %d\n", num, c->desc_num);
>> +		return NULL;
>> +	}
>> +
>> +	ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
>> +	if (!ds)
>> +		return NULL;
>> +
>> +	ds->chan = c;
>> +
>> +	ds->desc_hw = dma_pool_zalloc(c->desc_pool, GFP_ATOMIC,
>> +				      &ds->desc_phys);
>> +	if (!ds->desc_hw) {
>> +		dev_dbg(dev, "out of memory for link descriptor\n");
>> +		kfree(ds);
>> +		return NULL;
>> +	}
>> +	ds->desc_cnt = num;
>> +
>> +	return ds;
>> +}
>> +
>> +static void ldma_chan_irq_en(struct ldma_chan *c)
>> +{
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&d->dev_lock, flags);
>> +	writel(c->nr, d->base + DMA_CS);
>> +	writel(DMA_CI_EOP, d->base + DMA_CIE);
>> +	writel(BIT(c->nr), d->base + DMA_IRNEN);
>> +	spin_unlock_irqrestore(&d->dev_lock, flags);
>> +}
>> +
>> +static void dma_issue_pending(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	unsigned long flags;
>> +
>> +	if (d->ver == DMA_VER22) {
>> +		spin_lock_irqsave(&c->vchan.lock, flags);
>> +		if (vchan_issue_pending(&c->vchan)) {
>> +			struct virt_dma_desc *vdesc;
>> +
>> +			/* Get the next descriptor */
>> +			vdesc = vchan_next_desc(&c->vchan);
>> +			if (!vdesc) {
>> +				c->ds = NULL;
>> +				return;
>> +			}
>> +			list_del(&vdesc->node);
>> +			c->ds = to_lgm_dma_desc(vdesc);
> you have set c->ds in dma_prep_slave_sg and the only way I can see that
> you will not leak memory is that the client must terminate_sync() after
> each transfer so that the synchronize callback is invoked between each
> prep_sg/issue_pending/competion.
Yes, the client must call dmaengine_synchronize after each transfer to make 
sure the memory associated with previously issued descriptors, if any, is 
freed.
Also, from the driver we are freeing up the descriptor from the work 
queue after each transfer (addressed in the comments below **).
>
>> +			spin_unlock_irqrestore(&c->vchan.lock, flags);
>> +			ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
>> +			ldma_chan_irq_en(c);
>> +		}
>> If there is nothing pending, you will leave the spinlock wide open...
Seems I misplaced the lock. I will fix it in the next version.
>
>> +	}
>> +	ldma_chan_on(c);
>> +}
>> +
>> +static void dma_synchronize(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +
>> +	/*
>> +	 * clear any pending work if any. In that
>> +	 * case the resource needs to be free here.
>> +	 */
>> +	cancel_work_sync(&c->work);
>> +	vchan_synchronize(&c->vchan);
>> +	if (c->ds)
>> +		dma_free_desc_resource(&c->ds->vdesc);
>> +}
>> +
>> +static int dma_terminate_all(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	unsigned long flags;
>> +	LIST_HEAD(head);
>> +
>> +	spin_lock_irqsave(&c->vchan.lock, flags);
>> +	vchan_get_all_descriptors(&c->vchan, &head);
>> +	spin_unlock_irqrestore(&c->vchan.lock, flags);
>> +	vchan_dma_desc_free_list(&c->vchan, &head);
>> +
>> +	return ldma_chan_reset(c);
>> +}
>> +
>> +static int dma_resume_chan(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +
>> +	ldma_chan_on(c);
>> +
>> +	return 0;
>> +}
>> +
>> +static int dma_pause_chan(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +
>> +	return ldma_chan_off(c);
>> +}
>> +
>> +static enum dma_status
>> +dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
>> +	      struct dma_tx_state *txstate)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	enum dma_status status = DMA_COMPLETE;
>> +
>> +	if (d->ver == DMA_VER22)
>> +		status = dma_cookie_status(chan, cookie, txstate);
>> +
>> +	return status;
>> +}
>> +
>> +static void dma_chan_irq(int irq, void *data)
>> +{
>> +	struct ldma_chan *c = data;
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	u32 stat;
>> +
>> +	/* Disable channel interrupts  */
>> +	writel(c->nr, d->base + DMA_CS);
>> +	stat = readl(d->base + DMA_CIS);
>> +	if (!stat)
>> +		return;
>> +
>> +	writel(readl(d->base + DMA_CIE) & ~DMA_CI_ALL, d->base + DMA_CIE);
>> +	writel(stat, d->base + DMA_CIS);
>> +	queue_work(d->wq, &c->work);
>> +}
>> +
>> +static irqreturn_t dma_interrupt(int irq, void *dev_id)
>> +{
>> +	struct ldma_dev *d = dev_id;
>> +	struct ldma_chan *c;
>> +	unsigned long irncr;
>> +	u32 cid;
>> +
>> +	irncr = readl(d->base + DMA_IRNCR);
>> +	if (!irncr) {
>> +		dev_err(d->dev, "dummy interrupt\n");
>> +		return IRQ_NONE;
>> +	}
>> +
>> +	for_each_set_bit(cid, &irncr, d->chan_nrs) {
>> +		/* Mask */
>> +		writel(readl(d->base + DMA_IRNEN) & ~BIT(cid), d->base + DMA_IRNEN);
>> +		/* Ack */
>> +		writel(readl(d->base + DMA_IRNCR) | BIT(cid), d->base + DMA_IRNCR);
>> +
>> +		c = &d->chans[cid];
>> +		dma_chan_irq(irq, c);
>> +	}
>> +
>> +	return IRQ_HANDLED;
>> +}
>> +
>> +static struct dma_async_tx_descriptor *
>> +dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
>> +		  unsigned int sglen, enum dma_transfer_direction dir,
>> +		  unsigned long flags, void *context)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	size_t len, avail, total = 0;
>> +	struct dw2_desc *hw_ds;
>> +	struct dw2_desc_sw *ds;
>> +	struct scatterlist *sg;
>> +	int num = sglen, i;
>> +	dma_addr_t addr;
>> +
>> +	if (!sgl)
>> +		return NULL;
>> +
>> +	for_each_sg(sgl, sg, sglen, i) {
>> +		avail = sg_dma_len(sg);
>> +		if (avail > DMA_MAX_SIZE)
>> +			num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
>> +	}
>> +
>> +	ds = dma_alloc_desc_resource(num, c);
>> +	if (!ds)
>> +		return NULL;
>> +
>> +	c->ds = ds;
>> If you still have a transfer running then you are going to lose that
>> descriptor?
No, please let me know if you find any such corner case.
>
>> +
>> +	num = 0;
>> +	/* sop and eop has to be handled nicely */
>> +	for_each_sg(sgl, sg, sglen, i) {
>> +		addr = sg_dma_address(sg);
>> +		avail = sg_dma_len(sg);
>> +		total += avail;
>> +
>> +		do {
>> +			len = min_t(size_t, avail, DMA_MAX_SIZE);
>> +
>> +			hw_ds = &ds->desc_hw[num];
>> +			switch (sglen) {
>> +			case 1:
>> +				hw_ds->status.field.sop = 1;
>> +				hw_ds->status.field.eop = 1;
>> +				break;
>> +			default:
>> +				if (num == 0) {
>> +					hw_ds->status.field.sop = 1;
>> +					hw_ds->status.field.eop = 0;
>> +				} else if (num == (sglen - 1)) {
>> +					hw_ds->status.field.sop = 0;
>> +					hw_ds->status.field.eop = 1;
>> +				} else {
>> +					hw_ds->status.field.sop = 0;
>> +					hw_ds->status.field.eop = 0;
>> +				}
>> +				break;
>> +			}
>> +
>> +			/* Only 32 bit address supported */
>> +			hw_ds->addr = (u32)addr;
>> +			hw_ds->status.field.len = len;
>> +			hw_ds->status.field.c = 0;
>> +			hw_ds->status.field.bofs = addr & 0x3;
>> +			/* Ensure data ready before ownership change */
>> +			wmb();
>> +			hw_ds->status.field.own = DMA_OWN;
>> +			/* Ensure ownership changed before moving forward */
>> +			wmb();
>> +			num++;
>> +			addr += len;
>> +			avail -= len;
>> +		} while (avail);
>> +	}
>> +
>> +	ds->size = total;
>> +
>> +	return vchan_tx_prep(&c->vchan, &ds->vdesc, DMA_CTRL_ACK);
>> +}
>> +
>> +static int
>> +dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	struct ldma_port *p = c->port;
>> +	unsigned long flags;
>> +	u32 bl;
>> +
>> +	if ((cfg->direction == DMA_DEV_TO_MEM &&
>> +	     cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
>> +	    (cfg->direction == DMA_MEM_TO_DEV &&
>> +	     cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
> According to the probe function these width restrictions are only valid
> for DMA_VER22?
YES
>
>> +	    !is_slave_direction(cfg->direction))
>> +		return -EINVAL;
>> +
>> +	/* Default setting will be used */
>> +	if (!cfg->src_maxburst && !cfg->dst_maxburst)
>> +		return 0;
> maxburst == 0 is identical to maxburst == 1, it is just not set
>> explicitly. IOW, 1 word per DMA request.
This is not clear to me. Can you elaborate?
>
>> +
>> +	/* Must be the same */
>> +	if (cfg->src_maxburst && cfg->dst_maxburst &&
>> +	    cfg->src_maxburst != cfg->dst_maxburst)
>> +		return -EINVAL;
>> +
>> +	if (cfg->dst_maxburst)
>> +		cfg->src_maxburst = cfg->dst_maxburst;
>> +
>> +	bl = ilog2(cfg->src_maxburst);
>> +
>> +	spin_lock_irqsave(&d->dev_lock, flags);
>> +	writel(p->portid, d->base + DMA_PS);
>> +	ldma_update_bits(d, DMA_PCTRL_RXBL | DMA_PCTRL_TXBL,
>> +			 FIELD_PREP(DMA_PCTRL_RXBL, bl) |
>> +			 FIELD_PREP(DMA_PCTRL_TXBL, bl), DMA_PCTRL);
>> +	spin_unlock_irqrestore(&d->dev_lock, flags);
>> What drivers usually do is save the cfg and, in prepare time, take it
>> into account when setting up the transfer.
>> Write the change to the HW before the transfer is started (if it has
>> changed from the previous settings)
>
> Client drivers usually set the slave config ones, in most cases during
> probe, so the slave config rarely changes runtime, but there are cases
> for that.
OK, got it. I will update it in the next version.
>
>> +
>> +	return 0;
>> +}
>> +
>> +static int dma_alloc_chan_resources(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +	struct device *dev = c->vchan.chan.device->dev;
>> +	size_t	desc_sz;
>> +
>> +	if (d->ver > DMA_VER22) {
>> +		c->flags |= CHAN_IN_USE;
>> +		return 0;
>> +	}
>> +
>> +	if (c->desc_pool)
>> +		return c->desc_num;
>> +
>> +	desc_sz = c->desc_num * sizeof(struct dw2_desc);
>> +	c->desc_pool = dma_pool_create(c->name, dev, desc_sz,
>> +				       __alignof__(struct dw2_desc), 0);
>> +
>> +	if (!c->desc_pool) {
>> +		dev_err(dev, "unable to allocate descriptor pool\n");
>> +		return -ENOMEM;
>> +	}
>> +
>> +	return c->desc_num;
>> +}
>> +
>> +static void dma_free_chan_resources(struct dma_chan *chan)
>> +{
>> +	struct ldma_chan *c = to_ldma_chan(chan);
>> +	struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>> +
>> +	if (d->ver == DMA_VER22) {
>> +		dma_pool_destroy(c->desc_pool);
>> +		c->desc_pool = NULL;
>> +		vchan_free_chan_resources(to_virt_chan(chan));
>> +		ldma_chan_reset(c);
>> +	} else {
>> +		c->flags &= ~CHAN_IN_USE;
>> +	}
>> +}
>> +
>> +static void dma_work(struct work_struct *work)
>> +{
>> +	struct ldma_chan *c = container_of(work, struct ldma_chan, work);
>> +	struct dma_async_tx_descriptor *tx = &c->ds->vdesc.tx;
>> +	struct dmaengine_desc_callback cb;
>> +
>> +	dmaengine_desc_get_callback(tx, &cb);
>> +	dma_cookie_complete(tx);
>> +	dmaengine_desc_callback_invoke(&cb, NULL);
> When are you going to free up the descriptor?
**
Seems I missed freeing up the descriptor here. I will fix and update it in 
the next version.
>
>> +}
>> +
>> +int update_client_configs(struct of_dma *ofdma, struct of_phandle_args *spec)
>> +{
>> +	struct ldma_dev *d = ofdma->of_dma_data;
>> +	struct ldma_port *p;
>> +	struct ldma_chan *c;
>> +	u32 chan_id =  spec->args[0];
>> +	u32 port_id =  spec->args[1];
>> +
>> +	if (chan_id >= d->chan_nrs || port_id >= d->port_nrs)
>> +		return 0;
>> +
>> +	p = &d->ports[port_id];
>> +	c = &d->chans[chan_id];
>> +
>> +	if (d->ver == DMA_VER22) {
>> +		u32 burst = spec->args[2];
>> +
>> +		if (burst != 2 && burst != 4 && burst != 8)
>> +			return 0;
>> +
>> +		/* TX and RX has the same burst length */
>> +		p->txbl = ilog2(burst);
>> +		p->rxbl = p->txbl;
>> +
>> +		ldma_port_cfg(p);
>> +	} else {
>> +		if (spec->args[2] > 0 && spec->args[2] <= DMA_ENDIAN_TYPE3) {
>> +			c->data_endian = spec->args[2];
>> +			c->data_endian_en = true;
>> +		}
>> +
>> +		if (spec->args[3] > 0 && spec->args[3] <= DMA_ENDIAN_TYPE3) {
>> +			c->desc_endian = spec->args[3];
>> +			c->desc_endian_en = true;
>> +		}
>> +
>> +		if (spec->args[4] > 0 && spec->args[4] < 128)
>> +			c->boff_len = spec->args[4];
>> +
>> +		if (spec->args[5])
>> +			c->desc_rx_np = true;
>> +
>> +		/*
>> +		 * If channel packet drop enabled, port packet drop should
>> +		 * be enabled
>> +		 */
>> +		if (spec->args[6]) {
>> +			c->pden = true;
>> +			p->pkt_drop = DMA_PKT_DROP_EN;
>> +		}
>> +		ldma_port_cfg(p);
>> +		ldma_chan_cfg(c);
>> +	}
>> +
>> +	return 1;
>> +}
>> +
>> +static struct dma_chan *ldma_xlate(struct of_phandle_args *spec,
>> +				   struct of_dma *ofdma)
>> +{
>> +	struct ldma_dev *d = ofdma->of_dma_data;
>> +	u32 chan_id =  spec->args[0];
>> +	int ret;
>> +
>> +	if (!spec->args_count)
>> +		return NULL;
>> +
>> +	/* if args_count is 1 use driver default config settings */
>> +	if (spec->args_count > 1) {
>> +		ret = update_client_configs(ofdma, spec);
>> +		if (!ret)
>> +			return NULL;
>> +	}
>> +
>> +	return dma_get_slave_channel(&d->chans[chan_id].vchan.chan);
>> +}
>> +
>> +static void ldma_clk_disable(void *data)
>> +{
>> +	struct ldma_dev *d = data;
>> +
>> +	clk_disable_unprepare(d->core_clk);
>> +}
>> +
>> +static struct dma_dev_ops dma0_ops = {
>> +	.device_alloc_chan_resources = dma_alloc_chan_resources,
>> +	.device_free_chan_resources = dma_free_chan_resources,
>> +	.device_config = dma_slave_config,
>> +	.device_prep_slave_sg = dma_prep_slave_sg,
>> +	.device_tx_status = dma_tx_status,
>> +	.device_pause = dma_pause_chan,
>> +	.device_resume = dma_resume_chan,
>> +	.device_terminate_all = dma_terminate_all,
>> +	.device_synchronize = dma_synchronize,
>> +	.device_issue_pending = dma_issue_pending,
>> +};
>> +
>> +static struct dma_dev_ops hdma_ops = {
>> +	.device_alloc_chan_resources = dma_alloc_chan_resources,
>> +	.device_free_chan_resources = dma_free_chan_resources,
>> +	.device_terminate_all = dma_terminate_all,
>> +	.device_issue_pending = dma_issue_pending,
>> +	.device_tx_status = dma_tx_status,
>> +	.device_resume = dma_resume_chan,
>> +	.device_pause = dma_pause_chan,
>> +};
>> +
>> +static const struct ldma_inst_data dma0 = {
>> +	.name = "dma0",
>> +	.ops = &dma0_ops,
>> +};
>> +
>> +static const struct ldma_inst_data dma2tx = {
>> +	.name = "dma2tx",
>> +	.type = DMA_TYPE_TX,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct ldma_inst_data dma1rx = {
>> +	.name = "dma1rx",
>> +	.type = DMA_TYPE_RX,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct ldma_inst_data dma1tx = {
>> +	.name = "dma1tx",
>> +	.type = DMA_TYPE_TX,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct ldma_inst_data dma0tx = {
>> +	.name = "dma0tx",
>> +	.type = DMA_TYPE_TX,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct ldma_inst_data dma3 = {
>> +	.name = "dma3",
>> +	.type = DMA_TYPE_MCPY,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct ldma_inst_data toe_dma30 = {
>> +	.name = "toe_dma30",
>> +	.type = DMA_TYPE_MCPY,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct ldma_inst_data toe_dma31 = {
>> +	.name = "toe_dma31",
>> +	.type = DMA_TYPE_MCPY,
>> +	.ops = &hdma_ops,
>> +};
>> +
>> +static const struct of_device_id intel_ldma_match[] = {
>> +	{ .compatible = "intel,lgm-cdma", .data = &dma0},
>> +	{ .compatible = "intel,lgm-dma2tx", .data = &dma2tx},
>> +	{ .compatible = "intel,lgm-dma1rx", .data = &dma1rx},
>> +	{ .compatible = "intel,lgm-dma1tx", .data = &dma1tx},
>> +	{ .compatible = "intel,lgm-dma0tx", .data = &dma0tx},
>> +	{ .compatible = "intel,lgm-dma3", .data = &dma3},
>> +	{ .compatible = "intel,lgm-toe-dma30", .data = &toe_dma30},
>> +	{ .compatible = "intel,lgm-toe-dma31", .data = &toe_dma31},
>> +	{}
>> +};
>> +
>> +static int intel_ldma_probe(struct platform_device *pdev)
>> +{
>> +	struct device *dev = &pdev->dev;
>> +	struct dma_device *dma_dev;
>> +	struct ldma_chan *c;
>> +	struct ldma_port *p;
>> +	struct ldma_dev *d;
>> +	u32 id, bitn = 32;
>> +	int i, j, k, ret;
>> +
>> +	d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
>> +	if (!d)
>> +		return -ENOMEM;
>> +
>> +	/* Link controller to platform device */
>> +	d->dev = &pdev->dev;
>> +
>> +	d->inst = device_get_match_data(dev);
>> +	if (!d->inst) {
>> +		dev_err(dev, "No device match found\n");
>> +		return -ENODEV;
>> +	}
>> +
>> +	d->base = devm_platform_ioremap_resource(pdev, 0);
>> +	if (IS_ERR(d->base))
>> +		return PTR_ERR(d->base);
>> +
>> +	/* Power up and reset the dma engine, some DMAs always on?? */
>> +	d->core_clk = devm_clk_get_optional(dev, NULL);
>> +	if (IS_ERR(d->core_clk))
>> +		return PTR_ERR(d->core_clk);
>> +	clk_prepare_enable(d->core_clk);
>> +
>> +	ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to devm_add_action_or_reset, %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	d->rst = devm_reset_control_get_optional(dev, NULL);
>> +	if (IS_ERR(d->rst))
>> +		return PTR_ERR(d->rst);
>> +	reset_control_deassert(d->rst);
>> +
>> +	id = readl(d->base + DMA_ID);
>> +	d->chan_nrs = FIELD_GET(DMA_ID_CHNR, id);
>> +	d->port_nrs = FIELD_GET(DMA_ID_PNR, id);
>> +	d->ver = FIELD_GET(DMA_ID_REV, id);
>> +
>> +	if (id & DMA_ID_AW_36B)
>> +		d->flags |= DMA_ADDR_36BIT;
>> +
>> +	if (IS_ENABLED(CONFIG_64BIT)) {
>> +		if (id & DMA_ID_AW_36B)
>> +			bitn = 36;
>> +	}
>> +
>> +	if (id & DMA_ID_DW_128B)
>> +		d->flags |= DMA_DATA_128BIT;
>> +
>> +	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(bitn));
>> +	if (ret) {
>> +		dev_err(dev, "No usable DMA configuration\n");
>> +		return ret;
>> +	}
>> +
>> +	if (d->ver == DMA_VER22) {
>> +		d->irq = platform_get_irq(pdev, 0);
>> +		if (d->irq < 0)
>> +			return d->irq;
>> +
>> +		ret = devm_request_irq(&pdev->dev, d->irq, dma_interrupt,
>> +				       0, DRIVER_NAME, d);
>> +		if (ret)
>> +			return ret;
>> +
>> +		d->wq = alloc_ordered_workqueue("dma_wq", WQ_MEM_RECLAIM |
>> +						WQ_HIGHPRI);
>> +		if (!d->wq)
>> +			return -ENOMEM;
>> +	}
>> +
>> +	dma_dev = &d->dma_dev;
>> +	dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
>> +
>> +	/* Channel initializations */
>> +	INIT_LIST_HEAD(&dma_dev->channels);
>> +
>> +	/* Port Initializations */
>> +	d->ports = devm_kcalloc(dev, d->port_nrs, sizeof(*p), GFP_KERNEL);
>> +	if (!d->ports)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < d->port_nrs; i++) {
>> +		p = &d->ports[i];
>> +		p->portid = i;
>> +		p->ldev = d;
>> +	}
>> +
>> +	ret = ldma_cfg_init(d);
>> +	if (ret)
>> +		return ret;
>> +
>> +	dma_dev->dev = &pdev->dev;
>> +	/*
>> +	 * Link channel id to channel index and link to dma channel list
>> +	 * It also back points to controller and its port
>> +	 */
>> +	for (i = 0, k = 0; i < d->port_nrs; i++) {
>> +		if (d->ver == DMA_VER22) {
>> +			u32 chan_end;
>> +
>> +			p = &d->ports[i];
>> +			chan_end = p->chan_start + p->chan_sz;
>> +			for (j = p->chan_start; j < chan_end; j++) {
>> +				c = &d->chans[k];
>> +				c->port = p;
>> +				c->nr = j; /* Real channel number */
>> +				c->rst = DMA_CHAN_RST;
>> +				snprintf(c->name, sizeof(c->name), "chan%d",
>> +					 c->nr);
>> +				INIT_WORK(&c->work, dma_work);
>> +				c->vchan.desc_free = dma_free_desc_resource;
>> +				vchan_init(&c->vchan, dma_dev);
>> +				k++;
>> +			}
>> +		} else {
>> +			p = &d->ports[i];
>> +			for (i = 0; i < d->chan_nrs; i++) {
>> +				c = &d->chans[i];
>> +				c->port = p;
>> +				c->data_endian = DMA_DFT_ENDIAN;
>> +				c->desc_endian = DMA_DFT_ENDIAN;
>> +				c->data_endian_en = false;
>> +				c->desc_endian_en = false;
>> +				c->desc_rx_np = false;
>> +				c->flags |= DEVICE_ALLOC_DESC;
>> +				c->onoff = DMA_CH_OFF;
>> +				c->rst = DMA_CHAN_RST;
>> +				c->abc_en = true;
>> +				c->nr = i;
>> +				c->vchan.desc_free = dma_free_desc_resource;
>> +				vchan_init(&c->vchan, dma_dev);
>> +			}
>> +		}
>> +	}
>> +
>> +	/* Set DMA capabilities */
>> +	dma_cap_zero(dma_dev->cap_mask);
> You just cleared the DMA_SLAVE capability you set earlier...
Yes, correct. I will fix it.
>
>> +
>> +	dma_dev->device_alloc_chan_resources =
>> +		d->inst->ops->device_alloc_chan_resources;
>> +	dma_dev->device_free_chan_resources =
>> +		d->inst->ops->device_free_chan_resources;
>> +	dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
>> +	dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
>> +	dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>> +	dma_dev->device_resume = d->inst->ops->device_resume;
>> +	dma_dev->device_pause = d->inst->ops->device_pause;
>> +	dma_dev->device_config = d->inst->ops->device_config;
>> +	dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
>> +	dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>> +
>> +	if (d->ver == DMA_VER22) {
>> +		dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>> +		dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>> +		dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>> +				      BIT(DMA_DEV_TO_MEM);
>> +		dma_dev->residue_granularity =
>> +					DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>> +	}
> So, if version is != DMA_VER22, then you don't support any direction?
> Why register the DMA device if it can not do any transfer?
Only the dma0 instance (intel,lgm-cdma) is used as a general purpose slave 
DMA; we set up both the control and data paths here.
For the other instances we set up only the control path; the data path is 
taken care of by the DMA client (GSWIP).
The only thing the client needs to do is get the channel, set the 
descriptor and just turn the channel on.
>
>> +
>> +	platform_set_drvdata(pdev, d);
>> +
>> +	ldma_dev_init(d);
>> +
>> +	ret = dma_async_device_register(dma_dev);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to register slave DMA engine device\n");
>> +		return ret;
>> +	}
>> +
>> +	ret = of_dma_controller_register(pdev->dev.of_node, ldma_xlate, d);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to register of DMA controller\n");
>> +		dma_async_device_unregister(dma_dev);
>> +		return ret;
>> +	}
>> +
>> +	dev_info(dev, "Init done - rev: %x, ports: %d channels: %d\n", d->ver,
>> +		 d->port_nrs, d->chan_nrs);
>> +
>> +	return 0;
>> +}
>> +
>> +static struct platform_driver intel_ldma_driver = {
>> +	.probe = intel_ldma_probe,
>> +	.driver = {
>> +		.name = DRIVER_NAME,
>> +		.of_match_table = intel_ldma_match,
>> +	},
>> +};
>> +
>> +static int __init intel_ldma_init(void)
>> +{
>> +	return platform_driver_register(&intel_ldma_driver);
>> +}
>> +
>> +device_initcall(intel_ldma_init);
>> diff --git a/include/linux/dma/lgm_dma.h b/include/linux/dma/lgm_dma.h
>> new file mode 100644
>> index 000000000000..3a2ee6ad0710
>> --- /dev/null
>> +++ b/include/linux/dma/lgm_dma.h
>> @@ -0,0 +1,27 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2016 ~ 2019 Intel Corporation.
>> + */
>> +#ifndef LGM_DMA_H
>> +#define LGM_DMA_H
>> +
>> +#include <linux/types.h>
>> +#include <linux/dmaengine.h>
>> +
>> +/*!
>> + * \fn int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
>> + *                                 int desc_num)
>> + * \brief Configure low level channel descriptors
>> + * \param[in] chan   pointer to DMA channel that the client is using
>> + * \param[in] desc_base   descriptor base physical address
>> + * \param[in] desc_num   number of descriptors
>> + * \return   0 on success
>> + * \return   kernel bug reported on failure
>> + *
>> + * This function configures the low level channel descriptors. It will be
>> + * used by CBM, whose descriptors are not in DDR but actually in some registers.
>> + */
>> +int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
>> +			    int desc_num);
>> +
>> +#endif /* LGM_DMA_H */
>>
> - Péter
>
> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-24  2:30     ` Reddy, MallikarjunaX
@ 2020-08-24 11:24       ` Peter Ujfalusi
  2020-08-27 14:41         ` Reddy, MallikarjunaX
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Ujfalusi @ 2020-08-24 11:24 UTC (permalink / raw)
  To: Reddy, MallikarjunaX, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009

Hi,

On 24/08/2020 5.30, Reddy, MallikarjunaX wrote:

>>> +static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
>>> +{
>>> +    struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);
>>> +    struct ldma_chan *c = ds->chan;
>>> +
>>> +    dma_pool_free(c->desc_pool, ds->desc_hw, ds->desc_phys);
>>> +    kfree(ds);
>>> +    c->ds = NULL;
>> Is there a chance that c->ds != ds?
> No, from the code I don't see any such scenario, let me know if you find
> any corner case.

The desc_free callback is used to _free_ up the memory used for the
descriptor. Nothing less, nothing more.
You should not touch the c->ds in this callback, just free up the memory
used for the given vdesc.
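
I.e. something like this (sketch):

static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
{
	struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);

	dma_pool_free(ds->chan->desc_pool, ds->desc_hw, ds->desc_phys);
	kfree(ds);
	/* do not touch c->ds here, channel state is handled elsewhere */
}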

More on that a bit later.

>>> +}
>>> +
>>> +static struct dw2_desc_sw *
>>> +dma_alloc_desc_resource(int num, struct ldma_chan *c)
>>> +{
>>> +    struct device *dev = c->vchan.chan.device->dev;
>>> +    struct dw2_desc_sw *ds;
>>> +
>>> +    if (num > c->desc_num) {
>>> +        dev_err(dev, "sg num %d exceed max %d\n", num, c->desc_num);
>>> +        return NULL;
>>> +    }
>>> +
>>> +    ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
>>> +    if (!ds)
>>> +        return NULL;
>>> +
>>> +    ds->chan = c;
>>> +
>>> +    ds->desc_hw = dma_pool_zalloc(c->desc_pool, GFP_ATOMIC,
>>> +                      &ds->desc_phys);
>>> +    if (!ds->desc_hw) {
>>> +        dev_dbg(dev, "out of memory for link descriptor\n");
>>> +        kfree(ds);
>>> +        return NULL;
>>> +    }
>>> +    ds->desc_cnt = num;
>>> +
>>> +    return ds;
>>> +}
>>> +
>>> +static void ldma_chan_irq_en(struct ldma_chan *c)
>>> +{
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +    unsigned long flags;
>>> +
>>> +    spin_lock_irqsave(&d->dev_lock, flags);
>>> +    writel(c->nr, d->base + DMA_CS);
>>> +    writel(DMA_CI_EOP, d->base + DMA_CIE);
>>> +    writel(BIT(c->nr), d->base + DMA_IRNEN);
>>> +    spin_unlock_irqrestore(&d->dev_lock, flags);
>>> +}
>>> +
>>> +static void dma_issue_pending(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +    unsigned long flags;
>>> +
>>> +    if (d->ver == DMA_VER22) {
>>> +        spin_lock_irqsave(&c->vchan.lock, flags);
>>> +        if (vchan_issue_pending(&c->vchan)) {
>>> +            struct virt_dma_desc *vdesc;
>>> +
>>> +            /* Get the next descriptor */
>>> +            vdesc = vchan_next_desc(&c->vchan);
>>> +            if (!vdesc) {
>>> +                c->ds = NULL;
>>> +                return;
>>> +            }
>>> +            list_del(&vdesc->node);
>>> +            c->ds = to_lgm_dma_desc(vdesc);
>> you have set c->ds in dma_prep_slave_sg and the only way I can see that
>> you will not leak memory is that the client must terminate_sync() after
>> each transfer so that the synchronize callback is invoked between each
>> prep_sg/issue_pending/completion.
> Yes, the client must call dmaengine_synchronize after each transfer to make
> sure the memory associated with previously issued descriptors, if any, is
> freed.

No, client should not need to invoke synchronize between transfers,
clients must be able to do:
dmaengine_prep_slave_single/dmaengine_prep_slave_sg
dma_async_issue_pending
* handle the callback
dmaengine_prep_slave_single/dmaengine_prep_slave_sg
dma_async_issue_pending
* handle the callback

without any terminate_all_sync() in between.
Imagine that the client is preparing/issuing a new transfer in the
completion callback for example.

Clients must be able to do also:
dmaengine_prep_slave_single/dmaengine_prep_slave_sg
dmaengine_prep_slave_single/dmaengine_prep_slave_sg
...
dmaengine_prep_slave_single/dmaengine_prep_slave_sg
dma_async_issue_pending

and then the DMA will complete the transfers in FIFO order and when the
first is completed it will move to the next one. Client will receive
callbacks for each completion (if requested).
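
I.e. a typical client sequence looks like this (pseudo client code,
error handling omitted):

	desc = dmaengine_prep_slave_sg(chan, sgl, nents, dir,
				       DMA_PREP_INTERRUPT);
	desc->callback = client_callback;
	desc->callback_param = client_data;
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	/* client_callback() may prep and issue the next transfer directly */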

> Also, from the driver we are freeing up the descriptor from the work
> queue after each transfer (addressed in the comments below **).
>>
>>> +            spin_unlock_irqrestore(&c->vchan.lock, flags);
>>> +            ldma_chan_desc_hw_cfg(c, c->ds->desc_phys,
>>> c->ds->desc_cnt);
>>> +            ldma_chan_irq_en(c);
>>> +        }
>> If there is nothing pending, you will leave the spinlock wide open...
> Seems I misplaced the lock. I will fix it in the next version.
>>
>>> +    }
>>> +    ldma_chan_on(c);
>>> +}
>>> +
>>> +static void dma_synchronize(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +
>>> +    /*
>>> +     * clear any pending work if any. In that
>>> +     * case the resource needs to be free here.
>>> +     */
>>> +    cancel_work_sync(&c->work);
>>> +    vchan_synchronize(&c->vchan);
>>> +    if (c->ds)
>>> +        dma_free_desc_resource(&c->ds->vdesc);
>>> +}
>>> +
>>> +static int dma_terminate_all(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    unsigned long flags;
>>> +    LIST_HEAD(head);
>>> +
>>> +    spin_lock_irqsave(&c->vchan.lock, flags);
>>> +    vchan_get_all_descriptors(&c->vchan, &head);
>>> +    spin_unlock_irqrestore(&c->vchan.lock, flags);
>>> +    vchan_dma_desc_free_list(&c->vchan, &head);
>>> +
>>> +    return ldma_chan_reset(c);
>>> +}
>>> +
>>> +static int dma_resume_chan(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +
>>> +    ldma_chan_on(c);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int dma_pause_chan(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +
>>> +    return ldma_chan_off(c);
>>> +}
>>> +
>>> +static enum dma_status
>>> +dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
>>> +          struct dma_tx_state *txstate)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +    enum dma_status status = DMA_COMPLETE;
>>> +
>>> +    if (d->ver == DMA_VER22)
>>> +        status = dma_cookie_status(chan, cookie, txstate);
>>> +
>>> +    return status;
>>> +}
>>> +
>>> +static void dma_chan_irq(int irq, void *data)
>>> +{
>>> +    struct ldma_chan *c = data;
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +    u32 stat;
>>> +
>>> +    /* Disable channel interrupts  */
>>> +    writel(c->nr, d->base + DMA_CS);
>>> +    stat = readl(d->base + DMA_CIS);
>>> +    if (!stat)
>>> +        return;
>>> +
>>> +    writel(readl(d->base + DMA_CIE) & ~DMA_CI_ALL, d->base + DMA_CIE);
>>> +    writel(stat, d->base + DMA_CIS);
>>> +    queue_work(d->wq, &c->work);
>>> +}
>>> +
>>> +static irqreturn_t dma_interrupt(int irq, void *dev_id)
>>> +{
>>> +    struct ldma_dev *d = dev_id;
>>> +    struct ldma_chan *c;
>>> +    unsigned long irncr;
>>> +    u32 cid;
>>> +
>>> +    irncr = readl(d->base + DMA_IRNCR);
>>> +    if (!irncr) {
>>> +        dev_err(d->dev, "dummy interrupt\n");
>>> +        return IRQ_NONE;
>>> +    }
>>> +
>>> +    for_each_set_bit(cid, &irncr, d->chan_nrs) {
>>> +        /* Mask */
>>> +        writel(readl(d->base + DMA_IRNEN) & ~BIT(cid), d->base +
>>> DMA_IRNEN);
>>> +        /* Ack */
>>> +        writel(readl(d->base + DMA_IRNCR) | BIT(cid), d->base +
>>> DMA_IRNCR);
>>> +
>>> +        c = &d->chans[cid];
>>> +        dma_chan_irq(irq, c);
>>> +    }
>>> +
>>> +    return IRQ_HANDLED;
>>> +}
>>> +
>>> +static struct dma_async_tx_descriptor *
>>> +dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
>>> +          unsigned int sglen, enum dma_transfer_direction dir,
>>> +          unsigned long flags, void *context)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    size_t len, avail, total = 0;
>>> +    struct dw2_desc *hw_ds;
>>> +    struct dw2_desc_sw *ds;
>>> +    struct scatterlist *sg;
>>> +    int num = sglen, i;
>>> +    dma_addr_t addr;
>>> +
>>> +    if (!sgl)
>>> +        return NULL;
>>> +
>>> +    for_each_sg(sgl, sg, sglen, i) {
>>> +        avail = sg_dma_len(sg);
>>> +        if (avail > DMA_MAX_SIZE)
>>> +            num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
>>> +    }
>>> +
>>> +    ds = dma_alloc_desc_resource(num, c);
>>> +    if (!ds)
>>> +        return NULL;
>>> +
>>> +    c->ds = ds;
>> If you still have a transfer running then you are going to lose that
>> descriptor?
> No, please let me know if you find any such corner case.

This is the place where you prepare the descriptor, but it is not yet
committed to the hardware. The client might never call issue_pending, or the
client might prepare another transfer before telling the DMA driver to start
the transfers.
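
I.e. the prep callback should only build and return the descriptor,
roughly (sketch):

	ds->size = total;
	/* do not set c->ds here, the transfer has not been issued yet */
	return vchan_tx_prep(&c->vchan, &ds->vdesc, flags);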

>>
>>> +
>>> +    num = 0;
>>> +    /* sop and eop has to be handled nicely */
>>> +    for_each_sg(sgl, sg, sglen, i) {
>>> +        addr = sg_dma_address(sg);
>>> +        avail = sg_dma_len(sg);
>>> +        total += avail;
>>> +
>>> +        do {
>>> +            len = min_t(size_t, avail, DMA_MAX_SIZE);
>>> +
>>> +            hw_ds = &ds->desc_hw[num];
>>> +            switch (sglen) {
>>> +            case 1:
>>> +                hw_ds->status.field.sop = 1;
>>> +                hw_ds->status.field.eop = 1;
>>> +                break;
>>> +            default:
>>> +                if (num == 0) {
>>> +                    hw_ds->status.field.sop = 1;
>>> +                    hw_ds->status.field.eop = 0;
>>> +                } else if (num == (sglen - 1)) {
>>> +                    hw_ds->status.field.sop = 0;
>>> +                    hw_ds->status.field.eop = 1;
>>> +                } else {
>>> +                    hw_ds->status.field.sop = 0;
>>> +                    hw_ds->status.field.eop = 0;
>>> +                }
>>> +                break;
>>> +            }
>>> +
>>> +            /* Only 32 bit address supported */
>>> +            hw_ds->addr = (u32)addr;
>>> +            hw_ds->status.field.len = len;
>>> +            hw_ds->status.field.c = 0;
>>> +            hw_ds->status.field.bofs = addr & 0x3;
>>> +            /* Ensure data ready before ownership change */
>>> +            wmb();
>>> +            hw_ds->status.field.own = DMA_OWN;
>>> +            /* Ensure ownership changed before moving forward */
>>> +            wmb();
>>> +            num++;
>>> +            addr += len;
>>> +            avail -= len;
>>> +        } while (avail);
>>> +    }
>>> +
>>> +    ds->size = total;
>>> +
>>> +    return vchan_tx_prep(&c->vchan, &ds->vdesc, DMA_CTRL_ACK);
>>> +}
>>> +
>>> +static int
>>> +dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +    struct ldma_port *p = c->port;
>>> +    unsigned long flags;
>>> +    u32 bl;
>>> +
>>> +    if ((cfg->direction == DMA_DEV_TO_MEM &&
>>> +         cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
>>> +        (cfg->direction == DMA_MEM_TO_DEV &&
>>> +         cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
>> According to the probe function these width restrictions are only valid
>> for DMA_VER22?
> YES

Right, hdma_ops does not have device_config specified.

>>
>>> +        !is_slave_direction(cfg->direction))
>>> +        return -EINVAL;
>>> +
>>> +    /* Default setting will be used */
>>> +    if (!cfg->src_maxburst && !cfg->dst_maxburst)
>>> +        return 0;
>> maxburst == 0 is identical to maxburst == 1, it is just not set
>> explicitly. IOW, 1 word per DMA request.
> This is not clear to me. Can you elaborate?

You handle the *_maxburst == 1 and *maxburst == 0 differently while they
are the same thing.
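
I.e. the driver can simply normalize it (sketch):

	if (!cfg->src_maxburst)
		cfg->src_maxburst = 1;
	if (!cfg->dst_maxburst)
		cfg->dst_maxburst = 1;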

>>
>>> +
>>> +    /* Must be the same */
>>> +    if (cfg->src_maxburst && cfg->dst_maxburst &&
>>> +        cfg->src_maxburst != cfg->dst_maxburst)
>>> +        return -EINVAL;
>>> +
>>> +    if (cfg->dst_maxburst)
>>> +        cfg->src_maxburst = cfg->dst_maxburst;
>>> +
>>> +    bl = ilog2(cfg->src_maxburst);
>>> +
>>> +    spin_lock_irqsave(&d->dev_lock, flags);
>>> +    writel(p->portid, d->base + DMA_PS);
>>> +    ldma_update_bits(d, DMA_PCTRL_RXBL | DMA_PCTRL_TXBL,
>>> +             FIELD_PREP(DMA_PCTRL_RXBL, bl) |
>>> +             FIELD_PREP(DMA_PCTRL_TXBL, bl), DMA_PCTRL);
>>> +    spin_unlock_irqrestore(&d->dev_lock, flags);
>> What drivers usually do is save the cfg and, in prepare time, take it
>> into account when setting up the transfer.
>> Write the change to the HW before the transfer is started (if it has
>> changed from the previous settings)
>>
>> Client drivers usually set the slave config ones, in most cases during
>> probe, so the slave config rarely changes runtime, but there are cases
>> for that.
> OK, got it. I will update it in the next version.

Thanks
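
To make it concrete, a common pattern is (sketch; it assumes struct
ldma_chan gains a 'config' member, which it does not have today):

static int
dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
{
	struct ldma_chan *c = to_ldma_chan(chan);

	/* only save it, apply it to the HW from the prep/issue path */
	memcpy(&c->config, cfg, sizeof(*cfg));

	return 0;
}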

>>
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int dma_alloc_chan_resources(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +    struct device *dev = c->vchan.chan.device->dev;
>>> +    size_t    desc_sz;
>>> +
>>> +    if (d->ver > DMA_VER22) {
>>> +        c->flags |= CHAN_IN_USE;
>>> +        return 0;
>>> +    }
>>> +
>>> +    if (c->desc_pool)
>>> +        return c->desc_num;
>>> +
>>> +    desc_sz = c->desc_num * sizeof(struct dw2_desc);
>>> +    c->desc_pool = dma_pool_create(c->name, dev, desc_sz,
>>> +                       __alignof__(struct dw2_desc), 0);
>>> +
>>> +    if (!c->desc_pool) {
>>> +        dev_err(dev, "unable to allocate descriptor pool\n");
>>> +        return -ENOMEM;
>>> +    }
>>> +
>>> +    return c->desc_num;
>>> +}
>>> +
>>> +static void dma_free_chan_resources(struct dma_chan *chan)
>>> +{
>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>> +
>>> +    if (d->ver == DMA_VER22) {
>>> +        dma_pool_destroy(c->desc_pool);
>>> +        c->desc_pool = NULL;
>>> +        vchan_free_chan_resources(to_virt_chan(chan));
>>> +        ldma_chan_reset(c);
>>> +    } else {
>>> +        c->flags &= ~CHAN_IN_USE;
>>> +    }
>>> +}
>>> +
>>> +static void dma_work(struct work_struct *work)
>>> +{
>>> +    struct ldma_chan *c = container_of(work, struct ldma_chan, work);
>>> +    struct dma_async_tx_descriptor *tx = &c->ds->vdesc.tx;
>>> +    struct dmaengine_desc_callback cb;
>>> +
>>> +    dmaengine_desc_get_callback(tx, &cb);
>>> +    dma_cookie_complete(tx);
>>> +    dmaengine_desc_callback_invoke(&cb, NULL);
>> When are you going to free up the descriptor?
> **
> Seems I missed freeing up the descriptor here. I will fix and update it in
> the next version.

This is the place where you can do c->ds = NULL, and you should also look
for entries in the issued list to pick up any issued but not committed
transfers.

Let's say the client issued two transfers, in issue_pending you have
started the first one.
When that is completed, you let the client know about it (you can also
safely free up the descriptor) and then you pick up the next one from the
issued list and start that.
If there is nothing pending then c->ds would be NULL, if there were
pending transfer, you will set c->ds to the next transfer.
c->ds indicates that there is a transfer in progress by the HW.
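
Roughly (untested sketch, the locking around the vchan calls is omitted
for brevity):

static void dma_work(struct work_struct *work)
{
	struct ldma_chan *c = container_of(work, struct ldma_chan, work);
	struct virt_dma_desc *vdesc;

	/* complete the finished descriptor, desc_free releases its memory */
	vchan_cookie_complete(&c->ds->vdesc);

	/* pick up the next issued but not yet started transfer, if any */
	vdesc = vchan_next_desc(&c->vchan);
	if (!vdesc) {
		c->ds = NULL;
		return;
	}
	list_del(&vdesc->node);
	c->ds = to_lgm_dma_desc(vdesc);
	ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
	ldma_chan_irq_en(c);
	ldma_chan_on(c);
}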

>>> +
>>> +    dma_dev->device_alloc_chan_resources =
>>> +        d->inst->ops->device_alloc_chan_resources;
>>> +    dma_dev->device_free_chan_resources =
>>> +        d->inst->ops->device_free_chan_resources;
>>> +    dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
>>> +    dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
>>> +    dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>>> +    dma_dev->device_resume = d->inst->ops->device_resume;
>>> +    dma_dev->device_pause = d->inst->ops->device_pause;
>>> +    dma_dev->device_config = d->inst->ops->device_config;
>>> +    dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
>>> +    dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>>> +
>>> +    if (d->ver == DMA_VER22) {
>>> +        dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>> +        dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>> +        dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>>> +                      BIT(DMA_DEV_TO_MEM);
>>> +        dma_dev->residue_granularity =
>>> +                    DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>>> +    }
>> So, if version is != DMA_VER22, then you don't support any direction?
>> Why register the DMA device if it can not do any transfer?
> Only the dma0 instance (intel,lgm-cdma) is used as a general purpose slave
> DMA; we set up both the control and data paths here.
> For the other instances we set up only the control path; the data path is
> taken care of by the DMA client (GSWIP).

How can the client (GSWIP) request a channel from intel,lgm-*? Don't
you need some capabilities set for the DMA device so the core can sort out
the request?

> The only thing the client needs to do is get the channel, set the
> descriptor and just turn the channel on.

How do you turn the channel 'on'?

>>
>>> +
>>> +    platform_set_drvdata(pdev, d);
>>> +
>>> +    ldma_dev_init(d);
>>> +
>>> +    ret = dma_async_device_register(dma_dev);
>>> +    if (ret) {
>>> +        dev_err(dev, "Failed to register slave DMA engine device\n");
>>> +        return ret;
>>> +    }
>>> +
>>> +    ret = of_dma_controller_register(pdev->dev.of_node, ldma_xlate, d);
>>> +    if (ret) {
>>> +        dev_err(dev, "Failed to register of DMA controller\n");
>>> +        dma_async_device_unregister(dma_dev);
>>> +        return ret;
>>> +    }
>>> +
>>> +    dev_info(dev, "Init done - rev: %x, ports: %d channels: %d\n",
>>> d->ver,
>>> +         d->port_nrs, d->chan_nrs);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static struct platform_driver intel_ldma_driver = {
>>> +    .probe = intel_ldma_probe,
>>> +    .driver = {
>>> +        .name = DRIVER_NAME,
>>> +        .of_match_table = intel_ldma_match,
>>> +    },
>>> +};
>>> +
>>> +static int __init intel_ldma_init(void)
>>> +{
>>> +    return platform_driver_register(&intel_ldma_driver);
>>> +}
>>> +
>>> +device_initcall(intel_ldma_init);
>>> diff --git a/include/linux/dma/lgm_dma.h b/include/linux/dma/lgm_dma.h
>>> new file mode 100644
>>> index 000000000000..3a2ee6ad0710
>>> --- /dev/null
>>> +++ b/include/linux/dma/lgm_dma.h
>>> @@ -0,0 +1,27 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (c) 2016 ~ 2019 Intel Corporation.
>>> + */
>>> +#ifndef LGM_DMA_H
>>> +#define LGM_DMA_H
>>> +
>>> +#include <linux/types.h>
>>> +#include <linux/dmaengine.h>
>>> +
>>> +/*!
>>> + * \fn int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t
>>> desc_base,
>>> + *                                 int desc_num)
>>> + * \brief Configure low level channel descriptors
>>> + * \param[in] chan   pointer to DMA channel that the client is using
>>> + * \param[in] desc_base   descriptor base physical address
>>> + * \param[in] desc_num   number of descriptors
>>> + * \return   0 on success
>>> + * \return   kernel bug reported on failure
>>> + *
>>> + * This function configures the low level channel descriptors. It will be
>>> + * used by CBM, whose descriptors are not in DDR but actually in some registers.
>>> + */
>>> +int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t
>>> desc_base,
>>> +                int desc_num);
>>> +
>>> +#endif /* LGM_DMA_H */
>>>
>> - Péter
>>
>> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
>> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
>>

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-18  7:00     ` Reddy, MallikarjunaX
@ 2020-08-25 11:21       ` Vinod Koul
  2020-08-27  9:54         ` Reddy, MallikarjunaX
  2020-09-04  6:31       ` Peter Ujfalusi
  1 sibling, 1 reply; 23+ messages in thread
From: Vinod Koul @ 2020-08-25 11:21 UTC (permalink / raw)
  To: Reddy, MallikarjunaX
  Cc: Rob Herring, dmaengine, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu, chuanhua.lei,
	malliamireddy009

On 18-08-20, 15:00, Reddy, MallikarjunaX wrote:

> > > +
> > > +            intel,chans:
> > > +              $ref: /schemas/types.yaml#/definitions/uint32-array
> > > +              description:
> > > +                 The channels included on this port. Format is channel start
> > > +                 number and how many channels on this port.
> > Why does this need to be in DT? This all seems like it can be in the dma
> > cells for each client.
> (*ABC)
> Yes. We need this.
> For dma0 (lgm-cdma) the old SoC supports 16 channels and the new SoC
> supports 22 channels, and the logical channel mapping for the peripherals
> also differs b/w the old and new SoCs.
> 
> Because of this hardware limitation we are trying to configure the total
> channels and port-channel mapping dynamically from device tree.
> 
> Based on the port name we are trying to configure the default values for
> the different peripherals (ports).
> Example: the burst length is not the same for all ports, so we use the port
> name to do the default configuration.

Sorry, that does not make sense to me; why not specify the values to be
used here instead of defining your own name scheme!

On the older SoC it should create 16 channels and on the new one 22 (hint:
this is hw description, so it is perfectly okay to specify it in DT or
using driver_data and a compatible for each version).
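
E.g. something like this (the second compatible and the chan_nrs field
are made up here for illustration):

	static const struct ldma_inst_data dma0_16ch = {
		.name = "dma0",
		.chan_nrs = 16,
		.ops = &dma0_ops,
	};

	static const struct ldma_inst_data dma0_22ch = {
		.name = "dma0",
		.chan_nrs = 22,
		.ops = &dma0_ops,
	};

	static const struct of_device_id intel_ldma_match[] = {
		{ .compatible = "intel,lgm-cdma", .data = &dma0_16ch },
		{ .compatible = "intel,lgm-cdma-22ch", .data = &dma0_22ch },
		{}
	};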

> > 
> > > +
> > > +          required:
> > > +            - reg
> > > +            - intel,name
> > > +            - intel,chans
> > > +
> > > +
> > > + ldma-channels:
> > > +    type: object
> > > +    description:
> > > +       This sub-node must contain a sub-node for each DMA channel.
> > > +    properties:
> > > +      '#address-cells':
> > > +        const: 1
> > > +      '#size-cells':
> > > +        const: 0
> > > +
> > > +    patternProperties:
> > > +      "^ldma-channels@[0-15]+$":
> > > +          type: object
> > > +
> > > +          properties:
> > > +            reg:
> > > +              items:
> > > +                - enum: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
> > > +              description:
> > > +                 Which channel this node refers to.
> > > +
> > > +            intel,desc_num:
> > > +              $ref: /schemas/types.yaml#/definitions/uint32
> > > +              description:
> > > +                 Per channel maximum descriptor number. The max value is 255.
> > > +
> > > +            intel,hdr-mode:
> > > +              $ref: /schemas/types.yaml#/definitions/uint32-array
> > > +              description:
> > > +                 The first parameter is header mode size, the second
> > > +                 parameter is checksum enable or disable. If enabled,
> > > +                 header mode size is ignored. If disabled, header mode
> > > +                 size must be provided.
> > > +
> > > +            intel,hw-desc:
> > > +              $ref: /schemas/types.yaml#/definitions/uint32-array
> > > +              description:
> > > +                 Per channel dma hardware descriptor configuration.
> > > +                 The first parameter is descriptor physical address and the
> > > +                 second parameter hardware descriptor number.
> > Again, this all seems like per client information for dma cells.
>  Ok, if we move all these attributes to 'dmas' then 'dma-channels' child
> node is not needed in dtsi.
> For #dma-cells I am already using 7. If we move all these attributes to
> 'dmas' then integer cells will increase.
> 
> Is there any limitation on the number of integer cells used, as determined
> by the #dma-cells property?

No, I don't think there is, but it needs to make sense :-)

-- 
~Vinod

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-25 11:21       ` Vinod Koul
@ 2020-08-27  9:54         ` Reddy, MallikarjunaX
  2020-08-28 10:45           ` Vinod Koul
  0 siblings, 1 reply; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-08-27  9:54 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Rob Herring, dmaengine, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu, chuanhua.lei,
	malliamireddy009

Hi Vinod,
Thanks for the review comments.

On 8/25/2020 7:21 PM, Vinod Koul wrote:
> On 18-08-20, 15:00, Reddy, MallikarjunaX wrote:
>
>>>> +
>>>> +            intel,chans:
>>>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>>>> +              description:
>>>> +                 The channels included on this port. Format is channel start
>>>> +                 number and how many channels on this port.
>>> Why does this need to be in DT? This all seems like it can be in the dma
>>> cells for each client.
>> (*ABC)
>> Yes. We need this.
>> for dma0(lgm-cdma) old SOC supports 16 channels and the new SOC supports 22
>> channels. and the logical channel mapping for the peripherals also differ
>> b/w old and new SOCs.
>>
>> Because of this hardware limitation we are trying to configure the total
>> channels and port-channel mapping dynamically from device tree.
>>
>> based on port name we are trying to configure the default values for
>> different peripherals(ports).
>> Example: burst length is not same for all ports, so using port name to do
>> default configurations.
> Sorry that does not make sense to me, why not specify the values to be
> used here instead of defining your own name scheme!
OK. Agreed. I will remove port name from DT and only use intel,chans
>
> On the older SoC it should create 16 channels and on the new one 22. Hint:
> this is HW description, so it is perfectly okay to specify it in DT, or by
> using driver_data and a compatible for each version.
>
>>>> +
>>>> +          required:
>>>> +            - reg
>>>> +            - intel,name
>>>> +            - intel,chans
>>>> +
>>>> +
>>>> + ldma-channels:
>>>> +    type: object
>>>> +    description:
>>>> +       This sub-node must contain a sub-node for each DMA channel.
>>>> +    properties:
>>>> +      '#address-cells':
>>>> +        const: 1
>>>> +      '#size-cells':
>>>> +        const: 0
>>>> +
>>>> +    patternProperties:
>>>> +      "^ldma-channels@[0-15]+$":
>>>> +          type: object
>>>> +
>>>> +          properties:
>>>> +            reg:
>>>> +              items:
>>>> +                - enum: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
>>>> +              description:
>>>> +                 Which channel this node refers to.
>>>> +
>>>> +            intel,desc_num:
>>>> +              $ref: /schemas/types.yaml#/definitions/uint32
>>>> +              description:
>>>> +                 Per channel maximum descriptor number. The max value is 255.
>>>> +
>>>> +            intel,hdr-mode:
>>>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>>>> +              description:
>>>> +                 The first parameter is header mode size, the second
>>>> +                 parameter is checksum enable or disable. If enabled,
>>>> +                 header mode size is ignored. If disabled, header mode
>>>> +                 size must be provided.
>>>> +
>>>> +            intel,hw-desc:
>>>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>>>> +              description:
>>>> +                 Per channel dma hardware descriptor configuration.
>>>> +                 The first parameter is descriptor physical address and the
>>>> +                 second parameter hardware descriptor number.
>>> Again, this all seems like per client information for dma cells.
>>   Ok, if we move all these attributes to 'dmas' then 'dma-channels' child
>> node is not needed in dtsi.
>> For #dma-cells I am already using 7. If we move all these attributes to
>> 'dmas' then integer cells will increase.
>>
>> Is there any limitation on the number of integer cells used, as determined
>> by the #dma-cells property?
> No, I don't think there is, but it needs to make sense :-)
OK.
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-24 11:24       ` Peter Ujfalusi
@ 2020-08-27 14:41         ` Reddy, MallikarjunaX
  2020-08-28 11:17           ` Peter Ujfalusi
  0 siblings, 1 reply; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-08-27 14:41 UTC (permalink / raw)
  To: Peter Ujfalusi, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009

Hi Peter,
Thank you very much for the review and the detailed explanation; I will take
the inputs and update in the next patch.
My comments are inline...

On 8/24/2020 7:24 PM, Peter Ujfalusi wrote:
> Hi,
>
> On 24/08/2020 5.30, Reddy, MallikarjunaX wrote:
>
>>>> +static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
>>>> +{
>>>> +    struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);
>>>> +    struct ldma_chan *c = ds->chan;
>>>> +
>>>> +    dma_pool_free(c->desc_pool, ds->desc_hw, ds->desc_phys);
>>>> +    kfree(ds);
>>>> +    c->ds = NULL;
>>> Is there a chance that c->ds != ds?
>> No, from the code I don't see any such scenario; let me know if you find
>> any corner case.
> The desc_free callback is used to _free_ up the memory used for the
> descriptor. Nothing less, nothing more.
> You should not touch the c->ds in this callback, just free up the memory
> used for the given vdesc.
>
> More on that a bit later.
Ok.
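
For reference, a desc_free callback that only frees would look roughly like
this (a sketch reusing the names from the patch):

static void dma_free_desc_resource(struct virt_dma_desc *vdesc)
{
        struct dw2_desc_sw *ds = to_lgm_dma_desc(vdesc);

        /* free the hardware descriptors and the software wrapper only;
         * no c->ds bookkeeping in this callback
         */
        dma_pool_free(ds->chan->desc_pool, ds->desc_hw, ds->desc_phys);
        kfree(ds);
}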
>
>>>> +}
>>>> +
>>>> +static struct dw2_desc_sw *
>>>> +dma_alloc_desc_resource(int num, struct ldma_chan *c)
>>>> +{
>>>> +    struct device *dev = c->vchan.chan.device->dev;
>>>> +    struct dw2_desc_sw *ds;
>>>> +
>>>> +    if (num > c->desc_num) {
>>>> +        dev_err(dev, "sg num %d exceed max %d\n", num, c->desc_num);
>>>> +        return NULL;
>>>> +    }
>>>> +
>>>> +    ds = kzalloc(sizeof(*ds), GFP_NOWAIT);
>>>> +    if (!ds)
>>>> +        return NULL;
>>>> +
>>>> +    ds->chan = c;
>>>> +
>>>> +    ds->desc_hw = dma_pool_zalloc(c->desc_pool, GFP_ATOMIC,
>>>> +                      &ds->desc_phys);
>>>> +    if (!ds->desc_hw) {
>>>> +        dev_dbg(dev, "out of memory for link descriptor\n");
>>>> +        kfree(ds);
>>>> +        return NULL;
>>>> +    }
>>>> +    ds->desc_cnt = num;
>>>> +
>>>> +    return ds;
>>>> +}
>>>> +
>>>> +static void ldma_chan_irq_en(struct ldma_chan *c)
>>>> +{
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +    unsigned long flags;
>>>> +
>>>> +    spin_lock_irqsave(&d->dev_lock, flags);
>>>> +    writel(c->nr, d->base + DMA_CS);
>>>> +    writel(DMA_CI_EOP, d->base + DMA_CIE);
>>>> +    writel(BIT(c->nr), d->base + DMA_IRNEN);
>>>> +    spin_unlock_irqrestore(&d->dev_lock, flags);
>>>> +}
>>>> +
>>>> +static void dma_issue_pending(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +    unsigned long flags;
>>>> +
>>>> +    if (d->ver == DMA_VER22) {
>>>> +        spin_lock_irqsave(&c->vchan.lock, flags);
>>>> +        if (vchan_issue_pending(&c->vchan)) {
>>>> +            struct virt_dma_desc *vdesc;
>>>> +
>>>> +            /* Get the next descriptor */
>>>> +            vdesc = vchan_next_desc(&c->vchan);
>>>> +            if (!vdesc) {
>>>> +                c->ds = NULL;
>>>> +                return;
>>>> +            }
>>>> +            list_del(&vdesc->node);
>>>> +            c->ds = to_lgm_dma_desc(vdesc);
>>> you have set c->ds in dma_prep_slave_sg and the only way I can see that
>>> you will not leak memory is that the client must terminate_sync() after
>>> each transfer so that the synchronize callback is invoked between each
>>> prep_sg/issue_pending/completion.
>> Yes, the client must call dmaengine_synchronize after each transfer to
>> free the memory associated with previously issued descriptors, if any.
> No, client should not need to invoke synchronize between transfers,
> clients must be able to do:
> dmaengine_prep_slave_single/dmaengine_prep_slave_sg
> dma_async_issue_pending
> * handle the callback
> dmaengine_prep_slave_single/dmaengine_prep_slave_sg
> dma_async_issue_pending
> * handle the callback
>
> without any terminate_all_sync() in between.
> Imagine that the client is preparing/issuing a new transfer in the
> completion callback for example.
>
> Clients must be able to do also:
> dmaengine_prep_slave_single/dmaengine_prep_slave_sg
> dmaengine_prep_slave_single/dmaengine_prep_slave_sg
> ...
> dmaengine_prep_slave_single/dmaengine_prep_slave_sg
> dma_async_issue_pending
>
> and then the DMA will complete the transfers in FIFO order and when the
> first is completed it will move to the next one. Client will receive
> callbacks for each completion (if requested).
Thanks for the detailed explanation, Peter.
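So the expected client flow is, roughly, the following (a minimal sketch;
my_done()/my_xfer() and the already-mapped buffer are illustrative, not
from any in-tree client):

static void my_done(void *param)
{
        complete(param);
}

/* Can be called back to back, no terminate/synchronize in between */
static int my_xfer(struct dma_chan *chan, dma_addr_t buf, size_t len)
{
        struct dma_async_tx_descriptor *tx;
        DECLARE_COMPLETION_ONSTACK(done);

        tx = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                         DMA_PREP_INTERRUPT);
        if (!tx)
                return -ENOMEM;

        tx->callback = my_done;
        tx->callback_param = &done;
        dmaengine_submit(tx);
        dma_async_issue_pending(chan);

        wait_for_completion(&done);
        return 0;
}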
>
>> and also from the driver we are freeing up the descriptor from the work
>> queue after each transfer (addressed in the comments below **).
>>>> +            spin_unlock_irqrestore(&c->vchan.lock, flags);
>>>> +            ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
>>>> +            ldma_chan_irq_en(c);
>>>> +        }
>>> If there is nothing peding, you will leave the spinlock wide open...
>> Seems I misplaced the lock. I will fix it in the next version.
>>>> +    }
>>>> +    ldma_chan_on(c);
>>>> +}
>>>> +
>>>> +static void dma_synchronize(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +
>>>> +    /*
>>>> +     * clear any pending work if any. In that
>>>> +     * case the resource needs to be free here.
>>>> +     */
>>>> +    cancel_work_sync(&c->work);
>>>> +    vchan_synchronize(&c->vchan);
>>>> +    if (c->ds)
>>>> +        dma_free_desc_resource(&c->ds->vdesc);
>>>> +}
>>>> +
>>>> +static int dma_terminate_all(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    unsigned long flags;
>>>> +    LIST_HEAD(head);
>>>> +
>>>> +    spin_lock_irqsave(&c->vchan.lock, flags);
>>>> +    vchan_get_all_descriptors(&c->vchan, &head);
>>>> +    spin_unlock_irqrestore(&c->vchan.lock, flags);
>>>> +    vchan_dma_desc_free_list(&c->vchan, &head);
>>>> +
>>>> +    return ldma_chan_reset(c);
>>>> +}
>>>> +
>>>> +static int dma_resume_chan(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +
>>>> +    ldma_chan_on(c);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static int dma_pause_chan(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +
>>>> +    return ldma_chan_off(c);
>>>> +}
>>>> +
>>>> +static enum dma_status
>>>> +dma_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
>>>> +          struct dma_tx_state *txstate)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +    enum dma_status status = DMA_COMPLETE;
>>>> +
>>>> +    if (d->ver == DMA_VER22)
>>>> +        status = dma_cookie_status(chan, cookie, txstate);
>>>> +
>>>> +    return status;
>>>> +}
>>>> +
>>>> +static void dma_chan_irq(int irq, void *data)
>>>> +{
>>>> +    struct ldma_chan *c = data;
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +    u32 stat;
>>>> +
>>>> +    /* Disable channel interrupts  */
>>>> +    writel(c->nr, d->base + DMA_CS);
>>>> +    stat = readl(d->base + DMA_CIS);
>>>> +    if (!stat)
>>>> +        return;
>>>> +
>>>> +    writel(readl(d->base + DMA_CIE) & ~DMA_CI_ALL, d->base + DMA_CIE);
>>>> +    writel(stat, d->base + DMA_CIS);
>>>> +    queue_work(d->wq, &c->work);
>>>> +}
>>>> +
>>>> +static irqreturn_t dma_interrupt(int irq, void *dev_id)
>>>> +{
>>>> +    struct ldma_dev *d = dev_id;
>>>> +    struct ldma_chan *c;
>>>> +    unsigned long irncr;
>>>> +    u32 cid;
>>>> +
>>>> +    irncr = readl(d->base + DMA_IRNCR);
>>>> +    if (!irncr) {
>>>> +        dev_err(d->dev, "dummy interrupt\n");
>>>> +        return IRQ_NONE;
>>>> +    }
>>>> +
>>>> +    for_each_set_bit(cid, &irncr, d->chan_nrs) {
>>>> +        /* Mask */
>>>> +        writel(readl(d->base + DMA_IRNEN) & ~BIT(cid), d->base + DMA_IRNEN);
>>>> +        /* Ack */
>>>> +        writel(readl(d->base + DMA_IRNCR) | BIT(cid), d->base + DMA_IRNCR);
>>>> +
>>>> +        c = &d->chans[cid];
>>>> +        dma_chan_irq(irq, c);
>>>> +    }
>>>> +
>>>> +    return IRQ_HANDLED;
>>>> +}
>>>> +
>>>> +static struct dma_async_tx_descriptor *
>>>> +dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
>>>> +          unsigned int sglen, enum dma_transfer_direction dir,
>>>> +          unsigned long flags, void *context)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    size_t len, avail, total = 0;
>>>> +    struct dw2_desc *hw_ds;
>>>> +    struct dw2_desc_sw *ds;
>>>> +    struct scatterlist *sg;
>>>> +    int num = sglen, i;
>>>> +    dma_addr_t addr;
>>>> +
>>>> +    if (!sgl)
>>>> +        return NULL;
>>>> +
>>>> +    for_each_sg(sgl, sg, sglen, i) {
>>>> +        avail = sg_dma_len(sg);
>>>> +        if (avail > DMA_MAX_SIZE)
>>>> +            num += DIV_ROUND_UP(avail, DMA_MAX_SIZE) - 1;
>>>> +    }
>>>> +
>>>> +    ds = dma_alloc_desc_resource(num, c);
>>>> +    if (!ds)
>>>> +        return NULL;
>>>> +
>>>> +    c->ds = ds;
>>> If you still have a transfer running then you are going to lose that
>>> descriptor?
>> No, please let me know if you find any such corner case.
> This is the place when you prepare the descriptor, but it is not yet
> committed to the hardware. Client might never call issue_pending, client
> might prepare another transfer before telling the DMA driver to start
> the transfers.
OK.
>
>>>> +
>>>> +    num = 0;
>>>> +    /* sop and eop has to be handled nicely */
>>>> +    for_each_sg(sgl, sg, sglen, i) {
>>>> +        addr = sg_dma_address(sg);
>>>> +        avail = sg_dma_len(sg);
>>>> +        total += avail;
>>>> +
>>>> +        do {
>>>> +            len = min_t(size_t, avail, DMA_MAX_SIZE);
>>>> +
>>>> +            hw_ds = &ds->desc_hw[num];
>>>> +            switch (sglen) {
>>>> +            case 1:
>>>> +                hw_ds->status.field.sop = 1;
>>>> +                hw_ds->status.field.eop = 1;
>>>> +                break;
>>>> +            default:
>>>> +                if (num == 0) {
>>>> +                    hw_ds->status.field.sop = 1;
>>>> +                    hw_ds->status.field.eop = 0;
>>>> +                } else if (num == (sglen - 1)) {
>>>> +                    hw_ds->status.field.sop = 0;
>>>> +                    hw_ds->status.field.eop = 1;
>>>> +                } else {
>>>> +                    hw_ds->status.field.sop = 0;
>>>> +                    hw_ds->status.field.eop = 0;
>>>> +                }
>>>> +                break;
>>>> +            }
>>>> +
>>>> +            /* Only 32 bit address supported */
>>>> +            hw_ds->addr = (u32)addr;
>>>> +            hw_ds->status.field.len = len;
>>>> +            hw_ds->status.field.c = 0;
>>>> +            hw_ds->status.field.bofs = addr & 0x3;
>>>> +            /* Ensure data ready before ownership change */
>>>> +            wmb();
>>>> +            hw_ds->status.field.own = DMA_OWN;
>>>> +            /* Ensure ownership changed before moving forward */
>>>> +            wmb();
>>>> +            num++;
>>>> +            addr += len;
>>>> +            avail -= len;
>>>> +        } while (avail);
>>>> +    }
>>>> +
>>>> +    ds->size = total;
>>>> +
>>>> +    return vchan_tx_prep(&c->vchan, &ds->vdesc, DMA_CTRL_ACK);
>>>> +}
>>>> +
>>>> +static int
>>>> +dma_slave_config(struct dma_chan *chan, struct dma_slave_config *cfg)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +    struct ldma_port *p = c->port;
>>>> +    unsigned long flags;
>>>> +    u32 bl;
>>>> +
>>>> +    if ((cfg->direction == DMA_DEV_TO_MEM &&
>>>> +         cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
>>>> +        (cfg->direction == DMA_MEM_TO_DEV &&
>>>> +         cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES) ||
>>> According to the probe function these width restrictions are only valid
>>> for DMA_VER22?
>> YES
> Right, hdma_ops does not have device_config specified.
>
>>>> +        !is_slave_direction(cfg->direction))
>>>> +        return -EINVAL;
>>>> +
>>>> +    /* Default setting will be used */
>>>> +    if (!cfg->src_maxburst && !cfg->dst_maxburst)
>>>> +        return 0;
>>> maxburst == 0 is identical to maxburst == 1, it is just not set
>>> explicitly. Iow 1 word per DMA request.
>> This is not clear to me. Can you elaborate?
> You handle the *_maxburst == 1 and *maxburst == 0 differently while they
> are the same thing.
OK, got it. I will fix it in the next version.
>
>>>> +
>>>> +    /* Must be the same */
>>>> +    if (cfg->src_maxburst && cfg->dst_maxburst &&
>>>> +        cfg->src_maxburst != cfg->dst_maxburst)
>>>> +        return -EINVAL;
>>>> +
>>>> +    if (cfg->dst_maxburst)
>>>> +        cfg->src_maxburst = cfg->dst_maxburst;
>>>> +
>>>> +    bl = ilog2(cfg->src_maxburst);
>>>> +
>>>> +    spin_lock_irqsave(&d->dev_lock, flags);
>>>> +    writel(p->portid, d->base + DMA_PS);
>>>> +    ldma_update_bits(d, DMA_PCTRL_RXBL | DMA_PCTRL_TXBL,
>>>> +             FIELD_PREP(DMA_PCTRL_RXBL, bl) |
>>>> +             FIELD_PREP(DMA_PCTRL_TXBL, bl), DMA_PCTRL);
>>>> +    spin_unlock_irqrestore(&d->dev_lock, flags);
>>> What drivers usually do is save the cfg and at prepare time take it
>>> into account when setting up the transfer.
>>> Write the change to the HW before the transfer is started (if it has
>>> changed from the previous settings).
>>>
>>> Client drivers usually set the slave config ones, in most cases during
>>> probe, so the slave config rarely changes runtime, but there are cases
>>> for that.
>> OK, got it. I will update it in the next version.
> Thanks
>
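
Concretely, the suggested pattern would be something like this (a sketch;
the config/config_set fields and the ldma_port_cfg_burst() helper are
illustrative, not part of the posted driver):

static int dma_slave_config(struct dma_chan *chan,
                            struct dma_slave_config *cfg)
{
        struct ldma_chan *c = to_ldma_chan(chan);

        c->config = *cfg;       /* just remember it for later */
        c->config_set = true;
        return 0;
}

/* called from prep, or just before the transfer is started */
static void ldma_apply_config(struct ldma_chan *c)
{
        u32 burst, bl;

        if (!c->config_set)
                return;
        c->config_set = false;

        /* maxburst == 0 means "not set" and behaves like maxburst == 1 */
        burst = max_t(u32, c->config.src_maxburst, 1);
        bl = ilog2(burst);
        ldma_port_cfg_burst(c->port, bl);       /* hypothetical helper */
}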
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static int dma_alloc_chan_resources(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +    struct device *dev = c->vchan.chan.device->dev;
>>>> +    size_t    desc_sz;
>>>> +
>>>> +    if (d->ver > DMA_VER22) {
>>>> +        c->flags |= CHAN_IN_USE;
>>>> +        return 0;
>>>> +    }
>>>> +
>>>> +    if (c->desc_pool)
>>>> +        return c->desc_num;
>>>> +
>>>> +    desc_sz = c->desc_num * sizeof(struct dw2_desc);
>>>> +    c->desc_pool = dma_pool_create(c->name, dev, desc_sz,
>>>> +                       __alignof__(struct dw2_desc), 0);
>>>> +
>>>> +    if (!c->desc_pool) {
>>>> +        dev_err(dev, "unable to allocate descriptor pool\n");
>>>> +        return -ENOMEM;
>>>> +    }
>>>> +
>>>> +    return c->desc_num;
>>>> +}
>>>> +
>>>> +static void dma_free_chan_resources(struct dma_chan *chan)
>>>> +{
>>>> +    struct ldma_chan *c = to_ldma_chan(chan);
>>>> +    struct ldma_dev *d = to_ldma_dev(c->vchan.chan.device);
>>>> +
>>>> +    if (d->ver == DMA_VER22) {
>>>> +        dma_pool_destroy(c->desc_pool);
>>>> +        c->desc_pool = NULL;
>>>> +        vchan_free_chan_resources(to_virt_chan(chan));
>>>> +        ldma_chan_reset(c);
>>>> +    } else {
>>>> +        c->flags &= ~CHAN_IN_USE;
>>>> +    }
>>>> +}
>>>> +
>>>> +static void dma_work(struct work_struct *work)
>>>> +{
>>>> +    struct ldma_chan *c = container_of(work, struct ldma_chan, work);
>>>> +    struct dma_async_tx_descriptor *tx = &c->ds->vdesc.tx;
>>>> +    struct dmaengine_desc_callback cb;
>>>> +
>>>> +    dmaengine_desc_get_callback(tx, &cb);
>>>> +    dma_cookie_complete(tx);
>>>> +    dmaengine_desc_callback_invoke(&cb, NULL);
>>> When you are going to free up the descriptor?
>> **
>> Seems I missed freeing up the descriptor here. I will fix and update it
>> in the next version.
> This is the place when you can do c->ds = NULL and you should also look
> for entries in the issued list to pick up any issued but not committed
> transfers.
>
> Let's say the client issued two transfers, in issue_pending you have
> started the first one.
> Now that is completed, you let the client know about it (you can free up
> the descritor as well safely) and then you pick up the next one from the
> issued list and start that.
> If there is nothing pending then c->ds would be NULL, if there were
> pending transfer, you will set c->ds to the next transfer.
> c->ds indicates that there is a transfer in progress by the HW.
Thanks, Peter. I will take your inputs and address the uncommitted
transfers in the next patch.
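Something along these lines, as a sketch on top of virt-dma (assuming the
descriptor memory is freed via the desc_free callback and c->ds tracks only
the in-flight transfer):

static void dma_work(struct work_struct *work)
{
        struct ldma_chan *c = container_of(work, struct ldma_chan, work);
        struct virt_dma_desc *vd;
        unsigned long flags;

        spin_lock_irqsave(&c->vchan.lock, flags);
        if (c->ds) {
                /* completes the cookie; freeing happens via desc_free */
                vchan_cookie_complete(&c->ds->vdesc);
                c->ds = NULL;
        }
        /* pick up the next issued-but-not-started transfer, if any */
        vd = vchan_next_desc(&c->vchan);
        if (vd) {
                list_del(&vd->node);
                c->ds = to_lgm_dma_desc(vd);
        }
        spin_unlock_irqrestore(&c->vchan.lock, flags);

        if (c->ds) {
                ldma_chan_desc_hw_cfg(c, c->ds->desc_phys, c->ds->desc_cnt);
                ldma_chan_irq_en(c);
                ldma_chan_on(c);
        }
}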
>
>>>> +
>>>> +    dma_dev->device_alloc_chan_resources =
>>>> +        d->inst->ops->device_alloc_chan_resources;
>>>> +    dma_dev->device_free_chan_resources =
>>>> +        d->inst->ops->device_free_chan_resources;
>>>> +    dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
>>>> +    dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
>>>> +    dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>>>> +    dma_dev->device_resume = d->inst->ops->device_resume;
>>>> +    dma_dev->device_pause = d->inst->ops->device_pause;
>>>> +    dma_dev->device_config = d->inst->ops->device_config;
>>>> +    dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
>>>> +    dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>>>> +
>>>> +    if (d->ver == DMA_VER22) {
>>>> +        dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>> +        dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>> +        dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>>>> +                      BIT(DMA_DEV_TO_MEM);
>>>> +        dma_dev->residue_granularity =
>>>> +                    DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>>>> +    }
>>> So, if version is != DMA_VER22, then you don't support any direction?
>>> Why register the DMA device if it can not do any transfer?
>> Only dma0 instance (intel,lgm-cdma) is used as a general purpose slave
>> DMA. we set both control and datapath here.
>> Other instances we set only control path. data path is taken care by dma
>> client(GSWIP).
> How the client (GSWIP) can request a channel from intel,lgm-* ? Don't
> you need some capabilities for the DMA device so core can sort out the
> request?
The client requests the channel by name: dma_request_slave_channel(dev, name);
>
>> Only thing needs to do is get the channel, set the descriptor and just
>> on the channel.
> How do you 'on' the channel?
we on the channel in issue_pending.
>
>>>> +
>>>> +    platform_set_drvdata(pdev, d);
>>>> +
>>>> +    ldma_dev_init(d);
>>>> +
>>>> +    ret = dma_async_device_register(dma_dev);
>>>> +    if (ret) {
>>>> +        dev_err(dev, "Failed to register slave DMA engine device\n");
>>>> +        return ret;
>>>> +    }
>>>> +
>>>> +    ret = of_dma_controller_register(pdev->dev.of_node, ldma_xlate, d);
>>>> +    if (ret) {
>>>> +        dev_err(dev, "Failed to register of DMA controller\n");
>>>> +        dma_async_device_unregister(dma_dev);
>>>> +        return ret;
>>>> +    }
>>>> +
>>>> +    dev_info(dev, "Init done - rev: %x, ports: %d channels: %d\n", d->ver,
>>>> +         d->port_nrs, d->chan_nrs);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +static struct platform_driver intel_ldma_driver = {
>>>> +    .probe = intel_ldma_probe,
>>>> +    .driver = {
>>>> +        .name = DRIVER_NAME,
>>>> +        .of_match_table = intel_ldma_match,
>>>> +    },
>>>> +};
>>>> +
>>>> +static int __init intel_ldma_init(void)
>>>> +{
>>>> +    return platform_driver_register(&intel_ldma_driver);
>>>> +}
>>>> +
>>>> +device_initcall(intel_ldma_init);
>>>> diff --git a/include/linux/dma/lgm_dma.h b/include/linux/dma/lgm_dma.h
>>>> new file mode 100644
>>>> index 000000000000..3a2ee6ad0710
>>>> --- /dev/null
>>>> +++ b/include/linux/dma/lgm_dma.h
>>>> @@ -0,0 +1,27 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +/*
>>>> + * Copyright (c) 2016 ~ 2019 Intel Corporation.
>>>> + */
>>>> +#ifndef LGM_DMA_H
>>>> +#define LGM_DMA_H
>>>> +
>>>> +#include <linux/types.h>
>>>> +#include <linux/dmaengine.h>
>>>> +
>>>> +/*!
>>>> + * \fn int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
>>>> + *                                 int desc_num)
>>>> + * \brief Configure low level channel descriptors
>>>> + * \param[in] chan   pointer to DMA channel that the client is using
>>>> + * \param[in] desc_base   descriptor base physical address
>>>> + * \param[in] desc_num   number of descriptors
>>>> + * \return   0 on success
>>>> + * \return   kernel bug reported on failure
>>>> + *
>>>> + * This function configures the low-level channel descriptors. It will be
>>>> + * used by the CBM, whose descriptors are not in DDR but are actually registers.
>>>> + */
>>>> +int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
>>>> +                int desc_num);
>>>> +
>>>> +#endif /* LGM_DMA_H */
>>>>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-27  9:54         ` Reddy, MallikarjunaX
@ 2020-08-28 10:45           ` Vinod Koul
  2020-08-31  8:06             ` Reddy, MallikarjunaX
  0 siblings, 1 reply; 23+ messages in thread
From: Vinod Koul @ 2020-08-28 10:45 UTC (permalink / raw)
  To: Reddy, MallikarjunaX
  Cc: Rob Herring, dmaengine, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu, chuanhua.lei,
	malliamireddy009

On 27-08-20, 17:54, Reddy, MallikarjunaX wrote:
> Hi Vinod,
> Thanks for the review comments.
> 
> On 8/25/2020 7:21 PM, Vinod Koul wrote:
> > On 18-08-20, 15:00, Reddy, MallikarjunaX wrote:
> > 
> > > > > +
> > > > > +            intel,chans:
> > > > > +              $ref: /schemas/types.yaml#/definitions/uint32-array
> > > > > +              description:
> > > > > +                 The channels included on this port. Format is channel start
> > > > > +                 number and how many channels on this port.
> > > > Why does this need to be in DT? This all seems like it can be in the dma
> > > > cells for each client.
> > > (*ABC)
> > > Yes. We need this.
> > > for dma0(lgm-cdma) old SOC supports 16 channels and the new SOC supports 22
> > > channels. and the logical channel mapping for the peripherals also differ
> > > b/w old and new SOCs.
> > > 
> > > Because of this hardware limitation we are trying to configure the total
> > > channels and port-channel mapping dynamically from device tree.
> > > 
> > > based on port name we are trying to configure the default values for
> > > different peripherals(ports).
> > > Example: burst length is not same for all ports, so using port name to do
> > > default configurations.
> > Sorry that does not make sense to me, why not specify the values to be
> > used here instead of defining your own name scheme!
> OK. Agreed. I will remove port name from DT and only use intel,chans

what is intel,chans, why not use dma-channels?

-- 
~Vinod

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-27 14:41         ` Reddy, MallikarjunaX
@ 2020-08-28 11:17           ` Peter Ujfalusi
  2020-08-31  8:07             ` Reddy, MallikarjunaX
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Ujfalusi @ 2020-08-28 11:17 UTC (permalink / raw)
  To: Reddy, MallikarjunaX, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009

Hi,

On 27/08/2020 17.41, Reddy, MallikarjunaX wrote:
>>>>> +
>>>>> +    dma_dev->device_alloc_chan_resources =
>>>>> +        d->inst->ops->device_alloc_chan_resources;
>>>>> +    dma_dev->device_free_chan_resources =
>>>>> +        d->inst->ops->device_free_chan_resources;
>>>>> +    dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
>>>>> +    dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
>>>>> +    dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>>>>> +    dma_dev->device_resume = d->inst->ops->device_resume;
>>>>> +    dma_dev->device_pause = d->inst->ops->device_pause;
>>>>> +    dma_dev->device_config = d->inst->ops->device_config;
>>>>> +    dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
>>>>> +    dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>>>>> +
>>>>> +    if (d->ver == DMA_VER22) {
>>>>> +        dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>> +        dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>> +        dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>>>>> +                      BIT(DMA_DEV_TO_MEM);
>>>>> +        dma_dev->residue_granularity =
>>>>> +                    DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>>>>> +    }
>>>> So, if version is != DMA_VER22, then you don't support any direction?
>>>> Why register the DMA device if it can not do any transfer?
>>> Only dma0 instance (intel,lgm-cdma) is used as a general purpose slave
>>> DMA. we set both control and datapath here.
>>> Other instances we set only control path. data path is taken care by dma
>>> client(GSWIP).
>> How the client (GSWIP) can request a channel from intel,lgm-* ? Don't
>> you need some capabilities for the DMA device so core can sort out the
>> request?
> client request channel by name, dma_request_slave_channel(dev, name);

clients should use dma_request_chan(dev, name);

If the channel can be requested via DT or ACPI then we don't check the
capabilities at all, so yes, that could work.
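
i.e. on the client side, roughly ("rx" being whatever the dma-names entry
is called):

        struct dma_chan *chan;

        chan = dma_request_chan(dev, "rx");
        if (IS_ERR(chan))
                return PTR_ERR(chan);
        /* ... use the channel ... */
        dma_release_channel(chan);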

>>> Only thing needs to do is get the channel, set the descriptor and just
>>> on the channel.
>> How do you 'on' the channel?
> we on the channel in issue_pending.

Right.
Basically you only prep_slave_sg/single for the DMA_VER22? Or do you do
that for the others w/o direction support?

For the intel,lgm-* DMAs you only call issue_pending() and probably
terminate_all?

Interesting setup ;)

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-28 10:45           ` Vinod Koul
@ 2020-08-31  8:06             ` Reddy, MallikarjunaX
  2020-08-31 11:00               ` Vinod Koul
  0 siblings, 1 reply; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-08-31  8:06 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Rob Herring, dmaengine, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu, chuanhua.lei,
	malliamireddy009

Hi Vinod,

Thanks for the review. Please see my comment inline.

On 8/28/2020 6:45 PM, Vinod Koul wrote:
> On 27-08-20, 17:54, Reddy, MallikarjunaX wrote:
>> Hi Vinod,
>> Thanks for the review comments.
>>
>> On 8/25/2020 7:21 PM, Vinod Koul wrote:
>>> On 18-08-20, 15:00, Reddy, MallikarjunaX wrote:
>>>
>>>>>> +
>>>>>> +            intel,chans:
>>>>>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>>>>>> +              description:
>>>>>> +                 The channels included on this port. Format is channel start
>>>>>> +                 number and how many channels on this port.
>>>>> Why does this need to be in DT? This all seems like it can be in the dma
>>>>> cells for each client.
>>>> (*ABC)
>>>> Yes. We need this.
>>>> for dma0(lgm-cdma) old SOC supports 16 channels and the new SOC supports 22
>>>> channels. and the logical channel mapping for the peripherals also differ
>>>> b/w old and new SOCs.
>>>>
>>>> Because of this hardware limitation we are trying to configure the total
>>>> channels and port-channel mapping dynamically from device tree.
>>>>
>>>> based on port name we are trying to configure the default values for
>>>> different peripherals(ports).
>>>> Example: burst length is not same for all ports, so using port name to do
>>>> default configurations.
>>> Sorry that does not make sense to me, why not specify the values to be
>>> used here instead of defining your own name scheme!
>> OK. Agreed. I will remove port name from DT and only use intel,chans
> what is intel,chans, why not use dma-channels?
  The intel,chans property describes the channels included on the
corresponding port; the format is the channel start number followed by how
many channels are on this port.
  The reasoning behind using this attribute instead of the standard
dma-channels is...


DMA_VER22 HW supports 22 channels, but there is a hole in the HW, so in
total it can use only 16.

The old SoC supports 4 ports and 16 channels.
The new SoC supports 6 ports and 22 channels.
(Both the old and the new SoC carry the same version, VER22.)

The port-channel mapping for the old and new SoCs is also not the same.
Old SoC: logical channels (Rx, Tx):
0, 1 - SPI0
2, 3 - SPI1
4, 5 - HSNAND
12, 14, 13, 15 - Memcopy

New SoC: logical channels (Rx, Tx):
0, 1 - SPI0
2, 3 - SPI1
4, 5 - SPI2
6, 7 - SPI3
8, 9 - HSNAND
10 to 21 - Memcopy

For these reasons we are trying to use the "intel,chans" attribute and
read the number of channels from the DT.
Advantages:
1. We can map the channels corresponding to each port.
2. We can dynamically configure the channels (due to the HW limitation).

If this is not OK, please suggest a better way to handle this.
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-28 11:17           ` Peter Ujfalusi
@ 2020-08-31  8:07             ` Reddy, MallikarjunaX
  2020-09-04  6:31               ` Peter Ujfalusi
  0 siblings, 1 reply; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-08-31  8:07 UTC (permalink / raw)
  To: Peter Ujfalusi, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009


On 8/28/2020 7:17 PM, Peter Ujfalusi wrote:
> Hi,
>
> On 27/08/2020 17.41, Reddy, MallikarjunaX wrote:
>>>>>> +
>>>>>> +    dma_dev->device_alloc_chan_resources =
>>>>>> +        d->inst->ops->device_alloc_chan_resources;
>>>>>> +    dma_dev->device_free_chan_resources =
>>>>>> +        d->inst->ops->device_free_chan_resources;
>>>>>> +    dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
>>>>>> +    dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
>>>>>> +    dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>>>>>> +    dma_dev->device_resume = d->inst->ops->device_resume;
>>>>>> +    dma_dev->device_pause = d->inst->ops->device_pause;
>>>>>> +    dma_dev->device_config = d->inst->ops->device_config;
>>>>>> +    dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
>>>>>> +    dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>>>>>> +
>>>>>> +    if (d->ver == DMA_VER22) {
>>>>>> +        dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>>> +        dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>>> +        dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>>>>>> +                      BIT(DMA_DEV_TO_MEM);
>>>>>> +        dma_dev->residue_granularity =
>>>>>> +                    DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>>>>>> +    }
>>>>> So, if version is != DMA_VER22, then you don't support any direction?
>>>>> Why register the DMA device if it can not do any transfer?
>>>> Only dma0 instance (intel,lgm-cdma) is used as a general purpose slave
>>>> DMA. we set both control and datapath here.
>>>> Other instances we set only control path. data path is taken care by dma
>>>> client(GSWIP).
>>> How the client (GSWIP) can request a channel from intel,lgm-* ? Don't
>>> you need some capabilities for the DMA device so core can sort out the
>>> request?
>> client request channel by name, dma_request_slave_channel(dev, name);
> clients should use dma_request_chan(dev, name);
>
> If the channel can be requested via DT or ACPI then we don't check the
> capabilities at all, so yes, that could work.
>
>>>> Only thing needs to do is get the channel, set the descriptor and just
>>>> on the channel.
>>> How do you 'on' the channel?
>> we on the channel in issue_pending.
> Right.
> Basically you only prep_slave_sg/single for the DMA_VER22? Or do you
> that for the others w/o direction support?
Yes. prep_slave_sg/single only for the DMA_VER22.
>
> For the intel,lgm-* DMAs you only call issue_pending() and probably
> terminate_all?
Yes, correct.
>
> Interesting setup ;)
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-31  8:06             ` Reddy, MallikarjunaX
@ 2020-08-31 11:00               ` Vinod Koul
  2020-09-01 15:03                 ` Reddy, MallikarjunaX
  0 siblings, 1 reply; 23+ messages in thread
From: Vinod Koul @ 2020-08-31 11:00 UTC (permalink / raw)
  To: Reddy, MallikarjunaX
  Cc: Rob Herring, dmaengine, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu, chuanhua.lei,
	malliamireddy009

On 31-08-20, 16:06, Reddy, MallikarjunaX wrote:
> Hi Vinod,
> 
> Thanks for the review. Please see my comment inline.
> 
> On 8/28/2020 6:45 PM, Vinod Koul wrote:
> > On 27-08-20, 17:54, Reddy, MallikarjunaX wrote:
> > > Hi Vinod,
> > > Thanks for the review comments.
> > > 
> > > On 8/25/2020 7:21 PM, Vinod Koul wrote:
> > > > On 18-08-20, 15:00, Reddy, MallikarjunaX wrote:
> > > > 
> > > > > > > +
> > > > > > > +            intel,chans:
> > > > > > > +              $ref: /schemas/types.yaml#/definitions/uint32-array
> > > > > > > +              description:
> > > > > > > +                 The channels included on this port. Format is channel start
> > > > > > > +                 number and how many channels on this port.
> > > > > > Why does this need to be in DT? This all seems like it can be in the dma
> > > > > > cells for each client.
> > > > > (*ABC)
> > > > > Yes. We need this.
> > > > > for dma0(lgm-cdma) old SOC supports 16 channels and the new SOC supports 22
> > > > > channels. and the logical channel mapping for the peripherals also differ
> > > > > b/w old and new SOCs.
> > > > > 
> > > > > Because of this hardware limitation we are trying to configure the total
> > > > > channels and port-channel mapping dynamically from device tree.
> > > > > 
> > > > > based on port name we are trying to configure the default values for
> > > > > different peripherals(ports).
> > > > > Example: burst length is not same for all ports, so using port name to do
> > > > > default configurations.
> > > > Sorry that does not make sense to me, why not specify the values to be
> > > > used here instead of defining your own name scheme!
> > > OK. Agreed. I will remove port name from DT and only use intel,chans
> > what is intel,chans, why not use dma-channels?
>  The intel,chans property describes the channels included on the corresponding port.

What do you mean by a port here?

> Format is channel start number and how many channels on this port.

It is perfectly reasonable to have 16 channels but have Linux not use all
of them, let's say from the 5th channel onwards.

So you need to use the standard dma-channels along with dma-channel-mask
to specify which channels Linux can use.
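
For example (a sketch; the unit address and the mask value are illustrative,
dma-channel-mask is the generic property from dma-common.yaml):

dma0: dma-controller@e0e00000 {
        compatible = "intel,lgm-cdma";
        reg = <0xe0e00000 0x1000>;
        #dma-cells = <3>;
        dma-channels = <16>;
        /* e.g. channels 0-5 and 12-15 usable, 6-11 masked off */
        dma-channel-mask = <0xf03f>;
};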

>  The reasoning behind using this attribute instead of the standard
> dma-channels is...
> 
> 
> DMA_VER22 HW supports 22 channels, but there is a hole in the HW, so in
> total it can use only 16.
> 
> The old SoC supports 4 ports and 16 channels.
> The new SoC supports 6 ports and 22 channels.
> (Both the old and the new SoC carry the same version, VER22.)
> 
> The port-channel mapping for the old and new SoCs is also not the same.
> Old SoC: logical channels (Rx, Tx):
> 0, 1 - SPI0
> 2, 3 - SPI1
> 4, 5 - HSNAND
> 12, 14, 13, 15 - Memcopy
> 
> New SoC: logical channels (Rx, Tx):
> 0, 1 - SPI0
> 2, 3 - SPI1
> 4, 5 - SPI2
> 6, 7 - SPI3
> 8, 9 - HSNAND
> 10 to 21 - Memcopy

The mapping being different is fine; the client can set the required
channel in its dmas property and use that specific channel.
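
And on the consumer side, roughly (the meaning of the cells after the
phandle is illustrative here, assuming #dma-cells = <3>):

spi0: spi@e0100000 {
        /* <controller channel-nr port burst>, layout is illustrative */
        dmas = <&dma0 0 0 8>, <&dma0 1 0 8>;
        dma-names = "rx", "tx";
};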

> For these reasons we are trying to use the "intel,chans" attribute and
> read the number of channels from the DT.
> Advantages:
> 1. We can map the channels corresponding to each port.
> 2. We can dynamically configure the channels (due to the HW limitation).
> 
> If this is not OK, please suggest a better way to handle this.
> > 

-- 
~Vinod

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-31 11:00               ` Vinod Koul
@ 2020-09-01 15:03                 ` Reddy, MallikarjunaX
  0 siblings, 0 replies; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-09-01 15:03 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Rob Herring, dmaengine, devicetree, linux-kernel,
	andriy.shevchenko, cheol.yong.kim, qi-ming.wu, chuanhua.lei,
	malliamireddy009

Hi Vinod,
Thanks for the review, please see my comments inline.

On 8/31/2020 7:00 PM, Vinod Koul wrote:
> On 31-08-20, 16:06, Reddy, MallikarjunaX wrote:
>> Hi Vinod,
>>
>> Thanks for the review. Please see my comment inline.
>>
>> On 8/28/2020 6:45 PM, Vinod Koul wrote:
>>> On 27-08-20, 17:54, Reddy, MallikarjunaX wrote:
>>>> Hi Vinod,
>>>> Thanks for the review comments.
>>>>
>>>> On 8/25/2020 7:21 PM, Vinod Koul wrote:
>>>>> On 18-08-20, 15:00, Reddy, MallikarjunaX wrote:
>>>>>
>>>>>>>> +
>>>>>>>> +            intel,chans:
>>>>>>>> +              $ref: /schemas/types.yaml#/definitions/uint32-array
>>>>>>>> +              description:
>>>>>>>> +                 The channels included on this port. Format is channel start
>>>>>>>> +                 number and how many channels on this port.
>>>>>>> Why does this need to be in DT? This all seems like it can be in the dma
>>>>>>> cells for each client.
>>>>>> (*ABC)
>>>>>> Yes. We need this.
>>>>>> for dma0(lgm-cdma) old SOC supports 16 channels and the new SOC supports 22
>>>>>> channels. and the logical channel mapping for the peripherals also differ
>>>>>> b/w old and new SOCs.
>>>>>>
>>>>>> Because of this hardware limitation we are trying to configure the total
>>>>>> channels and port-channel mapping dynamically from device tree.
>>>>>>
>>>>>> based on port name we are trying to configure the default values for
>>>>>> different peripherals(ports).
>>>>>> Example: burst length is not same for all ports, so using port name to do
>>>>>> default configurations.
>>>>> Sorry that does not make sense to me, why not specify the values to be
>>>>> used here instead of defining your own name scheme!
>>>> OK. Agreed. I will remove port name from DT and only use intel,chans
>>> what is intel,chans, why not use dma-channels?
>>   The intel,chans property describes the channels included on the corresponding port.
> What do you mean by a port here?
Here a port is nothing but a peripheral (SPI, HSNAND, ...).
>
>> Format is channel start number and how many channels on this port.
> It is perfectly reasonable to have 16 channels but have Linux not use all
> of them, let's say from the 5th channel onwards.
>
> So you need to use the standard dma-channels along with dma-channel-mask
> to specify which channels Linux can use.
OK, let me verify and use the standard dma-channels along with dma-channel-mask.
>
>>   The reasoning behind using this attribute instead of the standard
>> dma-channels is...
>>
>>
>> DMA_VER22 HW supports 22 channels, but there is a hole in the HW, so in
>> total it can use only 16.
>>
>> The old SoC supports 4 ports and 16 channels.
>> The new SoC supports 6 ports and 22 channels.
>> (Both the old and the new SoC carry the same version, VER22.)
>>
>> The port-channel mapping for the old and new SoCs is also not the same.
>> Old SoC: logical channels (Rx, Tx):
>> 0, 1 - SPI0
>> 2, 3 - SPI1
>> 4, 5 - HSNAND
>> 12, 14, 13, 15 - Memcopy
>>
>> New SoC: logical channels (Rx, Tx):
>> 0, 1 - SPI0
>> 2, 3 - SPI1
>> 4, 5 - SPI2
>> 6, 7 - SPI3
>> 8, 9 - HSNAND
>> 10 to 21 - Memcopy
> The mapping being different is fine; the client can set the required
> channel in its dmas property and use that specific channel.
OK.
>
>> For these reasons we are trying to use the "intel,chans" attribute and
>> read the number of channels from the DT.
>> Advantages:
>> 1. We can map the channels corresponding to each port.
>> 2. We can dynamically configure the channels (due to the HW limitation).
>>
>> If this is not OK, please suggest a better way to handle this.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-08-18  7:00     ` Reddy, MallikarjunaX
  2020-08-25 11:21       ` Vinod Koul
@ 2020-09-04  6:31       ` Peter Ujfalusi
  2020-09-07  7:25         ` Reddy, MallikarjunaX
  1 sibling, 1 reply; 23+ messages in thread
From: Peter Ujfalusi @ 2020-09-04  6:31 UTC (permalink / raw)
  To: Reddy, MallikarjunaX, Rob Herring
  Cc: dmaengine, vkoul, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, qi-ming.wu, chuanhua.lei, malliamireddy009



On 18/08/2020 10.00, Reddy, MallikarjunaX wrote:
> Hi Rob,
> Thanks for your valuable comments. Please see my comments inline..
> 
> On 8/15/2020 4:32 AM, Rob Herring wrote:
>> On Fri, Aug 14, 2020 at 01:26:09PM +0800, Amireddy Mallikarjuna reddy
>> wrote:
>>> Add DT bindings YAML schema for DMA controller driver
>>> of Lightning Mountain(LGM) SoC.
>>>
>>> Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
>>> ---
>>> v1:
>>> - Initial version.
>>>
>>> v2:
>>> - Fix bot errors.
>>>
>>> v3:
>>> - No change.
>>>
>>> v4:
>>> - Address Thomas langer comments
>>>    - use node name pattern as dma-controller as in common binding.
>>>    - Remove "_" (underscore) in instance name.
>>>    - Remove "port-" and "chan-" in attribute name for both
>>> 'dma-ports' & 'dma-channels' child nodes.
>>>
>>> v5:
>>> - Moved some of the attributes in 'dma-ports' & 'dma-channels' child nodes to dma client/consumer side as cells in 'dmas' properties.
>>> ---
>>>   .../devicetree/bindings/dma/intel,ldma.yaml        | 319 +++++++++++++++++++++
>>>   1 file changed, 319 insertions(+)
>>>   create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>>
>>> diff --git a/Documentation/devicetree/bindings/dma/intel,ldma.yaml b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>> new file mode 100644
>>> index 000000000000..9beaf191a6de
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>> @@ -0,0 +1,319 @@
>>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>>> +%YAML 1.2
>>> +---
>>> +$id: http://devicetree.org/schemas/dma/intel,ldma.yaml#
>>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>>> +
>>> +title: Lightning Mountain centralized low speed DMA and high speed DMA controllers.
>>> +
>>> +maintainers:
>>> +  - chuanhua.lei@intel.com
>>> +  - mallikarjunax.reddy@intel.com
>>> +
>>> +allOf:
>>> +  - $ref: "dma-controller.yaml#"
>>> +
>>> +properties:
>>> + $nodename:
>>> +   pattern: "^dma-controller(@.*)?$"
>>> +
>>> + "#dma-cells":
>>> +   const: 1
>> Example says 3.
> OK, I will fix it.

It would help if you added a description of the meaning of each
individual cell.

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-08-31  8:07             ` Reddy, MallikarjunaX
@ 2020-09-04  6:31               ` Peter Ujfalusi
  2020-09-07  7:32                 ` Reddy, MallikarjunaX
  0 siblings, 1 reply; 23+ messages in thread
From: Peter Ujfalusi @ 2020-09-04  6:31 UTC (permalink / raw)
  To: Reddy, MallikarjunaX, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009



On 31/08/2020 11.07, Reddy, MallikarjunaX wrote:
> 
> On 8/28/2020 7:17 PM, Peter Ujfalusi wrote:
>> Hi,
>>
>> On 27/08/2020 17.41, Reddy, MallikarjunaX wrote:
>>>>>>> +
>>>>>>> +    dma_dev->device_alloc_chan_resources =
>>>>>>> +        d->inst->ops->device_alloc_chan_resources;
>>>>>>> +    dma_dev->device_free_chan_resources =
>>>>>>> +        d->inst->ops->device_free_chan_resources;
>>>>>>> +    dma_dev->device_terminate_all = d->inst->ops->device_terminate_all;
>>>>>>> +    dma_dev->device_issue_pending = d->inst->ops->device_issue_pending;
>>>>>>> +    dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>>>>>>> +    dma_dev->device_resume = d->inst->ops->device_resume;
>>>>>>> +    dma_dev->device_pause = d->inst->ops->device_pause;
>>>>>>> +    dma_dev->device_config = d->inst->ops->device_config;
>>>>>>> +    dma_dev->device_prep_slave_sg = d->inst->ops->device_prep_slave_sg;
>>>>>>> +    dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>>>>>>> +
>>>>>>> +    if (d->ver == DMA_VER22) {
>>>>>>> +        dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>>>> +        dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>>>> +        dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>>>>>>> +                      BIT(DMA_DEV_TO_MEM);
>>>>>>> +        dma_dev->residue_granularity =
>>>>>>> +                    DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>>>>>>> +    }
>>>>>> So, if version is != DMA_VER22, then you don't support any direction?
>>>>>> Why register the DMA device if it can not do any transfer?
>>>>> Only dma0 instance (intel,lgm-cdma) is used as a general purpose slave
>>>>> DMA. we set both control and datapath here.
>>>>> Other instances we set only control path. data path is taken care
>>>>> by dma
>>>>> client(GSWIP).
>>>> How the client (GSWIP) can request a channel from intel,lgm-* ? Don't
>>>> you need some capabilities for the DMA device so core can sort out the
>>>> request?
>>> client request channel by name, dma_request_slave_channel(dev, name);
>> clients should use dma_request_chan(dev, name);
>>
>> If the channel can be requested via DT or ACPI then we don't check the
>> capabilities at all, so yes, that could work.
>>
>>>>> Only thing needs to do is get the channel, set the descriptor and just
>>>>> on the channel.
>>>> How do you 'on' the channel?
>>> we on the channel in issue_pending.
>> Right.
>> Basically you only prep_slave_sg/single for the DMA_VER22? Or do you
>> that for the others w/o direction support?
>
> Yes. prep_slave_sg/single only for the DMA_VER22.

So, you only place transfers on DMA_VER22's channels.

>>
>> For the intel,lgm-* DMAs you only call issue_pending() and probably
>> terminate_all?
> Yes, correct.

and issue_pending on a channel which does not have anything pending?
How does the client know which of the 'only needs to be ON' channels to
enable when it has placed a transfer on another channel?

How would the client driver (and the IP) be integrated with a different
system DMA which has standard channel management?

If DMA_VER22 knows which intel,lgm-* it should place the transfer on, then
with virt-dma you can handle that just fine?

Do you have public documentation for the DMA? It sounds a bit like TI's
EDMA, which in essence is a two-part setup:
CC: Channel Controller - to submit transfers (like your DMA_VER22)
TC: Transfer Controller - it executes the transfers submitted by the CC

One CC can be backed by multiple TCs. I don't have direct control over
the TC (cannot start/stop it); it is controlled by the CC.

Documentation/devicetree/bindings/dma/ti-edma.txt

Or is it a different setup?

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC
  2020-09-04  6:31       ` Peter Ujfalusi
@ 2020-09-07  7:25         ` Reddy, MallikarjunaX
  0 siblings, 0 replies; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-09-07  7:25 UTC (permalink / raw)
  To: Peter Ujfalusi, Rob Herring
  Cc: dmaengine, vkoul, devicetree, linux-kernel, andriy.shevchenko,
	cheol.yong.kim, qi-ming.wu, chuanhua.lei, malliamireddy009


On 9/4/2020 2:31 PM, Peter Ujfalusi wrote:
>
> On 18/08/2020 10.00, Reddy, MallikarjunaX wrote:
>> Hi Rob,
>> Thanks for your valuable comments. Please see my comments inline..
>>
>> On 8/15/2020 4:32 AM, Rob Herring wrote:
>>> On Fri, Aug 14, 2020 at 01:26:09PM +0800, Amireddy Mallikarjuna reddy
>>> wrote:
>>>> Add DT bindings YAML schema for DMA controller driver
>>>> of Lightning Mountain(LGM) SoC.
>>>>
>>>> Signed-off-by: Amireddy Mallikarjuna reddy <mallikarjunax.reddy@linux.intel.com>
>>>> ---
>>>> v1:
>>>> - Initial version.
>>>>
>>>> v2:
>>>> - Fix bot errors.
>>>>
>>>> v3:
>>>> - No change.
>>>>
>>>> v4:
>>>> - Address Thomas langer comments
>>>>     - use node name pattern as dma-controller as in common binding.
>>>>     - Remove "_" (underscore) in instance name.
>>>>     - Remove "port-" and "chan-" in attribute name for both
>>>> 'dma-ports' & 'dma-channels' child nodes.
>>>>
>>>> v5:
>>>> - Moved some of the attributes in 'dma-ports' & 'dma-channels' child nodes to dma client/consumer side as cells in 'dmas' properties.
>>>> ---
>>>>    .../devicetree/bindings/dma/intel,ldma.yaml        | 319 +++++++++++++++++++++
>>>>    1 file changed, 319 insertions(+)
>>>>    create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>>>
>>>> diff --git a/Documentation/devicetree/bindings/dma/intel,ldma.yaml b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>>> new file mode 100644
>>>> index 000000000000..9beaf191a6de
>>>> --- /dev/null
>>>> +++ b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
>>>> @@ -0,0 +1,319 @@
>>>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>>>> +%YAML 1.2
>>>> +---
>>>> +$id: http://devicetree.org/schemas/dma/intel,ldma.yaml#
>>>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>>>> +
>>>> +title: Lightning Mountain centralized low speed DMA and high speed DMA controllers.
>>>> +
>>>> +maintainers:
>>>> +  - chuanhua.lei@intel.com
>>>> +  - mallikarjunax.reddy@intel.com
>>>> +
>>>> +allOf:
>>>> +  - $ref: "dma-controller.yaml#"
>>>> +
>>>> +properties:
>>>> + $nodename:
>>>> +   pattern: "^dma-controller(@.*)?$"
>>>> +
>>>> + "#dma-cells":
>>>> +   const: 1
>>> Example says 3.
>> OK, I will fix it.
> It would help if you added a description of the meaning of each
> individual cell.
I had already prepared the patch addressing the previous comments, and
just before sending it I received your review comment. :-)

Let me edit it, include the description, and prepare the patch again.
>
> - Péter
>
> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v5 2/2] Add Intel LGM soc DMA support.
  2020-09-04  6:31               ` Peter Ujfalusi
@ 2020-09-07  7:32                 ` Reddy, MallikarjunaX
  0 siblings, 0 replies; 23+ messages in thread
From: Reddy, MallikarjunaX @ 2020-09-07  7:32 UTC (permalink / raw)
  To: Peter Ujfalusi, dmaengine, vkoul, devicetree, robh+dt
  Cc: linux-kernel, andriy.shevchenko, cheol.yong.kim, qi-ming.wu,
	chuanhua.lei, malliamireddy009

Hi Peter,
Thanks for the review, please see my comments inline.

On 9/4/2020 2:31 PM, Peter Ujfalusi wrote:
>
> On 31/08/2020 11.07, Reddy, MallikarjunaX wrote:
>> On 8/28/2020 7:17 PM, Peter Ujfalusi wrote:
>>> Hi,
>>>
>>> On 27/08/2020 17.41, Reddy, MallikarjunaX wrote:
>>>>>>>> +
>>>>>>>> +    dma_dev->device_alloc_chan_resources =
>>>>>>>> +        d->inst->ops->device_alloc_chan_resources;
>>>>>>>> +    dma_dev->device_free_chan_resources =
>>>>>>>> +        d->inst->ops->device_free_chan_resources;
>>>>>>>> +    dma_dev->device_terminate_all =
>>>>>>>> d->inst->ops->device_terminate_all;
>>>>>>>> +    dma_dev->device_issue_pending =
>>>>>>>> d->inst->ops->device_issue_pending;
>>>>>>>> +    dma_dev->device_tx_status = d->inst->ops->device_tx_status;
>>>>>>>> +    dma_dev->device_resume = d->inst->ops->device_resume;
>>>>>>>> +    dma_dev->device_pause = d->inst->ops->device_pause;
>>>>>>>> +    dma_dev->device_config = d->inst->ops->device_config;
>>>>>>>> +    dma_dev->device_prep_slave_sg =
>>>>>>>> d->inst->ops->device_prep_slave_sg;
>>>>>>>> +    dma_dev->device_synchronize = d->inst->ops->device_synchronize;
>>>>>>>> +
>>>>>>>> +    if (d->ver == DMA_VER22) {
>>>>>>>> +        dma_dev->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>>>>> +        dma_dev->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
>>>>>>>> +        dma_dev->directions = BIT(DMA_MEM_TO_DEV) |
>>>>>>>> +                      BIT(DMA_DEV_TO_MEM);
>>>>>>>> +        dma_dev->residue_granularity =
>>>>>>>> +                    DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
>>>>>>>> +    }
>>>>>>> So, if version is != DMA_VER22, then you don't support any direction?
>>>>>>> Why register the DMA device if it cannot do any transfer?
>>>>>> Only the dma0 instance (intel,lgm-cdma) is used as a general purpose
>>>>>> slave DMA; we set up both the control and data paths there.
>>>>>> For the other instances we set up only the control path; the data
>>>>>> path is taken care of by the DMA client (GSWIP).
>>>>> How can the client (GSWIP) request a channel from intel,lgm-*? Don't
>>>>> you need some capabilities on the DMA device so the core can sort out
>>>>> the request?
>>>> The client requests a channel by name: dma_request_slave_channel(dev, name);
>>> clients should use dma_request_chan(dev, name);
>>>
>>> If the channel can be requested via DT or ACPI then we don't check the
>>> capabilities at all, so yes, that could work.
>>>
>>>>>> The only thing the client needs to do is get the channel, set the
>>>>>> descriptor and just 'on' the channel.
>>>>> How do you 'on' the channel?
>>>> We turn the channel 'on' in issue_pending.
>>> Right.
>>> Basically you only do prep_slave_sg/single for DMA_VER22? Or do you do
>>> that for the others without direction support?
>> Yes, prep_slave_sg/single is only for DMA_VER22.
> So, you place the transfers on DMA_VER22's channel
>
>>> For the intel,lgm-* DMAs you only call issue_pending() and probably
>>> terminate_all?
>> Yes, correct.
> and issue_pending on a channel which does not have anything pending?
> How does the client know which of the 'only need to be ON' channels to
> enable when it has placed a transfer on another channel?
>
> How would the client driver (and the IP) be integrated with a different
> system DMA which has standard channel management?
>
> If DMA_VER22 knows on which intel,lgm-* it should place the transfer,
> then you can handle that just fine with virt-dma?
From the version point of view, the "intel,lgm-cdma" DMA instance
supports DMA_VER22, and the other "intel,lgm-*" DMA instances support
versions newer than DMA_VER22.

The channels registered with the kernel from the "intel,lgm-cdma" DMA
instance are used as general purpose DMA; the clients are SPI,
ebu_nand, etc. These clients use the standard dmaengine calls
dma_request_chan(), dmaengine_slave_config(), dmaengine_prep_slave_sg()
and dma_async_issue_pending().
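
For reference, a minimal sketch of such a DMA_VER22 client is shown
below. The function name, the channel name "rx" and the FIFO address
are illustrative assumptions, not taken from any actual client driver;
only the dmaengine call sequence comes from the description above.

#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

/* Hypothetical DMA_VER22 slave client (e.g. an SPI or NAND driver). */
static int client_start_rx(struct device *dev, struct scatterlist *sgl,
			   unsigned int sg_len, dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction	= DMA_DEV_TO_MEM,
		.src_addr	= fifo_addr,
		/* DMA_VER22 only advertises a 4-byte bus width */
		.src_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;
	int ret;

	chan = dma_request_chan(dev, "rx");	/* name from 'dma-names' */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		goto release;

	desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
				       DMA_PREP_INTERRUPT);
	if (!desc) {
		ret = -EINVAL;
		goto release;
	}

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);	/* the channel is turned on here */
	return 0;

release:
	dma_release_channel(chan);
	return ret;
}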

For the other DMA instances the clients are GSWIP and CQM (Central
Queue Manager), which use the DMA in their packet processing.
The instances dma2tx, dma1rx, dma1tx, dma0tx and dma3 have 16, 8, 16,
16 and 16 channels respectively, and these channels are used by GSWIP
and CQM as ingress (IGP) and egress (EGP) ports. Each DMA port uses
one DMA channel; this is a one-to-one mapping.

GSWIP and CQM talk to the DMA controller driver via dmaengine to get
the DMA channels, using dma_request_slave_channel(). They configure
the descriptor base address using intel_dma_chan_desc_cfg(), an API
exported by the DMA driver (this descriptor base address is the
register address (descriptor) of the IGP/EGP), and turn the channel
on using dma_async_issue_pending().
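
A rough sketch of that control-path sequence follows; the channel name
"igp0", the desc_base/desc_num parameters and the exact
intel_dma_chan_desc_cfg() signature are assumptions based on this
discussion, not a confirmed client implementation.

#include <linux/dmaengine.h>
#include <linux/dma/lgm_dma.h>

/* Hypothetical GSWIP/CQM-style client: control path only. */
static int cqm_enable_igp_port(struct device *dev, dma_addr_t desc_base,
			       int desc_num)
{
	struct dma_chan *chan;
	int ret;

	/* one channel per ingress/egress port (one-to-one mapping) */
	chan = dma_request_slave_channel(dev, "igp0");
	if (!chan)
		return -ENODEV;

	/*
	 * Point the channel at the port's own descriptor registers;
	 * the data path itself is driven by the GSWIP/CQM hardware.
	 */
	ret = intel_dma_chan_desc_cfg(chan, desc_base, desc_num);
	if (ret)
		return ret;

	/* nothing is queued here; this just turns the channel ON */
	dma_async_issue_pending(chan);
	return 0;
}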

GSWIP and CQM are highly register-programmable at a low level and take
care of the packet processing through register configuration.
>
> Do you have public documentation for the DMA?
Unfortunately we don't have public documentation for this DMA IP.
> It sounds a bit like TI's EDMA, which is in essence a two-part setup:
> CC: Channel Controller - to submit transfers (like your DMA_VER22)
> TC: Transfer Controller - it executes the transfers submitted by the CC
>
> One CC can be backed by multiple TCs. I don't have direct control over
> the TC (I cannot start/stop it); it is controlled by the CC.
>
> Documentation/devicetree/bindings/dma/ti-edma.txt
>
> Or is it a different setup?
This is a different setup, as explained above. Transfer control is
directly controlled by the client.
>
> - Péter
>
> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2020-09-07  7:32 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-14  5:26 [PATCH v5 0/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
2020-08-14  5:26 ` [PATCH v5 1/2] dt-bindings: dma: Add bindings for intel LGM SOC Amireddy Mallikarjuna reddy
2020-08-14 20:32   ` Rob Herring
2020-08-18  7:00     ` Reddy, MallikarjunaX
2020-08-25 11:21       ` Vinod Koul
2020-08-27  9:54         ` Reddy, MallikarjunaX
2020-08-28 10:45           ` Vinod Koul
2020-08-31  8:06             ` Reddy, MallikarjunaX
2020-08-31 11:00               ` Vinod Koul
2020-09-01 15:03                 ` Reddy, MallikarjunaX
2020-09-04  6:31       ` Peter Ujfalusi
2020-09-07  7:25         ` Reddy, MallikarjunaX
2020-08-14  5:26 ` [PATCH v5 2/2] Add Intel LGM soc DMA support Amireddy Mallikarjuna reddy
2020-08-14  8:47   ` kernel test robot
2020-08-18 10:16   ` Peter Ujfalusi
2020-08-18 10:29     ` Peter Ujfalusi
2020-08-24  2:30     ` Reddy, MallikarjunaX
2020-08-24 11:24       ` Peter Ujfalusi
2020-08-27 14:41         ` Reddy, MallikarjunaX
2020-08-28 11:17           ` Peter Ujfalusi
2020-08-31  8:07             ` Reddy, MallikarjunaX
2020-09-04  6:31               ` Peter Ujfalusi
2020-09-07  7:32                 ` Reddy, MallikarjunaX
