* [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller
@ 2020-12-15 11:09 Dongjiu Geng
  2020-12-15 11:09 ` [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings Dongjiu Geng
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Dongjiu Geng @ 2020-12-15 11:09 UTC (permalink / raw)
  To: mturquette, sboyd, robh+dt, vkoul, dan.j.williams, p.zabel,
	linux-clk, devicetree, linux-kernel, dmaengine, gengdongjiu

v6->v7:
1. Rename hisi,misc-control to hisilicon,misc-control

v5->v6:
1. Drop #size-cells and #address-cells in the hisilicon,hi3559av100-clock.yaml
2. Add a description for #reset-cells in the hisilicon,hi3559av100-clock.yaml
3. Remove #clock-cells in hisilicon,hiedmacv310.yaml
4. Merge the misc_ctrl_base and misc_regmap properties together for the hiedmacv310 driver

v4->v5:
1. Change the patch author's email name

v3->v4:
1. Fix the 'make dt_binding_check' issues.
2. Combine the 'Enable HiSilicon Hiedma Controller' series patches into this series.
3. Fix the 'make dt_binding_check' issues in the 'Enable HiSilicon Hiedma Controller' patchset.

v2->v3:
1. Change dt-bindings documents from txt to yaml format.
2. Add SHUB clock to access the devices of m7

Dongjiu Geng (4):
  dt-bindings: Document the hi3559a clock bindings
  clk: hisilicon: Add clock driver for hi3559A SoC
  dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller
  dmaengine: dma: Add Hiedma Controller v310 Device Driver

 .../clock/hisilicon,hi3559av100-clock.yaml    |   59 +
 .../bindings/dma/hisilicon,hiedmacv310.yaml   |   94 ++
 drivers/clk/hisilicon/Kconfig                 |    7 +
 drivers/clk/hisilicon/Makefile                |    1 +
 drivers/clk/hisilicon/clk-hi3559a.c           |  865 ++++++++++
 drivers/dma/Kconfig                           |   14 +
 drivers/dma/Makefile                          |    1 +
 drivers/dma/hiedmacv310.c                     | 1442 +++++++++++++++++
 drivers/dma/hiedmacv310.h                     |  136 ++
 include/dt-bindings/clock/hi3559av100-clock.h |  165 ++
 10 files changed, 2784 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
 create mode 100644 Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml
 create mode 100644 drivers/clk/hisilicon/clk-hi3559a.c
 create mode 100644 drivers/dma/hiedmacv310.c
 create mode 100644 drivers/dma/hiedmacv310.h
 create mode 100644 include/dt-bindings/clock/hi3559av100-clock.h

-- 
2.17.1



* [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings
  2020-12-15 11:09 [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Dongjiu Geng
@ 2020-12-15 11:09 ` Dongjiu Geng
  2020-12-21 18:54   ` Rob Herring
  2020-12-15 11:09 ` [PATCH v7 2/4] clk: hisilicon: Add clock driver for hi3559A SoC Dongjiu Geng
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Dongjiu Geng @ 2020-12-15 11:09 UTC (permalink / raw)
  To: mturquette, sboyd, robh+dt, vkoul, dan.j.williams, p.zabel,
	linux-clk, devicetree, linux-kernel, dmaengine, gengdongjiu

Add DT bindings documentation for hi3559a SoC clock.
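
For reference, a peripheral driver would consume one of these clocks
through the common clk API. A minimal sketch, assuming the consumer's
DT node references the controller with one of the HI3559AV100_* indices
from the new header (device and clock choice are illustrative only):

  /*
   * Illustrative consumer sketch: assumes the node carries
   * clocks = <&clock HI3559AV100_I2C0_CLK>; the index comes from
   * include/dt-bindings/clock/hi3559av100-clock.h added here.
   */
  #include <linux/clk.h>
  #include <linux/platform_device.h>

  static int example_probe(struct platform_device *pdev)
  {
          struct clk *clk;
          int ret;

          clk = devm_clk_get(&pdev->dev, NULL);   /* first "clocks" entry */
          if (IS_ERR(clk))
                  return PTR_ERR(clk);

          ret = clk_prepare_enable(clk);          /* ungate the clock */
          if (ret)
                  return ret;

          /* ... use the device; clk_disable_unprepare() on remove ... */
          return 0;
  }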

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
---
 .../clock/hisilicon,hi3559av100-clock.yaml    |  59 +++++++
 include/dt-bindings/clock/hi3559av100-clock.h | 165 ++++++++++++++++++
 2 files changed, 224 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
 create mode 100644 include/dt-bindings/clock/hi3559av100-clock.h

diff --git a/Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml b/Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
new file mode 100644
index 000000000000..3ceb29cec704
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/clock/hisilicon,hi3559av100-clock.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Hisilicon SOC Clock for HI3559AV100
+
+maintainers:
+  - Dongjiu Geng <gengdongjiu@huawei.com>
+
+description: |
+  Hisilicon SOC clock control module which supports the clocks, resets and
+  power domains on HI3559AV100.
+
+  See also:
+    dt-bindings/clock/hi3559av100-clock.h
+
+properties:
+  compatible:
+    enum:
+      - hisilicon,hi3559av100-clock
+      - hisilicon,hi3559av100-shub-clock
+
+  reg:
+    minItems: 1
+    maxItems: 2
+
+  '#clock-cells':
+    const: 1
+
+  '#reset-cells':
+    const: 2
+    description: |
+      First cell is reset request register offset.
+      Second cell is bit offset in reset request register.
+
+required:
+  - compatible
+  - reg
+  - '#clock-cells'
+  - '#reset-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        clock-controller@12010000 {
+            compatible = "hisilicon,hi3559av100-clock";
+            #clock-cells = <1>;
+            #reset-cells = <2>;
+            reg = <0x0 0x12010000 0x0 0x10000>;
+        };
+    };
+...
diff --git a/include/dt-bindings/clock/hi3559av100-clock.h b/include/dt-bindings/clock/hi3559av100-clock.h
new file mode 100644
index 000000000000..5fe7689010a0
--- /dev/null
+++ b/include/dt-bindings/clock/hi3559av100-clock.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */
+/*
+ * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
+ *
+ * Author: Dongjiu Geng <gengdongjiu@huawei.com>
+ */
+
+#ifndef __DTS_HI3559AV100_CLOCK_H
+#define __DTS_HI3559AV100_CLOCK_H
+
+/*  fixed   rate    */
+#define HI3559AV100_FIXED_1188M     1
+#define HI3559AV100_FIXED_1000M     2
+#define HI3559AV100_FIXED_842M      3
+#define HI3559AV100_FIXED_792M      4
+#define HI3559AV100_FIXED_750M      5
+#define HI3559AV100_FIXED_710M      6
+#define HI3559AV100_FIXED_680M      7
+#define HI3559AV100_FIXED_667M      8
+#define HI3559AV100_FIXED_631M      9
+#define HI3559AV100_FIXED_600M      10
+#define HI3559AV100_FIXED_568M      11
+#define HI3559AV100_FIXED_500M      12
+#define HI3559AV100_FIXED_475M      13
+#define HI3559AV100_FIXED_428M      14
+#define HI3559AV100_FIXED_400M      15
+#define HI3559AV100_FIXED_396M      16
+#define HI3559AV100_FIXED_300M      17
+#define HI3559AV100_FIXED_250M      18
+#define HI3559AV100_FIXED_198M      19
+#define HI3559AV100_FIXED_187p5M    20
+#define HI3559AV100_FIXED_150M      21
+#define HI3559AV100_FIXED_148p5M    22
+#define HI3559AV100_FIXED_125M      23
+#define HI3559AV100_FIXED_107M      24
+#define HI3559AV100_FIXED_100M      25
+#define HI3559AV100_FIXED_99M       26
+#define HI3559AV100_FIXED_74p25M    27
+#define HI3559AV100_FIXED_72M       28
+#define HI3559AV100_FIXED_60M       29
+#define HI3559AV100_FIXED_54M       30
+#define HI3559AV100_FIXED_50M       31
+#define HI3559AV100_FIXED_49p5M     32
+#define HI3559AV100_FIXED_37p125M   33
+#define HI3559AV100_FIXED_36M       34
+#define HI3559AV100_FIXED_32p4M     35
+#define HI3559AV100_FIXED_27M       36
+#define HI3559AV100_FIXED_25M       37
+#define HI3559AV100_FIXED_24M       38
+#define HI3559AV100_FIXED_12M       39
+#define HI3559AV100_FIXED_3M        40
+#define HI3559AV100_FIXED_1p6M      41
+#define HI3559AV100_FIXED_400K      42
+#define HI3559AV100_FIXED_100K      43
+#define HI3559AV100_FIXED_200M      44
+#define HI3559AV100_FIXED_75M       75
+
+#define HI3559AV100_I2C0_CLK    50
+#define HI3559AV100_I2C1_CLK    51
+#define HI3559AV100_I2C2_CLK    52
+#define HI3559AV100_I2C3_CLK    53
+#define HI3559AV100_I2C4_CLK    54
+#define HI3559AV100_I2C5_CLK    55
+#define HI3559AV100_I2C6_CLK    56
+#define HI3559AV100_I2C7_CLK    57
+#define HI3559AV100_I2C8_CLK    58
+#define HI3559AV100_I2C9_CLK    59
+#define HI3559AV100_I2C10_CLK   60
+#define HI3559AV100_I2C11_CLK   61
+
+#define HI3559AV100_SPI0_CLK    62
+#define HI3559AV100_SPI1_CLK    63
+#define HI3559AV100_SPI2_CLK    64
+#define HI3559AV100_SPI3_CLK    65
+#define HI3559AV100_SPI4_CLK    66
+#define HI3559AV100_SPI5_CLK    67
+#define HI3559AV100_SPI6_CLK    68
+
+#define HI3559AV100_EDMAC_CLK     69
+#define HI3559AV100_EDMAC_AXICLK  70
+#define HI3559AV100_EDMAC1_CLK    71
+#define HI3559AV100_EDMAC1_AXICLK 72
+#define HI3559AV100_VDMAC_CLK     73
+
+/*  mux clocks  */
+#define HI3559AV100_FMC_MUX     80
+#define HI3559AV100_SYSAPB_MUX  81
+#define HI3559AV100_UART_MUX    82
+#define HI3559AV100_SYSBUS_MUX  83
+#define HI3559AV100_A73_MUX     84
+#define HI3559AV100_MMC0_MUX    85
+#define HI3559AV100_MMC1_MUX    86
+#define HI3559AV100_MMC2_MUX    87
+#define HI3559AV100_MMC3_MUX    88
+
+/*  gate    clocks  */
+#define HI3559AV100_FMC_CLK     90
+#define HI3559AV100_UART0_CLK   91
+#define HI3559AV100_UART1_CLK   92
+#define HI3559AV100_UART2_CLK   93
+#define HI3559AV100_UART3_CLK   94
+#define HI3559AV100_UART4_CLK   95
+#define HI3559AV100_MMC0_CLK    96
+#define HI3559AV100_MMC1_CLK    97
+#define HI3559AV100_MMC2_CLK    98
+#define HI3559AV100_MMC3_CLK    99
+
+#define HI3559AV100_ETH_CLK         100
+#define HI3559AV100_ETH_MACIF_CLK   101
+#define HI3559AV100_ETH1_CLK        102
+#define HI3559AV100_ETH1_MACIF_CLK  103
+
+/*  complex */
+#define HI3559AV100_MAC0_CLK                110
+#define HI3559AV100_MAC1_CLK                111
+#define HI3559AV100_SATA_CLK                112
+#define HI3559AV100_USB_CLK                 113
+#define HI3559AV100_USB1_CLK                114
+
+/* pll clocks */
+#define HI3559AV100_APLL_CLK                250
+#define HI3559AV100_GPLL_CLK                251
+
+#define HI3559AV100_CRG_NR_CLKS	            256
+
+#define HI3559AV100_SHUB_SOURCE_SOC_24M	    0
+#define HI3559AV100_SHUB_SOURCE_SOC_200M    1
+#define HI3559AV100_SHUB_SOURCE_SOC_300M    2
+#define HI3559AV100_SHUB_SOURCE_PLL         3
+#define HI3559AV100_SHUB_SOURCE_CLK         4
+
+#define HI3559AV100_SHUB_I2C0_CLK           10
+#define HI3559AV100_SHUB_I2C1_CLK           11
+#define HI3559AV100_SHUB_I2C2_CLK           12
+#define HI3559AV100_SHUB_I2C3_CLK           13
+#define HI3559AV100_SHUB_I2C4_CLK           14
+#define HI3559AV100_SHUB_I2C5_CLK           15
+#define HI3559AV100_SHUB_I2C6_CLK           16
+#define HI3559AV100_SHUB_I2C7_CLK           17
+
+#define HI3559AV100_SHUB_SPI_SOURCE_CLK     20
+#define HI3559AV100_SHUB_SPI4_SOURCE_CLK    21
+#define HI3559AV100_SHUB_SPI0_CLK           22
+#define HI3559AV100_SHUB_SPI1_CLK           23
+#define HI3559AV100_SHUB_SPI2_CLK           24
+#define HI3559AV100_SHUB_SPI3_CLK           25
+#define HI3559AV100_SHUB_SPI4_CLK           26
+
+#define HI3559AV100_SHUB_UART_CLK_32K       30
+#define HI3559AV100_SHUB_UART_SOURCE_CLK    31
+#define HI3559AV100_SHUB_UART_DIV_CLK       32
+#define HI3559AV100_SHUB_UART0_CLK          33
+#define HI3559AV100_SHUB_UART1_CLK          34
+#define HI3559AV100_SHUB_UART2_CLK          35
+#define HI3559AV100_SHUB_UART3_CLK          36
+#define HI3559AV100_SHUB_UART4_CLK          37
+#define HI3559AV100_SHUB_UART5_CLK          38
+#define HI3559AV100_SHUB_UART6_CLK          39
+
+#define HI3559AV100_SHUB_EDMAC_CLK          40
+
+#define HI3559AV100_SHUB_NR_CLKS            50
+
+#endif  /* __DTS_HI3559AV100_CLOCK_H */
+
-- 
2.17.1



* [PATCH v7 2/4] clk: hisilicon: Add clock driver for hi3559A SoC
  2020-12-15 11:09 [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Dongjiu Geng
  2020-12-15 11:09 ` [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings Dongjiu Geng
@ 2020-12-15 11:09 ` Dongjiu Geng
  2021-01-12 19:48   ` Stephen Boyd
  2020-12-15 11:09 ` [PATCH v7 3/4] dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller Dongjiu Geng
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Dongjiu Geng @ 2020-12-15 11:09 UTC (permalink / raw)
  To: mturquette, sboyd, robh+dt, vkoul, dan.j.williams, p.zabel,
	linux-clk, devicetree, linux-kernel, dmaengine, gengdongjiu

Add a clock driver for the hi3559A SoC. This driver
controls the SoC registers to supply different
clocks to different IPs in the SoC.
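
The PLL rates reported by the driver follow the usual fractional-PLL
relation used in clk_pll_recalc_rate(). A standalone sketch of that
arithmetic (24 MHz reference as in the code; simplified and for
illustration only, not driver code):

  /*
   * rate = 24 MHz * (fbdiv + frac / 2^24) / (refdiv * postdiv1 * postdiv2)
   * Mirrors the computation in clk_pll_recalc_rate().
   */
  #include <linux/math64.h>

  static u64 hi3559av100_pll_rate(u32 fbdiv, u32 frac, u32 refdiv,
                                  u32 postdiv1, u32 postdiv2)
  {
          u64 rate = 24000000ULL * fbdiv +
                     div_u64(24000000ULL * frac, 1 << 24);

          rate = div_u64(rate, refdiv);
          return div_u64(rate, postdiv1 * postdiv2);
  }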

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
---
 drivers/clk/hisilicon/Kconfig       |   7 +
 drivers/clk/hisilicon/Makefile      |   1 +
 drivers/clk/hisilicon/clk-hi3559a.c | 865 ++++++++++++++++++++++++++++
 3 files changed, 873 insertions(+)
 create mode 100644 drivers/clk/hisilicon/clk-hi3559a.c

diff --git a/drivers/clk/hisilicon/Kconfig b/drivers/clk/hisilicon/Kconfig
index 6a9e93a0bb95..5ecc37aaa118 100644
--- a/drivers/clk/hisilicon/Kconfig
+++ b/drivers/clk/hisilicon/Kconfig
@@ -15,6 +15,13 @@ config COMMON_CLK_HI3519
 	help
 	  Build the clock driver for hi3519.
 
+config COMMON_CLK_HI3559A
+	bool "Hi3559A Clock Driver"
+	depends on ARCH_HISI || COMPILE_TEST
+	default ARCH_HISI
+	help
+	  Build the clock driver for hi3559a.
+
 config COMMON_CLK_HI3660
 	bool "Hi3660 Clock Driver"
 	depends on ARCH_HISI || COMPILE_TEST
diff --git a/drivers/clk/hisilicon/Makefile b/drivers/clk/hisilicon/Makefile
index b2441b99f3d5..bc101833b35e 100644
--- a/drivers/clk/hisilicon/Makefile
+++ b/drivers/clk/hisilicon/Makefile
@@ -17,3 +17,4 @@ obj-$(CONFIG_COMMON_CLK_HI6220)	+= clk-hi6220.o
 obj-$(CONFIG_RESET_HISI)	+= reset.o
 obj-$(CONFIG_STUB_CLK_HI6220)	+= clk-hi6220-stub.o
 obj-$(CONFIG_STUB_CLK_HI3660)	+= clk-hi3660-stub.o
+obj-$(CONFIG_COMMON_CLK_HI3559A)	+= clk-hi3559a.o
diff --git a/drivers/clk/hisilicon/clk-hi3559a.c b/drivers/clk/hisilicon/clk-hi3559a.c
new file mode 100644
index 000000000000..d7693e488006
--- /dev/null
+++ b/drivers/clk/hisilicon/clk-hi3559a.c
@@ -0,0 +1,865 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Hisilicon Hi3559A clock driver
+ *
+ * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
+ *
+ * Author: Dongjiu Geng <gengdongjiu@huawei.com>
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <dt-bindings/clock/hi3559av100-clock.h>
+
+#include "clk.h"
+#include "crg.h"
+#include "reset.h"
+
+#define CRG_BASE_ADDR  0x18020000
+
+struct hi3559av100_pll_clock {
+	u32	id;
+	const char  *name;
+	const char  *parent_name;
+	u32	ctrl_reg1;
+	u8	frac_shift;
+	u8	frac_width;
+	u8	postdiv1_shift;
+	u8	postdiv1_width;
+	u8	postdiv2_shift;
+	u8	postdiv2_width;
+	u32	ctrl_reg2;
+	u8	fbdiv_shift;
+	u8	fbdiv_width;
+	u8	refdiv_shift;
+	u8	refdiv_width;
+};
+
+struct hi3559av100_clk_pll {
+	struct clk_hw	hw;
+	u32	id;
+	void __iomem	*ctrl_reg1;
+	u8	frac_shift;
+	u8	frac_width;
+	u8	postdiv1_shift;
+	u8	postdiv1_width;
+	u8	postdiv2_shift;
+	u8	postdiv2_width;
+	void __iomem	*ctrl_reg2;
+	u8	fbdiv_shift;
+	u8	fbdiv_width;
+	u8	refdiv_shift;
+	u8	refdiv_width;
+};
+
+/* soc clk config */
+static const struct hisi_fixed_rate_clock hi3559av100_fixed_rate_clks_crg[] = {
+	{ HI3559AV100_FIXED_1188M, "1188m",   NULL, 0, 1188000000, },
+	{ HI3559AV100_FIXED_1000M, "1000m",   NULL, 0, 1000000000, },
+	{ HI3559AV100_FIXED_842M, "842m",    NULL, 0, 842000000, },
+	{ HI3559AV100_FIXED_792M, "792m",    NULL, 0, 792000000, },
+	{ HI3559AV100_FIXED_750M, "750m",    NULL, 0, 750000000, },
+	{ HI3559AV100_FIXED_710M, "710m",    NULL, 0, 710000000, },
+	{ HI3559AV100_FIXED_680M, "680m",    NULL, 0, 680000000, },
+	{ HI3559AV100_FIXED_667M, "667m",    NULL, 0, 667000000, },
+	{ HI3559AV100_FIXED_631M, "631m",    NULL, 0, 631000000, },
+	{ HI3559AV100_FIXED_600M, "600m",    NULL, 0, 600000000, },
+	{ HI3559AV100_FIXED_568M, "568m",    NULL, 0, 568000000, },
+	{ HI3559AV100_FIXED_500M, "500m",    NULL, 0, 500000000, },
+	{ HI3559AV100_FIXED_475M, "475m",    NULL, 0, 475000000, },
+	{ HI3559AV100_FIXED_428M, "428m",    NULL, 0, 428000000, },
+	{ HI3559AV100_FIXED_400M, "400m",    NULL, 0, 400000000, },
+	{ HI3559AV100_FIXED_396M, "396m",    NULL, 0, 396000000, },
+	{ HI3559AV100_FIXED_300M, "300m",    NULL, 0, 300000000, },
+	{ HI3559AV100_FIXED_250M, "250m",    NULL, 0, 250000000, },
+	{ HI3559AV100_FIXED_200M, "200m",    NULL, 0, 200000000, },
+	{ HI3559AV100_FIXED_198M, "198m",    NULL, 0, 198000000, },
+	{ HI3559AV100_FIXED_187p5M, "187p5m",  NULL, 0, 187500000, },
+	{ HI3559AV100_FIXED_150M, "150m",    NULL, 0, 150000000, },
+	{ HI3559AV100_FIXED_148p5M, "148p5m",  NULL, 0, 148500000, },
+	{ HI3559AV100_FIXED_125M, "125m",    NULL, 0, 125000000, },
+	{ HI3559AV100_FIXED_107M, "107m",    NULL, 0, 107000000, },
+	{ HI3559AV100_FIXED_100M, "100m",    NULL, 0, 100000000, },
+	{ HI3559AV100_FIXED_99M, "99m",     NULL, 0, 99000000, },
+	{ HI3559AV100_FIXED_75M, "75m",  NULL, 0, 75000000, },
+	{ HI3559AV100_FIXED_74p25M, "74p25m",  NULL, 0, 74250000, },
+	{ HI3559AV100_FIXED_72M, "72m",     NULL, 0, 72000000, },
+	{ HI3559AV100_FIXED_60M, "60m",     NULL, 0, 60000000, },
+	{ HI3559AV100_FIXED_54M, "54m",     NULL, 0, 54000000, },
+	{ HI3559AV100_FIXED_50M, "50m",     NULL, 0, 50000000, },
+	{ HI3559AV100_FIXED_49p5M, "49p5m",   NULL, 0, 49500000, },
+	{ HI3559AV100_FIXED_37p125M, "37p125m", NULL, 0, 37125000, },
+	{ HI3559AV100_FIXED_36M, "36m",     NULL, 0, 36000000, },
+	{ HI3559AV100_FIXED_32p4M, "32p4m",   NULL, 0, 32400000, },
+	{ HI3559AV100_FIXED_27M, "27m",     NULL, 0, 27000000, },
+	{ HI3559AV100_FIXED_25M, "25m",     NULL, 0, 25000000, },
+	{ HI3559AV100_FIXED_24M, "24m",     NULL, 0, 24000000, },
+	{ HI3559AV100_FIXED_12M, "12m",     NULL, 0, 12000000, },
+	{ HI3559AV100_FIXED_3M,	 "3m",      NULL, 0, 3000000, },
+	{ HI3559AV100_FIXED_1p6M, "1p6m",    NULL, 0, 1600000, },
+	{ HI3559AV100_FIXED_400K, "400k",    NULL, 0, 400000, },
+	{ HI3559AV100_FIXED_100K, "100k",    NULL, 0, 100000, },
+};
+
+static const char *fmc_mux_p[] __initconst = {
+	"24m", "75m", "125m", "150m", "200m", "250m", "300m", "400m"
+};
+static u32 fmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7};
+
+static const char *mmc_mux_p[] __initconst = {
+	"100k", "25m", "49p5m", "99m", "187p5m", "150m", "198m", "400k"
+};
+static u32 mmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7};
+
+static const char *sysapb_mux_p[] __initconst = {
+	"24m", "50m",
+};
+static u32 sysapb_mux_table[] = {0, 1};
+
+static const char *sysbus_mux_p[] __initconst = {
+	"24m", "300m"
+};
+static u32 sysbus_mux_table[] = {0, 1};
+
+static const char *uart_mux_p[] __initconst = {"50m", "24m", "3m"};
+static u32 uart_mux_table[] = {0, 1, 2};
+
+static const char *a73_clksel_mux_p[] __initconst = {
+	"24m", "apll", "1000m"
+};
+static u32 a73_clksel_mux_table[] = {0, 1, 2};
+
+static struct hisi_mux_clock hi3559av100_mux_clks_crg[] __initdata = {
+	{
+		HI3559AV100_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x170, 2, 3, 0, fmc_mux_table,
+	},
+	{
+		HI3559AV100_MMC0_MUX, "mmc0_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x1a8, 24, 3, 0, mmc_mux_table,
+	},
+	{
+		HI3559AV100_MMC1_MUX, "mmc1_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x1ec, 24, 3, 0, mmc_mux_table,
+	},
+
+	{
+		HI3559AV100_MMC2_MUX, "mmc2_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x214, 24, 3, 0, mmc_mux_table,
+	},
+
+	{
+		HI3559AV100_MMC3_MUX, "mmc3_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
+		CLK_SET_RATE_PARENT, 0x23c, 24, 3, 0, mmc_mux_table,
+	},
+
+	{
+		HI3559AV100_SYSAPB_MUX, "sysapb_mux", sysapb_mux_p, ARRAY_SIZE(sysapb_mux_p),
+		CLK_SET_RATE_PARENT, 0xe8, 3, 1, 0, sysapb_mux_table
+	},
+
+	{
+		HI3559AV100_SYSBUS_MUX, "sysbus_mux", sysbus_mux_p, ARRAY_SIZE(sysbus_mux_p),
+		CLK_SET_RATE_PARENT, 0xe8, 0, 1, 0, sysbus_mux_table
+	},
+
+	{
+		HI3559AV100_UART_MUX, "uart_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
+		CLK_SET_RATE_PARENT, 0x198, 28, 2, 0, uart_mux_table
+	},
+
+	{
+		HI3559AV100_A73_MUX, "a73_mux", a73_clksel_mux_p, ARRAY_SIZE(a73_clksel_mux_p),
+		CLK_SET_RATE_PARENT, 0xe4, 0, 2, 0, a73_clksel_mux_table
+	},
+};
+
+static struct hisi_fixed_factor_clock hi3559av100_fixed_factor_clks[] __initdata
+	= {
+};
+
+static struct hisi_gate_clock hi3559av100_gate_clks[] __initdata = {
+	{
+		HI3559AV100_FMC_CLK, "clk_fmc", "fmc_mux",
+		CLK_SET_RATE_PARENT, 0x170, 1, 0,
+	},
+	{
+		HI3559AV100_MMC0_CLK, "clk_mmc0", "mmc0_mux",
+		CLK_SET_RATE_PARENT, 0x1a8, 28, 0,
+	},
+	{
+		HI3559AV100_MMC1_CLK, "clk_mmc1", "mmc1_mux",
+		CLK_SET_RATE_PARENT, 0x1ec, 28, 0,
+	},
+	{
+		HI3559AV100_MMC2_CLK, "clk_mmc2", "mmc2_mux",
+		CLK_SET_RATE_PARENT, 0x214, 28, 0,
+	},
+	{
+		HI3559AV100_MMC3_CLK, "clk_mmc3", "mmc3_mux",
+		CLK_SET_RATE_PARENT, 0x23c, 28, 0,
+	},
+	{
+		HI3559AV100_UART0_CLK, "clk_uart0", "uart_mux",
+		CLK_SET_RATE_PARENT, 0x198, 23, 0,
+	},
+	{
+		HI3559AV100_UART1_CLK, "clk_uart1", "uart_mux",
+		CLK_SET_RATE_PARENT, 0x198, 24, 0,
+	},
+	{
+		HI3559AV100_UART2_CLK, "clk_uart2", "uart_mux",
+		CLK_SET_RATE_PARENT, 0x198, 25, 0,
+	},
+	{
+		HI3559AV100_UART3_CLK, "clk_uart3", "uart_mux",
+		CLK_SET_RATE_PARENT, 0x198, 26, 0,
+	},
+	{
+		HI3559AV100_UART4_CLK, "clk_uart4", "uart_mux",
+		CLK_SET_RATE_PARENT, 0x198, 27, 0,
+	},
+	{
+		HI3559AV100_ETH_CLK, "clk_eth", NULL,
+		CLK_SET_RATE_PARENT, 0x0174, 1, 0,
+	},
+	{
+		HI3559AV100_ETH_MACIF_CLK, "clk_eth_macif", NULL,
+		CLK_SET_RATE_PARENT, 0x0174, 5, 0,
+	},
+	{
+		HI3559AV100_ETH1_CLK, "clk_eth1", NULL,
+		CLK_SET_RATE_PARENT, 0x0174, 3, 0,
+	},
+	{
+		HI3559AV100_ETH1_MACIF_CLK, "clk_eth1_macif", NULL,
+		CLK_SET_RATE_PARENT, 0x0174, 7, 0,
+	},
+	{
+		HI3559AV100_I2C0_CLK, "clk_i2c0", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 16, 0,
+	},
+	{
+		HI3559AV100_I2C1_CLK, "clk_i2c1", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 17, 0,
+	},
+	{
+		HI3559AV100_I2C2_CLK, "clk_i2c2", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 18, 0,
+	},
+	{
+		HI3559AV100_I2C3_CLK, "clk_i2c3", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 19, 0,
+	},
+	{
+		HI3559AV100_I2C4_CLK, "clk_i2c4", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 20, 0,
+	},
+	{
+		HI3559AV100_I2C5_CLK, "clk_i2c5", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 21, 0,
+	},
+	{
+		HI3559AV100_I2C6_CLK, "clk_i2c6", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 22, 0,
+	},
+	{
+		HI3559AV100_I2C7_CLK, "clk_i2c7", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 23, 0,
+	},
+	{
+		HI3559AV100_I2C8_CLK, "clk_i2c8", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 24, 0,
+	},
+	{
+		HI3559AV100_I2C9_CLK, "clk_i2c9", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 25, 0,
+	},
+	{
+		HI3559AV100_I2C10_CLK, "clk_i2c10", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 26, 0,
+	},
+	{
+		HI3559AV100_I2C11_CLK, "clk_i2c11", "50m",
+		CLK_SET_RATE_PARENT, 0x01a0, 27, 0,
+	},
+	{
+		HI3559AV100_SPI0_CLK, "clk_spi0", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 16, 0,
+	},
+	{
+		HI3559AV100_SPI1_CLK, "clk_spi1", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 17, 0,
+	},
+	{
+		HI3559AV100_SPI2_CLK, "clk_spi2", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 18, 0,
+	},
+	{
+		HI3559AV100_SPI3_CLK, "clk_spi3", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 19, 0,
+	},
+	{
+		HI3559AV100_SPI4_CLK, "clk_spi4", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 20, 0,
+	},
+	{
+		HI3559AV100_SPI5_CLK, "clk_spi5", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 21, 0,
+	},
+	{
+		HI3559AV100_SPI6_CLK, "clk_spi6", "100m",
+		CLK_SET_RATE_PARENT, 0x0198, 22, 0,
+	},
+	{
+		HI3559AV100_EDMAC_AXICLK, "axi_clk_edmac", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 6, 0,
+	},
+	{
+		HI3559AV100_EDMAC_CLK, "clk_edmac", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 5, 0,
+	},
+	{
+		HI3559AV100_EDMAC1_AXICLK, "axi_clk_edmac1", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 9, 0,
+	},
+	{
+		HI3559AV100_EDMAC1_CLK, "clk_edmac1", NULL,
+		CLK_SET_RATE_PARENT, 0x16c, 8, 0,
+	},
+	{
+		HI3559AV100_VDMAC_CLK, "clk_vdmac", NULL,
+		CLK_SET_RATE_PARENT, 0x14c, 5, 0,
+	},
+};
+
+static struct hi3559av100_pll_clock hi3559av100_pll_clks[] __initdata = {
+	{
+		HI3559AV100_APLL_CLK, "apll", NULL, 0x0, 0, 24, 24, 3, 28, 3,
+		0x4, 0, 12, 12, 6
+	},
+	{
+		HI3559AV100_GPLL_CLK, "gpll", NULL, 0x20, 0, 24, 24, 3, 28, 3,
+		0x24, 0, 12, 12, 6
+	},
+};
+
+#define to_pll_clk(_hw) container_of(_hw, struct hi3559av100_clk_pll, hw)
+static void hi3559av100_calc_pll(u32 *frac_val, u32 *postdiv1_val,
+				 u32 *postdiv2_val,
+				 u32 *fbdiv_val, u32 *refdiv_val, u64 rate)
+{
+	u64 rem;
+
+	*postdiv1_val = 2;
+	*postdiv2_val = 1;
+
+	rate = rate * ((*postdiv1_val) * (*postdiv2_val));
+
+	*frac_val = 0;
+	rem = do_div(rate, 1000000);
+	rem = do_div(rate, 24);
+	*fbdiv_val = rate;
+	*refdiv_val = 1;
+	rem = rem * (1 << 24);
+	do_div(rem, 24);
+	*frac_val = rem;
+}
+
+static int clk_pll_set_rate(struct clk_hw *hw,
+			    unsigned long rate,
+			    unsigned long parent_rate)
+{
+	struct hi3559av100_clk_pll *clk = to_pll_clk(hw);
+	u32 frac_val, postdiv1_val, postdiv2_val, fbdiv_val, refdiv_val;
+	u32 val;
+
+	postdiv1_val = postdiv2_val = 0;
+
+	hi3559av100_calc_pll(&frac_val, &postdiv1_val, &postdiv2_val,
+			     &fbdiv_val, &refdiv_val, (u64)rate);
+
+	val = readl_relaxed(clk->ctrl_reg1);
+	val &= ~(((1 << clk->frac_width) - 1) << clk->frac_shift);
+	val &= ~(((1 << clk->postdiv1_width) - 1) << clk->postdiv1_shift);
+	val &= ~(((1 << clk->postdiv2_width) - 1) << clk->postdiv2_shift);
+
+	val |= frac_val << clk->frac_shift;
+	val |= postdiv1_val << clk->postdiv1_shift;
+	val |= postdiv2_val << clk->postdiv2_shift;
+	writel_relaxed(val, clk->ctrl_reg1);
+
+	val = readl_relaxed(clk->ctrl_reg2);
+	val &= ~(((1 << clk->fbdiv_width) - 1) << clk->fbdiv_shift);
+	val &= ~(((1 << clk->refdiv_width) - 1) << clk->refdiv_shift);
+
+	val |= fbdiv_val << clk->fbdiv_shift;
+	val |= refdiv_val << clk->refdiv_shift;
+	writel_relaxed(val, clk->ctrl_reg2);
+
+	return 0;
+}
+
+static unsigned long clk_pll_recalc_rate(struct clk_hw *hw,
+		unsigned long parent_rate)
+{
+	struct hi3559av100_clk_pll *clk = to_pll_clk(hw);
+	u64 frac_val, fbdiv_val, refdiv_val;
+	u32 postdiv1_val, postdiv2_val;
+	u32 val;
+	u64 tmp, rate;
+
+	val = readl_relaxed(clk->ctrl_reg1);
+	val = val >> clk->frac_shift;
+	val &= ((1 << clk->frac_width) - 1);
+	frac_val = val;
+
+	val = readl_relaxed(clk->ctrl_reg1);
+	val = val >> clk->postdiv1_shift;
+	val &= ((1 << clk->postdiv1_width) - 1);
+	postdiv1_val = val;
+
+	val = readl_relaxed(clk->ctrl_reg1);
+	val = val >> clk->postdiv2_shift;
+	val &= ((1 << clk->postdiv2_width) - 1);
+	postdiv2_val = val;
+
+	val = readl_relaxed(clk->ctrl_reg2);
+	val = val >> clk->fbdiv_shift;
+	val &= ((1 << clk->fbdiv_width) - 1);
+	fbdiv_val = val;
+
+	val = readl_relaxed(clk->ctrl_reg2);
+	val = val >> clk->refdiv_shift;
+	val &= ((1 << clk->refdiv_width) - 1);
+	refdiv_val = val;
+
+	/* rate = 24000000 * (fbdiv + frac / (1<<24) ) / refdiv  */
+	rate = 0;
+	tmp = 24000000 * fbdiv_val + (24000000 * frac_val) / (1 << 24);
+	rate += tmp;
+	do_div(rate, refdiv_val);
+	do_div(rate, postdiv1_val * postdiv2_val);
+
+	return rate;
+}
+
+static int clk_pll_determine_rate(struct clk_hw *hw,
+				  struct clk_rate_request *req)
+{
+	return req->rate;
+}
+
+static const struct clk_ops clk_pll_ops = {
+	.set_rate = clk_pll_set_rate,
+	.determine_rate = clk_pll_determine_rate,
+	.recalc_rate = clk_pll_recalc_rate,
+};
+
+static void hisi_clk_register_pll(struct hi3559av100_pll_clock *clks,
+			   int nums, struct hisi_clock_data *data)
+{
+	void __iomem *base = data->base;
+	int i;
+
+	for (i = 0; i < nums; i++) {
+		struct hi3559av100_clk_pll *p_clk = NULL;
+		struct clk *clk = NULL;
+		struct clk_init_data init;
+
+		p_clk = kzalloc(sizeof(*p_clk), GFP_KERNEL);
+		if (!p_clk)
+			return;
+
+		init.name = clks[i].name;
+		init.flags = 0;
+		init.parent_names =
+			(clks[i].parent_name ? &clks[i].parent_name : NULL);
+		init.num_parents = (clks[i].parent_name ? 1 : 0);
+		init.ops = &clk_pll_ops;
+
+		p_clk->ctrl_reg1 = base + clks[i].ctrl_reg1;
+		p_clk->frac_shift = clks[i].frac_shift;
+		p_clk->frac_width = clks[i].frac_width;
+		p_clk->postdiv1_shift = clks[i].postdiv1_shift;
+		p_clk->postdiv1_width = clks[i].postdiv1_width;
+		p_clk->postdiv2_shift = clks[i].postdiv2_shift;
+		p_clk->postdiv2_width = clks[i].postdiv2_width;
+
+		p_clk->ctrl_reg2 = base + clks[i].ctrl_reg2;
+		p_clk->fbdiv_shift = clks[i].fbdiv_shift;
+		p_clk->fbdiv_width = clks[i].fbdiv_width;
+		p_clk->refdiv_shift = clks[i].refdiv_shift;
+		p_clk->refdiv_width = clks[i].refdiv_width;
+		p_clk->hw.init = &init;
+
+		clk = clk_register(NULL, &p_clk->hw);
+		if (IS_ERR(clk)) {
+			kfree(p_clk);
+			pr_err("%s: failed to register clock %s\n",
+			       __func__, clks[i].name);
+			continue;
+		}
+
+		data->clk_data.clks[clks[i].id] = clk;
+	}
+}
+
+static __init struct hisi_clock_data *hi3559av100_clk_register(
+	struct platform_device *pdev)
+{
+	struct hisi_clock_data *clk_data;
+	int ret;
+
+	clk_data = hisi_clk_alloc(pdev, HI3559AV100_CRG_NR_CLKS);
+	if (!clk_data)
+		return ERR_PTR(-ENOMEM);
+
+	ret = hisi_clk_register_fixed_rate(hi3559av100_fixed_rate_clks_crg,
+					   ARRAY_SIZE(hi3559av100_fixed_rate_clks_crg), clk_data);
+	if (ret)
+		return ERR_PTR(ret);
+
+	hisi_clk_register_pll(hi3559av100_pll_clks,
+			      ARRAY_SIZE(hi3559av100_pll_clks), clk_data);
+
+	ret = hisi_clk_register_mux(hi3559av100_mux_clks_crg,
+				    ARRAY_SIZE(hi3559av100_mux_clks_crg), clk_data);
+	if (ret)
+		goto unregister_fixed_rate;
+
+	ret = hisi_clk_register_fixed_factor(hi3559av100_fixed_factor_clks,
+					     ARRAY_SIZE(hi3559av100_fixed_factor_clks), clk_data);
+	if (ret)
+		goto unregister_mux;
+
+	ret = hisi_clk_register_gate(hi3559av100_gate_clks,
+				     ARRAY_SIZE(hi3559av100_gate_clks), clk_data);
+	if (ret)
+		goto unregister_factor;
+
+	ret = of_clk_add_provider(pdev->dev.of_node,
+				  of_clk_src_onecell_get, &clk_data->clk_data);
+	if (ret)
+		goto unregister_gate;
+
+	return clk_data;
+
+unregister_gate:
+	hisi_clk_unregister_gate(hi3559av100_gate_clks,
+				 ARRAY_SIZE(hi3559av100_gate_clks), clk_data);
+unregister_factor:
+	hisi_clk_unregister_fixed_factor(hi3559av100_fixed_factor_clks,
+					 ARRAY_SIZE(hi3559av100_fixed_factor_clks), clk_data);
+unregister_mux:
+	hisi_clk_unregister_mux(hi3559av100_mux_clks_crg,
+				ARRAY_SIZE(hi3559av100_mux_clks_crg), clk_data);
+unregister_fixed_rate:
+	hisi_clk_unregister_fixed_rate(hi3559av100_fixed_rate_clks_crg,
+				       ARRAY_SIZE(hi3559av100_fixed_rate_clks_crg), clk_data);
+	return ERR_PTR(ret);
+}
+
+static __init void hi3559av100_clk_unregister(struct platform_device *pdev)
+{
+	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
+
+	of_clk_del_provider(pdev->dev.of_node);
+
+	hisi_clk_unregister_gate(hi3559av100_gate_clks,
+				 ARRAY_SIZE(hi3559av100_gate_clks), crg->clk_data);
+	hisi_clk_unregister_mux(hi3559av100_mux_clks_crg,
+				ARRAY_SIZE(hi3559av100_mux_clks_crg), crg->clk_data);
+	hisi_clk_unregister_fixed_factor(hi3559av100_fixed_factor_clks,
+					 ARRAY_SIZE(hi3559av100_fixed_factor_clks), crg->clk_data);
+	hisi_clk_unregister_fixed_rate(hi3559av100_fixed_rate_clks_crg,
+				       ARRAY_SIZE(hi3559av100_fixed_rate_clks_crg), crg->clk_data);
+}
+
+static const struct hisi_crg_funcs hi3559av100_crg_funcs = {
+	.register_clks = hi3559av100_clk_register,
+	.unregister_clks = hi3559av100_clk_unregister,
+};
+
+static struct hisi_fixed_rate_clock hi3559av100_shub_fixed_rate_clks[]
+	__initdata = {
+	{ HI3559AV100_SHUB_SOURCE_SOC_24M, "clk_source_24M", NULL, 0, 24000000UL, },
+	{ HI3559AV100_SHUB_SOURCE_SOC_200M, "clk_source_200M", NULL, 0, 200000000UL, },
+	{ HI3559AV100_SHUB_SOURCE_SOC_300M, "clk_source_300M", NULL, 0, 300000000UL, },
+	{ HI3559AV100_SHUB_SOURCE_PLL, "clk_source_PLL", NULL, 0, 192000000UL, },
+	{ HI3559AV100_SHUB_I2C0_CLK, "clk_shub_i2c0", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C1_CLK, "clk_shub_i2c1", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C2_CLK, "clk_shub_i2c2", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C3_CLK, "clk_shub_i2c3", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C4_CLK, "clk_shub_i2c4", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C5_CLK, "clk_shub_i2c5", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C6_CLK, "clk_shub_i2c6", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_I2C7_CLK, "clk_shub_i2c7", NULL, 0, 48000000UL, },
+	{ HI3559AV100_SHUB_UART_CLK_32K, "clk_uart_32K", NULL, 0, 32000UL, },
+};
+
+/* shub mux clk */
+static u32 shub_source_clk_mux_table[] = {0, 1, 2, 3};
+static const char *shub_source_clk_mux_p[] __initconst = {
+	"clk_source_24M", "clk_source_200M", "clk_source_300M", "clk_source_PLL"
+};
+
+static u32 shub_uart_source_clk_mux_table[] = {0, 1, 2, 3};
+static const char *shub_uart_source_clk_mux_p[] __initconst = {
+	"clk_uart_32K", "clk_uart_div_clk", "clk_uart_div_clk", "clk_source_24M"
+};
+
+static struct hisi_mux_clock hi3559av100_shub_mux_clks[] __initdata = {
+	{
+		HI3559AV100_SHUB_SOURCE_CLK, "shub_clk", shub_source_clk_mux_p,
+		ARRAY_SIZE(shub_source_clk_mux_p),
+		0, 0x0, 0, 2, 0, shub_source_clk_mux_table,
+	},
+
+	{
+		HI3559AV100_SHUB_UART_SOURCE_CLK, "shub_uart_source_clk",
+		shub_uart_source_clk_mux_p, ARRAY_SIZE(shub_uart_source_clk_mux_p),
+		0, 0x1c, 28, 2, 0, shub_uart_source_clk_mux_table,
+	},
+};
+
+/* shub div clk */
+static struct clk_div_table shub_spi_clk_table[] = { {0, 8}, {1, 4}, {2, 2}, {} };
+static struct clk_div_table shub_spi4_clk_table[] = { {0, 8}, {1, 4}, {2, 2}, {3, 1}, {} };
+static struct clk_div_table shub_uart_div_clk_table[] = { {1, 8}, {2, 4}, {} };
+
+static struct hisi_divider_clock hi3559av100_shub_div_clks[] __initdata = {
+	{ HI3559AV100_SHUB_SPI_SOURCE_CLK, "clk_spi_clk", "shub_clk", 0, 0x20, 24, 2,
+	  CLK_DIVIDER_ALLOW_ZERO, shub_spi_clk_table,
+	},
+	{ HI3559AV100_SHUB_UART_DIV_CLK, "clk_uart_div_clk", "shub_clk", 0, 0x1c, 28, 2,
+	  CLK_DIVIDER_ALLOW_ZERO, shub_uart_div_clk_table,
+	},
+};
+
+/* shub gate clk */
+static struct hisi_gate_clock hi3559av100_shub_gate_clks[] __initdata = {
+	{
+		HI3559AV100_SHUB_SPI0_CLK, "clk_shub_spi0", "clk_spi_clk",
+		0, 0x20, 1, 0,
+	},
+	{
+		HI3559AV100_SHUB_SPI1_CLK, "clk_shub_spi1", "clk_spi_clk",
+		0, 0x20, 5, 0,
+	},
+	{
+		HI3559AV100_SHUB_SPI2_CLK, "clk_shub_spi2", "clk_spi_clk",
+		0, 0x20, 9, 0,
+	},
+
+	{
+		HI3559AV100_SHUB_UART0_CLK, "clk_shub_uart0", "shub_uart_source_clk",
+		0, 0x1c, 1, 0,
+	},
+	{
+		HI3559AV100_SHUB_UART1_CLK, "clk_shub_uart1", "shub_uart_source_clk",
+		0, 0x1c, 5, 0,
+	},
+	{
+		HI3559AV100_SHUB_UART2_CLK, "clk_shub_uart2", "shub_uart_source_clk",
+		0, 0x1c, 9, 0,
+	},
+	{
+		HI3559AV100_SHUB_UART3_CLK, "clk_shub_uart3", "shub_uart_source_clk",
+		0, 0x1c, 13, 0,
+	},
+	{
+		HI3559AV100_SHUB_UART4_CLK, "clk_shub_uart4", "shub_uart_source_clk",
+		0, 0x1c, 17, 0,
+	},
+	{
+		HI3559AV100_SHUB_UART5_CLK, "clk_shub_uart5", "shub_uart_source_clk",
+		0, 0x1c, 21, 0,
+	},
+	{
+		HI3559AV100_SHUB_UART6_CLK, "clk_shub_uart6", "shub_uart_source_clk",
+		0, 0x1c, 25, 0,
+	},
+
+	{
+		HI3559AV100_SHUB_EDMAC_CLK, "clk_shub_dmac", "shub_clk",
+		0, 0x24, 4, 0,
+	},
+};
+
+static int hi3559av100_shub_default_clk_set(void)
+{
+	void __iomem *crg_base;
+	unsigned int val;
+
+	crg_base = ioremap(CRG_BASE_ADDR, SZ_4K);
+	if (!crg_base)
+		return -ENOMEM;
+
+	/* SSP: 192M/2 */
+	val = readl_relaxed(crg_base + 0x20);
+	val |= (0x2 << 24);
+	writel_relaxed(val, crg_base + 0x20);
+
+	/* UART: 192M/8 */
+	val = readl_relaxed(crg_base + 0x1C);
+	val |= (0x1 << 28);
+	writel_relaxed(val, crg_base + 0x1C);
+
+	iounmap(crg_base);
+	crg_base = NULL;
+
+	return 0;
+}
+
+static __init struct hisi_clock_data *hi3559av100_shub_clk_register(
+	struct platform_device *pdev)
+{
+	struct hisi_clock_data *clk_data = NULL;
+	int ret;
+
+	hi3559av100_shub_default_clk_set();
+
+	clk_data = hisi_clk_alloc(pdev, HI3559AV100_SHUB_NR_CLKS);
+	if (!clk_data)
+		return ERR_PTR(-ENOMEM);
+
+	ret = hisi_clk_register_fixed_rate(hi3559av100_shub_fixed_rate_clks,
+					   ARRAY_SIZE(hi3559av100_shub_fixed_rate_clks), clk_data);
+	if (ret)
+		return ERR_PTR(ret);
+
+	ret = hisi_clk_register_mux(hi3559av100_shub_mux_clks,
+				    ARRAY_SIZE(hi3559av100_shub_mux_clks), clk_data);
+	if (ret)
+		goto unregister_fixed_rate;
+
+	ret = hisi_clk_register_divider(hi3559av100_shub_div_clks,
+					ARRAY_SIZE(hi3559av100_shub_div_clks), clk_data);
+	if (ret)
+		goto unregister_mux;
+
+	ret = hisi_clk_register_gate(hi3559av100_shub_gate_clks,
+				     ARRAY_SIZE(hi3559av100_shub_gate_clks), clk_data);
+	if (ret)
+		goto unregister_factor;
+
+	ret = of_clk_add_provider(pdev->dev.of_node,
+				  of_clk_src_onecell_get, &clk_data->clk_data);
+	if (ret)
+		goto unregister_gate;
+
+	return clk_data;
+
+unregister_gate:
+	hisi_clk_unregister_gate(hi3559av100_shub_gate_clks,
+				 ARRAY_SIZE(hi3559av100_shub_gate_clks), clk_data);
+unregister_factor:
+	hisi_clk_unregister_divider(hi3559av100_shub_div_clks,
+				    ARRAY_SIZE(hi3559av100_shub_div_clks), clk_data);
+unregister_mux:
+	hisi_clk_unregister_mux(hi3559av100_shub_mux_clks,
+				ARRAY_SIZE(hi3559av100_shub_mux_clks), clk_data);
+unregister_fixed_rate:
+	hisi_clk_unregister_fixed_rate(hi3559av100_shub_fixed_rate_clks,
+				       ARRAY_SIZE(hi3559av100_shub_fixed_rate_clks), clk_data);
+	return ERR_PTR(ret);
+}
+
+static __init void hi3559av100_shub_clk_unregister(struct platform_device *pdev)
+{
+	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
+
+	of_clk_del_provider(pdev->dev.of_node);
+
+	hisi_clk_unregister_gate(hi3559av100_shub_gate_clks,
+				 ARRAY_SIZE(hi3559av100_shub_gate_clks), crg->clk_data);
+	hisi_clk_unregister_divider(hi3559av100_shub_div_clks,
+				    ARRAY_SIZE(hi3559av100_shub_div_clks), crg->clk_data);
+	hisi_clk_unregister_mux(hi3559av100_shub_mux_clks,
+				ARRAY_SIZE(hi3559av100_shub_mux_clks), crg->clk_data);
+	hisi_clk_unregister_fixed_rate(hi3559av100_shub_fixed_rate_clks,
+				       ARRAY_SIZE(hi3559av100_shub_fixed_rate_clks), crg->clk_data);
+}
+
+static const struct hisi_crg_funcs hi3559av100_shub_crg_funcs = {
+	.register_clks = hi3559av100_shub_clk_register,
+	.unregister_clks = hi3559av100_shub_clk_unregister,
+};
+
+static const struct of_device_id hi3559av100_crg_match_table[] = {
+	{
+		.compatible = "hisilicon,hi3559av100-clock",
+		.data = &hi3559av100_crg_funcs
+	},
+	{
+		.compatible = "hisilicon,hi3559av100-shub-clock",
+		.data = &hi3559av100_shub_crg_funcs
+	},
+	{ }
+};
+MODULE_DEVICE_TABLE(of, hi3559av100_crg_match_table);
+
+static int hi3559av100_crg_probe(struct platform_device *pdev)
+{
+	struct hisi_crg_dev *crg;
+
+	crg = devm_kmalloc(&pdev->dev, sizeof(*crg), GFP_KERNEL);
+	if (!crg)
+		return -ENOMEM;
+
+	crg->funcs = of_device_get_match_data(&pdev->dev);
+	if (!crg->funcs)
+		return -ENOENT;
+
+	crg->rstc = hisi_reset_init(pdev);
+	if (!crg->rstc)
+		return -ENOMEM;
+
+	crg->clk_data = crg->funcs->register_clks(pdev);
+	if (IS_ERR(crg->clk_data)) {
+		hisi_reset_exit(crg->rstc);
+		return PTR_ERR(crg->clk_data);
+	}
+
+	platform_set_drvdata(pdev, crg);
+	return 0;
+}
+
+static int hi3559av100_crg_remove(struct platform_device *pdev)
+{
+	struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
+
+	hisi_reset_exit(crg->rstc);
+	crg->funcs->unregister_clks(pdev);
+	return 0;
+}
+
+static struct platform_driver hi3559av100_crg_driver = {
+	.probe		= hi3559av100_crg_probe,
+	.remove     = hi3559av100_crg_remove,
+	.driver		= {
+		.name	= "hi3559av100-clock",
+		.of_match_table = hi3559av100_crg_match_table,
+	},
+};
+
+static int __init hi3559av100_crg_init(void)
+{
+	return platform_driver_register(&hi3559av100_crg_driver);
+}
+core_initcall(hi3559av100_crg_init);
+
+static void __exit hi3559av100_crg_exit(void)
+{
+	platform_driver_unregister(&hi3559av100_crg_driver);
+}
+module_exit(hi3559av100_crg_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("HiSilicon Hi3559AV100 CRG Driver");
-- 
2.17.1



* [PATCH v7 3/4] dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller
  2020-12-15 11:09 [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Dongjiu Geng
  2020-12-15 11:09 ` [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings Dongjiu Geng
  2020-12-15 11:09 ` [PATCH v7 2/4] clk: hisilicon: Add clock driver for hi3559A SoC Dongjiu Geng
@ 2020-12-15 11:09 ` Dongjiu Geng
  2020-12-21 18:55   ` Rob Herring
  2020-12-15 11:09 ` [PATCH v7 4/4] dmaengine: dma: Add Hiedma Controller v310 Device Driver Dongjiu Geng
  2021-01-12 10:40 ` [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Vinod Koul
  4 siblings, 1 reply; 12+ messages in thread
From: Dongjiu Geng @ 2020-12-15 11:09 UTC (permalink / raw)
  To: mturquette, sboyd, robh+dt, vkoul, dan.j.williams, p.zabel,
	linux-clk, devicetree, linux-kernel, dmaengine, gengdongjiu

The Hiedma Controller v310 provides eight DMA channels; each
channel can be configured for one-way transfer. Data can be
transferred in 8-bit, 16-bit, 32-bit, or 64-bit mode. This
document describes the DT bindings of this controller.
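
The two-cell hisilicon,misc-control property carries a phandle to the
misc-control syscon plus a register offset. A minimal sketch of how a
driver would read it (mirroring what the hiedmacv310 driver later in
this series does; the helper name is illustrative):

  #include <linux/err.h>
  #include <linux/mfd/syscon.h>
  #include <linux/of.h>
  #include <linux/regmap.h>

  static int example_parse_misc(struct device_node *np,
                                struct regmap **map, u32 *offset)
  {
          /* first cell: the misc-control syscon node */
          *map = syscon_regmap_lookup_by_phandle(np, "hisilicon,misc-control");
          if (IS_ERR(*map))
                  return PTR_ERR(*map);

          /* second cell: base offset of the DMA request mux registers */
          return of_property_read_u32_index(np, "hisilicon,misc-control",
                                            1, offset);
  }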

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
---
 .../bindings/dma/hisilicon,hiedmacv310.yaml   | 94 +++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml

diff --git a/Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml b/Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml
new file mode 100644
index 000000000000..06a1ebe76360
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml
@@ -0,0 +1,94 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/hisilicon,hiedmacv310.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: HiSilicon Hiedma Controller v310 Device Tree Bindings
+
+description: |
+  These bindings describe the DMA engine included in the HiSilicon Hiedma
+  Controller v310 Device.
+
+maintainers:
+  - Dongjiu Geng <gengdongjiu@huawei.com>
+
+allOf:
+  - $ref: "dma-controller.yaml#"
+
+properties:
+  "#dma-cells":
+    const: 2
+
+  compatible:
+    const: hisilicon,hiedmacv310
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  hisilicon,misc-control:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    description: phandle pointing to the misc controller provider node and base register.
+
+  clocks:
+    items:
+      - description: apb clock
+      - description: axi clock
+
+  clock-names:
+    items:
+      - const: apb_pclk
+      - const: axi_aclk
+
+  resets:
+    description: phandle pointing to the dma reset controller provider node.
+
+  reset-names:
+    items:
+      - const: dma-reset
+
+  dma-requests:
+    maximum: 32
+
+  dma-channels:
+    maximum: 8
+
+required:
+  - "#dma-cells"
+  - compatible
+  - hisilicon,misc-control
+  - reg
+  - interrupts
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+  - dma-requests
+  - dma-channels
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/clock/hi3559av100-clock.h>
+
+    dma: dma-controller@10040000 {
+      compatible = "hisilicon,hiedmacv310";
+      reg = <0x10040000 0x1000>;
+      hisilicon,misc-control = <&misc_ctrl 0x144>;
+      interrupts = <0 82 4>;
+      clocks = <&clock HI3559AV100_EDMAC1_CLK>, <&clock HI3559AV100_EDMAC1_AXICLK>;
+      clock-names = "apb_pclk", "axi_aclk";
+      resets = <&clock 0x16c 7>;
+      reset-names = "dma-reset";
+      dma-requests = <32>;
+      dma-channels = <8>;
+      #dma-cells = <2>;
+    };
+
+...
-- 
2.17.1



* [PATCH v7 4/4] dmaengine: dma: Add Hiedma Controller v310 Device Driver
  2020-12-15 11:09 [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Dongjiu Geng
                   ` (2 preceding siblings ...)
  2020-12-15 11:09 ` [PATCH v7 3/4] dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller Dongjiu Geng
@ 2020-12-15 11:09 ` Dongjiu Geng
  2021-01-12 12:06   ` Vinod Koul
  2021-01-12 10:40 ` [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Vinod Koul
  4 siblings, 1 reply; 12+ messages in thread
From: Dongjiu Geng @ 2020-12-15 11:09 UTC (permalink / raw)
  To: mturquette, sboyd, robh+dt, vkoul, dan.j.williams, p.zabel,
	linux-clk, devicetree, linux-kernel, dmaengine, gengdongjiu

The Hisilicon EDMA Controller (EDMAC) directly transfers data
between a memory and a peripheral, between peripherals, or
between memories. This avoids CPU intervention and reduces the
interrupt handling overhead of the CPU. This driver enables
this controller.
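
Clients drive a channel exposed by this controller through the
standard dmaengine API. A minimal slave-transfer sketch (the "tx"
channel name, FIFO address and buffer are illustrative, assuming the
client node has matching dmas/dma-names properties):

  #include <linux/dmaengine.h>
  #include <linux/dma-mapping.h>
  #include <linux/err.h>

  static int example_dma_tx(struct device *dev, dma_addr_t buf,
                            size_t len, dma_addr_t fifo_addr)
  {
          struct dma_slave_config cfg = {
                  .direction = DMA_MEM_TO_DEV,
                  .dst_addr = fifo_addr,
                  .dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
          };
          struct dma_async_tx_descriptor *desc;
          struct dma_chan *chan;

          chan = dma_request_chan(dev, "tx");     /* from dmas/dma-names */
          if (IS_ERR(chan))
                  return PTR_ERR(chan);

          dmaengine_slave_config(chan, &cfg);
          desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                             DMA_PREP_INTERRUPT);
          if (!desc) {
                  dma_release_channel(chan);
                  return -EINVAL;
          }

          dmaengine_submit(desc);                 /* queue the descriptor */
          dma_async_issue_pending(chan);          /* start the transfer */
          return 0;
  }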

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
---
 drivers/dma/Kconfig       |   14 +
 drivers/dma/Makefile      |    1 +
 drivers/dma/hiedmacv310.c | 1442 +++++++++++++++++++++++++++++++++++++
 drivers/dma/hiedmacv310.h |  136 ++++
 4 files changed, 1593 insertions(+)
 create mode 100644 drivers/dma/hiedmacv310.c
 create mode 100644 drivers/dma/hiedmacv310.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 90284ffda58a..3e5107120ff1 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -327,6 +327,20 @@ config K3_DMA
 	  Support the DMA engine for Hisilicon K3 platform
 	  devices.
 
+config HIEDMACV310
+	tristate "Hisilicon EDMAC Controller support"
+	depends on ARCH_HISI
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Direct Memory Access (EDMA) is a high-speed data transfer
+	  operation. It supports data read/write between peripherals and
+	  memories without using the CPU.
+	  The Hisilicon EDMA Controller (EDMAC) directly transfers data
+	  between a memory and a peripheral, between peripherals, or
+	  between memories. This avoids CPU intervention and reduces the
+	  interrupt handling overhead of the CPU.
+
 config LPC18XX_DMAMUX
 	bool "NXP LPC18xx/43xx DMA MUX for PL080"
 	depends on ARCH_LPC18XX || COMPILE_TEST
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 948a8da05f8b..28c7298b671e 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx_dma.o
 obj-$(CONFIG_ST_FDMA) += st_fdma.o
 obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
+obj-$(CONFIG_HIEDMACV310) += hiedmacv310.o
 
 obj-y += mediatek/
 obj-y += qcom/
diff --git a/drivers/dma/hiedmacv310.c b/drivers/dma/hiedmacv310.c
new file mode 100644
index 000000000000..c0df5088a6a1
--- /dev/null
+++ b/drivers/dma/hiedmacv310.c
@@ -0,0 +1,1442 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * The Hiedma Controller v310 Device Driver for HiSilicon
+ *
+ * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
+ *
+ * Author: Dongjiu Geng <gengdongjiu@huawei.com>
+ */
+
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/clk.h>
+#include <linux/reset.h>
+#include <linux/platform_device.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_dma.h>
+#include <linux/pm_runtime.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/io.h>
+#include <linux/regmap.h>
+#include <linux/mfd/syscon.h>
+
+#include "hiedmacv310.h"
+#include "dmaengine.h"
+#include "virt-dma.h"
+
+#define DRIVER_NAME "hiedmacv310"
+
+#define MAX_TSFR_LLIS           512
+#define EDMACV300_LLI_WORDS     64
+#define EDMACV300_POOL_ALIGN    64
+#define BITS_PER_HALF_WORD 32
+
+struct hiedmac_lli {
+	u64 next_lli;
+	u32 reserved[5];
+	u32 count;
+	u64 src_addr;
+	u64 dest_addr;
+	u32 config;
+	u32 pad[3];
+};
+
+struct hiedmac_sg {
+	dma_addr_t src_addr;
+	dma_addr_t dst_addr;
+	size_t len;
+	struct list_head node;
+};
+
+struct transfer_desc {
+	struct virt_dma_desc virt_desc;
+	dma_addr_t llis_busaddr;
+	u64 *llis_vaddr;
+	u32 ccfg;
+	size_t size;
+	bool done;
+	bool cyclic;
+};
+
+enum edmac_dma_chan_state {
+	HIEDMAC_CHAN_IDLE,
+	HIEDMAC_CHAN_RUNNING,
+	HIEDMAC_CHAN_PAUSED,
+	HIEDMAC_CHAN_WAITING,
+};
+
+struct hiedmacv310_dma_chan {
+	bool slave;
+	int signal;
+	int id;
+	struct virt_dma_chan virt_chan;
+	struct hiedmacv310_phy_chan *phychan;
+	struct dma_slave_config cfg;
+	struct transfer_desc *at;
+	struct hiedmacv310_driver_data *host;
+	enum edmac_dma_chan_state state;
+};
+
+struct hiedmacv310_phy_chan {
+	unsigned int id;
+	void __iomem *base;
+	spinlock_t lock;
+	struct hiedmacv310_dma_chan *serving;
+};
+
+struct hiedmacv310_driver_data {
+	struct platform_device *dev;
+	struct dma_device slave;
+	struct dma_device memcpy;
+	void __iomem *base;
+	struct regmap *misc_regmap;
+	void __iomem *crg_ctrl;
+	struct hiedmacv310_phy_chan *phy_chans;
+	struct dma_pool *pool;
+	unsigned int misc_ctrl_base;
+	int irq;
+	struct clk *clk;
+	struct clk *axi_clk;
+	struct reset_control *rstc;
+	unsigned int channels;
+	unsigned int slave_requests;
+	unsigned int max_transfer_size;
+};
+
+#ifdef DEBUG_HIEDMAC
+static void dump_lli(const u64 *llis_vaddr, unsigned int num)
+{
+	struct hiedmac_lli *plli = (struct hiedmac_lli *)llis_vaddr;
+	unsigned int i;
+
+	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "lli num = %d", num);
+	for (i = 0; i < num; i++) {
+		hiedmacv310_info("lli%d:lli_L:      0x%llx\n", i,
+			plli[i].next_lli & 0xffffffff);
+		hiedmacv310_info("lli%d:lli_H:      0x%llx\n", i,
+			(plli[i].next_lli >> BITS_PER_HALF_WORD) & 0xffffffff);
+		hiedmacv310_info("lli%d:count:      0x%x\n", i,
+			plli[i].count);
+		hiedmacv310_info("lli%d:src_addr_L: 0x%llx\n", i,
+			plli[i].src_addr & 0xffffffff);
+		hiedmacv310_info("lli%d:src_addr_H: 0x%llx\n", i,
+			(plli[i].src_addr >> BITS_PER_HALF_WORD) & 0xffffffff);
+		hiedmacv310_info("lli%d:dst_addr_L: 0x%llx\n", i,
+				 plli[i].dest_addr & 0xffffffff);
+		hiedmacv310_info("lli%d:dst_addr_H: 0x%llx\n", i,
+			(plli[i].dest_addr >> BITS_PER_HALF_WORD) & 0xffffffff);
+		hiedmacv310_info("lli%d:CONFIG:	  0x%x\n", i,
+				 plli[i].config);
+	}
+}
+
+#else
+static void dump_lli(const u64 *llis_vaddr, unsigned int num)
+{
+}
+#endif
+
+static inline struct hiedmacv310_dma_chan *to_edamc_chan(const struct dma_chan *chan)
+{
+	return container_of(chan, struct hiedmacv310_dma_chan, virt_chan.chan);
+}
+
+static inline struct transfer_desc *to_edmac_transfer_desc(
+	const struct dma_async_tx_descriptor *tx)
+{
+	return container_of(tx, struct transfer_desc, virt_desc.tx);
+}
+
+static struct dma_chan *hiedmac_find_chan_id(
+	const struct hiedmacv310_driver_data *hiedmac,
+	int request_num)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = NULL;
+
+	list_for_each_entry(edmac_dma_chan, &hiedmac->slave.channels,
+			    virt_chan.chan.device_node) {
+		if (edmac_dma_chan->id == request_num)
+			return &edmac_dma_chan->virt_chan.chan;
+	}
+	return NULL;
+}
+
+static struct dma_chan *hiedma_of_xlate(struct of_phandle_args *dma_spec,
+					struct of_dma *ofdma)
+{
+	struct hiedmacv310_driver_data *hiedmac = ofdma->of_dma_data;
+	struct hiedmacv310_dma_chan *edmac_dma_chan = NULL;
+	struct dma_chan *dma_chan = NULL;
+	struct regmap *misc = NULL;
+	unsigned int signal, request_num;
+	unsigned int reg = 0;
+	unsigned int offset = 0;
+
+	if (!hiedmac)
+		return NULL;
+
+	misc = hiedmac->misc_regmap;
+
+	if (dma_spec->args_count != 2) { /* check num of dts node args */
+		hiedmacv310_error("invalid args count!");
+		return NULL;
+	}
+
+	request_num = dma_spec->args[0];
+	signal = dma_spec->args[1];
+
+	if (misc != NULL) {
+		offset = hiedmac->misc_ctrl_base + (request_num & (~0x3));
+		regmap_read(misc, offset, &reg);
+		/* set misc for signal line */
+		reg &= ~(0x3f << ((request_num & 0x3) << 3));
+		reg |= signal << ((request_num & 0x3) << 3);
+		regmap_write(misc, offset, reg);
+	}
+
+	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL,
+			  "offset = 0x%x, reg = 0x%x", offset, reg);
+
+	dma_chan = hiedmac_find_chan_id(hiedmac, request_num);
+	if (!dma_chan) {
+		hiedmacv310_error("DMA slave channel is not found!");
+		return NULL;
+	}
+
+	edmac_dma_chan = to_edamc_chan(dma_chan);
+	edmac_dma_chan->signal = request_num;
+	return dma_get_slave_channel(dma_chan);
+}
+
+static int hiedmacv310_devm_get(struct hiedmacv310_driver_data *hiedmac)
+{
+	struct platform_device *platdev = hiedmac->dev;
+	struct resource *res = NULL;
+
+	hiedmac->clk = devm_clk_get(&(platdev->dev), "apb_pclk");
+	if (IS_ERR(hiedmac->clk))
+		return PTR_ERR(hiedmac->clk);
+
+	hiedmac->axi_clk = devm_clk_get(&(platdev->dev), "axi_aclk");
+	if (IS_ERR(hiedmac->axi_clk))
+		return PTR_ERR(hiedmac->axi_clk);
+
+	hiedmac->irq = platform_get_irq(platdev, 0);
+	if (unlikely(hiedmac->irq < 0))
+		return -ENODEV;
+
+	hiedmac->rstc = devm_reset_control_get(&(platdev->dev), "dma-reset");
+	if (IS_ERR(hiedmac->rstc))
+		return PTR_ERR(hiedmac->rstc);
+
+	res = platform_get_resource(platdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		hiedmacv310_error("no reg resource");
+		return -ENODEV;
+	}
+
+	hiedmac->base = devm_ioremap_resource(&(platdev->dev), res);
+	if (IS_ERR(hiedmac->base))
+		return PTR_ERR(hiedmac->base);
+
+	return 0;
+}
+
+static int hiedmacv310_of_property_read(struct hiedmacv310_driver_data *hiedmac)
+{
+	struct platform_device *platdev = hiedmac->dev;
+	struct device_node *np = platdev->dev.of_node;
+	int ret;
+
+	hiedmac->misc_regmap = syscon_regmap_lookup_by_phandle(np, "hisilicon,misc-control");
+	if (IS_ERR(hiedmac->misc_regmap))
+		return PTR_ERR(hiedmac->misc_regmap);
+
+	ret = of_property_read_u32_index(np, "hisilicon,misc-control", 1,
+					 &(hiedmac->misc_ctrl_base));
+	if (ret) {
+		hiedmacv310_error("get dma-misc_ctrl_base fail");
+		return -ENODEV;
+	}
+
+	ret = of_property_read_u32(np, "dma-channels", &(hiedmac->channels));
+	if (ret) {
+		hiedmacv310_error("get dma-channels fail");
+		return -ENODEV;
+	}
+	ret = of_property_read_u32(np, "dma-requests", &(hiedmac->slave_requests));
+	if (ret) {
+		hiedmacv310_error("get dma-requests fail");
+		return -ENODEV;
+	}
+	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, "dma-channels = %d, dma-requests = %d",
+			  hiedmac->channels, hiedmac->slave_requests);
+	return 0;
+}
+
+static int get_of_probe(struct hiedmacv310_driver_data *hiedmac)
+{
+	struct platform_device *platdev = hiedmac->dev;
+	int ret;
+
+	ret = hiedmacv310_devm_get(hiedmac);
+	if (ret)
+		return ret;
+
+	ret = hiedmacv310_of_property_read(hiedmac);
+	if (ret)
+		return ret;
+
+	return of_dma_controller_register(platdev->dev.of_node,
+					  hiedma_of_xlate, hiedmac);
+}
+
+static void hiedmac_free_chan_resources(struct dma_chan *chan)
+{
+	vchan_free_chan_resources(to_virt_chan(chan));
+}
+
+static size_t read_residue_from_phychan(
+	struct hiedmacv310_dma_chan *edmac_dma_chan,
+	struct transfer_desc *tsf_desc)
+{
+	size_t bytes;
+	u64 next_lli;
+	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
+	unsigned int i, index;
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct hiedmac_lli *plli = NULL;
+
+	next_lli = (hiedmacv310_readl(hiedmac->base + hiedmac_cx_lli_l(phychan->id)) &
+			(~(HIEDMAC_LLI_ALIGN - 1)));
+	next_lli |= ((u64)(hiedmacv310_readl(hiedmac->base + hiedmac_cx_lli_h(
+			phychan->id)) & 0xffffffff) << BITS_PER_HALF_WORD);
+	bytes = hiedmacv310_readl(hiedmac->base + hiedmac_cx_curr_cnt0(
+			phychan->id));
+	if (next_lli != 0) {
+		/* It means lli mode */
+		bytes += tsf_desc->size;
+		index = (next_lli - tsf_desc->llis_busaddr) / sizeof(*plli);
+		plli = (struct hiedmac_lli *)(tsf_desc->llis_vaddr);
+		for (i = 0; i < index; i++)
+			bytes -= plli[i].count;
+	}
+	return bytes;
+}
+
+static enum dma_status hiedmac_tx_status(struct dma_chan *chan,
+					 dma_cookie_t cookie,
+					 struct dma_tx_state *txstate)
+{
+	enum dma_status ret;
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	struct virt_dma_desc *vd = NULL;
+	struct transfer_desc *tsf_desc = NULL;
+	unsigned long flags;
+	size_t bytes;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	if (edmac_dma_chan->state == HIEDMAC_CHAN_PAUSED && ret == DMA_IN_PROGRESS) {
+		ret = DMA_PAUSED;
+		return ret;
+	}
+
+	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
+	vd = vchan_find_desc(&edmac_dma_chan->virt_chan, cookie);
+	if (vd) {
+		/* not transferred yet */
+		tsf_desc = to_edmac_transfer_desc(&vd->tx);
+		bytes = tsf_desc->size;
+	} else {
+		/* transferring */
+		tsf_desc = edmac_dma_chan->at;
+
+		if (!(edmac_dma_chan->phychan) || !tsf_desc) {
+			spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+			return ret;
+		}
+		bytes = read_residue_from_phychan(edmac_dma_chan, tsf_desc);
+	}
+	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+	dma_set_residue(txstate, bytes);
+	return ret;
+}
+
+static struct hiedmacv310_phy_chan *hiedmac_get_phy_channel(
+	const struct hiedmacv310_driver_data *hiedmac,
+	struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	struct hiedmacv310_phy_chan *ch = NULL;
+	unsigned long flags;
+	int i;
+
+	for (i = 0; i < hiedmac->channels; i++) {
+		ch = &hiedmac->phy_chans[i];
+
+		spin_lock_irqsave(&ch->lock, flags);
+
+		if (!ch->serving) {
+			ch->serving = edmac_dma_chan;
+			spin_unlock_irqrestore(&ch->lock, flags);
+			break;
+		}
+		spin_unlock_irqrestore(&ch->lock, flags);
+	}
+
+	if (i == hiedmac->channels)
+		return NULL;
+
+	return ch;
+}
+
+static void hiedmac_write_lli(const struct hiedmacv310_driver_data *hiedmac,
+			      const struct hiedmacv310_phy_chan *phychan,
+			      const struct transfer_desc *tsf_desc)
+{
+	struct hiedmac_lli *plli = (struct hiedmac_lli *)tsf_desc->llis_vaddr;
+
+	if (plli->next_lli != 0x0)
+		hiedmacv310_writel((plli->next_lli & 0xffffffff) | HIEDMAC_LLI_ENABLE,
+				   hiedmac->base + hiedmac_cx_lli_l(phychan->id));
+	else
+		hiedmacv310_writel((plli->next_lli & 0xffffffff),
+				   hiedmac->base + hiedmac_cx_lli_l(phychan->id));
+
+	hiedmacv310_writel(((plli->next_lli >> 32) & 0xffffffff),
+			   hiedmac->base + hiedmac_cx_lli_h(phychan->id));
+	hiedmacv310_writel(plli->count, hiedmac->base + hiedmac_cx_cnt0(phychan->id));
+	hiedmacv310_writel(plli->src_addr & 0xffffffff,
+			   hiedmac->base + hiedmac_cx_src_addr_l(phychan->id));
+	hiedmacv310_writel((plli->src_addr >> 32) & 0xffffffff,
+			   hiedmac->base + hiedmac_cx_src_addr_h(phychan->id));
+	hiedmacv310_writel(plli->dest_addr & 0xffffffff,
+			   hiedmac->base + hiedmac_cx_dest_addr_l(phychan->id));
+	hiedmacv310_writel((plli->dest_addr >> 32) & 0xffffffff,
+			   hiedmac->base + hiedmac_cx_dest_addr_h(phychan->id));
+	hiedmacv310_writel(plli->config,
+			   hiedmac->base + hiedmac_cx_config(phychan->id));
+}
+
+static void hiedmac_start_next_txd(struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
+	struct virt_dma_desc *vd = vchan_next_desc(&edmac_dma_chan->virt_chan);
+	struct transfer_desc *tsf_desc = to_edmac_transfer_desc(&vd->tx);
+	unsigned int val;
+
+	list_del(&tsf_desc->virt_desc.node);
+	edmac_dma_chan->at = tsf_desc;
+	hiedmac_write_lli(hiedmac, phychan, tsf_desc);
+	val = hiedmacv310_readl(hiedmac->base + hiedmac_cx_config(phychan->id));
+	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, " HIEDMAC_Cx_CONFIG  = 0x%x", val);
+	hiedmacv310_writel(val | HIEDMAC_CXCONFIG_LLI_START,
+			   hiedmac->base + hiedmac_cx_config(phychan->id));
+}
+
+static void hiedmac_start(struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct hiedmacv310_phy_chan *ch;
+
+	ch = hiedmac_get_phy_channel(hiedmac, edmac_dma_chan);
+	if (!ch) {
+		hiedmacv310_error("no phy channel available !");
+		edmac_dma_chan->state = HIEDMAC_CHAN_WAITING;
+		return;
+	}
+	edmac_dma_chan->phychan = ch;
+	edmac_dma_chan->state = HIEDMAC_CHAN_RUNNING;
+	hiedmac_start_next_txd(edmac_dma_chan);
+}
+
+static void hiedmac_issue_pending(struct dma_chan *chan)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
+	if (vchan_issue_pending(&edmac_dma_chan->virt_chan)) {
+		if (!edmac_dma_chan->phychan && edmac_dma_chan->state != HIEDMAC_CHAN_WAITING)
+			hiedmac_start(edmac_dma_chan);
+	}
+	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+}
+
+static void hiedmac_free_txd_list(struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	LIST_HEAD(head);
+
+	vchan_get_all_descriptors(&edmac_dma_chan->virt_chan, &head);
+	vchan_dma_desc_free_list(&edmac_dma_chan->virt_chan, &head);
+}
+
+static int hiedmac_config(struct dma_chan *chan,
+			  struct dma_slave_config *config)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+
+	if (!edmac_dma_chan->slave) {
+		hiedmacv310_error("slave is null!");
+		return -EINVAL;
+	}
+	edmac_dma_chan->cfg = *config;
+	return 0;
+}
+
+static void hiedmac_pause_phy_chan(const struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
+	unsigned int val;
+	int timeout;
+
+	val = hiedmacv310_readl(hiedmac->base + hiedmac_cx_config(phychan->id));
+	val &= ~CCFG_EN;
+	hiedmacv310_writel(val, hiedmac->base + hiedmac_cx_config(phychan->id));
+	/* Wait for channel inactive */
+	for (timeout = 2000; timeout > 0; timeout--) {
+		if (!((0x1 << phychan->id) & hiedmacv310_readl(hiedmac->base + HIEDMAC_CH_STAT)))
+			break;
+		hiedmacv310_writel(val, hiedmac->base + hiedmac_cx_config(phychan->id));
+		udelay(1);
+	}
+
+	if (timeout == 0) {
+		hiedmacv310_error(":channel%u timeout waiting for pause, timeout:%d",
+				  phychan->id, timeout);
+	}
+}
+
+static int hiedmac_pause(struct dma_chan *chan)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
+	if (!edmac_dma_chan->phychan) {
+		spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+		return 0;
+	}
+	hiedmac_pause_phy_chan(edmac_dma_chan);
+	edmac_dma_chan->state = HIEDMAC_CHAN_PAUSED;
+	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+	return 0;
+}
+
+static void hiedmac_resume_phy_chan(const struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
+	unsigned int val;
+
+	val = hiedmacv310_readl(hiedmac->base + hiedmac_cx_config(phychan->id));
+	val |= CCFG_EN;
+	hiedmacv310_writel(val, hiedmac->base + hiedmac_cx_config(phychan->id));
+}
+
+static int hiedmac_resume(struct dma_chan *chan)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
+
+	if (!edmac_dma_chan->phychan) {
+		spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+		return 0;
+	}
+
+	hiedmac_resume_phy_chan(edmac_dma_chan);
+	edmac_dma_chan->state = HIEDMAC_CHAN_RUNNING;
+	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+
+	return 0;
+}
+
+static void hiedmac_phy_free(struct hiedmacv310_dma_chan *chan);
+static void hiedmac_desc_free(struct virt_dma_desc *vd);
+static int hiedmac_terminate_all(struct dma_chan *chan)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
+	if (!edmac_dma_chan->phychan && !edmac_dma_chan->at) {
+		spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+		return 0;
+	}
+
+	edmac_dma_chan->state = HIEDMAC_CHAN_IDLE;
+
+	if (edmac_dma_chan->phychan)
+		hiedmac_phy_free(edmac_dma_chan);
+	if (edmac_dma_chan->at) {
+		hiedmac_desc_free(&edmac_dma_chan->at->virt_desc);
+		edmac_dma_chan->at = NULL;
+	}
+	hiedmac_free_txd_list(edmac_dma_chan);
+	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
+
+	return 0;
+}
+
+static u32 get_width(enum dma_slave_buswidth width)
+{
+	switch (width) {
+	case DMA_SLAVE_BUSWIDTH_1_BYTE:
+		return HIEDMAC_WIDTH_8BIT;
+	case DMA_SLAVE_BUSWIDTH_2_BYTES:
+		return HIEDMAC_WIDTH_16BIT;
+	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+		return HIEDMAC_WIDTH_32BIT;
+	case DMA_SLAVE_BUSWIDTH_8_BYTES:
+		return HIEDMAC_WIDTH_64BIT;
+	default:
+		hiedmacv310_error("check here, width warning!");
+		return ~0;
+	}
+}
+
+static unsigned int hiedmac_set_config_value(enum dma_transfer_direction direction,
+					     unsigned int addr_width,
+					     unsigned int burst,
+					     int signal)
+{
+	unsigned int config, width;
+
+	if (direction == DMA_MEM_TO_DEV)
+		config = HIEDMAC_CONFIG_SRC_INC;
+	else
+		config = HIEDMAC_CONFIG_DST_INC;
+
+	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "addr_width = 0x%x", addr_width);
+	width = get_width(addr_width);
+	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "width = 0x%x", width);
+	config |= width << HIEDMAC_CONFIG_SRC_WIDTH_SHIFT;
+	config |= width << HIEDMAC_CONFIG_DST_WIDTH_SHIFT;
+	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, "tsf_desc->ccfg = 0x%x", config);
+	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "burst = 0x%x", burst);
+	config |= burst << HIEDMAC_CONFIG_SRC_BURST_SHIFT;
+	config |= burst << HIEDMAC_CONFIG_DST_BURST_SHIFT;
+	if (signal >= 0) {
+		hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL,
+				  "edmac_dma_chan->signal = %d", signal);
+		config |= (unsigned int)signal << HIEDMAC_CXCONFIG_SIGNAL_SHIFT;
+	}
+	config |= HIEDMAC_CXCONFIG_DEV_MEM_TYPE << HIEDMAC_CXCONFIG_TSF_TYPE_SHIFT;
+	return config;
+}
+
+static struct transfer_desc *hiedmac_init_tsf_desc(struct dma_chan *chan,
+	enum dma_transfer_direction direction,
+	dma_addr_t *slave_addr)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	struct transfer_desc *tsf_desc;
+	unsigned int burst = 0;
+	unsigned int addr_width = 0;
+	unsigned int maxburst = 0;
+
+	tsf_desc = kzalloc(sizeof(*tsf_desc), GFP_NOWAIT);
+	if (!tsf_desc)
+		return NULL;
+	if (direction == DMA_MEM_TO_DEV) {
+		*slave_addr = edmac_dma_chan->cfg.dst_addr;
+		addr_width = edmac_dma_chan->cfg.dst_addr_width;
+		maxburst = edmac_dma_chan->cfg.dst_maxburst;
+	} else if (direction == DMA_DEV_TO_MEM) {
+		*slave_addr = edmac_dma_chan->cfg.src_addr;
+		addr_width = edmac_dma_chan->cfg.src_addr_width;
+		maxburst = edmac_dma_chan->cfg.src_maxburst;
+	} else {
+		kfree(tsf_desc);
+		hiedmacv310_error("direction unsupported!");
+		return NULL;
+	}
+
+	if (maxburst > (HIEDMAC_MAX_BURST_WIDTH))
+		burst |= (HIEDMAC_MAX_BURST_WIDTH - 1);
+	else if (maxburst == 0)
+		burst |= HIEDMAC_MIN_BURST_WIDTH;
+	else
+		burst |= (maxburst - 1);
+
+	tsf_desc->ccfg = hiedmac_set_config_value(direction, addr_width,
+				 burst, edmac_dma_chan->signal);
+	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, "tsf_desc->ccfg = 0x%x", tsf_desc->ccfg);
+	return tsf_desc;
+}
+
+static int hiedmac_fill_desc(const struct hiedmac_sg *dsg,
+			     struct transfer_desc *tsf_desc,
+			     unsigned int length, unsigned int num)
+{
+	struct hiedmac_lli *plli = NULL;
+
+	if (num >= MAX_TSFR_LLIS) {
+		hiedmacv310_error("lli out of range.");
+		return -ENOMEM;
+	}
+
+	plli = (struct hiedmac_lli *)(tsf_desc->llis_vaddr);
+	memset(&plli[num], 0x0, sizeof(*plli));
+
+	plli[num].src_addr = dsg->src_addr;
+	plli[num].dest_addr = dsg->dst_addr;
+	plli[num].config = tsf_desc->ccfg;
+	plli[num].count = length;
+	tsf_desc->size += length;
+
+	if (num > 0) {
+		plli[num - 1].next_lli = (tsf_desc->llis_busaddr + (num) * sizeof(
+					  *plli)) & (~(HIEDMAC_LLI_ALIGN - 1));
+		plli[num - 1].next_lli |= HIEDMAC_LLI_ENABLE;
+	}
+	return 0;
+}
+
+static void free_dsg(struct list_head *dsg_head)
+{
+	struct hiedmac_sg *dsg = NULL;
+	struct hiedmac_sg *_dsg = NULL;
+
+	list_for_each_entry_safe(dsg, _dsg, dsg_head, node) {
+		list_del(&dsg->node);
+		kfree(dsg);
+	}
+}
+
+static int hiedmac_add_sg(struct list_head *sg_head,
+			  dma_addr_t dst, dma_addr_t src,
+			  size_t len)
+{
+	struct hiedmac_sg *dsg = NULL;
+
+	if (len == 0) {
+		hiedmacv310_error("Transfer length is 0.");
+		return -ENOMEM;
+	}
+
+	dsg = kzalloc(sizeof(*dsg), GFP_NOWAIT);
+	if (!dsg) {
+		free_dsg(sg_head);
+		hiedmacv310_error("alloc memory for dsg fail.");
+		return -ENOMEM;
+	}
+
+	list_add_tail(&dsg->node, sg_head);
+	dsg->src_addr = src;
+	dsg->dst_addr = dst;
+	dsg->len = len;
+	return 0;
+}
+
+static int hiedmac_add_sg_slave(struct list_head *sg_head,
+				dma_addr_t slave_addr, dma_addr_t addr,
+				size_t length,
+				enum dma_transfer_direction direction)
+{
+	dma_addr_t src = 0;
+	dma_addr_t dst = 0;
+
+	if (direction == DMA_MEM_TO_DEV) {
+		src = addr;
+		dst = slave_addr;
+	} else if (direction == DMA_DEV_TO_MEM) {
+		src = slave_addr;
+		dst = addr;
+	} else {
+		hiedmacv310_error("invali dma_transfer_direction.");
+		return -ENOMEM;
+	}
+	return hiedmac_add_sg(sg_head, dst, src, length);
+}
+
+static int hiedmac_fill_sg_for_slave(struct list_head *sg_head,
+				     dma_addr_t slave_addr,
+				     struct scatterlist *sgl,
+				     unsigned int sg_len,
+				     enum dma_transfer_direction direction)
+{
+	struct scatterlist *sg = NULL;
+	int tmp, ret = 0;
+	size_t length;
+	dma_addr_t addr;
+
+	if (sgl == NULL) {
+		hiedmacv310_error("sgl is null!");
+		return -ENOMEM;
+	}
+
+	for_each_sg(sgl, sg, sg_len, tmp) {
+		addr = sg_dma_address(sg);
+		length = sg_dma_len(sg);
+		ret = hiedmac_add_sg_slave(sg_head, slave_addr, addr, length, direction);
+		if (ret)
+			break;
+	}
+	return ret;
+}
+
+static inline int hiedmac_fill_sg_for_memcpy(struct list_head *sg_head,
+					     dma_addr_t dst, dma_addr_t src,
+					     size_t len)
+{
+	return hiedmac_add_sg(sg_head, dst, src, len);
+}
+
+static int hiedmac_fill_sg_for_cyclic(struct list_head *sg_head,
+				      dma_addr_t slave_addr,
+				      dma_addr_t buf_addr, size_t buf_len,
+				      size_t period_len,
+				      enum dma_transfer_direction direction)
+{
+	size_t count_in_sg = 0;
+	size_t trans_bytes;
+	int ret;
+
+	while (count_in_sg < buf_len) {
+		trans_bytes = min(period_len, buf_len - count_in_sg);
+		ret = hiedmac_add_sg_slave(sg_head, slave_addr,
+					   buf_addr + count_in_sg,
+					   trans_bytes, direction);
+		if (ret)
+			return ret;
+		count_in_sg += trans_bytes;
+	}
+	return 0;
+}
+
+static inline unsigned short get_max_width(dma_addr_t ccfg)
+{
+	unsigned short src_width = (ccfg & HIEDMAC_CONTROL_SRC_WIDTH_MASK) >>
+				    HIEDMAC_CONFIG_SRC_WIDTH_SHIFT;
+	unsigned short dst_width = (ccfg & HIEDMAC_CONTROL_DST_WIDTH_MASK) >>
+				    HIEDMAC_CONFIG_DST_WIDTH_SHIFT;
+
+	return 1 << max(src_width, dst_width); /* to byte */
+}
+
+static int hiedmac_fill_asg_lli_for_desc(struct hiedmac_sg *dsg,
+					 struct transfer_desc *tsf_desc,
+					 unsigned int *lli_count)
+{
+	int ret;
+	unsigned short width = get_max_width(tsf_desc->ccfg);
+
+	while (dsg->len != 0) {
+		size_t lli_len = MAX_TRANSFER_BYTES;
+
+		lli_len = (lli_len / width) * width; /* bus width align */
+		lli_len = min(lli_len, dsg->len);
+		ret = hiedmac_fill_desc(dsg, tsf_desc, lli_len, *lli_count);
+		if (ret)
+			return ret;
+
+		if (tsf_desc->ccfg & HIEDMAC_CONFIG_SRC_INC)
+			dsg->src_addr += lli_len;
+		if (tsf_desc->ccfg & HIEDMAC_CONFIG_DST_INC)
+			dsg->dst_addr += lli_len;
+		dsg->len -= lli_len;
+		(*lli_count)++;
+	}
+	return 0;
+}
+
+static int hiedmac_fill_lli_for_desc(struct list_head *sg_head,
+				     struct transfer_desc *tsf_desc)
+{
+	struct hiedmac_sg *dsg = NULL;
+	struct hiedmac_lli *last_plli = NULL;
+	unsigned int lli_count = 0;
+	int ret;
+
+	list_for_each_entry(dsg, sg_head, node) {
+		ret = hiedmac_fill_asg_lli_for_desc(dsg, tsf_desc, &lli_count);
+		if (ret)
+			return ret;
+	}
+
+	last_plli = (struct hiedmac_lli *)((uintptr_t)tsf_desc->llis_vaddr +
+					   (lli_count - 1) * sizeof(*last_plli));
+	if (tsf_desc->cyclic)
+		last_plli->next_lli = tsf_desc->llis_busaddr | HIEDMAC_LLI_ENABLE;
+	else
+		last_plli->next_lli = 0;
+	dump_lli(tsf_desc->llis_vaddr, lli_count);
+	return 0;
+}
+
+static struct dma_async_tx_descriptor *hiedmac_prep_slave_sg(
+	struct dma_chan *chan, struct scatterlist *sgl,
+	unsigned int sg_len, enum dma_transfer_direction direction,
+	unsigned long flags, void *context)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct transfer_desc *tsf_desc = NULL;
+	dma_addr_t slave_addr = 0;
+	int ret;
+	LIST_HEAD(sg_head);
+
+	if (sgl == NULL) {
+		hiedmacv310_error("sgl is null!");
+		return NULL;
+	}
+
+	tsf_desc = hiedmac_init_tsf_desc(chan, direction, &slave_addr);
+	if (!tsf_desc)
+		return NULL;
+
+	tsf_desc->llis_vaddr = dma_pool_alloc(hiedmac->pool, GFP_NOWAIT,
+					      &tsf_desc->llis_busaddr);
+	if (!tsf_desc->llis_vaddr) {
+		hiedmacv310_error("malloc memory from pool fail !");
+		goto err_alloc_lli;
+	}
+
+	ret = hiedmac_fill_sg_for_slave(&sg_head, slave_addr, sgl, sg_len, direction);
+	if (ret)
+		goto err_fill_sg;
+	ret = hiedmac_fill_lli_for_desc(&sg_head, tsf_desc);
+	free_dsg(&sg_head);
+	if (ret)
+		goto err_fill_sg;
+	return vchan_tx_prep(&edmac_dma_chan->virt_chan, &tsf_desc->virt_desc, flags);
+
+err_fill_sg:
+	dma_pool_free(hiedmac->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
+err_alloc_lli:
+	kfree(tsf_desc);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *hiedmac_prep_dma_memcpy(
+	struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
+	size_t len, unsigned long flags)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct transfer_desc *tsf_desc = NULL;
+	LIST_HEAD(sg_head);
+	u32 config = 0;
+	int ret;
+
+	if (!len)
+		return NULL;
+
+	tsf_desc = kzalloc(sizeof(*tsf_desc), GFP_NOWAIT);
+	if (tsf_desc == NULL) {
+		hiedmacv310_error("get tsf desc fail!");
+		return NULL;
+	}
+
+	tsf_desc->llis_vaddr = dma_pool_alloc(hiedmac->pool, GFP_NOWAIT,
+					      &tsf_desc->llis_busaddr);
+	if (!tsf_desc->llis_vaddr) {
+		hiedmacv310_error("malloc memory from pool fail !");
+		goto err_alloc_lli;
+	}
+
+	config |= HIEDMAC_CONFIG_SRC_INC | HIEDMAC_CONFIG_DST_INC;
+	config |= HIEDMAC_CXCONFIG_MEM_TYPE << HIEDMAC_CXCONFIG_TSF_TYPE_SHIFT;
+	/* max burst width is 16, but the register field takes burst - 1 (0xf) */
+	config |= (HIEDMAC_MAX_BURST_WIDTH - 1) << HIEDMAC_CONFIG_SRC_BURST_SHIFT;
+	config |= (HIEDMAC_MAX_BURST_WIDTH - 1) << HIEDMAC_CONFIG_DST_BURST_SHIFT;
+	config |= HIEDMAC_MEM_BIT_WIDTH << HIEDMAC_CONFIG_SRC_WIDTH_SHIFT;
+	config |= HIEDMAC_MEM_BIT_WIDTH << HIEDMAC_CONFIG_DST_WIDTH_SHIFT;
+	tsf_desc->ccfg = config;
+	ret = hiedmac_fill_sg_for_memcpy(&sg_head, dst, src, len);
+	if (ret)
+		goto err_fill_sg;
+	ret = hiedmac_fill_lli_for_desc(&sg_head, tsf_desc);
+	free_dsg(&sg_head);
+	if (ret)
+		goto err_fill_sg;
+	return vchan_tx_prep(&edmac_dma_chan->virt_chan, &tsf_desc->virt_desc, flags);
+
+err_fill_sg:
+	dma_pool_free(hiedmac->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
+err_alloc_lli:
+	kfree(tsf_desc);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *hiedmac_prep_dma_cyclic(
+	struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+	size_t period_len, enum dma_transfer_direction direction,
+	unsigned long flags)
+{
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
+	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
+	struct transfer_desc *tsf_desc = NULL;
+	dma_addr_t slave_addr = 0;
+	LIST_HEAD(sg_head);
+	int ret;
+
+	tsf_desc = hiedmac_init_tsf_desc(chan, direction, &slave_addr);
+	if (!tsf_desc)
+		return NULL;
+
+	tsf_desc->llis_vaddr = dma_pool_alloc(hiedmac->pool, GFP_NOWAIT,
+			&tsf_desc->llis_busaddr);
+	if (!tsf_desc->llis_vaddr) {
+		hiedmacv310_error("malloc memory from pool fail !");
+		goto err_alloc_lli;
+	}
+
+	tsf_desc->cyclic = true;
+	ret = hiedmac_fill_sg_for_cyclic(&sg_head, slave_addr, buf_addr,
+					 buf_len, period_len, direction);
+	if (ret)
+		goto err_fill_sg;
+	ret = hiedmac_fill_lli_for_desc(&sg_head, tsf_desc);
+	free_dsg(&sg_head);
+	if (ret)
+		goto err_fill_sg;
+	return vchan_tx_prep(&edmac_dma_chan->virt_chan, &tsf_desc->virt_desc, flags);
+
+err_fill_sg:
+	dma_pool_free(hiedmac->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
+err_alloc_lli:
+	kfree(tsf_desc);
+	return NULL;
+}
+
+static void hiedmac_phy_reassign(struct hiedmacv310_phy_chan *phy_chan,
+				 struct hiedmacv310_dma_chan *chan)
+{
+	phy_chan->serving = chan;
+	chan->phychan = phy_chan;
+	chan->state = HIEDMAC_CHAN_RUNNING;
+
+	hiedmac_start_next_txd(chan);
+}
+
+static void hiedmac_terminate_phy_chan(struct hiedmacv310_driver_data *hiedmac,
+				       const struct hiedmacv310_dma_chan *edmac_dma_chan)
+{
+	unsigned int val;
+	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
+
+	hiedmac_pause_phy_chan(edmac_dma_chan);
+	val = 0x1 << phychan->id;
+	hiedmacv310_writel(val, hiedmac->base + HIEDMAC_INT_TC1_RAW);
+	hiedmacv310_writel(val, hiedmac->base + HIEDMAC_INT_ERR1_RAW);
+	hiedmacv310_writel(val, hiedmac->base + HIEDMAC_INT_ERR2_RAW);
+}
+
+static void hiedmac_phy_free(struct hiedmacv310_dma_chan *chan)
+{
+	struct hiedmacv310_driver_data *hiedmac = chan->host;
+	struct hiedmacv310_dma_chan *p = NULL;
+	struct hiedmacv310_dma_chan *next = NULL;
+
+	list_for_each_entry(p, &hiedmac->memcpy.channels, virt_chan.chan.device_node) {
+		if (p->state == HIEDMAC_CHAN_WAITING) {
+			next = p;
+			break;
+		}
+	}
+
+	if (!next) {
+		list_for_each_entry(p, &hiedmac->slave.channels, virt_chan.chan.device_node) {
+			if (p->state == HIEDMAC_CHAN_WAITING) {
+				next = p;
+				break;
+			}
+		}
+	}
+	hiedmac_terminate_phy_chan(hiedmac, chan);
+
+	if (next) {
+		spin_lock(&next->virt_chan.lock);
+		hiedmac_phy_reassign(chan->phychan, next);
+		spin_unlock(&next->virt_chan.lock);
+	} else {
+		chan->phychan->serving = NULL;
+	}
+
+	chan->phychan = NULL;
+	chan->state = HIEDMAC_CHAN_IDLE;
+}
+
+static bool handle_irq(struct hiedmacv310_driver_data *hiedmac, int chan_id)
+{
+	struct hiedmacv310_dma_chan *chan = NULL;
+	struct hiedmacv310_phy_chan *phy_chan = NULL;
+	struct transfer_desc *tsf_desc = NULL;
+	unsigned int channel_tc_status;
+
+	phy_chan = &hiedmac->phy_chans[chan_id];
+	chan = phy_chan->serving;
+	if (!chan) {
+		hiedmacv310_error("error interrupt on chan: %d!", chan_id);
+		return 0;
+	}
+	tsf_desc = chan->at;
+
+	channel_tc_status = hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_TC1_RAW);
+	channel_tc_status = (channel_tc_status >> chan_id) & 0x01;
+	if (channel_tc_status)
+		hiedmacv310_writel(channel_tc_status << chan_id,
+				   hiedmac->base + HIEDMAC_INT_TC1_RAW);
+
+	channel_tc_status = hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_TC2);
+	channel_tc_status = (channel_tc_status >> chan_id) & 0x01;
+	if (channel_tc_status)
+		hiedmacv310_writel(channel_tc_status << chan_id,
+				   hiedmac->base + HIEDMAC_INT_TC2_RAW);
+
+	if ((hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_ERR1) |
+	    hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_ERR2) |
+	    hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_ERR3)) &
+	    (1 << chan_id)) {
+		hiedmacv310_writel(1 << chan_id, hiedmac->base + HIEDMAC_INT_ERR1_RAW);
+		hiedmacv310_writel(1 << chan_id, hiedmac->base + HIEDMAC_INT_ERR2_RAW);
+		hiedmacv310_writel(1 << chan_id, hiedmac->base + HIEDMAC_INT_ERR3_RAW);
+	}
+
+	spin_lock(&chan->virt_chan.lock);
+
+	if (tsf_desc->cyclic) {
+		vchan_cyclic_callback(&tsf_desc->virt_desc);
+		spin_unlock(&chan->virt_chan.lock);
+		return 1;
+	}
+	chan->at = NULL;
+	tsf_desc->done = true;
+	vchan_cookie_complete(&tsf_desc->virt_desc);
+
+	if (vchan_next_desc(&chan->virt_chan))
+		hiedmac_start_next_txd(chan);
+	else
+		hiedmac_phy_free(chan);
+	spin_unlock(&chan->virt_chan.lock);
+	return 1;
+}
+
+static irqreturn_t hiedmacv310_irq(int irq, void *dev)
+{
+	struct hiedmacv310_driver_data *hiedmac = (struct hiedmacv310_driver_data *)dev;
+	u32 mask = 0;
+	unsigned int channel_status, temp, i;
+
+	channel_status = hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_STAT);
+	if (!channel_status) {
+		hiedmacv310_error("channel_status = 0x%x", channel_status);
+		return IRQ_NONE;
+	}
+
+	for (i = 0; i < hiedmac->channels; i++) {
+		temp = (channel_status >> i) & 0x1;
+		if (temp)
+			mask |= handle_irq(hiedmac, i) << i;
+	}
+	return mask ? IRQ_HANDLED : IRQ_NONE;
+}
+
+static inline void hiedmac_dma_slave_init(struct hiedmacv310_dma_chan *chan)
+{
+	chan->slave = true;
+}
+
+static void hiedmac_desc_free(struct virt_dma_desc *vd)
+{
+	struct transfer_desc *tsf_desc = to_edmac_transfer_desc(&vd->tx);
+	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(vd->tx.chan);
+
+	dma_descriptor_unmap(&vd->tx);
+	dma_pool_free(edmac_dma_chan->host->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
+	kfree(tsf_desc);
+}
+
+static int hiedmac_init_virt_channels(struct hiedmacv310_driver_data *hiedmac,
+				      struct dma_device *dmadev,
+				      unsigned int channels, bool slave)
+{
+	struct hiedmacv310_dma_chan *chan = NULL;
+	int i;
+
+	INIT_LIST_HEAD(&dmadev->channels);
+	for (i = 0; i < channels; i++) {
+		chan = kzalloc(sizeof(struct hiedmacv310_dma_chan), GFP_KERNEL);
+		if (!chan) {
+			hiedmacv310_error("fail to allocate memory for virt channels!");
+			return -1;
+		}
+
+		chan->host = hiedmac;
+		chan->state = HIEDMAC_CHAN_IDLE;
+		chan->signal = -1;
+
+		if (slave) {
+			chan->id = i;
+			hiedmac_dma_slave_init(chan);
+		}
+		chan->virt_chan.desc_free = hiedmac_desc_free;
+		vchan_init(&chan->virt_chan, dmadev);
+	}
+	return 0;
+}
+
+static void hiedmac_free_virt_channels(struct dma_device *dmadev)
+{
+	struct hiedmacv310_dma_chan *chan = NULL;
+	struct hiedmacv310_dma_chan *next = NULL;
+
+	list_for_each_entry_safe(chan, next, &dmadev->channels, virt_chan.chan.device_node) {
+		list_del(&chan->virt_chan.chan.device_node);
+		kfree(chan);
+	}
+}
+
+static void hiedmacv310_prep_dma_device(struct platform_device *pdev,
+					struct hiedmacv310_driver_data *hiedmac)
+{
+	dma_cap_set(DMA_MEMCPY, hiedmac->memcpy.cap_mask);
+	hiedmac->memcpy.dev = &pdev->dev;
+	hiedmac->memcpy.device_free_chan_resources = hiedmac_free_chan_resources;
+	hiedmac->memcpy.device_prep_dma_memcpy = hiedmac_prep_dma_memcpy;
+	hiedmac->memcpy.device_tx_status = hiedmac_tx_status;
+	hiedmac->memcpy.device_issue_pending = hiedmac_issue_pending;
+	hiedmac->memcpy.device_config = hiedmac_config;
+	hiedmac->memcpy.device_pause = hiedmac_pause;
+	hiedmac->memcpy.device_resume = hiedmac_resume;
+	hiedmac->memcpy.device_terminate_all = hiedmac_terminate_all;
+	hiedmac->memcpy.directions = BIT(DMA_MEM_TO_MEM);
+	hiedmac->memcpy.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+
+	dma_cap_set(DMA_SLAVE, hiedmac->slave.cap_mask);
+	dma_cap_set(DMA_CYCLIC, hiedmac->slave.cap_mask);
+	hiedmac->slave.dev = &pdev->dev;
+	hiedmac->slave.device_free_chan_resources = hiedmac_free_chan_resources;
+	hiedmac->slave.device_tx_status = hiedmac_tx_status;
+	hiedmac->slave.device_issue_pending = hiedmac_issue_pending;
+	hiedmac->slave.device_prep_slave_sg = hiedmac_prep_slave_sg;
+	hiedmac->slave.device_prep_dma_cyclic = hiedmac_prep_dma_cyclic;
+	hiedmac->slave.device_config = hiedmac_config;
+	hiedmac->slave.device_resume = hiedmac_resume;
+	hiedmac->slave.device_pause = hiedmac_pause;
+	hiedmac->slave.device_terminate_all = hiedmac_terminate_all;
+	hiedmac->slave.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	hiedmac->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+}
+
+static int hiedmacv310_init_chan(struct hiedmacv310_driver_data *hiedmac)
+{
+	int i, ret;
+
+	hiedmac->phy_chans = kcalloc(hiedmac->channels,
+				     sizeof(struct hiedmacv310_phy_chan),
+				     GFP_KERNEL);
+	if (!hiedmac->phy_chans) {
+		hiedmacv310_error("failed to allocate memory for phy chans!");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < hiedmac->channels; i++) {
+		struct hiedmacv310_phy_chan *phy_ch = &hiedmac->phy_chans[i];
+
+		phy_ch->id = i;
+		phy_ch->base = hiedmac->base + hiedmac_cx_base(i);
+		spin_lock_init(&phy_ch->lock);
+		phy_ch->serving = NULL;
+	}
+
+	ret = hiedmac_init_virt_channels(hiedmac, &hiedmac->memcpy, hiedmac->channels,
+					 false);
+	if (ret) {
+		hiedmacv310_error("fail to init memory virt channels!");
+		goto free_phychans;
+	}
+
+	ret = hiedmac_init_virt_channels(hiedmac, &hiedmac->slave, hiedmac->slave_requests,
+					 true);
+	if (ret) {
+		hiedmacv310_error("fail to init slave virt channels!");
+		goto free_memory_virt_channels;
+	}
+	return 0;
+
+free_memory_virt_channels:
+	hiedmac_free_virt_channels(&hiedmac->memcpy);
+free_phychans:
+	kfree(hiedmac->phy_chans);
+	return -ENOMEM;
+}
+
+static void hiedmacv310_free_chan(struct hiedmacv310_driver_data *hiedmac)
+{
+	hiedmac_free_virt_channels(&hiedmac->slave);
+	hiedmac_free_virt_channels(&hiedmac->memcpy);
+	kfree(hiedmac->phy_chans);
+}
+
+static void hiedmacv310_prep_phy_device(const struct hiedmacv310_driver_data *hiedmac)
+{
+	clk_prepare_enable(hiedmac->clk);
+	clk_prepare_enable(hiedmac->axi_clk);
+	reset_control_deassert(hiedmac->rstc);
+
+	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_TC1_RAW);
+	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_TC2_RAW);
+	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_ERR1_RAW);
+	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_ERR2_RAW);
+	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_ERR3_RAW);
+	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
+			   hiedmac->base + HIEDMAC_INT_TC1_MASK);
+	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
+			   hiedmac->base + HIEDMAC_INT_TC2_MASK);
+	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
+			   hiedmac->base + HIEDMAC_INT_ERR1_MASK);
+	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
+			   hiedmac->base + HIEDMAC_INT_ERR2_MASK);
+	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
+			   hiedmac->base + HIEDMAC_INT_ERR3_MASK);
+}
+
+static struct hiedmacv310_driver_data *hiedmacv310_prep_hiedmac_device(struct platform_device *pdev)
+{
+	int ret;
+	struct hiedmacv310_driver_data *hiedmac = NULL;
+	ssize_t transfer_size;
+
+	ret = dma_set_mask_and_coherent(&(pdev->dev), DMA_BIT_MASK(64));
+	if (ret)
+		return NULL;
+
+	hiedmac = kzalloc(sizeof(*hiedmac), GFP_KERNEL);
+	if (!hiedmac) {
+		hiedmacv310_error("malloc for hiedmac fail!");
+		return NULL;
+	}
+
+	hiedmac->dev = pdev;
+
+	ret = get_of_probe(hiedmac);
+	if (ret) {
+		hiedmacv310_error("get dts info fail!");
+		goto free_hiedmac;
+	}
+
+	hiedmacv310_prep_dma_device(pdev, hiedmac);
+	hiedmac->max_transfer_size = MAX_TRANSFER_BYTES;
+	transfer_size = MAX_TSFR_LLIS * EDMACV300_LLI_WORDS * sizeof(u32);
+
+	hiedmac->pool = dma_pool_create(DRIVER_NAME, &(pdev->dev),
+					transfer_size, EDMACV300_POOL_ALIGN, 0);
+	if (!hiedmac->pool) {
+		hiedmacv310_error("create pool fail!");
+		goto free_hiedmac;
+	}
+
+	ret = hiedmacv310_init_chan(hiedmac);
+	if (ret)
+		goto free_pool;
+
+	return hiedmac;
+
+free_pool:
+	dma_pool_destroy(hiedmac->pool);
+free_hiedmac:
+	kfree(hiedmac);
+	return NULL;
+}
+
+static void free_hiedmac_device(struct hiedmacv310_driver_data *hiedmac)
+{
+	hiedmacv310_free_chan(hiedmac);
+	dma_pool_destroy(hiedmac->pool);
+	kfree(hiedmac);
+}
+
+static int __init hiedmacv310_probe(struct platform_device *pdev)
+{
+	int ret;
+	struct hiedmacv310_driver_data *hiedmac = NULL;
+
+	hiedmac = hiedmacv310_prep_hiedmac_device(pdev);
+	if (hiedmac == NULL)
+		return -ENOMEM;
+
+	ret = request_irq(hiedmac->irq, hiedmacv310_irq, 0, DRIVER_NAME, hiedmac);
+	if (ret) {
+		hiedmacv310_error("fail to request irq");
+		goto free_hiedmac;
+	}
+	hiedmacv310_prep_phy_device(hiedmac);
+	ret = dma_async_device_register(&hiedmac->memcpy);
+	if (ret) {
+		hiedmacv310_error("%s failed to register memcpy as an async device - %d",
+				  __func__, ret);
+		goto free_irq_res;
+	}
+
+	ret = dma_async_device_register(&hiedmac->slave);
+	if (ret) {
+		hiedmacv310_error("%s failed to register slave as an async device - %d",
+				  __func__, ret);
+		goto free_memcpy_device;
+	}
+	return 0;
+
+free_memcpy_device:
+	dma_async_device_unregister(&hiedmac->memcpy);
+free_irq_res:
+	free_irq(hiedmac->irq, hiedmac);
+free_hiedmac:
+	free_hiedmac_device(hiedmac);
+	return ret;
+}
+
+static int hiedmac_remove(struct platform_device *pdev)
+{
+	return 0;
+}
+
+static const struct of_device_id hiedmacv310_match[] = {
+	{ .compatible = "hisilicon,hiedmacv310" },
+	{},
+};
+
+static struct platform_driver hiedmacv310_driver = {
+	.remove = hiedmac_remove,
+	.driver = {
+		.name = "hiedmacv310",
+		.of_match_table = hiedmacv310_match,
+	},
+};
+
+static int __init hiedmacv310_init(void)
+{
+	return platform_driver_probe(&hiedmacv310_driver, hiedmacv310_probe);
+}
+subsys_initcall(hiedmacv310_init);
+
+static void __exit hiedmacv310_exit(void)
+{
+	platform_driver_unregister(&hiedmacv310_driver);
+}
+module_exit(hiedmacv310_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Hisilicon");
diff --git a/drivers/dma/hiedmacv310.h b/drivers/dma/hiedmacv310.h
new file mode 100644
index 000000000000..99e1720263e3
--- /dev/null
+++ b/drivers/dma/hiedmacv310.h
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * The Hiedma Controller v310 Device Driver for HiSilicon
+ *
+ * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
+ *
+ * Author: Dongjiu Geng <gengdongjiu@huawei.com>
+ */
+#ifndef __HIEDMACV310_H__
+#define __HIEDMACV310_H__
+
+/* debug control */
+#define HIEDMACV310_CONFIG_TRACE_LEVEL 3
+#define HIEDMACV310_TRACE_LEVEL 0
+#define HIEDMACV310_REG_TRACE_LEVEL 3
+
+#ifdef DEBUG_HIEDMAC
+#define hiedmacv310_trace(level, msg, a...) do { \
+	if ((level) >= HIEDMACV310_TRACE_LEVEL) { \
+		pr_info("%s: %d: " msg,  __func__, __LINE__, ## a); \
+	} \
+} while (0)
+
+#define hiedmacv310_assert(cond) do { \
+	if (!(cond)) { \
+		pr_info("Assert:hiedmacv310:%s:%d\n", \
+				__func__, \
+				__LINE__); \
+		__WARN(); \
+	} \
+} while (0)
+
+#else
+
+#define hiedmacv310_trace(level, msg, a...)
+#define hiedmacv310_assert(cond)
+
+#endif
+
+/* error/info prints are useful even in non-debug builds */
+#define hiedmacv310_error(s, a...) \
+	pr_err("hiedmacv310:%s:%d: " s, __func__, __LINE__, ## a)
+
+#define hiedmacv310_info(s, a...) \
+	pr_info("hiedmacv310:%s:%d: " s, __func__, __LINE__, ## a)
+
+
+#define hiedmacv310_readl(addr) ((unsigned int)readl((void *)(addr)))
+
+#define hiedmacv310_writel(v, addr) writel(v, (void *)(addr))
+
+
+#define MAX_TRANSFER_BYTES  0xffff
+
+/* reg offset */
+#define HIEDMAC_INT_STAT                  0x0
+#define HIEDMAC_INT_TC1                   0x4
+#define HIEDMAC_INT_TC2                   0x8
+#define HIEDMAC_INT_ERR1                  0xc
+#define HIEDMAC_INT_ERR2                  0x10
+#define HIEDMAC_INT_ERR3                  0x14
+
+#define HIEDMAC_INT_TC1_MASK              0x18
+#define HIEDMAC_INT_TC2_MASK              0x1c
+#define HIEDMAC_INT_ERR1_MASK             0x20
+#define HIEDMAC_INT_ERR2_MASK             0x24
+#define HIEDMAC_INT_ERR3_MASK             0x28
+
+#define HIEDMAC_INT_TC1_RAW               0x600
+#define HIEDMAC_INT_TC2_RAW               0x608
+#define HIEDMAC_INT_ERR1_RAW              0x610
+#define HIEDMAC_INT_ERR2_RAW              0x618
+#define HIEDMAC_INT_ERR3_RAW              0x620
+
+#define hiedmac_cx_curr_cnt0(cn)          (0x404 + (cn) * 0x20)
+#define hiedmac_cx_curr_src_addr_l(cn)    (0x408 + (cn) * 0x20)
+#define hiedmac_cx_curr_src_addr_h(cn)    (0x40c + (cn) * 0x20)
+#define hiedmac_cx_curr_dest_addr_l(cn)    (0x410 + (cn) * 0x20)
+#define hiedmac_cx_curr_dest_addr_h(cn)    (0x414 + (cn) * 0x20)
+
+#define HIEDMAC_CH_PRI                    0x688
+#define HIEDMAC_CH_STAT                   0x690
+#define HIEDMAC_DMA_CTRL                  0x698
+
+#define hiedmac_cx_base(cn)               (0x800 + (cn) * 0x40)
+#define hiedmac_cx_lli_l(cn)              (0x800 + (cn) * 0x40)
+#define hiedmac_cx_lli_h(cn)              (0x804 + (cn) * 0x40)
+#define hiedmac_cx_cnt0(cn)               (0x81c + (cn) * 0x40)
+#define hiedmac_cx_src_addr_l(cn)         (0x820 + (cn) * 0x40)
+#define hiedmac_cx_src_addr_h(cn)         (0x824 + (cn) * 0x40)
+#define hiedmac_cx_dest_addr_l(cn)        (0x828 + (cn) * 0x40)
+#define hiedmac_cx_dest_addr_h(cn)        (0x82c + (cn) * 0x40)
+#define hiedmac_cx_config(cn)             (0x830 + (cn) * 0x40)
+
+#define HIEDMAC_ALL_CHAN_CLR        0xff
+#define HIEDMAC_INT_ENABLE_ALL_CHAN 0xff
+
+
+#define HIEDMAC_CONFIG_SRC_INC          (1U << 31)
+#define HIEDMAC_CONFIG_DST_INC          (1U << 30)
+
+#define HIEDMAC_CONFIG_SRC_WIDTH_SHIFT  16
+#define HIEDMAC_CONFIG_DST_WIDTH_SHIFT  12
+#define HIEDMAC_WIDTH_8BIT              0b0
+#define HIEDMAC_WIDTH_16BIT             0b1
+#define HIEDMAC_WIDTH_32BIT             0b10
+#define HIEDMAC_WIDTH_64BIT             0b11
+#ifdef CONFIG_64BIT
+#define HIEDMAC_MEM_BIT_WIDTH HIEDMAC_WIDTH_64BIT
+#else
+#define HIEDMAC_MEM_BIT_WIDTH HIEDMAC_WIDTH_32BIT
+#endif
+
+#define HIEDMAC_MAX_BURST_WIDTH         16
+#define HIEDMAC_MIN_BURST_WIDTH         1
+#define HIEDMAC_CONFIG_SRC_BURST_SHIFT  24
+#define HIEDMAC_CONFIG_DST_BURST_SHIFT  20
+
+#define HIEDMAC_LLI_ALIGN   0x40
+#define HIEDMAC_LLI_DISABLE 0x0
+#define HIEDMAC_LLI_ENABLE 0x2
+
+#define HIEDMAC_CXCONFIG_SIGNAL_SHIFT   0x4
+#define HIEDMAC_CXCONFIG_MEM_TYPE       0x0
+#define HIEDMAC_CXCONFIG_DEV_MEM_TYPE   0x1
+#define HIEDMAC_CXCONFIG_TSF_TYPE_SHIFT 0x2
+#define HIEDMAC_CXCONFIG_LLI_START      0x1
+
+#define HIEDMAC_CXCONFIG_ITC_EN     0x1
+#define HIEDMAC_CXCONFIG_ITC_EN_SHIFT   0x1
+
+#define CCFG_EN 0x1
+
+#define HIEDMAC_CONTROL_SRC_WIDTH_MASK GENMASK(18, 16)
+#define HIEDMAC_CONTROL_DST_WIDTH_MASK GENMASK(14, 12)
+#endif
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings
  2020-12-15 11:09 ` [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings Dongjiu Geng
@ 2020-12-21 18:54   ` Rob Herring
  2021-01-07  4:11     ` Dongjiu Geng
  0 siblings, 1 reply; 12+ messages in thread
From: Rob Herring @ 2020-12-21 18:54 UTC (permalink / raw)
  To: Dongjiu Geng
  Cc: sboyd, linux-clk, devicetree, dmaengine, robh+dt, p.zabel, vkoul,
	linux-kernel, dan.j.williams, mturquette

On Tue, 15 Dec 2020 11:09:44 +0000, Dongjiu Geng wrote:
> Add DT bindings documentation for hi3559a SoC clock.
> 
> Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
> ---
>  .../clock/hisilicon,hi3559av100-clock.yaml    |  59 +++++++
>  include/dt-bindings/clock/hi3559av100-clock.h | 165 ++++++++++++++++++
>  2 files changed, 224 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
>  create mode 100644 include/dt-bindings/clock/hi3559av100-clock.h
> 

Reviewed-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 3/4] dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller
  2020-12-15 11:09 ` [PATCH v7 3/4] dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller Dongjiu Geng
@ 2020-12-21 18:55   ` Rob Herring
  0 siblings, 0 replies; 12+ messages in thread
From: Rob Herring @ 2020-12-21 18:55 UTC (permalink / raw)
  To: Dongjiu Geng
  Cc: devicetree, mturquette, dan.j.williams, linux-clk, dmaengine,
	vkoul, p.zabel, robh+dt, sboyd, linux-kernel

On Tue, 15 Dec 2020 11:09:46 +0000, Dongjiu Geng wrote:
> The Hiedma Controller v310 Provides eight DMA channels, each
> channel can be configured for one-way transfer. The data can
> be transferred in 8-bit, 16-bit, 32-bit, or 64-bit mode. This
> documentation describes DT bindings of this controller.
> 
> Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
> ---
>  .../bindings/dma/hisilicon,hiedmacv310.yaml   | 94 +++++++++++++++++++
>  1 file changed, 94 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml
> 

Reviewed-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 1/4] dt-bindings: Document the hi3559a clock bindings
  2020-12-21 18:54   ` Rob Herring
@ 2021-01-07  4:11     ` Dongjiu Geng
  0 siblings, 0 replies; 12+ messages in thread
From: Dongjiu Geng @ 2021-01-07  4:11 UTC (permalink / raw)
  To: Rob Herring
  Cc: sboyd, linux-clk, devicetree, dmaengine, robh+dt, p.zabel, vkoul,
	linux-kernel, dan.j.williams, mturquette



On 2020/12/22 2:54, Rob Herring wrote:
> On Tue, 15 Dec 2020 11:09:44 +0000, Dongjiu Geng wrote:
>> Add DT bindings documentation for hi3559a SoC clock.
>>
>> Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
>> ---
>>  .../clock/hisilicon,hi3559av100-clock.yaml    |  59 +++++++
>>  include/dt-bindings/clock/hi3559av100-clock.h | 165 ++++++++++++++++++
>>  2 files changed, 224 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
>>  create mode 100644 include/dt-bindings/clock/hi3559av100-clock.h
>>
> 
> Reviewed-by: Rob Herring <robh@kernel.org>
Thanks a lot for the Reviewed-by.
Can this patch series be queued for mainline?

> .
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller
  2020-12-15 11:09 [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Dongjiu Geng
                   ` (3 preceding siblings ...)
  2020-12-15 11:09 ` [PATCH v7 4/4] dmaengine: dma: Add Hiedma Controller v310 Device Driver Dongjiu Geng
@ 2021-01-12 10:40 ` Vinod Koul
  2021-01-12 11:22   ` Dongjiu Geng
  4 siblings, 1 reply; 12+ messages in thread
From: Vinod Koul @ 2021-01-12 10:40 UTC (permalink / raw)
  To: Dongjiu Geng
  Cc: mturquette, sboyd, robh+dt, dan.j.williams, p.zabel, linux-clk,
	devicetree, linux-kernel, dmaengine

On 15-12-20, 11:09, Dongjiu Geng wrote:
> v6->v7:
> 1. rename hisi,misc-control to hisi,misc-control to hisilicon,misc-control
> 
> v5->v6:
> 1. Drop #size-cells and #address-cell in the hisilicon,hi3559av100-clock.yaml
> 2. Add discription for #reset-cells in the hisilicon,hi3559av100-clock.yaml
> 3. Remove #clock-cells in hisilicon,hiedmacv310.yaml 
> 4. Merge property misc_ctrl_base and misc_regmap together for hiedmacv310 driver
> 
> v4->v5:
> 1. change the patch author mail name
> 
> v3->v4:
> 1. fix the 'make dt_binding_check' issues.
> 2. Combine the 'Enable HiSilicon Hiedma Controller' series patches to this series.
> 3. fix the 'make dt_binding_check' issues in 'Enable HiSilicon Hiedma Controller' patchset
> 
> v2->v3:
> 1. change dt-bindings documents from txt to yaml format.
> 2. Add SHUB clock to access the devices of m7
> 
> Dongjiu Geng (4):
>   dt-bindings: Document the hi3559a clock bindings
>   clk: hisilicon: Add clock driver for hi3559A SoC
>   dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller
>   dmaengine: dma: Add Hiedma Controller v310 Device Driver

Is there a reason to have the dma and clk drivers in a single series..? I am
sure I have been skipping a few versions thinking this was a clock driver
series..

Unless there is a dependency please split them up.. If there is a dependency,
please specify that.


> 
>  .../clock/hisilicon,hi3559av100-clock.yaml    |   59 +
>  .../bindings/dma/hisilicon,hiedmacv310.yaml   |   94 ++
>  drivers/clk/hisilicon/Kconfig                 |    7 +
>  drivers/clk/hisilicon/Makefile                |    1 +
>  drivers/clk/hisilicon/clk-hi3559a.c           |  865 ++++++++++
>  drivers/dma/Kconfig                           |   14 +
>  drivers/dma/Makefile                          |    1 +
>  drivers/dma/hiedmacv310.c                     | 1442 +++++++++++++++++
>  drivers/dma/hiedmacv310.h                     |  136 ++
>  include/dt-bindings/clock/hi3559av100-clock.h |  165 ++
>  10 files changed, 2784 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
>  create mode 100644 Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml
>  create mode 100644 drivers/clk/hisilicon/clk-hi3559a.c
>  create mode 100644 drivers/dma/hiedmacv310.c
>  create mode 100644 drivers/dma/hiedmacv310.h
>  create mode 100644 include/dt-bindings/clock/hi3559av100-clock.h
> 
> -- 
> 2.17.1

-- 
~Vinod

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller
  2021-01-12 10:40 ` [PATCH v7 0/4] Enable Hi3559A SOC clock and HiSilicon Hiedma Controller Vinod Koul
@ 2021-01-12 11:22   ` Dongjiu Geng
  0 siblings, 0 replies; 12+ messages in thread
From: Dongjiu Geng @ 2021-01-12 11:22 UTC (permalink / raw)
  To: Vinod Koul
  Cc: mturquette, sboyd, robh+dt, dan.j.williams, p.zabel, linux-clk,
	devicetree, linux-kernel, dmaengine

On 2021/1/12 18:40, Vinod Koul wrote:
> On 15-12-20, 11:09, Dongjiu Geng wrote:
>> v6->v7:
>> 1. rename hisi,misc-control to hisi,misc-control to hisilicon,misc-control
>>
>> v5->v6:
>> 1. Drop #size-cells and #address-cell in the hisilicon,hi3559av100-clock.yaml
>> 2. Add discription for #reset-cells in the hisilicon,hi3559av100-clock.yaml
>> 3. Remove #clock-cells in hisilicon,hiedmacv310.yaml 
>> 4. Merge property misc_ctrl_base and misc_regmap together for hiedmacv310 driver
>>
>> v4->v5:
>> 1. change the patch author mail name
>>
>> v3->v4:
>> 1. fix the 'make dt_binding_check' issues.
>> 2. Combine the 'Enable HiSilicon Hiedma Controller' series patches to this series.
>> 3. fix the 'make dt_binding_check' issues in 'Enable HiSilicon Hiedma Controller' patchset
>>
>> v2->v3:
>> 1. change dt-bindings documents from txt to yaml format.
>> 2. Add SHUB clock to access the devices of m7
>>
>> Dongjiu Geng (4):
>>   dt-bindings: Document the hi3559a clock bindings
>>   clk: hisilicon: Add clock driver for hi3559A SoC
>>   dt: bindings: dma: Add DT bindings for HiSilicon Hiedma Controller
>>   dmaengine: dma: Add Hiedma Controller v310 Device Driver
> 
> Is there a reason to have dma and clk drivers in a single series..? I am
> sure I have skipping few versions thinking this is clock driver series..
> 
> Unless there is a dependency please split up.. If there is a dependency
> please specify that

Thank you very much for pointing that out. I will split the series up.

> 
> 
>>
>>  .../clock/hisilicon,hi3559av100-clock.yaml    |   59 +
>>  .../bindings/dma/hisilicon,hiedmacv310.yaml   |   94 ++
>>  drivers/clk/hisilicon/Kconfig                 |    7 +
>>  drivers/clk/hisilicon/Makefile                |    1 +
>>  drivers/clk/hisilicon/clk-hi3559a.c           |  865 ++++++++++
>>  drivers/dma/Kconfig                           |   14 +
>>  drivers/dma/Makefile                          |    1 +
>>  drivers/dma/hiedmacv310.c                     | 1442 +++++++++++++++++
>>  drivers/dma/hiedmacv310.h                     |  136 ++
>>  include/dt-bindings/clock/hi3559av100-clock.h |  165 ++
>>  10 files changed, 2784 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/clock/hisilicon,hi3559av100-clock.yaml
>>  create mode 100644 Documentation/devicetree/bindings/dma/hisilicon,hiedmacv310.yaml
>>  create mode 100644 drivers/clk/hisilicon/clk-hi3559a.c
>>  create mode 100644 drivers/dma/hiedmacv310.c
>>  create mode 100644 drivers/dma/hiedmacv310.h
>>  create mode 100644 include/dt-bindings/clock/hi3559av100-clock.h
>>
>> -- 
>> 2.17.1
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 4/4] dmaengine: dma: Add Hiedma Controller v310 Device Driver
  2020-12-15 11:09 ` [PATCH v7 4/4] dmaengine: dma: Add Hiedma Controller v310 Device Driver Dongjiu Geng
@ 2021-01-12 12:06   ` Vinod Koul
  0 siblings, 0 replies; 12+ messages in thread
From: Vinod Koul @ 2021-01-12 12:06 UTC (permalink / raw)
  To: Dongjiu Geng
  Cc: mturquette, sboyd, robh+dt, dan.j.williams, p.zabel, linux-clk,
	devicetree, linux-kernel, dmaengine

On 15-12-20, 11:09, Dongjiu Geng wrote:
> Hisilicon EDMA Controller(EDMAC) directly transfers data
> between a memory and a peripheral, between peripherals, or
> between memories. This avoids the CPU intervention and reduces
> the interrupt handling overhead of the CPU, this driver enables
> this controller.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
> ---
>  drivers/dma/Kconfig       |   14 +
>  drivers/dma/Makefile      |    1 +
>  drivers/dma/hiedmacv310.c | 1442 +++++++++++++++++++++++++++++++++++++
>  drivers/dma/hiedmacv310.h |  136 ++++
>  4 files changed, 1593 insertions(+)
>  create mode 100644 drivers/dma/hiedmacv310.c
>  create mode 100644 drivers/dma/hiedmacv310.h
> 
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index 90284ffda58a..3e5107120ff1 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -327,6 +327,20 @@ config K3_DMA
>  	  Support the DMA engine for Hisilicon K3 platform
>  	  devices.
>  
> +config HIEDMACV310
> +	tristate "Hisilicon EDMAC Controller support"
> +	depends on ARCH_HISI
> +	select DMA_ENGINE
> +	select DMA_VIRTUAL_CHANNELS
> +	help
> +	  The Direction Memory Access(EDMA) is a high-speed data transfer
> +	  operation. It supports data read/write between peripherals and
> +	  memories without using the CPU.
> +	  Hisilicon EDMA Controller(EDMAC) directly transfers data between
> +	  a memory and a peripheral, between peripherals, or between memories.
> +	  This avoids the CPU intervention and reduces the interrupt handling
> +	  overhead of the CPU.
> +
>  config LPC18XX_DMAMUX
>  	bool "NXP LPC18xx/43xx DMA MUX for PL080"
>  	depends on ARCH_LPC18XX || COMPILE_TEST
> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> index 948a8da05f8b..28c7298b671e 100644
> --- a/drivers/dma/Makefile
> +++ b/drivers/dma/Makefile
> @@ -82,6 +82,7 @@ obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
>  obj-$(CONFIG_ZX_DMA) += zx_dma.o
>  obj-$(CONFIG_ST_FDMA) += st_fdma.o
>  obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
> +obj-$(CONFIG_HIEDMACV310) += hiedmacv310.o
>  
>  obj-y += mediatek/
>  obj-y += qcom/
> diff --git a/drivers/dma/hiedmacv310.c b/drivers/dma/hiedmacv310.c
> new file mode 100644
> index 000000000000..c0df5088a6a1
> --- /dev/null
> +++ b/drivers/dma/hiedmacv310.c
> @@ -0,0 +1,1442 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * The Hiedma Controller v310 Device Driver for HiSilicon
> + *
> + * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
> + *
> + * Author: Dongjiu Geng <gengdongjiu@huawei.com>
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/delay.h>
> +#include <linux/clk.h>
> +#include <linux/reset.h>
> +#include <linux/platform_device.h>
> +#include <linux/device.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dmapool.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/export.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_dma.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/seq_file.h>
> +#include <linux/slab.h>
> +#include <linux/io.h>
> +#include <linux/regmap.h>
> +#include <linux/mfd/syscon.h>

Do you need all of these? Also keep them sorted pls
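
For reference, a sketch of the include block trimmed to the headers this
file appears to actually use, sorted alphabetically (illustrative only;
which ones are really needed depends on the final code):

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <linux/dmapool.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include <linux/slab.h>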

> +
> +#include "hiedmacv310.h"
> +#include "dmaengine.h"
> +#include "virt-dma.h"
> +
> +#define DRIVER_NAME "hiedmacv310"
> +
> +#define MAX_TSFR_LLIS           512
> +#define EDMACV300_LLI_WORDS     64
> +#define EDMACV300_POOL_ALIGN    64
> +#define BITS_PER_HALF_WORD 32

Space or tab, not both!

> +
> +struct hiedmac_lli {
> +	u64 next_lli;
> +	u32 reserved[5];

why reserved..?

> +	u32 count;
> +	u64 src_addr;
> +	u64 dest_addr;
> +	u32 config;
> +	u32 pad[3];
> +};
> +
> +struct hiedmac_sg {
> +	dma_addr_t src_addr;
> +	dma_addr_t dst_addr;
> +	size_t len;
> +	struct list_head node;
> +};

why invent your own sg..? why not use kernel list?

> +
> +struct transfer_desc {
> +	struct virt_dma_desc virt_desc;
> +	dma_addr_t llis_busaddr;
> +	u64 *llis_vaddr;
> +	u32 ccfg;
> +	size_t size;
> +	bool done;
> +	bool cyclic;
> +};
> +
> +enum edmac_dma_chan_state {
> +	HIEDMAC_CHAN_IDLE,
> +	HIEDMAC_CHAN_RUNNING,
> +	HIEDMAC_CHAN_PAUSED,
> +	HIEDMAC_CHAN_WAITING,
> +};
> +
> +struct hiedmacv310_dma_chan {
> +	bool slave;
> +	int signal;
> +	int id;
> +	struct virt_dma_chan virt_chan;
> +	struct hiedmacv310_phy_chan *phychan;
> +	struct dma_slave_config cfg;
> +	struct transfer_desc *at;
> +	struct hiedmacv310_driver_data *host;
> +	enum edmac_dma_chan_state state;
> +};
> +
> +struct hiedmacv310_phy_chan {
> +	unsigned int id;
> +	void __iomem *base;
> +	spinlock_t lock;
> +	struct hiedmacv310_dma_chan *serving;

So you have a physical channel and a dma_chan (virtual..?) right..?

> +};
> +
> +struct hiedmacv310_driver_data {
> +	struct platform_device *dev;
> +	struct dma_device slave;
> +	struct dma_device memcpy;
> +	void __iomem *base;
> +	struct regmap *misc_regmap;
> +	void __iomem *crg_ctrl;
> +	struct hiedmacv310_phy_chan *phy_chans;
> +	struct dma_pool *pool;
> +	unsigned int misc_ctrl_base;
> +	int irq;
> +	struct clk *clk;
> +	struct clk *axi_clk;
> +	struct reset_control *rstc;
> +	unsigned int channels;
> +	unsigned int slave_requests;
> +	unsigned int max_transfer_size;
> +};
> +
> +#ifdef DEBUG_HIEDMAC
> +static void dump_lli(const u64 *llis_vaddr, unsigned int num)
> +{
> +	struct hiedmac_lli *plli = (struct hiedmac_lli *)llis_vaddr;
> +	unsigned int i;
> +
> +	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "lli num = 0%d", num);
> +	for (i = 0; i < num; i++) {
> +		hiedmacv310_info("lli%d:lli_L:      0x%llx\n", i,
> +			plli[i].next_lli & 0xffffffff);
> +		hiedmacv310_info("lli%d:lli_H:      0x%llx\n", i,
> +			(plli[i].next_lli >> BITS_PER_HALF_WORD) & 0xffffffff);
> +		hiedmacv310_info("lli%d:count:      0x%x\n", i,
> +			plli[i].count);
> +		hiedmacv310_info("lli%d:src_addr_L: 0x%llx\n", i,
> +			plli[i].src_addr & 0xffffffff);
> +		hiedmacv310_info("lli%d:src_addr_H: 0x%llx\n", i,
> +			(plli[i].src_addr >> BITS_PER_HALF_WORD) & 0xffffffff);
> +		hiedmacv310_info("lli%d:dst_addr_L: 0x%llx\n", i,
> +				 plli[i].dest_addr & 0xffffffff);
> +		hiedmacv310_info("lli%d:dst_addr_H: 0x%llx\n", i,
> +			(plli[i].dest_addr >> BITS_PER_HALF_WORD) & 0xffffffff);
> +		hiedmacv310_info("lli%d:CONFIG:	  0x%x\n", i,
> +				 plli[i].config);

what is wrong with dev_dbg()..? Tip: you can use dynamic debug with them!
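
A sketch of what one of these lines could look like with dev_dbg(),
assuming dump_lli() is handed the struct device (names are illustrative):

	/* enabled at run time via dynamic debug, no DEBUG_HIEDMAC switch needed */
	dev_dbg(dev, "lli%d: next 0x%llx cnt 0x%x src 0x%llx dst 0x%llx cfg 0x%x\n",
		i, plli[i].next_lli, plli[i].count,
		plli[i].src_addr, plli[i].dest_addr, plli[i].config);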

> +	}
> +}
> +
> +#else
> +static void dump_lli(u64 *llis_vaddr, unsigned int num)
> +{
> +}
> +#endif
> +
> +static inline struct hiedmacv310_dma_chan *to_edamc_chan(const struct dma_chan *chan)
> +{
> +	return container_of(chan, struct hiedmacv310_dma_chan, virt_chan.chan);
> +}
> +
> +static inline struct transfer_desc *to_edmac_transfer_desc(
> +	const struct dma_async_tx_descriptor *tx)
> +{
> +	return container_of(tx, struct transfer_desc, virt_desc.tx);
> +}
> +
> +static struct dma_chan *hiedmac_find_chan_id(
> +	const struct hiedmacv310_driver_data *hiedmac,
> +	int request_num)

Please run checkpatch --strict that will help you with alignment for
this..

> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = NULL;
> +
> +	list_for_each_entry(edmac_dma_chan, &hiedmac->slave.channels,
> +			    virt_chan.chan.device_node) {
> +		if (edmac_dma_chan->id == request_num)
> +			return &edmac_dma_chan->virt_chan.chan;
> +	}
> +	return NULL;
> +}
> +
> +static struct dma_chan *hiedma_of_xlate(struct of_phandle_args *dma_spec,
> +					struct of_dma *ofdma)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = ofdma->of_dma_data;
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = NULL;
> +	struct dma_chan *dma_chan = NULL;
> +	struct regmap *misc = NULL;
> +	unsigned int signal, request_num;
> +	unsigned int reg = 0;
> +	unsigned int offset = 0;
> +
> +	if (!hiedmac)
> +		return NULL;
> +
> +	misc = hiedmac->misc_regmap;
> +
> +	if (dma_spec->args_count != 2) { /* check num of dts node args */
> +		hiedmacv310_error("args count not true!");
> +		return NULL;
> +	}
> +
> +	request_num = dma_spec->args[0];
> +	signal = dma_spec->args[1];
> +
> +	if (misc != NULL) {
> +		offset = hiedmac->misc_ctrl_base + (request_num & (~0x3));
> +		regmap_read(misc, offset, &reg);
> +		/* set misc for signal line */
> +		reg &= ~(0x3f << ((request_num & 0x3) << 3));
> +		reg |= signal << ((request_num & 0x3) << 3);

magic numbers..?
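
A sketch of the same update with the numbers named (macro names are made up
here for illustration, not taken from the datasheet):

#define HIEDMAC_MISC_REQS_PER_REG	4	/* request lines packed per 32-bit register */
#define HIEDMAC_MISC_SIGNAL_MASK	0x3f	/* 6-bit signal-line field */
#define HIEDMAC_MISC_SIGNAL_SHIFT(req)	(((req) % HIEDMAC_MISC_REQS_PER_REG) * 8)

	reg &= ~(HIEDMAC_MISC_SIGNAL_MASK << HIEDMAC_MISC_SIGNAL_SHIFT(request_num));
	reg |= signal << HIEDMAC_MISC_SIGNAL_SHIFT(request_num);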

> +		regmap_write(misc, offset, reg);
> +	}
> +
> +	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL,
> +			  "offset = 0x%x, reg = 0x%x", offset, reg);
> +
> +	dma_chan = hiedmac_find_chan_id(hiedmac, request_num);
> +	if (!dma_chan) {
> +		hiedmacv310_error("DMA slave channel is not found!");
> +		return NULL;
> +	}
> +
> +	edmac_dma_chan = to_edamc_chan(dma_chan);
> +	edmac_dma_chan->signal = request_num;
> +	return dma_get_slave_channel(dma_chan);
> +}
> +
> +static int hiedmacv310_devm_get(struct hiedmacv310_driver_data *hiedmac)
> +{
> +	struct platform_device *platdev = hiedmac->dev;
> +	struct resource *res = NULL;
> +
> +	hiedmac->clk = devm_clk_get(&(platdev->dev), "apb_pclk");
> +	if (IS_ERR(hiedmac->clk))
> +		return PTR_ERR(hiedmac->clk);
> +
> +	hiedmac->axi_clk = devm_clk_get(&(platdev->dev), "axi_aclk");
> +	if (IS_ERR(hiedmac->axi_clk))
> +		return PTR_ERR(hiedmac->axi_clk);
> +
> +	hiedmac->irq = platform_get_irq(platdev, 0);
> +	if (unlikely(hiedmac->irq < 0))
> +		return -ENODEV;
> +
> +	hiedmac->rstc = devm_reset_control_get(&(platdev->dev), "dma-reset");
> +	if (IS_ERR(hiedmac->rstc))
> +		return PTR_ERR(hiedmac->rstc);
> +
> +	res = platform_get_resource(platdev, IORESOURCE_MEM, 0);
> +	if (!res) {
> +		hiedmacv310_error("no reg resource");
> +		return -ENODEV;
> +	}
> +
> +	hiedmac->base = devm_ioremap_resource(&(platdev->dev), res);
> +	if (IS_ERR(hiedmac->base))
> +		return PTR_ERR(hiedmac->base);
> +
> +	return 0;
> +}
> +
> +static int hiedmacv310_of_property_read(struct hiedmacv310_driver_data *hiedmac)
> +{
> +	struct platform_device *platdev = hiedmac->dev;
> +	struct device_node *np = platdev->dev.of_node;
> +	int ret;
> +
> +	hiedmac->misc_regmap = syscon_regmap_lookup_by_phandle(np, "hisilicon,misc-control");

why are you looking up something else here..?

> +	if (IS_ERR(hiedmac->misc_regmap))
> +		return PTR_ERR(hiedmac->misc_regmap);
> +
> +	ret = of_property_read_u32_index(np, "hisilicon,misc-control", 1,
> +					 &(hiedmac->misc_ctrl_base));
> +	if (ret) {
> +		hiedmacv310_error("get dma-misc_ctrl_base fail");
> +		return -ENODEV;
> +	}
> +
> +	ret = of_property_read_u32(np, "dma-channels", &(hiedmac->channels));
> +	if (ret) {
> +		hiedmacv310_error("get dma-channels fail");
> +		return -ENODEV;
> +	}
> +	ret = of_property_read_u32(np, "dma-requests", &(hiedmac->slave_requests));
> +	if (ret) {
> +		hiedmacv310_error("get dma-requests fail");
> +		return -ENODEV;
> +	}
> +	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, "dma-channels = %d, dma-requests = %d",
> +			  hiedmac->channels, hiedmac->slave_requests);
> +	return 0;
> +}
> +
> +static int get_of_probe(struct hiedmacv310_driver_data *hiedmac)
> +{
> +	struct platform_device *platdev = hiedmac->dev;
> +	int ret;
> +
> +	ret = hiedmacv310_devm_get(hiedmac);
> +	if (ret)
> +		return ret;
> +
> +	ret = hiedmacv310_of_property_read(hiedmac);
> +	if (ret)
> +		return ret;
> +
> +	return of_dma_controller_register(platdev->dev.of_node,
> +					  hiedma_of_xlate, hiedmac);
> +}
> +
> +static void hiedmac_free_chan_resources(struct dma_chan *chan)
> +{
> +	vchan_free_chan_resources(to_virt_chan(chan));
> +}
> +
> +static size_t read_residue_from_phychan(
> +	struct hiedmacv310_dma_chan *edmac_dma_chan,
> +	struct transfer_desc *tsf_desc)
> +{
> +	size_t bytes;
> +	u64 next_lli;
> +	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
> +	unsigned int i, index;
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct hiedmac_lli *plli = NULL;

Reverse christmas tree order for the local declarations, please.
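i.e. longest declaration first, shortest last; the same variables reordered:

	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
	struct hiedmac_lli *plli = NULL;
	unsigned int i, index;
	size_t bytes;
	u64 next_lli;
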
> +
> +	next_lli = (hiedmacv310_readl(hiedmac->base + hiedmac_cx_lli_l(phychan->id)) &
> +			(~(HIEDMAC_LLI_ALIGN - 1)));
> +	next_lli |= ((u64)(hiedmacv310_readl(hiedmac->base + hiedmac_cx_lli_h(
> +			phychan->id)) & 0xffffffff) << BITS_PER_HALF_WORD);
> +	bytes = hiedmacv310_readl(hiedmac->base + hiedmac_cx_curr_cnt0(
> +			phychan->id));
> +	if (next_lli != 0) {
> +		/* It means lli mode */
> +		bytes += tsf_desc->size;
> +		index = (next_lli - tsf_desc->llis_busaddr) / sizeof(*plli);
> +		plli = (struct hiedmac_lli *)(tsf_desc->llis_vaddr);
> +		for (i = 0; i < index; i++)
> +			bytes -= plli[i].count;
> +	}
> +	return bytes;
> +}
> +
> +static enum dma_status hiedmac_tx_status(struct dma_chan *chan,
> +					 dma_cookie_t cookie,
> +					 struct dma_tx_state *txstate)
> +{
> +	enum dma_status ret;
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	struct virt_dma_desc *vd = NULL;
> +	struct transfer_desc *tsf_desc = NULL;
> +	unsigned long flags;
> +	size_t bytes;
> +
> +	ret = dma_cookie_status(chan, cookie, txstate);
> +	if (ret == DMA_COMPLETE)
> +		return ret;

txstate (the residue pointer) can be NULL, so there is no need to continue in that case
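i.e. the usual pattern is roughly:

	ret = dma_cookie_status(chan, cookie, txstate);
	if (ret == DMA_COMPLETE || !txstate)
		return ret;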

> +
> +	if (edmac_dma_chan->state == HIEDMAC_CHAN_PAUSED && ret == DMA_IN_PROGRESS) {
> +		ret = DMA_PAUSED;
> +		return ret;
> +	}
> +
> +	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
> +	vd = vchan_find_desc(&edmac_dma_chan->virt_chan, cookie);
> +	if (vd) {
> +		/* no been trasfered */
> +		tsf_desc = to_edmac_transfer_desc(&vd->tx);
> +		bytes = tsf_desc->size;
> +	} else {
> +		/* trasfering */
> +		tsf_desc = edmac_dma_chan->at;
> +
> +		if (!(edmac_dma_chan->phychan) || !tsf_desc) {
> +			spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +			return ret;
> +		}
> +		bytes = read_residue_from_phychan(edmac_dma_chan, tsf_desc);
> +	}
> +	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +	dma_set_residue(txstate, bytes);
> +	return ret;
> +}
> +
> +static struct hiedmacv310_phy_chan *hiedmac_get_phy_channel(
> +	const struct hiedmacv310_driver_data *hiedmac,
> +	struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	struct hiedmacv310_phy_chan *ch = NULL;
> +	unsigned long flags;
> +	int i;
> +
> +	for (i = 0; i < hiedmac->channels; i++) {
> +		ch = &hiedmac->phy_chans[i];
> +
> +		spin_lock_irqsave(&ch->lock, flags);
> +
> +		if (!ch->serving) {
> +			ch->serving = edmac_dma_chan;
> +			spin_unlock_irqrestore(&ch->lock, flags);
> +			break;
> +		}
> +		spin_unlock_irqrestore(&ch->lock, flags);
> +	}
> +
> +	if (i == hiedmac->channels)
> +		return NULL;
> +
> +	return ch;
> +}
> +
> +static void hiedmac_write_lli(const struct hiedmacv310_driver_data *hiedmac,
> +			      const struct hiedmacv310_phy_chan *phychan,
> +			      const struct transfer_desc *tsf_desc)
> +{
> +	struct hiedmac_lli *plli = (struct hiedmac_lli *)tsf_desc->llis_vaddr;
> +
> +	if (plli->next_lli != 0x0)
> +		hiedmacv310_writel((plli->next_lli & 0xffffffff) | HIEDMAC_LLI_ENABLE,
> +				   hiedmac->base + hiedmac_cx_lli_l(phychan->id));
> +	else
> +		hiedmacv310_writel((plli->next_lli & 0xffffffff),
> +				   hiedmac->base + hiedmac_cx_lli_l(phychan->id));
> +
> +	hiedmacv310_writel(((plli->next_lli >> 32) & 0xffffffff),
> +			   hiedmac->base + hiedmac_cx_lli_h(phychan->id));
> +	hiedmacv310_writel(plli->count, hiedmac->base + hiedmac_cx_cnt0(phychan->id));
> +	hiedmacv310_writel(plli->src_addr & 0xffffffff,
> +			   hiedmac->base + hiedmac_cx_src_addr_l(phychan->id));
> +	hiedmacv310_writel((plli->src_addr >> 32) & 0xffffffff,
> +			   hiedmac->base + hiedmac_cx_src_addr_h(phychan->id));
> +	hiedmacv310_writel(plli->dest_addr & 0xffffffff,
> +			   hiedmac->base + hiedmac_cx_dest_addr_l(phychan->id));
> +	hiedmacv310_writel((plli->dest_addr >> 32) & 0xffffffff,
> +			   hiedmac->base + hiedmac_cx_dest_addr_h(phychan->id));
> +	hiedmacv310_writel(plli->config,
> +			   hiedmac->base + hiedmac_cx_config(phychan->id));
> +}
> +
> +static void hiedmac_start_next_txd(struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
> +	struct virt_dma_desc *vd = vchan_next_desc(&edmac_dma_chan->virt_chan);
> +	struct transfer_desc *tsf_desc = to_edmac_transfer_desc(&vd->tx);
> +	unsigned int val;
> +
> +	list_del(&tsf_desc->virt_desc.node);
> +	edmac_dma_chan->at = tsf_desc;
> +	hiedmac_write_lli(hiedmac, phychan, tsf_desc);
> +	val = hiedmacv310_readl(hiedmac->base + hiedmac_cx_config(phychan->id));
> +	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, " HIEDMAC_Cx_CONFIG  = 0x%x", val);
> +	hiedmacv310_writel(val | HIEDMAC_CXCONFIG_LLI_START,
> +			   hiedmac->base + hiedmac_cx_config(phychan->id));
> +}
> +
> +static void hiedmac_start(struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct hiedmacv310_phy_chan *ch;
> +
> +	ch = hiedmac_get_phy_channel(hiedmac, edmac_dma_chan);
> +	if (!ch) {
> +		hiedmacv310_error("no phy channel available !");
> +		edmac_dma_chan->state = HIEDMAC_CHAN_WAITING;
> +		return;
> +	}
> +	edmac_dma_chan->phychan = ch;
> +	edmac_dma_chan->state = HIEDMAC_CHAN_RUNNING;
> +	hiedmac_start_next_txd(edmac_dma_chan);
> +}
> +
> +static void hiedmac_issue_pending(struct dma_chan *chan)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
> +	if (vchan_issue_pending(&edmac_dma_chan->virt_chan)) {
> +		if (!edmac_dma_chan->phychan && edmac_dma_chan->state != HIEDMAC_CHAN_WAITING)
> +			hiedmac_start(edmac_dma_chan);
> +	}
> +	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +}
> +
> +static void hiedmac_free_txd_list(struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	LIST_HEAD(head);
> +
> +	vchan_get_all_descriptors(&edmac_dma_chan->virt_chan, &head);
> +	vchan_dma_desc_free_list(&edmac_dma_chan->virt_chan, &head);
> +}
> +
> +static int hiedmac_config(struct dma_chan *chan,
> +			  struct dma_slave_config *config)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +
> +	if (!edmac_dma_chan->slave) {
> +		hiedmacv310_error("slave is null!");
> +		return -EINVAL;
> +	}
> +	edmac_dma_chan->cfg = *config;

should this not be a memcpy() of the data..?

> +	return 0;
> +}
> +
> +static void hiedmac_pause_phy_chan(const struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
> +	unsigned int val;
> +	int timeout;
> +
> +	val = hiedmacv310_readl(hiedmac->base + hiedmac_cx_config(phychan->id));
> +	val &= ~CCFG_EN;
> +	hiedmacv310_writel(val, hiedmac->base + hiedmac_cx_config(phychan->id));
> +	/* Wait for channel inactive */
> +	for (timeout = 2000; timeout > 0; timeout--) {
> +		if (!((0x1 << phychan->id) & hiedmacv310_readl(hiedmac->base + HIEDMAC_CH_STAT)))
> +			break;
> +		hiedmacv310_writel(val, hiedmac->base + hiedmac_cx_config(phychan->id));
> +		udelay(1);
> +	}
> +
> +	if (timeout == 0) {
> +		hiedmacv310_error(":channel%u timeout waiting for pause, timeout:%d",
> +				  phychan->id, timeout);
> +	}

braces are not required here

> +}
> +
> +static int hiedmac_pause(struct dma_chan *chan)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
> +	if (!edmac_dma_chan->phychan) {
> +		spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +		return 0;
> +	}
> +	hiedmac_pause_phy_chan(edmac_dma_chan);
> +	edmac_dma_chan->state = HIEDMAC_CHAN_PAUSED;
> +	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +	return 0;
> +}
> +
> +static void hiedmac_resume_phy_chan(const struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
> +	unsigned int val;
> +
> +	val = hiedmacv310_readl(hiedmac->base + hiedmac_cx_config(phychan->id));
> +	val |= CCFG_EN;
> +	hiedmacv310_writel(val, hiedmac->base + hiedmac_cx_config(phychan->id));
> +}
> +
> +static int hiedmac_resume(struct dma_chan *chan)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
> +
> +	if (!edmac_dma_chan->phychan) {
> +		spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +		return 0;
> +	}
> +
> +	hiedmac_resume_phy_chan(edmac_dma_chan);
> +	edmac_dma_chan->state = HIEDMAC_CHAN_RUNNING;
> +	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +
> +	return 0;
> +}
> +
> +void hiedmac_phy_free(struct hiedmacv310_dma_chan *chan);
> +static void hiedmac_desc_free(struct virt_dma_desc *vd);

why not just define them here instead of forward-declaring..?

> +static int hiedmac_terminate_all(struct dma_chan *chan)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&edmac_dma_chan->virt_chan.lock, flags);
> +	if (!edmac_dma_chan->phychan && !edmac_dma_chan->at) {
> +		spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +		return 0;
> +	}
> +
> +	edmac_dma_chan->state = HIEDMAC_CHAN_IDLE;
> +
> +	if (edmac_dma_chan->phychan)
> +		hiedmac_phy_free(edmac_dma_chan);
> +	if (edmac_dma_chan->at) {
> +		hiedmac_desc_free(&edmac_dma_chan->at->virt_desc);
> +		edmac_dma_chan->at = NULL;
> +	}
> +	hiedmac_free_txd_list(edmac_dma_chan);
> +	spin_unlock_irqrestore(&edmac_dma_chan->virt_chan.lock, flags);
> +
> +	return 0;
> +}
> +
> +static u32 get_width(enum dma_slave_buswidth width)
> +{
> +	switch (width) {
> +	case DMA_SLAVE_BUSWIDTH_1_BYTE:
> +		return HIEDMAC_WIDTH_8BIT;
> +	case DMA_SLAVE_BUSWIDTH_2_BYTES:
> +		return HIEDMAC_WIDTH_16BIT;
> +	case DMA_SLAVE_BUSWIDTH_4_BYTES:
> +		return HIEDMAC_WIDTH_32BIT;
> +	case DMA_SLAVE_BUSWIDTH_8_BYTES:
> +		return HIEDMAC_WIDTH_64BIT;

sounds like ffs(width) - 1 to me..
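i.e. since the hardware encoding is just log2 of the byte width, something
like this could replace the switch (ffs() from linux/bitops.h); invalid
widths would still need rejecting, e.g. in device_config:

	static u32 get_width(enum dma_slave_buswidth width)
	{
		/* 1/2/4/8 bytes -> 0/1/2/3 */
		return ffs(width) - 1;
	}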

> +	default:
> +		hiedmacv310_error("check here, width warning!");
> +		return ~0;
> +	}
> +}
> +
> +static unsigned int hiedmac_set_config_value(enum dma_transfer_direction direction,
> +					     unsigned int addr_width,
> +					     unsigned int burst,
> +					     unsigned int signal)
> +{
> +	unsigned int config, width;
> +
> +	if (direction == DMA_MEM_TO_DEV)
> +		config = HIEDMAC_CONFIG_SRC_INC;
> +	else
> +		config = HIEDMAC_CONFIG_DST_INC;
> +
> +	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "addr_width = 0x%x", addr_width);
> +	width = get_width(addr_width);
> +	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "width = 0x%x", width);
> +	config |= width << HIEDMAC_CONFIG_SRC_WIDTH_SHIFT;
> +	config |= width << HIEDMAC_CONFIG_DST_WIDTH_SHIFT;
> +	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, "tsf_desc->ccfg = 0x%x", config);
> +	hiedmacv310_trace(HIEDMACV310_CONFIG_TRACE_LEVEL, "burst = 0x%x", burst);
> +	config |= burst << HIEDMAC_CONFIG_SRC_BURST_SHIFT;
> +	config |= burst << HIEDMAC_CONFIG_DST_BURST_SHIFT;
> +	if (signal >= 0) {
> +		hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL,
> +				  "edmac_dma_chan->signal = %d", signal);
> +		config |= (unsigned int)signal << HIEDMAC_CXCONFIG_SIGNAL_SHIFT;
> +	}
> +	config |= HIEDMAC_CXCONFIG_DEV_MEM_TYPE << HIEDMAC_CXCONFIG_TSF_TYPE_SHIFT;
> +	return config;
> +}
> +
> +static struct transfer_desc *hiedmac_init_tsf_desc(struct dma_chan *chan,
> +	enum dma_transfer_direction direction,
> +	dma_addr_t *slave_addr)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	struct transfer_desc *tsf_desc;
> +	unsigned int burst = 0;
> +	unsigned int addr_width = 0;
> +	unsigned int maxburst = 0;
> +
> +	tsf_desc = kzalloc(sizeof(*tsf_desc), GFP_NOWAIT);
> +	if (!tsf_desc)
> +		return NULL;
> +	if (direction == DMA_MEM_TO_DEV) {
> +		*slave_addr = edmac_dma_chan->cfg.dst_addr;
> +		addr_width = edmac_dma_chan->cfg.dst_addr_width;
> +		maxburst = edmac_dma_chan->cfg.dst_maxburst;
> +	} else if (direction == DMA_DEV_TO_MEM) {
> +		*slave_addr = edmac_dma_chan->cfg.src_addr;
> +		addr_width = edmac_dma_chan->cfg.src_addr_width;
> +		maxburst = edmac_dma_chan->cfg.src_maxburst;
> +	} else {
> +		kfree(tsf_desc);
> +		hiedmacv310_error("direction unsupported!");
> +		return NULL;
> +	}
> +
> +	if (maxburst > (HIEDMAC_MAX_BURST_WIDTH))
> +		burst |= (HIEDMAC_MAX_BURST_WIDTH - 1);
> +	else if (maxburst == 0)
> +		burst |= HIEDMAC_MIN_BURST_WIDTH;
> +	else
> +		burst |= (maxburst - 1);
> +
> +	tsf_desc->ccfg = hiedmac_set_config_value(direction, addr_width,
> +				 burst, edmac_dma_chan->signal);
> +	hiedmacv310_trace(HIEDMACV310_REG_TRACE_LEVEL, "tsf_desc->ccfg = 0x%x", tsf_desc->ccfg);
> +	return tsf_desc;
> +}
> +
> +static int hiedmac_fill_desc(const struct hiedmac_sg *dsg,
> +			     struct transfer_desc *tsf_desc,
> +			     unsigned int length, unsigned int num)
> +{
> +	struct hiedmac_lli *plli = NULL;
> +
> +	if (num >= MAX_TSFR_LLIS) {
> +		hiedmacv310_error("lli out of range.");
> +		return -ENOMEM;
> +	}
> +
> +	plli = (struct hiedmac_lli *)(tsf_desc->llis_vaddr);
> +	memset(&plli[num], 0x0, sizeof(*plli));
> +
> +	plli[num].src_addr = dsg->src_addr;
> +	plli[num].dest_addr = dsg->dst_addr;
> +	plli[num].config = tsf_desc->ccfg;
> +	plli[num].count = length;
> +	tsf_desc->size += length;
> +
> +	if (num > 0) {
> +		plli[num - 1].next_lli = (tsf_desc->llis_busaddr + (num) * sizeof(
> +					  *plli)) & (~(HIEDMAC_LLI_ALIGN - 1));
> +		plli[num - 1].next_lli |= HIEDMAC_LLI_ENABLE;
> +	}
> +	return 0;
> +}
> +
> +static void free_dsg(struct list_head *dsg_head)
> +{
> +	struct hiedmac_sg *dsg = NULL;
> +	struct hiedmac_sg *_dsg = NULL;
> +
> +	list_for_each_entry_safe(dsg, _dsg, dsg_head, node) {
> +		list_del(&dsg->node);
> +		kfree(dsg);
> +	}
> +}
> +
> +static int hiedmac_add_sg(struct list_head *sg_head,
> +			  dma_addr_t dst, dma_addr_t src,
> +			  size_t len)
> +{
> +	struct hiedmac_sg *dsg = NULL;
> +
> +	if (len == 0) {
> +		hiedmacv310_error("Transfer length is 0.");
> +		return -ENOMEM;
> +	}
> +
> +	dsg = kzalloc(sizeof(*dsg), GFP_NOWAIT);
> +	if (!dsg) {
> +		free_dsg(sg_head);
> +		hiedmacv310_error("alloc memory for dsg fail.");
> +		return -ENOMEM;
> +	}
> +
> +	list_add_tail(&dsg->node, sg_head);
> +	dsg->src_addr = src;
> +	dsg->dst_addr = dst;
> +	dsg->len = len;
> +	return 0;
> +}
> +
> +static int hiedmac_add_sg_slave(struct list_head *sg_head,
> +				dma_addr_t slave_addr, dma_addr_t addr,
> +				size_t length,
> +				enum dma_transfer_direction direction)
> +{
> +	dma_addr_t src = 0;
> +	dma_addr_t dst = 0;
> +
> +	if (direction == DMA_MEM_TO_DEV) {
> +		src = addr;
> +		dst = slave_addr;
> +	} else if (direction == DMA_DEV_TO_MEM) {
> +		src = slave_addr;
> +		dst = addr;
> +	} else {
> +		hiedmacv310_error("invali dma_transfer_direction.");
> +		return -ENOMEM;
> +	}
> +	return hiedmac_add_sg(sg_head, dst, src, length);
> +}
> +
> +static int hiedmac_fill_sg_for_slave(struct list_head *sg_head,
> +				     dma_addr_t slave_addr,
> +				     struct scatterlist *sgl,
> +				     unsigned int sg_len,
> +				     enum dma_transfer_direction direction)
> +{
> +	struct scatterlist *sg = NULL;
> +	int tmp, ret;
> +	size_t length;
> +	dma_addr_t addr;
> +
> +	if (sgl == NULL) {
> +		hiedmacv310_error("sgl is null!");
> +		return -ENOMEM;
> +	}
> +
> +	for_each_sg(sgl, sg, sg_len, tmp) {
> +		addr = sg_dma_address(sg);
> +		length = sg_dma_len(sg);
> +		ret = hiedmac_add_sg_slave(sg_head, slave_addr, addr, length, direction);
> +		if (ret)
> +			break;
> +	}
> +	return ret;
> +}
> +
> +static inline int hiedmac_fill_sg_for_memcpy(struct list_head *sg_head,
> +					     dma_addr_t dst, dma_addr_t src,
> +					     size_t len)
> +{
> +	return hiedmac_add_sg(sg_head, dst, src, len);
> +}
> +
> +static int hiedmac_fill_sg_for_cyclic(struct list_head *sg_head,
> +				      dma_addr_t slave_addr,
> +				      dma_addr_t buf_addr, size_t buf_len,
> +				      size_t period_len,
> +				      enum dma_transfer_direction direction)
> +{
> +	size_t count_in_sg = 0;
> +	size_t trans_bytes;
> +	int ret;
> +
> +	while (count_in_sg < buf_len) {
> +		trans_bytes = min(period_len, buf_len - count_in_sg);
> +		count_in_sg += trans_bytes;
> +		ret = hiedmac_add_sg_slave(sg_head, slave_addr,
> +					   buf_addr + count_in_sg,
> +					   count_in_sg, direction);
> +		if (ret)
> +			return ret;
> +	}
> +	return 0;
> +}
> +
> +static inline unsigned short get_max_width(dma_addr_t ccfg)
> +{
> +	unsigned short src_width = (ccfg & HIEDMAC_CONTROL_SRC_WIDTH_MASK) >>
> +				    HIEDMAC_CONFIG_SRC_WIDTH_SHIFT;
> +	unsigned short dst_width = (ccfg & HIEDMAC_CONTROL_DST_WIDTH_MASK) >>
> +				    HIEDMAC_CONFIG_DST_WIDTH_SHIFT;
> +
> +	return 1 << max(src_width, dst_width); /* to byte */
> +}
> +
> +static int hiedmac_fill_asg_lli_for_desc(struct hiedmac_sg *dsg,
> +					 struct transfer_desc *tsf_desc,
> +					 unsigned int *lli_count)
> +{
> +	int ret;
> +	unsigned short width = get_max_width(tsf_desc->ccfg);
> +
> +	while (dsg->len != 0) {
> +		size_t lli_len = MAX_TRANSFER_BYTES;
> +
> +		lli_len = (lli_len / width) * width; /* bus width align */
> +		lli_len = min(lli_len, dsg->len);
> +		ret = hiedmac_fill_desc(dsg, tsf_desc, lli_len, *lli_count);
> +		if (ret)
> +			return ret;
> +
> +		if (tsf_desc->ccfg & HIEDMAC_CONFIG_SRC_INC)
> +			dsg->src_addr += lli_len;
> +		if (tsf_desc->ccfg & HIEDMAC_CONFIG_DST_INC)
> +			dsg->dst_addr += lli_len;
> +		dsg->len -= lli_len;
> +		(*lli_count)++;
> +	}
> +	return 0;
> +}
> +
> +static int hiedmac_fill_lli_for_desc(struct list_head *sg_head,
> +				     struct transfer_desc *tsf_desc)
> +{
> +	struct hiedmac_sg *dsg = NULL;
> +	struct hiedmac_lli *last_plli = NULL;
> +	unsigned int lli_count = 0;
> +	int ret;
> +
> +	list_for_each_entry(dsg, sg_head, node) {
> +		ret = hiedmac_fill_asg_lli_for_desc(dsg, tsf_desc, &lli_count);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (tsf_desc->cyclic) {
> +		last_plli = (struct hiedmac_lli *)((uintptr_t)tsf_desc->llis_vaddr +
> +					    (lli_count - 1) * sizeof(*last_plli));
> +		last_plli->next_lli = tsf_desc->llis_busaddr | HIEDMAC_LLI_ENABLE;
> +	} else {
> +		last_plli = (struct hiedmac_lli *)((uintptr_t)tsf_desc->llis_vaddr +
> +					    (lli_count - 1) * sizeof(*last_plli));
> +		last_plli->next_lli = 0;
> +	}
> +	dump_lli(tsf_desc->llis_vaddr, lli_count);
> +	return 0;
> +}
> +
> +static struct dma_async_tx_descriptor *hiedmac_prep_slave_sg(
> +	struct dma_chan *chan, struct scatterlist *sgl,
> +	unsigned int sg_len, enum dma_transfer_direction direction,
> +	unsigned long flags, void *context)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct transfer_desc *tsf_desc = NULL;
> +	dma_addr_t slave_addr = 0;
> +	int ret;
> +	LIST_HEAD(sg_head);
> +
> +	if (sgl == NULL) {
> +		hiedmacv310_error("sgl is null!");
> +		return NULL;
> +	}
> +
> +	tsf_desc = hiedmac_init_tsf_desc(chan, direction, &slave_addr);
> +	if (!tsf_desc)
> +		return NULL;
> +
> +	tsf_desc->llis_vaddr = dma_pool_alloc(hiedmac->pool, GFP_NOWAIT,
> +					      &tsf_desc->llis_busaddr);
> +	if (!tsf_desc->llis_vaddr) {
> +		hiedmacv310_error("malloc memory from pool fail !");
> +		goto err_alloc_lli;
> +	}
> +
> +	ret = hiedmac_fill_sg_for_slave(&sg_head, slave_addr, sgl, sg_len, direction);
> +	if (ret)
> +		goto err_fill_sg;
> +	ret = hiedmac_fill_lli_for_desc(&sg_head, tsf_desc);
> +	free_dsg(&sg_head);
> +	if (ret)
> +		goto err_fill_sg;
> +	return vchan_tx_prep(&edmac_dma_chan->virt_chan, &tsf_desc->virt_desc, flags);
> +
> +err_fill_sg:
> +	dma_pool_free(hiedmac->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
> +err_alloc_lli:
> +	kfree(tsf_desc);
> +	return NULL;
> +}
> +
> +static struct dma_async_tx_descriptor *hiedmac_prep_dma_memcpy(
> +	struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
> +	size_t len, unsigned long flags)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct transfer_desc *tsf_desc = NULL;
> +	LIST_HEAD(sg_head);
> +	u32 config = 0;
> +	int ret;
> +
> +	if (!len)
> +		return NULL;
> +
> +	tsf_desc = kzalloc(sizeof(*tsf_desc), GFP_NOWAIT);
> +	if (tsf_desc == NULL) {
> +		hiedmacv310_error("get tsf desc fail!");
> +		return NULL;
> +	}
> +
> +	tsf_desc->llis_vaddr = dma_pool_alloc(hiedmac->pool, GFP_NOWAIT,
> +					      &tsf_desc->llis_busaddr);
> +	if (!tsf_desc->llis_vaddr) {
> +		hiedmacv310_error("malloc memory from pool fail !");
> +		goto err_alloc_lli;
> +	}
> +
> +	config |= HIEDMAC_CONFIG_SRC_INC | HIEDMAC_CONFIG_DST_INC;
> +	config |= HIEDMAC_CXCONFIG_MEM_TYPE << HIEDMAC_CXCONFIG_TSF_TYPE_SHIFT;
> +	/*  max burst width is 16 ,but reg value set 0xf */
> +	config |= (HIEDMAC_MAX_BURST_WIDTH - 1) << HIEDMAC_CONFIG_SRC_BURST_SHIFT;
> +	config |= (HIEDMAC_MAX_BURST_WIDTH - 1) << HIEDMAC_CONFIG_DST_BURST_SHIFT;
> +	config |= HIEDMAC_MEM_BIT_WIDTH << HIEDMAC_CONFIG_SRC_WIDTH_SHIFT;
> +	config |= HIEDMAC_MEM_BIT_WIDTH << HIEDMAC_CONFIG_DST_WIDTH_SHIFT;
> +	tsf_desc->ccfg = config;
> +	ret = hiedmac_fill_sg_for_memcpy(&sg_head, dst, src, len);
> +	if (ret)
> +		goto err_fill_sg;
> +	ret = hiedmac_fill_lli_for_desc(&sg_head, tsf_desc);
> +	free_dsg(&sg_head);
> +	if (ret)
> +		goto err_fill_sg;
> +	return vchan_tx_prep(&edmac_dma_chan->virt_chan, &tsf_desc->virt_desc, flags);
> +
> +err_fill_sg:
> +	dma_pool_free(hiedmac->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
> +err_alloc_lli:
> +	kfree(tsf_desc);
> +	return NULL;
> +}
> +
> +
> +static struct dma_async_tx_descriptor *hiedmac_prep_dma_cyclic(
> +	struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
> +	size_t period_len, enum dma_transfer_direction direction,
> +	unsigned long flags)
> +{
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(chan);
> +	struct hiedmacv310_driver_data *hiedmac = edmac_dma_chan->host;
> +	struct transfer_desc *tsf_desc = NULL;
> +	dma_addr_t slave_addr = 0;
> +	LIST_HEAD(sg_head);
> +	int ret;
> +
> +	tsf_desc = hiedmac_init_tsf_desc(chan, direction, &slave_addr);
> +	if (!tsf_desc)
> +		return NULL;
> +
> +	tsf_desc->llis_vaddr = dma_pool_alloc(hiedmac->pool, GFP_NOWAIT,
> +			&tsf_desc->llis_busaddr);
> +	if (!tsf_desc->llis_vaddr) {
> +		hiedmacv310_error("malloc memory from pool fail !");
> +		goto err_alloc_lli;
> +	}
> +
> +	tsf_desc->cyclic = true;
> +	ret = hiedmac_fill_sg_for_cyclic(&sg_head, slave_addr, buf_addr,
> +					 buf_len, period_len, direction);
> +	if (ret)
> +		goto err_fill_sg;
> +	ret = hiedmac_fill_lli_for_desc(&sg_head, tsf_desc);
> +	free_dsg(&sg_head);
> +	if (ret)
> +		goto err_fill_sg;
> +	return vchan_tx_prep(&edmac_dma_chan->virt_chan, &tsf_desc->virt_desc, flags);
> +
> +err_fill_sg:
> +	dma_pool_free(hiedmac->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
> +err_alloc_lli:
> +	kfree(tsf_desc);
> +	return NULL;
> +}
> +
> +static void hiedmac_phy_reassign(struct hiedmacv310_phy_chan *phy_chan,
> +				 struct hiedmacv310_dma_chan *chan)
> +{
> +	phy_chan->serving = chan;
> +	chan->phychan = phy_chan;
> +	chan->state = HIEDMAC_CHAN_RUNNING;
> +
> +	hiedmac_start_next_txd(chan);
> +}
> +
> +static void hiedmac_terminate_phy_chan(struct hiedmacv310_driver_data *hiedmac,
> +				       const struct hiedmacv310_dma_chan *edmac_dma_chan)
> +{
> +	unsigned int val;
> +	struct hiedmacv310_phy_chan *phychan = edmac_dma_chan->phychan;
> +
> +	hiedmac_pause_phy_chan(edmac_dma_chan);
> +	val = 0x1 << phychan->id;
> +	hiedmacv310_writel(val, hiedmac->base + HIEDMAC_INT_TC1_RAW);
> +	hiedmacv310_writel(val, hiedmac->base + HIEDMAC_INT_ERR1_RAW);
> +	hiedmacv310_writel(val, hiedmac->base + HIEDMAC_INT_ERR2_RAW);
> +}
> +
> +void hiedmac_phy_free(struct hiedmacv310_dma_chan *chan)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = chan->host;
> +	struct hiedmacv310_dma_chan *p = NULL;
> +	struct hiedmacv310_dma_chan *next = NULL;
> +
> +	list_for_each_entry(p, &hiedmac->memcpy.channels, virt_chan.chan.device_node) {
> +		if (p->state == HIEDMAC_CHAN_WAITING) {
> +			next = p;
> +			break;
> +		}
> +	}
> +
> +	if (!next) {
> +		list_for_each_entry(p, &hiedmac->slave.channels, virt_chan.chan.device_node) {
> +			if (p->state == HIEDMAC_CHAN_WAITING) {
> +				next = p;
> +				break;
> +			}
> +		}
> +	}
> +	hiedmac_terminate_phy_chan(hiedmac, chan);
> +
> +	if (next) {
> +		spin_lock(&next->virt_chan.lock);
> +		hiedmac_phy_reassign(chan->phychan, next);
> +		spin_unlock(&next->virt_chan.lock);
> +	} else {
> +		chan->phychan->serving = NULL;
> +	}
> +
> +	chan->phychan = NULL;
> +	chan->state = HIEDMAC_CHAN_IDLE;
> +}
> +
> +static bool handle_irq(struct hiedmacv310_driver_data *hiedmac, int chan_id)
> +{
> +	struct hiedmacv310_dma_chan *chan = NULL;
> +	struct hiedmacv310_phy_chan *phy_chan = NULL;
> +	struct transfer_desc *tsf_desc = NULL;
> +	unsigned int channel_tc_status;
> +
> +	phy_chan = &hiedmac->phy_chans[chan_id];
> +	chan = phy_chan->serving;
> +	if (!chan) {
> +		hiedmacv310_error("error interrupt on chan: %d!", chan_id);
> +		return 0;
> +	}
> +	tsf_desc = chan->at;
> +
> +	channel_tc_status = hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_TC1_RAW);
> +	channel_tc_status = (channel_tc_status >> chan_id) & 0x01;
> +	if (channel_tc_status)
> +		hiedmacv310_writel(channel_tc_status << chan_id,
> +				   hiedmac->base + HIEDMAC_INT_TC1_RAW);
> +
> +	channel_tc_status = hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_TC2);
> +	channel_tc_status = (channel_tc_status >> chan_id) & 0x01;
> +	if (channel_tc_status)
> +		hiedmacv310_writel(channel_tc_status << chan_id,
> +				   hiedmac->base + HIEDMAC_INT_TC2_RAW);
> +
> +	if ((hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_ERR1) |
> +	    hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_ERR2) |
> +	    hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_ERR3)) &
> +	    (1 << chan_id)) {
> +		hiedmacv310_writel(1 << chan_id, hiedmac->base + HIEDMAC_INT_ERR1_RAW);
> +		hiedmacv310_writel(1 << chan_id, hiedmac->base + HIEDMAC_INT_ERR2_RAW);
> +		hiedmacv310_writel(1 << chan_id, hiedmac->base + HIEDMAC_INT_ERR3_RAW);
> +	}
> +
> +	spin_lock(&chan->virt_chan.lock);
> +
> +	if (tsf_desc->cyclic) {
> +		vchan_cyclic_callback(&tsf_desc->virt_desc);
> +		spin_unlock(&chan->virt_chan.lock);
> +		return 1;
> +	}
> +	chan->at = NULL;
> +	tsf_desc->done = true;
> +	vchan_cookie_complete(&tsf_desc->virt_desc);
> +
> +	if (vchan_next_desc(&chan->virt_chan))
> +		hiedmac_start_next_txd(chan);
> +	else
> +		hiedmac_phy_free(chan);
> +	spin_unlock(&chan->virt_chan.lock);
> +	return 1;
> +}
> +
> +static irqreturn_t hiemdacv310_irq(int irq, void *dev)
> +{
> +	struct hiedmacv310_driver_data *hiedmac = (struct hiedmacv310_driver_data *)dev;
> +	u32 mask = 0;
> +	unsigned int channel_status, temp, i;
> +
> +	channel_status = hiedmacv310_readl(hiedmac->base + HIEDMAC_INT_STAT);
> +	if (!channel_status) {
> +		hiedmacv310_error("channel_status = 0x%x", channel_status);
> +		return IRQ_NONE;
> +	}
> +
> +	for (i = 0; i < hiedmac->channels; i++) {
> +		temp = (channel_status >> i) & 0x1;
> +		if (temp)
> +			mask |= handle_irq(hiedmac, i) << i;
> +	}
> +	return mask ? IRQ_HANDLED : IRQ_NONE;
> +}
> +
> +static inline void hiedmac_dma_slave_init(struct hiedmacv310_dma_chan *chan)
> +{
> +	chan->slave = true;
> +}
> +
> +static void hiedmac_desc_free(struct virt_dma_desc *vd)
> +{
> +	struct transfer_desc *tsf_desc = to_edmac_transfer_desc(&vd->tx);
> +	struct hiedmacv310_dma_chan *edmac_dma_chan = to_edamc_chan(vd->tx.chan);
> +
> +	dma_descriptor_unmap(&vd->tx);
> +	dma_pool_free(edmac_dma_chan->host->pool, tsf_desc->llis_vaddr, tsf_desc->llis_busaddr);
> +	kfree(tsf_desc);
> +}
> +
> +static int hiedmac_init_virt_channels(struct hiedmacv310_driver_data *hiedmac,
> +				      struct dma_device *dmadev,
> +				      unsigned int channels, bool slave)
> +{
> +	struct hiedmacv310_dma_chan *chan = NULL;
> +	int i;
> +
> +	INIT_LIST_HEAD(&dmadev->channels);
> +	for (i = 0; i < channels; i++) {
> +		chan = kzalloc(sizeof(struct hiedmacv310_dma_chan), GFP_KERNEL);
> +		if (!chan) {
> +			hiedmacv310_error("fail to allocate memory for virt channels!");
> +			return -1;
> +		}
> +
> +		chan->host = hiedmac;
> +		chan->state = HIEDMAC_CHAN_IDLE;
> +		chan->signal = -1;
> +
> +		if (slave) {
> +			chan->id = i;
> +			hiedmac_dma_slave_init(chan);
> +		}
> +		chan->virt_chan.desc_free = hiedmac_desc_free;
> +		vchan_init(&chan->virt_chan, dmadev);
> +	}
> +	return 0;
> +}
> +
> +static void hiedmac_free_virt_channels(struct dma_device *dmadev)
> +{
> +	struct hiedmacv310_dma_chan *chan = NULL;
> +	struct hiedmacv310_dma_chan *next = NULL;
> +
> +	list_for_each_entry_safe(chan, next, &dmadev->channels, virt_chan.chan.device_node) {
> +		list_del(&chan->virt_chan.chan.device_node);
> +		kfree(chan);
> +	}
> +}
> +
> +static void hiedmacv310_prep_dma_device(struct platform_device *pdev,
> +					struct hiedmacv310_driver_data *hiedmac)
> +{
> +	dma_cap_set(DMA_MEMCPY, hiedmac->memcpy.cap_mask);
> +	hiedmac->memcpy.dev = &pdev->dev;
> +	hiedmac->memcpy.device_free_chan_resources = hiedmac_free_chan_resources;
> +	hiedmac->memcpy.device_prep_dma_memcpy = hiedmac_prep_dma_memcpy;
> +	hiedmac->memcpy.device_tx_status = hiedmac_tx_status;
> +	hiedmac->memcpy.device_issue_pending = hiedmac_issue_pending;
> +	hiedmac->memcpy.device_config = hiedmac_config;
> +	hiedmac->memcpy.device_pause = hiedmac_pause;
> +	hiedmac->memcpy.device_resume = hiedmac_resume;
> +	hiedmac->memcpy.device_terminate_all = hiedmac_terminate_all;
> +	hiedmac->memcpy.directions = BIT(DMA_MEM_TO_MEM);
> +	hiedmac->memcpy.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +
> +	dma_cap_set(DMA_SLAVE, hiedmac->slave.cap_mask);
> +	dma_cap_set(DMA_CYCLIC, hiedmac->slave.cap_mask);
> +	hiedmac->slave.dev = &pdev->dev;
> +	hiedmac->slave.device_free_chan_resources = hiedmac_free_chan_resources;
> +	hiedmac->slave.device_tx_status = hiedmac_tx_status;
> +	hiedmac->slave.device_issue_pending = hiedmac_issue_pending;
> +	hiedmac->slave.device_prep_slave_sg = hiedmac_prep_slave_sg;
> +	hiedmac->slave.device_prep_dma_cyclic = hiedmac_prep_dma_cyclic;
> +	hiedmac->slave.device_config = hiedmac_config;
> +	hiedmac->slave.device_resume = hiedmac_resume;
> +	hiedmac->slave.device_pause = hiedmac_pause;
> +	hiedmac->slave.device_terminate_all = hiedmac_terminate_all;
> +	hiedmac->slave.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	hiedmac->slave.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +}
> +
> +static int hiedmacv310_init_chan(struct hiedmacv310_driver_data *hiedmac)
> +{
> +	int i, ret;
> +
> +	hiedmac->phy_chans = kzalloc((hiedmac->channels * sizeof(
> +				     struct hiedmacv310_phy_chan)),
> +				     GFP_KERNEL);
> +	if (!hiedmac->phy_chans) {
> +		hiedmacv310_error("malloc for phy chans fail!");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < hiedmac->channels; i++) {
> +		struct hiedmacv310_phy_chan *phy_ch = &hiedmac->phy_chans[i];
> +
> +		phy_ch->id = i;
> +		phy_ch->base = hiedmac->base + hiedmac_cx_base(i);
> +		spin_lock_init(&phy_ch->lock);
> +		phy_ch->serving = NULL;
> +	}
> +
> +	ret = hiedmac_init_virt_channels(hiedmac, &hiedmac->memcpy, hiedmac->channels,
> +					 false);
> +	if (ret) {
> +		hiedmacv310_error("fail to init memory virt channels!");
> +		goto  free_phychans;
> +	}
> +
> +	ret = hiedmac_init_virt_channels(hiedmac, &hiedmac->slave, hiedmac->slave_requests,
> +					 true);
> +	if (ret) {
> +		hiedmacv310_error("fail to init slave virt channels!");
> +		goto  free_memory_virt_channels;
> +	}
> +	return 0;
> +
> +free_memory_virt_channels:
> +	hiedmac_free_virt_channels(&hiedmac->memcpy);
> +free_phychans:
> +	kfree(hiedmac->phy_chans);
> +	return -ENOMEM;
> +}
> +
> +static void hiedmacv310_free_chan(struct hiedmacv310_driver_data *hiedmac)
> +{
> +	hiedmac_free_virt_channels(&hiedmac->slave);
> +	hiedmac_free_virt_channels(&hiedmac->memcpy);
> +	kfree(hiedmac->phy_chans);
> +}
> +
> +static void hiedmacv310_prep_phy_device(const struct hiedmacv310_driver_data *hiedmac)
> +{
> +	clk_prepare_enable(hiedmac->clk);
> +	clk_prepare_enable(hiedmac->axi_clk);
> +	reset_control_deassert(hiedmac->rstc);
> +
> +	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_TC1_RAW);
> +	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_TC2_RAW);
> +	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_ERR1_RAW);
> +	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_ERR2_RAW);
> +	hiedmacv310_writel(HIEDMAC_ALL_CHAN_CLR, hiedmac->base + HIEDMAC_INT_ERR3_RAW);
> +	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
> +			   hiedmac->base + HIEDMAC_INT_TC1_MASK);
> +	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
> +			   hiedmac->base + HIEDMAC_INT_TC2_MASK);
> +	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
> +			   hiedmac->base + HIEDMAC_INT_ERR1_MASK);
> +	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
> +			   hiedmac->base + HIEDMAC_INT_ERR2_MASK);
> +	hiedmacv310_writel(HIEDMAC_INT_ENABLE_ALL_CHAN,
> +			   hiedmac->base + HIEDMAC_INT_ERR3_MASK);
> +}
> +
> +static struct hiedmacv310_driver_data *hiedmacv310_prep_hiedmac_device(struct platform_device *pdev)
> +{
> +	int ret;
> +	struct hiedmacv310_driver_data *hiedmac = NULL;
> +	ssize_t transfer_size;
> +
> +	ret = dma_set_mask_and_coherent(&(pdev->dev), DMA_BIT_MASK(64));
> +	if (ret)
> +		return NULL;
> +
> +	hiedmac = kzalloc(sizeof(*hiedmac), GFP_KERNEL);
> +	if (!hiedmac) {
> +		hiedmacv310_error("malloc for hiedmac fail!");
> +		return NULL;
> +	}
> +
> +	hiedmac->dev = pdev;
> +
> +	ret = get_of_probe(hiedmac);
> +	if (ret) {
> +		hiedmacv310_error("get dts info fail!");
> +		goto free_hiedmac;
> +	}
> +
> +	hiedmacv310_prep_dma_device(pdev, hiedmac);
> +	hiedmac->max_transfer_size = MAX_TRANSFER_BYTES;
> +	transfer_size = MAX_TSFR_LLIS * EDMACV300_LLI_WORDS * sizeof(u32);
> +
> +	hiedmac->pool = dma_pool_create(DRIVER_NAME, &(pdev->dev),
> +					transfer_size, EDMACV300_POOL_ALIGN, 0);
> +	if (!hiedmac->pool) {
> +		hiedmacv310_error("create pool fail!");
> +		goto free_hiedmac;
> +	}
> +
> +	ret = hiedmacv310_init_chan(hiedmac);
> +	if (ret)
> +		goto free_pool;
> +
> +	return hiedmac;
> +
> +free_pool:
> +	dma_pool_destroy(hiedmac->pool);
> +free_hiedmac:
> +	kfree(hiedmac);
> +	return NULL;
> +}
> +
> +static void free_hiedmac_device(struct hiedmacv310_driver_data *hiedmac)
> +{
> +	hiedmacv310_free_chan(hiedmac);
> +	dma_pool_destroy(hiedmac->pool);
> +	kfree(hiedmac);
> +}
> +
> +static int __init hiedmacv310_probe(struct platform_device *pdev)
> +{
> +	int ret;
> +	struct hiedmacv310_driver_data *hiedmac = NULL;
> +
> +	hiedmac = hiedmacv310_prep_hiedmac_device(pdev);
> +	if (hiedmac == NULL)
> +		return -ENOMEM;
> +
> +	ret = request_irq(hiedmac->irq, hiemdacv310_irq, 0, DRIVER_NAME, hiedmac);
> +	if (ret) {
> +		hiedmacv310_error("fail to request irq");
> +		goto free_hiedmac;
> +	}
> +	hiedmacv310_prep_phy_device(hiedmac);
> +	ret = dma_async_device_register(&hiedmac->memcpy);
> +	if (ret) {
> +		hiedmacv310_error("%s failed to register memcpy as an async device - %d",
> +				  __func__, ret);
> +		goto free_irq_res;
> +	}
> +
> +	ret = dma_async_device_register(&hiedmac->slave);
> +	if (ret) {
> +		hiedmacv310_error("%s failed to register slave as an async device - %d",
> +				  __func__, ret);
> +		goto free_memcpy_device;
> +	}
> +	return 0;
> +
> +free_memcpy_device:
> +	dma_async_device_unregister(&hiedmac->memcpy);
> +free_irq_res:
> +	free_irq(hiedmac->irq, hiedmac);
> +free_hiedmac:
> +	free_hiedmac_device(hiedmac);
> +	return -ENOMEM;
> +}
> +
> +static int hiemda_remove(struct platform_device *pdev)
> +{
> +	int err = 0;
> +	return err;
> +}
> +
> +static const struct of_device_id hiedmacv310_match[] = {
> +	{ .compatible = "hisilicon,hiedmacv310" },
> +	{},
> +};
> +
> +static struct platform_driver hiedmacv310_driver = {
> +	.remove = hiemda_remove,
> +	.driver = {
> +		.name = "hiedmacv310",
> +		.of_match_table = hiedmacv310_match,
> +	},
> +};
> +
> +static int __init hiedmacv310_init(void)
> +{
> +	return platform_driver_probe(&hiedmacv310_driver, hiedmacv310_probe);
> +}
> +subsys_initcall(hiedmacv310_init);
> +
> +static void __exit hiedmacv310_exit(void)
> +{
> +	platform_driver_unregister(&hiedmacv310_driver);
> +}
> +module_exit(hiedmacv310_exit);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Hisilicon");
> diff --git a/drivers/dma/hiedmacv310.h b/drivers/dma/hiedmacv310.h
> new file mode 100644
> index 000000000000..99e1720263e3
> --- /dev/null
> +++ b/drivers/dma/hiedmacv310.h
> @@ -0,0 +1,136 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * The Hiedma Controller v310 Device Driver for HiSilicon
> + *
> + * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
> + *
> + * Author: Dongjiu Geng <gengdongjiu@huawei.com>
> + */
> +#ifndef __HIEDMACV310_H__
> +#define __HIEDMACV310_H__
> +
> +/* debug control */
> +#define HIEDMACV310_CONFIG_TRACE_LEVEL 3
> +#define HIEDMACV310_TRACE_LEVEL 0
> +#define HIEDMACV310_REG_TRACE_LEVEL 3
> +
> +#ifdef DEBUG_HIEDMAC
> +#define hiedmacv310_trace(level, msg, a...) do { \
> +	if ((level) >= HIEDMACV310_TRACE_LEVEL) { \
> +		pr_info("%s: %d: " msg,  __func__, __LINE__, ## a); \
> +	} \
> +} while (0)
> +
> +#define hiedmacv310_assert(cond) do { \
> +	if (!(cond)) { \
> +		pr_info("Assert:hiedmacv310:%s:%d\n", \
> +				__func__, \
> +				__LINE__); \
> +		__WARN(); \
> +	} \
> +} while (0)
> +
> +#define hiedmacv310_error(s, a...) \
> +	pr_err("hiedmacv310:%s:%d: " s, __func__, __LINE__, ## a)
> +
> +#define hiedmacv310_info(s, a...) \
> +	pr_info("hiedmacv310:%s:%d: " s, __func__, __LINE__, ## a)
> +
> +#else
> +
> +#define hiedmacv310_trace(level, msg, a...)
> +#define hiedmacv310_assert(cond)
> +#define hiedmacv310_error(s, a...)
> +
> +#endif
> +
> +
> +#define hiedmacv310_readl(addr) ((unsigned int)readl((void *)(addr)))
> +
> +#define hiedmacv310_writel(v, addr) writel(v, (void *)(addr))
> +
> +
> +#define MAX_TRANSFER_BYTES  0xffff
> +
> +/* reg offset */
> +#define HIEDMAC_INT_STAT                  0x0
> +#define HIEDMAC_INT_TC1                   0x4
> +#define HIEDMAC_INT_TC2                   0x8
> +#define HIEDMAC_INT_ERR1                  0xc
> +#define HIEDMAC_INT_ERR2                  0x10
> +#define HIEDMAC_INT_ERR3                  0x14
> +
> +#define HIEDMAC_INT_TC1_MASK              0x18
> +#define HIEDMAC_INT_TC2_MASK              0x1c
> +#define HIEDMAC_INT_ERR1_MASK             0x20
> +#define HIEDMAC_INT_ERR2_MASK             0x24
> +#define HIEDMAC_INT_ERR3_MASK             0x28
> +
> +#define HIEDMAC_INT_TC1_RAW               0x600
> +#define HIEDMAC_INT_TC2_RAW               0x608
> +#define HIEDMAC_INT_ERR1_RAW              0x610
> +#define HIEDMAC_INT_ERR2_RAW              0x618
> +#define HIEDMAC_INT_ERR3_RAW              0x620
> +
> +#define hiedmac_cx_curr_cnt0(cn)          (0x404 + (cn) * 0x20)
> +#define hiedmac_cx_curr_src_addr_l(cn)    (0x408 + (cn) * 0x20)
> +#define hiedmac_cx_curr_src_addr_h(cn)    (0x40c + (cn) * 0x20)
> +#define hiedmac_cx_curr_dest_addr_l(cn)    (0x410 + (cn) * 0x20)
> +#define hiedmac_cx_curr_dest_addr_h(cn)    (0x414 + (cn) * 0x20)
> +
> +#define HIEDMAC_CH_PRI                    0x688
> +#define HIEDMAC_CH_STAT                   0x690
> +#define HIEDMAC_DMA_CTRL                  0x698
> +
> +#define hiedmac_cx_base(cn)               (0x800 + (cn) * 0x40)
> +#define hiedmac_cx_lli_l(cn)              (0x800 + (cn) * 0x40)
> +#define hiedmac_cx_lli_h(cn)              (0x804 + (cn) * 0x40)
> +#define hiedmac_cx_cnt0(cn)               (0x81c + (cn) * 0x40)
> +#define hiedmac_cx_src_addr_l(cn)         (0x820 + (cn) * 0x40)
> +#define hiedmac_cx_src_addr_h(cn)         (0x824 + (cn) * 0x40)
> +#define hiedmac_cx_dest_addr_l(cn)        (0x828 + (cn) * 0x40)
> +#define hiedmac_cx_dest_addr_h(cn)        (0x82c + (cn) * 0x40)
> +#define hiedmac_cx_config(cn)             (0x830 + (cn) * 0x40)
> +
> +#define HIEDMAC_ALL_CHAN_CLR        0xff
> +#define HIEDMAC_INT_ENABLE_ALL_CHAN 0xff
> +
> +
> +#define HIEDMAC_CONFIG_SRC_INC          (1 << 31)
> +#define HIEDMAC_CONFIG_DST_INC          (1 << 30)

GENMASK() ?
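i.e. BIT() for the single-bit flags, GENMASK() where a field spans several bits:

	#define HIEDMAC_CONFIG_SRC_INC	BIT(31)
	#define HIEDMAC_CONFIG_DST_INC	BIT(30)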

> +
> +#define HIEDMAC_CONFIG_SRC_WIDTH_SHIFT  16
> +#define HIEDMAC_CONFIG_DST_WIDTH_SHIFT  12
> +#define HIEDMAC_WIDTH_8BIT              0b0
> +#define HIEDMAC_WIDTH_16BIT             0b1
> +#define HIEDMAC_WIDTH_32BIT             0b10
> +#define HIEDMAC_WIDTH_64BIT             0b11

BIT() or GENMASK()

> +#ifdef CONFIG_64BIT
> +#define HIEDMAC_MEM_BIT_WIDTH HIEDMAC_WIDTH_64BIT
> +#else
> +#define HIEDMAC_MEM_BIT_WIDTH HIEDMAC_WIDTH_32BIT
> +#endif
> +
> +#define HIEDMAC_MAX_BURST_WIDTH         16
> +#define HIEDMAC_MIN_BURST_WIDTH         1
> +#define HIEDMAC_CONFIG_SRC_BURST_SHIFT  24
> +#define HIEDMAC_CONFIG_DST_BURST_SHIFT  20
> +
> +#define HIEDMAC_LLI_ALIGN   0x40
> +#define HIEDMAC_LLI_DISABLE 0x0
> +#define HIEDMAC_LLI_ENABLE 0x2
> +
> +#define HIEDMAC_CXCONFIG_SIGNAL_SHIFT   0x4

Don't use open-coded shifts; define the bitmask and use the helpers in include/linux/bitfield.h
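Something along these lines; the field position and width below are purely
illustrative and must be taken from the hardware manual:

	/* assumed: a 6-bit signal field starting at bit 4 */
	#define HIEDMAC_CXCONFIG_SIGNAL_MASK	GENMASK(9, 4)

	/* in the .c file, with <linux/bitfield.h> included */
	config |= FIELD_PREP(HIEDMAC_CXCONFIG_SIGNAL_MASK, signal);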

-- 
~Vinod

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7 2/4] clk: hisilicon: Add clock driver for hi3559A SoC
  2020-12-15 11:09 ` [PATCH v7 2/4] clk: hisilicon: Add clock driver for hi3559A SoC Dongjiu Geng
@ 2021-01-12 19:48   ` Stephen Boyd
  0 siblings, 0 replies; 12+ messages in thread
From: Stephen Boyd @ 2021-01-12 19:48 UTC (permalink / raw)
  To: dan.j.williams, devicetree, dmaengine, gengdongjiu, linux-clk,
	linux-kernel, mturquette, p.zabel, robh+dt, vkoul

Quoting Dongjiu Geng (2020-12-15 03:09:45)
> diff --git a/drivers/clk/hisilicon/Makefile b/drivers/clk/hisilicon/Makefile
> index b2441b99f3d5..bc101833b35e 100644
> --- a/drivers/clk/hisilicon/Makefile
> +++ b/drivers/clk/hisilicon/Makefile
> @@ -17,3 +17,4 @@ obj-$(CONFIG_COMMON_CLK_HI6220)       += clk-hi6220.o
>  obj-$(CONFIG_RESET_HISI)       += reset.o
>  obj-$(CONFIG_STUB_CLK_HI6220)  += clk-hi6220-stub.o
>  obj-$(CONFIG_STUB_CLK_HI3660)  += clk-hi3660-stub.o
> +obj-$(CONFIG_COMMON_CLK_HI3559A)       += clk-hi3559a.o

Can this file be sorted somehow?

> diff --git a/drivers/clk/hisilicon/clk-hi3559a.c b/drivers/clk/hisilicon/clk-hi3559a.c
> new file mode 100644
> index 000000000000..d7693e488006
> --- /dev/null
> +++ b/drivers/clk/hisilicon/clk-hi3559a.c
> @@ -0,0 +1,865 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Hisilicon Hi3559A clock driver
> + *
> + * Copyright (c) 2019-2020, Huawei Tech. Co., Ltd.
> + *
> + * Author: Dongjiu Geng <gengdongjiu@huawei.com>
> + */
> +
> +#include <linux/clk-provider.h>
> +#include <linux/module.h>
> +#include <linux/of_device.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +
> +#include <dt-bindings/clock/hi3559av100-clock.h>
> +
> +#include "clk.h"
> +#include "crg.h"
> +#include "reset.h"
> +
> +#define CRG_BASE_ADDR  0x18020000
> +
> +struct hi3559av100_pll_clock {
> +       u32     id;

unsigned int?

> +       const char  *name;
> +       const char  *parent_name;
> +       u32     ctrl_reg1;
> +       u8      frac_shift;
> +       u8      frac_width;
> +       u8      postdiv1_shift;
> +       u8      postdiv1_width;
> +       u8      postdiv2_shift;
> +       u8      postdiv2_width;
> +       u32     ctrl_reg2;
> +       u8      fbdiv_shift;
> +       u8      fbdiv_width;
> +       u8      refdiv_shift;
> +       u8      refdiv_width;
> +};
> +
> +struct hi3559av100_clk_pll {
> +       struct clk_hw   hw;
> +       u32     id;
> +       void __iomem    *ctrl_reg1;
> +       u8      frac_shift;
> +       u8      frac_width;
> +       u8      postdiv1_shift;
> +       u8      postdiv1_width;
> +       u8      postdiv2_shift;
> +       u8      postdiv2_width;
> +       void __iomem    *ctrl_reg2;
> +       u8      fbdiv_shift;
> +       u8      fbdiv_width;
> +       u8      refdiv_shift;
> +       u8      refdiv_width;
> +};
> +
> +/* soc clk config */
> +static const struct hisi_fixed_rate_clock hi3559av100_fixed_rate_clks_crg[] = {
> +       { HI3559AV100_FIXED_1188M, "1188m",   NULL, 0, 1188000000, },
> +       { HI3559AV100_FIXED_1000M, "1000m",   NULL, 0, 1000000000, },
> +       { HI3559AV100_FIXED_842M, "842m",    NULL, 0, 842000000, },
> +       { HI3559AV100_FIXED_792M, "792m",    NULL, 0, 792000000, },
> +       { HI3559AV100_FIXED_750M, "750m",    NULL, 0, 750000000, },
> +       { HI3559AV100_FIXED_710M, "710m",    NULL, 0, 710000000, },
> +       { HI3559AV100_FIXED_680M, "680m",    NULL, 0, 680000000, },
> +       { HI3559AV100_FIXED_667M, "667m",    NULL, 0, 667000000, },
> +       { HI3559AV100_FIXED_631M, "631m",    NULL, 0, 631000000, },
> +       { HI3559AV100_FIXED_600M, "600m",    NULL, 0, 600000000, },
> +       { HI3559AV100_FIXED_568M, "568m",    NULL, 0, 568000000, },
> +       { HI3559AV100_FIXED_500M, "500m",    NULL, 0, 500000000, },
> +       { HI3559AV100_FIXED_475M, "475m",    NULL, 0, 475000000, },
> +       { HI3559AV100_FIXED_428M, "428m",    NULL, 0, 428000000, },
> +       { HI3559AV100_FIXED_400M, "400m",    NULL, 0, 400000000, },
> +       { HI3559AV100_FIXED_396M, "396m",    NULL, 0, 396000000, },
> +       { HI3559AV100_FIXED_300M, "300m",    NULL, 0, 300000000, },
> +       { HI3559AV100_FIXED_250M, "250m",    NULL, 0, 250000000, },
> +       { HI3559AV100_FIXED_200M, "200m",    NULL, 0, 200000000, },
> +       { HI3559AV100_FIXED_198M, "198m",    NULL, 0, 198000000, },
> +       { HI3559AV100_FIXED_187p5M, "187p5m",  NULL, 0, 187500000, },
> +       { HI3559AV100_FIXED_150M, "150m",    NULL, 0, 150000000, },
> +       { HI3559AV100_FIXED_148p5M, "148p5m",  NULL, 0, 1485000000, },
> +       { HI3559AV100_FIXED_125M, "125m",    NULL, 0, 125000000, },
> +       { HI3559AV100_FIXED_107M, "107m",    NULL, 0, 107000000, },
> +       { HI3559AV100_FIXED_100M, "100m",    NULL, 0, 100000000, },
> +       { HI3559AV100_FIXED_99M, "99m",     NULL, 0, 99000000, },
> +       { HI3559AV100_FIXED_75M, "75m",  NULL, 0, 75000000, },
> +       { HI3559AV100_FIXED_74p25M, "74p25m",  NULL, 0, 74250000, },
> +       { HI3559AV100_FIXED_72M, "72m",     NULL, 0, 72000000, },
> +       { HI3559AV100_FIXED_60M, "60m",     NULL, 0, 60000000, },
> +       { HI3559AV100_FIXED_54M, "54m",     NULL, 0, 54000000, },
> +       { HI3559AV100_FIXED_50M, "50m",     NULL, 0, 50000000, },
> +       { HI3559AV100_FIXED_49p5M, "49p5m",   NULL, 0, 49500000, },
> +       { HI3559AV100_FIXED_37p125M, "37p125m", NULL, 0, 37125000, },
> +       { HI3559AV100_FIXED_36M, "36m",     NULL, 0, 36000000, },
> +       { HI3559AV100_FIXED_32p4M, "32p4m",   NULL, 0, 32400000, },
> +       { HI3559AV100_FIXED_27M, "27m",     NULL, 0, 27000000, },
> +       { HI3559AV100_FIXED_25M, "25m",     NULL, 0, 25000000, },
> +       { HI3559AV100_FIXED_24M, "24m",     NULL, 0, 24000000, },
> +       { HI3559AV100_FIXED_12M, "12m",     NULL, 0, 12000000, },
> +       { HI3559AV100_FIXED_3M,  "3m",      NULL, 0, 3000000, },
> +       { HI3559AV100_FIXED_1p6M, "1p6m",    NULL, 0, 1600000, },
> +       { HI3559AV100_FIXED_400K, "400k",    NULL, 0, 400000, },
> +       { HI3559AV100_FIXED_100K, "100k",    NULL, 0, 100000, },
> +};
> +
> +
> +static const char *fmc_mux_p[] __initconst = {
> +       "24m", "75m", "125m", "150m", "200m", "250m", "300m", "400m"
> +};
> +static u32 fmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7};

const?

> +
> +static const char *mmc_mux_p[] __initconst = {
> +       "100k", "25m", "49p5m", "99m", "187p5m", "150m", "198m", "400k"
> +};
> +static u32 mmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7};

const?

> +
> +static const char *sysapb_mux_p[] __initconst = {
> +       "24m", "50m",
> +};
> +static u32 sysapb_mux_table[] = {0, 1};

const?

> +
> +static const char *sysbus_mux_p[] __initconst = {
> +       "24m", "300m"
> +};
> +static u32 sysbus_mux_table[] = {0, 1};

const?

> +
> +static const char *uart_mux_p[] __initconst = {"50m", "24m", "3m"};
> +static u32 uart_mux_table[] = {0, 1, 2};

const?

> +
> +static const char *a73_clksel_mux_p[] __initconst = {
> +       "24m", "apll", "1000m"
> +};
> +static u32 a73_clksel_mux_table[] = {0, 1, 2};

const?

Please also add space after { and before }.

> +
> +static struct hisi_mux_clock hi3559av100_mux_clks_crg[] __initdata = {
> +       {
> +               HI3559AV100_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p),
> +               CLK_SET_RATE_PARENT, 0x170, 2, 3, 0, fmc_mux_table,
> +       },
> +       {
> +               HI3559AV100_MMC0_MUX, "mmc0_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
> +               CLK_SET_RATE_PARENT, 0x1a8, 24, 3, 0, mmc_mux_table,
> +       },
> +       {
> +               HI3559AV100_MMC1_MUX, "mmc1_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
> +               CLK_SET_RATE_PARENT, 0x1ec, 24, 3, 0, mmc_mux_table,
> +       },
> +
> +       {
> +               HI3559AV100_MMC2_MUX, "mmc2_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
> +               CLK_SET_RATE_PARENT, 0x214, 24, 3, 0, mmc_mux_table,
> +       },
> +
> +       {
> +               HI3559AV100_MMC3_MUX, "mmc3_mux", mmc_mux_p, ARRAY_SIZE(mmc_mux_p),
> +               CLK_SET_RATE_PARENT, 0x23c, 24, 3, 0, mmc_mux_table,
> +       },
> +
> +       {
> +               HI3559AV100_SYSAPB_MUX, "sysapb_mux", sysapb_mux_p, ARRAY_SIZE(sysapb_mux_p),
> +               CLK_SET_RATE_PARENT, 0xe8, 3, 1, 0, sysapb_mux_table
> +       },
> +
> +       {
> +               HI3559AV100_SYSBUS_MUX, "sysbus_mux", sysbus_mux_p, ARRAY_SIZE(sysbus_mux_p),
> +               CLK_SET_RATE_PARENT, 0xe8, 0, 1, 0, sysbus_mux_table
> +       },
> +
> +       {
> +               HI3559AV100_UART_MUX, "uart_mux", uart_mux_p, ARRAY_SIZE(uart_mux_p),
> +               CLK_SET_RATE_PARENT, 0x198, 28, 2, 0, uart_mux_table
> +       },
> +
> +       {
> +               HI3559AV100_A73_MUX, "a73_mux", a73_clksel_mux_p, ARRAY_SIZE(a73_clksel_mux_p),
> +               CLK_SET_RATE_PARENT, 0xe4, 0, 2, 0, a73_clksel_mux_table
> +       },
> +};
> +
> +static struct hisi_fixed_factor_clock hi3559av100_fixed_factor_clks[] __initdata
> +       = {
> +};

This looks weird; why an empty array?

> +
> +static struct hisi_gate_clock hi3559av100_gate_clks[] __initdata = {
> +       {
> +               HI3559AV100_FMC_CLK, "clk_fmc", "fmc_mux",
> +               CLK_SET_RATE_PARENT, 0x170, 1, 0,
> +       },
> +       {
> +               HI3559AV100_MMC0_CLK, "clk_mmc0", "mmc0_mux",
> +               CLK_SET_RATE_PARENT, 0x1a8, 28, 0,
> +       },
> +       {
> +               HI3559AV100_MMC1_CLK, "clk_mmc1", "mmc1_mux",
> +               CLK_SET_RATE_PARENT, 0x1ec, 28, 0,
> +       },
> +       {
> +               HI3559AV100_MMC2_CLK, "clk_mmc2", "mmc2_mux",
> +               CLK_SET_RATE_PARENT, 0x214, 28, 0,
> +       },
> +       {
> +               HI3559AV100_MMC3_CLK, "clk_mmc3", "mmc3_mux",
> +               CLK_SET_RATE_PARENT, 0x23c, 28, 0,
> +       },
> +       {
> +               HI3559AV100_UART0_CLK, "clk_uart0", "uart_mux",
> +               CLK_SET_RATE_PARENT, 0x198, 23, 0,
> +       },
> +       {
> +               HI3559AV100_UART1_CLK, "clk_uart1", "uart_mux",
> +               CLK_SET_RATE_PARENT, 0x198, 24, 0,
> +       },
> +       {
> +               HI3559AV100_UART2_CLK, "clk_uart2", "uart_mux",
> +               CLK_SET_RATE_PARENT, 0x198, 25, 0,
> +       },
> +       {
> +               HI3559AV100_UART3_CLK, "clk_uart3", "uart_mux",
> +               CLK_SET_RATE_PARENT, 0x198, 26, 0,
> +       },
> +       {
> +               HI3559AV100_UART4_CLK, "clk_uart4", "uart_mux",
> +               CLK_SET_RATE_PARENT, 0x198, 27, 0,
> +       },
> +       {
> +               HI3559AV100_ETH_CLK, "clk_eth", NULL,
> +               CLK_SET_RATE_PARENT, 0x0174, 1, 0,
> +       },
> +       {
> +               HI3559AV100_ETH_MACIF_CLK, "clk_eth_macif", NULL,
> +               CLK_SET_RATE_PARENT, 0x0174, 5, 0,
> +       },
> +       {
> +               HI3559AV100_ETH1_CLK, "clk_eth1", NULL,
> +               CLK_SET_RATE_PARENT, 0x0174, 3, 0,
> +       },
> +       {
> +               HI3559AV100_ETH1_MACIF_CLK, "clk_eth1_macif", NULL,
> +               CLK_SET_RATE_PARENT, 0x0174, 7, 0,
> +       },
> +       {
> +               HI3559AV100_I2C0_CLK, "clk_i2c0", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 16, 0,
> +       },
> +       {
> +               HI3559AV100_I2C1_CLK, "clk_i2c1", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 17, 0,
> +       },
> +       {
> +               HI3559AV100_I2C2_CLK, "clk_i2c2", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 18, 0,
> +       },
> +       {
> +               HI3559AV100_I2C3_CLK, "clk_i2c3", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 19, 0,
> +       },
> +       {
> +               HI3559AV100_I2C4_CLK, "clk_i2c4", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 20, 0,
> +       },
> +       {
> +               HI3559AV100_I2C5_CLK, "clk_i2c5", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 21, 0,
> +       },
> +       {
> +               HI3559AV100_I2C6_CLK, "clk_i2c6", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 22, 0,
> +       },
> +       {
> +               HI3559AV100_I2C7_CLK, "clk_i2c7", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 23, 0,
> +       },
> +       {
> +               HI3559AV100_I2C8_CLK, "clk_i2c8", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 24, 0,
> +       },
> +       {
> +               HI3559AV100_I2C9_CLK, "clk_i2c9", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 25, 0,
> +       },
> +       {
> +               HI3559AV100_I2C10_CLK, "clk_i2c10", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 26, 0,
> +       },
> +       {
> +               HI3559AV100_I2C11_CLK, "clk_i2c11", "50m",
> +               CLK_SET_RATE_PARENT, 0x01a0, 27, 0,
> +       },
> +       {
> +               HI3559AV100_SPI0_CLK, "clk_spi0", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 16, 0,
> +       },
> +       {
> +               HI3559AV100_SPI1_CLK, "clk_spi1", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 17, 0,
> +       },
> +       {
> +               HI3559AV100_SPI2_CLK, "clk_spi2", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 18, 0,
> +       },
> +       {
> +               HI3559AV100_SPI3_CLK, "clk_spi3", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 19, 0,
> +       },
> +       {
> +               HI3559AV100_SPI4_CLK, "clk_spi4", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 20, 0,
> +       },
> +       {
> +               HI3559AV100_SPI5_CLK, "clk_spi5", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 21, 0,
> +       },
> +       {
> +               HI3559AV100_SPI6_CLK, "clk_spi6", "100m",
> +               CLK_SET_RATE_PARENT, 0x0198, 22, 0,
> +       },
> +       {
> +               HI3559AV100_EDMAC_AXICLK, "axi_clk_edmac", NULL,
> +               CLK_SET_RATE_PARENT, 0x16c, 6, 0,
> +       },
> +       {
> +               HI3559AV100_EDMAC_CLK, "clk_edmac", NULL,
> +               CLK_SET_RATE_PARENT, 0x16c, 5, 0,

Is the CLK_SET_RATE_PARENT flag necessary all the time? Do all of these
clks actually have parents that the rate should be set on?
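
If the rate never needs to propagate upwards, a leaf gate off the fixed
"50m" clock would simply pass 0 for the flags, e.g. (illustrative only):

        {
                HI3559AV100_I2C0_CLK, "clk_i2c0", "50m",
                0, 0x01a0, 16, 0,
        },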

> +       },
> +       {
> +               HI3559AV100_EDMAC1_AXICLK, "axi_clk_edmac1", NULL,
> +               CLK_SET_RATE_PARENT, 0x16c, 9, 0,
> +       },
> +       {
> +               HI3559AV100_EDMAC1_CLK, "clk_edmac1", NULL,
> +               CLK_SET_RATE_PARENT, 0x16c, 8, 0,
> +       },
> +       {
> +               HI3559AV100_VDMAC_CLK, "clk_vdmac", NULL,
> +               CLK_SET_RATE_PARENT, 0x14c, 5, 0,
> +       },
> +};
> +
> +static struct hi3559av100_pll_clock hi3559av100_pll_clks[] __initdata = {
> +       {
> +               HI3559AV100_APLL_CLK, "apll", NULL, 0x0, 0, 24, 24, 3, 28, 3,
> +               0x4, 0, 12, 12, 6
> +       },
> +       {
> +               HI3559AV100_GPLL_CLK, "gpll", NULL, 0x20, 0, 24, 24, 3, 28, 3,
> +               0x24, 0, 12, 12, 6
> +       },
> +};
> +
> +#define to_pll_clk(_hw) container_of(_hw, struct hi3559av100_clk_pll, hw)
> +static void hi3559av100_calc_pll(u32 *frac_val, u32 *postdiv1_val,
> +                                u32 *postdiv2_val,
> +                                u32 *fbdiv_val, u32 *refdiv_val, u64 rate)
> +{
> +       u64 rem;
> +
> +       *postdiv1_val = 2;
> +       *postdiv2_val = 1;
> +
> +       rate = rate * ((*postdiv1_val) * (*postdiv2_val));
> +
> +       *frac_val = 0;
> +       rem = do_div(rate, 1000000);
> +       rem = do_div(rate, 24);
> +       *fbdiv_val = rate;
> +       *refdiv_val = 1;
> +       rem = rem * (1 << 24);
> +       do_div(rem, 24);

What is the significance of 24? Is it a mask width? Please add a define
if so.
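
If those 24s really are the fractional field width and the 24 MHz
reference expressed in MHz, a couple of defines would make that obvious,
e.g. (names are only a suggestion):

#define PLL_FRAC_WIDTH		24
#define PLL_REF_CLK_MHZ		24

        rem = rem * (1 << PLL_FRAC_WIDTH);
        do_div(rem, PLL_REF_CLK_MHZ);
        *frac_val = rem;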

> +       *frac_val = rem;
> +}
> +
> +static int clk_pll_set_rate(struct clk_hw *hw,
> +                           unsigned long rate,
> +                           unsigned long parent_rate)
> +{
> +       struct hi3559av100_clk_pll *clk = to_pll_clk(hw);
> +       u32 frac_val, postdiv1_val, postdiv2_val, fbdiv_val, refdiv_val;
> +       u32 val;
> +
> +       postdiv1_val = postdiv2_val = 0;
> +
> +       hi3559av100_calc_pll(&frac_val, &postdiv1_val, &postdiv2_val,
> +                            &fbdiv_val, &refdiv_val, (u64)rate);
> +
> +       val = readl_relaxed(clk->ctrl_reg1);
> +       val &= ~(((1 << clk->frac_width) - 1) << clk->frac_shift);
> +       val &= ~(((1 << clk->postdiv1_width) - 1) << clk->postdiv1_shift);
> +       val &= ~(((1 << clk->postdiv2_width) - 1) << clk->postdiv2_shift);

Make local variables for these masks please.
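
i.e. something along these lines (sketch), so the read-modify-write
sequence stays readable:

        u32 frac_mask = ((1 << clk->frac_width) - 1) << clk->frac_shift;
        u32 postdiv1_mask = ((1 << clk->postdiv1_width) - 1) << clk->postdiv1_shift;
        u32 postdiv2_mask = ((1 << clk->postdiv2_width) - 1) << clk->postdiv2_shift;

        val = readl_relaxed(clk->ctrl_reg1);
        val &= ~(frac_mask | postdiv1_mask | postdiv2_mask);

(or build them with GENMASK()).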

> +
> +       val |= frac_val << clk->frac_shift;
> +       val |= postdiv1_val << clk->postdiv1_shift;
> +       val |= postdiv2_val << clk->postdiv2_shift;
> +       writel_relaxed(val, clk->ctrl_reg1);
> +
> +       val = readl_relaxed(clk->ctrl_reg2);
> +       val &= ~(((1 << clk->fbdiv_width) - 1) << clk->fbdiv_shift);
> +       val &= ~(((1 << clk->refdiv_width) - 1) << clk->refdiv_shift);
> +
> +       val |= fbdiv_val << clk->fbdiv_shift;
> +       val |= refdiv_val << clk->refdiv_shift;
> +       writel_relaxed(val, clk->ctrl_reg2);
> +
> +       return 0;
> +}
> +
> +static unsigned long clk_pll_recalc_rate(struct clk_hw *hw,
> +               unsigned long parent_rate)
> +{
> +       struct hi3559av100_clk_pll *clk = to_pll_clk(hw);
> +       u64 frac_val, fbdiv_val, refdiv_val;
> +       u32 postdiv1_val, postdiv2_val;
> +       u32 val;
> +       u64 tmp, rate;
> +
> +       val = readl_relaxed(clk->ctrl_reg1);
> +       val = val >> clk->frac_shift;
> +       val &= ((1 << clk->frac_width) - 1);
> +       frac_val = val;
> +
> +       val = readl_relaxed(clk->ctrl_reg1);
> +       val = val >> clk->postdiv1_shift;
> +       val &= ((1 << clk->postdiv1_width) - 1);
> +       postdiv1_val = val;
> +
> +       val = readl_relaxed(clk->ctrl_reg1);
> +       val = val >> clk->postdiv2_shift;
> +       val &= ((1 << clk->postdiv2_width) - 1);
> +       postdiv2_val = val;
> +
> +       val = readl_relaxed(clk->ctrl_reg2);
> +       val = val >> clk->fbdiv_shift;
> +       val &= ((1 << clk->fbdiv_width) - 1);
> +       fbdiv_val = val;
> +
> +       val = readl_relaxed(clk->ctrl_reg2);
> +       val = val >> clk->refdiv_shift;
> +       val &= ((1 << clk->refdiv_width) - 1);
> +       refdiv_val = val;
> +
> +       /* rate = 24000000 * (fbdiv + frac / (1<<24) ) / refdiv  */
> +       rate = 0;
> +       tmp = 24000000 * fbdiv_val + (24000000 * frac_val) / (1 << 24);
> +       rate += tmp;
> +       do_div(rate, refdiv_val);
> +       do_div(rate, postdiv1_val * postdiv2_val);
> +
> +       return rate;
> +}
> +
> +static int clk_pll_determine_rate(struct clk_hw *hw,
> +                                 struct clk_rate_request *req)
> +{
> +       return req->rate;

That's not really how it's supposed to work. We should be calculating
the rate that can be achieved in the hardware instead of blindly telling
the framework that any rate is supported.
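
Roughly (untested sketch): run the same calculation that set_rate will
do and report the rate the hardware will actually produce, e.g.:

static int clk_pll_determine_rate(struct clk_hw *hw,
                                  struct clk_rate_request *req)
{
        u32 frac_val, postdiv1_val, postdiv2_val, fbdiv_val, refdiv_val;
        u64 rate;

        hi3559av100_calc_pll(&frac_val, &postdiv1_val, &postdiv2_val,
                             &fbdiv_val, &refdiv_val, (u64)req->rate);

        /* Mirror recalc_rate(): what do these divider values really give? */
        rate = 24000000ULL * fbdiv_val + ((24000000ULL * frac_val) >> 24);
        do_div(rate, refdiv_val);
        do_div(rate, postdiv1_val * postdiv2_val);

        req->rate = rate;

        return 0;
}

Also note that determine_rate is supposed to return 0 or a negative
errno, not the rate itself.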

> +}
> +
> +static const struct clk_ops clk_pll_ops = {

Maybe hisi_clk_pll_ops? Or hi3559_pll_ops?

> +       .set_rate = clk_pll_set_rate,
> +       .determine_rate = clk_pll_determine_rate,
> +       .recalc_rate = clk_pll_recalc_rate,
> +};
> +
> +static void hisi_clk_register_pll(struct hi3559av100_pll_clock *clks,
> +                          int nums, struct hisi_clock_data *data)
> +{
> +       void __iomem *base = data->base;
> +       int i;
> +
> +       for (i = 0; i < nums; i++) {
> +               struct hi3559av100_clk_pll *p_clk = NULL;
> +               struct clk *clk = NULL;
> +               struct clk_init_data init;

Please move these local variables to the start of the function instead
of living in this loop scope.

> +
> +               p_clk = kzalloc(sizeof(*p_clk), GFP_KERNEL);

Any chance we can allocate an array of p_clk at the start instead of
many smaller ones?
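
Something like this at the top of the function (sketch), indexing with
&p_clk[i] inside the loop and dropping the per-iteration kzalloc()/kfree():

        p_clk = kcalloc(nums, sizeof(*p_clk), GFP_KERNEL);
        if (!p_clk)
                return;

(or devm_kcalloc() once a struct device is passed in).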

> +               if (!p_clk)
> +                       return;
> +
> +               init.name = clks[i].name;
> +               init.flags = 0;
> +               init.parent_names =
> +                       (clks[i].parent_name ? &clks[i].parent_name : NULL);
> +               init.num_parents = (clks[i].parent_name ? 1 : 0);

Can we get the parent name from DT by using clk_parent_data instead?
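
i.e. describe the parent with clk_parent_data and let the framework
resolve it from the node's clocks/clock-names properties, something like
this (the "ref"/"clk_ref" names are purely illustrative, the binding
doesn't define them today):

        static const struct clk_parent_data pll_parent_data[] = {
                { .fw_name = "ref", .name = "clk_ref" },
        };

        init.parent_data = pll_parent_data;
        init.num_parents = ARRAY_SIZE(pll_parent_data);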

> +               init.ops = &clk_pll_ops;
> +
> +               p_clk->ctrl_reg1 = base + clks[i].ctrl_reg1;
> +               p_clk->frac_shift = clks[i].frac_shift;
> +               p_clk->frac_width = clks[i].frac_width;
> +               p_clk->postdiv1_shift = clks[i].postdiv1_shift;
> +               p_clk->postdiv1_width = clks[i].postdiv1_width;
> +               p_clk->postdiv2_shift = clks[i].postdiv2_shift;
> +               p_clk->postdiv2_width = clks[i].postdiv2_width;
> +
> +               p_clk->ctrl_reg2 = base + clks[i].ctrl_reg2;
> +               p_clk->fbdiv_shift = clks[i].fbdiv_shift;
> +               p_clk->fbdiv_width = clks[i].fbdiv_width;
> +               p_clk->refdiv_shift = clks[i].refdiv_shift;
> +               p_clk->refdiv_width = clks[i].refdiv_width;
> +               p_clk->hw.init = &init;
> +
> +               clk = clk_register(NULL, &p_clk->hw);

Please pass a device and consider using devm. Also, use clk_hw_register.
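
i.e., assuming the device gets plumbed down into this function as 'dev',
roughly:

        ret = devm_clk_hw_register(dev, &p_clk->hw);
        if (ret)
                continue;

Paired with devm allocation, the error-path kfree() can go away as well.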

> +               if (IS_ERR(clk)) {
> +                       kfree(p_clk);
> +                       pr_err("%s: failed to register clock %s\n",

Use dev_err().

> +                              __func__, clks[i].name);
> +                       continue;
> +               }
> +
> +               data->clk_data.clks[clks[i].id] = clk;
> +       }
> +}
> +
> +static __init struct hisi_clock_data *hi3559av100_clk_register(
> +       struct platform_device *pdev)
> +{
> +       struct hisi_clock_data *clk_data;
> +       int ret;
> +
> +       clk_data = hisi_clk_alloc(pdev, HI3559AV100_CRG_NR_CLKS);
> +       if (!clk_data)
> +               return ERR_PTR(-ENOMEM);
> +
> +       ret = hisi_clk_register_fixed_rate(hi3559av100_fixed_rate_clks_crg,
> +                                          ARRAY_SIZE(hi3559av100_fixed_rate_clks_crg), clk_data);
> +       if (ret)
> +               return ERR_PTR(ret);
> +
> +       hisi_clk_register_pll(hi3559av100_pll_clks,
> +                             ARRAY_SIZE(hi3559av100_pll_clks), clk_data);
> +
> +       ret = hisi_clk_register_mux(hi3559av100_mux_clks_crg,
> +                                   ARRAY_SIZE(hi3559av100_mux_clks_crg), clk_data);
> +       if (ret)
> +               goto unregister_fixed_rate;
> +
> +       ret = hisi_clk_register_fixed_factor(hi3559av100_fixed_factor_clks,
> +                                            ARRAY_SIZE(hi3559av100_fixed_factor_clks), clk_data);
> +       if (ret)
> +               goto unregister_mux;
> +
> +       ret = hisi_clk_register_gate(hi3559av100_gate_clks,
> +                                    ARRAY_SIZE(hi3559av100_gate_clks), clk_data);
> +       if (ret)
> +               goto unregister_factor;
> +
> +       ret = of_clk_add_provider(pdev->dev.of_node,
> +                                 of_clk_src_onecell_get, &clk_data->clk_data);
> +       if (ret)
> +               goto unregister_gate;
> +
> +       return clk_data;
> +
> +unregister_gate:
> +       hisi_clk_unregister_gate(hi3559av100_gate_clks,
> +                                ARRAY_SIZE(hi3559av100_gate_clks), clk_data);
> +unregister_factor:
> +       hisi_clk_unregister_fixed_factor(hi3559av100_fixed_factor_clks,
> +                                        ARRAY_SIZE(hi3559av100_fixed_factor_clks), clk_data);
> +unregister_mux:
> +       hisi_clk_unregister_mux(hi3559av100_mux_clks_crg,
> +                               ARRAY_SIZE(hi3559av100_mux_clks_crg), clk_data);
> +unregister_fixed_rate:
> +       hisi_clk_unregister_fixed_rate(hi3559av100_fixed_rate_clks_crg,
> +                                      ARRAY_SIZE(hi3559av100_fixed_rate_clks_crg), clk_data);
> +       return ERR_PTR(ret);
> +}
> +
> +static __init void hi3559av100_clk_unregister(struct platform_device *pdev)

Why is this marked __init?

> +{
> +       struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
> +
> +       of_clk_del_provider(pdev->dev.of_node);
> +
> +       hisi_clk_unregister_gate(hi3559av100_gate_clks,
> +                                ARRAY_SIZE(hi3559av100_gate_clks), crg->clk_data);
> +       hisi_clk_unregister_mux(hi3559av100_mux_clks_crg,
> +                               ARRAY_SIZE(hi3559av100_mux_clks_crg), crg->clk_data);
> +       hisi_clk_unregister_fixed_factor(hi3559av100_fixed_factor_clks,
> +                                        ARRAY_SIZE(hi3559av100_fixed_factor_clks), crg->clk_data);
> +       hisi_clk_unregister_fixed_rate(hi3559av100_fixed_rate_clks_crg,
> +                                      ARRAY_SIZE(hi3559av100_fixed_rate_clks_crg), crg->clk_data);
> +}
> +
> +static const struct hisi_crg_funcs hi3559av100_crg_funcs = {
> +       .register_clks = hi3559av100_clk_register,
> +       .unregister_clks = hi3559av100_clk_unregister,
> +};
> +
> +static struct hisi_fixed_rate_clock hi3559av100_shub_fixed_rate_clks[]
> +       __initdata = {
> +       { HI3559AV100_SHUB_SOURCE_SOC_24M, "clk_source_24M", NULL, 0, 24000000UL, },
> +       { HI3559AV100_SHUB_SOURCE_SOC_200M, "clk_source_200M", NULL, 0, 200000000UL, },
> +       { HI3559AV100_SHUB_SOURCE_SOC_300M, "clk_source_300M", NULL, 0, 300000000UL, },
> +       { HI3559AV100_SHUB_SOURCE_PLL, "clk_source_PLL", NULL, 0, 192000000UL, },
> +       { HI3559AV100_SHUB_I2C0_CLK, "clk_shub_i2c0", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C1_CLK, "clk_shub_i2c1", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C2_CLK, "clk_shub_i2c2", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C3_CLK, "clk_shub_i2c3", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C4_CLK, "clk_shub_i2c4", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C5_CLK, "clk_shub_i2c5", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C6_CLK, "clk_shub_i2c6", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_I2C7_CLK, "clk_shub_i2c7", NULL, 0, 48000000UL, },
> +       { HI3559AV100_SHUB_UART_CLK_32K, "clk_uart_32K", NULL, 0, 32000UL, },
> +};
> +
> +/* shub mux clk */
> +static u32 shub_source_clk_mux_table[] = {0, 1, 2, 3};
> +static const char *shub_source_clk_mux_p[] __initconst = {
> +       "clk_source_24M", "clk_source_200M", "clk_source_300M", "clk_source_PLL"
> +};
> +
> +static u32 shub_uart_source_clk_mux_table[] = {0, 1, 2, 3};
> +static const char *shub_uart_source_clk_mux_p[] __initconst = {
> +       "clk_uart_32K", "clk_uart_div_clk", "clk_uart_div_clk", "clk_source_24M"
> +};
> +
> +static struct hisi_mux_clock hi3559av100_shub_mux_clks[] __initdata = {
> +       {
> +               HI3559AV100_SHUB_SOURCE_CLK, "shub_clk", shub_source_clk_mux_p,
> +               ARRAY_SIZE(shub_source_clk_mux_p),
> +               0, 0x0, 0, 2, 0, shub_source_clk_mux_table,
> +       },
> +
> +       {
> +               HI3559AV100_SHUB_UART_SOURCE_CLK, "shub_uart_source_clk",
> +               shub_uart_source_clk_mux_p, ARRAY_SIZE(shub_uart_source_clk_mux_p),
> +               0, 0x1c, 28, 2, 0, shub_uart_source_clk_mux_table,
> +       },
> +};
> +
> +
> +/* shub div clk */
> +struct clk_div_table shub_spi_clk_table[] = {{0, 8}, {1, 4}, {2, 2}};
> +struct clk_div_table shub_spi4_clk_table[] = {{0, 8}, {1, 4}, {2, 2}, {3, 1}};
> +struct clk_div_table shub_uart_div_clk_table[] = {{1, 8}, {2, 4}};
> +
> +struct hisi_divider_clock hi3559av100_shub_div_clks[] __initdata = {
> +       { HI3559AV100_SHUB_SPI_SOURCE_CLK, "clk_spi_clk", "shub_clk", 0, 0x20, 24, 2,
> +         CLK_DIVIDER_ALLOW_ZERO, shub_spi_clk_table,
> +       },
> +       { HI3559AV100_SHUB_UART_DIV_CLK, "clk_uart_div_clk", "shub_clk", 0, 0x1c, 28, 2,
> +         CLK_DIVIDER_ALLOW_ZERO, shub_uart_div_clk_table,
> +       },
> +};
> +
> +/* shub gate clk */
> +static struct hisi_gate_clock hi3559av100_shub_gate_clks[] __initdata = {
> +       {
> +               HI3559AV100_SHUB_SPI0_CLK, "clk_shub_spi0", "clk_spi_clk",
> +               0, 0x20, 1, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_SPI1_CLK, "clk_shub_spi1", "clk_spi_clk",
> +               0, 0x20, 5, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_SPI2_CLK, "clk_shub_spi2", "clk_spi_clk",
> +               0, 0x20, 9, 0,
> +       },
> +
> +       {
> +               HI3559AV100_SHUB_UART0_CLK, "clk_shub_uart0", "shub_uart_source_clk",
> +               0, 0x1c, 1, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_UART1_CLK, "clk_shub_uart1", "shub_uart_source_clk",
> +               0, 0x1c, 5, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_UART2_CLK, "clk_shub_uart2", "shub_uart_source_clk",
> +               0, 0x1c, 9, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_UART3_CLK, "clk_shub_uart3", "shub_uart_source_clk",
> +               0, 0x1c, 13, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_UART4_CLK, "clk_shub_uart4", "shub_uart_source_clk",
> +               0, 0x1c, 17, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_UART5_CLK, "clk_shub_uart5", "shub_uart_source_clk",
> +               0, 0x1c, 21, 0,
> +       },
> +       {
> +               HI3559AV100_SHUB_UART6_CLK, "clk_shub_uart6", "shub_uart_source_clk",
> +               0, 0x1c, 25, 0,
> +       },
> +
> +       {
> +               HI3559AV100_SHUB_EDMAC_CLK, "clk_shub_dmac", "shub_clk",
> +               0, 0x24, 4, 0,
> +       },
> +};
> +
> +static int hi3559av100_shub_default_clk_set(void)
> +{
> +       void *crg_base;
> +       unsigned int val;
> +
> +       crg_base = ioremap(CRG_BASE_ADDR, SZ_4K);
> +
> +       /* SSP: 192M/2 */
> +       val = readl_relaxed(crg_base + 0x20);
> +       val |= (0x2 << 24);
> +       writel_relaxed(val, crg_base + 0x20);
> +
> +       /* UART: 192M/8 */
> +       val = readl_relaxed(crg_base + 0x1C);
> +       val |= (0x1 << 28);
> +       writel_relaxed(val, crg_base + 0x1C);
> +
> +       iounmap(crg_base);
> +       crg_base = NULL;
> +
> +       return 0;
> +}
> +
> +static __init struct hisi_clock_data *hi3559av100_shub_clk_register(
> +       struct platform_device *pdev)

We have a platform device here, so the devm APIs could be used.

> +{
> +       struct hisi_clock_data *clk_data = NULL;
> +       int ret;
> +
> +       hi3559av100_shub_default_clk_set();
> +
> +       clk_data = hisi_clk_alloc(pdev, HI3559AV100_SHUB_NR_CLKS);
> +       if (!clk_data)
> +               return ERR_PTR(-ENOMEM);
> +
> +       ret = hisi_clk_register_fixed_rate(hi3559av100_shub_fixed_rate_clks,
> +                                          ARRAY_SIZE(hi3559av100_shub_fixed_rate_clks), clk_data);
> +       if (ret)
> +               return ERR_PTR(ret);
> +
> +       ret = hisi_clk_register_mux(hi3559av100_shub_mux_clks,
> +                                   ARRAY_SIZE(hi3559av100_shub_mux_clks), clk_data);
> +       if (ret)
> +               goto unregister_fixed_rate;
> +
> +       ret = hisi_clk_register_divider(hi3559av100_shub_div_clks,
> +                                       ARRAY_SIZE(hi3559av100_shub_div_clks), clk_data);
> +       if (ret)
> +               goto unregister_mux;
> +
> +       ret = hisi_clk_register_gate(hi3559av100_shub_gate_clks,
> +                                    ARRAY_SIZE(hi3559av100_shub_gate_clks), clk_data);
> +       if (ret)
> +               goto unregister_factor;
> +
> +       ret = of_clk_add_provider(pdev->dev.of_node,

But we're not using devm? Why not?
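
e.g., if the provider moves over to clk_hw based registration (and
hisi_clock_data grows a clk_hw_onecell_data, called hw_data here just
for illustration), this could be:

        ret = devm_of_clk_add_hw_provider(&pdev->dev, of_clk_hw_onecell_get,
                                          &clk_data->hw_data);

and the of_clk_del_provider() in the unregister path goes away.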

> +                                 of_clk_src_onecell_get, &clk_data->clk_data);
> +       if (ret)
> +               goto unregister_gate;
> +
> +       return clk_data;
> +
> +unregister_gate:
> +       hisi_clk_unregister_gate(hi3559av100_shub_gate_clks,
> +                                ARRAY_SIZE(hi3559av100_shub_gate_clks), clk_data);
> +unregister_factor:
> +       hisi_clk_unregister_divider(hi3559av100_shub_div_clks,
> +                                   ARRAY_SIZE(hi3559av100_shub_div_clks), clk_data);
> +unregister_mux:
> +       hisi_clk_unregister_mux(hi3559av100_shub_mux_clks,
> +                               ARRAY_SIZE(hi3559av100_shub_mux_clks), clk_data);
> +unregister_fixed_rate:
> +       hisi_clk_unregister_fixed_rate(hi3559av100_shub_fixed_rate_clks,
> +                                      ARRAY_SIZE(hi3559av100_shub_fixed_rate_clks), clk_data);
> +       return ERR_PTR(ret);
> +}
> +
> +static __init void hi3559av100_shub_clk_unregister(struct platform_device *pdev)
> +{
> +       struct hisi_crg_dev *crg = platform_get_drvdata(pdev);
> +
> +       of_clk_del_provider(pdev->dev.of_node);
> +
> +       hisi_clk_unregister_gate(hi3559av100_shub_gate_clks,
> +                                ARRAY_SIZE(hi3559av100_shub_gate_clks), crg->clk_data);
> +       hisi_clk_unregister_divider(hi3559av100_shub_div_clks,
> +                                   ARRAY_SIZE(hi3559av100_shub_div_clks), crg->clk_data);
> +       hisi_clk_unregister_mux(hi3559av100_shub_mux_clks,
> +                               ARRAY_SIZE(hi3559av100_shub_mux_clks), crg->clk_data);
> +       hisi_clk_unregister_fixed_rate(hi3559av100_shub_fixed_rate_clks,
> +                                      ARRAY_SIZE(hi3559av100_shub_fixed_rate_clks), crg->clk_data);
> +}
> +
> +static const struct hisi_crg_funcs hi3559av100_shub_crg_funcs = {
> +       .register_clks = hi3559av100_shub_clk_register,
> +       .unregister_clks = hi3559av100_shub_clk_unregister,
> +};
> +

