* [U-Boot] [PATCH v6 0/5] dma: add channels support
@ 2018-10-22 21:24 Grygorii Strashko
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h Grygorii Strashko
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Grygorii Strashko @ 2018-10-22 21:24 UTC (permalink / raw)
  To: u-boot

Hi All,

This series is the next attempt to add DMA channel support for DMA controllers;
the last version was posted by Álvaro Fernández Rojas [1].
I've kept the version numbering.

Compared to the original post, I've made a few changes:
 - added the possibility to pass DMA driver/channel specific data with each
 transfer, using the additional "metadata" parameter of the dma_send()/
 dma_receive() API - for example, the port number for network packets to be
 directed to a specific port on multi-port ethernet controllers.
 - added a new dma_prepare_rcv_buf() API which allows implementing zero-copy
 DEV_TO_MEM transfers using DMA streaming channels, which is the usual case
 for networking.
 - added a dma-uclass test
 - removed the unused function dma_get_by_index_platdata()
 - updated comments

Note: DMA channel support was originally introduced to add support for
"bmips: add bcm6348-enet support" - that series can be easily updated following
the DMA channel API changes (if still relevant).

Patches 4-5: here I'm providing DMA and networking drivers for the new
TI AM65x SoC [2] as an illustration of DMA channel API usage only. Unfortunately,
those drivers are still under development, so they are NOT-FOR-MERGE (they will
not build!) - we'd like the code/bindings to be accepted in LKML first.
Full sources can be found at [3].

[1] https://patchwork.ozlabs.org/cover/881642/
[2] http://www.ti.com/lit/ug/spruid7b/spruid7b.pdf
[3] git@git.ti.com:~gragst/ti-u-boot/gragsts-ti-u-boot.git
    branch: master-am6-dma-wip
===

DMA (Direct Memory Access) is a feature of computer systems that allows
certain hardware subsystems to access main system memory independently of
the CPU. DMA channels are typically provided externally to the HW module
consuming them, by an entity this API calls a DMA provider. This API
provides a standard means for drivers to enable and disable DMA channels,
and to copy, send and receive data using DMA.

DMA channel API:
 dma_get_by_index()
 dma_get_by_name()
 dma_request()
 dma_free()
 dma_enable()
 dma_disable()
 dma_prepare_rcv_buf()
 dma_receive()
 dma_send()
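
Besides the channel API above, the pre-existing memory-to-memory copy path
(dma_get_device()/dma_memcpy()) is kept. A minimal sketch of its use (see the
sandbox test in patch 3/5 for a working example):

	u8 src[64], dst[64];

	/* dma_memcpy() picks a DMA device that advertises MEM_TO_MEM
	 * support (via dma_get_device()) and performs a blocking copy */
	ret = dma_memcpy(dst, src, sizeof(dst));
	if (ret) ...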

A driver that implements UCLASS_DMA is a DMA provider. A provider will
often implement multiple separate DMA channels, since the hardware it manages
often has this capability. dma-uclass.h describes the interface which
DMA providers must implement.
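
As an illustration, a minimal provider skeleton could look like the sketch
below. This is hypothetical code: the "example_dma" names and compatible
string are made up, and only a subset of the ops is shown - see the sandbox
driver in patch 3/5 for a complete implementation:

	#include <common.h>
	#include <dm.h>
	#include <dma-uclass.h>

	/* Claim provider-side resources for channel dma->id */
	static int example_dma_request(struct dma *dma)
	{
		return 0;
	}

	/* Start channel dma->id so transfers can be issued */
	static int example_dma_enable(struct dma *dma)
	{
		return 0;
	}

	static const struct dma_ops example_dma_ops = {
		.request	= example_dma_request,
		.enable		= example_dma_enable,
		/* .send, .receive, .prepare_rcv_buf, ... as supported */
	};

	static const struct udevice_id example_dma_ids[] = {
		{ .compatible = "vendor,example-dma" },
		{ }
	};

	U_BOOT_DRIVER(example_dma) = {
		.name		= "example-dma",
		.id		= UCLASS_DMA,
		.of_match	= example_dma_ids,
		.ops		= &example_dma_ops,
	};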

DMA consumers/clients are the HW modules driven by the DMA channels. 

DMA consumer DMA_MEM_TO_DEV (transmit) usage example (based on networking).
Note: in U-Boot dma_send() is always a synchronous operation - it starts the
transfer and polls for it to complete:
- get/request dma channel
	struct dma dma_tx;
	ret = dma_get_by_name(common->dev, "tx0", &dma_tx);
	if (ret) ...

- enable dma channel
	ret = dma_enable(&dma_tx);
	if (ret) ...

- dma transmit DMA_MEM_TO_DEV.
	struct ti_drv_packet_data packet_data;
	
	packet_data.opt1 = val1;
	packet_data.opt2 = val2;
	ret = dma_send(&dma_tx, packet, length, &packet_data);
	if (ret) ...

DMA consumer DMA_DEV_TO_MEM (receive) usage example (based on networking).
Note: dma_receive() is always a synchronous operation - it starts the transfer
(if required) and polls for it to complete (or for any previously
configured dev2mem transfer to complete):
- get/request dma channel
	struct dma dma_rx;
	ret = dma_get_by_name(common->dev, "rx0", &dma_rx);
	if (ret) ...

- enable dma channel
	ret = dma_enable(&dma_rx);
	if (ret) ...

- dma receive DMA_DEV_TO_MEM.
	struct ti_drv_packet_data packet_data;
	
	len = dma_receive(&dma_rx, (void **)&packet, &packet_data);
	if (len < 0) ...

DMA consumer DMA_DEV_TO_MEM (receive) zero-copy usage example (based on
networking). The networking subsystem allows configuring and using several
receive buffers (dev2mem), as networking RX DMA channels are usually
implemented as streaming interfaces:
- get/request dma channel
	struct dma dma_rx;
	ret = dma_get_by_name(common->dev, "rx0", &dma_rx);
	if (ret) ...
	
	for (i = 0; i < RX_DESC_NUM; i++) {
		ret = dma_prepare_rcv_buf(&dma_rx,
					  net_rx_packets[i],
					  RX_BUF_SIZE);
		if (ret) ...
	}

- enable dma channel
	ret = dma_enable(&dma_rx);
	if (ret) ...

- dma receive DMA_DEV_TO_MEM.
	struct ti_drv_packet_data packet_data;
	void *packet;
	
	len = dma_receive(&dma_rx, &packet, &packet_data);
	if (len < 0) ...
	
	/* packet points to a buffer prepared by dma_prepare_rcv_buf();
	   process the packet */
	
	- return the buffer back to the DMA channel
	ret = dma_prepare_rcv_buf(&dma_rx,
				  net_rx_packets[rx_next],
				  RX_BUF_SIZE);
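
Putting the zero-copy pieces together, a network driver's receive path could
look like the sketch below. This is hypothetical code (the example_eth names
and priv layout are made up, error handling trimmed), showing how a DM
ethernet driver's .recv/.free_pkt ops would map onto this API:

	struct example_eth_priv {
		struct dma dma_rx;
		u32 rx_meta;
	};

	/* .recv: return the next completed packet, if any */
	static int example_eth_recv(struct udevice *dev, int flags,
				    uchar **packetp)
	{
		struct example_eth_priv *priv = dev_get_priv(dev);

		/* returns the packet length, 0 when no packet is ready,
		 * or a -ve error code */
		return dma_receive(&priv->dma_rx, (void **)packetp,
				   &priv->rx_meta);
	}

	/* .free_pkt: recycle the buffer back to the RX DMA channel */
	static int example_eth_free_pkt(struct udevice *dev, uchar *packet,
					int length)
	{
		struct example_eth_priv *priv = dev_get_priv(dev);

		return dma_prepare_rcv_buf(&priv->dma_rx, packet,
					   RX_BUF_SIZE);
	}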



Grygorii Strashko (2):
  test: dma: add dma-uclass test
  net: ethernet: ti: introduce am654 gigabit eth switch subsystem driver

Vignesh R (1):
  dma: ti: add driver to K3 UDMA

Álvaro Fernández Rojas (2):
  dma: move dma_ops to dma-uclass.h
  dma: add channels support

 arch/sandbox/dts/test.dts         |    8 +
 configs/sandbox_defconfig         |    3 +
 drivers/dma/Kconfig               |   16 +
 drivers/dma/Makefile              |    3 +
 drivers/dma/dma-uclass.c          |  183 +++-
 drivers/dma/sandbox-dma-test.c    |  284 +++++++
 drivers/dma/ti-edma3.c            |    2 +-
 drivers/dma/ti/Kconfig            |   14 +
 drivers/dma/ti/Makefile           |    3 +
 drivers/dma/ti/k3-udma-hwdef.h    |  184 ++++
 drivers/dma/ti/k3-udma.c          | 1661 +++++++++++++++++++++++++++++++++++++
 drivers/net/Kconfig               |    8 +
 drivers/net/Makefile              |    1 +
 drivers/net/am65-cpsw-nuss.c      |  962 +++++++++++++++++++++
 include/dma-uclass.h              |  128 +++
 include/dma.h                     |  282 ++++++-
 include/dt-bindings/dma/k3-udma.h |   26 +
 include/linux/soc/ti/ti-udma.h    |   24 +
 test/dm/Makefile                  |    1 +
 test/dm/dma.c                     |  121 +++
 20 files changed, 3884 insertions(+), 30 deletions(-)
 create mode 100644 drivers/dma/sandbox-dma-test.c
 create mode 100644 drivers/dma/ti/Kconfig
 create mode 100644 drivers/dma/ti/Makefile
 create mode 100644 drivers/dma/ti/k3-udma-hwdef.h
 create mode 100644 drivers/dma/ti/k3-udma.c
 create mode 100644 drivers/net/am65-cpsw-nuss.c
 create mode 100644 include/dma-uclass.h
 create mode 100644 include/dt-bindings/dma/k3-udma.h
 create mode 100644 include/linux/soc/ti/ti-udma.h
 create mode 100644 test/dm/dma.c

-- 
2.10.5


* [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h
  2018-10-22 21:24 [U-Boot] [PATCH v6 0/5] dma: add channels support Grygorii Strashko
@ 2018-10-22 21:24 ` Grygorii Strashko
  2018-11-02 20:29   ` Tom Rini
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 2/5] dma: add channels support Grygorii Strashko
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Grygorii Strashko @ 2018-10-22 21:24 UTC (permalink / raw)
  To: u-boot

From: Álvaro Fernández Rojas <noltari@gmail.com>

Move dma_ops to a separate header file, following other uclass
implementations. While doing so, this patch also improves dma_ops
documentation.

Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
---
 drivers/dma/dma-uclass.c |  2 +-
 drivers/dma/ti-edma3.c   |  2 +-
 include/dma-uclass.h     | 39 +++++++++++++++++++++++++++++++++++++++
 include/dma.h            | 22 ----------------------
 4 files changed, 41 insertions(+), 24 deletions(-)
 create mode 100644 include/dma-uclass.h

diff --git a/drivers/dma/dma-uclass.c b/drivers/dma/dma-uclass.c
index a33f7d5..6c3506c 100644
--- a/drivers/dma/dma-uclass.c
+++ b/drivers/dma/dma-uclass.c
@@ -9,10 +9,10 @@
  */
 
 #include <common.h>
-#include <dma.h>
 #include <dm.h>
 #include <dm/uclass-internal.h>
 #include <dm/device-internal.h>
+#include <dma-uclass.h>
 #include <errno.h>
 
 int dma_get_device(u32 transfer_type, struct udevice **devp)
diff --git a/drivers/dma/ti-edma3.c b/drivers/dma/ti-edma3.c
index 2131e10..7e11b13 100644
--- a/drivers/dma/ti-edma3.c
+++ b/drivers/dma/ti-edma3.c
@@ -11,7 +11,7 @@
 #include <asm/io.h>
 #include <common.h>
 #include <dm.h>
-#include <dma.h>
+#include <dma-uclass.h>
 #include <asm/omap_common.h>
 #include <asm/ti-common/ti-edma3.h>
 
diff --git a/include/dma-uclass.h b/include/dma-uclass.h
new file mode 100644
index 0000000..c9308c8
--- /dev/null
+++ b/include/dma-uclass.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright (C) 2018 Álvaro Fernández Rojas <noltari@gmail.com>
+ * Copyright (C) 2018 Texas Instruments Incorporated <www.ti.com>
+ * Written by Mugunthan V N <mugunthanvnm@ti.com>
+ *
+ */
+
+#ifndef _DMA_UCLASS_H
+#define _DMA_UCLASS_H
+
+/* See dma.h for background documentation. */
+
+#include <dma.h>
+
+/*
+ * struct dma_ops - Driver model DMA operations
+ *
+ * The uclass interface is implemented by all DMA devices which use
+ * driver model.
+ */
+struct dma_ops {
+	/**
+	 * transfer() - Issue a DMA transfer. The implementation must
+	 *   wait until the transfer is done.
+	 *
+	 * @dev: The DMA device
+	 * @direction: direction of data transfer (should be one from
+	 *   enum dma_direction)
+	 * @dst: The destination pointer.
+	 * @src: The source pointer.
+	 * @len: Length of the data to be copied (number of bytes).
+	 * @return zero on success, or -ve error code.
+	 */
+	int (*transfer)(struct udevice *dev, int direction, void *dst,
+			void *src, size_t len);
+};
+
+#endif /* _DMA_UCLASS_H */
diff --git a/include/dma.h b/include/dma.h
index 50e9652..97fa0cf 100644
--- a/include/dma.h
+++ b/include/dma.h
@@ -27,28 +27,6 @@ enum dma_direction {
 #define DMA_SUPPORTS_DEV_TO_DEV	BIT(3)
 
 /*
- * struct dma_ops - Driver model DMA operations
- *
- * The uclass interface is implemented by all DMA devices which use
- * driver model.
- */
-struct dma_ops {
-	/*
-	 * Get the current timer count
-	 *
-	 * @dev: The DMA device
-	 * @direction: direction of data transfer should be one from
-		       enum dma_direction
-	 * @dst: Destination pointer
-	 * @src: Source pointer
-	 * @len: Length of the data to be copied.
-	 * @return: 0 if OK, -ve on error
-	 */
-	int (*transfer)(struct udevice *dev, int direction, void *dst,
-			void *src, size_t len);
-};
-
-/*
  * struct dma_dev_priv - information about a device used by the uclass
  *
  * @supported: mode of transfers that DMA can support, should be
-- 
2.10.5


* [U-Boot] [PATCH v6 2/5] dma: add channels support
  2018-10-22 21:24 [U-Boot] [PATCH v6 0/5] dma: add channels support Grygorii Strashko
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h Grygorii Strashko
@ 2018-10-22 21:24 ` Grygorii Strashko
  2018-11-02 20:29   ` Tom Rini
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 3/5] sandbox: dma: add dma-uclass test Grygorii Strashko
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Grygorii Strashko @ 2018-10-22 21:24 UTC (permalink / raw)
  To: u-boot

From: Álvaro Fernández Rojas <noltari@gmail.com>

This adds channel support for DMA controllers that have multiple channels
which can transfer data to/from different devices (enet, usb, ...).

DMA channel API:
 dma_get_by_index()
 dma_get_by_name()
 dma_request()
 dma_free()
 dma_enable()
 dma_disable()
 dma_prepare_rcv_buf()
 dma_receive()
 dma_send()

Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
[grygorii.strashko@ti.com: drop unused dma_get_by_index_platdata(),
 add metadata to send/receive ops, add dma_prepare_rcv_buf(),
 minor clean up]
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
---
 drivers/dma/Kconfig      |   7 ++
 drivers/dma/dma-uclass.c | 181 ++++++++++++++++++++++++++++++++-
 include/dma-uclass.h     |  89 ++++++++++++++++
 include/dma.h            | 260 ++++++++++++++++++++++++++++++++++++++++++++++-
 4 files changed, 531 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 4ee6afa..b9b85c6 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -12,6 +12,13 @@ config DMA
 	  buses that is used to transfer data to and from memory.
 	  The uclass interface is defined in include/dma.h.
 
+config DMA_CHANNELS
+	bool "Enable DMA channels support"
+	depends on DMA
+	help
+	  Enable channel support for DMA. Some DMA controllers have multiple
+	  channels which can transfer data to/from different devices.
+
 config TI_EDMA3
 	bool "TI EDMA3 driver"
 	help
diff --git a/drivers/dma/dma-uclass.c b/drivers/dma/dma-uclass.c
index 6c3506c..313e17f 100644
--- a/drivers/dma/dma-uclass.c
+++ b/drivers/dma/dma-uclass.c
@@ -2,19 +2,192 @@
 /*
  * Direct Memory Access U-Class driver
  *
- * (C) Copyright 2015
- *     Texas Instruments Incorporated, <www.ti.com>
+ * Copyright (C) 2018 Álvaro Fernández Rojas <noltari@gmail.com>
+ * Copyright (C) 2018 Texas Instruments Incorporated <www.ti.com>
+ * Written by Mugunthan V N <mugunthanvnm@ti.com>
  *
  * Author: Mugunthan V N <mugunthanvnm@ti.com>
  */
 
 #include <common.h>
 #include <dm.h>
-#include <dm/uclass-internal.h>
-#include <dm/device-internal.h>
+#include <dm/read.h>
 #include <dma-uclass.h>
+#include <dt-structs.h>
 #include <errno.h>
 
+#ifdef CONFIG_DMA_CHANNELS
+static inline struct dma_ops *dma_dev_ops(struct udevice *dev)
+{
+	return (struct dma_ops *)dev->driver->ops;
+}
+
+# if CONFIG_IS_ENABLED(OF_CONTROL)
+static int dma_of_xlate_default(struct dma *dma,
+				struct ofnode_phandle_args *args)
+{
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (args->args_count > 1) {
+		pr_err("Invalid args_count: %d\n", args->args_count);
+		return -EINVAL;
+	}
+
+	if (args->args_count)
+		dma->id = args->args[0];
+	else
+		dma->id = 0;
+
+	return 0;
+}
+
+int dma_get_by_index(struct udevice *dev, int index, struct dma *dma)
+{
+	int ret;
+	struct ofnode_phandle_args args;
+	struct udevice *dev_dma;
+	const struct dma_ops *ops;
+
+	debug("%s(dev=%p, index=%d, dma=%p)\n", __func__, dev, index, dma);
+
+	assert(dma);
+	dma->dev = NULL;
+
+	ret = dev_read_phandle_with_args(dev, "dmas", "#dma-cells", 0, index,
+					 &args);
+	if (ret) {
+		pr_err("%s: dev_read_phandle_with_args failed: err=%d\n",
+		       __func__, ret);
+		return ret;
+	}
+
+	ret = uclass_get_device_by_ofnode(UCLASS_DMA, args.node, &dev_dma);
+	if (ret) {
+		pr_err("%s: uclass_get_device_by_ofnode failed: err=%d\n",
+		       __func__, ret);
+		return ret;
+	}
+
+	dma->dev = dev_dma;
+
+	ops = dma_dev_ops(dev_dma);
+
+	if (ops->of_xlate)
+		ret = ops->of_xlate(dma, &args);
+	else
+		ret = dma_of_xlate_default(dma, &args);
+	if (ret) {
+		pr_err("of_xlate() failed: %d\n", ret);
+		return ret;
+	}
+
+	return dma_request(dev_dma, dma);
+}
+
+int dma_get_by_name(struct udevice *dev, const char *name, struct dma *dma)
+{
+	int index;
+
+	debug("%s(dev=%p, name=%s, dma=%p)\n", __func__, dev, name, dma);
+	dma->dev = NULL;
+
+	index = dev_read_stringlist_search(dev, "dma-names", name);
+	if (index < 0) {
+		pr_err("dev_read_stringlist_search() failed: %d\n", index);
+		return index;
+	}
+
+	return dma_get_by_index(dev, index, dma);
+}
+# endif /* OF_CONTROL */
+
+int dma_request(struct udevice *dev, struct dma *dma)
+{
+	struct dma_ops *ops = dma_dev_ops(dev);
+
+	debug("%s(dev=%p, dma=%p)\n", __func__, dev, dma);
+
+	dma->dev = dev;
+
+	if (!ops->request)
+		return 0;
+
+	return ops->request(dma);
+}
+
+int dma_free(struct dma *dma)
+{
+	struct dma_ops *ops = dma_dev_ops(dma->dev);
+
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (!ops->free)
+		return 0;
+
+	return ops->free(dma);
+}
+
+int dma_enable(struct dma *dma)
+{
+	struct dma_ops *ops = dma_dev_ops(dma->dev);
+
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (!ops->enable)
+		return -ENOSYS;
+
+	return ops->enable(dma);
+}
+
+int dma_disable(struct dma *dma)
+{
+	struct dma_ops *ops = dma_dev_ops(dma->dev);
+
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (!ops->disable)
+		return -ENOSYS;
+
+	return ops->disable(dma);
+}
+
+int dma_prepare_rcv_buf(struct dma *dma, void *dst, size_t size)
+{
+	struct dma_ops *ops = dma_dev_ops(dma->dev);
+
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (!ops->prepare_rcv_buf)
+		return -1;
+
+	return ops->prepare_rcv_buf(dma, dst, size);
+}
+
+int dma_receive(struct dma *dma, void **dst, void *metadata)
+{
+	struct dma_ops *ops = dma_dev_ops(dma->dev);
+
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (!ops->receive)
+		return -ENOSYS;
+
+	return ops->receive(dma, dst, metadata);
+}
+
+int dma_send(struct dma *dma, void *src, size_t len, void *metadata)
+{
+	struct dma_ops *ops = dma_dev_ops(dma->dev);
+
+	debug("%s(dma=%p)\n", __func__, dma);
+
+	if (!ops->send)
+		return -ENOSYS;
+
+	return ops->send(dma, src, len, metadata);
+}
+#endif /* CONFIG_DMA_CHANNELS */
+
 int dma_get_device(u32 transfer_type, struct udevice **devp)
 {
 	struct udevice *dev;
diff --git a/include/dma-uclass.h b/include/dma-uclass.h
index c9308c8..38f5f4c 100644
--- a/include/dma-uclass.h
+++ b/include/dma-uclass.h
@@ -13,6 +13,8 @@
 
 #include <dma.h>
 
+struct ofnode_phandle_args;
+
 /*
  * struct dma_ops - Driver model DMA operations
  *
@@ -20,6 +22,93 @@
  * driver model.
  */
 struct dma_ops {
+#ifdef CONFIG_DMA_CHANNELS
+	/**
+	 * of_xlate - Translate a client's device-tree (OF) DMA specifier.
+	 *
+	 * The DMA core calls this function as the first step in implementing
+	 * a client's dma_get_by_*() call.
+	 *
+	 * If this function pointer is set to NULL, the DMA core will use a
+	 * default implementation, which assumes #dma-cells = <1>, and that
+	 * the DT cell contains a simple integer DMA Channel.
+	 *
+	 * At present, the DMA API solely supports device-tree. If this
+	 * changes, other xxx_xlate() functions may be added to support those
+	 * other mechanisms.
+	 *
+	 * @dma: The dma struct to hold the translation result.
+	 * @args:	The dma specifier values from device tree.
+	 * @return 0 if OK, or a negative error code.
+	 */
+	int (*of_xlate)(struct dma *dma,
+			struct ofnode_phandle_args *args);
+	/**
+	 * request - Request a translated DMA.
+	 *
+	 * The DMA core calls this function as the second step in
+	 * implementing a client's dma_get_by_*() call, following a successful
+	 * xxx_xlate() call, or as the only step in implementing a client's
+	 * dma_request() call.
+	 *
+	 * @dma: The DMA struct to request; this has been filled in by
+	 *   a previous xxx_xlate() function call, or by the caller of
+	 *   dma_request().
+	 * @return 0 if OK, or a negative error code.
+	 */
+	int (*request)(struct dma *dma);
+	/**
+	 * free - Free a previously requested dma.
+	 *
+	 * This is the implementation of the client dma_free() API.
+	 *
+	 * @dma: The DMA to free.
+	 * @return 0 if OK, or a negative error code.
+	 */
+	int (*free)(struct dma *dma);
+	/**
+	 * enable() - Enable a DMA Channel.
+	 *
+	 * @dma: The DMA Channel to manipulate.
+	 * @return zero on success, or -ve error code.
+	 */
+	int (*enable)(struct dma *dma);
+	/**
+	 * disable() - Disable a DMA Channel.
+	 *
+	 * @dma: The DMA Channel to manipulate.
+	 * @return zero on success, or -ve error code.
+	 */
+	int (*disable)(struct dma *dma);
+	/**
+	 * prepare_rcv_buf() - Prepare/add a receive DMA buffer.
+	 *
+	 * @dma: The DMA Channel to manipulate.
+	 * @dst: The receive buffer pointer.
+	 * @size: The receive buffer size
+	 * @return zero on success, or -ve error code.
+	 */
+	int (*prepare_rcv_buf)(struct dma *dma, void *dst, size_t size);
+	/**
+	 * receive() - Receive a DMA transfer.
+	 *
+	 * @dma: The DMA Channel to manipulate.
+	 * @dst: The destination pointer.
+	 * @metadata: DMA driver's specific data
+	 * @return zero on success, or -ve error code.
+	 */
+	int (*receive)(struct dma *dma, void **dst, void *metadata);
+	/**
+	 * send() - Send a DMA transfer.
+	 *
+	 * @dma: The DMA Channel to manipulate.
+	 * @src: The source pointer.
+	 * @len: Length of the data to be sent (number of bytes).
+	 * @metadata: DMA driver's specific data
+	 * @return zero on success, or -ve error code.
+	 */
+	int (*send)(struct dma *dma, void *src, size_t len, void *metadata);
+#endif /* CONFIG_DMA_CHANNELS */
 	/**
 	 * transfer() - Issue a DMA transfer. The implementation must
 	 *   wait until the transfer is done.
diff --git a/include/dma.h b/include/dma.h
index 97fa0cf..711efc4 100644
--- a/include/dma.h
+++ b/include/dma.h
@@ -1,12 +1,17 @@
 /* SPDX-License-Identifier: GPL-2.0+ */
 /*
- * (C) Copyright 2015
- *     Texas Instruments Incorporated, <www.ti.com>
+ * Copyright (C) 2018 Álvaro Fernández Rojas <noltari@gmail.com>
+ * Copyright (C) 2018 Texas Instruments Incorporated <www.ti.com>
+ * Written by Mugunthan V N <mugunthanvnm@ti.com>
+ *
  */
 
 #ifndef _DMA_H_
 #define _DMA_H_
 
+#include <linux/errno.h>
+#include <linux/types.h>
+
 /*
  * enum dma_direction - dma transfer direction indicator
  * @DMA_MEM_TO_MEM: Memcpy mode
@@ -36,6 +41,257 @@ struct dma_dev_priv {
 	u32 supported;
 };
 
+#ifdef CONFIG_DMA_CHANNELS
+/**
+ * A DMA is a feature of computer systems that allows certain hardware
+ * subsystems to access main system memory, independent of the CPU.
+ * DMA channels are typically generated externally to the HW module
+ * consuming them, by an entity this API calls a DMA provider. This API
+ * provides a standard means for drivers to enable and disable DMAs, and to
+ * copy, send and receive data using DMA.
+ *
+ * A driver that implements UCLASS_DMA is a DMA provider. A provider will
+ * often implement multiple separate DMAs, since the hardware it manages
+ * often has this capability. dma-uclass.h describes the interface which
+ * DMA providers must implement.
+ *
+ * DMA consumers/clients are the HW modules driven by the DMA channels. This
+ * header file describes the API used by drivers for those HW modules.
+ *
+ * DMA consumer DMA_MEM_TO_DEV (transmit) usage example (based on networking).
+ * Note: dma_send() is always a synchronous operation - it starts the
+ * transfer and polls for it to complete:
+ *	- get/request dma channel
+ *	struct dma dma_tx;
+ *	ret = dma_get_by_name(common->dev, "tx0", &dma_tx);
+ *	if (ret) ...
+ *
+ *	- enable dma channel
+ *	ret = dma_enable(&dma_tx);
+ *	if (ret) ...
+ *
+ *	- dma transmit DMA_MEM_TO_DEV.
+ *	struct ti_drv_packet_data packet_data;
+ *
+ *	packet_data.opt1 = val1;
+ *	packet_data.opt2 = val2;
+ *	ret = dma_send(&dma_tx, packet, length, &packet_data);
+ *	if (ret) ...
+ *
+ * DMA consumer DMA_DEV_TO_MEM (receive) usage example (based on networking).
+ * Note: dma_receive() is always a synchronous operation - it starts the
+ * transfer (if required) and polls for it to complete (or for any previously
+ * configured dev2mem transfer to complete):
+ *	- get/request dma channel
+ *	struct dma dma_rx;
+ *	ret = dma_get_by_name(common->dev, "rx0", &dma_rx);
+ *	if (ret) ...
+ *
+ *	- enable dma channel
+ *	ret = dma_enable(&dma_rx);
+ *	if (ret) ...
+ *
+ *	- dma receive DMA_DEV_TO_MEM.
+ *	struct ti_drv_packet_data packet_data;
+ *
+ *	len = dma_receive(&dma_rx, (void **)&packet, &packet_data);
+ *	if (len < 0) ...
+ *
+ * DMA consumer DMA_DEV_TO_MEM (receive) zero-copy usage example (based on
+ * networking). The networking subsystem allows configuring and using several
+ * receive buffers (dev2mem), as networking RX DMA channels are usually
+ * implemented as streaming interfaces:
+ *	- get/request dma channel
+ *	struct dma dma_rx;
+ *	ret = dma_get_by_name(common->dev, "rx0", &dma_rx);
+ *	if (ret) ...
+ *
+ *	for (i = 0; i < RX_DESC_NUM; i++) {
+ *		ret = dma_prepare_rcv_buf(&dma_rx,
+ *					  net_rx_packets[i],
+ *					  RX_BUF_SIZE);
+ *		if (ret) ...
+ *	}
+ *
+ *	- enable dma channel
+ *	ret = dma_enable(&dma_rx);
+ *	if (ret) ...
+ *
+ *	- dma receive DMA_DEV_TO_MEM.
+ *	struct ti_drv_packet_data packet_data;
+ *
+ *	len = dma_receive(&dma_rx, (void **)&packet, &packet_data);
+ *	if (len < 0) ...
+ *
+ *	-- process packet --
+ *
+ *	- return the buffer back to the DMA channel
+ *	ret = dma_prepare_rcv_buf(&dma_rx,
+ *				  net_rx_packets[rx_next],
+ *				  RX_BUF_SIZE);
+ */
+
+struct udevice;
+
+/**
+ * struct dma - A handle to (allowing control of) a single DMA.
+ *
+ * Clients provide storage for DMA handles. The content of the structure is
+ * managed solely by the DMA API and DMA drivers. A DMA struct is
+ * initialized by "get"ing the DMA struct. The DMA struct is passed to all
+ * other DMA APIs to identify which DMA channel to operate upon.
+ *
+ * @dev: The device which implements the DMA channel.
+ * @id: The DMA channel ID within the provider.
+ *
+ * Currently, the DMA API assumes that a single integer ID is enough to
+ * identify and configure any DMA channel for any DMA provider. If this
+ * assumption becomes invalid in the future, the struct could be expanded to
+ * either (a) add more fields to allow DMA providers to store additional
+ * information, or (b) replace the id field with an opaque pointer, which the
+ * provider would dynamically allocated during its .of_xlate op, and process
+ * during is .request op. This may require the addition of an extra op to clean
+ * up the allocation.
+ */
+struct dma {
+	struct udevice *dev;
+	/*
+	 * Written by of_xlate. We assume a single id is enough for now. In the
+	 * future, we might add more fields here.
+	 */
+	unsigned long id;
+};
+
+# if CONFIG_IS_ENABLED(OF_CONTROL) && CONFIG_IS_ENABLED(DMA)
+/**
+ * dma_get_by_index - Get/request a DMA by integer index.
+ *
+ * This looks up and requests a DMA. The index is relative to the client
+ * device; each device is assumed to have n DMAs associated with it somehow,
+ * and this function finds and requests one of them. The mapping of client
+ * device DMA indices to provider DMAs may be via device-tree properties,
+ * board-provided mapping tables, or some other mechanism.
+ *
+ * @dev:	The client device.
+ * @index:	The index of the DMA to request, within the client's list of
+ *		DMA channels.
+ * @dma:	A pointer to a DMA struct to initialize.
+ * @return 0 if OK, or a negative error code.
+ */
+int dma_get_by_index(struct udevice *dev, int index, struct dma *dma);
+
+/**
+ * dma_get_by_name - Get/request a DMA by name.
+ *
+ * This looks up and requests a DMA. The name is relative to the client
+ * device; each device is assumed to have n DMAs associated with it somehow,
+ * and this function finds and requests one of them. The mapping of client
+ * device DMA names to provider DMAs may be via device-tree properties,
+ * board-provided mapping tables, or some other mechanism.
+ *
+ * @dev:	The client device.
+ * @name:	The name of the DMA to request, within the client's list of
+ *		DMA channels.
+ * @dma:	A pointer to a DMA struct to initialize.
+ * @return 0 if OK, or a negative error code.
+ */
+int dma_get_by_name(struct udevice *dev, const char *name, struct dma *dma);
+# else
+static inline int dma_get_by_index(struct udevice *dev, int index,
+				   struct dma *dma)
+{
+	return -ENOSYS;
+}
+
+static inline int dma_get_by_name(struct udevice *dev, const char *name,
+				  struct dma *dma)
+{
+	return -ENOSYS;
+}
+# endif
+
+/**
+ * dma_request - Request a DMA by provider-specific ID.
+ *
+ * This requests a DMA using a provider-specific ID. Generally, this function
+ * should not be used, since dma_get_by_index/name() provide an interface that
+ * better separates clients from intimate knowledge of DMA providers.
+ * However, this function may be useful in core SoC-specific code.
+ *
+ * @dev: The DMA provider device.
+ * @dma: A pointer to a DMA struct to initialize. The caller must
+ *	 have already initialized any field in this struct which the
+ *	 DMA provider uses to identify the DMA channel.
+ * @return 0 if OK, or a negative error code.
+ */
+int dma_request(struct udevice *dev, struct dma *dma);
+
+/**
+ * dma_free - Free a previously requested DMA.
+ *
+ * @dma: A DMA struct that was previously successfully requested by
+ *	 dma_request/get_by_*().
+ * @return 0 if OK, or a negative error code.
+ */
+int dma_free(struct dma *dma);
+
+/**
+ * dma_enable() - Enable (turn on) a DMA channel.
+ *
+ * @dma: A DMA struct that was previously successfully requested by
+ *	 dma_request/get_by_*().
+ * @return zero on success, or -ve error code.
+ */
+int dma_enable(struct dma *dma);
+
+/**
+ * dma_disable() - Disable (turn off) a DMA channel.
+ *
+ * @dma: A DMA struct that was previously successfully requested by
+ *	 dma_request/get_by_*().
+ * @return zero on success, or -ve error code.
+ */
+int dma_disable(struct dma *dma);
+
+/**
+ * dma_prepare_rcv_buf() - Prepare/add receive DMA buffer.
+ *
+ * It allows implementing zero-copy async DMA_DEV_TO_MEM (receive) transactions
+ * if supported by DMA providers.
+ *
+ * @dma: A DMA struct that was previously successfully requested by
+ *	 dma_request/get_by_*().
+ * @dst: The receive buffer pointer.
+ * @size: The receive buffer size
+ * @return zero on success, or -ve error code.
+ */
+int dma_prepare_rcv_buf(struct dma *dma, void *dst, size_t size);
+
+/**
+ * dma_receive() - Receive a DMA transfer.
+ *
+ * @dma: A DMA struct that was previously successfully requested by
+ *	 dma_request/get_by_*().
+ * @dst: The destination pointer.
+ * @metadata: DMA driver's channel specific data
+ * @return length of received data on success, or zero - no data,
+ * or -ve error code.
+ */
+int dma_receive(struct dma *dma, void **dst, void *metadata);
+
+/**
+ * dma_send() - Send a DMA transfer.
+ *
+ * @dma: A DMA struct that was previously successfully requested by
+ *	 dma_request/get_by_*().
+ * @src: The source pointer.
+ * @len: Length of the data to be sent (number of bytes).
+ * @metadata: DMA driver's channel specific data
+ * @return zero on success, or -ve error code.
+ */
+int dma_send(struct dma *dma, void *src, size_t len, void *metadata);
+#endif /* CONFIG_DMA_CHANNELS */
+
 /*
  * dma_get_device - get a DMA device which supports transfer
  * type of transfer_type
-- 
2.10.5


* [U-Boot] [PATCH v6 3/5] sandbox: dma: add dma-uclass test
  2018-10-22 21:24 [U-Boot] [PATCH v6 0/5] dma: add channels support Grygorii Strashko
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h Grygorii Strashko
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 2/5] dma: add channels support Grygorii Strashko
@ 2018-10-22 21:24 ` Grygorii Strashko
  2018-11-02 20:29   ` Tom Rini
  2018-10-22 21:24 ` [U-Boot] [NOT-FOR-MERGE-PATCH v6 4/5] dma: ti: add driver to K3 UDMA Grygorii Strashko
  2018-10-22 21:25 ` [U-Boot] [NOT-FOR-MERGE-PATCH v6 5/5] net: ethernet: ti: introduce am654 gigabit eth switch subsystem driver Grygorii Strashko
  4 siblings, 1 reply; 9+ messages in thread
From: Grygorii Strashko @ 2018-10-22 21:24 UTC (permalink / raw)
  To: u-boot

Add a sandbox DMA driver implementation (provider) and corresponding DM
test.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
---
 arch/sandbox/dts/test.dts      |   8 ++
 configs/sandbox_defconfig      |   3 +
 drivers/dma/Kconfig            |   7 +
 drivers/dma/Makefile           |   1 +
 drivers/dma/sandbox-dma-test.c | 284 +++++++++++++++++++++++++++++++++++++++++
 test/dm/Makefile               |   1 +
 test/dm/dma.c                  | 121 ++++++++++++++++++
 7 files changed, 425 insertions(+)
 create mode 100644 drivers/dma/sandbox-dma-test.c
 create mode 100644 test/dm/dma.c

diff --git a/arch/sandbox/dts/test.dts b/arch/sandbox/dts/test.dts
index 420b72f..3672a2c 100644
--- a/arch/sandbox/dts/test.dts
+++ b/arch/sandbox/dts/test.dts
@@ -708,6 +708,14 @@
 	sandbox_tee {
 		compatible = "sandbox,tee";
 	};
+
+	dma: dma {
+		compatible = "sandbox,dma";
+		#dma-cells = <1>;
+
+		dmas = <&dma 0>, <&dma 1>, <&dma 2>;
+		dma-names = "m2m", "tx0", "rx0";
+	};
 };
 
 #include "sandbox_pmic.dtsi"
diff --git a/configs/sandbox_defconfig b/configs/sandbox_defconfig
index 2ce336f..7cdc01c 100644
--- a/configs/sandbox_defconfig
+++ b/configs/sandbox_defconfig
@@ -217,3 +217,6 @@ CONFIG_UT_TIME=y
 CONFIG_UT_DM=y
 CONFIG_UT_ENV=y
 CONFIG_UT_OVERLAY=y
+CONFIG_DMA=y
+CONFIG_DMA_CHANNELS=y
+CONFIG_SANDBOX_DMA=y
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index b9b85c6..8a4162e 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -19,6 +19,13 @@ config DMA_CHANNELS
 	  Enable channel support for DMA. Some DMA controllers have multiple
 	  channels which can transfer data to/from different devices.
 
+config SANDBOX_DMA
+	bool "Enable the sandbox DMA test driver"
+	depends on DMA && DMA_CHANNELS && SANDBOX
+	help
+	  Enable support for a test DMA uclass implementation. It simulates
+	  DMA transfers by simply copying data between channels.
+
 config TI_EDMA3
 	bool "TI EDMA3 driver"
 	help
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 4eaef8a..aff31f9 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_DMA) += dma-uclass.o
 obj-$(CONFIG_FSLDMAFEC) += MCD_tasksInit.o MCD_dmaApi.o MCD_tasks.o
 obj-$(CONFIG_APBH_DMA) += apbh_dma.o
 obj-$(CONFIG_FSL_DMA) += fsl_dma.o
+obj-$(CONFIG_SANDBOX_DMA) += sandbox-dma-test.o
 obj-$(CONFIG_TI_KSNAV) += keystone_nav.o keystone_nav_cfg.o
 obj-$(CONFIG_TI_EDMA3) += ti-edma3.o
 obj-$(CONFIG_DMA_LPC32XX) += lpc32xx_dma.o
diff --git a/drivers/dma/sandbox-dma-test.c b/drivers/dma/sandbox-dma-test.c
new file mode 100644
index 0000000..77e5091
--- /dev/null
+++ b/drivers/dma/sandbox-dma-test.c
@@ -0,0 +1,284 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Direct Memory Access U-Class driver
+ *
+ * Copyright (C) 2018 Álvaro Fernández Rojas <noltari@gmail.com>
+ * Copyright (C) 2018 Texas Instruments Incorporated <www.ti.com>
+ * Written by Mugunthan V N <mugunthanvnm@ti.com>
+ *
+ * Author: Mugunthan V N <mugunthanvnm@ti.com>
+ */
+
+#include <common.h>
+#include <dm.h>
+#include <dm/read.h>
+#include <dma-uclass.h>
+#include <dt-structs.h>
+#include <errno.h>
+
+#define SANDBOX_DMA_CH_CNT 3
+#define SANDBOX_DMA_BUF_SIZE 1024
+
+struct sandbox_dma_chan {
+	struct sandbox_dma_dev *ud;
+	char name[20];
+	u32 id;
+	enum dma_direction dir;
+	bool in_use;
+	bool enabled;
+};
+
+struct sandbox_dma_dev {
+	struct device *dev;
+	u32 ch_count;
+	struct sandbox_dma_chan channels[SANDBOX_DMA_CH_CNT];
+	uchar   buf[SANDBOX_DMA_BUF_SIZE];
+	uchar	*buf_rx;
+	size_t	data_len;
+	u32	meta;
+};
+
+static int sandbox_dma_transfer(struct udevice *dev, int direction,
+				void *dst, void *src, size_t len)
+{
+	memcpy(dst, src, len);
+
+	return 0;
+}
+
+static int sandbox_dma_of_xlate(struct dma *dma,
+				struct ofnode_phandle_args *args)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	debug("%s(dma id=%u)\n", __func__, args->args[0]);
+
+	if (args->args[0] >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+
+	dma->id = args->args[0];
+
+	uc = &ud->channels[dma->id];
+
+	if (dma->id == 1)
+		uc->dir = DMA_MEM_TO_DEV;
+	else if (dma->id == 2)
+		uc->dir = DMA_DEV_TO_MEM;
+	else
+		uc->dir = DMA_MEM_TO_MEM;
+	debug("%s(dma id=%lu dir=%d)\n", __func__, dma->id, uc->dir);
+
+	return 0;
+}
+
+static int sandbox_dma_request(struct dma *dma)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	if (dma->id >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+
+	uc = &ud->channels[dma->id];
+	if (uc->in_use)
+		return -EBUSY;
+
+	uc->in_use = true;
+	debug("%s(dma id=%lu in_use=%d)\n", __func__, dma->id, uc->in_use);
+
+	return 0;
+}
+
+static int sandbox_dma_free(struct dma *dma)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	if (dma->id >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+
+	uc = &ud->channels[dma->id];
+	if (!uc->in_use)
+		return -EINVAL;
+
+	uc->in_use = false;
+	ud->buf_rx = NULL;
+	ud->data_len = 0;
+	debug("%s(dma id=%lu in_use=%d)\n", __func__, dma->id, uc->in_use);
+
+	return 0;
+}
+
+static int sandbox_dma_enable(struct dma *dma)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	if (dma->id >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+
+	uc = &ud->channels[dma->id];
+	if (!uc->in_use)
+		return -EINVAL;
+	if (uc->enabled)
+		return -EINVAL;
+
+	uc->enabled = true;
+	debug("%s(dma id=%lu enabled=%d)\n", __func__, dma->id, uc->enabled);
+
+	return 0;
+}
+
+static int sandbox_dma_disable(struct dma *dma)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	if (dma->id >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+
+	uc = &ud->channels[dma->id];
+	if (!uc->in_use)
+		return -EINVAL;
+	if (!uc->enabled)
+		return -EINVAL;
+
+	uc->enabled = false;
+	debug("%s(dma id=%lu enabled=%d)\n", __func__, dma->id, uc->enabled);
+
+	return 0;
+}
+
+static int sandbox_dma_send(struct dma *dma,
+			    void *src, size_t len, void *metadata)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	if (dma->id >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+	if (!src || !metadata)
+		return -EINVAL;
+
+	debug("%s(dma id=%lu)\n", __func__, dma->id);
+
+	uc = &ud->channels[dma->id];
+	if (uc->dir != DMA_MEM_TO_DEV)
+		return -EINVAL;
+	if (!uc->in_use)
+		return -EINVAL;
+	if (!uc->enabled)
+		return -EINVAL;
+	if (len >= SANDBOX_DMA_BUF_SIZE)
+		return -EINVAL;
+
+	memcpy(ud->buf, src, len);
+	ud->data_len = len;
+	ud->meta = *((u32 *)metadata);
+
+	debug("%s(dma id=%lu len=%zu meta=%08x)\n",
+	      __func__, dma->id, len, ud->meta);
+
+	return 0;
+}
+
+static int sandbox_dma_receive(struct dma *dma, void **dst, void *metadata)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+	struct sandbox_dma_chan *uc;
+
+	if (dma->id >= SANDBOX_DMA_CH_CNT)
+		return -EINVAL;
+	if (!dst || !metadata)
+		return -EINVAL;
+
+	uc = &ud->channels[dma->id];
+	if (uc->dir != DMA_DEV_TO_MEM)
+		return -EINVAL;
+	if (!uc->in_use)
+		return -EINVAL;
+	if (!uc->enabled)
+		return -EINVAL;
+	if (!ud->data_len)
+		return 0;
+
+	if (ud->buf_rx) {
+		memcpy(ud->buf_rx, ud->buf, ud->data_len);
+		*dst = ud->buf_rx;
+	} else {
+		memcpy(*dst, ud->buf, ud->data_len);
+	}
+
+	*((u32 *)metadata) = ud->meta;
+
+	debug("%s(dma id=%lu len=%zu meta=%08x %p)\n",
+	      __func__, dma->id, ud->data_len, ud->meta, *dst);
+
+	return ud->data_len;
+}
+
+static int sandbox_dma_prepare_rcv_buf(struct dma *dma, void *dst, size_t size)
+{
+	struct sandbox_dma_dev *ud = dev_get_priv(dma->dev);
+
+	ud->buf_rx = dst;
+
+	return 0;
+}
+
+static const struct dma_ops sandbox_dma_ops = {
+	.transfer	= sandbox_dma_transfer,
+	.of_xlate	= sandbox_dma_of_xlate,
+	.request	= sandbox_dma_request,
+	.free		= sandbox_dma_free,
+	.enable		= sandbox_dma_enable,
+	.disable	= sandbox_dma_disable,
+	.send		= sandbox_dma_send,
+	.receive	= sandbox_dma_receive,
+	.prepare_rcv_buf = sandbox_dma_prepare_rcv_buf,
+};
+
+static int sandbox_dma_probe(struct udevice *dev)
+{
+	struct dma_dev_priv *uc_priv = dev_get_uclass_priv(dev);
+	struct sandbox_dma_dev *ud = dev_get_priv(dev);
+	int i, ret = 0;
+
+	uc_priv->supported = DMA_SUPPORTS_MEM_TO_MEM |
+			     DMA_SUPPORTS_MEM_TO_DEV |
+			     DMA_SUPPORTS_DEV_TO_MEM;
+
+	ud->ch_count = SANDBOX_DMA_CH_CNT;
+	ud->buf_rx = NULL;
+	ud->meta = 0;
+	ud->data_len = 0;
+
+	pr_err("Number of channels: %u\n", ud->ch_count);
+
+	for (i = 0; i < ud->ch_count; i++) {
+		struct sandbox_dma_chan *uc = &ud->channels[i];
+
+		uc->ud = ud;
+		uc->id = i;
+		sprintf(uc->name, "DMA chan%d\n", i);
+		uc->in_use = false;
+		uc->enabled = false;
+	}
+
+	return ret;
+}
+
+static const struct udevice_id sandbox_dma_ids[] = {
+	{ .compatible = "sandbox,dma" },
+	{ }
+};
+
+U_BOOT_DRIVER(sandbox_dma) = {
+	.name	= "sandbox-dma",
+	.id	= UCLASS_DMA,
+	.of_match = sandbox_dma_ids,
+	.ops	= &sandbox_dma_ops,
+	.probe = sandbox_dma_probe,
+	.priv_auto_alloc_size = sizeof(struct sandbox_dma_dev),
+};
diff --git a/test/dm/Makefile b/test/dm/Makefile
index b490cf2..afc6664 100644
--- a/test/dm/Makefile
+++ b/test/dm/Makefile
@@ -53,4 +53,5 @@ obj-$(CONFIG_MISC) += misc.o
 obj-$(CONFIG_DM_SERIAL) += serial.o
 obj-$(CONFIG_CPU) += cpu.o
 obj-$(CONFIG_TEE) += tee.o
+obj-$(CONFIG_DMA) += dma.o
 endif
diff --git a/test/dm/dma.c b/test/dm/dma.c
new file mode 100644
index 0000000..76bc60f
--- /dev/null
+++ b/test/dm/dma.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * (C) Copyright 2018
+ * Mario Six, Guntermann & Drunck GmbH, mario.six@gdsys.cc
+ */
+
+#include <common.h>
+#include <dm.h>
+#include <dm/test.h>
+#include <dma.h>
+#include <test/ut.h>
+
+static int dm_test_dma_m2m(struct unit_test_state *uts)
+{
+	struct udevice *dev;
+	struct dma dma_m2m;
+	u8 src_buf[512];
+	u8 dst_buf[512];
+	size_t len = 512;
+	int i;
+
+	ut_assertok(uclass_get_device_by_name(UCLASS_DMA, "dma", &dev));
+	ut_assertok(dma_get_by_name(dev, "m2m", &dma_m2m));
+
+	memset(dst_buf, 0, len);
+	for (i = 0; i < len; i++)
+		src_buf[i] = i;
+
+	ut_assertok(dma_memcpy(dst_buf, src_buf, len));
+
+	ut_assertok(memcmp(src_buf, dst_buf, len));
+	return 0;
+}
+DM_TEST(dm_test_dma_m2m, DM_TESTF_SCAN_FDT);
+
+static int dm_test_dma(struct unit_test_state *uts)
+{
+	struct udevice *dev;
+	struct dma dma_tx, dma_rx;
+	u8 src_buf[512];
+	u8 dst_buf[512];
+	void *dst_ptr;
+	size_t len = 512;
+	u32 meta1, meta2;
+	int i;
+
+	ut_assertok(uclass_get_device_by_name(UCLASS_DMA, "dma", &dev));
+
+	ut_assertok(dma_get_by_name(dev, "tx0", &dma_tx));
+	ut_assertok(dma_get_by_name(dev, "rx0", &dma_rx));
+
+	ut_assertok(dma_enable(&dma_tx));
+	ut_assertok(dma_enable(&dma_rx));
+
+	memset(dst_buf, 0, len);
+	for (i = 0; i < len; i++)
+		src_buf[i] = i;
+	meta1 = 0xADADDEAD;
+	meta2 = 0;
+	dst_ptr = &dst_buf;
+
+	ut_assertok(dma_send(&dma_tx, src_buf, len, &meta1));
+
+	ut_asserteq(len, dma_receive(&dma_rx, &dst_ptr, &meta2));
+	ut_asserteq(0xADADDEAD, meta2);
+
+	ut_assertok(dma_disable(&dma_tx));
+	ut_assertok(dma_disable(&dma_rx));
+
+	ut_assertok(dma_free(&dma_tx));
+	ut_assertok(dma_free(&dma_rx));
+	ut_assertok(memcmp(src_buf, dst_buf, len));
+
+	return 0;
+}
+DM_TEST(dm_test_dma, DM_TESTF_SCAN_FDT);
+
+static int dm_test_dma_rx(struct unit_test_state *uts)
+{
+	struct udevice *dev;
+	struct dma dma_tx, dma_rx;
+	u8 src_buf[512];
+	u8 dst_buf[512];
+	void *dst_ptr;
+	size_t len = 512;
+	u32 meta1, meta2;
+	int i;
+
+	ut_assertok(uclass_get_device_by_name(UCLASS_DMA, "dma", &dev));
+
+	ut_assertok(dma_get_by_name(dev, "tx0", &dma_tx));
+	ut_assertok(dma_get_by_name(dev, "rx0", &dma_rx));
+
+	ut_assertok(dma_enable(&dma_tx));
+	ut_assertok(dma_enable(&dma_rx));
+
+	memset(dst_buf, 0, len);
+	for (i = 0; i < len; i++)
+		src_buf[i] = i;
+	meta1 = 0xADADDEAD;
+	meta2 = 0;
+	dst_ptr = NULL;
+
+	ut_assertok(dma_prepare_rcv_buf(&dma_rx, dst_buf, len));
+
+	ut_assertok(dma_send(&dma_tx, src_buf, len, &meta1));
+
+	ut_asserteq(len, dma_receive(&dma_rx, &dst_ptr, &meta2));
+	ut_asserteq(0xADADDEAD, meta2);
+	ut_asserteq_ptr(dst_buf, dst_ptr);
+
+	ut_assertok(dma_disable(&dma_tx));
+	ut_assertok(dma_disable(&dma_rx));
+
+	ut_assertok(dma_free(&dma_tx));
+	ut_assertok(dma_free(&dma_rx));
+	ut_assertok(memcmp(src_buf, dst_buf, len));
+
+	return 0;
+}
+DM_TEST(dm_test_dma_rx, DM_TESTF_SCAN_FDT);
-- 
2.10.5


* [U-Boot] [NOT-FOR-MERGE-PATCH v6 4/5] dma: ti: add driver to K3 UDMA
  2018-10-22 21:24 [U-Boot] [PATCH v6 0/5] dma: add channels support Grygorii Strashko
                   ` (2 preceding siblings ...)
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 3/5] sandbox: dma: add dma-uclass test Grygorii Strashko
@ 2018-10-22 21:24 ` Grygorii Strashko
  2018-10-22 21:25 ` [U-Boot] [NOT-FOR-MERGE-PATCH v6 5/5] net: ethernet: ti: introduce am654 gigabit eth switch subsystem driver Grygorii Strashko
  4 siblings, 0 replies; 9+ messages in thread
From: Grygorii Strashko @ 2018-10-22 21:24 UTC (permalink / raw)
  To: u-boot

From: Vignesh R <vigneshr@ti.com>

Add support for the K3 AM65x UDMA, currently supporting only packet mode
MEM_TO_MEM transfers and DEV_TO_MEM/MEM_TO_DEV transfers through DMA
channels.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
---
 drivers/dma/Kconfig               |    2 +
 drivers/dma/Makefile              |    2 +
 drivers/dma/ti/Kconfig            |   14 +
 drivers/dma/ti/Makefile           |    3 +
 drivers/dma/ti/k3-udma-hwdef.h    |  184 ++++
 drivers/dma/ti/k3-udma.c          | 1661 +++++++++++++++++++++++++++++++++++++
 include/dt-bindings/dma/k3-udma.h |   26 +
 include/linux/soc/ti/ti-udma.h    |   24 +
 8 files changed, 1916 insertions(+)
 create mode 100644 drivers/dma/ti/Kconfig
 create mode 100644 drivers/dma/ti/Makefile
 create mode 100644 drivers/dma/ti/k3-udma-hwdef.h
 create mode 100644 drivers/dma/ti/k3-udma.c
 create mode 100644 include/dt-bindings/dma/k3-udma.h
 create mode 100644 include/linux/soc/ti/ti-udma.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 8a4162e..306261c 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -48,4 +48,6 @@ config APBH_DMA_BURST8
 
 endif
 
+source "drivers/dma/ti/Kconfig"
+
 endmenu # menu "DMA Support"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index aff31f9..ef2865a 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -12,3 +12,5 @@ obj-$(CONFIG_SANDBOX_DMA) += sandbox-dma-test.o
 obj-$(CONFIG_TI_KSNAV) += keystone_nav.o keystone_nav_cfg.o
 obj-$(CONFIG_TI_EDMA3) += ti-edma3.o
 obj-$(CONFIG_DMA_LPC32XX) += lpc32xx_dma.o
+
+obj-y += ti/
diff --git a/drivers/dma/ti/Kconfig b/drivers/dma/ti/Kconfig
new file mode 100644
index 0000000..3d54983
--- /dev/null
+++ b/drivers/dma/ti/Kconfig
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: GPL-2.0+
+
+if ARCH_K3
+
+config TI_K3_NAVSS_UDMA
+        bool "Texas Instruments UDMA"
+        depends on ARCH_K3
+        select DMA
+        select TI_K3_NAVSS_RINGACC
+        select TI_K3_NAVSS_PSILCFG
+        default n
+        help
+          Support for UDMA used in K3 devices.
+endif
diff --git a/drivers/dma/ti/Makefile b/drivers/dma/ti/Makefile
new file mode 100644
index 0000000..de2f9ac
--- /dev/null
+++ b/drivers/dma/ti/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0+
+
+obj-$(CONFIG_TI_K3_NAVSS_UDMA) += k3-udma.o
diff --git a/drivers/dma/ti/k3-udma-hwdef.h b/drivers/dma/ti/k3-udma-hwdef.h
new file mode 100644
index 0000000..c88399a
--- /dev/null
+++ b/drivers/dma/ti/k3-udma-hwdef.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef K3_NAVSS_UDMA_HWDEF_H_
+#define K3_NAVSS_UDMA_HWDEF_H_
+
+#define UDMA_PSIL_DST_THREAD_ID_OFFSET 0x8000
+
+/* Global registers */
+#define UDMA_REV_REG			0x0
+#define UDMA_PERF_CTL_REG		0x4
+#define UDMA_EMU_CTL_REG		0x8
+#define UDMA_PSIL_TO_REG		0x10
+#define UDMA_UTC_CTL_REG		0x1c
+#define UDMA_CAP_REG(i)			(0x20 + (i * 4))
+#define UDMA_RX_FLOW_ID_FW_OES_REG	0x80
+#define UDMA_RX_FLOW_ID_FW_STATUS_REG	0x88
+
+/* RX Flow regs */
+#define UDMA_RFLOW_RFA_REG		0x0
+#define UDMA_RFLOW_RFB_REG		0x4
+#define UDMA_RFLOW_RFC_REG		0x8
+#define UDMA_RFLOW_RFD_REG		0xc
+#define UDMA_RFLOW_RFE_REG		0x10
+#define UDMA_RFLOW_RFF_REG		0x14
+#define UDMA_RFLOW_RFG_REG		0x18
+#define UDMA_RFLOW_RFH_REG		0x1c
+
+#define UDMA_RFLOW_REG(x) (UDMA_RFLOW_RF##x##_REG)
+
+/* TX chan regs */
+#define UDMA_TCHAN_TCFG_REG		0x0
+#define UDMA_TCHAN_TCREDIT_REG		0x4
+#define UDMA_TCHAN_TCQ_REG		0x14
+#define UDMA_TCHAN_TOES_REG(i)		(0x20 + (i) * 4)
+#define UDMA_TCHAN_TEOES_REG		0x60
+#define UDMA_TCHAN_TPRI_CTRL_REG	0x64
+#define UDMA_TCHAN_THREAD_ID_REG	0x68
+#define UDMA_TCHAN_TFIFO_DEPTH_REG	0x70
+#define UDMA_TCHAN_TST_SCHED_REG	0x80
+
+/* RX chan regs */
+#define UDMA_RCHAN_RCFG_REG		0x0
+#define UDMA_RCHAN_RCQ_REG		0x14
+#define UDMA_RCHAN_ROES_REG(i)		(0x20 + (i) * 4)
+#define UDMA_RCHAN_REOES_REG		0x60
+#define UDMA_RCHAN_RPRI_CTRL_REG	0x64
+#define UDMA_RCHAN_THREAD_ID_REG	0x68
+#define UDMA_RCHAN_RST_SCHED_REG	0x80
+#define UDMA_RCHAN_RFLOW_RNG_REG	0xf0
+
+/* TX chan RT regs */
+#define UDMA_TCHAN_RT_CTL_REG		0x0
+#define UDMA_TCHAN_RT_SWTRIG_REG	0x8
+#define UDMA_TCHAN_RT_STDATA_REG	0x80
+
+#define UDMA_TCHAN_RT_PEERn_REG(i)	(0x200 + (i * 0x4))
+#define UDMA_TCHAN_RT_PEER_STATIC_TR_XY_REG	\
+	UDMA_TCHAN_RT_PEERn_REG(0)	/* PSI-L: 0x400 */
+#define UDMA_TCHAN_RT_PEER_STATIC_TR_Z_REG	\
+	UDMA_TCHAN_RT_PEERn_REG(1)	/* PSI-L: 0x401 */
+#define UDMA_TCHAN_RT_PEER_BCNT_REG		\
+	UDMA_TCHAN_RT_PEERn_REG(4)	/* PSI-L: 0x404 */
+#define UDMA_TCHAN_RT_PEER_RT_EN_REG		\
+	UDMA_TCHAN_RT_PEERn_REG(8)	/* PSI-L: 0x408 */
+
+#define UDMA_TCHAN_RT_PCNT_REG		0x400
+#define UDMA_TCHAN_RT_BCNT_REG		0x408
+#define UDMA_TCHAN_RT_SBCNT_REG		0x410
+
+/* RX chan RT regs */
+#define UDMA_RCHAN_RT_CTL_REG		0x0
+#define UDMA_RCHAN_RT_SWTRIG_REG	0x8
+#define UDMA_RCHAN_RT_STDATA_REG	0x80
+
+#define UDMA_RCHAN_RT_PEERn_REG(i)	(0x200 + (i * 0x4))
+#define UDMA_RCHAN_RT_PEER_STATIC_TR_XY_REG	\
+	UDMA_RCHAN_RT_PEERn_REG(0)	/* PSI-L: 0x400 */
+#define UDMA_RCHAN_RT_PEER_STATIC_TR_Z_REG	\
+	UDMA_RCHAN_RT_PEERn_REG(1)	/* PSI-L: 0x401 */
+#define UDMA_RCHAN_RT_PEER_BCNT_REG		\
+	UDMA_RCHAN_RT_PEERn_REG(4)	/* PSI-L: 0x404 */
+#define UDMA_RCHAN_RT_PEER_RT_EN_REG		\
+	UDMA_RCHAN_RT_PEERn_REG(8)	/* PSI-L: 0x408 */
+
+#define UDMA_RCHAN_RT_PCNT_REG		0x400
+#define UDMA_RCHAN_RT_BCNT_REG		0x408
+#define UDMA_RCHAN_RT_SBCNT_REG		0x410
+
+/* UDMA_TCHAN_TCFG_REG/UDMA_RCHAN_RCFG_REG */
+#define UDMA_CHAN_CFG_PAUSE_ON_ERR		BIT(31)
+#define UDMA_TCHAN_CFG_FILT_EINFO		BIT(30)
+#define UDMA_TCHAN_CFG_FILT_PSWORDS		BIT(29)
+#define UDMA_CHAN_CFG_ATYPE_MASK		GENMASK(25, 24)
+#define UDMA_CHAN_CFG_ATYPE_SHIFT		24
+#define UDMA_CHAN_CFG_CHAN_TYPE_MASK		GENMASK(19, 16)
+#define UDMA_CHAN_CFG_CHAN_TYPE_SHIFT		16
+/*
+ * PBVR - using pass by value rings
+ * PBRR - using pass by reference rings
+ * 3RDP - Third Party DMA
+ * BC - Block Copy
+ * SB - single buffer packet mode enabled
+ */
+#define UDMA_CHAN_CFG_CHAN_TYPE_PACKET_PBRR \
+	(2 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_PACKET_SB_PBRR \
+	(3 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_3RDP_PBRR \
+	(10 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_3RDP_PBVR \
+	(11 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_3RDP_BC_PBRR \
+	(12 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_RCHAN_CFG_IGNORE_SHORT		BIT(15)
+#define UDMA_RCHAN_CFG_IGNORE_LONG		BIT(14)
+#define UDMA_TCHAN_CFG_SUPR_TDPKT		BIT(8)
+#define UDMA_CHAN_CFG_FETCH_SIZE_MASK		GENMASK(6, 0)
+#define UDMA_CHAN_CFG_FETCH_SIZE_SHIFT		0
+
+/* UDMA_TCHAN_RT_CTL_REG/UDMA_RCHAN_RT_CTL_REG */
+#define UDMA_CHAN_RT_CTL_EN	BIT(31)
+#define UDMA_CHAN_RT_CTL_TDOWN	BIT(30)
+#define UDMA_CHAN_RT_CTL_PAUSE	BIT(29)
+#define UDMA_CHAN_RT_CTL_FTDOWN	BIT(28)
+#define UDMA_CHAN_RT_CTL_ERROR	BIT(0)
+
+/* UDMA_TCHAN_RT_PEER_RT_EN_REG/UDMA_RCHAN_RT_PEER_RT_EN_REG (PSI-L: 0x408) */
+#define UDMA_PEER_RT_EN_ENABLE		BIT(31)
+#define UDMA_PEER_RT_EN_TEARDOWN	BIT(30)
+#define UDMA_PEER_RT_EN_PAUSE		BIT(29)
+#define UDMA_PEER_RT_EN_FLUSH		BIT(28)
+#define UDMA_PEER_RT_EN_IDLE		BIT(1)
+
+/* RX Flow reg RFA */
+#define UDMA_RFLOW_RFA_EINFO			BIT(30)
+#define UDMA_RFLOW_RFA_PSINFO			BIT(29)
+#define UDMA_RFLOW_RFA_ERR_HANDLING		BIT(28)
+#define UDMA_RFLOW_RFA_DESC_TYPE_MASK		GENMASK(27, 26)
+#define UDMA_RFLOW_RFA_DESC_TYPE_SHIFT		26
+#define UDMA_RFLOW_RFA_PS_LOC			BIT(25)
+#define UDMA_RFLOW_RFA_SOP_OFF_MASK		GENMASK(24, 16)
+#define UDMA_RFLOW_RFA_SOP_OFF_SHIFT		16
+#define UDMA_RFLOW_RFA_DEST_QNUM_MASK		GENMASK(15, 0)
+#define UDMA_RFLOW_RFA_DEST_QNUM_SHIFT		0
+
+/* RX Flow reg RFC */
+#define UDMA_RFLOW_RFC_SRC_TAG_HI_SEL_SHIFT	28
+#define UDMA_RFLOW_RFC_SRC_TAG_LO_SEL_SHIFT	24
+#define UDMA_RFLOW_RFC_DST_TAG_HI_SEL_SHIFT	20
+#define UDMA_RFLOW_RFC_DST_TAG_LO_SE_SHIFT	16
+
+/*
+ * UDMA_TCHAN_RT_PEER_STATIC_TR_XY_REG /
+ * UDMA_RCHAN_RT_PEER_STATIC_TR_XY_REG
+ */
+#define PDMA_STATIC_TR_X_MASK		GENMASK(26, 24)
+#define PDMA_STATIC_TR_X_SHIFT		(24)
+#define PDMA_STATIC_TR_Y_MASK		GENMASK(11, 0)
+#define PDMA_STATIC_TR_Y_SHIFT		(0)
+
+#define PDMA_STATIC_TR_Y(x)	\
+	(((x) << PDMA_STATIC_TR_Y_SHIFT) & PDMA_STATIC_TR_Y_MASK)
+#define PDMA_STATIC_TR_X(x)	\
+	(((x) << PDMA_STATIC_TR_X_SHIFT) & PDMA_STATIC_TR_X_MASK)
+
+/*
+ * UDMA_TCHAN_RT_PEER_STATIC_TR_Z_REG /
+ * UDMA_RCHAN_RT_PEER_STATIC_TR_Z_REG
+ */
+#define PDMA_STATIC_TR_Z_MASK		GENMASK(11, 0)
+#define PDMA_STATIC_TR_Z_SHIFT		(0)
+#define PDMA_STATIC_TR_Z(x)	\
+	(((x) << PDMA_STATIC_TR_Z_SHIFT) & PDMA_STATIC_TR_Z_MASK)
+
+#endif /* K3_NAVSS_UDMA_HWDEF_H_ */
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
new file mode 100644
index 0000000..cf6b682
--- /dev/null
+++ b/drivers/dma/ti/k3-udma.c
@@ -0,0 +1,1661 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ *  Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
+ */
+#define pr_fmt(fmt) "udma: " fmt
+
+#include <common.h>
+#include <asm/io.h>
+#include <asm/bitops.h>
+#include <malloc.h>
+#include <asm/dma-mapping.h>
+#include <dm.h>
+#include <dm/read.h>
+#include <dm/of_access.h>
+#include <dma.h>
+#include <dma-uclass.h>
+#include <linux/delay.h>
+#include <linux/soc/ti/k3-navss-psilcfg.h>
+#include <dt-bindings/dma/k3-udma.h>
+#include <linux/soc/ti/k3-navss-ringacc.h>
+#include <linux/soc/ti/cppi5.h>
+#include <linux/soc/ti/ti-udma.h>
+#include <linux/soc/ti/ti_sci_protocol.h>
+
+#include "k3-udma-hwdef.h"
+
+#define RINGACC_RING_USE_PROXY (0)
+
+struct udma_chan;
+
+enum udma_mmr {
+	MMR_GCFG = 0,
+	MMR_RCHANRT,
+	MMR_TCHANRT,
+	MMR_LAST,
+};
+
+static const char * const mmr_names[] = {
+	"gcfg", "rchanrt", "tchanrt"
+};
+
+struct udma_tchan {
+	void __iomem *reg_rt;
+
+	int id;
+	struct k3_nav_ring *t_ring; /* Transmit ring */
+	struct k3_nav_ring *tc_ring; /* Transmit Completion ring */
+};
+
+struct udma_rchan {
+	void __iomem *reg_rt;
+
+	int id;
+	struct k3_nav_ring *fd_ring; /* Free Descriptor ring */
+	struct k3_nav_ring *r_ring; /* Receive ring */
+};
+
+struct udma_rflow {
+	int id;
+};
+
+struct udma_dev {
+	struct device *dev;
+	void __iomem *mmrs[MMR_LAST];
+
+	struct udevice *psil_node;
+	struct k3_nav_ringacc *ringacc;
+
+	u32 features;
+
+	int tchan_cnt;
+	int echan_cnt;
+	int rchan_cnt;
+	int rflow_cnt;
+	unsigned long *tchan_map;
+	unsigned long *rchan_map;
+	unsigned long *rflow_map;
+
+	struct udma_tchan *tchans;
+	struct udma_rchan *rchans;
+	struct udma_rflow *rflows;
+
+	struct udma_chan *channels;
+	u32 psil_base;
+
+	u32 ch_count;
+	const struct ti_sci_handle *tisci;
+	const struct ti_sci_rm_udmap_ops *tisci_udmap_ops;
+	u32  tisci_dev_id;
+};
+
+struct udma_chan {
+	struct udma_dev *ud;
+	char name[20];
+
+	struct udma_tchan *tchan;
+	struct udma_rchan *rchan;
+	struct udma_rflow *rflow;
+
+	struct k3_nav_psil_entry *psi_link;
+
+	u32 bcnt; /* number of bytes completed since the start of the channel */
+
+	bool pkt_mode; /* TR or packet */
+	bool needs_epib; /* EPIB is needed for the communication or not */
+	u32 psd_size; /* size of Protocol Specific Data */
+	u32 metadata_size; /* (needs_epib ? 16:0) + psd_size */
+	int slave_thread_id;
+	u32 src_thread;
+	u32 dst_thread;
+	u32 static_tr_type;
+
+	u32 id;
+	enum dma_direction dir;
+
+	struct knav_udmap_host_desc_t *desc_tx;
+	u32 hdesc_size;
+	bool in_use;
+	void	*desc_rx;
+	u32	num_rx_bufs;
+	u32	desc_rx_cur;
+
+};
+
+#define UDMA_CH_1000(ch)		(ch * 0x1000)
+#define UDMA_CH_100(ch)			(ch * 0x100)
+#define UDMA_CH_40(ch)			(ch * 0x40)
+
+#ifdef PKTBUFSRX
+#define UDMA_RX_DESC_NUM PKTBUFSRX
+#else
+#define UDMA_RX_DESC_NUM 4
+#endif
+
+#ifdef K3_UDMA_DEBUG
+#define	k3_udma_dbg(arg...) pr_err(arg)
+#define	k3_udma_dev_dbg(dev, arg...) dev_err(dev, arg)
+static void k3_udma_print_buf(ulong addr, const void *data, uint width,
+			      uint count, uint linelen)
+{
+	print_buffer(addr, data, width, count, linelen);
+}
+#else
+#define	k3_udma_dbg(arg...)
+#define	k3_udma_dev_dbg(arg...)
+static void k3_udma_print_buf(ulong addr, const void *data, uint width,
+			      uint count, uint linelen)
+{}
+#endif
+
+/* Generic register access functions */
+static inline u32 udma_read(void __iomem *base, int reg)
+{
+	u32 v;
+
+	v = __raw_readl(base + reg);
+	k3_udma_dbg("READL(32): v(%08X)<--reg(%p)\n", v, base + reg);
+	return v;
+}
+
+static inline void udma_write(void __iomem *base, int reg, u32 val)
+{
+	k3_udma_dbg("WRITEL(32): v(%08X)-->reg(%p)\n", val, base + reg);
+	__raw_writel(val, base + reg);
+}
+
+static inline void udma_update_bits(void __iomem *base, int reg,
+				    u32 mask, u32 val)
+{
+	u32 tmp, orig;
+
+	orig = udma_read(base, reg);
+	tmp = orig & ~mask;
+	tmp |= (val & mask);
+
+	if (tmp != orig)
+		udma_write(base, reg, tmp);
+}
+
+/* TCHANRT */
+static inline u32 udma_tchanrt_read(struct udma_tchan *tchan, int reg)
+{
+	if (!tchan)
+		return 0;
+	return udma_read(tchan->reg_rt, reg);
+}
+
+static inline void udma_tchanrt_write(struct udma_tchan *tchan,
+				      int reg, u32 val)
+{
+	if (!tchan)
+		return;
+	udma_write(tchan->reg_rt, reg, val);
+}
+
+/* RCHANRT */
+static inline u32 udma_rchanrt_read(struct udma_rchan *rchan, int reg)
+{
+	if (!rchan)
+		return 0;
+	return udma_read(rchan->reg_rt, reg);
+}
+
+static inline void udma_rchanrt_write(struct udma_rchan *rchan,
+				      int reg, u32 val)
+{
+	if (!rchan)
+		return;
+	udma_write(rchan->reg_rt, reg, val);
+}
+
+static inline char *udma_get_dir_text(enum dma_direction dir)
+{
+	switch (dir) {
+	case DMA_DEV_TO_MEM:
+		return "DEV_TO_MEM";
+	case DMA_MEM_TO_DEV:
+		return "MEM_TO_DEV";
+	case DMA_MEM_TO_MEM:
+		return "MEM_TO_MEM";
+	case DMA_DEV_TO_DEV:
+		return "DEV_TO_DEV";
+	default:
+		break;
+	}
+
+	return "invalid";
+}
+
+static inline bool udma_is_chan_running(struct udma_chan *uc)
+{
+	u32 trt_ctl = 0;
+	u32 rrt_ctl = 0;
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		rrt_ctl = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+		k3_udma_dbg("%s: rrt_ctl: 0x%08x (peer: 0x%08x)\n",
+			    __func__, rrt_ctl,
+			    udma_rchanrt_read(uc->rchan,
+					      UDMA_RCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_DEV:
+		trt_ctl = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+		k3_udma_dbg("%s: trt_ctl: 0x%08x (peer: 0x%08x)\n",
+			    __func__, trt_ctl,
+			    udma_tchanrt_read(uc->tchan,
+					      UDMA_TCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_MEM:
+		trt_ctl = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+		rrt_ctl = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+		break;
+	default:
+		break;
+	}
+
+	if (trt_ctl & UDMA_CHAN_RT_CTL_EN || rrt_ctl & UDMA_CHAN_RT_CTL_EN)
+		return true;
+
+	return false;
+}
+
+static int udma_pop_from_ring(struct udma_chan *uc, dma_addr_t *addr)
+{
+	struct k3_nav_ring *ring = NULL;
+	int ret = -ENOENT;
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		ring = uc->rchan->r_ring;
+		break;
+	case DMA_MEM_TO_DEV:
+		ring = uc->tchan->tc_ring;
+		break;
+	case DMA_MEM_TO_MEM:
+		ring = uc->rchan->r_ring;
+		break;
+	default:
+		break;
+	}
+
+	if (ring && k3_nav_ringacc_ring_get_occ(ring))
+		ret = k3_nav_ringacc_ring_pop(ring, addr);
+
+	return ret;
+}
+
+static void udma_reset_rings(struct udma_chan *uc)
+{
+	struct k3_nav_ring *ring1 = NULL;
+	struct k3_nav_ring *ring2 = NULL;
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		ring1 = uc->rchan->fd_ring;
+		ring2 = uc->rchan->r_ring;
+		break;
+	case DMA_MEM_TO_DEV:
+		ring1 = uc->tchan->t_ring;
+		ring2 = uc->tchan->tc_ring;
+		break;
+	case DMA_MEM_TO_MEM:
+		ring1 = uc->tchan->t_ring;
+		ring2 = uc->tchan->tc_ring;
+		break;
+	default:
+		break;
+	}
+
+	if (ring1)
+		k3_nav_ringacc_ring_reset_dma(ring1, 0);
+	if (ring2)
+		k3_nav_ringacc_ring_reset(ring2);
+}
+
+static void udma_reset_counters(struct udma_chan *uc)
+{
+	u32 val;
+
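+	/*
+	 * The channel real-time counters are write-to-decrement registers:
+	 * writing each counter's current reading back clears it to zero.
+	 */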
+	if (uc->tchan) {
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_BCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_BCNT_REG, val);
+
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_SBCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_SBCNT_REG, val);
+
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_PCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PCNT_REG, val);
+
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_PEER_BCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PEER_BCNT_REG, val);
+	}
+
+	if (uc->rchan) {
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_BCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_BCNT_REG, val);
+
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_SBCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_SBCNT_REG, val);
+
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_PCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PCNT_REG, val);
+
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_PEER_BCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_BCNT_REG, val);
+	}
+
+	uc->bcnt = 0;
+}
+
+static inline int udma_stop_hard(struct udma_chan *uc)
+{
+	k3_udma_dbg("%s: ENTER (chan%d)\n", __func__, uc->id);
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG, 0);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG, 0);
+		break;
+	case DMA_MEM_TO_DEV:
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG, 0);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PEER_RT_EN_REG, 0);
+		break;
+	case DMA_MEM_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG, 0);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG, 0);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int udma_start(struct udma_chan *uc)
+{
+	/* Channel is already running, no need to proceed further */
+	if (udma_is_chan_running(uc))
+		goto out;
+
+	k3_udma_dbg("%s: chan:%d dir:%s (static_tr_type: %d)\n",
+		    __func__, uc->id, udma_get_dir_text(uc->dir),
+		    uc->static_tr_type);
+
+	/* Make sure that we clear the teardown bit, if it is set */
+	udma_stop_hard(uc);
+
+	/* Reset all counters */
+	udma_reset_counters(uc);
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+
+		/* Enable remote */
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG,
+				   UDMA_PEER_RT_EN_ENABLE);
+
+		k3_udma_dbg("%s(rx): RT_CTL:0x%08x PEER RT_ENABLE:0x%08x\n",
+			    __func__,
+			    udma_rchanrt_read(uc->rchan,
+					      UDMA_RCHAN_RT_CTL_REG),
+			    udma_rchanrt_read(uc->rchan,
+					      UDMA_RCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_DEV:
+		/* Enable remote */
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PEER_RT_EN_REG,
+				   UDMA_PEER_RT_EN_ENABLE);
+
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+
+		k3_udma_dbg("%s(tx): RT_CTL:0x%08x PEER RT_ENABLE:0x%08x\n",
+			    __func__,
+			    udma_tchanrt_read(uc->tchan,
+					      UDMA_TCHAN_RT_CTL_REG),
+			    udma_tchanrt_read(uc->tchan,
+					      UDMA_TCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	k3_udma_dbg("%s: DONE chan:%d\n", __func__, uc->id);
+out:
+	return 0;
+}
+
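+/*
+ * Graceful TX stop: keep the channel enabled while asserting TDOWN so
+ * in-flight descriptors can drain, then (when @sync is set) poll until
+ * the enable bit clears or the ~1ms timeout expires.
+ */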
+static inline void udma_stop_mem2dev(struct udma_chan *uc, bool sync)
+{
+	int i = 0;
+	u32 val;
+
+	udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG,
+			   UDMA_CHAN_RT_CTL_EN |
+			   UDMA_CHAN_RT_CTL_TDOWN);
+
+	val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+
+	while (sync && (val & UDMA_CHAN_RT_CTL_EN)) {
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+		udelay(1);
+		if (i > 1000) {
+			printf(" %s TIMEOUT !\n", __func__);
+			break;
+		}
+		i++;
+	}
+
+	val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_PEER_RT_EN_REG);
+	if (val & UDMA_PEER_RT_EN_ENABLE)
+		printf("%s: peer not stopped TIMEOUT !\n", __func__);
+}
+
+static inline void udma_stop_dev2mem(struct udma_chan *uc, bool sync)
+{
+	int i = 0;
+	u32 val;
+
+	udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG,
+			   UDMA_PEER_RT_EN_ENABLE |
+			   UDMA_PEER_RT_EN_TEARDOWN);
+
+	val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+
+	while (sync && (val & UDMA_CHAN_RT_CTL_EN)) {
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+		udelay(1);
+		if (i > 1000) {
+			printf("%s TIMEOUT !\n", __func__);
+			break;
+		}
+		i++;
+	}
+
+	val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG);
+	if (val & UDMA_PEER_RT_EN_ENABLE)
+		printf("%s: peer not stopped TIMEOUT !\n", __func__);
+}
+
+static inline int udma_stop(struct udma_chan *uc)
+{
+	k3_udma_dbg("%s: chan:%d dir:%s\n",
+		    __func__, uc->id, udma_get_dir_text(uc->dir));
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		udma_stop_dev2mem(uc, true);
+		break;
+	case DMA_MEM_TO_DEV:
+		udma_stop_mem2dev(uc, true);
+		break;
+	case DMA_MEM_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG, 0);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG, 0);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void udma_poll_completion(struct udma_chan *uc, dma_addr_t *paddr)
+{
+	int i = 1;
+
+	while (udma_pop_from_ring(uc, paddr)) {
+		udelay(1);
+		if (!(i % 1000000))
+			printf(".");
+		i++;
+	}
+}
+
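+/*
+ * Generates __udma_reserve_tchan/rchan/rflow(): reserve a specific
+ * resource by id, or the first free one when id < 0.
+ */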
+#define UDMA_RESERVE_RESOURCE(res)					\
+static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud,	\
+					       int id)			\
+{									\
+	if (id >= 0) {							\
+		if (test_bit(id, ud->res##_map)) {			\
+			dev_err(ud->dev, "res##%d is in use\n", id);	\
+			return ERR_PTR(-ENOENT);			\
+		}							\
+	} else {							\
+		id = find_first_zero_bit(ud->res##_map, ud->res##_cnt); \
+		if (id == ud->res##_cnt) {				\
+			return ERR_PTR(-ENOENT);			\
+		}							\
+	}								\
+									\
+	__set_bit(id, ud->res##_map);					\
+	return &ud->res##s[id];						\
+}
+
+UDMA_RESERVE_RESOURCE(tchan);
+UDMA_RESERVE_RESOURCE(rchan);
+UDMA_RESERVE_RESOURCE(rflow);
+
+static int udma_get_tchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->tchan) {
+		dev_dbg(ud->dev, "chan%d: already have tchan%d allocated\n",
+			uc->id, uc->tchan->id);
+		return 0;
+	}
+
+	uc->tchan = __udma_reserve_tchan(ud, -1);
+	if (IS_ERR(uc->tchan))
+		return PTR_ERR(uc->tchan);
+
+	k3_udma_dbg("chan%d: got tchan%d\n", uc->id, uc->tchan->id);
+
+	if (udma_is_chan_running(uc)) {
+		dev_warn(ud->dev, "chan%d: tchan%d is running!\n", uc->id,
+			 uc->tchan->id);
+		udma_stop(uc);
+		if (udma_is_chan_running(uc))
+			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+	}
+
+	return 0;
+}
+
+static int udma_get_rchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rchan) {
+		dev_dbg(ud->dev, "chan%d: already have rchan%d allocated\n",
+			uc->id, uc->rchan->id);
+		return 0;
+	}
+
+	uc->rchan = __udma_reserve_rchan(ud, -1);
+	if (IS_ERR(uc->rchan))
+		return PTR_ERR(uc->rchan);
+
+	k3_udma_dbg("chan%d: got rchan%d\n", uc->id, uc->rchan->id);
+
+	if (udma_is_chan_running(uc)) {
+		dev_warn(ud->dev, "chan%d: rchan%d is running!\n", uc->id,
+			 uc->rchan->id);
+		udma_stop(uc);
+		if (udma_is_chan_running(uc))
+			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+	}
+
+	return 0;
+}
+
+static int udma_get_chan_pair(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int chan_id, end;
+
+	if ((uc->tchan && uc->rchan) && uc->tchan->id == uc->rchan->id) {
+		dev_info(ud->dev, "chan%d: already have %d pair allocated\n",
+			 uc->id, uc->tchan->id);
+		return 0;
+	}
+
+	if (uc->tchan) {
+		dev_err(ud->dev, "chan%d: already have tchan%d allocated\n",
+			uc->id, uc->tchan->id);
+		return -EBUSY;
+	} else if (uc->rchan) {
+		dev_err(ud->dev, "chan%d: already have rchan%d allocated\n",
+			uc->id, uc->rchan->id);
+		return -EBUSY;
+	}
+
+	/* Can be optimized, but let's have it like this for now */
+	end = min(ud->tchan_cnt, ud->rchan_cnt);
+	for (chan_id = 0; chan_id < end; chan_id++) {
+		if (!test_bit(chan_id, ud->tchan_map) &&
+		    !test_bit(chan_id, ud->rchan_map))
+			break;
+	}
+
+	if (chan_id == end)
+		return -ENOENT;
+
+	__set_bit(chan_id, ud->tchan_map);
+	__set_bit(chan_id, ud->rchan_map);
+	uc->tchan = &ud->tchans[chan_id];
+	uc->rchan = &ud->rchans[chan_id];
+
+	k3_udma_dbg("chan%d: got t/rchan%d pair\n", uc->id, chan_id);
+
+	if (udma_is_chan_running(uc)) {
+		dev_warn(ud->dev, "chan%d: t/rchan%d pair is running!\n",
+			 uc->id, chan_id);
+		udma_stop(uc);
+		if (udma_is_chan_running(uc))
+			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+	}
+
+	return 0;
+}
+
+static int udma_get_rflow(struct udma_chan *uc, int flow_id)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rflow) {
+		dev_dbg(ud->dev, "chan%d: already have rflow%d allocated\n",
+			uc->id, uc->rflow->id);
+		return 0;
+	}
+
+	if (!uc->rchan)
+		dev_warn(ud->dev, "chan%d: does not have rchan??\n", uc->id);
+
+	uc->rflow = __udma_reserve_rflow(ud, flow_id);
+	if (IS_ERR(uc->rflow))
+		return PTR_ERR(uc->rflow);
+
+	k3_udma_dbg("chan%d: got rflow%d\n", uc->id, uc->rflow->id);
+	return 0;
+}
+
+static void udma_put_rchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rchan) {
+		dev_dbg(ud->dev, "chan%d: put rchan%d\n", uc->id,
+			uc->rchan->id);
+		__clear_bit(uc->rchan->id, ud->rchan_map);
+		uc->rchan = NULL;
+	}
+}
+
+static void udma_put_tchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->tchan) {
+		dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id,
+			uc->tchan->id);
+		__clear_bit(uc->tchan->id, ud->tchan_map);
+		uc->tchan = NULL;
+	}
+}
+
+static void udma_put_rflow(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rflow) {
+		dev_dbg(ud->dev, "chan%d: put rflow%d\n", uc->id,
+			uc->rflow->id);
+		__clear_bit(uc->rflow->id, ud->rflow_map);
+		uc->rflow = NULL;
+	}
+}
+
+static void udma_free_tx_resources(struct udma_chan *uc)
+{
+	if (!uc->tchan)
+		return;
+
+	k3_nav_ringacc_ring_free(uc->tchan->t_ring);
+	k3_nav_ringacc_ring_free(uc->tchan->tc_ring);
+	uc->tchan->t_ring = NULL;
+	uc->tchan->tc_ring = NULL;
+
+	udma_put_tchan(uc);
+}
+
+static int udma_alloc_tx_resources(struct udma_chan *uc)
+{
+	struct k3_nav_ring_cfg ring_cfg;
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	ret = udma_get_tchan(uc);
+	if (ret)
+		return ret;
+
+	uc->tchan->t_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, uc->tchan->id,
+				RINGACC_RING_USE_PROXY);
+	if (!uc->tchan->t_ring) {
+		ret = -EBUSY;
+		goto err_tx_ring;
+	}
+
+	uc->tchan->tc_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, -1, RINGACC_RING_USE_PROXY);
+	if (!uc->tchan->tc_ring) {
+		ret = -EBUSY;
+		goto err_txc_ring;
+	}
+
+	memset(&ring_cfg, 0, sizeof(ring_cfg));
+	ring_cfg.size = 16;
+	ring_cfg.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8;
+	ring_cfg.mode = K3_NAV_RINGACC_RING_MODE_MESSAGE;
+
+	ret = k3_nav_ringacc_ring_cfg(uc->tchan->t_ring, &ring_cfg);
+	ret |= k3_nav_ringacc_ring_cfg(uc->tchan->tc_ring, &ring_cfg);
+
+	if (ret)
+		goto err_ringcfg;
+
+	return 0;
+
+err_ringcfg:
+	k3_nav_ringacc_ring_free(uc->tchan->tc_ring);
+	uc->tchan->tc_ring = NULL;
+err_txc_ring:
+	k3_nav_ringacc_ring_free(uc->tchan->t_ring);
+	uc->tchan->t_ring = NULL;
+err_tx_ring:
+	udma_put_tchan(uc);
+
+	return ret;
+}
+
+static void udma_free_rx_resources(struct udma_chan *uc)
+{
+	if (!uc->rchan)
+		return;
+
+	k3_nav_ringacc_ring_free(uc->rchan->fd_ring);
+	k3_nav_ringacc_ring_free(uc->rchan->r_ring);
+	uc->rchan->fd_ring = NULL;
+	uc->rchan->r_ring = NULL;
+
+	udma_put_rflow(uc);
+	udma_put_rchan(uc);
+}
+
+static int udma_alloc_rx_resources(struct udma_chan *uc)
+{
+	struct k3_nav_ring_cfg ring_cfg;
+	struct udma_dev *ud = uc->ud;
+	int fd_ring_id;
+	int ret;
+
+	ret = udma_get_rchan(uc);
+	if (ret)
+		return ret;
+
+	ret = udma_get_rflow(uc, uc->rchan->id);
+	if (ret) {
+		ret = -EBUSY;
+		goto err_rflow;
+	}
+
+	if (uc->dir == DMA_MEM_TO_MEM)
+		fd_ring_id = -1;
+	else
+		fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id;
+
+	uc->rchan->fd_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, fd_ring_id,
+				RINGACC_RING_USE_PROXY);
+	if (!uc->rchan->fd_ring) {
+		ret = -EBUSY;
+		goto err_rx_ring;
+	}
+
+	uc->rchan->r_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, -1, RINGACC_RING_USE_PROXY);
+	if (!uc->rchan->r_ring) {
+		ret = -EBUSY;
+		goto err_rxc_ring;
+	}
+
+	memset(&ring_cfg, 0, sizeof(ring_cfg));
+	ring_cfg.size = 16;
+	ring_cfg.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8;
+	ring_cfg.mode = K3_NAV_RINGACC_RING_MODE_MESSAGE;
+
+	ret = k3_nav_ringacc_ring_cfg(uc->rchan->fd_ring, &ring_cfg);
+	ret |= k3_nav_ringacc_ring_cfg(uc->rchan->r_ring, &ring_cfg);
+
+	if (ret)
+		goto err_ringcfg;
+
+	return 0;
+
+err_ringcfg:
+	k3_nav_ringacc_ring_free(uc->rchan->r_ring);
+	uc->rchan->r_ring = NULL;
+err_rxc_ring:
+	k3_nav_ringacc_ring_free(uc->rchan->fd_ring);
+	uc->rchan->fd_ring = NULL;
+err_rx_ring:
+	udma_put_rflow(uc);
+err_rflow:
+	udma_put_rchan(uc);
+
+	return ret;
+}
+
+static int udma_alloc_tchan_sci_req(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int tc_ring = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+	struct ti_sci_msg_rm_udmap_tx_ch_cfg req;
+	u32 mode;
+	int ret;
+
+	if (uc->pkt_mode)
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
+	else
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR;
+
+	req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID;
+	req.nav_id = ud->tisci_dev_id;
+	req.index = uc->tchan->id;
+	req.tx_chan_type = mode;
+	req.tx_fetch_size = 16;
+	req.txcq_qnum = tc_ring;
+
+	ret = ud->tisci_udmap_ops->tx_ch_cfg(ud->tisci, &req);
+	if (ret)
+		dev_err(ud->dev, "tisci tx alloc failed %d\n", ret);
+
+	return ret;
+}
+
+static int udma_alloc_rchan_sci_req(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int fd_ring = k3_nav_ringacc_get_ring_id(uc->rchan->fd_ring);
+	int rx_ring = k3_nav_ringacc_get_ring_id(uc->rchan->r_ring);
+	struct ti_sci_msg_rm_udmap_rx_ch_cfg req = { 0 };
+	struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 };
+	u32 mode;
+	int ret;
+
+	if (uc->pkt_mode)
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
+	else
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR;
+
+	req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID;
+	req.nav_id = ud->tisci_dev_id;
+	req.index = uc->rchan->id;
+	req.rx_fetch_size = 16;
+	req.rxcq_qnum = rx_ring;
+	if (uc->rflow->id != uc->rchan->id) {
+		req.flowid_start = uc->rflow->id;
+		req.flowid_cnt = 1;
+		req.valid_params |=
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID;
+	}
+	req.rx_chan_type = mode;
+
+	ret = ud->tisci_udmap_ops->rx_ch_cfg(ud->tisci, &req);
+	if (ret) {
+		dev_err(ud->dev, "tisci rx %u cfg failed %d\n",
+			uc->rchan->id, ret);
+		return ret;
+	}
+
+	flow_req.valid_params =
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DESC_TYPE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ0_SZ0_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ1_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ2_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ3_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PS_LOCATION_VALID;
+
+	flow_req.nav_id = ud->tisci_dev_id;
+	flow_req.flow_index = uc->rflow->id;
+
+	if (uc->needs_epib)
+		flow_req.rx_einfo_present = 1;
+	else
+		flow_req.rx_einfo_present = 0;
+
+	if (uc->psd_size)
+		flow_req.rx_psinfo_present = 1;
+	else
+		flow_req.rx_psinfo_present = 0;
+
+	flow_req.rx_error_handling = 0;
+	flow_req.rx_desc_type = 0;
+	flow_req.rx_dest_qnum = rx_ring;
+	flow_req.rx_src_tag_hi_sel = 2;
+	flow_req.rx_src_tag_lo_sel = 4;
+	flow_req.rx_dest_tag_hi_sel = 5;
+	flow_req.rx_dest_tag_lo_sel = 4;
+	flow_req.rx_fdq0_sz0_qnum = fd_ring;
+	flow_req.rx_fdq1_qnum = fd_ring;
+	flow_req.rx_fdq2_qnum = fd_ring;
+	flow_req.rx_fdq3_qnum = fd_ring;
+	flow_req.rx_ps_location = 0;
+
+	ret = ud->tisci_udmap_ops->rx_flow_cfg(ud->tisci, &flow_req);
+	if (ret)
+		dev_err(ud->dev, "tisci rx %u flow %u cfg failed %d\n",
+			uc->rchan->id, uc->rflow->id, ret);
+
+	return ret;
+}
+
+static int udma_alloc_chan_resources(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	k3_udma_dbg("%s: chan:%d as %s\n",
+		    __func__, uc->id, udma_get_dir_text(uc->dir));
+
+	switch (uc->dir) {
+	case DMA_MEM_TO_MEM:
+		/* Non synchronized - mem to mem type of transfer */
+		ret = udma_get_chan_pair(uc);
+		if (ret)
+			return ret;
+
+		ret = udma_alloc_tx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
+		ret = udma_alloc_rx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
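+		/* PSI-L destination (receive) thread ids have bit 15 set */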
+		uc->src_thread = ud->psil_base + uc->tchan->id;
+		uc->dst_thread = (ud->psil_base + uc->rchan->id) | 0x8000;
+		uc->pkt_mode = true;
+		break;
+	case DMA_MEM_TO_DEV:
+		/* Slave transfer synchronized - mem to dev (TX) transfer */
+		ret = udma_alloc_tx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
+		uc->src_thread = ud->psil_base + uc->tchan->id;
+		uc->dst_thread = uc->slave_thread_id;
+		if (!(uc->dst_thread & 0x8000))
+			uc->dst_thread |= 0x8000;
+
+		break;
+	case DMA_DEV_TO_MEM:
+		/* Slave transfer synchronized - dev to mem (RX) transfer */
+		ret = udma_alloc_rx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
+		uc->src_thread = uc->slave_thread_id;
+		uc->dst_thread = (ud->psil_base + uc->rchan->id) | 0x8000;
+
+		break;
+	default:
+		/* Can not happen */
+		k3_udma_dbg("%s: chan:%d invalid direction (%u)\n",
+			    __func__, uc->id, uc->dir);
+		return -EINVAL;
+	}
+
+	/* We have channel indexes and rings */
+	if (uc->dir == DMA_MEM_TO_MEM) {
+		ret = udma_alloc_tchan_sci_req(uc);
+		if (ret)
+			goto err_free_res;
+
+		ret = udma_alloc_rchan_sci_req(uc);
+		if (ret)
+			goto err_free_res;
+	} else {
+		/* Slave transfer */
+		if (uc->dir == DMA_MEM_TO_DEV) {
+			ret = udma_alloc_tchan_sci_req(uc);
+			if (ret)
+				goto err_free_res;
+		} else {
+			ret = udma_alloc_rchan_sci_req(uc);
+			if (ret)
+				goto err_free_res;
+		}
+	}
+
+	/* PSI-L pairing */
+	uc->psi_link = k3_nav_psil_request_link(ud->psil_node, uc->src_thread,
+						uc->dst_thread);
+	if (IS_ERR(uc->psi_link)) {
+		dev_err(ud->dev, "k3_nav_psil_request_link fail\n");
+		ret = PTR_ERR(uc->psi_link);
+		goto err_free_res;
+	}
+
+	return 0;
+
+err_free_res:
+	udma_free_tx_resources(uc);
+	udma_free_rx_resources(uc);
+	uc->slave_thread_id = -1;
+	return ret;
+}
+
+static void udma_free_chan_resources(struct udma_chan *uc)
+{
+	/* Some configuration to UDMA-P channel: disable, reset, whatever */
+
+	/* Release PSI-L pairing */
+	k3_nav_psil_release_link(uc->psi_link);
+
+	/* Reset the rings for a new start */
+	udma_reset_rings(uc);
+	udma_free_tx_resources(uc);
+	udma_free_rx_resources(uc);
+
+	uc->slave_thread_id = -1;
+	uc->dir = DMA_MEM_TO_MEM;
+}
+
+static int udma_get_mmrs(struct udevice *dev)
+{
+	struct udma_dev *ud = dev_get_priv(dev);
+	int i;
+
+	for (i = 0; i < MMR_LAST; i++) {
+		ud->mmrs[i] = (void __iomem *)devfdt_get_addr_name(dev,
+				mmr_names[i]);
+		if (ud->mmrs[i] == (void __iomem *)FDT_ADDR_T_NONE)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+#define UDMA_MAX_CHANNELS	192
+
+static int udma_probe(struct udevice *dev)
+{
+	struct dma_dev_priv *uc_priv = dev_get_uclass_priv(dev);
+	struct udma_dev *ud = dev_get_priv(dev);
+	int i, ret;
+	u32 cap2, cap3;
+	struct udevice *tmp;
+	struct udevice *tisci_dev = NULL;
+
+	ud->dev = dev;
+
+	ret = udma_get_mmrs(dev);
+	if (ret)
+		return ret;
+
+	ret = uclass_get_device_by_phandle(UCLASS_MISC, dev,
+					   "ti,psi-proxy",
+					   &ud->psil_node);
+	if (ret || !ud->psil_node) {
+		dev_err(dev, "PSILCFG node is not found\n");
+		return -ENODEV;
+	}
+
+	ret = uclass_get_device_by_phandle(UCLASS_MISC, dev,
+					   "ti,ringacc", &tmp);
+	if (ret)
+		return ret;
+
+	ud->ringacc = dev_get_priv(tmp);
+	if (IS_ERR(ud->ringacc))
+		return PTR_ERR(ud->ringacc);
+
+	ud->ch_count = dev_read_u32_default(dev, "dma-channels", 0);
+	if (!ud->ch_count) {
+		dev_info(dev, "Missing dma-channels property, using %u.\n",
+			 UDMA_MAX_CHANNELS);
+		ud->ch_count = UDMA_MAX_CHANNELS;
+	}
+
+	ud->psil_base = dev_read_u32_default(dev, "ti,psil-base", 0);
+	if (!ud->psil_base) {
+		dev_err(dev, "Missing ti,psil-base property\n");
+		return -EINVAL;
+	}
+
+	ret = uclass_get_device_by_name(UCLASS_FIRMWARE, "dmsc", &tisci_dev);
+	if (ret) {
+		debug("TISCI RA RM get failed (%d)\n", ret);
+		ud->tisci = NULL;
+		return 0;
+	}
+	ud->tisci = (struct ti_sci_handle *)
+			 (ti_sci_get_handle_from_sysfw(tisci_dev));
+
+	ret = dev_read_u32_default(dev, "ti,sci", 0);
+	if (!ret) {
+		dev_err(dev, "TISCI RA RM disabled\n");
+		ud->tisci = NULL;
+	}
+
+	if (ud->tisci) {
+		ud->tisci_dev_id = -1;
+		ret = dev_read_u32(dev, "ti,sci-dev-id", &ud->tisci_dev_id);
+		if (ret) {
+			dev_err(dev, "ti,sci-dev-id read failure %d\n", ret);
+			return ret;
+		}
+		ud->tisci_udmap_ops = &ud->tisci->ops.rm_udmap_ops;
+	}
+
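+	/* Channel and flow counts are reported in the GCFG CAP2/CAP3 registers */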
+	cap2 = udma_read(ud->mmrs[MMR_GCFG], 0x28);
+	cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
+
+	ud->rflow_cnt = cap3 & 0x3fff;
+	ud->tchan_cnt = cap2 & 0x1ff;
+	ud->echan_cnt = (cap2 >> 9) & 0x1ff;
+	ud->rchan_cnt = (cap2 >> 18) & 0x1ff;
+	if ((ud->tchan_cnt + ud->rchan_cnt) != ud->ch_count) {
+		dev_info(dev,
+			 "Channel count mismatch: %u != tchan(%u) + rchan(%u)\n",
+			 ud->ch_count, ud->tchan_cnt, ud->rchan_cnt);
+		ud->ch_count = ud->tchan_cnt + ud->rchan_cnt;
+	}
+
+	dev_info(dev,
+		 "Number of channels: %u (tchan: %u, echan: %u, rchan: %u dev-id %u)\n",
+		 ud->ch_count, ud->tchan_cnt, ud->echan_cnt, ud->rchan_cnt,
+		 ud->tisci_dev_id);
+	dev_info(dev, "Number of rflows: %u\n", ud->rflow_cnt);
+
+	ud->channels = devm_kcalloc(dev, ud->ch_count, sizeof(*ud->channels),
+				    GFP_KERNEL);
+	ud->tchan_map = devm_kcalloc(dev, BITS_TO_LONGS(ud->tchan_cnt),
+				     sizeof(unsigned long), GFP_KERNEL);
+	ud->tchans = devm_kcalloc(dev, ud->tchan_cnt,
+				  sizeof(*ud->tchans), GFP_KERNEL);
+	ud->rchan_map = devm_kcalloc(dev, BITS_TO_LONGS(ud->rchan_cnt),
+				     sizeof(unsigned long), GFP_KERNEL);
+	ud->rchans = devm_kcalloc(dev, ud->rchan_cnt,
+				  sizeof(*ud->rchans), GFP_KERNEL);
+	ud->rflow_map = devm_kcalloc(dev, BITS_TO_LONGS(ud->rflow_cnt),
+				     sizeof(unsigned long), GFP_KERNEL);
+	ud->rflows = devm_kcalloc(dev, ud->rflow_cnt,
+				  sizeof(*ud->rflows), GFP_KERNEL);
+
+	if (!ud->channels || !ud->tchan_map || !ud->rchan_map ||
+	    !ud->rflow_map || !ud->tchans || !ud->rchans || !ud->rflows)
+		return -ENOMEM;
+
+	for (i = 0; i < ud->tchan_cnt; i++) {
+		struct udma_tchan *tchan = &ud->tchans[i];
+
+		tchan->id = i;
+		tchan->reg_rt = ud->mmrs[MMR_TCHANRT] + UDMA_CH_1000(i);
+	}
+
+	for (i = 0; i < ud->rchan_cnt; i++) {
+		struct udma_rchan *rchan = &ud->rchans[i];
+
+		rchan->id = i;
+		rchan->reg_rt = ud->mmrs[MMR_RCHANRT] + UDMA_CH_1000(i);
+	}
+
+	for (i = 0; i < ud->rflow_cnt; i++) {
+		struct udma_rflow *rflow = &ud->rflows[i];
+
+		rflow->id = i;
+	}
+
+	for (i = 0; i < ud->ch_count; i++) {
+		struct udma_chan *uc = &ud->channels[i];
+
+		uc->ud = ud;
+		uc->id = i;
+		uc->slave_thread_id = -1;
+		uc->tchan = NULL;
+		uc->rchan = NULL;
+		uc->dir = DMA_MEM_TO_MEM;
+		snprintf(uc->name, sizeof(uc->name), "UDMA chan%d", i);
+		if (!i)
+			uc->in_use = true;
+	}
+
+	k3_udma_dbg("UDMA(rev: 0x%08x) CAP0-3: 0x%08x, 0x%08x, 0x%08x, 0x%08x\n",
+		    udma_read(ud->mmrs[MMR_GCFG], 0),
+		    udma_read(ud->mmrs[MMR_GCFG], 0x20),
+		    udma_read(ud->mmrs[MMR_GCFG], 0x24),
+		    udma_read(ud->mmrs[MMR_GCFG], 0x28),
+		    udma_read(ud->mmrs[MMR_GCFG], 0x2c));
+
+	uc_priv->supported = DMA_SUPPORTS_MEM_TO_MEM | DMA_SUPPORTS_MEM_TO_DEV;
+
+	return 0;
+}
+
+static void udma_fill_rxfdq_ring(struct udma_chan *uc, void *src,
+				 void *dst, size_t len)
+{
+	dma_addr_t dma_dst = (dma_addr_t)dst;
+	unsigned long dummy;
+	struct knav_udmap_host_desc_t *desc_rx;
+
+	desc_rx = dma_alloc_coherent(sizeof(*desc_rx), &dummy);
+	memset(desc_rx, 0, sizeof(*desc_rx));
+
+	knav_udmap_hdesc_init(desc_rx, 0, 0);
+	knav_udmap_hdesc_set_pktlen(desc_rx, len);
+	knav_udmap_hdesc_attach_buf(desc_rx, dma_dst, len, dma_dst, len);
+
+	flush_dcache_range((u64)desc_rx,
+			   ALIGN((u64)desc_rx + sizeof(*desc_rx),
+				 ARCH_DMA_MINALIGN));
+	k3_nav_ringacc_ring_push(uc->rchan->fd_ring, &desc_rx);
+}
+
+static void udma_push_tx_ring(struct udma_chan *uc, void *src,
+			      void *dst, size_t len)
+{
+	u32 tc_ring_id = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+	dma_addr_t dma_src = (dma_addr_t)src;
+	unsigned long dummy;
+	struct knav_udmap_host_desc_t *desc_tx;
+
+	desc_tx = dma_alloc_coherent(sizeof(*desc_tx), &dummy);
+	memset(desc_tx, 0, sizeof(*desc_tx));
+
+	knav_udmap_hdesc_init(desc_tx, 0, 0);
+	knav_udmap_hdesc_set_pktlen(desc_tx, len);
+	knav_udmap_hdesc_attach_buf(desc_tx, dma_src, len, dma_src, len);
+	knav_udmap_hdesc_set_pktids(&desc_tx->hdr, uc->id, 0x3fff);
+	knav_udmap_desc_set_retpolicy(&desc_tx->hdr, 0, tc_ring_id);
+
+	flush_dcache_range((u64)desc_tx,
+			   ALIGN((u64)desc_tx + sizeof(*desc_tx),
+				 ARCH_DMA_MINALIGN));
+	k3_nav_ringacc_ring_push(uc->tchan->t_ring, &desc_tx);
+}
+
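+/*
+ * One-shot MEM_TO_MEM copy backing the DMA uclass .transfer op: allocate
+ * a channel pair, queue one RX and one TX descriptor, poll the completion
+ * ring and tear everything down again.
+ */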
+static int udma_transfer(struct udevice *dev, int direction,
+			 void *dst, void *src, size_t len)
+{
+	struct udma_dev *ud = dev_get_priv(dev);
+	/* Hard code to channel 0 for now */
+	struct udma_chan *uc = &ud->channels[0];
+	dma_addr_t paddr;
+	int ret;
+
+	ret = udma_alloc_chan_resources(uc);
+	if (ret)
+		return ret;
+	udma_fill_rxfdq_ring(uc, src, dst, len);
+
+	udma_start(uc);
+	udma_push_tx_ring(uc, src, dst, len);
+	udma_poll_completion(uc, &paddr);
+	udma_stop(uc);
+
+	udma_free_chan_resources(uc);
+	return 0;
+}
+
+static int udma_request(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+	unsigned long dummy;
+	int ret;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+
+	uc = &ud->channels[dma->id];
+	ret = udma_alloc_chan_resources(uc);
+	if (ret) {
+		dev_err(dma->dev, "alloc dma res failed %d\n", ret);
+		return -EINVAL;
+	}
+
+	uc->hdesc_size = knav_udmap_hdesc_calc_size(
+				uc->needs_epib, uc->psd_size, 0);
+	uc->hdesc_size = ALIGN(uc->hdesc_size, ARCH_DMA_MINALIGN);
+
+	if (uc->dir == DMA_MEM_TO_DEV) {
+		uc->desc_tx = dma_alloc_coherent(uc->hdesc_size, &dummy);
+		memset(uc->desc_tx, 0, uc->hdesc_size);
+	} else {
+		uc->desc_rx = dma_alloc_coherent(
+				uc->hdesc_size * UDMA_RX_DESC_NUM, &dummy);
+		memset(uc->desc_rx, 0, uc->hdesc_size * UDMA_RX_DESC_NUM);
+	}
+
+	uc->in_use = true;
+	uc->desc_rx_cur = 0;
+	uc->num_rx_bufs = 0;
+
+	return 0;
+}
+
+static int udma_free(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (udma_is_chan_running(uc))
+		udma_stop(uc);
+	udma_free_chan_resources(uc);
+
+	uc->in_use = false;
+
+	return 0;
+}
+
+static int udma_enable(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+	int ret;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	ret = udma_start(uc);
+
+	return ret;
+}
+
+static int udma_disable(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+	int ret = 0;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (udma_is_chan_running(uc))
+		ret = udma_stop(uc);
+	else
+		dev_err(dma->dev, "%s not running\n", __func__);
+
+	return ret;
+}
+
+static int udma_send(struct dma *dma, void *src, size_t len, void *metadata)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct knav_udmap_host_desc_t *desc_tx;
+	dma_addr_t dma_src = (dma_addr_t)src;
+	struct ti_udma_drv_packet_data packet_data = { 0 };
+	dma_addr_t paddr;
+	struct udma_chan *uc;
+	u32 tc_ring_id;
+	int ret;
+
+	if (metadata)
+		packet_data = *((struct ti_udma_drv_packet_data *)metadata);
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (uc->dir != DMA_MEM_TO_DEV)
+		return -EINVAL;
+
+	tc_ring_id = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+
+	desc_tx = uc->desc_tx;
+
+	knav_udmap_hdesc_reset_hbdesc(desc_tx);
+
+	knav_udmap_hdesc_init(
+		desc_tx,
+		uc->needs_epib ? KNAV_UDMAP_INFO0_HDESC_EPIB_PRESENT : 0,
+		uc->psd_size);
+	knav_udmap_hdesc_set_pktlen(desc_tx, len);
+	knav_udmap_hdesc_attach_buf(desc_tx, dma_src, len, dma_src, len);
+	knav_udmap_hdesc_set_pktids(&desc_tx->hdr, uc->id, 0x3fff);
+	knav_udmap_desc_set_retpolicy(&desc_tx->hdr, 0, tc_ring_id);
+	/* pass below information from caller */
+	knav_udmap_hdesc_set_pkttype(desc_tx, packet_data.pkt_type);
+	knav_udmap_desc_set_tags_ids(&desc_tx->hdr, 0, packet_data.dest_tag);
+
+	k3_udma_print_buf((ulong)desc_tx, desc_tx, 4, uc->hdesc_size / 4, 0);
+
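+	/* Make the payload and descriptor visible to the DMA before pushing */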
+	flush_dcache_range((u64)dma_src,
+			   ALIGN((u64)dma_src + len,
+				 ARCH_DMA_MINALIGN));
+
+	flush_dcache_range((u64)desc_tx,
+			   ALIGN((u64)desc_tx + uc->hdesc_size,
+				 ARCH_DMA_MINALIGN));
+	ret = k3_nav_ringacc_ring_push(uc->tchan->t_ring, &uc->desc_tx);
+	if (ret) {
+		dev_err(dma->dev, "TX dma push fail ch_id %lu %d\n",
+			dma->id, ret);
+		return ret;
+	}
+
+	udma_poll_completion(uc, &paddr);
+
+	desc_tx = (struct knav_udmap_host_desc_t *)paddr;
+
+	return 0;
+}
+
+static int udma_receive(struct dma *dma, void **dst, void *metadata)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct knav_udmap_host_desc_t *desc_rx;
+	dma_addr_t buf_dma;
+	struct udma_chan *uc;
+	u32 buf_dma_len, pkt_len;
+	u32 port_id = 0;
+	int ret;
+
+	k3_udma_dev_dbg(dma->dev, "%s\n", __func__);
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (uc->dir != DMA_DEV_TO_MEM)
+		return -EINVAL;
+	if (!uc->num_rx_bufs)
+		return -EINVAL;
+
+	ret = k3_nav_ringacc_ring_pop(uc->rchan->r_ring, &desc_rx);
+	if (ret && ret != -ENODATA) {
+		dev_err(dma->dev, "rx dma fail ch_id:%lu %d\n", dma->id, ret);
+		return ret;
+	} else if (ret == -ENODATA) {
+		return 0;
+	}
+
+	k3_udma_print_buf((ulong)desc_rx, desc_rx, 4, uc->hdesc_size / 4, 0);
+
+	/* invalidate cache data */
+	invalidate_dcache_range((ulong)desc_rx,
+				(ulong)desc_rx + uc->hdesc_size);
+
+	knav_udmap_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
+	pkt_len = knav_udmap_hdesc_get_pktlen(desc_rx);
+
+	/* invalidate cache data */
+	invalidate_dcache_range((ulong)buf_dma,
+				(ulong)(buf_dma + buf_dma_len));
+
+	knav_udmap_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);
+	k3_udma_dev_dbg(dma->dev, "%s rx port_id:%d\n", __func__, port_id);
+
+	*dst = (void *)buf_dma;
+	uc->num_rx_bufs--;
+
+	return pkt_len;
+}
+
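+/*
+ * dma cells: args[0] is a phandle to the slave node, args[1] selects the
+ * "ti,psil-config<n>" subnode and the PSI-L thread offset, args[2] is the
+ * direction (UDMA_DIR_TX/UDMA_DIR_RX).
+ */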
+static int udma_of_xlate(struct dma *dma, struct ofnode_phandle_args *args)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc = &ud->channels[0];
+	ofnode chconf_node, slave_node;
+	char prop[50];
+	u32 val;
+
+	for (val = 0; val < ud->ch_count; val++) {
+		uc = &ud->channels[val];
+		if (!uc->in_use)
+			break;
+	}
+
+	if (val == ud->ch_count)
+		return -EBUSY;
+
+	uc->dir = DMA_DEV_TO_MEM;
+	if (args->args[2] == UDMA_DIR_TX)
+		uc->dir = DMA_MEM_TO_DEV;
+
+	slave_node = ofnode_get_by_phandle(args->args[0]);
+	if (!ofnode_valid(slave_node)) {
+		dev_err(ud->dev, "slave node is missing\n");
+		return -EINVAL;
+	}
+
+	snprintf(prop, sizeof(prop), "ti,psil-config%u", args->args[1]);
+	chconf_node = ofnode_find_subnode(slave_node, prop);
+	if (!ofnode_valid(chconf_node)) {
+		dev_err(ud->dev, "Channel configuration node is missing\n");
+		return -EINVAL;
+	}
+
+	if (!ofnode_read_u32(chconf_node, "linux,udma-mode", &val)) {
+		if (val == UDMA_PKT_MODE)
+			uc->pkt_mode = true;
+	}
+
+	if (!ofnode_read_u32(chconf_node, "statictr-type", &val))
+		uc->static_tr_type = val;
+
+	uc->needs_epib = ofnode_read_bool(chconf_node, "ti,needs-epib");
+	if (!ofnode_read_u32(chconf_node, "ti,psd-size", &val))
+		uc->psd_size = val;
+	uc->metadata_size = (uc->needs_epib ? 16 : 0) + uc->psd_size;
+
+	if (ofnode_read_u32(slave_node, "ti,psil-base", &val)) {
+		dev_err(ud->dev, "ti,psil-base is missing\n");
+		return -EINVAL;
+	}
+
+	uc->slave_thread_id = val + args->args[1];
+
+	dma->id = uc->id;
+	k3_udma_dbg("Allocated dma chn:%lu epib:%d psdata:%u meta:%u thread_id:%x\n",
+		    dma->id, uc->needs_epib,
+		    uc->psd_size, uc->metadata_size,
+		    uc->slave_thread_id);
+
+	return 0;
+}
+
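+/*
+ * Queue a client-owned buffer on the free-descriptor ring so DEV_TO_MEM
+ * transfers can land directly in it (zero-copy receive).
+ */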
+int udma_prepare_rcv_buf(struct dma *dma, void *dst, size_t size)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct knav_udmap_host_desc_t *desc_rx;
+	dma_addr_t dma_dst;
+	struct udma_chan *uc;
+	u32 desc_num;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (uc->dir != DMA_DEV_TO_MEM)
+		return -EINVAL;
+
+	if (uc->num_rx_bufs >= UDMA_RX_DESC_NUM)
+		return -EINVAL;
+
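+	/* RX descriptors are recycled round-robin out of a fixed pool */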
+	desc_num = uc->desc_rx_cur % UDMA_RX_DESC_NUM;
+	desc_rx = uc->desc_rx + (desc_num * uc->hdesc_size);
+	dma_dst = (dma_addr_t)dst;
+
+	knav_udmap_hdesc_reset_hbdesc(desc_rx);
+
+	knav_udmap_hdesc_init(desc_rx,
+			      uc->needs_epib ?
+			      KNAV_UDMAP_INFO0_HDESC_EPIB_PRESENT : 0,
+			      uc->psd_size);
+	knav_udmap_hdesc_set_pktlen(desc_rx, size);
+	knav_udmap_hdesc_attach_buf(desc_rx, dma_dst, size, dma_dst, size);
+
+	flush_dcache_range((u64)desc_rx,
+			   ALIGN((u64)desc_rx + uc->hdesc_size,
+				 ARCH_DMA_MINALIGN));
+
+	k3_nav_ringacc_ring_push(uc->rchan->fd_ring, &desc_rx);
+
+	uc->num_rx_bufs++;
+	uc->desc_rx_cur++;
+
+	return 0;
+}
+
+static const struct dma_ops udma_ops = {
+	.transfer	= udma_transfer,
+	.of_xlate	= udma_of_xlate,
+	.request	= udma_request,
+	.free		= udma_free,
+	.enable		= udma_enable,
+	.disable	= udma_disable,
+	.send		= udma_send,
+	.receive	= udma_receive,
+	.prepare_rcv_buf = udma_prepare_rcv_buf,
+};
+
+static const struct udevice_id udma_ids[] = {
+	{ .compatible = "ti,k3-navss-udmap" },
+	{ }
+};
+
+U_BOOT_DRIVER(ti_edma3) = {
+	.name	= "ti-udma",
+	.id	= UCLASS_DMA,
+	.of_match = udma_ids,
+	.ops	= &udma_ops,
+	.probe	= udma_probe,
+	.priv_auto_alloc_size = sizeof(struct udma_dev),
+};
diff --git a/include/dt-bindings/dma/k3-udma.h b/include/dt-bindings/dma/k3-udma.h
new file mode 100644
index 0000000..89ba6a9
--- /dev/null
+++ b/include/dt-bindings/dma/k3-udma.h
@@ -0,0 +1,26 @@
+#ifndef __DT_TI_UDMA_H
+#define __DT_TI_UDMA_H
+
+#define UDMA_TR_MODE		0
+#define UDMA_PKT_MODE		1
+
+#define UDMA_DIR_TX		0
+#define UDMA_DIR_RX		1
+
+#define PSIL_STATIC_TR_NONE	0
+#define PSIL_STATIC_TR_XY	1
+#define PSIL_STATIC_TR_MCAN	2
+
+#define UDMA_PDMA_TR_XY(id)				\
+	ti,psil-config##id {				\
+		linux,udma-mode = <UDMA_TR_MODE>;	\
+		statictr-type = <PSIL_STATIC_TR_XY>;	\
+	}
+
+#define UDMA_PDMA_PKT_XY(id)				\
+	ti,psil-config##id {				\
+		linux,udma-mode = <UDMA_PKT_MODE>;	\
+		statictr-type = <PSIL_STATIC_TR_XY>;	\
+	}
+
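+/*
+ * Example (illustrative only): a DMA client node can instantiate a
+ * channel configuration subnode with, e.g., "UDMA_PDMA_TR_XY(0);",
+ * which expands to:
+ *
+ *	ti,psil-config0 {
+ *		linux,udma-mode = <UDMA_TR_MODE>;
+ *		statictr-type = <PSIL_STATIC_TR_XY>;
+ *	};
+ */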
+#endif /* __DT_TI_UDMA_H */
diff --git a/include/linux/soc/ti/ti-udma.h b/include/linux/soc/ti/ti-udma.h
new file mode 100644
index 0000000..e9d4226
--- /dev/null
+++ b/include/linux/soc/ti/ti-udma.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ *  Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
+ */
+
+#ifndef __TI_UDMA_H
+#define __TI_UDMA_H
+
+/**
+ * struct ti_udma_drv_packet_data - TI UDMA transfer specific data
+ *
+ * @pkt_type: Packet Type - specific for each DMA client HW
+ * @dest_tag: Destination tag - specific for each DMA client HW
+ *
+ * TI UDMA transfer specific data passed as part of DMA transfer to
+ * the DMA client HW in UDMA descriptors.
+ */
+struct ti_udma_drv_packet_data {
+	u32	pkt_type;
+	u32	dest_tag;
+};
+
+#endif /* __TI_UDMA_H */
-- 
2.10.5

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [U-Boot] [NOT-FOR-MERGE-PATCH v6 5/5] net: ethernet: ti: introduce am654 gigabit eth switch subsystem driver
  2018-10-22 21:24 [U-Boot] [PATCH v6 0/5] dma: add channels support Grygorii Strashko
                   ` (3 preceding siblings ...)
  2018-10-22 21:24 ` [U-Boot] [NOT-FOR-MERGE-PATCH v6 4/5] dma: ti: add driver to K3 UDMA Grygorii Strashko
@ 2018-10-22 21:25 ` Grygorii Strashko
  4 siblings, 0 replies; 9+ messages in thread
From: Grygorii Strashko @ 2018-10-22 21:25 UTC (permalink / raw)
  To: u-boot

Add a new driver for the TI AM65x SoC Gigabit Ethernet Switch subsystem
(CPSW NUSS). It has two ports (one internal and one external) and provides
Ethernet packet communication for the device.
CPSW NUSS features the Reduced Gigabit Media Independent Interface (RGMII),
the Reduced Media Independent Interface (RMII), and the Management Data
Input/Output (MDIO) interface for physical layer device (PHY) management.
The TI AM65x SoC integrates the two-port Gigabit Ethernet Switch subsystem
into the device MCU domain as MCU_CPSW0: one Ethernet port (port 1) with
selectable RGMII and RMII interfaces, and an internal Communications
Port Programming Interface (CPPI) port (Host port 0).

The Host port 0 CPPI Packet Streaming Interface supports 8 TX channels
and 8 RX channels, serviced by the TI AM654 NAVSS Unified DMA Peripheral
Root Complex (UDMA-P) controller.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
---
 drivers/net/Kconfig          |   8 +
 drivers/net/Makefile         |   1 +
 drivers/net/am65-cpsw-nuss.c | 962 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 971 insertions(+)
 create mode 100644 drivers/net/am65-cpsw-nuss.c

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 5441da4..9376127 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -327,6 +327,14 @@ config DRIVER_TI_EMAC
 	help
 	   Support for davinci emac
 
+config TI_AM65_CPSW_NUSS
+	bool "TI K3 AM65x MCU CPSW Nuss Ethernet controller driver"
+	depends on ARCH_K3
+	select PHYLIB
+	help
+	  This driver supports TI K3 MCU CPSW Nuss Ethernet controller
+	  in Texas Instruments K3 AM65x SoCs.
+
 config XILINX_AXIEMAC
 	depends on DM_ETH && (MICROBLAZE || ARCH_ZYNQ || ARCH_ZYNQMP)
 	select PHYLIB
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 48a2878..c814859 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -60,6 +60,7 @@ obj-$(CONFIG_DRIVER_TI_EMAC) += davinci_emac.o
 obj-$(CONFIG_TSEC_ENET) += tsec.o fsl_mdio.o
 obj-$(CONFIG_DRIVER_TI_CPSW) += cpsw.o cpsw-common.o
 obj-$(CONFIG_FMAN_ENET) += fsl_mdio.o
+obj-$(CONFIG_TI_AM65_CPSW_NUSS) += am65-cpsw-nuss.o
 obj-$(CONFIG_ULI526X) += uli526x.o
 obj-$(CONFIG_VSC7385_ENET) += vsc7385.o
 obj-$(CONFIG_XILINX_AXIEMAC) += xilinx_axi_emac.o
diff --git a/drivers/net/am65-cpsw-nuss.c b/drivers/net/am65-cpsw-nuss.c
new file mode 100644
index 0000000..e2761c9
--- /dev/null
+++ b/drivers/net/am65-cpsw-nuss.c
@@ -0,0 +1,962 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Texas Instruments K3 AM65 Ethernet Switch SubSystem Driver
+ *
+ * Copyright (C) 2018, Texas Instruments, Incorporated
+ *
+ */
+
+#include <common.h>
+#include <asm/io.h>
+#include <asm/processor.h>
+#include <clk.h>
+#include <dm.h>
+#include <dm/lists.h>
+#include <dma-uclass.h>
+#include <dm/of_access.h>
+#include <miiphy.h>
+#include <net.h>
+#include <phy.h>
+#include <power-domain.h>
+#include <linux/soc/ti/ti-udma.h>
+
+#define AM65_CPSW_CPSWNU_MAX_PORTS 2
+
+#define AM65_CPSW_SS_BASE		0x0
+#define AM65_CPSW_SGMII_BASE	0x100
+#define AM65_CPSW_MDIO_BASE	0xf00
+#define AM65_CPSW_XGMII_BASE	0x2100
+#define AM65_CPSW_CPSW_NU_BASE	0x20000
+#define AM65_CPSW_CPSW_NU_ALE_BASE 0x1e000
+
+#define AM65_CPSW_CPSW_NU_PORTS_OFFSET	0x1000
+#define AM65_CPSW_CPSW_NU_PORT_MACSL_OFFSET	0x330
+
+#define AM65_CPSW_MDIO_BUS_FREQ_DEF 1000000
+
+#define AM65_CPSW_CTL_REG			0x4
+#define AM65_CPSW_STAT_PORT_EN_REG	0x14
+#define AM65_CPSW_PTYPE_REG		0x18
+
+#define AM65_CPSW_CTL_REG_P0_ENABLE			BIT(2)
+#define AM65_CPSW_CTL_REG_P0_TX_CRC_REMOVE		BIT(13)
+#define AM65_CPSW_CTL_REG_P0_RX_PAD			BIT(14)
+
+#define AM65_CPSW_P0_FLOW_ID_REG			0x8
+#define AM65_CPSW_PN_RX_MAXLEN_REG		0x24
+#define AM65_CPSW_PN_REG_SA_L			0x308
+#define AM65_CPSW_PN_REG_SA_H			0x30c
+
+#define AM65_CPSW_ALE_CTL_REG			0x8
+#define AM65_CPSW_ALE_CTL_REG_ENABLE		BIT(31)
+#define AM65_CPSW_ALE_CTL_REG_RESET_TBL		BIT(30)
+#define AM65_CPSW_ALE_CTL_REG_BYPASS		BIT(4)
+#define AM65_CPSW_ALE_PN_CTL_REG(x)		(0x40 + (x) * 4)
+#define AM65_CPSW_ALE_PN_CTL_REG_MODE_FORWARD	0x3
+#define AM65_CPSW_ALE_PN_CTL_REG_MAC_ONLY	BIT(11)
+
+#define AM65_CPSW_MACSL_CTL_REG			0x0
+#define AM65_CPSW_MACSL_CTL_REG_IFCTL_A		BIT(15)
+#define AM65_CPSW_MACSL_CTL_REG_GIG		BIT(7)
+#define AM65_CPSW_MACSL_CTL_REG_GMII_EN		BIT(5)
+#define AM65_CPSW_MACSL_CTL_REG_LOOPBACK	BIT(1)
+#define AM65_CPSW_MACSL_CTL_REG_FULL_DUPLEX	BIT(0)
+#define AM65_CPSW_MACSL_RESET_REG		0x8
+#define AM65_CPSW_MACSL_RESET_REG_RESET		BIT(0)
+#define AM65_CPSW_MACSL_STATUS_REG		0x4
+#define AM65_CPSW_MACSL_RESET_REG_PN_IDLE	BIT(31)
+#define AM65_CPSW_MACSL_RESET_REG_PN_E_IDLE	BIT(30)
+#define AM65_CPSW_MACSL_RESET_REG_PN_P_IDLE	BIT(29)
+#define AM65_CPSW_MACSL_RESET_REG_PN_TX_IDLE	BIT(28)
+#define AM65_CPSW_MACSL_RESET_REG_IDLE_MASK \
+	(AM65_CPSW_MACSL_RESET_REG_PN_IDLE | \
+	 AM65_CPSW_MACSL_RESET_REG_PN_E_IDLE | \
+	 AM65_CPSW_MACSL_RESET_REG_PN_P_IDLE | \
+	 AM65_CPSW_MACSL_RESET_REG_PN_TX_IDLE)
+
+#define AM65_CPSW_CPPI_PKT_TYPE			0x7
+
+struct am65_cpsw_port {
+	fdt_addr_t	port_base;
+	fdt_addr_t	macsl_base;
+	bool		disabled;
+	u32		mac_control;
+};
+
+struct am65_cpsw_common {
+	struct udevice		*dev;
+	fdt_addr_t		ss_base;
+	fdt_addr_t		cpsw_base;
+	fdt_addr_t		mdio_base;
+	fdt_addr_t		ale_base;
+	fdt_addr_t		gmii_sel;
+	fdt_addr_t		mac_efuse;
+
+	struct clk		fclk;
+	struct power_domain	pwrdmn;
+
+	u32			port_num;
+	struct am65_cpsw_port	ports[AM65_CPSW_CPSWNU_MAX_PORTS];
+	u32			rflow_id_base;
+
+	struct mii_dev		*bus;
+	int			mdio_div;
+	u32			bus_freq;
+
+	struct dma		dma_tx;
+	struct dma		dma_rx;
+	u32			rx_next;
+	u32			rx_pend;
+	bool			started;
+};
+
+struct am65_cpsw_priv {
+	struct udevice		*dev;
+	struct am65_cpsw_common	*cpsw_common;
+	u32			port_id;
+
+	struct phy_device	*phydev;
+	bool			has_phy;
+	ofnode			phy_node;
+	u32			phy_addr;
+};
+
+#ifdef PKTSIZE_ALIGN
+#define UDMA_RX_BUF_SIZE PKTSIZE_ALIGN
+#else
+#define UDMA_RX_BUF_SIZE ALIGN(1522, ARCH_DMA_MINALIGN)
+#endif
+
+#ifdef PKTBUFSRX
+#define UDMA_RX_DESC_NUM PKTBUFSRX
+#else
+#define UDMA_RX_DESC_NUM 4
+#endif
+
+#ifdef AM65_CPSW_DEBUG
+#define	am65_cpsw_dbg(arg...) pr_err(arg)
+#define	am65_cpsw_dev_dbg(dev, arg...) dev_err(dev, arg)
+static void am65_cpsw_print_buf(ulong addr, const void *data, uint width,
+				uint count, uint linelen)
+{
+	print_buffer(addr, data, width, count, linelen);
+}
+#else
+#define	am65_cpsw_dbg(arg...)
+#define	am65_cpsw_dev_dbg(arg...)
+static void am65_cpsw_print_buf(ulong addr, const void *data, uint width,
+				uint count, uint linelen)
+{}
+#endif
+
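+/* Pack a 6-byte MAC address into the SA_H (bytes 0-3) / SA_L (bytes 4-5) pair */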
+#define mac_hi(mac)	(((mac)[0] << 0) | ((mac)[1] << 8) |    \
+			 ((mac)[2] << 16) | ((mac)[3] << 24))
+#define mac_lo(mac)	(((mac)[4] << 0) | ((mac)[5] << 8))
+
+static void am65_cpsw_set_sl_mac(struct am65_cpsw_port *slave,
+				 unsigned char *addr)
+{
+	writel(mac_hi(addr),
+	       slave->port_base + AM65_CPSW_PN_REG_SA_H);
+	writel(mac_lo(addr),
+	       slave->port_base + AM65_CPSW_PN_REG_SA_L);
+}
+
+int am65_cpsw_macsl_reset(struct am65_cpsw_port *slave)
+{
+	u32 i = 100;
+
+	/* Set the soft reset bit */
+	writel(AM65_CPSW_MACSL_RESET_REG_RESET,
+	       slave->macsl_base + AM65_CPSW_MACSL_RESET_REG);
+
+	while ((readl(slave->macsl_base + AM65_CPSW_MACSL_RESET_REG) &
+		AM65_CPSW_MACSL_RESET_REG_RESET) && i--)
+		cpu_relax();
+
+	/* Non-zero return means the soft reset completed within the timeout */
+	return !(readl(slave->macsl_base + AM65_CPSW_MACSL_RESET_REG) &
+		 AM65_CPSW_MACSL_RESET_REG_RESET);
+}
+
+static int am65_cpsw_macsl_wait_for_idle(struct am65_cpsw_port *slave)
+{
+	u32 i = 100;
+
+	while ((readl(slave->macsl_base + AM65_CPSW_MACSL_STATUS_REG) &
+		AM65_CPSW_MACSL_RESET_REG_IDLE_MASK) && i--)
+		cpu_relax();
+
+	/* Non-zero return means the wait completed within the timeout */
+	return !(readl(slave->macsl_base + AM65_CPSW_MACSL_STATUS_REG) &
+		 AM65_CPSW_MACSL_RESET_REG_IDLE_MASK);
+}
+
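+/*
+ * Program mac_sl from the PHY state; returns the current link state
+ * (non-zero when the link is up), not an error code.
+ */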
+static int am65_cpsw_update_link(struct am65_cpsw_priv *priv)
+{
+	struct am65_cpsw_common	*common = priv->cpsw_common;
+	struct am65_cpsw_port *port = &common->ports[priv->port_id];
+	struct phy_device *phy = priv->phydev;
+	u32 mac_control = 0;
+
+	if (phy->link) { /* link up */
+		mac_control = AM65_CPSW_MACSL_CTL_REG_GMII_EN;
+		if (phy->speed == 1000)
+			mac_control |= AM65_CPSW_MACSL_CTL_REG_GIG;
+		if (phy->duplex == DUPLEX_FULL)
+			mac_control |= AM65_CPSW_MACSL_CTL_REG_FULL_DUPLEX;
+		if (phy->speed == 100)
+			mac_control |= AM65_CPSW_MACSL_CTL_REG_IFCTL_A;
+	}
+
+	if (mac_control == port->mac_control)
+		goto out;
+
+	if (mac_control) {
+		printf("link up on port %d, speed %d, %s duplex\n",
+		       priv->port_id, phy->speed,
+		       (phy->duplex == DUPLEX_FULL) ? "full" : "half");
+	} else {
+		printf("link down on port %d\n", priv->port_id);
+	}
+
+	writel(mac_control, port->macsl_base + AM65_CPSW_MACSL_CTL_REG);
+	port->mac_control = mac_control;
+
+out:
+	return phy->link;
+}
+
+#define AM65_GMII_SEL_MODE_MII		0
+#define AM65_GMII_SEL_MODE_RMII		1
+#define AM65_GMII_SEL_MODE_RGMII	2
+
+#define AM65_GMII_SEL_RGMII_IDMODE	BIT(4)
+
+static void am65_cpsw_gmii_sel_k3(struct am65_cpsw_priv *priv,
+				  phy_interface_t phy_mode, int slave)
+{
+	struct am65_cpsw_common	*common = priv->cpsw_common;
+	u32 reg;
+	u32 mode = 0;
+	bool rgmii_id = false;
+
+	reg = readl(common->gmii_sel);
+
+	am65_cpsw_dev_dbg(common->dev, "old gmii_sel: %08x\n", reg);
+
+	switch (phy_mode) {
+	case PHY_INTERFACE_MODE_RMII:
+		mode = AM65_GMII_SEL_MODE_RMII;
+		break;
+
+	case PHY_INTERFACE_MODE_RGMII:
+		mode = AM65_GMII_SEL_MODE_RGMII;
+		break;
+
+	case PHY_INTERFACE_MODE_RGMII_ID:
+	case PHY_INTERFACE_MODE_RGMII_RXID:
+	case PHY_INTERFACE_MODE_RGMII_TXID:
+		mode = AM65_GMII_SEL_MODE_RGMII;
+		rgmii_id = true;
+		break;
+
+	default:
+		dev_warn(common->dev,
+			 "Unsupported PHY mode: %u. Defaulting to MII.\n",
+			 phy_mode);
+		/* fallthrough */
+	case PHY_INTERFACE_MODE_MII:
+		mode = AM65_GMII_SEL_MODE_MII;
+		break;
+	}
+
+	if (rgmii_id)
+		mode |= AM65_GMII_SEL_RGMII_IDMODE;
+
+	reg = mode;
+	am65_cpsw_dev_dbg(common->dev,
+			  "gmii_sel PHY mode: %u, new gmii_sel: %08x\n",
+			  phy_mode, reg);
+	writel(reg, common->gmii_sel);
+
+	reg = readl(common->gmii_sel);
+	if (reg != mode)
+		dev_err(common->dev,
+			"gmii_sel PHY mode NOT SET!: requested: %08x, gmii_sel: %08x\n",
+			mode, reg);
+}
+
+static int am65_cpsw_start(struct udevice *dev)
+{
+	struct eth_pdata *pdata = dev_get_platdata(dev);
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common	*common = priv->cpsw_common;
+	struct am65_cpsw_port *port = &common->ports[priv->port_id];
+	struct am65_cpsw_port *port0 = &common->ports[0];
+	int ret, i;
+
+	am65_cpsw_dev_dbg(dev, "%s\n", __func__);
+
+	ret = power_domain_on(&common->pwrdmn);
+	if (ret) {
+		dev_err(dev, "power_domain_on() failed %d\n", ret);
+		goto out;
+	}
+
+	ret = clk_enable(&common->fclk);
+	if (ret) {
+		dev_err(dev, "clk enabled failed %d\n", ret);
+		goto err_off_pwrdm;
+	}
+
+	common->rx_next = 0;
+	common->rx_pend = 0;
+	ret = dma_get_by_name(common->dev, "tx0", &common->dma_tx);
+	if (ret) {
+		dev_err(dev, "TX dma get failed %d\n", ret);
+		goto err_off_clk;
+	}
+	ret = dma_get_by_name(common->dev, "rx", &common->dma_rx);
+	if (ret) {
+		dev_err(dev, "RX dma get failed %d\n", ret);
+		goto err_free_tx;
+	}
+
+	for (i = 0; i < UDMA_RX_DESC_NUM; i++) {
+		ret = dma_prepare_rcv_buf(&common->dma_rx,
+					  net_rx_packets[i],
+					  UDMA_RX_BUF_SIZE);
+		if (ret) {
+			dev_err(dev, "RX dma add buf failed %d\n", ret);
+			goto err_free_rx;
+		}
+	}
+
+	ret = dma_enable(&common->dma_tx);
+	if (ret) {
+		dev_err(dev, "TX dma_enable failed %d\n", ret);
+		goto err_free_rx;
+	}
+	ret = dma_enable(&common->dma_rx);
+	if (ret) {
+		dev_err(dev, "RX dma_enable failed %d\n", ret);
+		goto err_dis_tx;
+	}
+
+	/* Control register */
+	writel(AM65_CPSW_CTL_REG_P0_ENABLE |
+	       AM65_CPSW_CTL_REG_P0_TX_CRC_REMOVE |
+	       AM65_CPSW_CTL_REG_P0_RX_PAD,
+	       common->cpsw_base + AM65_CPSW_CTL_REG);
+
+	/* disable priority elevation */
+	writel(0, common->cpsw_base + AM65_CPSW_PTYPE_REG);
+
+	/* enable statistics */
+	writel(BIT(0) | BIT(priv->port_id),
+	       common->cpsw_base + AM65_CPSW_STAT_PORT_EN_REG);
+
+	/* Port 0  length register */
+	writel(PKTSIZE_ALIGN, port0->port_base + AM65_CPSW_PN_RX_MAXLEN_REG);
+
+	/* set base flow_id */
+	writel(common->rflow_id_base,
+	       port0->port_base + AM65_CPSW_P0_FLOW_ID_REG);
+
+	/* Reset and enable the ALE */
+	writel(AM65_CPSW_ALE_CTL_REG_ENABLE | AM65_CPSW_ALE_CTL_REG_RESET_TBL |
+	       AM65_CPSW_ALE_CTL_REG_BYPASS,
+	       common->ale_base + AM65_CPSW_ALE_CTL_REG);
+
+	/* port 0 put into forward mode */
+	writel(AM65_CPSW_ALE_PN_CTL_REG_MODE_FORWARD,
+	       common->ale_base + AM65_CPSW_ALE_PN_CTL_REG(0));
+
+	/* PORT x configuration */
+
+	/* Port x Max length register */
+	writel(PKTSIZE_ALIGN, port->port_base + AM65_CPSW_PN_RX_MAXLEN_REG);
+
+	/* Port x set mac */
+	am65_cpsw_set_sl_mac(port, pdata->enetaddr);
+
+	/* Port x ALE: mac_only, Forwarding */
+	writel(AM65_CPSW_ALE_PN_CTL_REG_MAC_ONLY |
+	       AM65_CPSW_ALE_PN_CTL_REG_MODE_FORWARD,
+	       common->ale_base + AM65_CPSW_ALE_PN_CTL_REG(priv->port_id));
+
+	port->mac_control = 0;
+	if (!am65_cpsw_macsl_reset(port)) {
+		dev_err(dev, "mac_sl reset failed\n");
+		ret = -EFAULT;
+		goto err_dis_rx;
+	}
+
+	ret = phy_startup(priv->phydev);
+	if (ret) {
+		dev_err(dev, "phy_startup failed\n");
+		goto err_dis_rx;
+	}
+
+	ret = am65_cpsw_update_link(priv);
+	if (!ret) {
+		ret = -ENODEV;
+		goto err_phy_shutdown;
+	}
+
+	common->started = true;
+	am65_cpsw_dev_dbg(dev, "%s end\n", __func__);
+
+	return 0;
+
+err_phy_shutdown:
+	phy_shutdown(priv->phydev);
+err_dis_rx:
+	/* disable ports */
+	writel(0, common->ale_base + AM65_CPSW_ALE_PN_CTL_REG(priv->port_id));
+	writel(0, common->ale_base + AM65_CPSW_ALE_PN_CTL_REG(0));
+	if (!am65_cpsw_macsl_wait_for_idle(port))
+		dev_err(dev, "mac_sl idle timeout\n");
+	writel(0, port->macsl_base + AM65_CPSW_MACSL_CTL_REG);
+	writel(0, common->ale_base + AM65_CPSW_ALE_CTL_REG);
+	writel(0, common->cpsw_base + AM65_CPSW_CTL_REG);
+
+	dma_disable(&common->dma_rx);
+err_dis_tx:
+	dma_disable(&common->dma_tx);
+err_free_rx:
+	dma_free(&common->dma_rx);
+err_free_tx:
+	dma_free(&common->dma_tx);
+err_off_clk:
+	clk_disable(&common->fclk);
+err_off_pwrdm:
+	power_domain_off(&common->pwrdmn);
+out:
+	am65_cpsw_dev_dbg(dev, "%s end error\n", __func__);
+
+	return ret;
+}
+
+static int am65_cpsw_send(struct udevice *dev, void *packet, int length)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common	*common = priv->cpsw_common;
+	struct ti_udma_drv_packet_data packet_data;
+	int ret;
+
+	am65_cpsw_dev_dbg(dev, "%s\n", __func__);
+
+	am65_cpsw_print_buf((ulong)packet, packet, 1, length, 0);
+
+	packet_data.pkt_type = AM65_CPSW_CPPI_PKT_TYPE;
+	packet_data.dest_tag = priv->port_id;
+	ret = dma_send(&common->dma_tx, packet, length, &packet_data);
+	if (ret) {
+		dev_err(dev, "TX dma_send failed %d\n", ret);
+		return ret;
+	}
+
+	am65_cpsw_dev_dbg(dev, "%s end\n", __func__);
+
+	return 0;
+}
+
+static int am65_cpsw_recv(struct udevice *dev, int flags, uchar **packetp)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common	*common = priv->cpsw_common;
+	int ret;
+
+	/* try to receive a new packet */
+	ret = dma_receive(&common->dma_rx, (void **)packetp, NULL);
+	if (ret > 0)
+		am65_cpsw_print_buf((ulong)(*packetp),
+				    *packetp, 1, ret, 0);
+
+	return ret;
+}
+
+static int am65_cpsw_free_pkt(struct udevice *dev, uchar *packet, int length)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common	*common = priv->cpsw_common;
+	int ret;
+
+	if (length > 0) {
+		u32 pkt = common->rx_next % UDMA_RX_DESC_NUM;
+
+		am65_cpsw_dev_dbg(dev, "%s length:%d pkt:%u\n",
+				  __func__, length, pkt);
+
+		ret = dma_prepare_rcv_buf(&common->dma_rx,
+					  net_rx_packets[pkt],
+					  UDMA_RX_BUF_SIZE);
+		if (ret)
+			dev_err(dev, "RX dma free_pkt failed %d\n", ret);
+		common->rx_next++;
+
+
+	return 0;
+}
+
+static void am65_cpsw_stop(struct udevice *dev)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common *common = priv->cpsw_common;
+	struct am65_cpsw_port *port = &common->ports[priv->port_id];
+
+	if (!common->started)
+		return;
+
+	phy_shutdown(priv->phydev);
+
+	writel(0, common->ale_base + AM65_CPSW_ALE_PN_CTL_REG(priv->port_id));
+	writel(0, common->ale_base + AM65_CPSW_ALE_PN_CTL_REG(0));
+	if (!am65_cpsw_macsl_wait_for_idle(port))
+		dev_err(dev, "mac_sl idle timeout\n");
+	writel(0, port->macsl_base + AM65_CPSW_MACSL_CTL_REG);
+	writel(0, common->ale_base + AM65_CPSW_ALE_CTL_REG);
+	writel(0, common->cpsw_base + AM65_CPSW_CTL_REG);
+
+	dma_disable(&common->dma_tx);
+	dma_free(&common->dma_tx);
+
+	dma_disable(&common->dma_rx);
+	dma_free(&common->dma_rx);
+
+	common->started = false;
+}
+
+static int am65_cpsw_read_rom_hwaddr(struct udevice *dev)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common *common = priv->cpsw_common;
+	struct eth_pdata *pdata = dev_get_platdata(dev);
+	u32 mac_hi, mac_lo;
+
+	if (common->mac_efuse == FDT_ADDR_T_NONE)
+		return -1;
+
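+	/*
+	 * The efuse stores the MAC address in two words: mac_hi[15:0] holds
+	 * the two most significant bytes, mac_lo[31:0] the remaining four.
+	 */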
+	mac_lo = readl(common->mac_efuse);
+	mac_hi = readl(common->mac_efuse + 4);
+	pdata->enetaddr[0] = (mac_hi >> 8) & 0xff;
+	pdata->enetaddr[1] = mac_hi & 0xff;
+	pdata->enetaddr[2] = (mac_lo >> 24) & 0xff;
+	pdata->enetaddr[3] = (mac_lo >> 16) & 0xff;
+	pdata->enetaddr[4] = (mac_lo >> 8) & 0xff;
+	pdata->enetaddr[5] = mac_lo & 0xff;
+
+	return 0;
+}
+
+static const struct eth_ops am65_cpsw_ops = {
+	.start		= am65_cpsw_start,
+	.send		= am65_cpsw_send,
+	.recv		= am65_cpsw_recv,
+	.free_pkt	= am65_cpsw_free_pkt,
+	.stop		= am65_cpsw_stop,
+	.read_rom_hwaddr = am65_cpsw_read_rom_hwaddr,
+};
+
+#define PHY_REG_MASK		0x1f
+#define PHY_ID_MASK		0x1f
+
+#define MDIO_CONTROL_MAX_DIV	0xffff
+
+#define MDIO_TIMEOUT            100 /* msecs */
+struct cpsw_mdio_regs {
+	u32	version;
+	u32	control;
+#define CONTROL_IDLE		BIT(31)
+#define CONTROL_ENABLE		BIT(30)
+
+	u32	alive;
+	u32	link;
+	u32	linkintraw;
+	u32	linkintmasked;
+	u32	__reserved_0[2];
+	u32	userintraw;
+	u32	userintmasked;
+	u32	userintmaskset;
+	u32	userintmaskclr;
+	u32	__reserved_1[20];
+
+	struct {
+		u32		access;
+		u32		physel;
+#define USERACCESS_GO		BIT(31)
+#define USERACCESS_WRITE	BIT(30)
+#define USERACCESS_ACK		BIT(29)
+#define USERACCESS_READ		(0)
+#define USERACCESS_DATA		(0xffff)
+	} user[0];
+};
+
+/* wait until hardware is ready for another user access */
+static inline u32 am65_cpsw_mdio_wait_for_user_access(
+			struct cpsw_mdio_regs *mdio_regs)
+{
+	u32 reg = 0;
+	int timeout = MDIO_TIMEOUT;
+
+	while (timeout-- &&
+	       ((reg = __raw_readl(&mdio_regs->user[0].access)) &
+	       USERACCESS_GO))
+		udelay(10);
+
+	if (timeout == -1) {
+		printf("%s: timeout\n", __func__);
+		/* return 0 (GO and ACK clear) so callers see a failed access */
+		return 0;
+	}
+
+	return reg;
+}
+
+static int am65_cpsw_mdio_read(struct mii_dev *bus, int phy_id,
+			       int dev_addr, int phy_reg)
+{
+	struct am65_cpsw_common *cpsw_common = bus->priv;
+	struct cpsw_mdio_regs *mdio_regs;
+	int data;
+	u32 reg;
+
+	mdio_regs = (struct cpsw_mdio_regs *)cpsw_common->mdio_base;
+
+	if (phy_reg & ~PHY_REG_MASK || phy_id & ~PHY_ID_MASK)
+		return -EINVAL;
+
+	am65_cpsw_mdio_wait_for_user_access(mdio_regs);
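+	/* phy_reg goes to bits 25:21, phy_id to bits 20:16; GO starts the op */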
+	reg = (USERACCESS_GO | USERACCESS_READ | (phy_reg << 21) |
+	       (phy_id << 16));
+	__raw_writel(reg, &mdio_regs->user[0].access);
+	reg = am65_cpsw_mdio_wait_for_user_access(mdio_regs);
+
+	data = (reg & USERACCESS_ACK) ? (reg & USERACCESS_DATA) : -1;
+	return data;
+}
+
+static int am65_cpsw_mdio_write(struct mii_dev *bus, int phy_id, int dev_addr,
+				int phy_reg, u16 data)
+{
+	struct am65_cpsw_common *cpsw_common = bus->priv;
+	struct cpsw_mdio_regs *mdio_regs;
+	u32 reg;
+
+	mdio_regs = (struct cpsw_mdio_regs *)cpsw_common->mdio_base;
+
+	if ((phy_reg & ~PHY_REG_MASK) || (phy_id & ~PHY_ID_MASK))
+		return -EINVAL;
+
+	am65_cpsw_mdio_wait_for_user_access(mdio_regs);
+	reg = (USERACCESS_GO | USERACCESS_WRITE | (phy_reg << 21) |
+		   (phy_id << 16) | (data & USERACCESS_DATA));
+	__raw_writel(reg, &mdio_regs->user[0].access);
+	am65_cpsw_mdio_wait_for_user_access(mdio_regs);
+
+	return 0;
+}
+
+static int am65_cpsw_mdio_init(struct udevice *dev)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common	*cpsw_common = priv->cpsw_common;
+	struct mii_dev *bus;
+	struct cpsw_mdio_regs *mdio_regs;
+
+	if (!priv->has_phy || cpsw_common->bus)
+		return 0;
+
+	bus = mdio_alloc();
+	if (!bus)
+		return -ENOMEM;
+
+	mdio_regs = (struct cpsw_mdio_regs *)cpsw_common->mdio_base;
+
+	/* set enable and clock divider */
+	writel(cpsw_common->mdio_div | CONTROL_ENABLE, &mdio_regs->control);
+
+	/*
+	 * wait for scan logic to settle:
+	 * the scan time consists of (a) a large fixed component, and (b) a
+	 * small component that varies with the mii bus frequency.  These
+	 * were estimated using measurements at 1.1 and 2.2 MHz on tnetv107x
+	 * silicon.  Since the effect of (b) was found to be largely
+	 * negligible, we keep things simple here.
+	 */
+	udelay(1000);
+
+	bus->read = am65_cpsw_mdio_read;
+	bus->write = am65_cpsw_mdio_write;
+	strcpy(bus->name, dev->name);
+	bus->priv = cpsw_common;
+
+	if (mdio_register(bus))
+		return -EINVAL;
+	cpsw_common->bus = miiphy_get_dev_by_name(dev->name);
+
+	return 0;
+}
+
+static int am65_cpsw_phy_init(struct udevice *dev)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct am65_cpsw_common *cpsw_common = priv->cpsw_common;
+	struct eth_pdata *pdata = dev_get_platdata(dev);
+	struct phy_device *phydev;
+	u32 supported = PHY_GBIT_FEATURES;
+	int ret;
+
+	am65_cpsw_dev_dbg(dev, "%s\n", __func__);
+
+	phydev = phy_connect(cpsw_common->bus,
+			     priv->phy_addr,
+			     priv->dev,
+			     pdata->phy_interface);
+
+	if (!phydev) {
+		dev_err(dev, "phy_connect() failed\n");
+		return -ENODEV;
+	}
+
+	phydev->supported &= supported;
+	if (pdata->max_speed) {
+		ret = phy_set_supported(phydev, pdata->max_speed);
+		if (ret)
+			return ret;
+	}
+	phydev->advertising = phydev->supported;
+
+#ifdef CONFIG_DM_ETH
+	if (ofnode_valid(priv->phy_node))
+		phydev->node = priv->phy_node;
+#endif
+
+	priv->phydev = phydev;
+	ret = phy_config(phydev);
+	if (ret < 0)
+		pr_err("phy_config() failed: %d\n", ret);
+
+	am65_cpsw_dev_dbg(dev, "%s end\n", __func__);
+	return ret;
+}
+
+static int am65_cpsw_ofdata_parse_phy(struct udevice *dev, ofnode port_np)
+{
+	struct eth_pdata *pdata = dev_get_platdata(dev);
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct ofnode_phandle_args out_args;
+	const char *phy_mode;
+	int ret = 0;
+
+	phy_mode = ofnode_read_string(port_np, "phy-mode");
+	if (phy_mode) {
+		pdata->phy_interface =
+				phy_get_interface_by_name(phy_mode);
+		if (pdata->phy_interface == -1) {
+			dev_err(dev, "Invalid PHY mode '%s', port %u\n",
+				phy_mode, priv->port_id);
+			ret = -EINVAL;
+			goto out;
+		}
+	}
+
+	ofnode_read_u32(port_np, "max-speed", (u32 *)&pdata->max_speed);
+	if (pdata->max_speed)
+		dev_err(dev, "Port %u speed forced to %uMbit\n",
+			priv->port_id, pdata->max_speed);
+
+	priv->has_phy  = true;
+	ret = ofnode_parse_phandle_with_args(port_np, "phy-handle",
+					     NULL, 0, 0, &out_args);
+	if (ret) {
+		dev_err(dev, "can't parse phy-handle port %u (%d)\n",
+			priv->port_id, ret);
+		priv->has_phy  = false;
+		ret = 0;
+	}
+
+	priv->phy_addr = PHY_REG_MASK + 1;
+	if (priv->has_phy) {
+		/* out_args is valid only when the phandle lookup succeeded */
+		priv->phy_node = out_args.node;
+		ret = ofnode_read_u32(priv->phy_node, "reg", &priv->phy_addr);
+		if (ret) {
+			dev_err(dev, "failed to get phy_addr port %u (%d)\n",
+				priv->port_id, ret);
+			goto out;
+		}
+	}
+
+out:
+	return ret;
+}
+
+static int am65_cpsw_probe_cpsw(struct udevice *dev)
+{
+	struct am65_cpsw_priv *priv = dev_get_priv(dev);
+	struct eth_pdata *pdata = dev_get_platdata(dev);
+	struct am65_cpsw_common *cpsw_common;
+	ofnode ports_np, node;
+	int ret, i;
+
+	priv->dev = dev;
+
+	cpsw_common = calloc(1, sizeof(*priv->cpsw_common));
+	if (!cpsw_common)
+		return -ENOMEM;
+	priv->cpsw_common = cpsw_common;
+
+	cpsw_common->dev = dev;
+	cpsw_common->ss_base = dev_read_addr(dev);
+	if (cpsw_common->ss_base == FDT_ADDR_T_NONE)
+		return -EINVAL;
+	cpsw_common->mac_efuse = devfdt_get_addr_name(dev, "mac_efuse");
+	/* the mac_efuse region is optional, so its lookup is not error-checked */
+
+	ret = power_domain_get_by_index(dev, &cpsw_common->pwrdmn, 0);
+	if (ret) {
+		dev_err(dev, "failed to get pwrdmn: %d\n", ret);
+		return ret;
+	}
+
+	ret = clk_get_by_name(dev, "fck", &cpsw_common->fclk);
+	if (ret) {
+		power_domain_free(&cpsw_common->pwrdmn);
+		dev_err(dev, "failed to get clock %d\n", ret);
+		return ret;
+	}
+
+	cpsw_common->cpsw_base = cpsw_common->ss_base + AM65_CPSW_CPSW_NU_BASE;
+	cpsw_common->ale_base = cpsw_common->cpsw_base +
+				AM65_CPSW_CPSW_NU_ALE_BASE;
+	cpsw_common->mdio_base = cpsw_common->ss_base + AM65_CPSW_MDIO_BASE;
+
+	cpsw_common->rflow_id_base = 0;
+	cpsw_common->rflow_id_base =
+			dev_read_u32_default(dev, "ti,rx-flow-id-base",
+					     cpsw_common->rflow_id_base);
+
+	ports_np = dev_read_subnode(dev, "ports");
+	if (!ofnode_valid(ports_np)) {
+		ret = -ENOENT;
+		goto out;
+	}
+
+	ofnode_for_each_subnode(node, ports_np) {
+		const char *node_name;
+		u32 port_id;
+		bool disabled;
+
+		node_name = ofnode_get_name(node);
+
+		disabled = !ofnode_is_available(node);
+
+		ret = ofnode_read_u32(node, "reg", &port_id);
+		if (ret) {
+			dev_err(dev, "%s: failed to get port_id (%d)\n",
+				node_name, ret);
+			goto out;
+		}
+
+		if (port_id >= AM65_CPSW_CPSWNU_MAX_PORTS) {
+			dev_err(dev, "%s: invalid port_id (%d)\n",
+				node_name, port_id);
+			ret = -EINVAL;
+			goto out;
+		}
+		cpsw_common->port_num++;
+
+		if (!port_id)
+			continue;
+
+		priv->port_id = port_id;
+		cpsw_common->ports[port_id].disabled = disabled;
+		if (disabled)
+			continue;
+
+		ret = am65_cpsw_ofdata_parse_phy(dev, node);
+		if (ret)
+			goto out;
+	}
+
+	for (i = 0; i < AM65_CPSW_CPSWNU_MAX_PORTS; i++) {
+		struct am65_cpsw_port *port = &cpsw_common->ports[i];
+
+		port->port_base = cpsw_common->cpsw_base +
+				  AM65_CPSW_CPSW_NU_PORTS_OFFSET +
+				  (i * AM65_CPSW_CPSW_NU_PORTS_OFFSET);
+		port->macsl_base = port->port_base +
+				   AM65_CPSW_CPSW_NU_PORT_MACSL_OFFSET;
+	}
+
+	node = dev_read_subnode(dev, "cpsw-phy-sel");
+	if (!ofnode_valid(node)) {
+		dev_err(dev, "can't find cpsw-phy-sel\n");
+		ret = -ENOENT;
+		goto out;
+	}
+
+	cpsw_common->gmii_sel = ofnode_get_addr(node);
+	if (cpsw_common->gmii_sel == FDT_ADDR_T_NONE) {
+		dev_err(dev, "failed to get gmii_sel base\n");
+		ret = -ENODEV;
+		goto out;
+	}
+
+	node = dev_read_subnode(dev, "mdio");
+	if (!ofnode_valid(node)) {
+		dev_err(dev, "can't find mdio\n");
+		ret = -ENOENT;
+		goto out;
+	}
+
+	cpsw_common->bus_freq =
+			dev_read_u32_default(dev, "bus_freq",
+					     AM65_CPSW_MDIO_BUS_FREQ_DEF);
+
+	/* calc mdio div using fclk freq */
+	cpsw_common->mdio_div = clk_get_rate(&cpsw_common->fclk);
+	cpsw_common->mdio_div = (cpsw_common->mdio_div /
+				 cpsw_common->bus_freq) - 1;
+	if (cpsw_common->mdio_div > MDIO_CONTROL_MAX_DIV)
+		cpsw_common->mdio_div = MDIO_CONTROL_MAX_DIV;
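+	/* e.g. a 250 MHz fclk with a 1 MHz bus clock gives mdio_div = 249 */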
+
+	am65_cpsw_gmii_sel_k3(priv, pdata->phy_interface, priv->port_id);
+
+	ret = am65_cpsw_mdio_init(dev);
+	if (ret)
+		goto out;
+
+	ret = am65_cpsw_phy_init(dev);
+	if (ret)
+		goto out;
+
+	dev_info(dev, "K3 CPSW: nuss_ver: 0x%08X cpsw_ver: 0x%08X ale_ver: 0x%08X Ports:%u rflow_id_base:%u mdio_freq:%u\n",
+		 readl(cpsw_common->ss_base),
+		 readl(cpsw_common->cpsw_base),
+		 readl(cpsw_common->ale_base),
+		 cpsw_common->port_num,
+		 cpsw_common->rflow_id_base,
+		 cpsw_common->bus_freq);
+
+out:
+	clk_free(&cpsw_common->fclk);
+	power_domain_free(&cpsw_common->pwrdmn);
+	am65_cpsw_dev_dbg(dev, "%s end %d\n", __func__, ret);
+	return ret;
+}
+
+static const struct udevice_id am65_cpsw_nuss_ids[] = {
+	{ .compatible = "ti,am654-cpsw-nuss" },
+	{ }
+};
+
+U_BOOT_DRIVER(am65_cpsw_nuss_slave) = {
+	.name	= "am65_cpsw_nuss_slave",
+	.id	= UCLASS_ETH,
+	.of_match = am65_cpsw_nuss_ids,
+	.probe	= am65_cpsw_probe_cpsw,
+	.ops	= &am65_cpsw_ops,
+	.priv_auto_alloc_size = sizeof(struct am65_cpsw_priv),
+	.platdata_auto_alloc_size = sizeof(struct eth_pdata),
+	.flags = DM_FLAG_ALLOC_PRIV_DMA,
+};
-- 
2.10.5

* [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h Grygorii Strashko
@ 2018-11-02 20:29   ` Tom Rini
  0 siblings, 0 replies; 9+ messages in thread
From: Tom Rini @ 2018-11-02 20:29 UTC (permalink / raw)
  To: u-boot

On Mon, Oct 22, 2018 at 04:24:56PM -0500, Grygorii Strashko wrote:

> From: Álvaro Fernández Rojas <noltari@gmail.com>
> 
> Move dma_ops to a separate header file, following other uclass
> implementations. While doing so, this patch also improves dma_ops
> documentation.
> 
> Reviewed-by: Simon Glass <sjg@chromium.org>
> Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>

Reviewed-by: Tom Rini <trini@konsulko.com>

-- 
Tom

* [U-Boot] [PATCH v6 2/5] dma: add channels support
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 2/5] dma: add channels support Grygorii Strashko
@ 2018-11-02 20:29   ` Tom Rini
  0 siblings, 0 replies; 9+ messages in thread
From: Tom Rini @ 2018-11-02 20:29 UTC (permalink / raw)
  To: u-boot

On Mon, Oct 22, 2018 at 04:24:57PM -0500, Grygorii Strashko wrote:

> From: Álvaro Fernández Rojas <noltari@gmail.com>
> 
> This adds channels support for dma controllers that have multiple channels
> which can transfer data to/from different devices (enet, usb...).
> 
> DMA channel API:
>  dma_get_by_index()
>  dma_get_by_name()
>  dma_request()
>  dma_free()
>  dma_enable()
>  dma_disable()
>  dma_prepare_rcv_buf()
>  dma_receive()
>  dma_send()
> 
> Signed-off-by: Álvaro Fernández Rojas <noltari@gmail.com>
> [grygorii.strashko at ti.com: drop unused dma_get_by_index_platdata(),
>  add metadata to send/receive ops, add dma_prepare_rcv_buf(),
>  minor clean up]
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>

Reviewed-by: Tom Rini <trini@konsulko.com>

But please check this patch again: in at least one place you're
dropping an old TI copyright date and making it just be 2018.

-- 
Tom

* [U-Boot] [PATCH v6 3/5] sandbox: dma: add dma-uclass test
  2018-10-22 21:24 ` [U-Boot] [PATCH v6 3/5] sandbox: dma: add dma-uclass test Grygorii Strashko
@ 2018-11-02 20:29   ` Tom Rini
  0 siblings, 0 replies; 9+ messages in thread
From: Tom Rini @ 2018-11-02 20:29 UTC (permalink / raw)
  To: u-boot

On Mon, Oct 22, 2018 at 04:24:58PM -0500, Grygorii Strashko wrote:

> Add a sandbox DMA driver implementation (provider) and corresponding DM
> test.
> 
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>

Applied to u-boot/master, thanks!

-- 
Tom

end of thread, other threads:[~2018-11-02 20:29 UTC | newest]

Thread overview: 9+ messages
-- links below jump to the message on this page --
2018-10-22 21:24 [U-Boot] [PATCH v6 0/5] dma: add channels support Grygorii Strashko
2018-10-22 21:24 ` [U-Boot] [PATCH v6 1/5] dma: move dma_ops to dma-uclass.h Grygorii Strashko
2018-11-02 20:29   ` Tom Rini
2018-10-22 21:24 ` [U-Boot] [PATCH v6 2/5] dma: add channels support Grygorii Strashko
2018-11-02 20:29   ` Tom Rini
2018-10-22 21:24 ` [U-Boot] [PATCH v6 3/5] sandbox: dma: add dma-uclass test Grygorii Strashko
2018-11-02 20:29   ` Tom Rini
2018-10-22 21:24 ` [U-Boot] [NOT-FOR-MERGE-PATCH v6 4/5] dma: ti: add driver to K3 UDMA Grygorii Strashko
2018-10-22 21:25 ` [U-Boot] [NOT-FOR-MERGE-PATCH v6 5/5] net: ethernet: ti: introduce am654 gigabit eth switch subsystem driver Grygorii Strashko
