* [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

This series adds DMA support for TI's AM654 SoC.

v4:
Convert debug prints to pr_debug()s
Collect R-bs

v3:
Minor comment/whitespace cleanups as pointed out by Tom Rini

v2:
Align DT bindings with latest proposed bindings as pointed out by Peter.
Merge drivers/soc/keystone into drivers/soc/ti

Background:

The AM65x TRM (http://www.ti.com/lit/pdf/spruid7b) describes the Data Movement
Architecture, which is implemented by the k3-udma driver.

This DMA architecture is a big departure from the 'traditional' architecture,
where we had either EDMA or sDMA as the system DMA.

Packet DMAs were used as dedicated DMAs to service only networking (K2)
or USB (am335x), while other peripherals were serviced by EDMA.

In AM65x the UDMA (Unified DMA) is used for all data movement within the SoC
and is tasked to service all peripherals (UART, McSPI, McASP, networking, etc).

The NAVSS/UDMA is built around CPPI5 (Communications Port Programming Interface)
and supports Packet mode (similar to CPPI4.1 in K2 for networking) and
TR mode (similar to EDMA descriptors).
Data movement happens within a PSI-L fabric; peripherals (including the
UDMA-P) are addressed not by their I/O registers, as with traditional DMAs,
but by their PSI-L thread IDs.

To be able to use the DMA, the following generic steps need to be taken
(a sketch of the channel configuration follows this list):
- configure a DMA channel (tchan for TX, rchan for RX)
 - channel mode: Packet or TR mode
 - for memcpy a tchan and rchan pair is used
 - for packet mode RX we also need to configure a receive flow to control
   packet reception
- the source and destination threads must be paired
- at minimum one pair of rings needs to be configured:
 - tx: transfer ring and transfer completion ring
 - rx: free descriptor ring and receive ring
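
As a rough illustration of the channel-configuration step, here is a minimal
sketch that configures a tchan through the rm_udmap_ops added in patch 1/7.
The channel type and fetch size values are placeholder assumptions, not values
taken from this series; the valid_params bit positions follow the kernel-doc
in the ti_sci.h hunk of patch 1/7:

	#include <linux/bitops.h>
	#include <linux/soc/ti/ti_sci_protocol.h>

	static int cfg_tx_chan(const struct ti_sci_handle *h, u16 nav_id,
			       u16 tchan, u16 tc_ring)
	{
		struct ti_sci_msg_rm_udmap_tx_ch_cfg cfg = {
			/* bits 2/3/4: tx_chan_type, tx_fetch_size, txcq_qnum */
			.valid_params = BIT(2) | BIT(3) | BIT(4),
			.nav_id = nav_id,
			.index = tchan,		/* tchan index */
			.tx_chan_type = 2,	/* placeholder: packet mode */
			.tx_fetch_size = 16,	/* 32-bit words to fetch */
			.txcq_qnum = tc_ring,	/* completion ring index */
		};

		return h->ops.rm_udmap_ops.tx_ch_cfg(h, &cfg);
	}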

When the channel setup is completed we only interact with the rings (a rough
sketch of the TX case follows this list):
- TX: push a descriptor to t_ring and wait for it to be pushed to the tc_ring by
  the UDMA-P
- RX: push a descriptor to the fd_ring and wait for the UDMA-P to push it back
  to the r_ring.
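
For illustration, a hedged sketch of the TX case, assuming the ringacc driver
from patch 2/7 exposes push/pop helpers along these lines (the helper names
and the -ENODATA empty-ring convention are assumptions, not quotes from this
series):

	#include <linux/errno.h>
	#include <linux/soc/ti/k3-navss-ringacc.h>

	static int udma_push_and_wait(struct k3_nav_ring *t_ring,
				      struct k3_nav_ring *tc_ring, u64 desc_dma)
	{
		int ret;

		/* TX: push the descriptor address onto the transfer ring */
		ret = k3_nav_ringacc_ring_push(t_ring, &desc_dma);
		if (ret)
			return ret;

		/* poll until the UDMA-P pushes it to the completion ring */
		do {
			ret = k3_nav_ringacc_ring_pop(tc_ring, &desc_dma);
		} while (ret == -ENODATA);

		return ret;
	}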

Resource management and the configuration of channels and rings are handled by
sending TI-SCI messages to the remote core, as sketched below.
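
A minimal sketch of such a query, using the get_range op from patch 1/7;
dev_id 188 (MAIN_NAV_UDMAP) comes from the AM654 rm_type_map in that patch,
while the subtype value here is a placeholder:

	#include <common.h>
	#include <linux/soc/ti/ti_sci_protocol.h>

	static int query_tchan_range(const struct ti_sci_handle *handle)
	{
		u16 start, num;
		int ret;

		/* ask firmware for this host's channel range on MAIN_NAV_UDMAP */
		ret = handle->ops.rm_core_ops.get_range(handle, 188 /* dev_id */,
							0 /* subtype: placeholder */,
							&start, &num);
		if (ret)
			return ret;

		/* firmware owns the map; [start, start + num) belongs to us */
		printf("tchan range: %u..%u\n", start, start + num - 1);
		return 0;
	}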

Patches are based on kernel patches here:
https://patchwork.kernel.org/cover/10612465/

Grygorii Strashko (5):
  firmware: ti_sci: Add support for NAVSS resource management
  soc: ti: k3: add navss ringacc driver
  soc: ti: k3: add CPPI5 description and helpers
  arm64: dts: ti: k3-am65: add mcu navss nodes
  configs: am65x_evm_a53: Enable DMA related configs

Vignesh R (2):
  dma: ti: add driver to K3 UDMA
  soc: keystone: Merge into ti specific directory

 arch/arm/dts/k3-am65-wakeup.dtsi              |    2 +-
 arch/arm/dts/k3-am654-base-board-u-boot.dtsi  |   47 +
 arch/arm/mach-keystone/Kconfig                |    8 +
 configs/am65x_evm_a53_defconfig               |    4 +-
 drivers/Kconfig                               |    2 +
 drivers/dma/Kconfig                           |    2 +
 drivers/dma/Makefile                          |    2 +
 drivers/dma/ti/Kconfig                        |   14 +
 drivers/dma/ti/Makefile                       |    3 +
 drivers/dma/ti/k3-udma-hwdef.h                |  184 ++
 drivers/dma/ti/k3-udma.c                      | 1730 +++++++++++++++++
 drivers/firmware/ti_sci.c                     |  760 +++++++-
 drivers/firmware/ti_sci.h                     |  642 ++++++
 drivers/soc/Kconfig                           |    5 +
 drivers/soc/Makefile                          |    2 +-
 drivers/soc/keystone/Makefile                 |    3 -
 drivers/soc/ti/Kconfig                        |   26 +
 drivers/soc/ti/Makefile                       |    4 +
 drivers/soc/ti/k3-navss-ringacc.c             | 1057 ++++++++++
 .../soc/{keystone => ti}/keystone_serdes.c    |    0
 include/configs/ti_armv7_keystone2.h          |    3 -
 include/dt-bindings/dma/k3-udma.h             |   31 +
 include/linux/soc/ti/cppi5.h                  |  995 ++++++++++
 include/linux/soc/ti/k3-navss-ringacc.h       |  236 +++
 include/linux/soc/ti/ti-udma.h                |   24 +
 include/linux/soc/ti/ti_sci_protocol.h        |  300 +++
 scripts/config_whitelist.txt                  |    1 -
 27 files changed, 6066 insertions(+), 21 deletions(-)
 create mode 100644 drivers/dma/ti/Kconfig
 create mode 100644 drivers/dma/ti/Makefile
 create mode 100644 drivers/dma/ti/k3-udma-hwdef.h
 create mode 100644 drivers/dma/ti/k3-udma.c
 create mode 100644 drivers/soc/Kconfig
 delete mode 100644 drivers/soc/keystone/Makefile
 create mode 100644 drivers/soc/ti/Kconfig
 create mode 100644 drivers/soc/ti/Makefile
 create mode 100644 drivers/soc/ti/k3-navss-ringacc.c
 rename drivers/soc/{keystone => ti}/keystone_serdes.c (100%)
 create mode 100644 include/dt-bindings/dma/k3-udma.h
 create mode 100644 include/linux/soc/ti/cppi5.h
 create mode 100644 include/linux/soc/ti/k3-navss-ringacc.h
 create mode 100644 include/linux/soc/ti/ti-udma.h

-- 
2.20.1


* [U-Boot] [PATCH v4 1/7] firmware: ti_sci: Add support for NAVSS resource management
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

From: Grygorii Strashko <grygorii.strashko@ti.com>

Texas Instruments' System Control Interface (TI-SCI) Message Protocol
abstracts management of NAVSS resources, such as PSI-L thread pairing and
unpairing, UDMAP tx/rx/flow configuration, and ring configuration.

This patch adds support for requesting and configuring such resources
from TI-SCI firmware.
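
As a hedged usage sketch of the new ops (nav_id and the thread IDs below are
placeholders), a peripheral's PSI-L source thread is paired with a UDMAP
destination thread; per the kernel-doc added below, destination threads start
at index 0x8000:

	static int pair_threads(const struct ti_sci_handle *h, u32 nav_id,
				u32 src_thread, u32 rchan_thread)
	{
		/* destination (receive side) threads live at offset 0x8000 */
		return h->ops.rm_psil_ops.pair(h, nav_id, src_thread,
					       0x8000 + rchan_thread);
	}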

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
---
 arch/arm/dts/k3-am65-wakeup.dtsi       |   2 +-
 drivers/firmware/ti_sci.c              | 760 ++++++++++++++++++++++++-
 drivers/firmware/ti_sci.h              | 642 +++++++++++++++++++++
 include/linux/soc/ti/ti_sci_protocol.h | 300 ++++++++++
 4 files changed, 1692 insertions(+), 12 deletions(-)

diff --git a/arch/arm/dts/k3-am65-wakeup.dtsi b/arch/arm/dts/k3-am65-wakeup.dtsi
index 8d7b47f9dfbf..1f591ef8bb9e 100644
--- a/arch/arm/dts/k3-am65-wakeup.dtsi
+++ b/arch/arm/dts/k3-am65-wakeup.dtsi
@@ -7,7 +7,7 @@
 
 &cbass_wakeup {
 	dmsc: dmsc {
-		compatible = "ti,k2g-sci";
+		compatible = "ti,am654-sci";
 		ti,host-id = <12>;
 		#address-cells = <1>;
 		#size-cells = <1>;
diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
index 91481260411a..ec78a520e702 100644
--- a/drivers/firmware/ti_sci.c
+++ b/drivers/firmware/ti_sci.c
@@ -12,6 +12,7 @@
 #include <errno.h>
 #include <mailbox.h>
 #include <dm/device.h>
+#include <linux/compat.h>
 #include <linux/err.h>
 #include <linux/soc/ti/k3-sec-proxy.h>
 #include <linux/soc/ti/ti_sci_protocol.h>
@@ -31,16 +32,37 @@ struct ti_sci_xfer {
 	u8 rx_len;
 };
 
+/**
+ * struct ti_sci_rm_type_map - Structure representing TISCI Resource
+ *				management representation of dev_ids.
+ * @dev_id:	TISCI device ID
+ * @type:	Corresponding id as identified by TISCI RM.
+ *
+ * Note: This is used only as a work around for using RM range apis
+ *	for AM654 SoC. For future SoCs dev_id will be used as type
+ *	for RM range APIs. In order to maintain ABI backward compatibility
+ *	type is not being changed for AM654 SoC.
+ */
+struct ti_sci_rm_type_map {
+	u32 dev_id;
+	u16 type;
+};
+
 /**
  * struct ti_sci_desc - Description of SoC integration
- * @host_id:		Host identifier representing the compute entity
- * @max_rx_timeout_us:	Timeout for communication with SoC (in Microseconds)
- * @max_msg_size:	Maximum size of data per message that can be handled.
+ * @default_host_id:	Host identifier representing the compute entity
+ * @max_rx_timeout_ms:	Timeout for communication with SoC (in Milliseconds)
+ * @max_msgs: Maximum number of messages that can be pending
+ *		  simultaneously in the system
+ * @max_msg_size: Maximum size of data per message that can be handled.
+ * @rm_type_map: RM resource type mapping structure.
  */
 struct ti_sci_desc {
-	u8 host_id;
-	int max_rx_timeout_us;
+	u8 default_host_id;
+	int max_rx_timeout_ms;
+	int max_msgs;
 	int max_msg_size;
+	struct ti_sci_rm_type_map *rm_type_map;
 };
 
 /**
@@ -136,7 +158,7 @@ static inline int ti_sci_get_response(struct ti_sci_info *info,
 	int ret;
 
 	/* Receive the response */
-	ret = mbox_recv(chan, msg, info->desc->max_rx_timeout_us);
+	ret = mbox_recv(chan, msg, info->desc->max_rx_timeout_ms);
 	if (ret) {
 		dev_err(info->dev, "%s: Message receive failed. ret = %d\n",
 			__func__, ret);
@@ -1441,6 +1463,147 @@ static int ti_sci_cmd_core_reboot(const struct ti_sci_handle *handle)
 	return ret;
 }
 
+static int ti_sci_get_resource_type(struct ti_sci_info *info, u16 dev_id,
+				    u16 *type)
+{
+	struct ti_sci_rm_type_map *rm_type_map = info->desc->rm_type_map;
+	bool found = false;
+	int i;
+
+	/* If map is not provided then assume dev_id is used as type */
+	if (!rm_type_map) {
+		*type = dev_id;
+		return 0;
+	}
+
+	for (i = 0; rm_type_map[i].dev_id; i++) {
+		if (rm_type_map[i].dev_id == dev_id) {
+			*type = rm_type_map[i].type;
+			found = true;
+			break;
+		}
+	}
+
+	if (!found)
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * ti_sci_get_resource_range - Helper to get a range of resources assigned
+ *			       to a host. Resource is uniquely identified by
+ *			       type and subtype.
+ * @handle:		Pointer to TISCI handle.
+ * @dev_id:		TISCI device ID.
+ * @subtype:		Resource assignment subtype that is being requested
+ *			from the given device.
+ * @s_host:		Host processor ID to which the resources are allocated
+ * @range_start:	Start index of the resource range
+ * @range_num:		Number of resources in the range
+ *
+ * Return: 0 if all went fine, else return appropriate error.
+ */
+static int ti_sci_get_resource_range(const struct ti_sci_handle *handle,
+				     u32 dev_id, u8 subtype, u8 s_host,
+				     u16 *range_start, u16 *range_num)
+{
+	struct ti_sci_msg_resp_get_resource_range *resp;
+	struct ti_sci_msg_req_get_resource_range req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	u16 type;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TI_SCI_MSG_GET_RESOURCE_RANGE,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "Message alloc failed(%d)\n", ret);
+		return ret;
+	}
+
+	ret = ti_sci_get_resource_type(info, dev_id, &type);
+	if (ret) {
+		dev_err(info->dev, "rm type lookup failed for %u\n", dev_id);
+		goto fail;
+	}
+
+	req.secondary_host = s_host;
+	req.type = type & MSG_RM_RESOURCE_TYPE_MASK;
+	req.subtype = subtype & MSG_RM_RESOURCE_SUBTYPE_MASK;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "Mbox send fail %d\n", ret);
+		goto fail;
+	}
+
+	resp = (struct ti_sci_msg_resp_get_resource_range *)xfer->tx_message.buf;
+	if (!ti_sci_is_response_ack(resp)) {
+		ret = -ENODEV;
+	} else if (!resp->range_start && !resp->range_num) {
+		ret = -ENODEV;
+	} else {
+		*range_start = resp->range_start;
+		*range_num = resp->range_num;
+	}
+
+fail:
+	return ret;
+}
+
+/**
+ * ti_sci_cmd_get_resource_range - Get a range of resources assigned to host
+ *				   that is same as ti sci interface host.
+ * @handle:		Pointer to TISCI handle.
+ * @dev_id:		TISCI device ID.
+ * @subtype:		Resource assignment subtype that is being requested
+ *			from the given device.
+ * @range_start:	Start index of the resource range
+ * @range_num:		Number of resources in the range
+ *
+ * Return: 0 if all went fine, else return appropriate error.
+ */
+static int ti_sci_cmd_get_resource_range(const struct ti_sci_handle *handle,
+					 u32 dev_id, u8 subtype,
+					 u16 *range_start, u16 *range_num)
+{
+	return ti_sci_get_resource_range(handle, dev_id, subtype,
+					 TI_SCI_IRQ_SECONDARY_HOST_INVALID,
+					 range_start, range_num);
+}
+
+/**
+ * ti_sci_cmd_get_resource_range_from_shost - Get a range of resources
+ *					      assigned to a specified host.
+ * @handle:		Pointer to TISCI handle.
+ * @dev_id:		TISCI device ID.
+ * @subtype:		Resource assignment subtype that is being requested
+ *			from the given device.
+ * @s_host:		Host processor ID to which the resources are allocated
+ * @range_start:	Start index of the resource range
+ * @range_num:		Number of resources in the range
+ *
+ * Return: 0 if all went fine, else return appropriate error.
+ */
+static
+int ti_sci_cmd_get_resource_range_from_shost(const struct ti_sci_handle *handle,
+					     u32 dev_id, u8 subtype, u8 s_host,
+					     u16 *range_start, u16 *range_num)
+{
+	return ti_sci_get_resource_range(handle, dev_id, subtype, s_host,
+					 range_start, range_num);
+}
+
 /**
  * ti_sci_cmd_proc_request() - Command to request a physical processor control
  * @handle:	Pointer to TI SCI handle
@@ -1803,6 +1966,416 @@ static int ti_sci_cmd_get_proc_boot_status(const struct ti_sci_handle *handle,
 	return ret;
 }
 
+/**
+ * ti_sci_cmd_ring_config() - configure RA ring
+ * @handle:	pointer to TI SCI handle
+ * @valid_params: Bitfield defining validity of ring configuration parameters.
+ * @nav_id: Device ID of Navigator Subsystem from which the ring is allocated
+ * @index: Ring index.
+ * @addr_lo: The ring base address lo 32 bits
+ * @addr_hi: The ring base address hi 32 bits
+ * @count: Number of ring elements.
+ * @mode: The mode of the ring
+ * @size: The ring element size.
+ * @order_id: Specifies the ring's bus order ID.
+ *
+ * Return: 0 if all went well, else returns appropriate error value.
+ *
+ * See @ti_sci_msg_rm_ring_cfg_req for more info.
+ */
+static int ti_sci_cmd_ring_config(const struct ti_sci_handle *handle,
+				  u32 valid_params, u16 nav_id, u16 index,
+				  u32 addr_lo, u32 addr_hi, u32 count,
+				  u8 mode, u8 size, u8 order_id)
+{
+	struct ti_sci_msg_rm_ring_cfg_resp *resp;
+	struct ti_sci_msg_rm_ring_cfg_req req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TI_SCI_MSG_RM_RING_CFG,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "RM_RA:Message config failed(%d)\n", ret);
+		return ret;
+	}
+	req.valid_params = valid_params;
+	req.nav_id = nav_id;
+	req.index = index;
+	req.addr_lo = addr_lo;
+	req.addr_hi = addr_hi;
+	req.count = count;
+	req.mode = mode;
+	req.size = size;
+	req.order_id = order_id;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "RM_RA:Mbox config send fail %d\n", ret);
+		goto fail;
+	}
+
+	resp = (struct ti_sci_msg_rm_ring_cfg_resp *)xfer->tx_message.buf;
+
+	ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
+
+fail:
+	dev_dbg(info->dev, "RM_RA:config ring %u ret:%d\n", index, ret);
+	return ret;
+}
+
+/**
+ * ti_sci_cmd_ring_get_config() - get RA ring configuration
+ * @handle:	pointer to TI SCI handle
+ * @nav_id: Device ID of Navigator Subsystem from which the ring is allocated
+ * @index: Ring index.
+ * @addr_lo: returns ring's base address lo 32 bits
+ * @addr_hi: returns ring's base address hi 32 bits
+ * @count: returns number of ring elements.
+ * @mode: returns mode of the ring
+ * @size: returns ring element size.
+ * @order_id: returns ring's bus order ID.
+ *
+ * Return: 0 if all went well, else returns appropriate error value.
+ *
+ * See @ti_sci_msg_rm_ring_get_cfg_req for more info.
+ */
+static int ti_sci_cmd_ring_get_config(const struct ti_sci_handle *handle,
+				      u32 nav_id, u32 index, u8 *mode,
+				      u32 *addr_lo, u32 *addr_hi,
+				      u32 *count, u8 *size, u8 *order_id)
+{
+	struct ti_sci_msg_rm_ring_get_cfg_resp *resp;
+	struct ti_sci_msg_rm_ring_get_cfg_req req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TI_SCI_MSG_RM_RING_GET_CFG,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev,
+			"RM_RA:Message get config failed(%d)\n", ret);
+		return ret;
+	}
+	req.nav_id = nav_id;
+	req.index = index;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "RM_RA:Mbox get config send fail %d\n", ret);
+		goto fail;
+	}
+
+	resp = (struct ti_sci_msg_rm_ring_get_cfg_resp *)xfer->tx_message.buf;
+
+	if (!ti_sci_is_response_ack(resp)) {
+		ret = -ENODEV;
+	} else {
+		if (mode)
+			*mode = resp->mode;
+		if (addr_lo)
+			*addr_lo = resp->addr_lo;
+		if (addr_hi)
+			*addr_hi = resp->addr_hi;
+		if (count)
+			*count = resp->count;
+		if (size)
+			*size = resp->size;
+		if (order_id)
+			*order_id = resp->order_id;
+	}
+
+fail:
+	dev_dbg(info->dev, "RM_RA:get config ring %u ret:%d\n", index, ret);
+	return ret;
+}
+
+static int ti_sci_cmd_rm_psil_pair(const struct ti_sci_handle *handle,
+				   u32 nav_id, u32 src_thread, u32 dst_thread)
+{
+	struct ti_sci_msg_hdr *resp;
+	struct ti_sci_msg_psil_pair req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TI_SCI_MSG_RM_PSIL_PAIR,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "RM_PSIL:Message alloc failed(%d)\n", ret);
+		return ret;
+	}
+	req.nav_id = nav_id;
+	req.src_thread = src_thread;
+	req.dst_thread = dst_thread;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "RM_PSIL:Mbox send fail %d\n", ret);
+		goto fail;
+	}
+
+	resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
+	ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
+
+fail:
+	dev_dbg(info->dev, "RM_PSIL: nav: %u link pair %u->%u ret:%u\n",
+		nav_id, src_thread, dst_thread, ret);
+	return ret;
+}
+
+static int ti_sci_cmd_rm_psil_unpair(const struct ti_sci_handle *handle,
+				     u32 nav_id, u32 src_thread, u32 dst_thread)
+{
+	struct ti_sci_msg_hdr *resp;
+	struct ti_sci_msg_psil_unpair req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TI_SCI_MSG_RM_PSIL_UNPAIR,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "RM_PSIL:Message alloc failed(%d)\n", ret);
+		return ret;
+	}
+	req.nav_id = nav_id;
+	req.src_thread = src_thread;
+	req.dst_thread = dst_thread;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "RM_PSIL:Mbox send fail %d\n", ret);
+		goto fail;
+	}
+
+	resp = (struct ti_sci_msg_hdr *)xfer->tx_message.buf;
+	ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
+
+fail:
+	dev_dbg(info->dev, "RM_PSIL: link unpair %u->%u ret:%u\n",
+		src_thread, dst_thread, ret);
+	return ret;
+}
+
+static int ti_sci_cmd_rm_udmap_tx_ch_cfg(
+			const struct ti_sci_handle *handle,
+			const struct ti_sci_msg_rm_udmap_tx_ch_cfg *params)
+{
+	struct ti_sci_msg_rm_udmap_tx_ch_cfg_resp *resp;
+	struct ti_sci_msg_rm_udmap_tx_ch_cfg_req req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TISCI_MSG_RM_UDMAP_TX_CH_CFG,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "Message TX_CH_CFG alloc failed(%d)\n", ret);
+		return ret;
+	}
+	req.valid_params = params->valid_params;
+	req.nav_id = params->nav_id;
+	req.index = params->index;
+	req.tx_pause_on_err = params->tx_pause_on_err;
+	req.tx_filt_einfo = params->tx_filt_einfo;
+	req.tx_filt_pswords = params->tx_filt_pswords;
+	req.tx_atype = params->tx_atype;
+	req.tx_chan_type = params->tx_chan_type;
+	req.tx_supr_tdpkt = params->tx_supr_tdpkt;
+	req.tx_fetch_size = params->tx_fetch_size;
+	req.tx_credit_count = params->tx_credit_count;
+	req.txcq_qnum = params->txcq_qnum;
+	req.tx_priority = params->tx_priority;
+	req.tx_qos = params->tx_qos;
+	req.tx_orderid = params->tx_orderid;
+	req.fdepth = params->fdepth;
+	req.tx_sched_priority = params->tx_sched_priority;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "Mbox send TX_CH_CFG fail %d\n", ret);
+		goto fail;
+	}
+
+	resp =
+	      (struct ti_sci_msg_rm_udmap_tx_ch_cfg_resp *)xfer->tx_message.buf;
+	ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
+
+fail:
+	dev_dbg(info->dev, "TX_CH_CFG: chn %u ret:%u\n", params->index, ret);
+	return ret;
+}
+
+static int ti_sci_cmd_rm_udmap_rx_ch_cfg(
+			const struct ti_sci_handle *handle,
+			const struct ti_sci_msg_rm_udmap_rx_ch_cfg *params)
+{
+	struct ti_sci_msg_rm_udmap_rx_ch_cfg_resp *resp;
+	struct ti_sci_msg_rm_udmap_rx_ch_cfg_req req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TISCI_MSG_RM_UDMAP_RX_CH_CFG,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "Message RX_CH_CFG alloc failed(%d)\n", ret);
+		return ret;
+	}
+
+	req.valid_params = params->valid_params;
+	req.nav_id = params->nav_id;
+	req.index = params->index;
+	req.rx_fetch_size = params->rx_fetch_size;
+	req.rxcq_qnum = params->rxcq_qnum;
+	req.rx_priority = params->rx_priority;
+	req.rx_qos = params->rx_qos;
+	req.rx_orderid = params->rx_orderid;
+	req.rx_sched_priority = params->rx_sched_priority;
+	req.flowid_start = params->flowid_start;
+	req.flowid_cnt = params->flowid_cnt;
+	req.rx_pause_on_err = params->rx_pause_on_err;
+	req.rx_atype = params->rx_atype;
+	req.rx_chan_type = params->rx_chan_type;
+	req.rx_ignore_short = params->rx_ignore_short;
+	req.rx_ignore_long = params->rx_ignore_long;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "Mbox send RX_CH_CFG fail %d\n", ret);
+		goto fail;
+	}
+
+	resp =
+	      (struct ti_sci_msg_rm_udmap_rx_ch_cfg_resp *)xfer->tx_message.buf;
+	ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
+
+fail:
+	dev_dbg(info->dev, "RX_CH_CFG: chn %u ret:%d\n", params->index, ret);
+	return ret;
+}
+
+static int ti_sci_cmd_rm_udmap_rx_flow_cfg(
+			const struct ti_sci_handle *handle,
+			const struct ti_sci_msg_rm_udmap_flow_cfg *params)
+{
+	struct ti_sci_msg_rm_udmap_flow_cfg_resp *resp;
+	struct ti_sci_msg_rm_udmap_flow_cfg_req req;
+	struct ti_sci_xfer *xfer;
+	struct ti_sci_info *info;
+	int ret = 0;
+
+	if (IS_ERR(handle))
+		return PTR_ERR(handle);
+	if (!handle)
+		return -EINVAL;
+
+	info = handle_to_ti_sci_info(handle);
+
+	xfer = ti_sci_setup_one_xfer(info, TISCI_MSG_RM_UDMAP_FLOW_CFG,
+				     TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
+				     (u32 *)&req, sizeof(req), sizeof(*resp));
+	if (IS_ERR(xfer)) {
+		ret = PTR_ERR(xfer);
+		dev_err(info->dev, "RX_FL_CFG: Message alloc failed(%d)\n", ret);
+		return ret;
+	}
+
+	req.valid_params = params->valid_params;
+	req.nav_id = params->nav_id;
+	req.flow_index = params->flow_index;
+	req.rx_einfo_present = params->rx_einfo_present;
+	req.rx_psinfo_present = params->rx_psinfo_present;
+	req.rx_error_handling = params->rx_error_handling;
+	req.rx_desc_type = params->rx_desc_type;
+	req.rx_sop_offset = params->rx_sop_offset;
+	req.rx_dest_qnum = params->rx_dest_qnum;
+	req.rx_src_tag_hi = params->rx_src_tag_hi;
+	req.rx_src_tag_lo = params->rx_src_tag_lo;
+	req.rx_dest_tag_hi = params->rx_dest_tag_hi;
+	req.rx_dest_tag_lo = params->rx_dest_tag_lo;
+	req.rx_src_tag_hi_sel = params->rx_src_tag_hi_sel;
+	req.rx_src_tag_lo_sel = params->rx_src_tag_lo_sel;
+	req.rx_dest_tag_hi_sel = params->rx_dest_tag_hi_sel;
+	req.rx_dest_tag_lo_sel = params->rx_dest_tag_lo_sel;
+	req.rx_fdq0_sz0_qnum = params->rx_fdq0_sz0_qnum;
+	req.rx_fdq1_qnum = params->rx_fdq1_qnum;
+	req.rx_fdq2_qnum = params->rx_fdq2_qnum;
+	req.rx_fdq3_qnum = params->rx_fdq3_qnum;
+	req.rx_ps_location = params->rx_ps_location;
+
+	ret = ti_sci_do_xfer(info, xfer);
+	if (ret) {
+		dev_err(info->dev, "RX_FL_CFG: Mbox send fail %d\n", ret);
+		goto fail;
+	}
+
+	resp =
+	       (struct ti_sci_msg_rm_udmap_flow_cfg_resp *)xfer->tx_message.buf;
+	ret = ti_sci_is_response_ack(resp) ? 0 : -EINVAL;
+
+fail:
+	dev_dbg(info->dev, "RX_FL_CFG: %u ret:%d\n", params->flow_index, ret);
+	return ret;
+}
+
 /*
  * ti_sci_setup_ops() - Setup the operations structures
  * @info:	pointer to TISCI pointer
@@ -1814,7 +2387,11 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
 	struct ti_sci_dev_ops *dops = &ops->dev_ops;
 	struct ti_sci_clk_ops *cops = &ops->clk_ops;
 	struct ti_sci_core_ops *core_ops = &ops->core_ops;
+	struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops;
 	struct ti_sci_proc_ops *pops = &ops->proc_ops;
+	struct ti_sci_rm_ringacc_ops *rops = &ops->rm_ring_ops;
+	struct ti_sci_rm_psil_ops *psilops = &ops->rm_psil_ops;
+	struct ti_sci_rm_udmap_ops *udmap_ops = &ops->rm_udmap_ops;
 
 	bops->board_config = ti_sci_cmd_set_board_config;
 	bops->board_config_rm = ti_sci_cmd_set_board_config_rm;
@@ -1850,6 +2427,10 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
 
 	core_ops->reboot_device = ti_sci_cmd_core_reboot;
 
+	rm_core_ops->get_range = ti_sci_cmd_get_resource_range;
+	rm_core_ops->get_range_from_shost =
+		ti_sci_cmd_get_resource_range_from_shost;
+
 	pops->proc_request = ti_sci_cmd_proc_request;
 	pops->proc_release = ti_sci_cmd_proc_release;
 	pops->proc_handover = ti_sci_cmd_proc_handover;
@@ -1857,6 +2438,16 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
 	pops->set_proc_boot_ctrl = ti_sci_cmd_set_proc_boot_ctrl;
 	pops->proc_auth_boot_image = ti_sci_cmd_proc_auth_boot_image;
 	pops->get_proc_boot_status = ti_sci_cmd_get_proc_boot_status;
+
+	rops->config = ti_sci_cmd_ring_config;
+	rops->get_config = ti_sci_cmd_ring_get_config;
+
+	psilops->pair = ti_sci_cmd_rm_psil_pair;
+	psilops->unpair = ti_sci_cmd_rm_psil_unpair;
+
+	udmap_ops->tx_ch_cfg = ti_sci_cmd_rm_udmap_tx_ch_cfg;
+	udmap_ops->rx_ch_cfg = ti_sci_cmd_rm_udmap_rx_ch_cfg;
+	udmap_ops->rx_flow_cfg = ti_sci_cmd_rm_udmap_rx_flow_cfg;
 }
 
 /**
@@ -1969,7 +2560,7 @@ static int ti_sci_of_to_info(struct udevice *dev, struct ti_sci_info *info)
 	}
 
 	info->host_id = dev_read_u32_default(dev, "ti,host-id",
-					     info->desc->host_id);
+					     info->desc->default_host_id);
 
 	info->is_secure = dev_read_bool(dev, "ti,secure-host");
 
@@ -2009,17 +2600,164 @@ static int ti_sci_probe(struct udevice *dev)
 	return ret;
 }
 
+/**
+ * ti_sci_get_free_resource() - Get a free resource from a TISCI resource range
+ * @res:	Pointer to the TISCI resource
+ *
+ * Return: resource num if all went ok else TI_SCI_RESOURCE_NULL.
+ */
+u16 ti_sci_get_free_resource(struct ti_sci_resource *res)
+{
+	u16 set, free_bit;
+
+	for (set = 0; set < res->sets; set++) {
+		free_bit = find_first_zero_bit(res->desc[set].res_map,
+					       res->desc[set].num);
+		if (free_bit != res->desc[set].num) {
+			set_bit(free_bit, res->desc[set].res_map);
+			return res->desc[set].start + free_bit;
+		}
+	}
+
+	return TI_SCI_RESOURCE_NULL;
+}
+
+/**
+ * ti_sci_release_resource() - Release a resource from TISCI resource.
+ * @res:	Pointer to the TISCI resource
+ */
+void ti_sci_release_resource(struct ti_sci_resource *res, u16 id)
+{
+	u16 set;
+
+	for (set = 0; set < res->sets; set++) {
+		if (res->desc[set].start <= id &&
+		    (res->desc[set].num + res->desc[set].start) > id)
+			clear_bit(id - res->desc[set].start,
+				  res->desc[set].res_map);
+	}
+}
+
+/**
+ * devm_ti_sci_get_of_resource() - Get a TISCI resource assigned to a device
+ * @handle:	TISCI handle
+ * @dev:	Device pointer to which the resource is assigned
+ * @of_prop:	property name by which the resources are represented
+ *
+ * Note: This function expects of_prop to be in the form of tuples
+ *	<type, subtype>. Allocates and initializes ti_sci_resource structure
+ *	for each of_prop. Client driver can directly call
+ *	ti_sci_(get_free, release)_resource apis for handling the resource.
+ *
+ * Return: Pointer to ti_sci_resource if all went well else appropriate
+ *	   error pointer.
+ */
+struct ti_sci_resource *
+devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
+			    struct udevice *dev, u32 dev_id, char *of_prop)
+{
+	u32 resource_subtype;
+	u16 resource_type;
+	struct ti_sci_resource *res;
+	int sets, i, ret;
+	u32 *temp;
+
+	res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
+	if (!res)
+		return ERR_PTR(-ENOMEM);
+
+	sets = dev_read_size(dev, of_prop);
+	if (sets < 0) {
+		dev_err(dev, "%s resource type ids not available\n", of_prop);
+		return ERR_PTR(sets);
+	}
+	temp = malloc(sets);
+	sets /= sizeof(u32);
+	res->sets = sets;
+
+	res->desc = devm_kcalloc(dev, res->sets, sizeof(*res->desc),
+				 GFP_KERNEL);
+	if (!res->desc)
+		return ERR_PTR(-ENOMEM);
+
+	ret = ti_sci_get_resource_type(handle_to_ti_sci_info(handle), dev_id,
+				       &resource_type);
+	if (ret) {
+		dev_err(dev, "No valid resource type for %u\n", dev_id);
+		return ERR_PTR(-EINVAL);
+	}
+
+	ret = dev_read_u32_array(dev, of_prop, temp, res->sets);
+	if (ret)
+		return ERR_PTR(-EINVAL);
+
+	for (i = 0; i < res->sets; i++) {
+		resource_subtype = temp[i];
+		ret = handle->ops.rm_core_ops.get_range(handle, dev_id,
+							resource_subtype,
+							&res->desc[i].start,
+							&res->desc[i].num);
+		if (ret) {
+			dev_err(dev, "type %d subtype %d not allocated for host %d\n",
+				resource_type, resource_subtype,
+				handle_to_ti_sci_info(handle)->host_id);
+			return ERR_PTR(ret);
+		}
+
+		dev_dbg(dev, "res type = %d, subtype = %d, start = %d, num = %d\n",
+			resource_type, resource_subtype, res->desc[i].start,
+			res->desc[i].num);
+
+		res->desc[i].res_map =
+			devm_kzalloc(dev, BITS_TO_LONGS(res->desc[i].num) *
+				     sizeof(*res->desc[i].res_map), GFP_KERNEL);
+		if (!res->desc[i].res_map)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	return res;
+}
+
+/* Description for K2G */
+static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = {
+	.default_host_id = 2,
+	/* Conservative duration */
+	.max_rx_timeout_ms = 10000,
+	/* Limited by MBOX_TX_QUEUE_LEN. K2G can handle up to 128 messages! */
+	.max_msgs = 20,
+	.max_msg_size = 64,
+	.rm_type_map = NULL,
+};
+
+static struct ti_sci_rm_type_map ti_sci_am654_rm_type_map[] = {
+	{.dev_id = 56, .type = 0x00b}, /* GIC_IRQ */
+	{.dev_id = 179, .type = 0x000}, /* MAIN_NAV_UDMASS_IA0 */
+	{.dev_id = 187, .type = 0x009}, /* MAIN_NAV_RA */
+	{.dev_id = 188, .type = 0x006}, /* MAIN_NAV_UDMAP */
+	{.dev_id = 194, .type = 0x007}, /* MCU_NAV_UDMAP */
+	{.dev_id = 195, .type = 0x00a}, /* MCU_NAV_RA */
+	{.dev_id = 0, .type = 0x000}, /* end of table */
+};
+
 /* Description for AM654 */
-static const struct ti_sci_desc ti_sci_sysfw_am654_desc = {
-	.host_id = 4,
-	.max_rx_timeout_us = 1000000,
+static const struct ti_sci_desc ti_sci_pmmc_am654_desc = {
+	.default_host_id = 12,
+	/* Conservative duration */
+	.max_rx_timeout_ms = 10000,
+	/* Limited by MBOX_TX_QUEUE_LEN. K2G can handle up to 128 messages! */
+	.max_msgs = 20,
 	.max_msg_size = 60,
+	.rm_type_map = ti_sci_am654_rm_type_map,
 };
 
 static const struct udevice_id ti_sci_ids[] = {
 	{
 		.compatible = "ti,k2g-sci",
-		.data = (ulong)&ti_sci_sysfw_am654_desc
+		.data = (ulong)&ti_sci_pmmc_k2g_desc
+	},
+	{
+		.compatible = "ti,am654-sci",
+		.data = (ulong)&ti_sci_pmmc_am654_desc
 	},
 	{ /* Sentinel */ },
 };
diff --git a/drivers/firmware/ti_sci.h b/drivers/firmware/ti_sci.h
index 81591fb0c71f..099846009926 100644
--- a/drivers/firmware/ti_sci.h
+++ b/drivers/firmware/ti_sci.h
@@ -50,6 +50,34 @@
 #define TISCI_MSG_PROC_AUTH_BOOT_IMIAGE	0xc120
 #define TISCI_MSG_GET_PROC_BOOT_STATUS	0xc400
 
+/* Resource Management Requests */
+#define TI_SCI_MSG_GET_RESOURCE_RANGE	0x1500
+
+/* NAVSS resource management */
+/* Ringacc requests */
+#define TI_SCI_MSG_RM_RING_CFG			0x1110
+#define TI_SCI_MSG_RM_RING_GET_CFG		0x1111
+
+/* PSI-L requests */
+#define TI_SCI_MSG_RM_PSIL_PAIR			0x1280
+#define TI_SCI_MSG_RM_PSIL_UNPAIR		0x1281
+
+#define TI_SCI_MSG_RM_UDMAP_TX_ALLOC		0x1200
+#define TI_SCI_MSG_RM_UDMAP_TX_FREE		0x1201
+#define TI_SCI_MSG_RM_UDMAP_RX_ALLOC		0x1210
+#define TI_SCI_MSG_RM_UDMAP_RX_FREE		0x1211
+#define TI_SCI_MSG_RM_UDMAP_FLOW_CFG		0x1220
+#define TI_SCI_MSG_RM_UDMAP_OPT_FLOW_CFG	0x1221
+
+#define TISCI_MSG_RM_UDMAP_TX_CH_CFG		0x1205
+#define TISCI_MSG_RM_UDMAP_TX_CH_GET_CFG	0x1206
+#define TISCI_MSG_RM_UDMAP_RX_CH_CFG		0x1215
+#define TISCI_MSG_RM_UDMAP_RX_CH_GET_CFG	0x1216
+#define TISCI_MSG_RM_UDMAP_FLOW_CFG		0x1230
+#define TISCI_MSG_RM_UDMAP_FLOW_SIZE_THRESH_CFG	0x1231
+#define TISCI_MSG_RM_UDMAP_FLOW_GET_CFG		0x1232
+#define TISCI_MSG_RM_UDMAP_FLOW_SIZE_THRESH_GET_CFG	0x1233
+
 /**
  * struct ti_sci_msg_hdr - Generic Message Header for All messages and responses
  * @type:	Type of messages: One of TI_SCI_MSG* values
@@ -505,6 +533,45 @@ struct ti_sci_msg_resp_get_clock_freq {
 	u64 freq_hz;
 } __packed;
 
+#define TI_SCI_IRQ_SECONDARY_HOST_INVALID	0xff
+
+/**
+ * struct ti_sci_msg_req_get_resource_range - Request to get a host's assigned
+ *					      range of resources.
+ * @hdr:		Generic Header
+ * @type:		Unique resource assignment type
+ * @subtype:		Resource assignment subtype within the resource type.
+ * @secondary_host:	Host processing entity to which the resources are
+ *			allocated. This is required only when the destination
+ *			host id is different from the ti sci interface host id,
+ *			else TI_SCI_IRQ_SECONDARY_HOST_INVALID can be passed.
+ *
+ * Request type is TI_SCI_MSG_GET_RESOURCE_RANGE. Responded with requested
+ * resource range which is of type TI_SCI_MSG_GET_RESOURCE_RANGE.
+ */
+struct ti_sci_msg_req_get_resource_range {
+	struct ti_sci_msg_hdr hdr;
+#define MSG_RM_RESOURCE_TYPE_MASK	GENMASK(9, 0)
+#define MSG_RM_RESOURCE_SUBTYPE_MASK	GENMASK(5, 0)
+	u16 type;
+	u8 subtype;
+	u8 secondary_host;
+} __packed;
+
+/**
+ * struct ti_sci_msg_resp_get_resource_range - Response to resource get range.
+ * @hdr:		Generic Header
+ * @range_start:	Start index of the resource range.
+ * @range_num:		Number of resources in the range.
+ *
+ * Response to request TI_SCI_MSG_GET_RESOURCE_RANGE.
+ */
+struct ti_sci_msg_resp_get_resource_range {
+	struct ti_sci_msg_hdr hdr;
+	u16 range_start;
+	u16 range_num;
+} __packed;
+
 #define TISCI_ADDR_LOW_MASK		GENMASK_ULL(31, 0)
 #define TISCI_ADDR_HIGH_MASK		GENMASK_ULL(63, 32)
 #define TISCI_ADDR_HIGH_SHIFT		32
@@ -677,4 +744,579 @@ struct ti_sci_msg_resp_get_proc_boot_status {
 	u32 status_flags;
 } __packed;
 
+/**
+ * struct ti_sci_msg_rm_ring_cfg_req - Configure a Navigator Subsystem ring
+ *
+ * Configures the non-real-time registers of a Navigator Subsystem ring.
+ * @hdr:	Generic Header
+ * @valid_params: Bitfield defining validity of ring configuration parameters.
+ *	The ring configuration fields are not valid, and will not be used for
+ *	ring configuration, if their corresponding valid bit is zero.
+ *	Valid bit usage:
+ *	0 - Valid bit for @tisci_msg_rm_ring_cfg_req addr_lo
+ *	1 - Valid bit for @tisci_msg_rm_ring_cfg_req addr_hi
+ *	2 - Valid bit for @tisci_msg_rm_ring_cfg_req count
+ *	3 - Valid bit for @tisci_msg_rm_ring_cfg_req mode
+ *	4 - Valid bit for @tisci_msg_rm_ring_cfg_req size
+ *	5 - Valid bit for @tisci_msg_rm_ring_cfg_req order_id
+ * @nav_id: Device ID of Navigator Subsystem from which the ring is allocated
+ * @index: ring index to be configured.
+ * @addr_lo: 32 LSBs of ring base address to be programmed into the ring's
+ *	RING_BA_LO register
+ * @addr_hi: 16 MSBs of ring base address to be programmed into the ring's
+ *	RING_BA_HI register.
+ * @count: Number of ring elements. Must be even for CREDENTIALS or QM
+ *	modes.
+ * @mode: Specifies the mode the ring is to be configured.
+ * @size: Specifies encoded ring element size. To calculate the encoded size use
+ *	the formula (log2(size_bytes) - 2), where size_bytes cannot be
+ *	greater than 256.
+ * @order_id: Specifies the ring's bus order ID.
+ */
+struct ti_sci_msg_rm_ring_cfg_req {
+	struct ti_sci_msg_hdr hdr;
+	u32 valid_params;
+	u16 nav_id;
+	u16 index;
+	u32 addr_lo;
+	u32 addr_hi;
+	u32 count;
+	u8 mode;
+	u8 size;
+	u8 order_id;
+} __packed;
+
+/**
+ * struct ti_sci_msg_rm_ring_cfg_resp - Response to configuring a ring.
+ *
+ * @hdr:	Generic Header
+ */
+struct ti_sci_msg_rm_ring_cfg_resp {
+	struct ti_sci_msg_hdr hdr;
+} __packed;
+
+/**
+ * struct ti_sci_msg_rm_ring_get_cfg_req - Get RA ring's configuration
+ *
+ * Gets the configuration of the non-real-time register fields of a ring.  The
+ * host, or a supervisor of the host, who owns the ring must be the requesting
+ * host.  The values of the non-real-time registers are returned in
+ * @ti_sci_msg_rm_ring_get_cfg_resp.
+ *
+ * @hdr: Generic Header
+ * @nav_id: Device ID of Navigator Subsystem from which the ring is allocated
+ * @index: ring index.
+ */
+struct ti_sci_msg_rm_ring_get_cfg_req {
+	struct ti_sci_msg_hdr hdr;
+	u16 nav_id;
+	u16 index;
+} __packed;
+
+/**
+ * struct ti_sci_msg_rm_ring_get_cfg_resp -  Ring get configuration response
+ *
+ * Response received by host processor after RM has handled
+ * @ti_sci_msg_rm_ring_get_cfg_req. The response contains the ring's
+ * non-real-time register values.
+ *
+ * @hdr: Generic Header
+ * @addr_lo: Ring 32 LSBs of base address
+ * @addr_hi: Ring 16 MSBs of base address.
+ * @count: Ring number of elements.
+ * @mode: Ring mode.
+ * @size: encoded Ring element size
+ * @order_id: Ring order ID.
+ */
+struct ti_sci_msg_rm_ring_get_cfg_resp {
+	struct ti_sci_msg_hdr hdr;
+	u32 addr_lo;
+	u32 addr_hi;
+	u32 count;
+	u8 mode;
+	u8 size;
+	u8 order_id;
+} __packed;
+
+/**
+ * struct ti_sci_msg_psil_pair - Pairs a PSI-L source thread to a destination
+ *				 thread
+ * @hdr:	Generic Header
+ * @nav_id:	SoC Navigator Subsystem device ID whose PSI-L config proxy is
+ *		used to pair the source and destination threads.
+ * @src_thread:	PSI-L source thread ID within the PSI-L System thread map.
+ *
+ * UDMAP transmit channels mapped to source threads will have their
+ * TCHAN_THRD_ID register programmed with the destination thread if the pairing
+ * is successful.
+ *
+ * @dst_thread: PSI-L destination thread ID within the PSI-L System thread map.
+ * PSI-L destination threads start at index 0x8000.  The request is NACK'd if
+ * the destination thread is not greater than or equal to 0x8000.
+ *
+ * UDMAP receive channels mapped to destination threads will have their
+ * RCHAN_THRD_ID register programmed with the source thread if the pairing
+ * is successful.
+ *
+ * Request type is TI_SCI_MSG_RM_PSIL_PAIR, response is a generic ACK or NACK
+ * message.
+ */
+struct ti_sci_msg_psil_pair {
+	struct ti_sci_msg_hdr hdr;
+	u32 nav_id;
+	u32 src_thread;
+	u32 dst_thread;
+} __packed;
+
+/**
+ * struct ti_sci_msg_psil_unpair - Unpairs a PSI-L source thread from a
+ *				   destination thread
+ * @hdr:	Generic Header
+ * @nav_id:	SoC Navigator Subsystem device ID whose PSI-L config proxy is
+ *		used to unpair the source and destination threads.
+ * @src_thread:	PSI-L source thread ID within the PSI-L System thread map.
+ *
+ * UDMAP transmit channels mapped to source threads will have their
+ * TCHAN_THRD_ID register cleared if the unpairing is successful.
+ *
+ * @dst_thread: PSI-L destination thread ID within the PSI-L System thread map.
+ * PSI-L destination threads start at index 0x8000.  The request is NACK'd if
+ * the destination thread is not greater than or equal to 0x8000.
+ *
+ * UDMAP receive channels mapped to destination threads will have their
+ * RCHAN_THRD_ID register cleared if the unpairing is successful.
+ *
+ * Request type is TI_SCI_MSG_RM_PSIL_UNPAIR, response is a generic ACK or NACK
+ * message.
+ */
+struct ti_sci_msg_psil_unpair {
+	struct ti_sci_msg_hdr hdr;
+	u32 nav_id;
+	u32 src_thread;
+	u32 dst_thread;
+} __packed;
+
+/**
+ * Configures a Navigator Subsystem UDMAP transmit channel
+ *
+ * Configures the non-real-time registers of a Navigator Subsystem UDMAP
+ * transmit channel.  The channel index must be assigned to the host defined
+ * in the TISCI header via the RM board configuration resource assignment
+ * range list.
+ *
+ * @hdr: Generic Header
+ *
+ * @valid_params: Bitfield defining validity of tx channel configuration
+ * parameters. The tx channel configuration fields are not valid, and will not
+ * be used for ch configuration, if their corresponding valid bit is zero.
+ * Valid bit usage:
+ *    0 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_pause_on_err
+ *    1 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_atype
+ *    2 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_chan_type
+ *    3 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_fetch_size
+ *    4 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::txcq_qnum
+ *    5 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_priority
+ *    6 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_qos
+ *    7 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_orderid
+ *    8 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_sched_priority
+ *    9 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_filt_einfo
+ *   10 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_filt_pswords
+ *   11 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_supr_tdpkt
+ *   12 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::tx_credit_count
+ *   13 - Valid bit for @ref ti_sci_msg_rm_udmap_tx_ch_cfg::fdepth
+ *
+ * @nav_id: SoC device ID of Navigator Subsystem where tx channel is located
+ *
+ * @index: UDMAP transmit channel index.
+ *
+ * @tx_pause_on_err: UDMAP transmit channel pause on error configuration to
+ * be programmed into the tx_pause_on_err field of the channel's TCHAN_TCFG
+ * register.
+ *
+ * @tx_filt_einfo: UDMAP transmit channel extended packet information passing
+ * configuration to be programmed into the tx_filt_einfo field of the
+ * channel's TCHAN_TCFG register.
+ *
+ * @tx_filt_pswords: UDMAP transmit channel protocol specific word passing
+ * configuration to be programmed into the tx_filt_pswords field of the
+ * channel's TCHAN_TCFG register.
+ *
+ * @tx_atype: UDMAP transmit channel non Ring Accelerator access pointer
+ * interpretation configuration to be programmed into the tx_atype field of
+ * the channel's TCHAN_TCFG register.
+ *
+ * @tx_chan_type: UDMAP transmit channel functional channel type and work
+ * passing mechanism configuration to be programmed into the tx_chan_type
+ * field of the channel's TCHAN_TCFG register.
+ *
+ * @tx_supr_tdpkt: UDMAP transmit channel teardown packet generation suppression
+ * configuration to be programmed into the tx_supr_tdpkt field of the channel's
+ * TCHAN_TCFG register.
+ *
+ * @tx_fetch_size: UDMAP transmit channel number of 32-bit descriptor words to
+ * fetch configuration to be programmed into the tx_fetch_size field of the
+ * channel's TCHAN_TCFG register.  The user must make sure to set the maximum
+ * word count that can pass through the channel for any allowed descriptor type.
+ *
+ * @tx_credit_count: UDMAP transmit channel transfer request credit count
+ * configuration to be programmed into the count field of the TCHAN_TCREDIT
+ * register.  Specifies how many credits for complete TRs are available.
+ *
+ * @txcq_qnum: UDMAP transmit channel completion queue configuration to be
+ * programmed into the txcq_qnum field of the TCHAN_TCQ register. The specified
+ * completion queue must be assigned to the host, or a subordinate of the host,
+ * requesting configuration of the transmit channel.
+ *
+ * @tx_priority: UDMAP transmit channel transmit priority value to be programmed
+ * into the priority field of the channel's TCHAN_TPRI_CTRL register.
+ *
+ * @tx_qos: UDMAP transmit channel transmit qos value to be programmed into the
+ * qos field of the channel's TCHAN_TPRI_CTRL register.
+ *
+ * @tx_orderid: UDMAP transmit channel bus order id value to be programmed into
+ * the orderid field of the channel's TCHAN_TPRI_CTRL register.
+ *
+ * @fdepth: UDMAP transmit channel FIFO depth configuration to be programmed
+ * into the fdepth field of the TCHAN_TFIFO_DEPTH register. Sets the number of
+ * Tx FIFO bytes which are allowed to be stored for the channel. Check the UDMAP
+ * section of the TRM for restrictions regarding this parameter.
+ *
+ * @tx_sched_priority: UDMAP transmit channel tx scheduling priority
+ * configuration to be programmed into the priority field of the channel's
+ * TCHAN_TST_SCHED register.
+ */
+struct ti_sci_msg_rm_udmap_tx_ch_cfg_req {
+	struct ti_sci_msg_hdr hdr;
+	u32 valid_params;
+	u16 nav_id;
+	u16 index;
+	u8 tx_pause_on_err;
+	u8 tx_filt_einfo;
+	u8 tx_filt_pswords;
+	u8 tx_atype;
+	u8 tx_chan_type;
+	u8 tx_supr_tdpkt;
+	u16 tx_fetch_size;
+	u8 tx_credit_count;
+	u16 txcq_qnum;
+	u8 tx_priority;
+	u8 tx_qos;
+	u8 tx_orderid;
+	u16 fdepth;
+	u8 tx_sched_priority;
+} __packed;
+
+/**
+ *  Response to configuring a UDMAP transmit channel.
+ *
+ * @hdr: Standard TISCI header
+ */
+struct ti_sci_msg_rm_udmap_tx_ch_cfg_resp {
+	struct ti_sci_msg_hdr hdr;
+} __packed;
+
+/**
+ * Configures a Navigator Subsystem UDMAP receive channel
+ *
+ * Configures the non-real-time registers of a Navigator Subsystem UDMAP
+ * receive channel.  The channel index must be assigned to the host defined
+ * in the TISCI header via the RM board configuration resource assignment
+ * range list.
+ *
+ * @hdr: Generic Header
+ *
+ * @valid_params: Bitfield defining validity of rx channel configuration
+ * parameters.
+ * The rx channel configuration fields are not valid, and will not be used for
+ * ch configuration, if their corresponding valid bit is zero.
+ * Valid bit usage:
+ *    0 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_pause_on_err
+ *    1 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_atype
+ *    2 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_chan_type
+ *    3 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_fetch_size
+ *    4 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rxcq_qnum
+ *    5 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_priority
+ *    6 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_qos
+ *    7 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_orderid
+ *    8 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_sched_priority
+ *    9 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::flowid_start
+ *   10 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::flowid_cnt
+ *   11 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_ignore_short
+ *   12 - Valid bit for @ti_sci_msg_rm_udmap_rx_ch_cfg_req::rx_ignore_long
+ *
+ * @nav_id: SoC device ID of Navigator Subsystem where rx channel is located
+ *
+ * @index: UDMAP receive channel index.
+ *
+ * @rx_fetch_size: UDMAP receive channel number of 32-bit descriptor words to
+ * fetch configuration to be programmed into the rx_fetch_size field of the
+ * channel's RCHAN_RCFG register.
+ *
+ * @rxcq_qnum: UDMAP receive channel completion queue configuration to be
+ * programmed into the rxcq_qnum field of the RCHAN_RCQ register.
+ * The specified completion queue must be assigned to the host, or a subordinate
+ * of the host, requesting configuration of the receive channel.
+ *
+ * @rx_priority: UDMAP receive channel receive priority value to be programmed
+ * into the priority field of the channel's RCHAN_RPRI_CTRL register.
+ *
+ * @rx_qos: UDMAP receive channel receive qos value to be programmed into the
+ * qos field of the channel's RCHAN_RPRI_CTRL register.
+ *
+ * @rx_orderid: UDMAP receive channel bus order id value to be programmed into
+ * the orderid field of the channel's RCHAN_RPRI_CTRL register.
+ *
+ * @rx_sched_priority: UDMAP receive channel rx scheduling priority
+ * configuration to be programmed into the priority field of the channel's
+ * RCHAN_RST_SCHED register.
+ *
+ * @flowid_start: UDMAP receive channel additional flows starting index
+ * configuration to program into the flow_start field of the RCHAN_RFLOW_RNG
+ * register. Specifies the starting index for flow IDs the receive channel is to
+ * make use of beyond the default flow. flowid_start and @ref flowid_cnt must be
+ * set as valid and configured together. The starting flow ID set by
+ * @ref flowid_start must be a flow index within the Navigator Subsystem's subset
+ * of flows beyond the default flows statically mapped to receive channels.
+ * The additional flows must be assigned to the host, or a subordinate of the
+ * host, requesting configuration of the receive channel.
+ *
+ * @flowid_cnt: UDMAP receive channel additional flows count configuration to
+ * program into the flowid_cnt field of the RCHAN_RFLOW_RNG register.
+ * This field specifies how many flow IDs are in the additional contiguous range
+ * of legal flow IDs for the channel.  @ref flowid_start and flowid_cnt must be
+ * set as valid and configured together. Disabling the valid_params field bit
+ * for flowid_cnt indicates no flow IDs other than the default are to be
+ * allocated and used by the receive channel. @ref flowid_start plus flowid_cnt
+ * cannot be greater than the number of receive flows in the receive channel's
+ * Navigator Subsystem.  The additional flows must be assigned to the host, or a
+ * subordinate of the host, requesting configuration of the receive channel.
+ *
+ * @rx_pause_on_err: UDMAP receive channel pause on error configuration to be
+ * programmed into the rx_pause_on_err field of the channel's RCHAN_RCFG
+ * register.
+ *
+ * @rx_atype: UDMAP receive channel non Ring Accelerator access pointer
+ * interpretation configuration to be programmed into the rx_atype field of the
+ * channel's RCHAN_RCFG register.
+ *
+ * @rx_chan_type: UDMAP receive channel functional channel type and work passing
+ * mechanism configuration to be programmed into the rx_chan_type field of the
+ * channel's RCHAN_RCFG register.
+ *
+ * @rx_ignore_short: UDMAP receive channel short packet treatment configuration
+ * to be programmed into the rx_ignore_short field of the RCHAN_RCFG register.
+ *
+ * @rx_ignore_long: UDMAP receive channel long packet treatment configuration to
+ * be programmed into the rx_ignore_long field of the RCHAN_RCFG register.
+ */
+struct ti_sci_msg_rm_udmap_rx_ch_cfg_req {
+	struct ti_sci_msg_hdr hdr;
+	u32 valid_params;
+	u16 nav_id;
+	u16 index;
+	u16 rx_fetch_size;
+	u16 rxcq_qnum;
+	u8 rx_priority;
+	u8 rx_qos;
+	u8 rx_orderid;
+	u8 rx_sched_priority;
+	u16 flowid_start;
+	u16 flowid_cnt;
+	u8 rx_pause_on_err;
+	u8 rx_atype;
+	u8 rx_chan_type;
+	u8 rx_ignore_short;
+	u8 rx_ignore_long;
+} __packed;
+
+/**
+ * Response to configuring a UDMAP receive channel.
+ *
+ * @hdr: Standard TISCI header
+ */
+struct ti_sci_msg_rm_udmap_rx_ch_cfg_resp {
+	struct ti_sci_msg_hdr hdr;
+} __packed;
+
+/**
+ * Configures a Navigator Subsystem UDMAP receive flow
+ *
+ * Configures a Navigator Subsystem UDMAP receive flow's registers.
+ * Configuration does not include the flow registers which handle size-based
+ * free descriptor queue routing.
+ *
+ * The flow index must be assigned to the host defined in the TISCI header via
+ * the RM board configuration resource assignment range list.
+ *
+ * @hdr: Standard TISCI header
+ *
+ * @valid_params
+ * Bitfield defining validity of rx flow configuration parameters.  The
+ * rx flow configuration fields are not valid, and will not be used for flow
+ * configuration, if their corresponding valid bit is zero.  Valid bit usage:
+ *     0 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_einfo_present
+ *     1 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_psinfo_present
+ *     2 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_error_handling
+ *     3 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_desc_type
+ *     4 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_sop_offset
+ *     5 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_qnum
+ *     6 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_hi
+ *     7 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_lo
+ *     8 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_hi
+ *     9 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_lo
+ *    10 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_hi_sel
+ *    11 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_src_tag_lo_sel
+ *    12 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_hi_sel
+ *    13 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_dest_tag_lo_sel
+ *    14 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq0_sz0_qnum
+ *    15 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq1_qnum
+ *    16 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq2_qnum
+ *    17 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_fdq3_qnum
+ *    18 - Valid bit for @tisci_msg_rm_udmap_flow_cfg_req::rx_ps_location
+ *
+ * @nav_id: SoC device ID of Navigator Subsystem from which the receive flow is
+ * allocated
+ *
+ * @flow_index: UDMAP receive flow index for non-optional configuration.
+ *
+ * @rx_einfo_present:
+ * UDMAP receive flow extended packet info present configuration to be
+ * programmed into the rx_einfo_present field of the flow's RFLOW_RFA register.
+ *
+ * @rx_psinfo_present:
+ * UDMAP receive flow PS words present configuration to be programmed into the
+ * rx_psinfo_present field of the flow's RFLOW_RFA register.
+ *
+ * @rx_error_handling:
+ * UDMAP receive flow error handling configuration to be programmed into the
+ * rx_error_handling field of the flow's RFLOW_RFA register.
+ *
+ * @rx_desc_type:
+ * UDMAP receive flow descriptor type configuration to be programmed into the
+ * rx_desc_type field of the flow's RFLOW_RFA register.
+ *
+ * @rx_sop_offset:
+ * UDMAP receive flow start of packet offset configuration to be programmed
+ * into the rx_sop_offset field of the RFLOW_RFA register.  See the UDMAP
+ * section of the TRM for more information on this setting.  Valid values for
+ * this field are 0-255 bytes.
+ *
+ * @rx_dest_qnum:
+ * UDMAP receive flow destination queue configuration to be programmed into the
+ * rx_dest_qnum field of the flow's RFLOW_RFA register.  The specified
+ * destination queue must be valid within the Navigator Subsystem and must be
+ * owned by the host, or a subordinate of the host, requesting allocation and
+ * configuration of the receive flow.
+ *
+ * @rx_src_tag_hi:
+ * UDMAP receive flow source tag high byte constant configuration to be
+ * programmed into the rx_src_tag_hi field of the flow's RFLOW_RFB register.
+ * See the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_src_tag_lo:
+ * UDMAP receive flow source tag low byte constant configuration to be
+ * programmed into the rx_src_tag_lo field of the flow's RFLOW_RFB register.
+ * See the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_dest_tag_hi:
+ * UDMAP receive flow destination tag high byte constant configuration to be
+ * programmed into the rx_dest_tag_hi field of the flow's RFLOW_RFB register.
+ * See the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_dest_tag_lo:
+ * UDMAP receive flow destination tag low byte constant configuration to be
+ * programmed into the rx_dest_tag_lo field of the flow's RFLOW_RFB register.
+ * See the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_src_tag_hi_sel:
+ * UDMAP receive flow source tag high byte selector configuration to be
+ * programmed into the rx_src_tag_hi_sel field of the RFLOW_RFC register.  See
+ * the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_src_tag_lo_sel:
+ * UDMAP receive flow source tag low byte selector configuration to be
+ * programmed into the rx_src_tag_lo_sel field of the RFLOW_RFC register.  See
+ * the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_dest_tag_hi_sel:
+ * UDMAP receive flow destination tag high byte selector configuration to be
+ * programmed into the rx_dest_tag_hi_sel field of the RFLOW_RFC register.  See
+ * the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_dest_tag_lo_sel:
+ * UDMAP receive flow destination tag low byte selector configuration to be
+ * programmed into the rx_dest_tag_lo_sel field of the RFLOW_RFC register.  See
+ * the UDMAP section of the TRM for more information on this setting.
+ *
+ * @rx_fdq0_sz0_qnum:
+ * UDMAP receive flow free descriptor queue 0 configuration to be programmed
+ * into the rx_fdq0_sz0_qnum field of the flow's RFLOW_RFD register.  See the
+ * UDMAP section of the TRM for more information on this setting. The specified
+ * free queue must be valid within the Navigator Subsystem and must be owned
+ * by the host, or a subordinate of the host, requesting allocation and
+ * configuration of the receive flow.
+ *
+ * @rx_fdq1_qnum:
+ * UDMAP receive flow free descriptor queue 1 configuration to be programmed
+ * into the rx_fdq1_qnum field of the flow's RFLOW_RFD register.  See the
+ * UDMAP section of the TRM for more information on this setting.  The specified
+ * free queue must be valid within the Navigator Subsystem and must be owned
+ * by the host, or a subordinate of the host, requesting allocation and
+ * configuration of the receive flow.
+ *
+ * @rx_fdq2_qnum:
+ * UDMAP receive flow free descriptor queue 2 configuration to be programmed
+ * into the rx_fdq2_qnum field of the flow's RFLOW_RFE register.  See the
+ * UDMAP section of the TRM for more information on this setting.  The specified
+ * free queue must be valid within the Navigator Subsystem and must be owned
+ * by the host, or a subordinate of the host, requesting allocation and
+ * configuration of the receive flow.
+ *
+ * @rx_fdq3_qnum:
+ * UDMAP receive flow free descriptor queue 3 configuration to be programmed
+ * into the rx_fdq3_qnum field of the flow's RFLOW_RFE register.  See the
+ * UDMAP section of the TRM for more information on this setting.  The specified
+ * free queue must be valid within the Navigator Subsystem and must be owned
+ * by the host, or a subordinate of the host, requesting allocation and
+ * configuration of the receive flow.
+ *
+ * @rx_ps_location:
+ * UDMAP receive flow PS words location configuration to be programmed into the
+ * rx_ps_location field of the flow's RFLOW_RFA register.
+ */
+struct ti_sci_msg_rm_udmap_flow_cfg_req {
+	struct ti_sci_msg_hdr hdr;
+	u32 valid_params;
+	u16 nav_id;
+	u16 flow_index;
+	u8 rx_einfo_present;
+	u8 rx_psinfo_present;
+	u8 rx_error_handling;
+	u8 rx_desc_type;
+	u16 rx_sop_offset;
+	u16 rx_dest_qnum;
+	u8 rx_src_tag_hi;
+	u8 rx_src_tag_lo;
+	u8 rx_dest_tag_hi;
+	u8 rx_dest_tag_lo;
+	u8 rx_src_tag_hi_sel;
+	u8 rx_src_tag_lo_sel;
+	u8 rx_dest_tag_hi_sel;
+	u8 rx_dest_tag_lo_sel;
+	u16 rx_fdq0_sz0_qnum;
+	u16 rx_fdq1_qnum;
+	u16 rx_fdq2_qnum;
+	u16 rx_fdq3_qnum;
+	u8 rx_ps_location;
+} __packed;
+
+/**
+ *  Response to configuring a Navigator Subsystem UDMAP receive flow
+ *
+ * @hdr: Standard TISCI header
+ */
+struct ti_sci_msg_rm_udmap_flow_cfg_resp {
+	struct ti_sci_msg_hdr hdr;
+} __packed;
+
 #endif /* __TI_SCI_H */
diff --git a/include/linux/soc/ti/ti_sci_protocol.h b/include/linux/soc/ti/ti_sci_protocol.h
index 90d505363652..49baf98ce7d0 100644
--- a/include/linux/soc/ti/ti_sci_protocol.h
+++ b/include/linux/soc/ti/ti_sci_protocol.h
@@ -212,6 +212,31 @@ struct ti_sci_clk_ops {
 			u64 *current_freq);
 };
 
+/**
+ * struct ti_sci_rm_core_ops - Resource management core operations
+ * @get_range:		Get a range of resources belonging to ti sci host.
+ * @get_range_from_shost:	Get a range of resources belonging to
+ *				specified host id.
+ *			- s_host: Host processing entity to which the
+ *				  resources are allocated
+ *
+ * NOTE: for these functions, all the parameters are consolidated and defined
+ * as below:
+ * - handle:	Pointer to TISCI handle as retrieved by *ti_sci_get_handle
+ * - dev_id:	TISCI device ID.
+ * - subtype:	Resource assignment subtype that is being requested
+ *		from the given device.
+ * - range_start:	Start index of the resource range
+ * - range_num:		Number of resources in the range
+ */
+struct ti_sci_rm_core_ops {
+	int (*get_range)(const struct ti_sci_handle *handle, u32 dev_id,
+			 u8 subtype, u16 *range_start, u16 *range_num);
+	int (*get_range_from_shost)(const struct ti_sci_handle *handle,
+				    u32 dev_id, u8 subtype, u8 s_host,
+				    u16 *range_start, u16 *range_num);
+};
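+
+/*
+ * Example (illustrative; dev_id and subtype are placeholders that come
+ * from SoC resource data): querying the resource range assigned to the
+ * calling host could look like:
+ *
+ *	u16 start, num;
+ *	int ret;
+ *
+ *	ret = handle->ops.rm_core_ops.get_range(handle, dev_id, subtype,
+ *						&start, &num);
+ */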
+
 /**
  * struct ti_sci_core_ops - SoC Core Operations
  * @reboot_device: Reboot the SoC
@@ -257,6 +282,230 @@ struct ti_sci_proc_ops {
 				    u32 *sts_flags);
 };
 
+#define TI_SCI_RING_MODE_RING			(0)
+#define TI_SCI_RING_MODE_MESSAGE		(1)
+#define TI_SCI_RING_MODE_CREDENTIALS		(2)
+#define TI_SCI_RING_MODE_QM			(3)
+
+#define TI_SCI_MSG_UNUSED_SECONDARY_HOST TI_SCI_RM_NULL_U8
+
+/* RA config.addr_lo parameter is valid for RM ring configure TI_SCI message */
+#define TI_SCI_MSG_VALUE_RM_RING_ADDR_LO_VALID	BIT(0)
+/* RA config.addr_hi parameter is valid for RM ring configure TI_SCI message */
+#define TI_SCI_MSG_VALUE_RM_RING_ADDR_HI_VALID	BIT(1)
+/* RA config.count parameter is valid for RM ring configure TI_SCI message */
+#define TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID	BIT(2)
+/* RA config.mode parameter is valid for RM ring configure TI_SCI message */
+#define TI_SCI_MSG_VALUE_RM_RING_MODE_VALID	BIT(3)
+/* RA config.size parameter is valid for RM ring configure TI_SCI message */
+#define TI_SCI_MSG_VALUE_RM_RING_SIZE_VALID	BIT(4)
+/* RA config.order_id parameter is valid for RM ring configure TISCI message */
+#define TI_SCI_MSG_VALUE_RM_RING_ORDER_ID_VALID	BIT(5)
+
+#define TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER \
+	(TI_SCI_MSG_VALUE_RM_RING_ADDR_LO_VALID | \
+	TI_SCI_MSG_VALUE_RM_RING_ADDR_HI_VALID | \
+	TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID | \
+	TI_SCI_MSG_VALUE_RM_RING_MODE_VALID | \
+	TI_SCI_MSG_VALUE_RM_RING_SIZE_VALID)
+
+/**
+ * struct ti_sci_rm_ringacc_ops - Ring Accelerator Management operations
+ * @config: configure the SoC Navigator Subsystem Ring Accelerator ring
+ * @get_config: get the SoC Navigator Subsystem Ring Accelerator ring
+ *		configuration
+ */
+struct ti_sci_rm_ringacc_ops {
+	int (*config)(const struct ti_sci_handle *handle,
+		      u32 valid_params, u16 nav_id, u16 index,
+		      u32 addr_lo, u32 addr_hi, u32 count, u8 mode,
+		      u8 size, u8 order_id
+	);
+	int (*get_config)(const struct ti_sci_handle *handle,
+			  u32 nav_id, u32 index, u8 *mode,
+			  u32 *addr_lo, u32 *addr_hi, u32 *count,
+			  u8 *size, u8 *order_id);
+};
+
+/**
+ * struct ti_sci_rm_psil_ops - PSI-L thread operations
+ * @pair: pair PSI-L source thread to a destination thread.
+ *	If the src_thread is mapped to UDMA tchan, the corresponding channel's
+ *	TCHAN_THRD_ID register is updated.
+ *	If the dst_thread is mapped to UDMA rchan, the corresponding channel's
+ *	RCHAN_THRD_ID register is updated.
+ * @unpair: unpair PSI-L source thread from a destination thread.
+ *	If the src_thread is mapped to UDMA tchan, the corresponding channel's
+ *	TCHAN_THRD_ID register is cleared.
+ *	If the dst_thread is mapped to UDMA rchan, the corresponding channel's
+ *	RCHAN_THRD_ID register is cleared.
+ */
+struct ti_sci_rm_psil_ops {
+	int (*pair)(const struct ti_sci_handle *handle, u32 nav_id,
+		    u32 src_thread, u32 dst_thread);
+	int (*unpair)(const struct ti_sci_handle *handle, u32 nav_id,
+		      u32 src_thread, u32 dst_thread);
+};
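+
+/*
+ * Example (illustrative; nav_id and the thread IDs are placeholders
+ * taken from DT and the PSI-L thread map): pairing a source thread to
+ * a destination thread before starting a transfer:
+ *
+ *	ret = handle->ops.rm_psil_ops.pair(handle, nav_id,
+ *					   src_thread, dst_thread);
+ */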
+
+/* UDMAP channel types */
+#define TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR		2
+#define TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR_SB		3	/* RX only */
+#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR		10
+#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBVR		11
+#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR	12
+#define TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBVR	13
+
+/* UDMAP channel atypes */
+#define TI_SCI_RM_UDMAP_ATYPE_PHYS			0
+#define TI_SCI_RM_UDMAP_ATYPE_INTERMEDIATE		1
+#define TI_SCI_RM_UDMAP_ATYPE_VIRTUAL			2
+
+/* UDMAP channel scheduling priorities */
+#define TI_SCI_RM_UDMAP_SCHED_PRIOR_HIGH		0
+#define TI_SCI_RM_UDMAP_SCHED_PRIOR_MEDHIGH		1
+#define TI_SCI_RM_UDMAP_SCHED_PRIOR_MEDLOW		2
+#define TI_SCI_RM_UDMAP_SCHED_PRIOR_LOW			3
+
+#define TI_SCI_RM_UDMAP_RX_FLOW_DESC_HOST		0
+#define TI_SCI_RM_UDMAP_RX_FLOW_DESC_MONO		2
+
+/* UDMAP TX/RX channel valid_params common declarations */
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID		BIT(0)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_ATYPE_VALID                BIT(1)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID            BIT(2)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID           BIT(3)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID              BIT(4)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_PRIORITY_VALID             BIT(5)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_QOS_VALID                  BIT(6)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_ORDER_ID_VALID             BIT(7)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_SCHED_PRIORITY_VALID       BIT(8)
+
+/**
+ * Configures a Navigator Subsystem UDMAP transmit channel
+ *
+ * Configures a Navigator Subsystem UDMAP transmit channel's registers.
+ * See @ti_sci_msg_rm_udmap_tx_ch_cfg_req
+ */
+struct ti_sci_msg_rm_udmap_tx_ch_cfg {
+	u32 valid_params;
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID        BIT(9)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID      BIT(10)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID        BIT(11)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_CREDIT_COUNT_VALID      BIT(12)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FDEPTH_VALID            BIT(13)
+	u16 nav_id;
+	u16 index;
+	u8 tx_pause_on_err;
+	u8 tx_filt_einfo;
+	u8 tx_filt_pswords;
+	u8 tx_atype;
+	u8 tx_chan_type;
+	u8 tx_supr_tdpkt;
+	u16 tx_fetch_size;
+	u8 tx_credit_count;
+	u16 txcq_qnum;
+	u8 tx_priority;
+	u8 tx_qos;
+	u8 tx_orderid;
+	u16 fdepth;
+	u8 tx_sched_priority;
+};
+
+/**
+ * Configures a Navigator Subsystem UDMAP receive channel
+ *
+ * Configures a Navigator Subsystem UDMAP receive channel's registers.
+ * See @ti_sci_msg_rm_udmap_rx_ch_cfg_req
+ */
+struct ti_sci_msg_rm_udmap_rx_ch_cfg {
+	u32 valid_params;
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID      BIT(9)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID        BIT(10)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_SHORT_VALID      BIT(11)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_LONG_VALID       BIT(12)
+	u16 nav_id;
+	u16 index;
+	u16 rx_fetch_size;
+	u16 rxcq_qnum;
+	u8 rx_priority;
+	u8 rx_qos;
+	u8 rx_orderid;
+	u8 rx_sched_priority;
+	u16 flowid_start;
+	u16 flowid_cnt;
+	u8 rx_pause_on_err;
+	u8 rx_atype;
+	u8 rx_chan_type;
+	u8 rx_ignore_short;
+	u8 rx_ignore_long;
+};
+
+/**
+ * Configures a Navigator Subsystem UDMAP receive flow
+ *
+ * Configures a Navigator Subsystem UDMAP receive flow's registers.
+ * See @ti_sci_msg_rm_udmap_flow_cfg_req
+ */
+struct ti_sci_msg_rm_udmap_flow_cfg {
+	u32 valid_params;
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID	BIT(0)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID     BIT(1)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID     BIT(2)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DESC_TYPE_VALID          BIT(3)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SOP_OFFSET_VALID         BIT(4)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_QNUM_VALID          BIT(5)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_VALID         BIT(6)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_VALID         BIT(7)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_VALID        BIT(8)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_VALID        BIT(9)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_SEL_VALID     BIT(10)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_SEL_VALID     BIT(11)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_SEL_VALID    BIT(12)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_SEL_VALID    BIT(13)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ0_SZ0_QNUM_VALID      BIT(14)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ1_QNUM_VALID          BIT(15)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ2_QNUM_VALID          BIT(16)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ3_QNUM_VALID          BIT(17)
+#define TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PS_LOCATION_VALID        BIT(18)
+	u16 nav_id;
+	u16 flow_index;
+	u8 rx_einfo_present;
+	u8 rx_psinfo_present;
+	u8 rx_error_handling;
+	u8 rx_desc_type;
+	u16 rx_sop_offset;
+	u16 rx_dest_qnum;
+	u8 rx_src_tag_hi;
+	u8 rx_src_tag_lo;
+	u8 rx_dest_tag_hi;
+	u8 rx_dest_tag_lo;
+	u8 rx_src_tag_hi_sel;
+	u8 rx_src_tag_lo_sel;
+	u8 rx_dest_tag_hi_sel;
+	u8 rx_dest_tag_lo_sel;
+	u16 rx_fdq0_sz0_qnum;
+	u16 rx_fdq1_qnum;
+	u16 rx_fdq2_qnum;
+	u16 rx_fdq3_qnum;
+	u8 rx_ps_location;
+};
+
+/**
+ * struct ti_sci_rm_udmap_ops - UDMA Management operations
+ * @tx_ch_cfg: configure SoC Navigator Subsystem UDMA transmit channel.
+ * @rx_ch_cfg: configure SoC Navigator Subsystem UDMA receive channel.
+ * @rx_flow_cfg: configure SoC Navigator Subsystem UDMA receive flow.
+ */
+struct ti_sci_rm_udmap_ops {
+	int (*tx_ch_cfg)(const struct ti_sci_handle *handle,
+			 const struct ti_sci_msg_rm_udmap_tx_ch_cfg *params);
+	int (*rx_ch_cfg)(const struct ti_sci_handle *handle,
+			 const struct ti_sci_msg_rm_udmap_rx_ch_cfg *params);
+	int (*rx_flow_cfg)(
+		const struct ti_sci_handle *handle,
+		const struct ti_sci_msg_rm_udmap_flow_cfg *params);
+};
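+
+/*
+ * Example (illustrative; nav_id, tchan_id and tc_ring_id are
+ * placeholders obtained from DT and resource allocation): a client
+ * zeroes the config, fills only the fields it cares about, marks them
+ * in valid_params and passes the struct to the op:
+ *
+ *	struct ti_sci_msg_rm_udmap_tx_ch_cfg cfg;
+ *
+ *	memset(&cfg, 0, sizeof(cfg));
+ *	cfg.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID |
+ *			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
+ *			   TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID;
+ *	cfg.nav_id = nav_id;
+ *	cfg.index = tchan_id;
+ *	cfg.tx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR;
+ *	cfg.tx_fetch_size = 16;
+ *	cfg.txcq_qnum = tc_ring_id;
+ *	ret = handle->ops.rm_udmap_ops.tx_ch_cfg(handle, &cfg);
+ */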
+
 /**
  * struct ti_sci_ops - Function support for TI SCI
  * @board_ops:	Miscellaneous operations
@@ -264,6 +513,7 @@ struct ti_sci_proc_ops {
  * @clk_ops:	Clock specific operations
  * @core_ops:	Core specific operations
  * @proc_ops:	Processor specific operations
+ * @rm_ring_ops: Ring Accelerator Management operations
  */
 struct ti_sci_ops {
 	struct ti_sci_board_ops board_ops;
@@ -271,6 +521,10 @@ struct ti_sci_ops {
 	struct ti_sci_clk_ops clk_ops;
 	struct ti_sci_core_ops core_ops;
 	struct ti_sci_proc_ops proc_ops;
+	struct ti_sci_rm_core_ops rm_core_ops;
+	struct ti_sci_rm_ringacc_ops rm_ring_ops;
+	struct ti_sci_rm_psil_ops rm_psil_ops;
+	struct ti_sci_rm_udmap_ops rm_udmap_ops;
 };
 
 /**
@@ -283,12 +537,42 @@ struct ti_sci_handle {
 	struct ti_sci_version_info version;
 };
 
+#define TI_SCI_RESOURCE_NULL	0xffff
+
+/**
+ * struct ti_sci_resource_desc - Description of TI SCI resource instance range.
+ * @start:	Start index of the resource.
+ * @num:	Number of resources.
+ * @res_map:	Bitmap to manage the allocation of these resources.
+ */
+struct ti_sci_resource_desc {
+	u16 start;
+	u16 num;
+	unsigned long *res_map;
+};
+
+/**
+ * struct ti_sci_resource - Structure representing a resource assigned
+ *			    to a device.
+ * @sets:	Number of sets available from this resource type
+ * @desc:	Array of resource descriptors.
+ */
+struct ti_sci_resource {
+	u16 sets;
+	struct ti_sci_resource_desc *desc;
+};
+
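+/*
+ * Example (illustrative): a driver typically fetches its resource range
+ * once via devm_ti_sci_get_of_resource() below and then draws single
+ * indices from the bitmap-backed pool:
+ *
+ *	res = devm_ti_sci_get_of_resource(handle, dev, dev_id,
+ *					  "ti,sci-rm-range-gp-rings");
+ *	id = ti_sci_get_free_resource(res);
+ *	if (id == TI_SCI_RESOURCE_NULL)
+ *		return -ENOENT;
+ *	...
+ *	ti_sci_release_resource(res, id);
+ */
+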
 #if IS_ENABLED(CONFIG_TI_SCI_PROTOCOL)
 
 const struct ti_sci_handle *ti_sci_get_handle_from_sysfw(struct udevice *dev);
 const struct ti_sci_handle *ti_sci_get_handle(struct udevice *dev);
 const struct ti_sci_handle *ti_sci_get_by_phandle(struct udevice *dev,
 						  const char *property);
+u16 ti_sci_get_free_resource(struct ti_sci_resource *res);
+void ti_sci_release_resource(struct ti_sci_resource *res, u16 id);
+struct ti_sci_resource *
+devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
+			    struct udevice *dev, u32 dev_id, char *of_prop);
 
 #else	/* CONFIG_TI_SCI_PROTOCOL */
 
@@ -309,6 +593,22 @@ const struct ti_sci_handle *ti_sci_get_by_phandle(struct udevice *dev,
 {
 	return ERR_PTR(-EINVAL);
 }
+
+static inline u16 ti_sci_get_free_resource(struct ti_sci_resource *res)
+{
+	return TI_SCI_RESOURCE_NULL;
+}
+
+static inline void ti_sci_release_resource(struct ti_sci_resource *res, u16 id)
+{
+}
+
+static inline struct ti_sci_resource *
+devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
+			    struct udevice *dev, u32 dev_id, char *of_prop)
+{
+	return ERR_PTR(-EINVAL);
+}
 #endif	/* CONFIG_TI_SCI_PROTOCOL */
 
 #endif	/* __TISCI_PROTOCOL_H */
-- 
2.20.1


* [U-Boot] [PATCH v4 2/7] soc: ti: k3: add navss ringacc driver
  2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 1/7] firmware: ti_sci: Add support for NAVSS resource management Vignesh R
@ 2019-02-05 12:01 ` Vignesh R
  2019-04-12 16:27   ` [U-Boot] [U-Boot,v4,2/7] " Tom Rini
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 3/7] soc: ti: k3: add CPPI5 description and helpers Vignesh R
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

From: Grygorii Strashko <grygorii.strashko@ti.com>

The Ring Accelerator (RINGACC or RA) provides hardware acceleration to
enable straightforward passing of work between a producer and a consumer.
There is one RINGACC module per NAVSS on TI AM65x SoCs.

The RINGACC converts constant-address read and write accesses to equivalent
read or write accesses to a circular data structure in memory. The RINGACC
eliminates the need for each DMA controller that accesses ring elements
to know the current state of the ring (base address, current offset).
The DMA controller performs a read or write access to a
specific address range (which maps to the source interface on the RINGACC)
and the RINGACC replaces the address for the transaction with a new address
which corresponds to the head or tail element of the ring (head for reads,
tail for writes). Since the RINGACC maintains the state, multiple DMA
controllers or channels are allowed to coherently share the same rings as
applicable. The RINGACC is able to place data which is destined towards
software into cached memory directly.

Supported ring modes:
 - Ring Mode
 - Messaging Mode
 - Credentials Mode
 - Queue Manager Mode

TI-SCI integration:

Texas Instruments' System Control Interface (TI-SCI) Message Protocol now
controls Ringacc module resource management (RM) and ring configuration.

The Ringacc driver now manages ring allocation by itself and asks the
TI-SCI firmware only to configure the specific rings it has allocated.
It is done this way because the Linux driver implements a two-stage ring
allocation and configuration flow (allocate ring, then configure ring),
while the TI-SCI Message Protocol supports only one combined operation
(allocate+configure).
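
As a rough usage sketch (illustrative only, error handling omitted, and
assuming a ringacc handle obtained from the probed device), a client
requests a general purpose ring, configures it and then exchanges
elements through the push/pop helpers declared in k3-navss-ringacc.h:

	struct k3_nav_ring_cfg cfg = {
		.size = 128,
		.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8,
		.mode = K3_NAV_RINGACC_RING_MODE_RING,
	};
	struct k3_nav_ring *ring;
	u64 elem = 0xdeadbeefULL;

	ring = k3_nav_ringacc_request_ring(ringacc,
					   K3_NAV_RINGACC_RING_ID_ANY, 0);
	k3_nav_ringacc_ring_cfg(ring, &cfg);
	k3_nav_ringacc_ring_push(ring, &elem);	/* producer side */
	k3_nav_ringacc_ring_pop(ring, &elem);	/* consumer side */
	k3_nav_ringacc_ring_free(ring);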

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
---
 drivers/Kconfig                         |    2 +
 drivers/soc/Kconfig                     |    5 +
 drivers/soc/Makefile                    |    1 +
 drivers/soc/ti/Kconfig                  |   20 +
 drivers/soc/ti/Makefile                 |    5 +
 drivers/soc/ti/k3-navss-ringacc.c       | 1057 +++++++++++++++++++++++
 include/linux/soc/ti/k3-navss-ringacc.h |  236 +++++
 7 files changed, 1326 insertions(+)
 create mode 100644 drivers/soc/Kconfig
 create mode 100644 drivers/soc/ti/Kconfig
 create mode 100644 drivers/soc/ti/Makefile
 create mode 100644 drivers/soc/ti/k3-navss-ringacc.c
 create mode 100644 include/linux/soc/ti/k3-navss-ringacc.h

diff --git a/drivers/Kconfig b/drivers/Kconfig
index e9fbadd13d5a..1eedb3462b0a 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -96,6 +96,8 @@ source "drivers/smem/Kconfig"
 
 source "drivers/sound/Kconfig"
 
+source "drivers/soc/Kconfig"
+
 source "drivers/spi/Kconfig"
 
 source "drivers/spmi/Kconfig"
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
new file mode 100644
index 000000000000..7b4e4d613088
--- /dev/null
+++ b/drivers/soc/Kconfig
@@ -0,0 +1,5 @@
+menu "SOC (System On Chip) specific Drivers"
+
+source "drivers/soc/ti/Kconfig"
+
+endmenu
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 42037f99d587..8b7ead546e1c 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -3,3 +3,4 @@
 # Makefile for the U-Boot SOC specific device drivers.
 
 obj-$(CONFIG_ARCH_KEYSTONE)	+= keystone/
+obj-$(CONFIG_SOC_TI) += ti/
diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
new file mode 100644
index 000000000000..8c0f3c07b23f
--- /dev/null
+++ b/drivers/soc/ti/Kconfig
@@ -0,0 +1,20 @@
+# SPDX-License-Identifier: GPL-2.0+
+
+menuconfig SOC_TI
+	bool "TI SOC drivers support"
+
+if SOC_TI
+
+config TI_K3_NAVSS_RINGACC
+	bool "K3 Ring accelerator Sub System"
+	depends on ARCH_K3
+	select MISC
+	help
+	  Say y here to support the K3 AM65x Ring accelerator module.
+	  The Ring Accelerator (RINGACC or RA) provides hardware acceleration
+	  to enable straightforward passing of work between a producer
+	  and a consumer. There is one RINGACC module per NAVSS on TI AM65x SoCs.
+	  If unsure, say N.
+
+
+endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
new file mode 100644
index 000000000000..63e21aaad40f
--- /dev/null
+++ b/drivers/soc/ti/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0+
+#
+# TI K3 SOC drivers
+#
+obj-$(CONFIG_TI_K3_NAVSS_RINGACC)	+= k3-navss-ringacc.o
diff --git a/drivers/soc/ti/k3-navss-ringacc.c b/drivers/soc/ti/k3-navss-ringacc.c
new file mode 100644
index 000000000000..fcb84f7aa49b
--- /dev/null
+++ b/drivers/soc/ti/k3-navss-ringacc.c
@@ -0,0 +1,1057 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * TI K3 AM65x NAVSS Ring accelerator Manager (RA) subsystem driver
+ *
+ * Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ */
+
+#include <common.h>
+#include <asm/io.h>
+#include <malloc.h>
+#include <asm/dma-mapping.h>
+#include <asm/bitops.h>
+#include <dm.h>
+#include <dm/read.h>
+#include <dm/uclass.h>
+#include <linux/compat.h>
+#include <linux/soc/ti/k3-navss-ringacc.h>
+#include <linux/soc/ti/ti_sci_protocol.h>
+
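+/*
+ * Map the Linux-style DMA API used below onto U-Boot's simpler
+ * helpers: U-Boot's dma_alloc_coherent() takes only a size and a
+ * handle pointer, and freeing needs only the CPU address.
+ */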
+#define set_bit(bit, bitmap)	__set_bit(bit, bitmap)
+#define clear_bit(bit, bitmap)	__clear_bit(bit, bitmap)
+#define dma_free_coherent(dev, size, cpu_addr, dma_handle) \
+	dma_free_coherent(cpu_addr)
+#define dma_zalloc_coherent(dev, size, dma_handle, flag) \
+({ \
+	void	*ring_mem_virt; \
+	ring_mem_virt = dma_alloc_coherent((size), \
+					   (unsigned long *)(dma_handle)); \
+	if (ring_mem_virt) \
+		memset(ring_mem_virt, 0, (size)); \
+	ring_mem_virt; \
+})
+
+static LIST_HEAD(k3_nav_ringacc_list);
+
+static	void ringacc_writel(u32 v, void __iomem *reg)
+{
+	pr_debug("WRITEL(32): v(%08X)-->reg(%p)\n", v, reg);
+	writel(v, reg);
+}
+
+static	u32 ringacc_readl(void __iomem *reg)
+{
+	u32 v;
+
+	v = readl(reg);
+	pr_debug("READL(32): v(%08X)<--reg(%p)\n", v, reg);
+	return v;
+}
+
+#define KNAV_RINGACC_CFG_RING_SIZE_ELCNT_MASK		GENMASK(19, 0)
+
+/**
+ * struct k3_nav_ring_rt_regs -  The RA Control/Status Registers region
+ */
+struct k3_nav_ring_rt_regs {
+	u32	resv_16[4];
+	u32	db;		/* RT Ring N Doorbell Register */
+	u32	resv_4[1];
+	u32	occ;		/* RT Ring N Occupancy Register */
+	u32	indx;		/* RT Ring N Current Index Register */
+	u32	hwocc;		/* RT Ring N Hardware Occupancy Register */
+	u32	hwindx;		/* RT Ring N Hardware Current Index Register */
+};
+
+#define KNAV_RINGACC_RT_REGS_STEP	0x1000
+
+/**
+ * struct k3_nav_ring_fifo_regs -  The Ring Accelerator Queues Registers region
+ */
+struct k3_nav_ring_fifo_regs {
+	u32	head_data[128];		/* Ring Head Entry Data Registers */
+	u32	tail_data[128];		/* Ring Tail Entry Data Registers */
+	u32	peek_head_data[128];	/* Ring Peek Head Entry Data Regs */
+	u32	peek_tail_data[128];	/* Ring Peek Tail Entry Data Regs */
+};
+
+/**
+ * struct k3_ringacc_proxy_gcfg_regs - RA Proxy Global Config MMIO Region
+ */
+struct k3_ringacc_proxy_gcfg_regs {
+	u32	revision;	/* Revision Register */
+	u32	config;		/* Config Register */
+};
+
+#define K3_RINGACC_PROXY_CFG_THREADS_MASK		GENMASK(15, 0)
+
+/**
+ * struct k3_ringacc_proxy_target_regs -  RA Proxy Datapath MMIO Region
+ */
+struct k3_ringacc_proxy_target_regs {
+	u32	control;	/* Proxy Control Register */
+	u32	status;		/* Proxy Status Register */
+	u8	resv_512[504];
+	u32	data[128];	/* Proxy Data Register */
+};
+
+#define K3_RINGACC_PROXY_TARGET_STEP	0x1000
+#define K3_RINGACC_PROXY_NOT_USED	(-1)
+
+enum k3_ringacc_proxy_access_mode {
+	PROXY_ACCESS_MODE_HEAD = 0,
+	PROXY_ACCESS_MODE_TAIL = 1,
+	PROXY_ACCESS_MODE_PEEK_HEAD = 2,
+	PROXY_ACCESS_MODE_PEEK_TAIL = 3,
+};
+
+#define KNAV_RINGACC_FIFO_WINDOW_SIZE_BYTES  (512U)
+#define KNAV_RINGACC_FIFO_REGS_STEP	0x1000
+#define KNAV_RINGACC_MAX_DB_RING_CNT    (127U)
+
+/**
+ * struct k3_nav_ring_ops -  Ring operations
+ */
+struct k3_nav_ring_ops {
+	int (*push_tail)(struct k3_nav_ring *ring, void *elm);
+	int (*push_head)(struct k3_nav_ring *ring, void *elm);
+	int (*pop_tail)(struct k3_nav_ring *ring, void *elm);
+	int (*pop_head)(struct k3_nav_ring *ring, void *elm);
+};
+
+/**
+ * struct k3_nav_ring - RA Ring descriptor
+ *
+ * @rt - Ring control/status registers
+ * @fifos - Ring queues registers
+ * @proxy - Ring Proxy Datapath registers
+ * @ring_mem_dma - Ring buffer dma address
+ * @ring_mem_virt - Ring buffer virt address
+ * @ops - Ring operations
+ * @size - Ring size in elements
+ * @elm_size - Size of the ring element
+ * @mode - Ring mode
+ * @flags - flags
+ * @free - Number of free elements
+ * @occ - Ring occupancy
+ * @windex - Write index (only for @K3_NAV_RINGACC_RING_MODE_RING)
+ * @rindex - Read index (only for @K3_NAV_RINGACC_RING_MODE_RING)
+ * @ring_id - Ring Id
+ * @parent - Pointer on struct @k3_nav_ringacc
+ * @use_count - Use count for shared rings
+ * @proxy_id - RA Ring Proxy Id (only if @K3_NAV_RINGACC_RING_USE_PROXY)
+ */
+struct k3_nav_ring {
+	struct k3_nav_ring_rt_regs __iomem *rt;
+	struct k3_nav_ring_fifo_regs __iomem *fifos;
+	struct k3_ringacc_proxy_target_regs  __iomem *proxy;
+	dma_addr_t	ring_mem_dma;
+	void		*ring_mem_virt;
+	struct k3_nav_ring_ops *ops;
+	u32		size;
+	enum k3_nav_ring_size elm_size;
+	enum k3_nav_ring_mode mode;
+	u32		flags;
+#define KNAV_RING_FLAG_BUSY	BIT(1)
+#define K3_NAV_RING_FLAG_SHARED	BIT(2)
+	u32		free;
+	u32		occ;
+	u32		windex;
+	u32		rindex;
+	u32		ring_id;
+	struct k3_nav_ringacc	*parent;
+	u32		use_count;
+	int		proxy_id;
+};
+
+/**
+ * struct k3_nav_ringacc - Rings accelerator descriptor
+ *
+ * @dev - pointer on RA device
+ * @proxy_gcfg - RA proxy global config registers
+ * @proxy_target_base - RA proxy datapath region
+ * @num_rings - number of ring in RA
+ * @rm_gp_range - general purpose rings range from tisci
+ * @dma_ring_reset_quirk - DMA reset w/a enable
+ * @num_proxies - number of RA proxies
+ * @rings - array of rings descriptors (struct @k3_nav_ring)
+ * @list - list of RAs in the system
+ * @tisci - pointer ti-sci handle
+ * @tisci_ring_ops - ti-sci rings ops
+ * @tisci_dev_id - ti-sci device id
+ */
+struct k3_nav_ringacc {
+	struct udevice *dev;
+	struct k3_ringacc_proxy_gcfg_regs __iomem *proxy_gcfg;
+	void __iomem *proxy_target_base;
+	u32 num_rings; /* number of rings in Ringacc module */
+	unsigned long *rings_inuse;
+	struct ti_sci_resource *rm_gp_range;
+	bool dma_ring_reset_quirk;
+	u32 num_proxies;
+	unsigned long *proxy_inuse;
+
+	struct k3_nav_ring *rings;
+	struct list_head list;
+
+	const struct ti_sci_handle *tisci;
+	const struct ti_sci_rm_ringacc_ops *tisci_ring_ops;
+	u32  tisci_dev_id;
+};
+
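+/*
+ * Ring element sizes are stored as a power-of-two exponent, so one
+ * element occupies (4 << elm_size) bytes; see enum k3_nav_ring_size.
+ */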
+static long k3_nav_ringacc_ring_get_fifo_pos(struct k3_nav_ring *ring)
+{
+	return KNAV_RINGACC_FIFO_WINDOW_SIZE_BYTES -
+	       (4 << ring->elm_size);
+}
+
+static void *k3_nav_ringacc_get_elm_addr(struct k3_nav_ring *ring, u32 idx)
+{
+	return (idx * (4 << ring->elm_size) + ring->ring_mem_virt);
+}
+
+static int k3_nav_ringacc_ring_push_mem(struct k3_nav_ring *ring, void *elem);
+static int k3_nav_ringacc_ring_pop_mem(struct k3_nav_ring *ring, void *elem);
+
+static struct k3_nav_ring_ops k3_nav_mode_ring_ops = {
+		.push_tail = k3_nav_ringacc_ring_push_mem,
+		.pop_head = k3_nav_ringacc_ring_pop_mem,
+};
+
+static int k3_nav_ringacc_ring_push_io(struct k3_nav_ring *ring, void *elem);
+static int k3_nav_ringacc_ring_pop_io(struct k3_nav_ring *ring, void *elem);
+static int k3_nav_ringacc_ring_push_head_io(struct k3_nav_ring *ring,
+					    void *elem);
+static int k3_nav_ringacc_ring_pop_tail_io(struct k3_nav_ring *ring,
+					   void *elem);
+
+static struct k3_nav_ring_ops k3_nav_mode_msg_ops = {
+		.push_tail = k3_nav_ringacc_ring_push_io,
+		.push_head = k3_nav_ringacc_ring_push_head_io,
+		.pop_tail = k3_nav_ringacc_ring_pop_tail_io,
+		.pop_head = k3_nav_ringacc_ring_pop_io,
+};
+
+static int k3_ringacc_ring_push_head_proxy(struct k3_nav_ring *ring,
+					   void *elem);
+static int k3_ringacc_ring_push_tail_proxy(struct k3_nav_ring *ring,
+					   void *elem);
+static int k3_ringacc_ring_pop_head_proxy(struct k3_nav_ring *ring, void *elem);
+static int k3_ringacc_ring_pop_tail_proxy(struct k3_nav_ring *ring, void *elem);
+
+static struct k3_nav_ring_ops k3_nav_mode_proxy_ops = {
+		.push_tail = k3_ringacc_ring_push_tail_proxy,
+		.push_head = k3_ringacc_ring_push_head_proxy,
+		.pop_tail = k3_ringacc_ring_pop_tail_proxy,
+		.pop_head = k3_ringacc_ring_pop_head_proxy,
+};
+
+struct udevice *k3_nav_ringacc_get_dev(struct k3_nav_ringacc *ringacc)
+{
+	return ringacc->dev;
+}
+
+struct k3_nav_ring *k3_nav_ringacc_request_ring(struct k3_nav_ringacc *ringacc,
+						int id, u32 flags)
+{
+	int proxy_id = K3_RINGACC_PROXY_NOT_USED;
+
+	if (id == K3_NAV_RINGACC_RING_ID_ANY) {
+		/* Request for any general purpose ring */
+		struct ti_sci_resource_desc *gp_rings =
+					&ringacc->rm_gp_range->desc[0];
+		unsigned long size;
+
+		size = gp_rings->start + gp_rings->num;
+		id = find_next_zero_bit(ringacc->rings_inuse,
+					size, gp_rings->start);
+		if (id == size)
+			goto error;
+	} else if (id < 0) {
+		goto error;
+	}
+
+	if (test_bit(id, ringacc->rings_inuse) &&
+	    !(ringacc->rings[id].flags & K3_NAV_RING_FLAG_SHARED))
+		goto error;
+	else if (ringacc->rings[id].flags & K3_NAV_RING_FLAG_SHARED)
+		goto out;
+
+	if (flags & K3_NAV_RINGACC_RING_USE_PROXY) {
+		proxy_id = find_next_zero_bit(ringacc->proxy_inuse,
+					      ringacc->num_proxies, 0);
+		if (proxy_id == ringacc->num_proxies)
+			goto error;
+	}
+
+	if (!try_module_get(ringacc->dev->driver->owner))
+		goto error;
+
+	if (proxy_id != K3_RINGACC_PROXY_NOT_USED) {
+		set_bit(proxy_id, ringacc->proxy_inuse);
+		ringacc->rings[id].proxy_id = proxy_id;
+		pr_debug("Giving ring#%d proxy#%d\n",
+			 id, proxy_id);
+	} else {
+		pr_debug("Giving ring#%d\n", id);
+	}
+
+	set_bit(id, ringacc->rings_inuse);
+out:
+	ringacc->rings[id].use_count++;
+	return &ringacc->rings[id];
+
+error:
+	return NULL;
+}
+
+static void k3_ringacc_ring_reset_sci(struct k3_nav_ring *ring)
+{
+	struct k3_nav_ringacc *ringacc = ring->parent;
+	int ret;
+
+	ret = ringacc->tisci_ring_ops->config(
+			ringacc->tisci,
+			TI_SCI_MSG_VALUE_RM_RING_COUNT_VALID,
+			ringacc->tisci_dev_id,
+			ring->ring_id,
+			0,
+			0,
+			ring->size,
+			0,
+			0,
+			0);
+	if (ret)
+		dev_err(ringacc->dev, "TISCI reset ring fail (%d) ring_idx %d\n",
+			ret, ring->ring_id);
+}
+
+void k3_nav_ringacc_ring_reset(struct k3_nav_ring *ring)
+{
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return;
+
+	ring->occ = 0;
+	ring->free = 0;
+	ring->rindex = 0;
+	ring->windex = 0;
+
+	k3_ringacc_ring_reset_sci(ring);
+}
+
+static void k3_ringacc_ring_reconfig_qmode_sci(struct k3_nav_ring *ring,
+					       enum k3_nav_ring_mode mode)
+{
+	struct k3_nav_ringacc *ringacc = ring->parent;
+	int ret;
+
+	ret = ringacc->tisci_ring_ops->config(
+			ringacc->tisci,
+			TI_SCI_MSG_VALUE_RM_RING_MODE_VALID,
+			ringacc->tisci_dev_id,
+			ring->ring_id,
+			0,
+			0,
+			0,
+			mode,
+			0,
+			0);
+	if (ret)
+		dev_err(ringacc->dev, "TISCI reconf qmode fail (%d) ring_idx %d\n",
+			ret, ring->ring_id);
+}
+
+void k3_nav_ringacc_ring_reset_dma(struct k3_nav_ring *ring, u32 occ)
+{
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return;
+
+	if (!ring->parent->dma_ring_reset_quirk)
+		return;
+
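+	/* 1. Read the current ring occupancy unless the caller passed it */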
+	if (!occ)
+		occ = ringacc_readl(&ring->rt->occ);
+
+	if (occ) {
+		u32 db_ring_cnt, db_ring_cnt_cur;
+
+		pr_debug("%s %u occ: %u\n", __func__,
+			 ring->ring_id, occ);
+		/* 2. Reset the ring */
+		k3_ringacc_ring_reset_sci(ring);
+
+		/*
+		 * 3. Setup the ring in ring/doorbell mode
+		 * (if not already in this mode)
+		 */
+		if (ring->mode != K3_NAV_RINGACC_RING_MODE_RING)
+			k3_ringacc_ring_reconfig_qmode_sci(
+					ring, K3_NAV_RINGACC_RING_MODE_RING);
+		/*
+		 * 4. Ring the doorbell 2**22 - ringOcc times.
+		 * This will wrap the internal UDMAP ring state occupancy
+		 * counter (which is 21-bits wide) to 0.
+		 */
+		db_ring_cnt = (1U << 22) - occ;
+
+		while (db_ring_cnt != 0) {
+			/*
+			 * Ring the doorbell with the maximum count each
+			 * iteration if possible to minimize the total
+			 * number of writes
+			 */
+			if (db_ring_cnt > KNAV_RINGACC_MAX_DB_RING_CNT)
+				db_ring_cnt_cur = KNAV_RINGACC_MAX_DB_RING_CNT;
+			else
+				db_ring_cnt_cur = db_ring_cnt;
+
+			writel(db_ring_cnt_cur, &ring->rt->db);
+			db_ring_cnt -= db_ring_cnt_cur;
+		}
+
+		/* 5. Restore the original ring mode (if not ring mode) */
+		if (ring->mode != K3_NAV_RINGACC_RING_MODE_RING)
+			k3_ringacc_ring_reconfig_qmode_sci(ring, ring->mode);
+	}
+
+	/* 2. Reset the ring */
+	k3_nav_ringacc_ring_reset(ring);
+}
+
+static void k3_ringacc_ring_free_sci(struct k3_nav_ring *ring)
+{
+	struct k3_nav_ringacc *ringacc = ring->parent;
+	int ret;
+
+	ret = ringacc->tisci_ring_ops->config(
+			ringacc->tisci,
+			TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER,
+			ringacc->tisci_dev_id,
+			ring->ring_id,
+			0,
+			0,
+			0,
+			0,
+			0,
+			0);
+	if (ret)
+		dev_err(ringacc->dev, "TISCI ring free fail (%d) ring_idx %d\n",
+			ret, ring->ring_id);
+}
+
+int k3_nav_ringacc_ring_free(struct k3_nav_ring *ring)
+{
+	struct k3_nav_ringacc *ringacc;
+
+	if (!ring)
+		return -EINVAL;
+
+	ringacc = ring->parent;
+
+	pr_debug("%s flags: 0x%08x\n", __func__, ring->flags);
+
+	if (!test_bit(ring->ring_id, ringacc->rings_inuse))
+		return -EINVAL;
+
+	if (--ring->use_count)
+		goto out;
+
+	if (!(ring->flags & KNAV_RING_FLAG_BUSY))
+		goto no_init;
+
+	k3_ringacc_ring_free_sci(ring);
+
+	dma_free_coherent(ringacc->dev,
+			  ring->size * (4 << ring->elm_size),
+			  ring->ring_mem_virt, ring->ring_mem_dma);
+	ring->flags &= ~KNAV_RING_FLAG_BUSY;
+	ring->ops = NULL;
+	if (ring->proxy_id != K3_RINGACC_PROXY_NOT_USED) {
+		clear_bit(ring->proxy_id, ringacc->proxy_inuse);
+		ring->proxy = NULL;
+		ring->proxy_id = K3_RINGACC_PROXY_NOT_USED;
+	}
+
+no_init:
+	clear_bit(ring->ring_id, ringacc->rings_inuse);
+
+	module_put(ringacc->dev->driver->owner);
+
+out:
+	return 0;
+}
+
+u32 k3_nav_ringacc_get_ring_id(struct k3_nav_ring *ring)
+{
+	if (!ring)
+		return -EINVAL;
+
+	return ring->ring_id;
+}
+
+static int k3_nav_ringacc_ring_cfg_sci(struct k3_nav_ring *ring)
+{
+	struct k3_nav_ringacc *ringacc = ring->parent;
+	u32 ring_idx;
+	int ret;
+
+	if (!ringacc->tisci)
+		return -EINVAL;
+
+	ring_idx = ring->ring_id;
+	ret = ringacc->tisci_ring_ops->config(
+			ringacc->tisci,
+			TI_SCI_MSG_VALUE_RM_ALL_NO_ORDER,
+			ringacc->tisci_dev_id,
+			ring_idx,
+			lower_32_bits(ring->ring_mem_dma),
+			upper_32_bits(ring->ring_mem_dma),
+			ring->size,
+			ring->mode,
+			ring->elm_size,
+			0);
+	if (ret)
+		dev_err(ringacc->dev, "TISCI config ring fail (%d) ring_idx %d\n",
+			ret, ring_idx);
+
+	return ret;
+}
+
+int k3_nav_ringacc_ring_cfg(struct k3_nav_ring *ring,
+			    struct k3_nav_ring_cfg *cfg)
+{
+	struct k3_nav_ringacc *ringacc = ring->parent;
+	int ret = 0;
+
+	if (!ring || !cfg)
+		return -EINVAL;
+	if (cfg->elm_size > K3_NAV_RINGACC_RING_ELSIZE_256 ||
+	    cfg->mode > K3_NAV_RINGACC_RING_MODE_QM ||
+	    cfg->size & ~KNAV_RINGACC_CFG_RING_SIZE_ELCNT_MASK ||
+	    !test_bit(ring->ring_id, ringacc->rings_inuse))
+		return -EINVAL;
+
+	if (ring->use_count != 1)
+		return 0;
+
+	ring->size = cfg->size;
+	ring->elm_size = cfg->elm_size;
+	ring->mode = cfg->mode;
+	ring->occ = 0;
+	ring->free = 0;
+	ring->rindex = 0;
+	ring->windex = 0;
+
+	if (ring->proxy_id != K3_RINGACC_PROXY_NOT_USED)
+		ring->proxy = ringacc->proxy_target_base +
+			      ring->proxy_id * K3_RINGACC_PROXY_TARGET_STEP;
+
+	switch (ring->mode) {
+	case K3_NAV_RINGACC_RING_MODE_RING:
+		ring->ops = &k3_nav_mode_ring_ops;
+		break;
+	case K3_NAV_RINGACC_RING_MODE_QM:
+		/*
+		 * In Queue mode elm_size can be 8 only and each operation
+		 * uses 2 element slots
+		 */
+		if (cfg->elm_size != K3_NAV_RINGACC_RING_ELSIZE_8 ||
+		    cfg->size % 2)
+			goto err_free_proxy;
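+		/* fall through - QM mode reuses the message mode ops */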
+	case K3_NAV_RINGACC_RING_MODE_MESSAGE:
+		if (ring->proxy)
+			ring->ops = &k3_nav_mode_proxy_ops;
+		else
+			ring->ops = &k3_nav_mode_msg_ops;
+		break;
+	default:
+		ring->ops = NULL;
+		ret = -EINVAL;
+		goto err_free_proxy;
+	}
+
+	ring->ring_mem_virt =
+			dma_zalloc_coherent(ringacc->dev,
+					    ring->size * (4 << ring->elm_size),
+					    &ring->ring_mem_dma, GFP_KERNEL);
+	if (!ring->ring_mem_virt) {
+		dev_err(ringacc->dev, "Failed to alloc ring mem\n");
+		ret = -ENOMEM;
+		goto err_free_ops;
+	}
+
+	ret = k3_nav_ringacc_ring_cfg_sci(ring);
+
+	if (ret)
+		goto err_free_mem;
+
+	ring->flags |= KNAV_RING_FLAG_BUSY;
+	ring->flags |= (cfg->flags & K3_NAV_RINGACC_RING_SHARED) ?
+			K3_NAV_RING_FLAG_SHARED : 0;
+
+	return 0;
+
+err_free_mem:
+	dma_free_coherent(ringacc->dev,
+			  ring->size * (4 << ring->elm_size),
+			  ring->ring_mem_virt,
+			  ring->ring_mem_dma);
+err_free_ops:
+	ring->ops = NULL;
+err_free_proxy:
+	ring->proxy = NULL;
+	return ret;
+}
+
+u32 k3_nav_ringacc_ring_get_size(struct k3_nav_ring *ring)
+{
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	return ring->size;
+}
+
+u32 k3_nav_ringacc_ring_get_free(struct k3_nav_ring *ring)
+{
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	if (!ring->free)
+		ring->free = ring->size - ringacc_readl(&ring->rt->occ);
+
+	return ring->free;
+}
+
+u32 k3_nav_ringacc_ring_get_occ(struct k3_nav_ring *ring)
+{
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	return ringacc_readl(&ring->rt->occ);
+}
+
+u32 k3_nav_ringacc_ring_is_full(struct k3_nav_ring *ring)
+{
+	return !k3_nav_ringacc_ring_get_free(ring);
+}
+
+enum k3_ringacc_access_mode {
+	K3_RINGACC_ACCESS_MODE_PUSH_HEAD,
+	K3_RINGACC_ACCESS_MODE_POP_HEAD,
+	K3_RINGACC_ACCESS_MODE_PUSH_TAIL,
+	K3_RINGACC_ACCESS_MODE_POP_TAIL,
+	K3_RINGACC_ACCESS_MODE_PEEK_HEAD,
+	K3_RINGACC_ACCESS_MODE_PEEK_TAIL,
+};
+
+static int k3_ringacc_ring_cfg_proxy(struct k3_nav_ring *ring,
+				     enum k3_ringacc_proxy_access_mode mode)
+{
+	u32 val;
+
+	val = ring->ring_id;
+	val |= mode << 16;
+	val |= ring->elm_size << 24;
+	ringacc_writel(val, &ring->proxy->control);
+	return 0;
+}
+
+static int k3_nav_ringacc_ring_access_proxy(
+			struct k3_nav_ring *ring, void *elem,
+			enum k3_ringacc_access_mode access_mode)
+{
+	void __iomem *ptr;
+
+	ptr = (void __iomem *)&ring->proxy->data;
+
+	switch (access_mode) {
+	case K3_RINGACC_ACCESS_MODE_PUSH_HEAD:
+	case K3_RINGACC_ACCESS_MODE_POP_HEAD:
+		k3_ringacc_ring_cfg_proxy(ring, PROXY_ACCESS_MODE_HEAD);
+		break;
+	case K3_RINGACC_ACCESS_MODE_PUSH_TAIL:
+	case K3_RINGACC_ACCESS_MODE_POP_TAIL:
+		k3_ringacc_ring_cfg_proxy(ring, PROXY_ACCESS_MODE_TAIL);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	ptr += k3_nav_ringacc_ring_get_fifo_pos(ring);
+
+	switch (access_mode) {
+	case K3_RINGACC_ACCESS_MODE_POP_HEAD:
+	case K3_RINGACC_ACCESS_MODE_POP_TAIL:
+		pr_debug("proxy:memcpy_fromio(x): --> ptr(%p), mode:%d\n",
+			 ptr, access_mode);
+		memcpy_fromio(elem, ptr, (4 << ring->elm_size));
+		ring->occ--;
+		break;
+	case K3_RINGACC_ACCESS_MODE_PUSH_TAIL:
+	case K3_RINGACC_ACCESS_MODE_PUSH_HEAD:
+		pr_debug("proxy:memcpy_toio(x): --> ptr(%p), mode:%d\n",
+			 ptr, access_mode);
+		memcpy_toio(ptr, elem, (4 << ring->elm_size));
+		ring->free--;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	pr_debug("proxy: free%d occ%d\n",
+		 ring->free, ring->occ);
+	return 0;
+}
+
+static int k3_ringacc_ring_push_head_proxy(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_proxy(
+			ring, elem, K3_RINGACC_ACCESS_MODE_PUSH_HEAD);
+}
+
+static int k3_ringacc_ring_push_tail_proxy(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_proxy(
+			ring, elem, K3_RINGACC_ACCESS_MODE_PUSH_TAIL);
+}
+
+static int k3_ringacc_ring_pop_head_proxy(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_proxy(
+			ring, elem, K3_RINGACC_ACCESS_MODE_POP_HEAD);
+}
+
+static int k3_ringacc_ring_pop_tail_proxy(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_proxy(
+			ring, elem, K3_RINGACC_ACCESS_MODE_POP_TAIL);
+}
+
+static int k3_nav_ringacc_ring_access_io(
+		struct k3_nav_ring *ring, void *elem,
+		enum k3_ringacc_access_mode access_mode)
+{
+	void __iomem *ptr;
+
+	switch (access_mode) {
+	case K3_RINGACC_ACCESS_MODE_PUSH_HEAD:
+	case K3_RINGACC_ACCESS_MODE_POP_HEAD:
+		ptr = (void __iomem *)&ring->fifos->head_data;
+		break;
+	case K3_RINGACC_ACCESS_MODE_PUSH_TAIL:
+	case K3_RINGACC_ACCESS_MODE_POP_TAIL:
+		ptr = (void __iomem *)&ring->fifos->tail_data;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	ptr += k3_nav_ringacc_ring_get_fifo_pos(ring);
+
+	switch (access_mode) {
+	case K3_RINGACC_ACCESS_MODE_POP_HEAD:
+	case K3_RINGACC_ACCESS_MODE_POP_TAIL:
+		pr_debug("memcpy_fromio(x): --> ptr(%p), mode:%d\n",
+			 ptr, access_mode);
+		memcpy_fromio(elem, ptr, (4 << ring->elm_size));
+		ring->occ--;
+		break;
+	case K3_RINGACC_ACCESS_MODE_PUSH_TAIL:
+	case K3_RINGACC_ACCESS_MODE_PUSH_HEAD:
+		pr_debug("memcpy_toio(x): --> ptr(%p), mode:%d\n",
+			 ptr, access_mode);
+		memcpy_toio(ptr, elem, (4 << ring->elm_size));
+		ring->free--;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	pr_debug("free%d index%d occ%d index%d\n",
+		 ring->free, ring->windex, ring->occ, ring->rindex);
+	return 0;
+}
+
+static int k3_nav_ringacc_ring_push_head_io(struct k3_nav_ring *ring,
+					    void *elem)
+{
+	return k3_nav_ringacc_ring_access_io(
+			ring, elem, K3_RINGACC_ACCESS_MODE_PUSH_HEAD);
+}
+
+static int k3_nav_ringacc_ring_push_io(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_io(
+			ring, elem, K3_RINGACC_ACCESS_MODE_PUSH_TAIL);
+}
+
+static int k3_nav_ringacc_ring_pop_io(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_io(
+			ring, elem, K3_RINGACC_ACCESS_MODE_POP_HEAD);
+}
+
+static int k3_nav_ringacc_ring_pop_tail_io(struct k3_nav_ring *ring, void *elem)
+{
+	return k3_nav_ringacc_ring_access_io(
+			ring, elem, K3_RINGACC_ACCESS_MODE_POP_TAIL);
+}
+
+static int k3_nav_ringacc_ring_push_mem(struct k3_nav_ring *ring, void *elem)
+{
+	void *elem_ptr;
+
+	elem_ptr = k3_nav_ringacc_get_elm_addr(ring, ring->windex);
+
+	memcpy(elem_ptr, elem, (4 << ring->elm_size));
+
+	ring->windex = (ring->windex + 1) % ring->size;
+	ring->free--;
+	ringacc_writel(1, &ring->rt->db);
+
+	pr_debug("ring_push_mem: free%d index%d\n",
+		 ring->free, ring->windex);
+
+	return 0;
+}
+
+static int k3_nav_ringacc_ring_pop_mem(struct k3_nav_ring *ring, void *elem)
+{
+	void *elem_ptr;
+
+	elem_ptr = k3_nav_ringacc_get_elm_addr(ring, ring->rindex);
+
+	memcpy(elem, elem_ptr, (4 << ring->elm_size));
+
+	ring->rindex = (ring->rindex + 1) % ring->size;
+	ring->occ--;
+	ringacc_writel(-1, &ring->rt->db);
+
+	pr_debug("ring_pop_mem: occ%d index%d pos_ptr%p\n",
+		 ring->occ, ring->rindex, elem_ptr);
+	return 0;
+}
+
+int k3_nav_ringacc_ring_push(struct k3_nav_ring *ring, void *elem)
+{
+	int ret = -EOPNOTSUPP;
+
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	pr_debug("ring_push%d: free%d index%d\n",
+		 ring->ring_id, ring->free, ring->windex);
+
+	if (k3_nav_ringacc_ring_is_full(ring))
+		return -ENOMEM;
+
+	if (ring->ops && ring->ops->push_tail)
+		ret = ring->ops->push_tail(ring, elem);
+
+	return ret;
+}
+
+int k3_nav_ringacc_ring_push_head(struct k3_nav_ring *ring, void *elem)
+{
+	int ret = -EOPNOTSUPP;
+
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	pr_debug("ring_push_head: free%d index%d\n",
+		 ring->free, ring->windex);
+
+	if (k3_nav_ringacc_ring_is_full(ring))
+		return -ENOMEM;
+
+	if (ring->ops && ring->ops->push_head)
+		ret = ring->ops->push_head(ring, elem);
+
+	return ret;
+}
+
+int k3_nav_ringacc_ring_pop(struct k3_nav_ring *ring, void *elem)
+{
+	int ret = -EOPNOTSUPP;
+
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	if (!ring->occ)
+		ring->occ = k3_nav_ringacc_ring_get_occ(ring);
+
+	pr_debug("ring_pop%d: occ%d index%d\n",
+		 ring->ring_id, ring->occ, ring->rindex);
+
+	if (!ring->occ)
+		return -ENODATA;
+
+	if (ring->ops && ring->ops->pop_head)
+		ret = ring->ops->pop_head(ring, elem);
+
+	return ret;
+}
+
+int k3_nav_ringacc_ring_pop_tail(struct k3_nav_ring *ring, void *elem)
+{
+	int ret = -EOPNOTSUPP;
+
+	if (!ring || !(ring->flags & KNAV_RING_FLAG_BUSY))
+		return -EINVAL;
+
+	if (!ring->occ)
+		ring->occ = k3_nav_ringacc_ring_get_occ(ring);
+
+	pr_debug("ring_pop_tail: occ%d index%d\n",
+		 ring->occ, ring->rindex);
+
+	if (!ring->occ)
+		return -ENODATA;
+
+	if (ring->ops && ring->ops->pop_tail)
+		ret = ring->ops->pop_tail(ring, elem);
+
+	return ret;
+}
+
+static int k3_nav_ringacc_probe_dt(struct k3_nav_ringacc *ringacc)
+{
+	struct udevice *dev = ringacc->dev;
+	struct udevice *tisci_dev = NULL;
+	int ret;
+
+	ringacc->num_rings = dev_read_u32_default(dev, "ti,num-rings", 0);
+	if (!ringacc->num_rings) {
+		dev_err(dev, "ti,num-rings read failure %d\n", ret);
+		return -EINVAL;
+	}
+
+	ringacc->dma_ring_reset_quirk =
+			dev_read_bool(dev, "ti,dma-ring-reset-quirk");
+
+	ret = uclass_get_device_by_name(UCLASS_FIRMWARE, "dmsc", &tisci_dev);
+	if (ret) {
+		pr_debug("TISCI RA RM get failed (%d)\n", ret);
+		ringacc->tisci = NULL;
+		return -ENODEV;
+	}
+	ringacc->tisci = (struct ti_sci_handle *)
+			 (ti_sci_get_handle_from_sysfw(tisci_dev));
+
+	ret = dev_read_u32_default(dev, "ti,sci", 0);
+	if (!ret) {
+		dev_err(dev, "TISCI RA RM disabled\n");
+		ringacc->tisci = NULL;
+		return -ENODEV;
+	}
+
+	ret = dev_read_u32(dev, "ti,sci-dev-id", &ringacc->tisci_dev_id);
+	if (ret) {
+		dev_err(dev, "ti,sci-dev-id read failure %d\n", ret);
+		ringacc->tisci = NULL;
+		return ret;
+	}
+
+	ringacc->rm_gp_range = devm_ti_sci_get_of_resource(
+					ringacc->tisci, dev,
+					ringacc->tisci_dev_id,
+					"ti,sci-rm-range-gp-rings");
+	if (IS_ERR(ringacc->rm_gp_range))
+		return PTR_ERR(ringacc->rm_gp_range);
+
+	return 0;
+}
+
+static int k3_nav_ringacc_probe(struct udevice *dev)
+{
+	struct k3_nav_ringacc *ringacc;
+	void __iomem *base_fifo, *base_rt;
+	int ret, i;
+
+	ringacc = dev_get_priv(dev);
+	if (!ringacc)
+		return -ENOMEM;
+
+	ringacc->dev = dev;
+
+	ret = k3_nav_ringacc_probe_dt(ringacc);
+	if (ret)
+		return ret;
+
+	base_rt = (uint32_t *)devfdt_get_addr_name(dev, "rt");
+	pr_debug("rt %p\n", base_rt);
+	if (IS_ERR(base_rt))
+		return PTR_ERR(base_rt);
+
+	base_fifo = (uint32_t *)devfdt_get_addr_name(dev, "fifos");
+	pr_debug("fifos %p\n", base_fifo);
+	if (IS_ERR(base_fifo))
+		return PTR_ERR(base_fifo);
+
+	ringacc->proxy_gcfg = (struct k3_ringacc_proxy_gcfg_regs __iomem *)
+		devfdt_get_addr_name(dev, "proxy_gcfg");
+	if (IS_ERR(ringacc->proxy_gcfg))
+		return PTR_ERR(ringacc->proxy_gcfg);
+	ringacc->proxy_target_base =
+		(void __iomem *)
+		devfdt_get_addr_name(dev, "proxy_target");
+	if (IS_ERR(ringacc->proxy_target_base))
+		return PTR_ERR(ringacc->proxy_target_base);
+
+	ringacc->num_proxies = ringacc_readl(&ringacc->proxy_gcfg->config) &
+					 K3_RINGACC_PROXY_CFG_THREADS_MASK;
+
+	ringacc->rings = devm_kzalloc(dev,
+				      sizeof(*ringacc->rings) *
+				      ringacc->num_rings,
+				      GFP_KERNEL);
+	ringacc->rings_inuse = devm_kcalloc(dev,
+					    BITS_TO_LONGS(ringacc->num_rings),
+					    sizeof(unsigned long), GFP_KERNEL);
+	ringacc->proxy_inuse = devm_kcalloc(dev,
+					    BITS_TO_LONGS(ringacc->num_proxies),
+					    sizeof(unsigned long), GFP_KERNEL);
+
+	if (!ringacc->rings || !ringacc->rings_inuse || !ringacc->proxy_inuse)
+		return -ENOMEM;
+
+	for (i = 0; i < ringacc->num_rings; i++) {
+		ringacc->rings[i].rt = base_rt +
+				       KNAV_RINGACC_RT_REGS_STEP * i;
+		ringacc->rings[i].fifos = base_fifo +
+					  KNAV_RINGACC_FIFO_REGS_STEP * i;
+		ringacc->rings[i].parent = ringacc;
+		ringacc->rings[i].ring_id = i;
+		ringacc->rings[i].proxy_id = K3_RINGACC_PROXY_NOT_USED;
+	}
+	dev_set_drvdata(dev, ringacc);
+
+	ringacc->tisci_ring_ops = &ringacc->tisci->ops.rm_ring_ops;
+
+	list_add_tail(&ringacc->list, &k3_nav_ringacc_list);
+
+	dev_info(dev, "Ring Accelerator probed rings:%u, gp-rings[%u,%u] sci-dev-id:%u\n",
+		 ringacc->num_rings,
+		 ringacc->rm_gp_range->desc[0].start,
+		 ringacc->rm_gp_range->desc[0].num,
+		 ringacc->tisci_dev_id);
+	dev_info(dev, "dma-ring-reset-quirk: %s\n",
+		 ringacc->dma_ring_reset_quirk ? "enabled" : "disabled");
+	dev_info(dev, "RA Proxy rev. %08x, num_proxies:%u\n",
+		 ringacc_readl(&ringacc->proxy_gcfg->revision),
+		 ringacc->num_proxies);
+	return 0;
+}
+
+static const struct udevice_id knav_ringacc_ids[] = {
+	{ .compatible = "ti,am654-navss-ringacc" },
+	{},
+};
+
+U_BOOT_DRIVER(k3_navss_ringacc) = {
+	.name	= "k3-navss-ringacc",
+	.id	= UCLASS_MISC,
+	.of_match = knav_ringacc_ids,
+	.probe = k3_nav_ringacc_probe,
+	.priv_auto_alloc_size = sizeof(struct k3_nav_ringacc),
+};
diff --git a/include/linux/soc/ti/k3-navss-ringacc.h b/include/linux/soc/ti/k3-navss-ringacc.h
new file mode 100644
index 000000000000..487dfe985957
--- /dev/null
+++ b/include/linux/soc/ti/k3-navss-ringacc.h
@@ -0,0 +1,236 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * TI K3 AM65x NAVSS Ring accelerator Manager (RA) subsystem driver
+ *
+ * Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ */
+
+#ifndef __SOC_TI_K3_NAVSS_RINGACC_API_H_
+#define __SOC_TI_K3_NAVSS_RINGACC_API_H_
+
+#include <dm/ofnode.h>
+
+/**
+ * enum k3_nav_ring_mode - &struct k3_nav_ring_cfg mode
+ *
+ * RA ring operational modes
+ *
+ * @K3_NAV_RINGACC_RING_MODE_RING: Exposed Ring mode for SW direct access
+ * @K3_NAV_RINGACC_RING_MODE_MESSAGE: Messaging mode. Messaging mode requires
+ *	that all accesses to the queue must go through this IP so that all
+ *	accesses to the memory are controlled and ordered. This IP then
+ *	controls the entire state of the queue, and SW has no direct control
+ *	(such as via doorbells) and cannot access the storage memory directly.
+ *	This is particularly useful when more than one SW or HW entity can be
+ *	the producer and/or consumer at the same time
+ * @K3_NAV_RINGACC_RING_MODE_CREDENTIALS: Credentials mode is message mode plus
+ *	stores credentials with each message, requiring the element size to be
+ *	doubled to fit the credentials. Any exposed memory should be protected
+ *	by a firewall from unwanted access
+ * @K3_NAV_RINGACC_RING_MODE_QM:  Queue manager mode. This takes the credentials
+ *	mode and adds packet length per element, along with additional read only
+ *	fields for element count and accumulated queue length. The QM mode only
+ *	operates with an 8 byte element size (any other element size is
+ *	illegal), and like in credentials mode each operation uses 2 element
+ *	slots to store the credentials and length fields
+ */
+enum k3_nav_ring_mode {
+	K3_NAV_RINGACC_RING_MODE_RING = 0,
+	K3_NAV_RINGACC_RING_MODE_MESSAGE,
+	K3_NAV_RINGACC_RING_MODE_CREDENTIALS,
+	K3_NAV_RINGACC_RING_MODE_QM,
+	K3_NAV_RINGACC_RING_MODE_INVALID
+};
+
+/**
+ * enum k3_nav_ring_size - &struct k3_nav_ring_cfg elm_size
+ *
+ * RA ring element's sizes in bytes.
+ */
+enum k3_nav_ring_size {
+	K3_NAV_RINGACC_RING_ELSIZE_4 = 0,
+	K3_NAV_RINGACC_RING_ELSIZE_8,
+	K3_NAV_RINGACC_RING_ELSIZE_16,
+	K3_NAV_RINGACC_RING_ELSIZE_32,
+	K3_NAV_RINGACC_RING_ELSIZE_64,
+	K3_NAV_RINGACC_RING_ELSIZE_128,
+	K3_NAV_RINGACC_RING_ELSIZE_256,
+	K3_NAV_RINGACC_RING_ELSIZE_INVALID
+};
+
+struct k3_nav_ringacc;
+struct k3_nav_ring;
+
+/**
+ * struct k3_nav_ring_cfg - RA ring configuration structure
+ *
+ * @size: Ring size, number of elements
+ * @elm_size: Ring element size
+ * @mode: Ring operational mode
+ * @flags: Ring configuration flags. Possible values:
+ *	 @K3_NAV_RINGACC_RING_SHARED: when set, allows the same ring to be
+ *	 requested multiple times. This is useful when, for example, the same
+ *	 ring is used as a Free Host PD ring for different flows.
+ *	 Note: Locking should be done by consumer if required
+ */
+struct k3_nav_ring_cfg {
+	u32 size;
+	enum k3_nav_ring_size elm_size;
+	enum k3_nav_ring_mode mode;
+#define K3_NAV_RINGACC_RING_SHARED BIT(1)
+	u32 flags;
+};
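+
+/*
+ * Example (illustrative): a free descriptor ring shared by several RX
+ * flows might be configured as:
+ *
+ *	struct k3_nav_ring_cfg cfg = {
+ *		.size = 1024,
+ *		.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8,
+ *		.mode = K3_NAV_RINGACC_RING_MODE_MESSAGE,
+ *		.flags = K3_NAV_RINGACC_RING_SHARED,
+ *	};
+ */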
+
+#define K3_NAV_RINGACC_RING_ID_ANY (-1)
+#define K3_NAV_RINGACC_RING_USE_PROXY BIT(1)
+
+/**
+ * k3_nav_ringacc_request_ring - request ring from ringacc
+ * @ringacc: pointer on ringacc
+ * @id: ring id or K3_NAV_RINGACC_RING_ID_ANY for any general purpose ring
+ * @flags:
+ *	@K3_NAV_RINGACC_RING_USE_PROXY: if set, a proxy will be allocated and
+ *		used to access ring memory. Supported only for rings in
+ *		Message/Credentials/Queue mode.
+ *
+ * Returns pointer on the Ring - struct k3_nav_ring
+ * or NULL in case of failure.
+ */
+struct k3_nav_ring *k3_nav_ringacc_request_ring(struct k3_nav_ringacc *ringacc,
+						int id, u32 flags);
+
+/**
+ * k3_nav_ringacc_get_dev - get pointer on RA device
+ * @ringacc: pointer on RA
+ *
+ * Returns device pointer
+ */
+struct udevice *k3_nav_ringacc_get_dev(struct k3_nav_ringacc *ringacc);
+
+/**
+ * k3_nav_ringacc_ring_reset - ring reset
+ * @ring: pointer on Ring
+ *
+ * Resets ring internal state ((hw)occ, (hw)idx). The ring can then be
+ * reused without reconfiguration.
+ */
+void k3_nav_ringacc_ring_reset(struct k3_nav_ring *ring);
+/**
+ * k3_nav_ringacc_ring_reset_dma - ring reset for DMA rings
+ * @ring: pointer on Ring
+ *
+ * Resets ring internal state ((hw)occ, (hw)idx). Should be used for rings
+ * which are read by K3 UDMA, like TX or Free Host PD rings.
+ */
+void k3_nav_ringacc_ring_reset_dma(struct k3_nav_ring *ring, u32 occ);
+
+/**
+ * k3_nav_ringacc_ring_free - ring free
+ * @ring: pointer on Ring
+ *
+ * Resets the ring and frees all allocated resources.
+ */
+int k3_nav_ringacc_ring_free(struct k3_nav_ring *ring);
+
+/**
+ * k3_nav_ringacc_get_ring_id - Get the Ring ID
+ * @ring: pointer on ring
+ *
+ * Returns the Ring ID
+ */
+u32 k3_nav_ringacc_get_ring_id(struct k3_nav_ring *ring);
+
+/**
+ * k3_nav_ringacc_ring_cfg - ring configure
+ * @ring: pointer on ring
+ * @cfg: Ring configuration parameters (see &struct k3_nav_ring_cfg)
+ *
+ * Configures ring, including ring memory allocation.
+ * Returns 0 on success, errno otherwise.
+ */
+int k3_nav_ringacc_ring_cfg(struct k3_nav_ring *ring,
+			    struct k3_nav_ring_cfg *cfg);
+
+/**
+ * k3_nav_ringacc_ring_get_size - get ring size
+ * @ring: pointer on ring
+ *
+ * Returns ring size in number of elements.
+ */
+u32 k3_nav_ringacc_ring_get_size(struct k3_nav_ring *ring);
+
+/**
+ * k3_nav_ringacc_ring_get_free - get free elements
+ * @ring: pointer on ring
+ *
+ * Returns number of free elements in the ring.
+ */
+u32 k3_nav_ringacc_ring_get_free(struct k3_nav_ring *ring);
+
+/**
+ * k3_nav_ringacc_ring_get_occ - get ring occupancy
+ * @ring: pointer on ring
+ *
+ * Returns total number of valid entries on the ring
+ */
+u32 k3_nav_ringacc_ring_get_occ(struct k3_nav_ring *ring);
+
+/**
+ * k3_nav_ringacc_ring_is_full - checks if ring is full
+ * @ring: pointer on ring
+ *
+ * Returns true if the ring is full
+ */
+u32 k3_nav_ringacc_ring_is_full(struct k3_nav_ring *ring);
+
+/**
+ * k3_nav_ringacc_ring_push - push element to the ring tail
+ * @ring: pointer on ring
+ * @elem: pointer on ring element buffer
+ *
+ * Push one ring element to the ring tail. Size of the ring element is
+ * determined by ring configuration &struct k3_nav_ring_cfg elm_size.
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int k3_nav_ringacc_ring_push(struct k3_nav_ring *ring, void *elem);
+
+/**
+ * k3_nav_ringacc_ring_pop - pop element from the ring head
+ * @ring: pointer to the ring
+ * @elem: pointer to the ring element buffer
+ *
+ * Pop one ring element from the ring head. Size of the ring element is
+ * determined by ring configuration &struct k3_nav_ring_cfg elm_size.
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int k3_nav_ringacc_ring_pop(struct k3_nav_ring *ring, void *elem);
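+
+/*
+ * Editorial usage sketch, not part of this patch: the typical TX
+ * completion pattern is to push a descriptor address to the transfer
+ * ring and poll the completion ring until the UDMA-P pushes it back:
+ *
+ *	dma_addr_t paddr;
+ *
+ *	k3_nav_ringacc_ring_push(t_ring, &desc_dma);
+ *	while (k3_nav_ringacc_ring_pop(tc_ring, &paddr))
+ *		udelay(1);
+ */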
+
+/**
+ * k3_nav_ringacc_ring_push_head - push element to the ring head
+ * @ring: pointer to the ring
+ * @elem: pointer to the ring element buffer
+ *
+ * Push one ring element to the ring head. Size of the ring element is
+ * determined by ring configuration &struct k3_nav_ring_cfg elm_size.
+ *
+ * Returns 0 on success, errno otherwise.
+ * Not supported in ring mode K3_NAV_RINGACC_RING_MODE_RING.
+ */
+int k3_nav_ringacc_ring_push_head(struct k3_nav_ring *ring, void *elem);
+
+/**
+ * k3_nav_ringacc_ring_pop_tail - pop element from the ring tail
+ * @ring: pointer to the ring
+ * @elem: pointer to the ring element buffer
+ *
+ * Pop one ring element from the ring tail. Size of the ring element is
+ * determined by ring configuration &struct k3_nav_ring_cfg elm_size.
+ *
+ * Returns 0 on success, errno otherwise.
+ * Not supported in ring mode K3_NAV_RINGACC_RING_MODE_RING.
+ */
+int k3_nav_ringacc_ring_pop_tail(struct k3_nav_ring *ring, void *elem);
+
+#endif /* __SOC_TI_K3_NAVSS_RINGACC_API_H_ */
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [U-Boot] [PATCH v4 3/7] soc: ti: k3: add CPPI5 description and helpers
  2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 1/7] firmware: ti_sci: Add support for NAVSS resource management Vignesh R
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 2/7] soc: ti: k3: add navss ringacc driver Vignesh R
@ 2019-02-05 12:01 ` Vignesh R
  2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA Vignesh R
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

From: Grygorii Strashko <grygorii.strashko@ti.com>

Add TI Communications Port Programming Interface (CPPI) 5
interface description and helpers

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
Reviewed-by: Tom Rini <trini@konsulko.com>

---
 include/linux/soc/ti/cppi5.h | 995 +++++++++++++++++++++++++++++++++++
 1 file changed, 995 insertions(+)
 create mode 100644 include/linux/soc/ti/cppi5.h

diff --git a/include/linux/soc/ti/cppi5.h b/include/linux/soc/ti/cppi5.h
new file mode 100644
index 000000000000..34038b31f702
--- /dev/null
+++ b/include/linux/soc/ti/cppi5.h
@@ -0,0 +1,995 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * CPPI5 descriptors interface
+ *
+ * Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ */
+
+#ifndef __TI_CPPI5_H__
+#define __TI_CPPI5_H__
+
+#include <hexdump.h>
+#include <linux/bitops.h>
+
+/**
+ * Descriptor header, present in all types of descriptors
+ */
+struct cppi5_desc_hdr_t {
+	u32 pkt_info0;	/* Packet info word 0 (n/a in Buffer desc) */
+	u32 pkt_info1;	/* Packet info word 1 (n/a in Buffer desc) */
+	u32 pkt_info2;	/* Packet info word 2 Buffer reclamation info */
+	u32 src_dst_tag; /* Packet info word 3 (n/a in Buffer desc) */
+} __packed;
+
+/**
+ * Host-mode packet and buffer descriptor definition
+ */
+struct cppi5_host_desc_t {
+	struct cppi5_desc_hdr_t hdr;
+	u64 next_desc;	/* w4/5: Linking word */
+	u64 buf_ptr;	/* w6/7: Buffer pointer */
+	u32 buf_info1;	/* w8: Buffer valid data length */
+	u32 org_buf_len; /* w9: Original buffer length */
+	u64 org_buf_ptr; /* w10/11: Original buffer pointer */
+	u32 epib[0];	/* Extended Packet Info Data (optional, 4 words) */
+	/*
+	 * Protocol Specific Data (optional, 0-128 bytes in multiples of 4),
+	 * and/or Other Software Data (0-N bytes, optional)
+	 */
+} __packed;
+
+#define CPPI5_DESC_MIN_ALIGN			(16U)
+
+#define CPPI5_INFO0_HDESC_EPIB_SIZE		(16U)
+#define CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE	(128U)
+
+#define CPPI5_INFO0_HDESC_TYPE_SHIFT		(30U)
+#define CPPI5_INFO0_HDESC_TYPE_MASK		GENMASK(31, 30)
+#define   CPPI5_INFO0_DESC_TYPE_VAL_HOST	(1U)
+#define   CPPI5_INFO0_DESC_TYPE_VAL_MONO	(2U)
+#define   CPPI5_INFO0_DESC_TYPE_VAL_TR		(3U)
+#define CPPI5_INFO0_HDESC_EPIB_PRESENT		BIT(29)
+/*
+ * Protocol Specific Words location:
+ * 0 - located in the descriptor,
+ * 1 = located in the SOP Buffer immediately prior to the data.
+ */
+#define CPPI5_INFO0_HDESC_PSINFO_LOCATION	BIT(28)
+#define CPPI5_INFO0_HDESC_PSINFO_SIZE_SHIFT	(22U)
+#define CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK	GENMASK(27, 22)
+#define CPPI5_INFO0_HDESC_PKTLEN_SHIFT		(0)
+#define CPPI5_INFO0_HDESC_PKTLEN_MASK		GENMASK(21, 0)
+
+#define CPPI5_INFO1_DESC_PKTERROR_SHIFT		(28U)
+#define CPPI5_INFO1_DESC_PKTERROR_MASK		GENMASK(31, 28)
+#define CPPI5_INFO1_HDESC_PSFLGS_SHIFT		(24U)
+#define CPPI5_INFO1_HDESC_PSFLGS_MASK		GENMASK(27, 24)
+#define CPPI5_INFO1_DESC_PKTID_SHIFT		(14U)
+#define CPPI5_INFO1_DESC_PKTID_MASK		GENMASK(23, 14)
+#define CPPI5_INFO1_DESC_FLOWID_SHIFT		(0)
+#define CPPI5_INFO1_DESC_FLOWID_MASK		GENMASK(13, 0)
+
+#define CPPI5_INFO2_HDESC_PKTTYPE_SHIFT		(27U)
+#define CPPI5_INFO2_HDESC_PKTTYPE_MASK		GENMASK(31, 27)
+/* Return Policy: 0 = Entire packet, 1 = Each buffer */
+#define CPPI5_INFO2_HDESC_RETPOLICY		BIT(18)
+/*
+ * Early Return:
+ * 0 = desc pointers should be returned after all reads have been completed
+ * 1 = desc pointers should be returned immediately upon fetching
+ * the descriptor and beginning to transfer data.
+ */
+#define CPPI5_INFO2_HDESC_EARLYRET		BIT(17)
+/*
+ * Return Push Policy:
+ * 0 = Descriptor must be returned to tail of queue
+ * 1 = Descriptor must be returned to head of queue
+ */
+#define CPPI5_INFO2_DESC_RETPUSHPOLICY		BIT(16)
+#define CPPI5_INFO2_DESC_RETQ_SHIFT		(0)
+#define CPPI5_INFO2_DESC_RETQ_MASK		GENMASK(15, 0)
+
+#define CPPI5_INFO3_DESC_SRCTAG_SHIFT		(16U)
+#define CPPI5_INFO3_DESC_SRCTAG_MASK		GENMASK(31, 16)
+#define CPPI5_INFO3_DESC_DSTTAG_SHIFT		(0)
+#define CPPI5_INFO3_DESC_DSTTAG_MASK		GENMASK(15, 0)
+
+#define CPPI5_BUFINFO1_HDESC_DATA_LEN_SHIFT	(0)
+#define CPPI5_BUFINFO1_HDESC_DATA_LEN_MASK	GENMASK(27, 0)
+
+#define CPPI5_OBUFINFO0_HDESC_BUF_LEN_SHIFT	(0)
+#define CPPI5_OBUFINFO0_HDESC_BUF_LEN_MASK	GENMASK(27, 0)
+
+/*
+ * Host Packet Descriptor Extended Packet Info Block
+ */
+struct cppi5_desc_epib_t {
+	u32 timestamp;	/* w0: application specific timestamp */
+	u32 sw_info0;	/* w1: Software Info 0 */
+	u32 sw_info1;	/* w2: Software Info 1 */
+	u32 sw_info2;	/* w3: Software Info 2 */
+};
+
+/**
+ * Monolithic-mode packet descriptor
+ */
+struct cppi5_monolithic_desc_t {
+	struct cppi5_desc_hdr_t hdr;
+	u32 epib[0];	/* Extended Packet Info Data (optional, 4 words) */
+	/*
+	 * Protocol Specific Data (optional, 0-128 bytes in multiples of 4),
+	 *  and/or Other Software Data (0-N bytes, optional)
+	 */
+};
+
+#define CPPI5_INFO2_MDESC_DATA_OFFSET_SHIFT	(18U)
+#define CPPI5_INFO2_MDESC_DATA_OFFSET_MASK	GENMASK(26, 18)
+
+/*
+ * Reload Enable:
+ * 0 = Finish the packet and place the descriptor back on the return queue
+ * 1 = Vector to the Reload Index and resume processing
+ */
+#define CPPI5_INFO0_TRDESC_RLDCNT_SHIFT		(20U)
+#define CPPI5_INFO0_TRDESC_RLDCNT_MASK		GENMASK(28, 20)
+#define CPPI5_INFO0_TRDESC_RLDCNT_MAX		(0x1ff)
+#define CPPI5_INFO0_TRDESC_RLDCNT_INFINITE	CPPI5_INFO0_TRDESC_RLDCNT_MAX
+#define CPPI5_INFO0_TRDESC_RLDIDX_SHIFT		(14U)
+#define CPPI5_INFO0_TRDESC_RLDIDX_MASK		GENMASK(19, 14)
+#define CPPI5_INFO0_TRDESC_RLDIDX_MAX		(0x3f)
+#define CPPI5_INFO0_TRDESC_LASTIDX_SHIFT	(0)
+#define CPPI5_INFO0_TRDESC_LASTIDX_MASK		GENMASK(13, 0)
+
+#define CPPI5_INFO1_TRDESC_RECSIZE_SHIFT	(24U)
+#define CPPI5_INFO1_TRDESC_RECSIZE_MASK		GENMASK(26, 24)
+#define   CPPI5_INFO1_TRDESC_RECSIZE_VAL_16B	(0)
+#define   CPPI5_INFO1_TRDESC_RECSIZE_VAL_32B	(1U)
+#define   CPPI5_INFO1_TRDESC_RECSIZE_VAL_64B	(2U)
+#define   CPPI5_INFO1_TRDESC_RECSIZE_VAL_128B	(3U)
+
+static inline void cppi5_desc_dump(void *desc, u32 size)
+{
+	print_hex_dump(KERN_ERR "dump udmap_desc: ", DUMP_PREFIX_NONE,
+		       32, 4, desc, size, false);
+}
+
+/**
+ * cppi5_desc_get_type - get descriptor type
+ * @desc_hdr: packet descriptor/TR header
+ *
+ * Returns descriptor type:
+ * CPPI5_INFO0_DESC_TYPE_VAL_HOST
+ * CPPI5_INFO0_DESC_TYPE_VAL_MONO
+ * CPPI5_INFO0_DESC_TYPE_VAL_TR
+ */
+static inline u32 cppi5_desc_get_type(struct cppi5_desc_hdr_t *desc_hdr)
+{
+	WARN_ON(!desc_hdr);
+
+	return (desc_hdr->pkt_info0 & CPPI5_INFO0_HDESC_TYPE_MASK) >>
+		CPPI5_INFO0_HDESC_TYPE_SHIFT;
+}
+
+/**
+ * cppi5_desc_get_errflags - get Error Flags from Desc
+ * @desc_hdr: packet/TR descriptor header
+ *
+ * Returns Error Flags from Packet/TR Descriptor
+ */
+static inline u32 cppi5_desc_get_errflags(struct cppi5_desc_hdr_t *desc_hdr)
+{
+	WARN_ON(!desc_hdr);
+
+	return (desc_hdr->pkt_info1 & CPPI5_INFO1_DESC_PKTERROR_MASK) >>
+		CPPI5_INFO1_DESC_PKTERROR_SHIFT;
+}
+
+/**
+ * cppi5_desc_get_pktids - get Packet and Flow ids from Desc
+ * @desc_hdr: packet/TR descriptor header
+ * @pkt_id: Packet ID
+ * @flow_id: Flow ID
+ *
+ * Returns Packet and Flow ids from packet/TR descriptor
+ */
+static inline void cppi5_desc_get_pktids(struct cppi5_desc_hdr_t *desc_hdr,
+					 u32 *pkt_id, u32 *flow_id)
+{
+	WARN_ON(!desc_hdr);
+
+	*pkt_id = (desc_hdr->pkt_info1 & CPPI5_INFO1_DESC_PKTID_MASK) >>
+		   CPPI5_INFO1_DESC_PKTID_SHIFT;
+	*flow_id = (desc_hdr->pkt_info1 & CPPI5_INFO1_DESC_FLOWID_MASK) >>
+		    CPPI5_INFO1_DESC_FLOWID_SHIFT;
+}
+
+/**
+ * cppi5_desc_set_pktids - set Packet and Flow ids in Desc
+ * @desc_hdr: packet/TR descriptor header
+ * @pkt_id: Packet ID
+ * @flow_id: Flow ID
+ */
+static inline void cppi5_desc_set_pktids(struct cppi5_desc_hdr_t *desc_hdr,
+					 u32 pkt_id, u32 flow_id)
+{
+	WARN_ON(!desc_hdr);
+
+	desc_hdr->pkt_info1 |= (pkt_id << CPPI5_INFO1_DESC_PKTID_SHIFT) &
+				CPPI5_INFO1_DESC_PKTID_MASK;
+	desc_hdr->pkt_info1 |= (flow_id << CPPI5_INFO1_DESC_FLOWID_SHIFT) &
+				CPPI5_INFO1_DESC_FLOWID_MASK;
+}
+
+/**
+ * cppi5_desc_set_retpolicy - set Packet Return Policy in Desc
+ * @desc_hdr: packet/TR descriptor header
+ * @flags: flags, supported values:
+ *  CPPI5_INFO2_HDESC_RETPOLICY
+ *  CPPI5_INFO2_HDESC_EARLYRET
+ *  CPPI5_INFO2_DESC_RETPUSHPOLICY
+ * @return_ring_id: Packet Return Queue/Ring id, value 0xFFFF reserved
+ */
+static inline void cppi5_desc_set_retpolicy(struct cppi5_desc_hdr_t *desc_hdr,
+					    u32 flags, u32 return_ring_id)
+{
+	WARN_ON(!desc_hdr);
+
+	desc_hdr->pkt_info2 |= flags;
+	desc_hdr->pkt_info2 |= return_ring_id & CPPI5_INFO2_DESC_RETQ_MASK;
+}
+
+/**
+ * cppi5_desc_get_tags_ids - get Packet Src/Dst Tags from Desc
+ * @desc_hdr: packet/TR descriptor header
+ * @src_tag_id: Source Tag
+ * @dst_tag_id: Dest Tag
+ *
+ * Returns Packet Src/Dst Tags from packet/TR descriptor
+ */
+static inline void cppi5_desc_get_tags_ids(struct cppi5_desc_hdr_t *desc_hdr,
+					   u32 *src_tag_id, u32 *dst_tag_id)
+{
+	WARN_ON(!desc_hdr);
+
+	if (src_tag_id)
+		*src_tag_id = (desc_hdr->src_dst_tag &
+			      CPPI5_INFO3_DESC_SRCTAG_MASK) >>
+			      CPPI5_INFO3_DESC_SRCTAG_SHIFT;
+	if (dst_tag_id)
+		*dst_tag_id = desc_hdr->src_dst_tag &
+			      CPPI5_INFO3_DESC_DSTTAG_MASK;
+}
+
+/**
+ * cppi5_desc_set_tags_ids - set Packet Src/Dst Tags in HDesc
+ * @desc_hdr: packet/TR descriptor header
+ * @src_tag_id: Source Tag
+ * @dst_tag_id: Dest Tag
+ *
+ * Sets Packet Src/Dst Tags in the packet/TR descriptor.
+ */
+static inline void cppi5_desc_set_tags_ids(struct cppi5_desc_hdr_t *desc_hdr,
+					   u32 src_tag_id, u32 dst_tag_id)
+{
+	WARN_ON(!desc_hdr);
+
+	desc_hdr->src_dst_tag = (src_tag_id << CPPI5_INFO3_DESC_SRCTAG_SHIFT) &
+				CPPI5_INFO3_DESC_SRCTAG_MASK;
+	desc_hdr->src_dst_tag |= dst_tag_id & CPPI5_INFO3_DESC_DSTTAG_MASK;
+}
+
+/**
+ * cppi5_hdesc_calc_size - Calculate Host Packet Descriptor size
+ * @epib: is EPIB present
+ * @psdata_size: PSDATA size
+ * @sw_data_size: SWDATA size
+ *
+ * Returns required Host Packet Descriptor size
+ * 0 - if PSDATA > CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE
+ */
+static inline u32 cppi5_hdesc_calc_size(bool epib, u32 psdata_size,
+					u32 sw_data_size)
+{
+	u32 desc_size;
+
+	if (psdata_size > CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE)
+		return 0;
+	desc_size = sizeof(struct cppi5_host_desc_t) + psdata_size +
+		    sw_data_size;
+
+	if (epib)
+		desc_size += CPPI5_INFO0_HDESC_EPIB_SIZE;
+
+	return ALIGN(desc_size, CPPI5_DESC_MIN_ALIGN);
+}
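+
+/*
+ * Editorial worked example, not part of this patch: with EPIB present,
+ * 16 bytes of PSDATA and 8 bytes of SWDATA the size is
+ * sizeof(struct cppi5_host_desc_t) (48) + 16 + 8 + 16 = 88 bytes,
+ * which ALIGN() rounds up to 96 bytes.
+ */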
+
+/**
+ * cppi5_hdesc_init - Init Host Packet Descriptor
+ * @desc: Host packet descriptor
+ * @flags: supported values
+ *	CPPI5_INFO0_HDESC_EPIB_PRESENT
+ *	CPPI5_INFO0_HDESC_PSINFO_LOCATION
+ * @psdata_size: PSDATA size
+ *
+ * Initializes the Host Packet Descriptor header and clears the linking word.
+ */
+static inline void cppi5_hdesc_init(struct cppi5_host_desc_t *desc, u32 flags,
+				    u32 psdata_size)
+{
+	WARN_ON(!desc);
+	WARN_ON(psdata_size > CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE);
+	WARN_ON(flags & ~(CPPI5_INFO0_HDESC_EPIB_PRESENT |
+			  CPPI5_INFO0_HDESC_PSINFO_LOCATION));
+
+	desc->hdr.pkt_info0 = (CPPI5_INFO0_DESC_TYPE_VAL_HOST <<
+			       CPPI5_INFO0_HDESC_TYPE_SHIFT) | (flags);
+	desc->hdr.pkt_info0 |= ((psdata_size >> 2) <<
+				CPPI5_INFO0_HDESC_PSINFO_SIZE_SHIFT) &
+				CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK;
+	desc->next_desc = 0;
+}
+
+/**
+ * cppi5_hdesc_update_flags - Replace descriptor flags
+ * @desc: Host packet descriptor
+ * @flags: supported values
+ *	CPPI5_INFO0_HDESC_EPIB_PRESENT
+ *	CPPI5_INFO0_HDESC_PSINFO_LOCATION
+ */
+static inline void cppi5_hdesc_update_flags(struct cppi5_host_desc_t *desc,
+					    u32 flags)
+{
+	WARN_ON(!desc);
+	WARN_ON(flags & ~(CPPI5_INFO0_HDESC_EPIB_PRESENT |
+			  CPPI5_INFO0_HDESC_PSINFO_LOCATION));
+
+	desc->hdr.pkt_info0 &= ~(CPPI5_INFO0_HDESC_EPIB_PRESENT |
+				 CPPI5_INFO0_HDESC_PSINFO_LOCATION);
+	desc->hdr.pkt_info0 |= flags;
+}
+
+/**
+ * cppi5_hdesc_update_psdata_size - Replace PSdata size
+ * @desc: Host packet descriptor
+ * @psdata_size: PSDATA size
+ */
+static inline void cppi5_hdesc_update_psdata_size(
+				struct cppi5_host_desc_t *desc, u32 psdata_size)
+{
+	WARN_ON(!desc);
+	WARN_ON(psdata_size > CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE);
+
+	desc->hdr.pkt_info0 &= ~CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK;
+	desc->hdr.pkt_info0 |= ((psdata_size >> 2) <<
+				CPPI5_INFO0_HDESC_PSINFO_SIZE_SHIFT) &
+				CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK;
+}
+
+/**
+ * cppi5_hdesc_get_psdata_size - get PSdata size in bytes
+ * @desc: Host packet descriptor
+ */
+static inline u32 cppi5_hdesc_get_psdata_size(struct cppi5_host_desc_t *desc)
+{
+	u32 psdata_size = 0;
+
+	WARN_ON(!desc);
+
+	if (!(desc->hdr.pkt_info0 & CPPI5_INFO0_HDESC_PSINFO_LOCATION))
+		psdata_size = (desc->hdr.pkt_info0 &
+			       CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK) >>
+			       CPPI5_INFO0_HDESC_PSINFO_SIZE_SHIFT;
+
+	return (psdata_size << 2);
+}
+
+/**
+ * cppi5_hdesc_get_pktlen - get Packet Length from HDesc
+ * @desc: Host packet descriptor
+ *
+ * Returns Packet Length from Host Packet Descriptor
+ */
+static inline u32 cppi5_hdesc_get_pktlen(struct cppi5_host_desc_t *desc)
+{
+	WARN_ON(!desc);
+
+	return (desc->hdr.pkt_info0 & CPPI5_INFO0_HDESC_PKTLEN_MASK);
+}
+
+/**
+ * cppi5_hdesc_set_pktlen - set Packet Length in HDesc
+ * @desc: Host packet descriptor
+ */
+static inline void cppi5_hdesc_set_pktlen(struct cppi5_host_desc_t *desc,
+					  u32 pkt_len)
+{
+	WARN_ON(!desc);
+
+	desc->hdr.pkt_info0 |= (pkt_len & CPPI5_INFO0_HDESC_PKTLEN_MASK);
+}
+
+/**
+ * cppi5_hdesc_get_psflags - get Protocol Specific Flags from HDesc
+ * @desc: Host packet descriptor
+ *
+ * Returns Protocol Specific Flags from Host Packet Descriptor
+ */
+static inline u32 cppi5_hdesc_get_psflags(struct cppi5_host_desc_t *desc)
+{
+	WARN_ON(!desc);
+
+	return (desc->hdr.pkt_info1 & CPPI5_INFO1_HDESC_PSFLGS_MASK) >>
+		CPPI5_INFO1_HDESC_PSFLGS_SHIFT;
+}
+
+/**
+ * cppi5_hdesc_set_psflags - set Protocol Specific Flags in HDesc
+ * @desc: Host packet descriptor
+ */
+static inline void cppi5_hdesc_set_psflags(struct cppi5_host_desc_t *desc,
+					   u32 ps_flags)
+{
+	WARN_ON(!desc);
+
+	desc->hdr.pkt_info1 |= (ps_flags <<
+				CPPI5_INFO1_HDESC_PSFLGS_SHIFT) &
+				CPPI5_INFO1_HDESC_PSFLGS_MASK;
+}
+
+/**
+ * cppi5_hdesc_get_pkttype - get Packet Type from HDesc
+ * @desc: Host packet descriptor
+ */
+static inline u32 cppi5_hdesc_get_pkttype(struct cppi5_host_desc_t *desc)
+{
+	WARN_ON(!desc);
+
+	return (desc->hdr.pkt_info2 & CPPI5_INFO2_HDESC_PKTTYPE_MASK) >>
+		CPPI5_INFO2_HDESC_PKTTYPE_SHIFT;
+}
+
+/**
+ * cppi5_hdesc_set_pkttype - set Packet Type in HDesc
+ * @desc: Host packet descriptor
+ * @pkt_type: Packet Type
+ */
+static inline void cppi5_hdesc_set_pkttype(struct cppi5_host_desc_t *desc,
+					   u32 pkt_type)
+{
+	WARN_ON(!desc);
+	desc->hdr.pkt_info2 |=
+			(pkt_type << CPPI5_INFO2_HDESC_PKTTYPE_SHIFT) &
+			 CPPI5_INFO2_HDESC_PKTTYPE_MASK;
+}
+
+/**
+ * cppi5_hdesc_attach_buf - attach buffer to HDesc
+ * @desc: Host packet descriptor
+ * @buf: Buffer physical address
+ * @buf_data_len: Buffer length
+ * @obuf: Original Buffer physical address
+ * @obuf_len: Original Buffer length
+ *
+ * Attaches buffer to Host Packet Descriptor
+ */
+static inline void cppi5_hdesc_attach_buf(struct cppi5_host_desc_t *desc,
+					  dma_addr_t buf, u32 buf_data_len,
+					  dma_addr_t obuf, u32 obuf_len)
+{
+	WARN_ON(!desc);
+	WARN_ON(!buf && !obuf);
+
+	desc->buf_ptr = buf;
+	desc->buf_info1 = buf_data_len & CPPI5_BUFINFO1_HDESC_DATA_LEN_MASK;
+	desc->org_buf_ptr = obuf;
+	desc->org_buf_len = obuf_len & CPPI5_OBUFINFO0_HDESC_BUF_LEN_MASK;
+}
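+
+/*
+ * Editorial usage sketch, not part of this patch: building a minimal TX
+ * host descriptor with the helpers above. tc_ring_id names the transmit
+ * completion ring the UDMA-P should return the descriptor to:
+ *
+ *	cppi5_hdesc_init(desc, CPPI5_INFO0_HDESC_EPIB_PRESENT, 0);
+ *	cppi5_hdesc_set_pktlen(desc, len);
+ *	cppi5_hdesc_attach_buf(desc, buf_dma, len, buf_dma, len);
+ *	cppi5_desc_set_retpolicy(&desc->hdr, 0, tc_ring_id);
+ */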
+
+static inline void cppi5_hdesc_get_obuf(struct cppi5_host_desc_t *desc,
+					dma_addr_t *obuf, u32 *obuf_len)
+{
+	WARN_ON(!desc);
+	WARN_ON(!obuf);
+	WARN_ON(!obuf_len);
+
+	*obuf = desc->org_buf_ptr;
+	*obuf_len = desc->org_buf_len & CPPI5_OBUFINFO0_HDESC_BUF_LEN_MASK;
+}
+
+static inline void cppi5_hdesc_reset_to_original(struct cppi5_host_desc_t *desc)
+{
+	WARN_ON(!desc);
+
+	desc->buf_ptr = desc->org_buf_ptr;
+	desc->buf_info1 = desc->org_buf_len;
+}
+
+/**
+ * cppi5_hdesc_link_hbdesc - link Host Buffer Descriptor to HDesc
+ * @desc: Host Packet Descriptor
+ * @hbuf_desc: Host Buffer Descriptor physical address
+ *
+ * Adds and links a Host Buffer Descriptor to the HDesc.
+ */
+static inline void cppi5_hdesc_link_hbdesc(struct cppi5_host_desc_t *desc,
+					   dma_addr_t hbuf_desc)
+{
+	WARN_ON(!desc);
+	WARN_ON(!hbuf_desc);
+
+	desc->next_desc = hbuf_desc;
+}
+
+static inline dma_addr_t cppi5_hdesc_get_next_hbdesc(
+				struct cppi5_host_desc_t *desc)
+{
+	WARN_ON(!desc);
+
+	return (dma_addr_t)desc->next_desc;
+}
+
+static inline void cppi5_hdesc_reset_hbdesc(struct cppi5_host_desc_t *desc)
+{
+	WARN_ON(!desc);
+
+	desc->hdr = (struct cppi5_desc_hdr_t) { 0 };
+	desc->next_desc = 0;
+}
+
+/**
+ * cppi5_hdesc_epib_present - check if EPIB present
+ * @desc_hdr: packet descriptor/TR header
+ *
+ * Returns true if EPIB present in the packet
+ */
+static inline bool cppi5_hdesc_epib_present(struct cppi5_desc_hdr_t *desc_hdr)
+{
+	WARN_ON(!desc_hdr);
+	return !!(desc_hdr->pkt_info0 & CPPI5_INFO0_HDESC_EPIB_PRESENT);
+}
+
+/**
+ * cppi5_hdesc_get_psdata - Get pointer to PSDATA
+ * @desc: Host packet descriptor
+ *
+ * Returns a pointer to PSDATA in the HDesc, or
+ * NULL if ps_data is placed at the start of the data buffer.
+ */
+static inline void *cppi5_hdesc_get_psdata(struct cppi5_host_desc_t *desc)
+{
+	u32 psdata_size;
+	void *psdata;
+
+	WARN_ON(!desc);
+
+	if (desc->hdr.pkt_info0 & CPPI5_INFO0_HDESC_PSINFO_LOCATION)
+		return NULL;
+
+	psdata_size = (desc->hdr.pkt_info0 &
+		       CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK) >>
+		       CPPI5_INFO0_HDESC_PSINFO_SIZE_SHIFT;
+
+	if (!psdata_size)
+		return NULL;
+
+	psdata = &desc->epib;
+
+	if (cppi5_hdesc_epib_present(&desc->hdr))
+		psdata += CPPI5_INFO0_HDESC_EPIB_SIZE;
+
+	return psdata;
+}
+
+static inline u32 *cppi5_hdesc_get_psdata32(struct cppi5_host_desc_t *desc)
+{
+	return (u32 *)cppi5_hdesc_get_psdata(desc);
+}
+
+/**
+ * cppi5_hdesc_get_swdata - Get pointer to SWDATA
+ * @desc: Host packet descriptor
+ *
+ * Returns a pointer to SWDATA in the HDesc.
+ * NOTE: it is the caller's responsibility to ensure the hdesc actually has
+ * swdata.
+ */
+static inline void *cppi5_hdesc_get_swdata(struct cppi5_host_desc_t *desc)
+{
+	u32 psdata_size = 0;
+	void *swdata;
+
+	WARN_ON(!desc);
+
+	if (!(desc->hdr.pkt_info0 & CPPI5_INFO0_HDESC_PSINFO_LOCATION))
+		psdata_size = (desc->hdr.pkt_info0 &
+			       CPPI5_INFO0_HDESC_PSINFO_SIZE_MASK) >>
+			       CPPI5_INFO0_HDESC_PSINFO_SIZE_SHIFT;
+
+	swdata = &desc->epib;
+
+	if (cppi5_hdesc_epib_present(&desc->hdr))
+		swdata += CPPI5_INFO0_HDESC_EPIB_SIZE;
+
+	swdata += (psdata_size << 2);
+
+	return swdata;
+}
+
+/* ================================== TR ================================== */
+
+#define CPPI5_TR_TYPE_SHIFT			(0U)
+#define CPPI5_TR_TYPE_MASK			GENMASK(3, 0)
+#define CPPI5_TR_STATIC				BIT(4)
+#define CPPI5_TR_WAIT				BIT(5)
+#define CPPI5_TR_EVENT_SIZE_SHIFT		(6U)
+#define CPPI5_TR_EVENT_SIZE_MASK		GENMASK(7, 6)
+#define CPPI5_TR_TRIGGER0_SHIFT			(8U)
+#define CPPI5_TR_TRIGGER0_MASK			GENMASK(9, 8)
+#define CPPI5_TR_TRIGGER0_TYPE_SHIFT		(10U)
+#define CPPI5_TR_TRIGGER0_TYPE_MASK		GENMASK(11, 10)
+#define CPPI5_TR_TRIGGER1_SHIFT			(12U)
+#define CPPI5_TR_TRIGGER1_MASK			GENMASK(13, 12)
+#define CPPI5_TR_TRIGGER1_TYPE_SHIFT		(14U)
+#define CPPI5_TR_TRIGGER1_TYPE_MASK		GENMASK(15, 14)
+#define CPPI5_TR_CMD_ID_SHIFT			(16U)
+#define CPPI5_TR_CMD_ID_MASK			GENMASK(23, 16)
+#define CPPI5_TR_CSF_FLAGS_SHIFT		(24U)
+#define CPPI5_TR_CSF_FLAGS_MASK			GENMASK(31, 24)
+#define   CPPI5_TR_CSF_SA_INDIRECT		BIT(0)
+#define   CPPI5_TR_CSF_DA_INDIRECT		BIT(1)
+#define   CPPI5_TR_CSF_SUPR_EVT			BIT(2)
+#define   CPPI5_TR_CSF_EOL_ADV_SHIFT		(4U)
+#define   CPPI5_TR_CSF_EOL_ADV_MASK		GENMASK(6, 4)
+#define   CPPI5_TR_CSF_EOP			BIT(7)
+
+/* Udmap TR flags Type field specifies the type of TR. */
+enum cppi5_tr_types {
+	/* type0: One dimensional data move */
+	CPPI5_TR_TYPE0 = 0,
+	/* type1: Two dimensional data move */
+	CPPI5_TR_TYPE1,
+	/* type2: Three dimensional data move */
+	CPPI5_TR_TYPE2,
+	/* type3: Four dimensional data move */
+	CPPI5_TR_TYPE3,
+	/* type4: Four dimensional data move with data formatting */
+	CPPI5_TR_TYPE4,
+	/* type5: Four dimensional Cache Warm */
+	CPPI5_TR_TYPE5,
+	/* type6-7: Reserved */
+	/* type8: Four Dimensional Block Move */
+	CPPI5_TR_TYPE8 = 8,
+	/* type9: Four Dimensional Block Move with Repacking */
+	CPPI5_TR_TYPE9,
+	/* type10: Two Dimensional Block Move */
+	CPPI5_TR_TYPE10,
+	/* type11: Two Dimensional Block Move with Repacking */
+	CPPI5_TR_TYPE11,
+	/* type12-14: Reserved */
+	/* type15 Four Dimensional Block Move with Repacking and Indirection */
+	CPPI5_TR_TYPE15 = 15,
+	CPPI5_TR_TYPE_MAX
+};
+
+/*
+ * Udmap TR Flags EVENT_SIZE field specifies when an event is generated
+ * for each TR.
+ */
+enum cppi5_tr_event_size {
+	/* When TR is complete and all status for the TR has been received */
+	CPPI5_TR_EVENT_SIZE_COMPLETION,
+	/*
+	 * Type 0: when the last data transaction is sent for the TR;
+	 * Type 1-11: when ICNT1 is decremented
+	 */
+	CPPI5_TR_EVENT_SIZE_ICNT1_DEC,
+	/*
+	 * Type 0-1,10-11: when the last data transaction is sent for the TR;
+	 * All other types: when ICNT2 is decremented
+	 */
+	CPPI5_TR_EVENT_SIZE_ICNT2_DEC,
+	/*
+	 * Type 0-2,10-11: when the last data transaction is sent for the TR;
+	 * All other types: when ICNT3 is decremented
+	 */
+	CPPI5_TR_EVENT_SIZE_ICNT3_DEC,
+	CPPI5_TR_EVENT_SIZE_MAX
+};
+
+/*
+ * Udmap TR Flags TRIGGERx field specifies the type of trigger used to
+ * enable the TR to transfer data as specified by TRIGGERx_TYPE field.
+ */
+enum cppi5_tr_trigger {
+	CPPI5_TR_TRIGGER_NONE,		/* No Trigger */
+	CPPI5_TR_TRIGGER_GLOBAL0,		/* Global Trigger 0 */
+	CPPI5_TR_TRIGGER_GLOBAL1,		/* Global Trigger 1 */
+	CPPI5_TR_TRIGGER_LOCAL_EVENT,	/* Local Event */
+	CPPI5_TR_TRIGGER_MAX
+};
+
+/*
+ * Udmap TR Flags TRIGGERx_TYPE field specifies the type of data transfer
+ * that will be enabled by receiving a trigger as specified by TRIGGERx.
+ */
+enum cppi5_tr_trigger_type {
+	/* The second inner most loop (ICNT1) will be decremented by 1 */
+	CPPI5_TR_TRIGGER_TYPE_ICNT1_DEC,
+	/* The third inner most loop (ICNT2) will be decremented by 1 */
+	CPPI5_TR_TRIGGER_TYPE_ICNT2_DEC,
+	/* The outer most loop (ICNT3) will be decremented by 1 */
+	CPPI5_TR_TRIGGER_TYPE_ICNT3_DEC,
+	/* The entire TR will be allowed to complete */
+	CPPI5_TR_TRIGGER_TYPE_ALL,
+	CPPI5_TR_TRIGGER_TYPE_MAX
+};
+
+typedef u32 cppi5_tr_flags_t;
+
+/* Type 0 (One dimensional data move) TR (16 byte) */
+struct cppi5_tr_type0_t {
+	cppi5_tr_flags_t flags;
+	u16 icnt0;
+	u16 unused;
+	u64 addr;
+} __aligned(16) __packed;
+
+/* Type 1 (Two dimensional data move) TR (32 byte) */
+struct cppi5_tr_type1_t {
+	cppi5_tr_flags_t flags;
+	u16 icnt0;
+	u16 icnt1;
+	u64 addr;
+	s32 dim1;
+} __aligned(32) __packed;
+
+/* Type 2 (Three dimensional data move) TR (32 byte) */
+struct cppi5_tr_type2_t {
+	cppi5_tr_flags_t flags;
+	u16 icnt0;
+	u16 icnt1;
+	u64 addr;
+	s32 dim1;
+	u16 icnt2;
+	u16 unused;
+	s32 dim2;
+} __aligned(32) __packed;
+
+/* Type 3 (Four dimensional data move) TR (32 byte) */
+struct cppi5_tr_type3_t {
+	cppi5_tr_flags_t flags;
+	u16 icnt0;
+	u16 icnt1;
+	u64 addr;
+	s32 dim1;
+	u16 icnt2;
+	u16 icnt3;
+	s32 dim2;
+	s32 dim3;
+} __aligned(32) __packed;
+
+/*
+ * Type 15 (Four Dimensional Block Copy with Repacking and
+ * Indirection Support) TR (64 byte).
+ */
+struct cppi5_tr_type15_t {
+	cppi5_tr_flags_t flags;
+	u16 icnt0;
+	u16 icnt1;
+	u64 addr;
+	s32 dim1;
+	u16 icnt2;
+	u16 icnt3;
+	s32 dim2;
+	s32 dim3;
+	u32 _reserved;
+	s32 ddim1;
+	u64 daddr;
+	s32 ddim2;
+	s32 ddim3;
+	u16 dicnt0;
+	u16 dicnt1;
+	u16 dicnt2;
+	u16 dicnt3;
+} __aligned(64) __packed;
+
+struct cppi5_tr_resp_t {
+	u8 status;
+	u8 reserved;
+	u8 cmd_id;
+	u8 flags;
+} __packed;
+
+#define CPPI5_TR_RESPONSE_STATUS_TYPE_SHIFT	(0U)
+#define CPPI5_TR_RESPONSE_STATUS_TYPE_MASK	GENMASK(3, 0)
+#define CPPI5_TR_RESPONSE_STATUS_INFO_SHIFT	(4U)
+#define CPPI5_TR_RESPONSE_STATUS_INFO_MASK	GENMASK(7, 4)
+#define CPPI5_TR_RESPONSE_CMDID_SHIFT		(16U)
+#define CPPI5_TR_RESPONSE_CMDID_MASK		GENMASK(23, 16)
+#define CPPI5_TR_RESPONSE_CFG_SPECIFIC_SHIFT	(24U)
+#define CPPI5_TR_RESPONSE_CFG_SPECIFIC_MASK	GENMASK(31, 24)
+
+/*
+ * Udmap TR Response Status Type field is used to determine
+ * what type of status is being returned.
+ */
+enum cppi5_tr_resp_status_type {
+	CPPI5_TR_RESPONSE_STATUS_COMPLETE,		/* None */
+	CPPI5_TR_RESPONSE_STATUS_TRANSFER_ERR,		/* Transfer Error */
+	CPPI5_TR_RESPONSE_STATUS_ABORTED_ERR,		/* Aborted Error */
+	CPPI5_TR_RESPONSE_STATUS_SUBMISSION_ERR,	/* Submission Error */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_ERR,	/* Unsup. Feature */
+	CPPI5_TR_RESPONSE_STATUS_MAX
+};
+
+/*
+ * Udmap TR Response Status field values which correspond to
+ * CPPI5_TR_RESPONSE_STATUS_SUBMISSION_ERR
+ */
+enum cppi5_tr_resp_status_submission {
+	/* ICNT0 was 0 */
+	CPPI5_TR_RESPONSE_STATUS_SUBMISSION_ICNT0,
+	/* Channel FIFO was full when TR received */
+	CPPI5_TR_RESPONSE_STATUS_SUBMISSION_FIFO_FULL,
+	/* Channel is not owned by the submitter */
+	CPPI5_TR_RESPONSE_STATUS_SUBMISSION_OWN,
+	CPPI5_TR_RESPONSE_STATUS_SUBMISSION_MAX
+};
+
+/*
+ * Udmap TR Response Status field values which correspond to
+ * CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_ERR
+ */
+enum cppi5_tr_resp_status_unsupported {
+	/* TR Type not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_TR_TYPE,
+	/* STATIC not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_STATIC,
+	/* EOL not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_EOL,
+	/* CONFIGURATION SPECIFIC not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_CFG_SPECIFIC,
+	/* AMODE not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_AMODE,
+	/* ELTYPE not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_ELTYPE,
+	/* DFMT not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_DFMT,
+	/* SECTR not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_SECTR,
+	/* AMODE SPECIFIC field not supported */
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_AMODE_SPECIFIC,
+	CPPI5_TR_RESPONSE_STATUS_UNSUPPORTED_MAX
+};
+
+/**
+ * cppi5_trdesc_calc_size - Calculate TR Descriptor size
+ * @tr_count: number of TR records
+ * @tr_size: Nominal size of TR record (max) [16, 32, 64, 128]
+ *
+ * Returns required TR Descriptor size
+ */
+static inline size_t cppi5_trdesc_calc_size(u32 tr_count, u32 tr_size)
+{
+	/*
+	 * The Size of a TR descriptor is:
+	 * 1 x tr_size : the first 16 bytes are used by the packet info block +
+	 * tr_count x tr_size : Transfer Request Records +
+	 * tr_count x sizeof(struct cppi5_tr_resp_t) : Transfer Response Records
+	 */
+	return tr_size * (tr_count + 1) +
+		sizeof(struct cppi5_tr_resp_t) * tr_count;
+}
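+
+/*
+ * Editorial worked example, not part of this patch: a TR descriptor
+ * holding a single 64 byte TR record is 64 * (1 + 1) + 4 * 1 = 132 bytes,
+ * since sizeof(struct cppi5_tr_resp_t) is 4.
+ */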
+
+/**
+ * cppi5_trdesc_init - Init TR Descriptor
+ * @desc_hdr: TR Descriptor header
+ * @tr_count: number of TR records
+ * @tr_size: Nominal size of TR record (max) [16, 32, 64, 128]
+ * @reload_idx: Absolute index to jump to on the 2nd and following passes
+ *		through the TR packet.
+ * @reload_count: Number of times to jump from last entry to reload_idx. 0x1ff
+ *		  indicates infinite looping.
+ *
+ * Init TR Descriptor
+ */
+static inline void cppi5_trdesc_init(struct cppi5_desc_hdr_t *desc_hdr,
+				     u32 tr_count, u32 tr_size, u32 reload_idx,
+				     u32 reload_count)
+{
+	WARN_ON(!desc_hdr);
+	WARN_ON(tr_count & ~CPPI5_INFO0_TRDESC_LASTIDX_MASK);
+	WARN_ON(reload_idx > CPPI5_INFO0_TRDESC_RLDIDX_MAX);
+	WARN_ON(reload_count > CPPI5_INFO0_TRDESC_RLDCNT_MAX);
+
+	desc_hdr->pkt_info0 = CPPI5_INFO0_DESC_TYPE_VAL_TR <<
+			      CPPI5_INFO0_HDESC_TYPE_SHIFT;
+	desc_hdr->pkt_info0 |= (reload_count << CPPI5_INFO0_TRDESC_RLDCNT_SHIFT) &
+			       CPPI5_INFO0_TRDESC_RLDCNT_MASK;
+	desc_hdr->pkt_info0 |= (reload_idx << CPPI5_INFO0_TRDESC_RLDIDX_SHIFT) &
+			       CPPI5_INFO0_TRDESC_RLDIDX_MASK;
+	desc_hdr->pkt_info0 |= (tr_count - 1) & CPPI5_INFO0_TRDESC_LASTIDX_MASK;
+
+	desc_hdr->pkt_info1 |= ((ffs(tr_size >> 4) - 1) <<
+				CPPI5_INFO1_TRDESC_RECSIZE_SHIFT) &
+				CPPI5_INFO1_TRDESC_RECSIZE_MASK;
+}
+
+/**
+ * cppi5_tr_init - Init TR record
+ * @flags: Pointer to the TR's flags
+ * @type: TR type
+ * @static_tr: TR is static
+ * @wait: Wait for TR completion before allowing the next TR to start
+ * @event_size: output event generation cfg
+ * @cmd_id: TR identifier (application specific)
+ *
+ * Init TR record
+ */
+static inline void cppi5_tr_init(cppi5_tr_flags_t *flags,
+				 enum cppi5_tr_types type, bool static_tr,
+				 bool wait, enum cppi5_tr_event_size event_size,
+				 u32 cmd_id)
+{
+	WARN_ON(!flags);
+
+	*flags = type;
+	*flags |= (event_size << CPPI5_TR_EVENT_SIZE_SHIFT) &
+		  CPPI5_TR_EVENT_SIZE_MASK;
+
+	*flags |= (cmd_id << CPPI5_TR_CMD_ID_SHIFT) &
+		  CPPI5_TR_CMD_ID_MASK;
+
+	if (static_tr && (type == CPPI5_TR_TYPE8 || type == CPPI5_TR_TYPE9))
+		*flags |= CPPI5_TR_STATIC;
+
+	if (wait)
+		*flags |= CPPI5_TR_WAIT;
+}
+
+/**
+ * cppi5_tr_set_trigger - Configure trigger0/1 and trigger0/1_type
+ * @flags: Pointer to the TR's flags
+ * @trigger0: trigger0 selection
+ * @trigger0_type: type of data transfer that will be enabled by trigger0
+ * @trigger1: trigger1 selection
+ * @trigger1_type: type of data transfer that will be enabled by trigger1
+ *
+ * Configure the triggers for the TR
+ */
+static inline void cppi5_tr_set_trigger(cppi5_tr_flags_t *flags,
+				enum cppi5_tr_trigger trigger0,
+				enum cppi5_tr_trigger_type trigger0_type,
+				enum cppi5_tr_trigger trigger1,
+				enum cppi5_tr_trigger_type trigger1_type)
+{
+	WARN_ON(!flags);
+
+	*flags |= (trigger0 << CPPI5_TR_TRIGGER0_SHIFT) &
+		  CPPI5_TR_TRIGGER0_MASK;
+	*flags |= (trigger0_type << CPPI5_TR_TRIGGER0_TYPE_SHIFT) &
+		  CPPI5_TR_TRIGGER0_TYPE_MASK;
+
+	*flags |= (trigger1 << CPPI5_TR_TRIGGER1_SHIFT) &
+		  CPPI5_TR_TRIGGER1_MASK;
+	*flags |= (trigger1_type << CPPI5_TR_TRIGGER1_TYPE_SHIFT) &
+		  CPPI5_TR_TRIGGER1_TYPE_MASK;
+}
+
+/**
+ * cppi5_tr_csf_set - Update the Configuration Specific Flags
+ * @flags: Pointer to the TR's flags
+ * @csf: Configuration specific flags
+ *
+ * Set a bit in Configuration Specific Flags section of the TR flags.
+ */
+static inline void cppi5_tr_csf_set(cppi5_tr_flags_t *flags, u32 csf)
+{
+	WARN_ON(!flags);
+
+	*flags |= (csf << CPPI5_TR_CSF_FLAGS_SHIFT) &
+		  CPPI5_TR_CSF_FLAGS_MASK;
+}
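+
+/*
+ * Editorial usage sketch, not part of this patch: setting up a TR
+ * descriptor with one 64 byte TR record for a memcpy style transfer.
+ * tr_req is assumed to point at the first TR record of the descriptor:
+ *
+ *	cppi5_trdesc_init(desc_hdr, 1, 64, 0, 0);
+ *	cppi5_tr_init(&tr_req->flags, CPPI5_TR_TYPE15, false, true,
+ *		      CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+ *	cppi5_tr_csf_set(&tr_req->flags, CPPI5_TR_CSF_SUPR_EVT);
+ */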
+
+#endif /* __TI_CPPI5_H__ */
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA
  2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
                   ` (2 preceding siblings ...)
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 3/7] soc: ti: k3: add CPPI5 description and helpers Vignesh R
@ 2019-02-05 12:01 ` Vignesh R
  2019-02-06 23:43   ` Tom Rini
  2019-04-12 16:28   ` [U-Boot] [U-Boot,v4,4/7] " Tom Rini
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 5/7] soc: keystone: Merge into ti specific directory Vignesh R
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 16+ messages in thread
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

The UDMA-P is intended to perform functions similar to, but significantly
upgraded from, the packet-oriented DMA used on previous SoC devices. The
UDMA-P module supports the transmission and reception of various packet types.
The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
channels. Channels in the UDMA-P can be configured to be either Packet-Based or
Third-Party channels on a channel-by-channel basis.

The initial driver supports:
- MEM_TO_MEM (TR mode)
- DEV_TO_MEM (Packet mode)
- MEM_TO_DEV (Packet mode)
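
A minimal consumer sketch (illustrative only; it assumes the U-Boot DMA
uclass channel API, and the "tx" channel name and NULL metadata are
placeholders - real peripherals pass driver specific metadata):

	struct dma dma_tx;
	int ret;

	ret = dma_get_by_name(dev, "tx", &dma_tx);
	if (!ret)
		ret = dma_enable(&dma_tx);
	if (!ret)
		ret = dma_send(&dma_tx, pkt, len, NULL);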

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
---
 drivers/dma/Kconfig               |    2 +
 drivers/dma/Makefile              |    2 +
 drivers/dma/ti/Kconfig            |   14 +
 drivers/dma/ti/Makefile           |    3 +
 drivers/dma/ti/k3-udma-hwdef.h    |  184 +++
 drivers/dma/ti/k3-udma.c          | 1730 +++++++++++++++++++++++++++++
 include/dt-bindings/dma/k3-udma.h |   31 +
 include/linux/soc/ti/ti-udma.h    |   24 +
 8 files changed, 1990 insertions(+)
 create mode 100644 drivers/dma/ti/Kconfig
 create mode 100644 drivers/dma/ti/Makefile
 create mode 100644 drivers/dma/ti/k3-udma-hwdef.h
 create mode 100644 drivers/dma/ti/k3-udma.c
 create mode 100644 include/dt-bindings/dma/k3-udma.h
 create mode 100644 include/linux/soc/ti/ti-udma.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 1820676d7a18..4f37ba7d35eb 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -57,4 +57,6 @@ config APBH_DMA_BURST8
 
 endif
 
+source "drivers/dma/ti/Kconfig"
+
 endmenu # menu "DMA Support"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index b5f9147e0a54..afab324461b9 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -13,3 +13,5 @@ obj-$(CONFIG_SANDBOX_DMA) += sandbox-dma-test.o
 obj-$(CONFIG_TI_KSNAV) += keystone_nav.o keystone_nav_cfg.o
 obj-$(CONFIG_TI_EDMA3) += ti-edma3.o
 obj-$(CONFIG_DMA_LPC32XX) += lpc32xx_dma.o
+
+obj-y += ti/
diff --git a/drivers/dma/ti/Kconfig b/drivers/dma/ti/Kconfig
new file mode 100644
index 000000000000..3d5498326c42
--- /dev/null
+++ b/drivers/dma/ti/Kconfig
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: GPL-2.0+
+
+if ARCH_K3
+
+config TI_K3_NAVSS_UDMA
+        bool "Texas Instruments UDMA"
+        depends on ARCH_K3
+        select DMA
+        select TI_K3_NAVSS_RINGACC
+        select TI_K3_NAVSS_PSILCFG
+        default n
+        help
+          Support for UDMA used in K3 devices.
+endif
diff --git a/drivers/dma/ti/Makefile b/drivers/dma/ti/Makefile
new file mode 100644
index 000000000000..de2f9ac91a46
--- /dev/null
+++ b/drivers/dma/ti/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0+
+
+obj-$(CONFIG_TI_K3_NAVSS_UDMA) += k3-udma.o
diff --git a/drivers/dma/ti/k3-udma-hwdef.h b/drivers/dma/ti/k3-udma-hwdef.h
new file mode 100644
index 000000000000..c88399a815ea
--- /dev/null
+++ b/drivers/dma/ti/k3-udma-hwdef.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef K3_NAVSS_UDMA_HWDEF_H_
+#define K3_NAVSS_UDMA_HWDEF_H_
+
+#define UDMA_PSIL_DST_THREAD_ID_OFFSET 0x8000
+
+/* Global registers */
+#define UDMA_REV_REG			0x0
+#define UDMA_PERF_CTL_REG		0x4
+#define UDMA_EMU_CTL_REG		0x8
+#define UDMA_PSIL_TO_REG		0x10
+#define UDMA_UTC_CTL_REG		0x1c
+#define UDMA_CAP_REG(i)			(0x20 + (i) * 4)
+#define UDMA_RX_FLOW_ID_FW_OES_REG	0x80
+#define UDMA_RX_FLOW_ID_FW_STATUS_REG	0x88
+
+/* RX Flow regs */
+#define UDMA_RFLOW_RFA_REG		0x0
+#define UDMA_RFLOW_RFB_REG		0x4
+#define UDMA_RFLOW_RFC_REG		0x8
+#define UDMA_RFLOW_RFD_REG		0xc
+#define UDMA_RFLOW_RFE_REG		0x10
+#define UDMA_RFLOW_RFF_REG		0x14
+#define UDMA_RFLOW_RFG_REG		0x18
+#define UDMA_RFLOW_RFH_REG		0x1c
+
+#define UDMA_RFLOW_REG(x) (UDMA_RFLOW_RF##x##_REG)
+
+/* TX chan regs */
+#define UDMA_TCHAN_TCFG_REG		0x0
+#define UDMA_TCHAN_TCREDIT_REG		0x4
+#define UDMA_TCHAN_TCQ_REG		0x14
+#define UDMA_TCHAN_TOES_REG(i)		(0x20 + (i) * 4)
+#define UDMA_TCHAN_TEOES_REG		0x60
+#define UDMA_TCHAN_TPRI_CTRL_REG	0x64
+#define UDMA_TCHAN_THREAD_ID_REG	0x68
+#define UDMA_TCHAN_TFIFO_DEPTH_REG	0x70
+#define UDMA_TCHAN_TST_SCHED_REG	0x80
+
+/* RX chan regs */
+#define UDMA_RCHAN_RCFG_REG		0x0
+#define UDMA_RCHAN_RCQ_REG		0x14
+#define UDMA_RCHAN_ROES_REG(i)		(0x20 + (i) * 4)
+#define UDMA_RCHAN_REOES_REG		0x60
+#define UDMA_RCHAN_RPRI_CTRL_REG	0x64
+#define UDMA_RCHAN_THREAD_ID_REG	0x68
+#define UDMA_RCHAN_RST_SCHED_REG	0x80
+#define UDMA_RCHAN_RFLOW_RNG_REG	0xf0
+
+/* TX chan RT regs */
+#define UDMA_TCHAN_RT_CTL_REG		0x0
+#define UDMA_TCHAN_RT_SWTRIG_REG	0x8
+#define UDMA_TCHAN_RT_STDATA_REG	0x80
+
+#define UDMA_TCHAN_RT_PEERn_REG(i)	(0x200 + (i) * 0x4)
+#define UDMA_TCHAN_RT_PEER_STATIC_TR_XY_REG	\
+	UDMA_TCHAN_RT_PEERn_REG(0)	/* PSI-L: 0x400 */
+#define UDMA_TCHAN_RT_PEER_STATIC_TR_Z_REG	\
+	UDMA_TCHAN_RT_PEERn_REG(1)	/* PSI-L: 0x401 */
+#define UDMA_TCHAN_RT_PEER_BCNT_REG		\
+	UDMA_TCHAN_RT_PEERn_REG(4)	/* PSI-L: 0x404 */
+#define UDMA_TCHAN_RT_PEER_RT_EN_REG		\
+	UDMA_TCHAN_RT_PEERn_REG(8)	/* PSI-L: 0x408 */
+
+#define UDMA_TCHAN_RT_PCNT_REG		0x400
+#define UDMA_TCHAN_RT_BCNT_REG		0x408
+#define UDMA_TCHAN_RT_SBCNT_REG		0x410
+
+/* RX chan RT regs */
+#define UDMA_RCHAN_RT_CTL_REG		0x0
+#define UDMA_RCHAN_RT_SWTRIG_REG	0x8
+#define UDMA_RCHAN_RT_STDATA_REG	0x80
+
+#define UDMA_RCHAN_RT_PEERn_REG(i)	(0x200 + (i) * 0x4)
+#define UDMA_RCHAN_RT_PEER_STATIC_TR_XY_REG	\
+	UDMA_RCHAN_RT_PEERn_REG(0)	/* PSI-L: 0x400 */
+#define UDMA_RCHAN_RT_PEER_STATIC_TR_Z_REG	\
+	UDMA_RCHAN_RT_PEERn_REG(1)	/* PSI-L: 0x401 */
+#define UDMA_RCHAN_RT_PEER_BCNT_REG		\
+	UDMA_RCHAN_RT_PEERn_REG(4)	/* PSI-L: 0x404 */
+#define UDMA_RCHAN_RT_PEER_RT_EN_REG		\
+	UDMA_RCHAN_RT_PEERn_REG(8)	/* PSI-L: 0x408 */
+
+#define UDMA_RCHAN_RT_PCNT_REG		0x400
+#define UDMA_RCHAN_RT_BCNT_REG		0x408
+#define UDMA_RCHAN_RT_SBCNT_REG		0x410
+
+/* UDMA_TCHAN_TCFG_REG/UDMA_RCHAN_RCFG_REG */
+#define UDMA_CHAN_CFG_PAUSE_ON_ERR		BIT(31)
+#define UDMA_TCHAN_CFG_FILT_EINFO		BIT(30)
+#define UDMA_TCHAN_CFG_FILT_PSWORDS		BIT(29)
+#define UDMA_CHAN_CFG_ATYPE_MASK		GENMASK(25, 24)
+#define UDMA_CHAN_CFG_ATYPE_SHIFT		24
+#define UDMA_CHAN_CFG_CHAN_TYPE_MASK		GENMASK(19, 16)
+#define UDMA_CHAN_CFG_CHAN_TYPE_SHIFT		16
+/*
+ * PBVR - using pass by value rings
+ * PBRR - using pass by reference rings
+ * 3RDP - Third Party DMA
+ * BC - Block Copy
+ * SB - single buffer packet mode enabled
+ */
+#define UDMA_CHAN_CFG_CHAN_TYPE_PACKET_PBRR \
+	(2 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_PACKET_SB_PBRR \
+	(3 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_3RDP_PBRR \
+	(10 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_3RDP_PBVR \
+	(11 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_CHAN_CFG_CHAN_TYPE_3RDP_BC_PBRR \
+	(12 << UDMA_CHAN_CFG_CHAN_TYPE_SHIFT)
+#define UDMA_RCHAN_CFG_IGNORE_SHORT		BIT(15)
+#define UDMA_RCHAN_CFG_IGNORE_LONG		BIT(14)
+#define UDMA_TCHAN_CFG_SUPR_TDPKT		BIT(8)
+#define UDMA_CHAN_CFG_FETCH_SIZE_MASK		GENMASK(6, 0)
+#define UDMA_CHAN_CFG_FETCH_SIZE_SHIFT		0
+
+/* UDMA_TCHAN_RT_CTL_REG/UDMA_RCHAN_RT_CTL_REG */
+#define UDMA_CHAN_RT_CTL_EN	BIT(31)
+#define UDMA_CHAN_RT_CTL_TDOWN	BIT(30)
+#define UDMA_CHAN_RT_CTL_PAUSE	BIT(29)
+#define UDMA_CHAN_RT_CTL_FTDOWN	BIT(28)
+#define UDMA_CHAN_RT_CTL_ERROR	BIT(0)
+
+/* UDMA_TCHAN_RT_PEER_RT_EN_REG/UDMA_RCHAN_RT_PEER_RT_EN_REG (PSI-L: 0x408) */
+#define UDMA_PEER_RT_EN_ENABLE		BIT(31)
+#define UDMA_PEER_RT_EN_TEARDOWN	BIT(30)
+#define UDMA_PEER_RT_EN_PAUSE		BIT(29)
+#define UDMA_PEER_RT_EN_FLUSH		BIT(28)
+#define UDMA_PEER_RT_EN_IDLE		BIT(1)
+
+/* RX Flow reg RFA */
+#define UDMA_RFLOW_RFA_EINFO			BIT(30)
+#define UDMA_RFLOW_RFA_PSINFO			BIT(29)
+#define UDMA_RFLOW_RFA_ERR_HANDLING		BIT(28)
+#define UDMA_RFLOW_RFA_DESC_TYPE_MASK		GENMASK(27, 26)
+#define UDMA_RFLOW_RFA_DESC_TYPE_SHIFT		26
+#define UDMA_RFLOW_RFA_PS_LOC			BIT(25)
+#define UDMA_RFLOW_RFA_SOP_OFF_MASK		GENMASK(24, 16)
+#define UDMA_RFLOW_RFA_SOP_OFF_SHIFT		16
+#define UDMA_RFLOW_RFA_DEST_QNUM_MASK		GENMASK(15, 0)
+#define UDMA_RFLOW_RFA_DEST_QNUM_SHIFT		0
+
+/* RX Flow reg RFC */
+#define UDMA_RFLOW_RFC_SRC_TAG_HI_SEL_SHIFT	28
+#define UDMA_RFLOW_RFC_SRC_TAG_LO_SEL_SHIFT	24
+#define UDMA_RFLOW_RFC_DST_TAG_HI_SEL_SHIFT	20
+#define UDMA_RFLOW_RFC_DST_TAG_LO_SE_SHIFT	16
+
+/*
+ * UDMA_TCHAN_RT_PEER_STATIC_TR_XY_REG /
+ * UDMA_RCHAN_RT_PEER_STATIC_TR_XY_REG
+ */
+#define PDMA_STATIC_TR_X_MASK		GENMASK(26, 24)
+#define PDMA_STATIC_TR_X_SHIFT		(24)
+#define PDMA_STATIC_TR_Y_MASK		GENMASK(11, 0)
+#define PDMA_STATIC_TR_Y_SHIFT		(0)
+
+#define PDMA_STATIC_TR_Y(x)	\
+	(((x) << PDMA_STATIC_TR_Y_SHIFT) & PDMA_STATIC_TR_Y_MASK)
+#define PDMA_STATIC_TR_X(x)	\
+	(((x) << PDMA_STATIC_TR_X_SHIFT) & PDMA_STATIC_TR_X_MASK)
+
+/*
+ * UDMA_TCHAN_RT_PEER_STATIC_TR_Z_REG /
+ * UDMA_RCHAN_RT_PEER_STATIC_TR_Z_REG
+ */
+#define PDMA_STATIC_TR_Z_MASK		GENMASK(11, 0)
+#define PDMA_STATIC_TR_Z_SHIFT		(0)
+#define PDMA_STATIC_TR_Z(x)	\
+	(((x) << PDMA_STATIC_TR_Z_SHIFT) & PDMA_STATIC_TR_Z_MASK)
+
+#endif /* K3_NAVSS_UDMA_HWDEF_H_ */
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
new file mode 100644
index 000000000000..f78a01aa8f8c
--- /dev/null
+++ b/drivers/dma/ti/k3-udma.c
@@ -0,0 +1,1730 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ *  Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
+ */
+#define pr_fmt(fmt) "udma: " fmt
+
+#include <common.h>
+#include <asm/io.h>
+#include <asm/bitops.h>
+#include <malloc.h>
+#include <asm/dma-mapping.h>
+#include <dm.h>
+#include <dm/read.h>
+#include <dm/of_access.h>
+#include <dma.h>
+#include <dma-uclass.h>
+#include <linux/delay.h>
+#include <dt-bindings/dma/k3-udma.h>
+#include <linux/soc/ti/k3-navss-ringacc.h>
+#include <linux/soc/ti/cppi5.h>
+#include <linux/soc/ti/ti-udma.h>
+#include <linux/soc/ti/ti_sci_protocol.h>
+
+#include "k3-udma-hwdef.h"
+
+#if BITS_PER_LONG == 64
+#define RINGACC_RING_USE_PROXY	(0)
+#else
+#define RINGACC_RING_USE_PROXY	(1)
+#endif
+
+struct udma_chan;
+
+enum udma_mmr {
+	MMR_GCFG = 0,
+	MMR_RCHANRT,
+	MMR_TCHANRT,
+	MMR_LAST,
+};
+
+static const char * const mmr_names[] = {
+	"gcfg", "rchanrt", "tchanrt"
+};
+
+struct udma_tchan {
+	void __iomem *reg_rt;
+
+	int id;
+	struct k3_nav_ring *t_ring; /* Transmit ring */
+	struct k3_nav_ring *tc_ring; /* Transmit Completion ring */
+};
+
+struct udma_rchan {
+	void __iomem *reg_rt;
+
+	int id;
+	struct k3_nav_ring *fd_ring; /* Free Descriptor ring */
+	struct k3_nav_ring *r_ring; /* Receive ring */
+};
+
+struct udma_rflow {
+	int id;
+};
+
+struct udma_dev {
+	struct device *dev;
+	void __iomem *mmrs[MMR_LAST];
+
+	struct k3_nav_ringacc *ringacc;
+
+	u32 features;
+
+	int tchan_cnt;
+	int echan_cnt;
+	int rchan_cnt;
+	int rflow_cnt;
+	unsigned long *tchan_map;
+	unsigned long *rchan_map;
+	unsigned long *rflow_map;
+
+	struct udma_tchan *tchans;
+	struct udma_rchan *rchans;
+	struct udma_rflow *rflows;
+
+	struct udma_chan *channels;
+	u32 psil_base;
+
+	u32 ch_count;
+	const struct ti_sci_handle *tisci;
+	const struct ti_sci_rm_udmap_ops *tisci_udmap_ops;
+	const struct ti_sci_rm_psil_ops *tisci_psil_ops;
+	u32  tisci_dev_id;
+	u32  tisci_navss_dev_id;
+	bool is_coherent;
+};
+
+struct udma_chan {
+	struct udma_dev *ud;
+	char name[20];
+
+	struct udma_tchan *tchan;
+	struct udma_rchan *rchan;
+	struct udma_rflow *rflow;
+
+	u32 bcnt; /* number of bytes completed since the start of the channel */
+
+	bool pkt_mode; /* TR or packet */
+	bool needs_epib; /* EPIB is needed for the communication or not */
+	u32 psd_size; /* size of Protocol Specific Data */
+	u32 metadata_size; /* (needs_epib ? 16:0) + psd_size */
+	int slave_thread_id;
+	u32 src_thread;
+	u32 dst_thread;
+	u32 static_tr_type;
+
+	u32 id;
+	enum dma_direction dir;
+
+	struct cppi5_host_desc_t *desc_tx;
+	u32 hdesc_size;
+	bool in_use;
+	void	*desc_rx;
+	u32	num_rx_bufs;
+	u32	desc_rx_cur;
+};
+
+#define UDMA_CH_1000(ch)		((ch) * 0x1000)
+#define UDMA_CH_100(ch)			((ch) * 0x100)
+#define UDMA_CH_40(ch)			((ch) * 0x40)
+
+#ifdef PKTBUFSRX
+#define UDMA_RX_DESC_NUM PKTBUFSRX
+#else
+#define UDMA_RX_DESC_NUM 4
+#endif
+
+/* Generic register access functions */
+static inline u32 udma_read(void __iomem *base, int reg)
+{
+	u32 v;
+
+	v = __raw_readl(base + reg);
+	pr_debug("READL(32): v(%08X)<--reg(%p)\n", v, base + reg);
+	return v;
+}
+
+static inline void udma_write(void __iomem *base, int reg, u32 val)
+{
+	pr_debug("WRITEL(32): v(%08X)-->reg(%p)\n", val, base + reg);
+	__raw_writel(val, base + reg);
+}
+
+static inline void udma_update_bits(void __iomem *base, int reg,
+				    u32 mask, u32 val)
+{
+	u32 tmp, orig;
+
+	orig = udma_read(base, reg);
+	tmp = orig & ~mask;
+	tmp |= (val & mask);
+
+	if (tmp != orig)
+		udma_write(base, reg, tmp);
+}
+
+/* TCHANRT */
+static inline u32 udma_tchanrt_read(struct udma_tchan *tchan, int reg)
+{
+	if (!tchan)
+		return 0;
+	return udma_read(tchan->reg_rt, reg);
+}
+
+static inline void udma_tchanrt_write(struct udma_tchan *tchan,
+				      int reg, u32 val)
+{
+	if (!tchan)
+		return;
+	udma_write(tchan->reg_rt, reg, val);
+}
+
+/* RCHANRT */
+static inline u32 udma_rchanrt_read(struct udma_rchan *rchan, int reg)
+{
+	if (!rchan)
+		return 0;
+	return udma_read(rchan->reg_rt, reg);
+}
+
+static inline void udma_rchanrt_write(struct udma_rchan *rchan,
+				      int reg, u32 val)
+{
+	if (!rchan)
+		return;
+	udma_write(rchan->reg_rt, reg, val);
+}
+
+static inline int udma_navss_psil_pair(struct udma_dev *ud, u32 src_thread,
+				       u32 dst_thread)
+{
+	dst_thread |= UDMA_PSIL_DST_THREAD_ID_OFFSET;
+	return ud->tisci_psil_ops->pair(ud->tisci,
+					ud->tisci_navss_dev_id,
+					src_thread, dst_thread);
+}
+
+static inline int udma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread,
+					 u32 dst_thread)
+{
+	dst_thread |= UDMA_PSIL_DST_THREAD_ID_OFFSET;
+	return ud->tisci_psil_ops->unpair(ud->tisci,
+					  ud->tisci_navss_dev_id,
+					  src_thread, dst_thread);
+}
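+
+/*
+ * Editorial note, not part of this patch: PSI-L peripherals are addressed
+ * by thread ID, not by I/O address, and destination threads always carry
+ * UDMA_PSIL_DST_THREAD_ID_OFFSET (bit 15). For example, for MEM_TO_DEV
+ * the source is expected to be the UDMA tchan thread (psil_base +
+ * tchan->id) and the destination the peripheral's slave thread ID.
+ */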
+
+static inline char *udma_get_dir_text(enum dma_direction dir)
+{
+	switch (dir) {
+	case DMA_DEV_TO_MEM:
+		return "DEV_TO_MEM";
+	case DMA_MEM_TO_DEV:
+		return "MEM_TO_DEV";
+	case DMA_MEM_TO_MEM:
+		return "MEM_TO_MEM";
+	case DMA_DEV_TO_DEV:
+		return "DEV_TO_DEV";
+	default:
+		break;
+	}
+
+	return "invalid";
+}
+
+static inline bool udma_is_chan_running(struct udma_chan *uc)
+{
+	u32 trt_ctl = 0;
+	u32 rrt_ctl = 0;
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		rrt_ctl = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+		pr_debug("%s: rrt_ctl: 0x%08x (peer: 0x%08x)\n",
+			 __func__, rrt_ctl,
+			 udma_rchanrt_read(uc->rchan,
+					   UDMA_RCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_DEV:
+		trt_ctl = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+		pr_debug("%s: trt_ctl: 0x%08x (peer: 0x%08x)\n",
+			 __func__, trt_ctl,
+			 udma_tchanrt_read(uc->tchan,
+					   UDMA_TCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_MEM:
+		trt_ctl = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+		rrt_ctl = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+		break;
+	default:
+		break;
+	}
+
+	if (trt_ctl & UDMA_CHAN_RT_CTL_EN || rrt_ctl & UDMA_CHAN_RT_CTL_EN)
+		return true;
+
+	return false;
+}
+
+static int udma_is_coherent(struct udma_chan *uc)
+{
+	return uc->ud->is_coherent;
+}
+
+static int udma_pop_from_ring(struct udma_chan *uc, dma_addr_t *addr)
+{
+	struct k3_nav_ring *ring = NULL;
+	int ret = -ENOENT;
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		ring = uc->rchan->r_ring;
+		break;
+	case DMA_MEM_TO_DEV:
+		ring = uc->tchan->tc_ring;
+		break;
+	case DMA_MEM_TO_MEM:
+		ring = uc->tchan->tc_ring;
+		break;
+	default:
+		break;
+	}
+
+	if (ring && k3_nav_ringacc_ring_get_occ(ring))
+		ret = k3_nav_ringacc_ring_pop(ring, addr);
+
+	return ret;
+}
+
+static void udma_reset_rings(struct udma_chan *uc)
+{
+	struct k3_nav_ring *ring1 = NULL;
+	struct k3_nav_ring *ring2 = NULL;
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		ring1 = uc->rchan->fd_ring;
+		ring2 = uc->rchan->r_ring;
+		break;
+	case DMA_MEM_TO_DEV:
+		ring1 = uc->tchan->t_ring;
+		ring2 = uc->tchan->tc_ring;
+		break;
+	case DMA_MEM_TO_MEM:
+		ring1 = uc->tchan->t_ring;
+		ring2 = uc->tchan->tc_ring;
+		break;
+	default:
+		break;
+	}
+
+	if (ring1)
+		k3_nav_ringacc_ring_reset_dma(ring1, 0);
+	if (ring2)
+		k3_nav_ringacc_ring_reset(ring2);
+}
+
+static void udma_reset_counters(struct udma_chan *uc)
+{
+	u32 val;
+
+	if (uc->tchan) {
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_BCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_BCNT_REG, val);
+
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_SBCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_SBCNT_REG, val);
+
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_PCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PCNT_REG, val);
+
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_PEER_BCNT_REG);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PEER_BCNT_REG, val);
+	}
+
+	if (uc->rchan) {
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_BCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_BCNT_REG, val);
+
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_SBCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_SBCNT_REG, val);
+
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_PCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PCNT_REG, val);
+
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_PEER_BCNT_REG);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_BCNT_REG, val);
+	}
+
+	uc->bcnt = 0;
+}
+
+static inline int udma_stop_hard(struct udma_chan *uc)
+{
+	pr_debug("%s: ENTER (chan%d)\n", __func__, uc->id);
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG, 0);
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG, 0);
+		break;
+	case DMA_MEM_TO_DEV:
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG, 0);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PEER_RT_EN_REG, 0);
+		break;
+	case DMA_MEM_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG, 0);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG, 0);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int udma_start(struct udma_chan *uc)
+{
+	/* Channel is already running, no need to proceed further */
+	if (udma_is_chan_running(uc))
+		goto out;
+
+	pr_debug("%s: chan:%d dir:%s (static_tr_type: %d)\n",
+		 __func__, uc->id, udma_get_dir_text(uc->dir),
+		 uc->static_tr_type);
+
+	/* Make sure that we clear the teardown bit, if it is set */
+	udma_stop_hard(uc);
+
+	/* Reset all counters */
+	udma_reset_counters(uc);
+
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+
+		/* Enable remote */
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG,
+				   UDMA_PEER_RT_EN_ENABLE);
+
+		pr_debug("%s(rx): RT_CTL:0x%08x PEER RT_ENABLE:0x%08x\n",
+			 __func__,
+			 udma_rchanrt_read(uc->rchan,
+					   UDMA_RCHAN_RT_CTL_REG),
+			 udma_rchanrt_read(uc->rchan,
+					   UDMA_RCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_DEV:
+		/* Enable remote */
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_PEER_RT_EN_REG,
+				   UDMA_PEER_RT_EN_ENABLE);
+
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+
+		pr_debug("%s(tx): RT_CTL:0x%08x PEER RT_ENABLE:0x%08x\n",
+			 __func__,
+			 udma_tchanrt_read(uc->tchan,
+					   UDMA_TCHAN_RT_CTL_REG),
+			 udma_tchanrt_read(uc->tchan,
+					   UDMA_TCHAN_RT_PEER_RT_EN_REG));
+		break;
+	case DMA_MEM_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG,
+				   UDMA_CHAN_RT_CTL_EN);
+
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	pr_debug("%s: DONE chan:%d\n", __func__, uc->id);
+out:
+	return 0;
+}
+
+static inline void udma_stop_mem2dev(struct udma_chan *uc, bool sync)
+{
+	int i = 0;
+	u32 val;
+
+	udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG,
+			   UDMA_CHAN_RT_CTL_EN |
+			   UDMA_CHAN_RT_CTL_TDOWN);
+
+	val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+
+	while (sync && (val & UDMA_CHAN_RT_CTL_EN)) {
+		val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_CTL_REG);
+		udelay(1);
+		if (i > 1000) {
+			printf(" %s TIMEOUT !\n", __func__);
+			break;
+		}
+		i++;
+	}
+
+	val = udma_tchanrt_read(uc->tchan, UDMA_TCHAN_RT_PEER_RT_EN_REG);
+	if (val & UDMA_PEER_RT_EN_ENABLE)
+		printf("%s: peer not stopped TIMEOUT !\n", __func__);
+}
+
+static inline void udma_stop_dev2mem(struct udma_chan *uc, bool sync)
+{
+	int i = 0;
+	u32 val;
+
+	udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG,
+			   UDMA_PEER_RT_EN_ENABLE |
+			   UDMA_PEER_RT_EN_TEARDOWN);
+
+	val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+
+	while (sync && (val & UDMA_CHAN_RT_CTL_EN)) {
+		val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_CTL_REG);
+		udelay(1);
+		if (i > 1000) {
+			printf("%s TIMEOUT !\n", __func__);
+			break;
+		}
+		i++;
+	}
+
+	val = udma_rchanrt_read(uc->rchan, UDMA_RCHAN_RT_PEER_RT_EN_REG);
+	if (val & UDMA_PEER_RT_EN_ENABLE)
+		printf("%s: peer not stopped TIMEOUT !\n", __func__);
+}
+
+static inline int udma_stop(struct udma_chan *uc)
+{
+	pr_debug("%s: chan:%d dir:%s\n",
+		 __func__, uc->id, udma_get_dir_text(uc->dir));
+
+	udma_reset_counters(uc);
+	switch (uc->dir) {
+	case DMA_DEV_TO_MEM:
+		udma_stop_dev2mem(uc, true);
+		break;
+	case DMA_MEM_TO_DEV:
+		udma_stop_mem2dev(uc, true);
+		break;
+	case DMA_MEM_TO_MEM:
+		udma_rchanrt_write(uc->rchan, UDMA_RCHAN_RT_CTL_REG, 0);
+		udma_tchanrt_write(uc->tchan, UDMA_TCHAN_RT_CTL_REG, 0);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void udma_poll_completion(struct udma_chan *uc, dma_addr_t *paddr)
+{
+	int i = 1;
+
+	while (udma_pop_from_ring(uc, paddr)) {
+		udelay(1);
+		if (!(i % 1000000))
+			printf(".");
+		i++;
+	}
+}
+
+#define UDMA_RESERVE_RESOURCE(res)					\
+static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud,	\
+					       int id)			\
+{									\
+	if (id >= 0) {							\
+		if (test_bit(id, ud->res##_map)) {			\
+			dev_err(ud->dev, "res##%d is in use\n", id);	\
+			return ERR_PTR(-ENOENT);			\
+		}							\
+	} else {							\
+		id = find_first_zero_bit(ud->res##_map, ud->res##_cnt); \
+		if (id == ud->res##_cnt) {				\
+			return ERR_PTR(-ENOENT);			\
+		}							\
+	}								\
+									\
+	__set_bit(id, ud->res##_map);					\
+	return &ud->res##s[id];						\
+}
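+
+/*
+ * Editorial note, not part of this patch: each UDMA_RESERVE_RESOURCE()
+ * invocation below expands to a reservation helper for one resource
+ * type, e.g. UDMA_RESERVE_RESOURCE(tchan) defines
+ *
+ *	static struct udma_tchan *__udma_reserve_tchan(struct udma_dev *ud,
+ *						       int id);
+ *
+ * which claims the requested id, or the first free one when id is
+ * negative.
+ */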
+
+UDMA_RESERVE_RESOURCE(tchan);
+UDMA_RESERVE_RESOURCE(rchan);
+UDMA_RESERVE_RESOURCE(rflow);
+
+static int udma_get_tchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->tchan) {
+		dev_dbg(ud->dev, "chan%d: already have tchan%d allocated\n",
+			uc->id, uc->tchan->id);
+		return 0;
+	}
+
+	uc->tchan = __udma_reserve_tchan(ud, -1);
+	if (IS_ERR(uc->tchan))
+		return PTR_ERR(uc->tchan);
+
+	pr_debug("chan%d: got tchan%d\n", uc->id, uc->tchan->id);
+
+	if (udma_is_chan_running(uc)) {
+		dev_warn(ud->dev, "chan%d: tchan%d is running!\n", uc->id,
+			 uc->tchan->id);
+		udma_stop(uc);
+		if (udma_is_chan_running(uc))
+			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+	}
+
+	return 0;
+}
+
+static int udma_get_rchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rchan) {
+		dev_dbg(ud->dev, "chan%d: already have rchan%d allocated\n",
+			uc->id, uc->rchan->id);
+		return 0;
+	}
+
+	uc->rchan = __udma_reserve_rchan(ud, -1);
+	if (IS_ERR(uc->rchan))
+		return PTR_ERR(uc->rchan);
+
+	pr_debug("chan%d: got rchan%d\n", uc->id, uc->rchan->id);
+
+	if (udma_is_chan_running(uc)) {
+		dev_warn(ud->dev, "chan%d: rchan%d is running!\n", uc->id,
+			 uc->rchan->id);
+		udma_stop(uc);
+		if (udma_is_chan_running(uc))
+			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+	}
+
+	return 0;
+}
+
+static int udma_get_chan_pair(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int chan_id, end;
+
+	if ((uc->tchan && uc->rchan) && uc->tchan->id == uc->rchan->id) {
+		dev_info(ud->dev, "chan%d: already have %d pair allocated\n",
+			 uc->id, uc->tchan->id);
+		return 0;
+	}
+
+	if (uc->tchan) {
+		dev_err(ud->dev, "chan%d: already have tchan%d allocated\n",
+			uc->id, uc->tchan->id);
+		return -EBUSY;
+	} else if (uc->rchan) {
+		dev_err(ud->dev, "chan%d: already have rchan%d allocated\n",
+			uc->id, uc->rchan->id);
+		return -EBUSY;
+	}
+
+	/* Can be optimized, but let's have it like this for now */
+	end = min(ud->tchan_cnt, ud->rchan_cnt);
+	for (chan_id = 0; chan_id < end; chan_id++) {
+		if (!test_bit(chan_id, ud->tchan_map) &&
+		    !test_bit(chan_id, ud->rchan_map))
+			break;
+	}
+
+	if (chan_id == end)
+		return -ENOENT;
+
+	__set_bit(chan_id, ud->tchan_map);
+	__set_bit(chan_id, ud->rchan_map);
+	uc->tchan = &ud->tchans[chan_id];
+	uc->rchan = &ud->rchans[chan_id];
+
+	pr_debug("chan%d: got t/rchan%d pair\n", uc->id, chan_id);
+
+	if (udma_is_chan_running(uc)) {
+		dev_warn(ud->dev, "chan%d: t/rchan%d pair is running!\n",
+			 uc->id, chan_id);
+		udma_stop(uc);
+		if (udma_is_chan_running(uc))
+			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
+	}
+
+	return 0;
+}
+
+static int udma_get_rflow(struct udma_chan *uc, int flow_id)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rflow) {
+		dev_dbg(ud->dev, "chan%d: already have rflow%d allocated\n",
+			uc->id, uc->rflow->id);
+		return 0;
+	}
+
+	if (!uc->rchan)
+		dev_warn(ud->dev, "chan%d: does not have an rchan\n", uc->id);
+
+	uc->rflow = __udma_reserve_rflow(ud, flow_id);
+	if (IS_ERR(uc->rflow))
+		return PTR_ERR(uc->rflow);
+
+	pr_debug("chan%d: got rflow%d\n", uc->id, uc->rflow->id);
+	return 0;
+}
+
+static void udma_put_rchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rchan) {
+		dev_dbg(ud->dev, "chan%d: put rchan%d\n", uc->id,
+			uc->rchan->id);
+		__clear_bit(uc->rchan->id, ud->rchan_map);
+		uc->rchan = NULL;
+	}
+}
+
+static void udma_put_tchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->tchan) {
+		dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id,
+			uc->tchan->id);
+		__clear_bit(uc->tchan->id, ud->tchan_map);
+		uc->tchan = NULL;
+	}
+}
+
+static void udma_put_rflow(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rflow) {
+		dev_dbg(ud->dev, "chan%d: put rflow%d\n", uc->id,
+			uc->rflow->id);
+		__clear_bit(uc->rflow->id, ud->rflow_map);
+		uc->rflow = NULL;
+	}
+}
+
+static void udma_free_tx_resources(struct udma_chan *uc)
+{
+	if (!uc->tchan)
+		return;
+
+	k3_nav_ringacc_ring_free(uc->tchan->t_ring);
+	k3_nav_ringacc_ring_free(uc->tchan->tc_ring);
+	uc->tchan->t_ring = NULL;
+	uc->tchan->tc_ring = NULL;
+
+	udma_put_tchan(uc);
+}
+
+static int udma_alloc_tx_resources(struct udma_chan *uc)
+{
+	struct k3_nav_ring_cfg ring_cfg;
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	ret = udma_get_tchan(uc);
+	if (ret)
+		return ret;
+
+	uc->tchan->t_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, uc->tchan->id,
+				RINGACC_RING_USE_PROXY);
+	if (!uc->tchan->t_ring) {
+		ret = -EBUSY;
+		goto err_tx_ring;
+	}
+
+	uc->tchan->tc_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, -1, RINGACC_RING_USE_PROXY);
+	if (!uc->tchan->tc_ring) {
+		ret = -EBUSY;
+		goto err_txc_ring;
+	}
+
+	memset(&ring_cfg, 0, sizeof(ring_cfg));
+	ring_cfg.size = 16;
+	ring_cfg.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8;
+	ring_cfg.mode = K3_NAV_RINGACC_RING_MODE_MESSAGE;
+
+	ret = k3_nav_ringacc_ring_cfg(uc->tchan->t_ring, &ring_cfg);
+	ret |= k3_nav_ringacc_ring_cfg(uc->tchan->tc_ring, &ring_cfg);
+
+	if (ret)
+		goto err_ringcfg;
+
+	return 0;
+
+err_ringcfg:
+	k3_nav_ringacc_ring_free(uc->tchan->tc_ring);
+	uc->tchan->tc_ring = NULL;
+err_txc_ring:
+	k3_nav_ringacc_ring_free(uc->tchan->t_ring);
+	uc->tchan->t_ring = NULL;
+err_tx_ring:
+	udma_put_tchan(uc);
+
+	return ret;
+}
+
+static void udma_free_rx_resources(struct udma_chan *uc)
+{
+	if (!uc->rchan)
+		return;
+
+	k3_nav_ringacc_ring_free(uc->rchan->fd_ring);
+	k3_nav_ringacc_ring_free(uc->rchan->r_ring);
+	uc->rchan->fd_ring = NULL;
+	uc->rchan->r_ring = NULL;
+
+	udma_put_rflow(uc);
+	udma_put_rchan(uc);
+}
+
+static int udma_alloc_rx_resources(struct udma_chan *uc)
+{
+	struct k3_nav_ring_cfg ring_cfg;
+	struct udma_dev *ud = uc->ud;
+	int fd_ring_id;
+	int ret;
+
+	ret = udma_get_rchan(uc);
+	if (ret)
+		return ret;
+
+	/* For MEM_TO_MEM we don't need rflow or rings */
+	if (uc->dir == DMA_MEM_TO_MEM)
+		return 0;
+
+	ret = udma_get_rflow(uc, uc->rchan->id);
+	if (ret) {
+		ret = -EBUSY;
+		goto err_rflow;
+	}
+
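+	/*
+	 * Default fd ring index: the rchan free-descriptor rings follow the
+	 * tchan and echan rings in the ringacc ring numbering.
+	 */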
+	fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id;
+
+	uc->rchan->fd_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, fd_ring_id,
+				RINGACC_RING_USE_PROXY);
+	if (!uc->rchan->fd_ring) {
+		ret = -EBUSY;
+		goto err_rx_ring;
+	}
+
+	uc->rchan->r_ring = k3_nav_ringacc_request_ring(
+				ud->ringacc, -1, RINGACC_RING_USE_PROXY);
+	if (!uc->rchan->r_ring) {
+		ret = -EBUSY;
+		goto err_rxc_ring;
+	}
+
+	memset(&ring_cfg, 0, sizeof(ring_cfg));
+	ring_cfg.size = 16;
+	ring_cfg.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8;
+	ring_cfg.mode = K3_NAV_RINGACC_RING_MODE_MESSAGE;
+
+	ret = k3_nav_ringacc_ring_cfg(uc->rchan->fd_ring, &ring_cfg);
+	ret |= k3_nav_ringacc_ring_cfg(uc->rchan->r_ring, &ring_cfg);
+
+	if (ret)
+		goto err_ringcfg;
+
+	return 0;
+
+err_ringcfg:
+	k3_nav_ringacc_ring_free(uc->rchan->r_ring);
+	uc->rchan->r_ring = NULL;
+err_rxc_ring:
+	k3_nav_ringacc_ring_free(uc->rchan->fd_ring);
+	uc->rchan->fd_ring = NULL;
+err_rx_ring:
+	udma_put_rflow(uc);
+err_rflow:
+	udma_put_rchan(uc);
+
+	return ret;
+}
+
+static int udma_alloc_tchan_sci_req(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int tc_ring = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+	struct ti_sci_msg_rm_udmap_tx_ch_cfg req;
+	u32 mode;
+	int ret;
+
+	if (uc->pkt_mode)
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
+	else
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR;
+
+	req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID;
+	req.nav_id = ud->tisci_dev_id;
+	req.index = uc->tchan->id;
+	req.tx_chan_type = mode;
+	if (uc->dir == DMA_MEM_TO_MEM)
+		req.tx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2;
+	else
+		req.tx_fetch_size = cppi5_hdesc_calc_size(uc->needs_epib,
+							  uc->psd_size,
+							  0) >> 2;
+	req.txcq_qnum = tc_ring;
+
+	ret = ud->tisci_udmap_ops->tx_ch_cfg(ud->tisci, &req);
+	if (ret)
+		dev_err(ud->dev, "tisci tx alloc failed %d\n", ret);
+
+	return ret;
+}
+
+static int udma_alloc_rchan_sci_req(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int fd_ring = k3_nav_ringacc_get_ring_id(uc->rchan->fd_ring);
+	int rx_ring = k3_nav_ringacc_get_ring_id(uc->rchan->r_ring);
+	int tc_ring = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+	struct ti_sci_msg_rm_udmap_rx_ch_cfg req = { 0 };
+	struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 };
+	u32 mode;
+	int ret;
+
+	if (uc->pkt_mode)
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR;
+	else
+		mode = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR;
+
+	req.valid_params = TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID;
+	req.nav_id = ud->tisci_dev_id;
+	req.index = uc->rchan->id;
+	req.rx_chan_type = mode;
+	if (uc->dir == DMA_MEM_TO_MEM) {
+		req.rx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2;
+		req.rxcq_qnum = tc_ring;
+	} else {
+		req.rx_fetch_size = cppi5_hdesc_calc_size(uc->needs_epib,
+							  uc->psd_size,
+							  0) >> 2;
+		req.rxcq_qnum = rx_ring;
+	}
+	if (uc->rflow->id != uc->rchan->id && uc->dir != DMA_MEM_TO_MEM) {
+		req.flowid_start = uc->rflow->id;
+		req.flowid_cnt = 1;
+		req.valid_params |=
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID;
+	}
+
+	ret = ud->tisci_udmap_ops->rx_ch_cfg(ud->tisci, &req);
+	if (ret) {
+		dev_err(ud->dev, "tisci rx %u cfg failed %d\n",
+			uc->rchan->id, ret);
+		return ret;
+	}
+	if (uc->dir == DMA_MEM_TO_MEM)
+		return ret;
+
+	flow_req.valid_params =
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DESC_TYPE_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_SEL_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ0_SZ0_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ1_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ2_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ3_QNUM_VALID |
+			TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PS_LOCATION_VALID;
+
+	flow_req.nav_id = ud->tisci_dev_id;
+	flow_req.flow_index = uc->rflow->id;
+
+	if (uc->needs_epib)
+		flow_req.rx_einfo_present = 1;
+	else
+		flow_req.rx_einfo_present = 0;
+
+	if (uc->psd_size)
+		flow_req.rx_psinfo_present = 1;
+	else
+		flow_req.rx_psinfo_present = 0;
+
+	flow_req.rx_error_handling = 0;
+	flow_req.rx_desc_type = 0;
+	flow_req.rx_dest_qnum = rx_ring;
+	flow_req.rx_src_tag_hi_sel = 2;
+	flow_req.rx_src_tag_lo_sel = 4;
+	flow_req.rx_dest_tag_hi_sel = 5;
+	flow_req.rx_dest_tag_lo_sel = 4;
+	flow_req.rx_fdq0_sz0_qnum = fd_ring;
+	flow_req.rx_fdq1_qnum = fd_ring;
+	flow_req.rx_fdq2_qnum = fd_ring;
+	flow_req.rx_fdq3_qnum = fd_ring;
+	flow_req.rx_ps_location = 0;
+
+	ret = ud->tisci_udmap_ops->rx_flow_cfg(ud->tisci, &flow_req);
+	if (ret)
+		dev_err(ud->dev, "tisci rx %u flow %u cfg failed %d\n",
+			uc->rchan->id, uc->rflow->id, ret);
+
+	return ret;
+}
+
+static int udma_alloc_chan_resources(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	pr_debug("%s: chan:%d as %s\n",
+		 __func__, uc->id, udma_get_dir_text(uc->dir));
+
+	switch (uc->dir) {
+	case DMA_MEM_TO_MEM:
+		/* Non synchronized - mem to mem type of transfer */
+		ret = udma_get_chan_pair(uc);
+		if (ret)
+			return ret;
+
+		ret = udma_alloc_tx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
+		ret = udma_alloc_rx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
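+		/* PSI-L destination thread IDs are marked by bit 15 (0x8000) */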
+		uc->src_thread = ud->psil_base + uc->tchan->id;
+		uc->dst_thread = (ud->psil_base + uc->rchan->id) | 0x8000;
+		break;
+	case DMA_MEM_TO_DEV:
+		/* Slave transfer synchronized - mem to dev (TX) transfer */
+		ret = udma_alloc_tx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
+		uc->src_thread = ud->psil_base + uc->tchan->id;
+		uc->dst_thread = uc->slave_thread_id;
+		if (!(uc->dst_thread & 0x8000))
+			uc->dst_thread |= 0x8000;
+
+		break;
+	case DMA_DEV_TO_MEM:
+		/* Slave transfer synchronized - dev to mem (RX) transfer */
+		ret = udma_alloc_rx_resources(uc);
+		if (ret)
+			goto err_free_res;
+
+		uc->src_thread = uc->slave_thread_id;
+		uc->dst_thread = (ud->psil_base + uc->rchan->id) | 0x8000;
+
+		break;
+	default:
+		/* Can not happen */
+		pr_debug("%s: chan:%d invalid direction (%u)\n",
+			 __func__, uc->id, uc->dir);
+		return -EINVAL;
+	}
+
+	/* We have channel indexes and rings */
+	if (uc->dir == DMA_MEM_TO_MEM) {
+		ret = udma_alloc_tchan_sci_req(uc);
+		if (ret)
+			goto err_free_res;
+
+		ret = udma_alloc_rchan_sci_req(uc);
+		if (ret)
+			goto err_free_res;
+	} else {
+		/* Slave transfer */
+		if (uc->dir == DMA_MEM_TO_DEV) {
+			ret = udma_alloc_tchan_sci_req(uc);
+			if (ret)
+				goto err_free_res;
+		} else {
+			ret = udma_alloc_rchan_sci_req(uc);
+			if (ret)
+				goto err_free_res;
+		}
+	}
+
+	/* PSI-L pairing */
+	ret = udma_navss_psil_pair(ud, uc->src_thread, uc->dst_thread);
+	if (ret) {
+		dev_err(ud->dev, "k3_nav_psil_request_link fail\n");
+		goto err_free_res;
+	}
+
+	return 0;
+
+err_free_res:
+	udma_free_tx_resources(uc);
+	udma_free_rx_resources(uc);
+	uc->slave_thread_id = -1;
+	return ret;
+}
+
+static void udma_free_chan_resources(struct udma_chan *uc)
+{
+	/* Some configuration to UDMA-P channel: disable, reset, whatever */
+
+	/* Release PSI-L pairing */
+	udma_navss_psil_unpair(uc->ud, uc->src_thread, uc->dst_thread);
+
+	/* Reset the rings for a new start */
+	udma_reset_rings(uc);
+	udma_free_tx_resources(uc);
+	udma_free_rx_resources(uc);
+
+	uc->slave_thread_id = -1;
+	uc->dir = DMA_MEM_TO_MEM;
+}
+
+static int udma_get_mmrs(struct udevice *dev)
+{
+	struct udma_dev *ud = dev_get_priv(dev);
+	int i;
+
+	for (i = 0; i < MMR_LAST; i++) {
+		fdt_addr_t addr = devfdt_get_addr_name(dev, mmr_names[i]);
+
+		if (addr == FDT_ADDR_T_NONE)
+			return -EINVAL;
+		ud->mmrs[i] = (uint32_t *)addr;
+	}
+
+	return 0;
+}
+
+#define UDMA_MAX_CHANNELS	192
+
+static int udma_probe(struct udevice *dev)
+{
+	struct dma_dev_priv *uc_priv = dev_get_uclass_priv(dev);
+	struct udma_dev *ud = dev_get_priv(dev);
+	int i, ret;
+	u32 cap2, cap3;
+	struct udevice *tmp;
+	struct udevice *tisci_dev = NULL;
+
+	ret = udma_get_mmrs(dev);
+	if (ret)
+		return ret;
+
+	ret = uclass_get_device_by_phandle(UCLASS_MISC, dev,
+					   "ti,ringacc", &tmp);
+	if (ret) {
+		dev_err(dev, "Failed to get ti,ringacc device: %d\n", ret);
+		return ret;
+	}
+	ud->ringacc = dev_get_priv(tmp);
+
+	ud->psil_base = dev_read_u32_default(dev, "ti,psil-base", 0);
+	if (!ud->psil_base) {
+		dev_err(dev, "Missing ti,psil-base property\n");
+		return -EINVAL;
+	}
+
+	ret = uclass_get_device_by_name(UCLASS_FIRMWARE, "dmsc", &tisci_dev);
+	if (ret) {
+		debug("TISCI RA RM get failed (%d)\n", ret);
+		ud->tisci = NULL;
+		return 0;
+	}
+	ud->tisci = (struct ti_sci_handle *)
+			 (ti_sci_get_handle_from_sysfw(tisci_dev));
+
+	ret = dev_read_u32_default(dev, "ti,sci", 0);
+	if (!ret) {
+		dev_err(dev, "TISCI RA RM disabled\n");
+		ud->tisci = NULL;
+	}
+
+	if (ud->tisci) {
+		ofnode navss_ofnode = ofnode_get_parent(dev_ofnode(dev));
+
+		ud->tisci_dev_id = -1;
+		ret = dev_read_u32(dev, "ti,sci-dev-id", &ud->tisci_dev_id);
+		if (ret) {
+			dev_err(dev, "ti,sci-dev-id read failure %d\n", ret);
+			return ret;
+		}
+
+		ud->tisci_navss_dev_id = -1;
+		ret = ofnode_read_u32(navss_ofnode, "ti,sci-dev-id",
+				      &ud->tisci_navss_dev_id);
+		if (ret) {
+			dev_err(dev, "navss sci-dev-id read failure %d\n", ret);
+			return ret;
+		}
+
+		ud->tisci_udmap_ops = &ud->tisci->ops.rm_udmap_ops;
+		ud->tisci_psil_ops = &ud->tisci->ops.rm_psil_ops;
+	}
+
+	ud->is_coherent = dev_read_bool(dev, "dma-coherent");
+
+	cap2 = udma_read(ud->mmrs[MMR_GCFG], 0x28);
+	cap3 = udma_read(ud->mmrs[MMR_GCFG], 0x2c);
+
+	ud->rflow_cnt = cap3 & 0x3fff;
+	ud->tchan_cnt = cap2 & 0x1ff;
+	ud->echan_cnt = (cap2 >> 9) & 0x1ff;
+	ud->rchan_cnt = (cap2 >> 18) & 0x1ff;
+	ud->ch_count  = ud->tchan_cnt + ud->rchan_cnt;
+
+	dev_info(dev,
+		 "Number of channels: %u (tchan: %u, echan: %u, rchan: %u dev-id %u)\n",
+		 ud->ch_count, ud->tchan_cnt, ud->echan_cnt, ud->rchan_cnt,
+		 ud->tisci_dev_id);
+	dev_info(dev, "Number of rflows: %u\n", ud->rflow_cnt);
+
+	ud->channels = devm_kcalloc(dev, ud->ch_count, sizeof(*ud->channels),
+				    GFP_KERNEL);
+	ud->tchan_map = devm_kcalloc(dev, BITS_TO_LONGS(ud->tchan_cnt),
+				     sizeof(unsigned long), GFP_KERNEL);
+	ud->tchans = devm_kcalloc(dev, ud->tchan_cnt,
+				  sizeof(*ud->tchans), GFP_KERNEL);
+	ud->rchan_map = devm_kcalloc(dev, BITS_TO_LONGS(ud->rchan_cnt),
+				     sizeof(unsigned long), GFP_KERNEL);
+	ud->rchans = devm_kcalloc(dev, ud->rchan_cnt,
+				  sizeof(*ud->rchans), GFP_KERNEL);
+	ud->rflow_map = devm_kcalloc(dev, BITS_TO_LONGS(ud->rflow_cnt),
+				     sizeof(unsigned long), GFP_KERNEL);
+	ud->rflows = devm_kcalloc(dev, ud->rflow_cnt,
+				  sizeof(*ud->rflows), GFP_KERNEL);
+
+	if (!ud->channels || !ud->tchan_map || !ud->rchan_map ||
+	    !ud->rflow_map || !ud->tchans || !ud->rchans || !ud->rflows)
+		return -ENOMEM;
+
+	for (i = 0; i < ud->tchan_cnt; i++) {
+		struct udma_tchan *tchan = &ud->tchans[i];
+
+		tchan->id = i;
+		tchan->reg_rt = ud->mmrs[MMR_TCHANRT] + UDMA_CH_1000(i);
+	}
+
+	for (i = 0; i < ud->rchan_cnt; i++) {
+		struct udma_rchan *rchan = &ud->rchans[i];
+
+		rchan->id = i;
+		rchan->reg_rt = ud->mmrs[MMR_RCHANRT] + UDMA_CH_1000(i);
+	}
+
+	for (i = 0; i < ud->rflow_cnt; i++) {
+		struct udma_rflow *rflow = &ud->rflows[i];
+
+		rflow->id = i;
+	}
+
+	for (i = 0; i < ud->ch_count; i++) {
+		struct udma_chan *uc = &ud->channels[i];
+
+		uc->ud = ud;
+		uc->id = i;
+		uc->slave_thread_id = -1;
+		uc->tchan = NULL;
+		uc->rchan = NULL;
+		uc->dir = DMA_MEM_TO_MEM;
+		sprintf(uc->name, "UDMA chan%d", i);
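+		/* channel 0 is reserved for MEM_TO_MEM memcpy transfers */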
+		if (!i)
+			uc->in_use = true;
+	}
+
+	pr_debug("UDMA(rev: 0x%08x) CAP0-3: 0x%08x, 0x%08x, 0x%08x, 0x%08x\n",
+		 udma_read(ud->mmrs[MMR_GCFG], 0),
+		 udma_read(ud->mmrs[MMR_GCFG], 0x20),
+		 udma_read(ud->mmrs[MMR_GCFG], 0x24),
+		 udma_read(ud->mmrs[MMR_GCFG], 0x28),
+		 udma_read(ud->mmrs[MMR_GCFG], 0x2c));
+
+	uc_priv->supported = DMA_SUPPORTS_MEM_TO_MEM | DMA_SUPPORTS_MEM_TO_DEV;
+
+	return 0;
+}
+
+static int udma_prep_dma_memcpy(struct udma_chan *uc, dma_addr_t dest,
+				 dma_addr_t src, size_t len)
+{
+	u32 tc_ring_id = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+	struct cppi5_tr_type15_t *tr_req;
+	int num_tr;
+	size_t tr_size = sizeof(struct cppi5_tr_type15_t);
+	u16 tr0_cnt0, tr0_cnt1, tr1_cnt0;
+	unsigned long dummy;
+	void *tr_desc;
+	size_t desc_size;
+
+	if (len < SZ_64K) {
+		num_tr = 1;
+		tr0_cnt0 = len;
+		tr0_cnt1 = 1;
+	} else {
+		unsigned long align_to = __ffs(src | dest);
+
+		if (align_to > 3)
+			align_to = 3;
+		/*
+		 * Keep simple: tr0: SZ_64K-alignment blocks,
+		 *		tr1: the remaining
+		 */
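+		/*
+		 * e.g. len = 200000, align_to = 3: tr0_cnt0 = 65528,
+		 * tr0_cnt1 = 3 (3 * 65528 = 196584 bytes via tr0) and
+		 * tr1_cnt0 = 3416 bytes are left for tr1.
+		 */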
+		num_tr = 2;
+		tr0_cnt0 = (SZ_64K - BIT(align_to));
+		if (len / tr0_cnt0 >= SZ_64K) {
+			dev_err(uc->ud->dev, "size %zu is not supported\n",
+				len);
+			return -EINVAL;
+		}
+
+		tr0_cnt1 = len / tr0_cnt0;
+		tr1_cnt0 = len % tr0_cnt0;
+	}
+
+	desc_size = cppi5_trdesc_calc_size(num_tr, tr_size);
+	tr_desc = dma_alloc_coherent(desc_size, &dummy);
+	if (!tr_desc)
+		return -ENOMEM;
+	memset(tr_desc, 0, desc_size);
+
+	cppi5_trdesc_init(tr_desc, num_tr, tr_size, 0, 0);
+	cppi5_desc_set_pktids(tr_desc, uc->id, 0x3fff);
+	cppi5_desc_set_retpolicy(tr_desc, 0, tc_ring_id);
+
+	tr_req = tr_desc + tr_size;
+
+	cppi5_tr_init(&tr_req[0].flags, CPPI5_TR_TYPE15, false, true,
+		      CPPI5_TR_EVENT_SIZE_COMPLETION, 1);
+	cppi5_tr_csf_set(&tr_req[0].flags, CPPI5_TR_CSF_SUPR_EVT);
+
+	tr_req[0].addr = src;
+	tr_req[0].icnt0 = tr0_cnt0;
+	tr_req[0].icnt1 = tr0_cnt1;
+	tr_req[0].icnt2 = 1;
+	tr_req[0].icnt3 = 1;
+	tr_req[0].dim1 = tr0_cnt0;
+
+	tr_req[0].daddr = dest;
+	tr_req[0].dicnt0 = tr0_cnt0;
+	tr_req[0].dicnt1 = tr0_cnt1;
+	tr_req[0].dicnt2 = 1;
+	tr_req[0].dicnt3 = 1;
+	tr_req[0].ddim1 = tr0_cnt0;
+
+	if (num_tr == 2) {
+		cppi5_tr_init(&tr_req[1].flags, CPPI5_TR_TYPE15, false, true,
+			      CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+		cppi5_tr_csf_set(&tr_req[1].flags, CPPI5_TR_CSF_SUPR_EVT);
+
+		tr_req[1].addr = src + tr0_cnt1 * tr0_cnt0;
+		tr_req[1].icnt0 = tr1_cnt0;
+		tr_req[1].icnt1 = 1;
+		tr_req[1].icnt2 = 1;
+		tr_req[1].icnt3 = 1;
+
+		tr_req[1].daddr = dest + tr0_cnt1 * tr0_cnt0;
+		tr_req[1].dicnt0 = tr1_cnt0;
+		tr_req[1].dicnt1 = 1;
+		tr_req[1].dicnt2 = 1;
+		tr_req[1].dicnt3 = 1;
+	}
+
+	cppi5_tr_csf_set(&tr_req[num_tr - 1].flags, CPPI5_TR_CSF_EOP);
+
+	if (!udma_is_coherent(uc)) {
+		flush_dcache_range((u64)tr_desc,
+				   ALIGN((u64)tr_desc + desc_size,
+					 ARCH_DMA_MINALIGN));
+	}
+
+	k3_nav_ringacc_ring_push(uc->tchan->t_ring, &tr_desc);
+
+	return 0;
+}
+
+static int udma_transfer(struct udevice *dev, int direction,
+			 void *dst, void *src, size_t len)
+{
+	struct udma_dev *ud = dev_get_priv(dev);
+	/* Channel0 is reserved for memcpy */
+	struct udma_chan *uc = &ud->channels[0];
+	dma_addr_t paddr = 0;
+	int ret;
+
+	ret = udma_alloc_chan_resources(uc);
+	if (ret)
+		return ret;
+
+	ret = udma_prep_dma_memcpy(uc, (dma_addr_t)dst, (dma_addr_t)src, len);
+	if (ret) {
+		udma_free_chan_resources(uc);
+		return ret;
+	}
+	udma_start(uc);
+	udma_poll_completion(uc, &paddr);
+	udma_stop(uc);
+
+	udma_free_chan_resources(uc);
+	return 0;
+}
+
+static int udma_request(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+	unsigned long dummy;
+	int ret;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+
+	uc = &ud->channels[dma->id];
+	ret = udma_alloc_chan_resources(uc);
+	if (ret) {
+		dev_err(dma->dev, "alloc dma res failed %d\n", ret);
+		return -EINVAL;
+	}
+
+	uc->hdesc_size = cppi5_hdesc_calc_size(uc->needs_epib,
+					       uc->psd_size, 0);
+	uc->hdesc_size = ALIGN(uc->hdesc_size, ARCH_DMA_MINALIGN);
+
+	if (uc->dir == DMA_MEM_TO_DEV) {
+		uc->desc_tx = dma_alloc_coherent(uc->hdesc_size, &dummy);
+		memset(uc->desc_tx, 0, uc->hdesc_size);
+	} else {
+		uc->desc_rx = dma_alloc_coherent(
+				uc->hdesc_size * UDMA_RX_DESC_NUM, &dummy);
+		memset(uc->desc_rx, 0, uc->hdesc_size * UDMA_RX_DESC_NUM);
+	}
+
+	uc->in_use = true;
+	uc->desc_rx_cur = 0;
+	uc->num_rx_bufs = 0;
+
+	return 0;
+}
+
+static int udma_free(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (udma_is_chan_running(uc))
+		udma_stop(uc);
+	udma_free_chan_resources(uc);
+
+	uc->in_use = false;
+
+	return 0;
+}
+
+static int udma_enable(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+	int ret;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	ret = udma_start(uc);
+
+	return ret;
+}
+
+static int udma_disable(struct dma *dma)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc;
+	int ret = 0;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (udma_is_chan_running(uc))
+		ret = udma_stop(uc);
+	else
+		dev_err(dma->dev, "%s not running\n", __func__);
+
+	return ret;
+}
+
+static int udma_send(struct dma *dma, void *src, size_t len, void *metadata)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct cppi5_host_desc_t *desc_tx;
+	dma_addr_t dma_src = (dma_addr_t)src;
+	struct ti_udma_drv_packet_data packet_data = { 0 };
+	dma_addr_t paddr;
+	struct udma_chan *uc;
+	u32 tc_ring_id;
+	int ret;
+
+	if (metadata)
+		packet_data = *((struct ti_udma_drv_packet_data *)metadata);
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (uc->dir != DMA_MEM_TO_DEV)
+		return -EINVAL;
+
+	tc_ring_id = k3_nav_ringacc_get_ring_id(uc->tchan->tc_ring);
+
+	desc_tx = uc->desc_tx;
+
+	cppi5_hdesc_reset_hbdesc(desc_tx);
+
+	cppi5_hdesc_init(desc_tx,
+			 uc->needs_epib ? CPPI5_INFO0_HDESC_EPIB_PRESENT : 0,
+			 uc->psd_size);
+	cppi5_hdesc_set_pktlen(desc_tx, len);
+	cppi5_hdesc_attach_buf(desc_tx, dma_src, len, dma_src, len);
+	cppi5_desc_set_pktids(&desc_tx->hdr, uc->id, 0x3fff);
+	cppi5_desc_set_retpolicy(&desc_tx->hdr, 0, tc_ring_id);
+	/* pass below information from caller */
+	cppi5_hdesc_set_pkttype(desc_tx, packet_data.pkt_type);
+	cppi5_desc_set_tags_ids(&desc_tx->hdr, 0, packet_data.dest_tag);
+
+	if (!udma_is_coherent(uc)) {
+		flush_dcache_range((u64)dma_src,
+				   ALIGN((u64)dma_src + len,
+					 ARCH_DMA_MINALIGN));
+		flush_dcache_range((u64)desc_tx,
+				   ALIGN((u64)desc_tx + uc->hdesc_size,
+					 ARCH_DMA_MINALIGN));
+	}
+
+	ret = k3_nav_ringacc_ring_push(uc->tchan->t_ring, &uc->desc_tx);
+	if (ret) {
+		dev_err(dma->dev, "TX dma push fail ch_id %lu %d\n",
+			dma->id, ret);
+		return ret;
+	}
+
+	udma_poll_completion(uc, &paddr);
+
+	return 0;
+}
+
+static int udma_receive(struct dma *dma, void **dst, void *metadata)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct cppi5_host_desc_t *desc_rx;
+	dma_addr_t buf_dma;
+	struct udma_chan *uc;
+	u32 buf_dma_len, pkt_len;
+	u32 port_id = 0;
+	int ret;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (uc->dir != DMA_DEV_TO_MEM)
+		return -EINVAL;
+	if (!uc->num_rx_bufs)
+		return -EINVAL;
+
+	ret = k3_nav_ringacc_ring_pop(uc->rchan->r_ring, &desc_rx);
+	if (ret && ret != -ENODATA) {
+		dev_err(dma->dev, "rx dma fail ch_id:%lu %d\n", dma->id, ret);
+		return ret;
+	} else if (ret == -ENODATA) {
+		return 0;
+	}
+
+	/* invalidate cache data */
+	if (!udma_is_coherent(uc)) {
+		invalidate_dcache_range((ulong)desc_rx,
+					(ulong)desc_rx + uc->hdesc_size);
+	}
+
+	cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
+	pkt_len = cppi5_hdesc_get_pktlen(desc_rx);
+
+	/* invalidate cache data */
+	if (!udma_is_coherent(uc)) {
+		invalidate_dcache_range((ulong)buf_dma,
+					(ulong)(buf_dma + buf_dma_len));
+	}
+
+	cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);
+
+	*dst = (void *)buf_dma;
+	uc->num_rx_bufs--;
+
+	return pkt_len;
+}
+
+static int udma_of_xlate(struct dma *dma, struct ofnode_phandle_args *args)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct udma_chan *uc = &ud->channels[0];
+	ofnode chconf_node, slave_node;
+	char prop[50];
+	u32 val;
+
+	for (val = 0; val < ud->ch_count; val++) {
+		uc = &ud->channels[val];
+		if (!uc->in_use)
+			break;
+	}
+
+	if (val == ud->ch_count)
+		return -EBUSY;
+
+	uc->dir = DMA_DEV_TO_MEM;
+	if (args->args[2] == UDMA_DIR_TX)
+		uc->dir = DMA_MEM_TO_DEV;
+
+	slave_node = ofnode_get_by_phandle(args->args[0]);
+	if (!ofnode_valid(slave_node)) {
+		dev_err(ud->dev, "slave node is missing\n");
+		return -EINVAL;
+	}
+
+	snprintf(prop, sizeof(prop), "ti,psil-config%u", args->args[1]);
+	chconf_node = ofnode_find_subnode(slave_node, prop);
+	if (!ofnode_valid(chconf_node)) {
+		dev_err(ud->dev, "Channel configuration node is missing\n");
+		return -EINVAL;
+	}
+
+	if (!ofnode_read_u32(chconf_node, "linux,udma-mode", &val)) {
+		if (val == UDMA_PKT_MODE)
+			uc->pkt_mode = true;
+	}
+
+	if (!ofnode_read_u32(chconf_node, "statictr-type", &val))
+		uc->static_tr_type = val;
+
+	uc->needs_epib = ofnode_read_bool(chconf_node, "ti,needs-epib");
+	if (!ofnode_read_u32(chconf_node, "ti,psd-size", &val))
+		uc->psd_size = val;
+	uc->metadata_size = (uc->needs_epib ? 16 : 0) + uc->psd_size;
+
+	if (ofnode_read_u32(slave_node, "ti,psil-base", &val)) {
+		dev_err(ud->dev, "ti,psil-base is missing\n");
+		return -EINVAL;
+	}
+
+	uc->slave_thread_id = val + args->args[1];
+
+	dma->id = uc->id;
+	pr_debug("Allocated dma chn:%lu epib:%d psdata:%u meta:%u thread_id:%x\n",
+		 dma->id, uc->needs_epib,
+		 uc->psd_size, uc->metadata_size,
+		 uc->slave_thread_id);
+
+	return 0;
+}
+
+int udma_prepare_rcv_buf(struct dma *dma, void *dst, size_t size)
+{
+	struct udma_dev *ud = dev_get_priv(dma->dev);
+	struct cppi5_host_desc_t *desc_rx;
+	dma_addr_t dma_dst;
+	struct udma_chan *uc;
+	u32 desc_num;
+
+	if (dma->id >= (ud->rchan_cnt + ud->tchan_cnt)) {
+		dev_err(dma->dev, "invalid dma ch_id %lu\n", dma->id);
+		return -EINVAL;
+	}
+	uc = &ud->channels[dma->id];
+
+	if (uc->dir != DMA_DEV_TO_MEM)
+		return -EINVAL;
+
+	if (uc->num_rx_bufs >= UDMA_RX_DESC_NUM)
+		return -EINVAL;
+
+	desc_num = uc->desc_rx_cur % UDMA_RX_DESC_NUM;
+	desc_rx = uc->desc_rx + (desc_num * uc->hdesc_size);
+	dma_dst = (dma_addr_t)dst;
+
+	cppi5_hdesc_reset_hbdesc(desc_rx);
+
+	cppi5_hdesc_init(desc_rx,
+			 uc->needs_epib ? CPPI5_INFO0_HDESC_EPIB_PRESENT : 0,
+			 uc->psd_size);
+	cppi5_hdesc_set_pktlen(desc_rx, size);
+	cppi5_hdesc_attach_buf(desc_rx, dma_dst, size, dma_dst, size);
+
+	if (!udma_is_coherent(uc)) {
+		flush_dcache_range((u64)desc_rx,
+				   ALIGN((u64)desc_rx + uc->hdesc_size,
+					 ARCH_DMA_MINALIGN));
+	}
+
+	k3_nav_ringacc_ring_push(uc->rchan->fd_ring, &desc_rx);
+
+	uc->num_rx_bufs++;
+	uc->desc_rx_cur++;
+
+	return 0;
+}
+
+static const struct dma_ops udma_ops = {
+	.transfer	= udma_transfer,
+	.of_xlate	= udma_of_xlate,
+	.request	= udma_request,
+	.free		= udma_free,
+	.enable		= udma_enable,
+	.disable	= udma_disable,
+	.send		= udma_send,
+	.receive	= udma_receive,
+	.prepare_rcv_buf = udma_prepare_rcv_buf,
+};
+
+static const struct udevice_id udma_ids[] = {
+	{ .compatible = "ti,k3-navss-udmap" },
+	{ }
+};
+
+U_BOOT_DRIVER(ti_edma3) = {
+	.name	= "ti-udma",
+	.id	= UCLASS_DMA,
+	.of_match = udma_ids,
+	.ops	= &udma_ops,
+	.probe	= udma_probe,
+	.priv_auto_alloc_size = sizeof(struct udma_dev),
+};
diff --git a/include/dt-bindings/dma/k3-udma.h b/include/dt-bindings/dma/k3-udma.h
new file mode 100644
index 000000000000..670e1232b485
--- /dev/null
+++ b/include/dt-bindings/dma/k3-udma.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ */
+
+#ifndef __DT_TI_UDMA_H
+#define __DT_TI_UDMA_H
+
+#define UDMA_TR_MODE		0
+#define UDMA_PKT_MODE		1
+
+#define UDMA_DIR_TX		0
+#define UDMA_DIR_RX		1
+
+#define PSIL_STATIC_TR_NONE	0
+#define PSIL_STATIC_TR_XY	1
+#define PSIL_STATIC_TR_MCAN	2
+
+#define UDMA_PDMA_TR_XY(id)				\
+	ti,psil-config##id {				\
+		linux,udma-mode = <UDMA_TR_MODE>;	\
+		statictr-type = <PSIL_STATIC_TR_XY>;	\
+	}
+
+#define UDMA_PDMA_PKT_XY(id)				\
+	ti,psil-config##id {				\
+		linux,udma-mode = <UDMA_PKT_MODE>;	\
+		statictr-type = <PSIL_STATIC_TR_XY>;	\
+	}
+
+#endif /* __DT_TI_UDMA_H */
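
The helpers above are meant to be instantiated from a board dts. As a rough
sketch (the peripheral node, unit address and psil-base value below are
hypothetical, not part of this series), the dma specifier cells are
<slave phandle, psil-config index, direction>, matching the driver's
of_xlate() and #dma-cells = <3>:

	mcu_spi0: spi@40300000 {
		/* ... */
		ti,psil-base = <0x4100>;
		dmas = <&mcu_udmap &mcu_spi0 0 UDMA_DIR_TX>,
		       <&mcu_udmap &mcu_spi0 1 UDMA_DIR_RX>;
		dma-names = "tx0", "rx0";

		UDMA_PDMA_TR_XY(0);
		UDMA_PDMA_TR_XY(1);
	};
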
diff --git a/include/linux/soc/ti/ti-udma.h b/include/linux/soc/ti/ti-udma.h
new file mode 100644
index 000000000000..e9d4226c48d9
--- /dev/null
+++ b/include/linux/soc/ti/ti-udma.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
+ *  Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
+ */
+
+#ifndef __TI_UDMA_H
+#define __TI_UDMA_H
+
+/**
+ * struct ti_udma_drv_packet_data - TI UDMA transfer specific data
+ *
+ * @pkt_type: Packet Type - specific for each DMA client HW
+ * @dest_tag: Destination tag - specific for each DMA client HW
+ *
+ * TI UDMA transfer specific data passed as part of DMA transfer to
+ * the DMA client HW in UDMA descriptors.
+ */
+struct ti_udma_drv_packet_data {
+	u32	pkt_type;
+	u32	dest_tag;
+};
+
+#endif /* __TI_UDMA_H */
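
For illustration, a client fills this struct and hands it to dma_send() as
the metadata argument. A minimal sketch (the helper name and the tag/type
values are made up; dma_tx is a channel obtained earlier via
dma_get_by_name()):

	/* hypothetical helper: send one buffer with UDMA metadata attached */
	static int send_with_tag(struct dma *dma_tx, void *buf, size_t len)
	{
		struct ti_udma_drv_packet_data packet_data = {
			.pkt_type = 0x7,	/* illustrative packet type */
			.dest_tag = 0x400,	/* illustrative dest tag */
		};

		return dma_send(dma_tx, buf, len, &packet_data);
	}
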
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [U-Boot] [PATCH v4 5/7] soc: keystone: Merge into ti specific directory
  2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
                   ` (3 preceding siblings ...)
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA Vignesh R
@ 2019-02-05 12:01 ` Vignesh R
  2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 6/7] arm64: dts: ti: k3-am65: add mcu navss nodes Vignesh R
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 7/7] configs: am65x_evm_a53: Enable DMA related configs Vignesh R
  6 siblings, 1 reply; 16+ messages in thread
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

Merge drivers/soc/keystone/ into drivers/soc/ti/
and convert CONFIG_TI_KEYSTONE_SERDES into Kconfig.

Signed-off-by: Vignesh R <vigneshr@ti.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
---
 arch/arm/mach-keystone/Kconfig                 | 8 ++++++++
 drivers/soc/Makefile                           | 1 -
 drivers/soc/keystone/Makefile                  | 3 ---
 drivers/soc/ti/Kconfig                         | 6 ++++++
 drivers/soc/ti/Makefile                        | 5 ++---
 drivers/soc/{keystone => ti}/keystone_serdes.c | 0
 include/configs/ti_armv7_keystone2.h           | 3 ---
 scripts/config_whitelist.txt                   | 1 -
 8 files changed, 16 insertions(+), 11 deletions(-)
 delete mode 100644 drivers/soc/keystone/Makefile
 rename drivers/soc/{keystone => ti}/keystone_serdes.c (100%)

diff --git a/arch/arm/mach-keystone/Kconfig b/arch/arm/mach-keystone/Kconfig
index d24596eccb0d..e06eba5aea1f 100644
--- a/arch/arm/mach-keystone/Kconfig
+++ b/arch/arm/mach-keystone/Kconfig
@@ -9,18 +9,24 @@ config TARGET_K2HK_EVM
 	select SPL_BOARD_INIT if SPL
 	select CMD_DDR3
 	imply DM_I2C
+	imply SOC_TI
+	imply TI_KEYSTONE_SERDES
 
 config TARGET_K2E_EVM
 	bool "TI Keystone 2 Edison EVM"
 	select SPL_BOARD_INIT if SPL
 	select CMD_DDR3
 	imply DM_I2C
+	imply SOC_TI
+	imply TI_KEYSTONE_SERDES
 
 config TARGET_K2L_EVM
 	bool "TI Keystone 2 Lamar EVM"
 	select SPL_BOARD_INIT if SPL
 	select CMD_DDR3
 	imply DM_I2C
+	imply SOC_TI
+	imply TI_KEYSTONE_SERDES
 
 config TARGET_K2G_EVM
 	bool "TI Keystone 2 Galileo EVM"
@@ -29,6 +35,8 @@ config TARGET_K2G_EVM
         select TI_I2C_BOARD_DETECT
 	select CMD_DDR3
 	imply DM_I2C
+	imply SOC_TI
+	imply TI_KEYSTONE_SERDES
 
 endchoice
 
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 8b7ead546e1c..ce253b7aa886 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -2,5 +2,4 @@
 #
 # Makefile for the U-Boot SOC specific device drivers.
 
-obj-$(CONFIG_ARCH_KEYSTONE)	+= keystone/
 obj-$(CONFIG_SOC_TI) += ti/
diff --git a/drivers/soc/keystone/Makefile b/drivers/soc/keystone/Makefile
deleted file mode 100644
index dfebb143e09b..000000000000
--- a/drivers/soc/keystone/Makefile
+++ /dev/null
@@ -1,3 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0+
-
-obj-$(CONFIG_TI_KEYSTONE_SERDES) += keystone_serdes.o
diff --git a/drivers/soc/ti/Kconfig b/drivers/soc/ti/Kconfig
index 8c0f3c07b23f..e4f88344487e 100644
--- a/drivers/soc/ti/Kconfig
+++ b/drivers/soc/ti/Kconfig
@@ -16,5 +16,11 @@ config TI_K3_NAVSS_RINGACC
 	  and a consumer. There is one RINGACC module per NAVSS on TI AM65x SoCs
 	  If unsure, say N.
 
+config TI_KEYSTONE_SERDES
+	bool "Keystone SerDes driver for ethernet"
+	depends on ARCH_KEYSTONE
+	help
+	 SerDes driver for Keystone SoC used for ethernet support on TI
+	 K2 platforms.
 
 endif # SOC_TI
diff --git a/drivers/soc/ti/Makefile b/drivers/soc/ti/Makefile
index 63e21aaad40f..4ec04ee1257e 100644
--- a/drivers/soc/ti/Makefile
+++ b/drivers/soc/ti/Makefile
@@ -1,5 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0+
-#
-# TI K3 SOC drivers
-#
+
 obj-$(CONFIG_TI_K3_NAVSS_RINGACC)	+= k3-navss-ringacc.o
+obj-$(CONFIG_TI_KEYSTONE_SERDES)	+= keystone_serdes.o
diff --git a/drivers/soc/keystone/keystone_serdes.c b/drivers/soc/ti/keystone_serdes.c
similarity index 100%
rename from drivers/soc/keystone/keystone_serdes.c
rename to drivers/soc/ti/keystone_serdes.c
diff --git a/include/configs/ti_armv7_keystone2.h b/include/configs/ti_armv7_keystone2.h
index 0c7d66486832..6d8536a815f8 100644
--- a/include/configs/ti_armv7_keystone2.h
+++ b/include/configs/ti_armv7_keystone2.h
@@ -134,9 +134,6 @@
 #define CONFIG_KSNET_SERDES_SGMII2_BASE		KS2_SGMII_SERDES2_BASE
 #define CONFIG_KSNET_SERDES_LANES_PER_SGMII	KS2_LANES_PER_SGMII_SERDES
 
-/* SerDes */
-#define CONFIG_TI_KEYSTONE_SERDES
-
 #define CONFIG_AEMIF_CNTRL_BASE		KS2_AEMIF_CNTRL_BASE
 
 /* I2C Configuration */
diff --git a/scripts/config_whitelist.txt b/scripts/config_whitelist.txt
index b425cc360fe1..294d5238b414 100644
--- a/scripts/config_whitelist.txt
+++ b/scripts/config_whitelist.txt
@@ -4419,7 +4419,6 @@ CONFIG_THOR_RESET_OFF
 CONFIG_THUNDERX
 CONFIG_TIMESTAMP
 CONFIG_TIZEN
-CONFIG_TI_KEYSTONE_SERDES
 CONFIG_TI_KSNAV
 CONFIG_TI_SPI_MMAP
 CONFIG_TMU_TIMER
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [U-Boot] [PATCH v4 6/7] arm64: dts: ti: k3-am65: add mcu navss nodes
  2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
                   ` (4 preceding siblings ...)
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 5/7] soc: keystone: Merge into ti specific directory Vignesh R
@ 2019-02-05 12:01 ` Vignesh R
  2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 7/7] configs: am65x_evm_a53: Enable DMA related configs Vignesh R
  6 siblings, 1 reply; 16+ messages in thread
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

From: Grygorii Strashko <grygorii.strashko@ti.com>

Add DT nodes for MCU NAVSS and its components to get DMA working on the
AM654 SoC.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
---
 arch/arm/dts/k3-am654-base-board-u-boot.dtsi | 47 ++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/arch/arm/dts/k3-am654-base-board-u-boot.dtsi b/arch/arm/dts/k3-am654-base-board-u-boot.dtsi
index 143eb6d63092..c5d23d0203ab 100644
--- a/arch/arm/dts/k3-am654-base-board-u-boot.dtsi
+++ b/arch/arm/dts/k3-am654-base-board-u-boot.dtsi
@@ -4,6 +4,7 @@
  */
 
 #include <dt-bindings/pinctrl/k3-am65.h>
+#include <dt-bindings/dma/k3-udma.h>
 
 / {
 	chosen {
@@ -63,6 +64,52 @@
 		pinctrl-single,register-width = <32>;
 		pinctrl-single,function-mask = <0xffffffff>;
 	};
+
+	navss_mcu: navss-mcu {
+		compatible = "simple-bus";
+		#address-cells = <2>;
+		#size-cells = <2>;
+		ranges;
+
+		ti,sci-dev-id = <119>;
+
+		mcu_ringacc: ringacc@2b800000 {
+			compatible = "ti,am654-navss-ringacc";
+			reg =	<0x0 0x2b800000 0x0 0x400000>,
+				<0x0 0x2b000000 0x0 0x400000>,
+				<0x0 0x28590000 0x0 0x100>,
+				<0x0 0x2a500000 0x0 0x40000>;
+			reg-names = "rt", "fifos",
+				    "proxy_gcfg", "proxy_target";
+			ti,num-rings = <286>;
+			ti,sci-rm-range-gp-rings = <0x2>; /* GP ring range */
+			ti,dma-ring-reset-quirk;
+			ti,sci = <&dmsc>;
+			ti,sci-dev-id = <195>;
+		};
+
+		mcu_udmap: udmap@285c0000 {
+			compatible = "ti,k3-navss-udmap";
+			reg =	<0x0 0x285c0000 0x0 0x100>,
+				<0x0 0x2a800000 0x0 0x40000>,
+				<0x0 0x2aa00000 0x0 0x40000>;
+			reg-names = "gcfg", "rchanrt", "tchanrt";
+			#dma-cells = <3>;
+
+			ti,ringacc = <&mcu_ringacc>;
+			ti,psil-base = <0x6000>;
+
+			ti,sci = <&dmsc>;
+			ti,sci-dev-id = <194>;
+
+			ti,sci-rm-range-tchan = <0x1>, /* TX_HCHAN */
+						<0x2>; /* TX_CHAN */
+			ti,sci-rm-range-rchan = <0x3>, /* RX_HCHAN */
+						<0x4>; /* RX_CHAN */
+			ti,sci-rm-range-rflow = <0x5>; /* GP RFLOW */
+			dma-coherent;
+		};
+	};
 };
 
 &cbass_wakeup {
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [U-Boot] [PATCH v4 7/7] configs: am65x_evm_a53: Enable DMA related configs
  2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
                   ` (5 preceding siblings ...)
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 6/7] arm64: dts: ti: k3-am65: add mcu navss nodes Vignesh R
@ 2019-02-05 12:01 ` Vignesh R
  2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
  6 siblings, 1 reply; 16+ messages in thread
From: Vignesh R @ 2019-02-05 12:01 UTC (permalink / raw)
  To: u-boot

From: Grygorii Strashko <grygorii.strashko@ti.com>

Enable TI K3 AM65x PSI-L, Ring Accelerator and UDMA drivers

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Vignesh R <vigneshr@ti.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
---
 configs/am65x_evm_a53_defconfig | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/configs/am65x_evm_a53_defconfig b/configs/am65x_evm_a53_defconfig
index 8f6fd25531b4..c058b75e0079 100644
--- a/configs/am65x_evm_a53_defconfig
+++ b/configs/am65x_evm_a53_defconfig
@@ -48,10 +48,11 @@ CONFIG_SPL_DM_SEQ_ALIAS=y
 CONFIG_CLK=y
 CONFIG_SPL_CLK=y
 CONFIG_CLK_TI_SCI=y
+CONFIG_DMA_CHANNELS=y
+CONFIG_TI_K3_NAVSS_UDMA=y
 CONFIG_TI_SCI_PROTOCOL=y
 CONFIG_DM_MAILBOX=y
 CONFIG_K3_SEC_PROXY=y
-CONFIG_MISC=y
 CONFIG_DM_MMC=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_K3_ARASAN=y
@@ -67,5 +68,6 @@ CONFIG_REMOTEPROC_K3=y
 CONFIG_DM_RESET=y
 CONFIG_RESET_TI_SCI=y
 CONFIG_DM_SERIAL=y
+CONFIG_SOC_TI=y
 CONFIG_SYSRESET=y
 CONFIG_SYSRESET_TI_SCI=y
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA Vignesh R
@ 2019-02-06 23:43   ` Tom Rini
  2019-04-12 16:28   ` [U-Boot] [U-Boot,v4,4/7] " Tom Rini
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-02-06 23:43 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:24PM +0530, Vignesh R wrote:

> The UDMA-P is intended to perform functions similar to (but significantly
> upgraded from) the packet-oriented DMA used on previous SoC devices. The
> UDMA-P module
> supports the transmission and reception of various packet types.
> The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
> channels. Channels in the UDMA-P can be configured to be either Packet-Based or
> Third-Party channels on a channel by channel basis.
> 
> The initial driver supports:
> - MEM_TO_MEM (TR mode)
> - DEV_TO_MEM (Packet mode)
> - MEM_TO_DEV (Packet mode)
> 
> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>

Please note that some of these comments apply to patch #2 as well, but I
see a specific thing here

[snip]
> +++ b/drivers/dma/ti/k3-udma-hwdef.h
> @@ -0,0 +1,184 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + *  Copyright (C) 2018 Texas Instruments Incorporated - http://www.ti.com
> + *
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.

You don't need/want both the SPDX tag and boilerplate.

[snip]
> +/* Generic register access functions */
> +static inline u32 udma_read(void __iomem *base, int reg)
> +{
> +	u32 v;
> +
> +	v = __raw_readl(base + reg);
> +	pr_debug("READL(32): v(%08X)<--reg(%p)\n", v, base + reg);
> +	return v;
> +}
> +
> +static inline void udma_write(void __iomem *base, int reg, u32 val)
> +{
> +	pr_debug("WRITEL(32): v(%08X)-->reg(%p)\n", val, base + reg);
> +	__raw_writel(val, base + reg);
> +}

I wasn't clear enough, sorry.  We should be using __raw_readl/writel and
not wrapping them.  If things are still in such a state that you need to
dump every read/write, are things really usable?  I assume this is just
handy bring-up related work and it's time to drop it.  Thanks!
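
Concretely, the suggestion is to open-code the accessors at the call sites,
e.g. something like (a sketch only; register and base names as in this
patch):

	__raw_writel(UDMA_CHAN_RT_CTL_EN,
		     uc->tchan->reg_rt + UDMA_TCHAN_RT_CTL_REG);
	val = __raw_readl(uc->tchan->reg_rt + UDMA_TCHAN_RT_CTL_REG);

with the pr_debug() register tracing dropped once bring-up is done.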

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot, v4, 1/7] firmware: ti_sci: Add support for NAVSS resource management
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 1/7] firmware: ti_sci: Add support for NAVSS resource management Vignesh R
@ 2019-04-12 16:27   ` Tom Rini
  0 siblings, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:27 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:21PM +0530, Vignesh R wrote:

> From: Grygorii Strashko <grygorii.strashko@ti.com>
> 
> Texas Instruments' System Control Interface (TI-SCI) Message Protocol
> abstracts management of NAVSS resources, like PSI-L pairing and
> unpairing, UDMAP tx/rx/flow configuration and Rings.
> 
> This patch adds support for requesting and configuring such resources
> from TI-SCI firmware.
> 
> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Reviewed-by: Tom Rini <trini@konsulko.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>

Applied to u-boot/master, thanks!

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot,v4,2/7] soc: ti: k3: add navss ringacc driver
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 2/7] soc: ti: k3: add navss ringacc driver Vignesh R
@ 2019-04-12 16:27   ` Tom Rini
  0 siblings, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:27 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:22PM +0530, Vignesh R wrote:

> From: Grygorii Strashko <grygorii.strashko@ti.com>
> 
> The Ring Accelerator (RINGACC or RA) provides hardware acceleration to
> enable straightforward passing of work between a producer and a consumer.
> There is one RINGACC module per NAVSS on TI AM65x SoCs.
> 
> The RINGACC converts constant-address read and write accesses to equivalent
> read or write accesses to a circular data structure in memory. The RINGACC
> eliminates the need for each DMA controller which needs to access ring
> elements from having to know the current state of the ring (base address,
> current offset). The DMA controller performs a read or write access to a
> specific address range (which maps to the source interface on the RINGACC)
> and the RINGACC replaces the address for the transaction with a new address
> which corresponds to the head or tail element of the ring (head for reads,
> tail for writes). Since the RINGACC maintains the state, multiple DMA
> controllers or channels are allowed to coherently share the same rings as
> applicable. The RINGACC is able to place data which is destined towards
> software into cached memory directly.
> 
> Supported ring modes:
>  - Ring Mode
>  - Messaging Mode
>  - Credentials Mode
>  - Queue Manager Mode
> 
> TI-SCI integration:
> 
> Texas Instrument's System Control Interface (TI-SCI) Message Protocol now
> has control over Ringacc module resources management (RM) and Rings
> configuration.
> 
> The Ringacc driver manages Rings allocation by itself now and requests
> TI-SCI firmware to allocate and configure specific Rings only. It's done
> this way because the Linux driver implements two-stage Ring allocation and
> configuration (allocate ring, then configure ring) while the TI-SCI Message
> Protocol supports only one combined operation (allocate+configure).
> 
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>

Applied to u-boot/master, thanks!
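
As an illustration of the API this adds, the UDMA driver in this series uses
the rings roughly as below (a hypothetical smoke-test function, not code from
the series; ring index -1 asks the ringacc to pick any free GP ring):

	static int ring_smoke_test(struct k3_nav_ringacc *ringacc, void *desc)
	{
		struct k3_nav_ring_cfg cfg = {
			.size = 16,
			.elm_size = K3_NAV_RINGACC_RING_ELSIZE_8,
			.mode = K3_NAV_RINGACC_RING_MODE_MESSAGE,
		};
		struct k3_nav_ring *ring;
		int ret;

		ring = k3_nav_ringacc_request_ring(ringacc, -1,
						   RINGACC_RING_USE_PROXY);
		if (!ring)
			return -EBUSY;

		ret = k3_nav_ringacc_ring_cfg(ring, &cfg);
		if (ret)
			goto free_ring;

		ret = k3_nav_ringacc_ring_push(ring, &desc);	/* enqueue */
		if (!ret)
			ret = k3_nav_ringacc_ring_pop(ring, &desc); /* dequeue */

	free_ring:
		k3_nav_ringacc_ring_free(ring);
		return ret;
	}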

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot, v4, 3/7] soc: ti: k3: add CPPI5 description and helpers
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 3/7] soc: ti: k3: add CPPI5 description and helpers Vignesh R
@ 2019-04-12 16:28   ` Tom Rini
  0 siblings, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:28 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:23PM +0530, Vignesh R wrote:

> From: Grygorii Strashko <grygorii.strashko@ti.com>
> 
> Add TI Communications Port Programming Interface (CPPI) 5
> interface description and helpers
> 
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>
> Reviewed-by: Tom Rini <trini@konsulko.com>

Applied to u-boot/master, thanks!

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot,v4,4/7] dma: ti: add driver to K3 UDMA
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA Vignesh R
  2019-02-06 23:43   ` Tom Rini
@ 2019-04-12 16:28   ` Tom Rini
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:28 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:24PM +0530, Vignesh R wrote:

> The UDMA-P is intended to perform functions similar to (but significantly
> upgraded from) the packet-oriented DMA used on previous SoC devices. The
> UDMA-P module
> supports the transmission and reception of various packet types.
> The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
> channels. Channels in the UDMA-P can be configured to be either Packet-Based or
> Third-Party channels on a channel by channel basis.
> 
> The initial driver supports:
> - MEM_TO_MEM (TR mode)
> - DEV_TO_MEM (Packet mode)
> - MEM_TO_DEV (Packet mode)
> 
> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>

Applied to u-boot/master, thanks!
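
For readers wiring up a client: a minimal sketch of driving a DEV_TO_MEM
channel of this driver through U-Boot's DMA uclass (the channel name "rx"
and the error handling are illustrative, not mandated by this patch):

	#include <common.h>
	#include <dma.h>

	static int rx_one_packet(struct udevice *dev, void *buf, size_t size)
	{
		struct dma dma_rx;
		void *pkt = NULL;
		int ret;

		ret = dma_get_by_name(dev, "rx", &dma_rx);
		if (ret)
			return ret;

		ret = dma_prepare_rcv_buf(&dma_rx, buf, size);
		if (!ret)
			ret = dma_enable(&dma_rx);
		if (ret)
			goto out;

		/* returns the packet length, or 0 if nothing has arrived */
		ret = dma_receive(&dma_rx, &pkt, NULL);

		dma_disable(&dma_rx);
	out:
		dma_free(&dma_rx);
		return ret;
	}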

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot, v4, 5/7] soc: keystone: Merge into ti specific directory
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 5/7] soc: keystone: Merge into ti specific directory Vignesh R
@ 2019-04-12 16:28   ` Tom Rini
  0 siblings, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:28 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:25PM +0530, Vignesh R wrote:

> Merge drivers/soc/keystone/ into drivers/soc/ti/
> and convert CONFIG_TI_KEYSTONE_SERDES into Kconfig.
> 
> Signed-off-by: Vignesh R <vigneshr@ti.com>
> Reviewed-by: Tom Rini <trini@konsulko.com>

Applied to u-boot/master, thanks!

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot, v4, 6/7] arm64: dts: ti: k3-am65: add mcu navss nodes
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 6/7] arm64: dts: ti: k3-am65: add mcu navss nodes Vignesh R
@ 2019-04-12 16:28   ` Tom Rini
  0 siblings, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:28 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:26PM +0530, Vignesh R wrote:

> From: Grygorii Strashko <grygorii.strashko@ti.com>
> 
> Add DT nodes for MCU NAVSS and its components to get DMA working on the
> AM654 SoC.
> 
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>
> Reviewed-by: Tom Rini <trini@konsulko.com>

Applied to u-boot/master, thanks!

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [U-Boot] [U-Boot, v4, 7/7] configs: am65x_evm_a53: Enable DMA related configs
  2019-02-05 12:01 ` [U-Boot] [PATCH v4 7/7] configs: am65x_evm_a53: Enable DMA related configs Vignesh R
@ 2019-04-12 16:28   ` Tom Rini
  0 siblings, 0 replies; 16+ messages in thread
From: Tom Rini @ 2019-04-12 16:28 UTC (permalink / raw)
  To: u-boot

On Tue, Feb 05, 2019 at 05:31:27PM +0530, Vignesh R wrote:

> From: Grygorii Strashko <grygorii.strashko@ti.com>
> 
> Enable TI K3 AM65x PSI-L, Ring Accelerator and UDMA drivers
> 
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Vignesh R <vigneshr@ti.com>
> Reviewed-by: Tom Rini <trini@konsulko.com>

Applied to u-boot/master, thanks!

-- 
Tom

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2019-04-12 16:28 UTC | newest]

Thread overview: 16+ messages
2019-02-05 12:01 [U-Boot] [PATCH v4 0/7] AM65x: Add DMA support Vignesh R
2019-02-05 12:01 ` [U-Boot] [PATCH v4 1/7] firmware: ti_sci: Add support for NAVSS resource management Vignesh R
2019-04-12 16:27   ` [U-Boot] [U-Boot, v4, " Tom Rini
2019-02-05 12:01 ` [U-Boot] [PATCH v4 2/7] soc: ti: k3: add navss ringacc driver Vignesh R
2019-04-12 16:27   ` [U-Boot] [U-Boot,v4,2/7] " Tom Rini
2019-02-05 12:01 ` [U-Boot] [PATCH v4 3/7] soc: ti: k3: add CPPI5 description and helpers Vignesh R
2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
2019-02-05 12:01 ` [U-Boot] [PATCH v4 4/7] dma: ti: add driver to K3 UDMA Vignesh R
2019-02-06 23:43   ` Tom Rini
2019-04-12 16:28   ` [U-Boot] [U-Boot,v4,4/7] " Tom Rini
2019-02-05 12:01 ` [U-Boot] [PATCH v4 5/7] soc: keystone: Merge into ti specific directory Vignesh R
2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
2019-02-05 12:01 ` [U-Boot] [PATCH v4 6/7] arm64: dts: ti: k3-am65: add mcu navss nodes Vignesh R
2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
2019-02-05 12:01 ` [U-Boot] [PATCH v4 7/7] configs: am65x_evm_a53: Enable DMA related configs Vignesh R
2019-04-12 16:28   ` [U-Boot] [U-Boot, v4, " Tom Rini
