linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/3] drivers: remoteproc: xilinx: add mailbox support
@ 2023-02-13 21:18 Tanmay Shah
  2023-02-13 21:18 ` [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes Tanmay Shah
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Tanmay Shah @ 2023-02-13 21:18 UTC (permalink / raw)
  To: michal.simek, andersson, mathieu.poirier, jaswinder.singh,
	ben.levinsky, shubhrajyoti.datta
  Cc: linux-kernel, linux-arm-kernel, linux-remoteproc, Tanmay Shah

This series adds the mailbox support required for rpmsg communication
in the Xilinx remoteproc driver. It also fixes a few corner-case bugs
in the Xilinx zynqmp-ipi driver.

Changes in v3:
  - split single patch into multiple patches
  - add appropriate Fixes: tags for patches fixing previous bugs in the driver
  - make enhancements in the zynqmp-ipi driver to handle corner cases
  - fix memory-region carveout names
  - fix multi-line comment format
  - do not mix up mailbox information with memory-regions
  - fix redundant dev_warn for split mode
  - return an error code when setting up mailboxes fails
  - move mailbox setup to driver probe
  - add .kick function only if mailbox setup succeeds

v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/

Tanmay Shah (3):
  drivers: mailbox: zynqmp: handle multiple child nodes
  drivers: remoteproc: xilinx: fix carveout names
  remoteproc: xilinx: add mailbox channels for rpmsg

 drivers/mailbox/zynqmp-ipi-mailbox.c       |   8 +-
 drivers/remoteproc/xlnx_r5_remoteproc.c    | 315 ++++++++++++++++-----
 include/linux/mailbox/zynqmp-ipi-message.h |   2 +-
 3 files changed, 251 insertions(+), 74 deletions(-)


base-commit: 10de8156ed71d3dbd7e9099aa76e67ea2c37d4ff
-- 
2.25.1


* [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-13 21:18 [PATCH v3 0/3] drivers: remoteproc: xilinx: add mailbox support Tanmay Shah
@ 2023-02-13 21:18 ` Tanmay Shah
  2023-02-22  5:28   ` Datta, Shubhrajyoti
  2023-02-22 17:34   ` Mathieu Poirier
  2023-02-13 21:18 ` [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names Tanmay Shah
  2023-02-13 21:18 ` [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg Tanmay Shah
  2 siblings, 2 replies; 15+ messages in thread
From: Tanmay Shah @ 2023-02-13 21:18 UTC (permalink / raw)
  To: michal.simek, andersson, mathieu.poirier, jaswinder.singh,
	ben.levinsky, shubhrajyoti.datta
  Cc: linux-kernel, linux-arm-kernel, linux-remoteproc, Tanmay Shah

As of now, only one child node is handled by the zynqmp-ipi
mailbox driver. While introducing remoteproc R5 core mailbox
nodes, a few problems were found in the Xilinx zynqmp mailbox
driver; they are fixed as follows:

- fix mailbox child node counts
  If a child mailbox node's status is disabled, it causes a
  crash in the interrupt handler. Fix this by counting only
  available child nodes during driver probe (sketched below).

- fix typo in IPI documentation %s/12/32/
  Xilinx IPI message buffers allow a 32-byte data transfer.
  Fix the documentation that says 12 bytes.

- fix bug in zynqmp-ipi ISR handling
  Multiple IPI channels are mapped to the same interrupt handler.
  The current ISR implementation handles only one channel per
  interrupt. Fix this by checking the ISR status bit of all child
  mailbox nodes.
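
A minimal sketch of the failure mode behind the first fix (an editorial
illustration, not part of the original posting; the reduced types and
the fixed-size array stand in for the driver's real zynqmp_ipi_pdata,
and node refcounting is elided):

#include <linux/of.h>

struct ipi_mbox_sketch { struct device_node *node; };

struct ipi_pdata_sketch {
	int num_mboxes;
	struct ipi_mbox_sketch ipi_mboxes[8];
};

static void count_mismatch_sketch(struct device_node *np,
				  struct ipi_pdata_sketch *pdata)
{
	struct device_node *nc;
	int i = 0;

	/* buggy: counts children with status = "disabled" as well */
	pdata->num_mboxes = of_get_child_count(np);

	/* ...while only the available children get initialized */
	for_each_available_child_of_node(np, nc)
		pdata->ipi_mboxes[i++].node = nc;

	/*
	 * With a disabled child, i < pdata->num_mboxes here, so an ISR
	 * that walks all num_mboxes entries dereferences slots that were
	 * never set up. of_get_available_child_count() keeps the two
	 * counts in sync, which is the one-line probe fix below.
	 */
}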

Fixes: 4981b82ba2ff ("mailbox: ZynqMP IPI mailbox controller")
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---

Changelog:
  - This is the first version of this change; however, it is posted as
    part of the series that has version v3.

v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/

 drivers/mailbox/zynqmp-ipi-mailbox.c       | 8 ++++----
 include/linux/mailbox/zynqmp-ipi-message.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
index 12e004ff1a14..b1498f6f06e1 100644
--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
+++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
@@ -152,7 +152,7 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
 	struct zynqmp_ipi_message *msg;
 	u64 arg0, arg3;
 	struct arm_smccc_res res;
-	int ret, i;
+	int ret, i, status = IRQ_NONE;
 
 	(void)irq;
 	arg0 = SMC_IPI_MAILBOX_STATUS_ENQUIRY;
@@ -170,11 +170,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
 				memcpy_fromio(msg->data, mchan->req_buf,
 					      msg->len);
 				mbox_chan_received_data(chan, (void *)msg);
-				return IRQ_HANDLED;
+				status = IRQ_HANDLED;
 			}
 		}
 	}
-	return IRQ_NONE;
+	return status;
 }
 
 /**
@@ -634,7 +634,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
 	struct zynqmp_ipi_mbox *mbox;
 	int num_mboxes, ret = -EINVAL;
 
-	num_mboxes = of_get_child_count(np);
+	num_mboxes = of_get_available_child_count(np);
 	pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)),
 			     GFP_KERNEL);
 	if (!pdata)
diff --git a/include/linux/mailbox/zynqmp-ipi-message.h b/include/linux/mailbox/zynqmp-ipi-message.h
index 35ce84c8ca02..31d8046d945e 100644
--- a/include/linux/mailbox/zynqmp-ipi-message.h
+++ b/include/linux/mailbox/zynqmp-ipi-message.h
@@ -9,7 +9,7 @@
  * @data: message payload
  *
  * This is the structure for data used in mbox_send_message
- * the maximum length of data buffer is fixed to 12 bytes.
+ * the maximum length of data buffer is fixed to 32 bytes.
  * Client is supposed to be aware of this.
  */
 struct zynqmp_ipi_message {
-- 
2.25.1


* [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names
  2023-02-13 21:18 [PATCH v3 0/3] drivers: remoteproc: xilinx: add mailbox support Tanmay Shah
  2023-02-13 21:18 ` [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes Tanmay Shah
@ 2023-02-13 21:18 ` Tanmay Shah
  2023-02-22 18:41   ` Mathieu Poirier
  2023-02-13 21:18 ` [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg Tanmay Shah
  2 siblings, 1 reply; 15+ messages in thread
From: Tanmay Shah @ 2023-02-13 21:18 UTC (permalink / raw)
  To: michal.simek, andersson, mathieu.poirier, jaswinder.singh,
	ben.levinsky, shubhrajyoti.datta
  Cc: linux-kernel, linux-arm-kernel, linux-remoteproc, Tanmay Shah

If the unit address is appended to the node name of a memory-region,
then adding rproc carveouts fails, as the node name and unit address
are both passed as the carveout name (e.g. vdev0vring0@xxxxxxxx).
However, only the node name is expected by the remoteproc framework.
This patch moves memory-region node parsing from driver probe to
prepare and passes only the node name, not the unit address.
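
A small sketch of the naming mismatch (an editorial illustration, not
part of the original posting; NULL handling is shortened and the
of_node_put() calls are elided):

#include <linux/of.h>
#include <linux/of_reserved_mem.h>

static const char *carveout_name_sketch(struct device_node *np, int i)
{
	struct device_node *rmem_np;
	struct reserved_mem *rmem;

	rmem_np = of_parse_phandle(np, "memory-region", i);
	if (!rmem_np)
		return NULL;

	rmem = of_reserved_mem_lookup(rmem_np);
	if (!rmem)
		return NULL;

	/* rmem->name    is "vdev0vring0@xxxxxxxx" (node + unit address) */
	/* rmem_np->name is "vdev0vring0"          (bare node name)      */

	/* the remoteproc framework matches carveouts by the bare name */
	return rmem_np->name;
}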

Fixes: 6b291e8020a8 ("drivers: remoteproc: Add Xilinx r5 remoteproc driver")
Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---

Changelog:
  - This is the first version of this change; however, it is posted as
    part of the series that has version v3. The v2 of the series can be
    found at the following link.

v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/

 drivers/remoteproc/xlnx_r5_remoteproc.c | 87 ++++++-------------------
 1 file changed, 20 insertions(+), 67 deletions(-)

diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index 2db57d394155..81af2dea56c2 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -61,8 +61,6 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
  * @np: device node of RPU instance
  * @tcm_bank_count: number TCM banks accessible to this RPU
  * @tcm_banks: array of each TCM bank data
- * @rmem_count: Number of reserved mem regions
- * @rmem: reserved memory region nodes from device tree
  * @rproc: rproc handle
  * @pm_domain_id: RPU CPU power domain id
  */
@@ -71,8 +69,6 @@ struct zynqmp_r5_core {
 	struct device_node *np;
 	int tcm_bank_count;
 	struct mem_bank_data **tcm_banks;
-	int rmem_count;
-	struct reserved_mem **rmem;
 	struct rproc *rproc;
 	u32 pm_domain_id;
 };
@@ -239,21 +235,31 @@ static int add_mem_regions_carveout(struct rproc *rproc)
 {
 	struct rproc_mem_entry *rproc_mem;
 	struct zynqmp_r5_core *r5_core;
+	struct device_node *rmem_np;
 	struct reserved_mem *rmem;
 	int i, num_mem_regions;
 
 	r5_core = (struct zynqmp_r5_core *)rproc->priv;
-	num_mem_regions = r5_core->rmem_count;
+
+	num_mem_regions = of_property_count_elems_of_size(r5_core->np, "memory-region",
+							  sizeof(phandle));
 
 	for (i = 0; i < num_mem_regions; i++) {
-		rmem = r5_core->rmem[i];
 
-		if (!strncmp(rmem->name, "vdev0buffer", strlen("vdev0buffer"))) {
+		rmem_np = of_parse_phandle(r5_core->np, "memory-region", i);
+
+		rmem = of_reserved_mem_lookup(rmem_np);
+		if (!rmem) {
+			of_node_put(rmem_np);
+			return -EINVAL;
+		}
+
+		if (!strcmp(rmem_np->name, "vdev0buffer")) {
 			/* Init reserved memory for vdev buffer */
 			rproc_mem = rproc_of_resm_mem_entry_init(&rproc->dev, i,
 								 rmem->size,
 								 rmem->base,
-								 rmem->name);
+								 rmem_np->name);
 		} else {
 			/* Register associated reserved memory regions */
 			rproc_mem = rproc_mem_entry_init(&rproc->dev, NULL,
@@ -261,16 +267,20 @@ static int add_mem_regions_carveout(struct rproc *rproc)
 							 rmem->size, rmem->base,
 							 zynqmp_r5_mem_region_map,
 							 zynqmp_r5_mem_region_unmap,
-							 rmem->name);
+							 rmem_np->name);
 		}
 
-		if (!rproc_mem)
+		if (!rproc_mem) {
+			of_node_put(rmem_np);
 			return -ENOMEM;
+		}
 
 		rproc_add_carveout(rproc, rproc_mem);
 
 		dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
 			rmem->name, rmem->base, rmem->size);
+
+		of_node_put(rmem_np);
 	}
 
 	return 0;
@@ -726,59 +736,6 @@ static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
 	return 0;
 }
 
-/**
- * zynqmp_r5_get_mem_region_node()
- * parse memory-region property and get reserved mem regions
- *
- * @r5_core: pointer to zynqmp_r5_core type object
- *
- * Return: 0 for success and error code for failure.
- */
-static int zynqmp_r5_get_mem_region_node(struct zynqmp_r5_core *r5_core)
-{
-	struct device_node *np, *rmem_np;
-	struct reserved_mem **rmem;
-	int res_mem_count, i;
-	struct device *dev;
-
-	dev = r5_core->dev;
-	np = r5_core->np;
-
-	res_mem_count = of_property_count_elems_of_size(np, "memory-region",
-							sizeof(phandle));
-	if (res_mem_count <= 0) {
-		dev_warn(dev, "failed to get memory-region property %d\n",
-			 res_mem_count);
-		return 0;
-	}
-
-	rmem = devm_kcalloc(dev, res_mem_count,
-			    sizeof(struct reserved_mem *), GFP_KERNEL);
-	if (!rmem)
-		return -ENOMEM;
-
-	for (i = 0; i < res_mem_count; i++) {
-		rmem_np = of_parse_phandle(np, "memory-region", i);
-		if (!rmem_np)
-			goto release_rmem;
-
-		rmem[i] = of_reserved_mem_lookup(rmem_np);
-		if (!rmem[i]) {
-			of_node_put(rmem_np);
-			goto release_rmem;
-		}
-
-		of_node_put(rmem_np);
-	}
-
-	r5_core->rmem_count = res_mem_count;
-	r5_core->rmem = rmem;
-	return 0;
-
-release_rmem:
-	return -EINVAL;
-}
-
 /*
  * zynqmp_r5_core_init()
  * Create and initialize zynqmp_r5_core type object
@@ -806,10 +763,6 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
 	for (i = 0; i < cluster->core_count; i++) {
 		r5_core = cluster->r5_cores[i];
 
-		ret = zynqmp_r5_get_mem_region_node(r5_core);
-		if (ret)
-			dev_warn(dev, "memory-region prop failed %d\n", ret);
-
 		/* Initialize r5 cores with power-domains parsed from dts */
 		ret = of_property_read_u32_index(r5_core->np, "power-domains",
 						 1, &r5_core->pm_domain_id);
-- 
2.25.1


* [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg
  2023-02-13 21:18 [PATCH v3 0/3] drivers: remoteproc: xilinx: add mailbox support Tanmay Shah
  2023-02-13 21:18 ` [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes Tanmay Shah
  2023-02-13 21:18 ` [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names Tanmay Shah
@ 2023-02-13 21:18 ` Tanmay Shah
  2023-02-22 19:06   ` Mathieu Poirier
  2 siblings, 1 reply; 15+ messages in thread
From: Tanmay Shah @ 2023-02-13 21:18 UTC (permalink / raw)
  To: michal.simek, andersson, mathieu.poirier, jaswinder.singh,
	ben.levinsky, shubhrajyoti.datta
  Cc: linux-kernel, linux-arm-kernel, linux-remoteproc, Tanmay Shah

This patch makes each R5 core a mailbox client and uses TX and RX
channels to send and receive data to/from the remote processor,
respectively. This is needed for rpmsg communication with the
remote processor.
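
For orientation, a condensed sketch of the TX direction this patch adds
(error handling and buffer bookkeeping elided; zynqmp_r5_rproc_kick()
in the diff below is the complete version):

#include <linux/mailbox_client.h>
#include <linux/mailbox/zynqmp-ipi-message.h>
#include <linux/string.h>

static void kick_sketch(struct mbox_chan *tx_chan, void *tx_buf, int vqid)
{
	struct zynqmp_ipi_message *msg = tx_buf;

	/* pack the virtqueue ID into the IPI payload */
	msg->len = sizeof(vqid);
	memcpy(msg->data, &vqid, sizeof(vqid));

	/* raise an IPI toward the R5; the firmware reads back the vqid */
	mbox_send_message(tx_chan, msg);
}

The RX direction mirrors it: the client's rx_callback copies the payload
out of the IPI buffer, acks the interrupt, and schedules work that scans
the registered notify IDs.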

Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---

Changes in v3:
  - fix multi-line comment format
  - do not mix up mailbox information with memory-regions
  - fix redundant dev_warn for split mode
  - return an error code when setting up mailboxes fails
  - redesign driver to move mailbox setup to driver probe
  - add .kick function only if mailbox setup succeeds

v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/

 drivers/remoteproc/xlnx_r5_remoteproc.c | 228 +++++++++++++++++++++++-
 1 file changed, 226 insertions(+), 2 deletions(-)

diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index 81af2dea56c2..f7131fe8fe7e 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -8,16 +8,23 @@
 #include <linux/dma-mapping.h>
 #include <linux/firmware/xlnx-zynqmp.h>
 #include <linux/kernel.h>
+#include <linux/mailbox_client.h>
+#include <linux/mailbox/zynqmp-ipi-message.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
 #include <linux/of_platform.h>
 #include <linux/of_reserved_mem.h>
 #include <linux/platform_device.h>
 #include <linux/remoteproc.h>
-#include <linux/slab.h>
 
 #include "remoteproc_internal.h"
 
+/* IPI buffer MAX length */
+#define IPI_BUF_LEN_MAX	32U
+
+/* RX mailbox client buffer max length */
+#define MBOX_CLIENT_BUF_MAX	(IPI_BUF_LEN_MAX + \
+				 sizeof(struct zynqmp_ipi_message))
 /*
  * settings for RPU cluster mode which
  * reflects possible values of xlnx,cluster-mode dt-property
@@ -43,6 +50,27 @@ struct mem_bank_data {
 	char *bank_name;
 };
 
+/**
+ * struct mbox_info
+ *
+ * @rx_mc_buf: to copy data from mailbox rx channel
+ * @tx_mc_buf: to copy data to mailbox tx channel
+ * @r5_core: this mailbox's corresponding r5_core pointer
+ * @mbox_work: schedule work after receiving data from mailbox
+ * @mbox_cl: mailbox client
+ * @tx_chan: mailbox tx channel
+ * @rx_chan: mailbox rx channel
+ */
+struct mbox_info {
+	unsigned char rx_mc_buf[MBOX_CLIENT_BUF_MAX];
+	unsigned char tx_mc_buf[MBOX_CLIENT_BUF_MAX];
+	struct zynqmp_r5_core *r5_core;
+	struct work_struct mbox_work;
+	struct mbox_client mbox_cl;
+	struct mbox_chan *tx_chan;
+	struct mbox_chan *rx_chan;
+};
+
 /*
  * Hardcoded TCM bank values. This will be removed once TCM bindings are
  * accepted for system-dt specifications and upstreamed in linux kernel
@@ -63,6 +91,7 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
  * @tcm_banks: array of each TCM bank data
  * @rproc: rproc handle
  * @pm_domain_id: RPU CPU power domain id
+ * @ipi: pointer to mailbox information
  */
 struct zynqmp_r5_core {
 	struct device *dev;
@@ -71,6 +100,7 @@ struct zynqmp_r5_core {
 	struct mem_bank_data **tcm_banks;
 	struct rproc *rproc;
 	u32 pm_domain_id;
+	struct mbox_info *ipi;
 };
 
 /**
@@ -88,6 +118,178 @@ struct zynqmp_r5_cluster {
 	struct zynqmp_r5_core **r5_cores;
 };
 
+/**
+ * event_notified_idr_cb() - callback for vq_interrupt per notifyid
+ * @id: rproc->notify id
+ * @ptr: pointer to idr private data
+ * @data: data passed to idr_for_each callback
+ *
+ * Pass notification to remoteproc virtio
+ *
+ * Return: 0. having return is to satisfy the idr_for_each() function
+ *          pointer input argument requirement.
+ **/
+static int event_notified_idr_cb(int id, void *ptr, void *data)
+{
+	struct rproc *rproc = data;
+
+	if (rproc_vq_interrupt(rproc, id) == IRQ_NONE)
+		dev_dbg(&rproc->dev, "data not found for vqid=%d\n", id);
+
+	return 0;
+}
+
+/**
+ * handle_event_notified() - remoteproc notification work function
+ * @work: pointer to the work structure
+ *
+ * It checks each registered remoteproc notify IDs.
+ */
+static void handle_event_notified(struct work_struct *work)
+{
+	struct mbox_info *ipi;
+	struct rproc *rproc;
+
+	ipi = container_of(work, struct mbox_info, mbox_work);
+	rproc = ipi->r5_core->rproc;
+
+	/*
+	 * We only use IPI for interrupt. The RPU firmware side may or may
+	 * not write the notifyid when it trigger IPI.
+	 * And thus, we scan through all the registered notifyids and
+	 * find which one is valid to get the message.
+	 * Even if message from firmware is NULL, we attempt to get vqid
+	 */
+	idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
+}
+
+/**
+ * zynqmp_r5_mb_rx_cb() - receive channel mailbox callback
+ * @cl: mailbox client
+ * @msg: message pointer
+ *
+ * Receive data from ipi buffer, ack interrupt and then
+ * it will schedule the R5 notification work.
+ */
+static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *msg)
+{
+	struct zynqmp_ipi_message *ipi_msg, *buf_msg;
+	struct mbox_info *ipi;
+	size_t len;
+
+	ipi = container_of(cl, struct mbox_info, mbox_cl);
+
+	/* copy data from ipi buffer to r5_core */
+	ipi_msg = (struct zynqmp_ipi_message *)msg;
+	buf_msg = (struct zynqmp_ipi_message *)ipi->rx_mc_buf;
+	len = ipi_msg->len;
+	if (len > IPI_BUF_LEN_MAX) {
+		dev_warn(cl->dev, "msg size exceeded than %d\n",
+			 IPI_BUF_LEN_MAX);
+		len = IPI_BUF_LEN_MAX;
+	}
+	buf_msg->len = len;
+	memcpy(buf_msg->data, ipi_msg->data, len);
+
+	/* received and processed interrupt ack */
+	if (mbox_send_message(ipi->rx_chan, NULL) < 0)
+		dev_err(cl->dev, "ack failed to mbox rx_chan\n");
+
+	schedule_work(&ipi->mbox_work);
+}
+
+/**
+ * zynqmp_r5_setup_mbox() - Setup mailboxes related properties
+ *			    this is used for each individual R5 core
+ *
+ * @cdev: child node device
+ *
+ * Function to setup mailboxes related properties
+ * return : NULL if failed else pointer to mbox_info
+ */
+static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
+{
+	struct mbox_client *mbox_cl;
+	struct mbox_info *ipi;
+
+	ipi = kzalloc(sizeof(*ipi), GFP_KERNEL);
+	if (!ipi)
+		return NULL;
+
+	mbox_cl = &ipi->mbox_cl;
+	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
+	mbox_cl->tx_block = false;
+	mbox_cl->knows_txdone = false;
+	mbox_cl->tx_done = NULL;
+	mbox_cl->dev = cdev;
+
+	/* Request TX and RX channels */
+	ipi->tx_chan = mbox_request_channel_byname(mbox_cl, "tx");
+	if (IS_ERR(ipi->tx_chan)) {
+		ipi->tx_chan = NULL;
+		kfree(ipi);
+		dev_warn(cdev, "mbox tx channel request failed\n");
+		return NULL;
+	}
+
+	ipi->rx_chan = mbox_request_channel_byname(mbox_cl, "rx");
+	if (IS_ERR(ipi->rx_chan)) {
+		mbox_free_channel(ipi->tx_chan);
+		ipi->rx_chan = NULL;
+		ipi->tx_chan = NULL;
+		kfree(ipi);
+		dev_warn(cdev, "mbox rx channel request failed\n");
+		return NULL;
+	}
+
+	INIT_WORK(&ipi->mbox_work, handle_event_notified);
+
+	return ipi;
+}
+
+static void zynqmp_r5_free_mbox(struct mbox_info *ipi)
+{
+	if (!ipi)
+		return;
+
+	if (ipi->tx_chan) {
+		mbox_free_channel(ipi->tx_chan);
+		ipi->tx_chan = NULL;
+	}
+
+	if (ipi->rx_chan) {
+		mbox_free_channel(ipi->rx_chan);
+		ipi->rx_chan = NULL;
+	}
+
+	kfree(ipi);
+}
+
+/*
+ * zynqmp_r5_core_kick() - kick a firmware if mbox is provided
+ * @rproc: r5 core's corresponding rproc structure
+ * @vqid: virtqueue ID
+ */
+static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
+{
+	struct zynqmp_r5_core *r5_core = rproc->priv;
+	struct device *dev = r5_core->dev;
+	struct zynqmp_ipi_message *mb_msg;
+	struct mbox_info *ipi;
+	int ret;
+
+	ipi = r5_core->ipi;
+	if (!ipi)
+		return;
+
+	mb_msg = (struct zynqmp_ipi_message *)ipi->tx_mc_buf;
+	memcpy(mb_msg->data, &vqid, sizeof(vqid));
+	mb_msg->len = sizeof(vqid);
+	ret = mbox_send_message(ipi->tx_chan, mb_msg);
+	if (ret < 0)
+		dev_warn(dev, "failed to send message\n");
+}
+
 /*
  * zynqmp_r5_set_mode()
  *
@@ -617,7 +819,7 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
 	return 0;
 }
 
-static const struct rproc_ops zynqmp_r5_rproc_ops = {
+static struct rproc_ops zynqmp_r5_rproc_ops = {
 	.prepare	= zynqmp_r5_rproc_prepare,
 	.unprepare	= zynqmp_r5_rproc_unprepare,
 	.start		= zynqmp_r5_rproc_start,
@@ -642,6 +844,7 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
 {
 	struct zynqmp_r5_core *r5_core;
 	struct rproc *r5_rproc;
+	struct mbox_info *ipi;
 	int ret;
 
 	/* Set up DMA mask */
@@ -649,12 +852,23 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
 	if (ret)
 		return ERR_PTR(ret);
 
+	/*
+	 * If mailbox nodes are disabled using "status" property then setting up
+	 * mailbox channels will be failed. In that case we don't really need
+	 * kick() operation. Include .kick() only if mbox channels are acquired
+	 * successfully.
+	 */
+	ipi = zynqmp_r5_setup_mbox(cdev);
+	if (ipi)
+		zynqmp_r5_rproc_ops.kick = zynqmp_r5_rproc_kick;
+
 	/* Allocate remoteproc instance */
 	r5_rproc = rproc_alloc(cdev, dev_name(cdev),
 			       &zynqmp_r5_rproc_ops,
 			       NULL, sizeof(struct zynqmp_r5_core));
 	if (!r5_rproc) {
 		dev_err(cdev, "failed to allocate memory for rproc instance\n");
+		zynqmp_r5_free_mbox(ipi);
 		return ERR_PTR(-ENOMEM);
 	}
 
@@ -665,6 +879,7 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
 	if (!r5_core->np) {
 		dev_err(cdev, "can't get device node for r5 core\n");
 		ret = -EINVAL;
+		zynqmp_r5_free_mbox(ipi);
 		goto free_rproc;
 	}
 
@@ -672,10 +887,17 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
 	ret = rproc_add(r5_rproc);
 	if (ret) {
 		dev_err(cdev, "failed to add r5 remoteproc\n");
+		zynqmp_r5_free_mbox(ipi);
 		goto free_rproc;
 	}
 
+	if (ipi) {
+		r5_core->ipi = ipi;
+		ipi->r5_core = r5_core;
+	}
+
 	r5_core->rproc = r5_rproc;
+
 	return r5_core;
 
 free_rproc:
@@ -918,6 +1140,7 @@ static int zynqmp_r5_cluster_init(struct zynqmp_r5_cluster *cluster)
 	while (i >= 0) {
 		put_device(child_devs[i]);
 		if (r5_cores[i]) {
+			zynqmp_r5_free_mbox(r5_cores[i]->ipi);
 			of_reserved_mem_device_release(r5_cores[i]->dev);
 			rproc_del(r5_cores[i]->rproc);
 			rproc_free(r5_cores[i]->rproc);
@@ -942,6 +1165,7 @@ static void zynqmp_r5_cluster_exit(void *data)
 
 	for (i = 0; i < cluster->core_count; i++) {
 		r5_core = cluster->r5_cores[i];
+		zynqmp_r5_free_mbox(r5_core->ipi);
 		of_reserved_mem_device_release(r5_core->dev);
 		put_device(r5_core->dev);
 		rproc_del(r5_core->rproc);
-- 
2.25.1


* RE: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-13 21:18 ` [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes Tanmay Shah
@ 2023-02-22  5:28   ` Datta, Shubhrajyoti
  2023-02-22 17:34   ` Mathieu Poirier
  1 sibling, 0 replies; 15+ messages in thread
From: Datta, Shubhrajyoti @ 2023-02-22  5:28 UTC (permalink / raw)
  To: Shah, Tanmay, Simek, Michal, andersson, mathieu.poirier,
	jaswinder.singh, Levinsky, Ben
  Cc: linux-kernel, linux-arm-kernel, linux-remoteproc, Shah, Tanmay

Hi,


> -----Original Message-----
> From: Tanmay Shah <tanmay.shah@amd.com>
> Sent: Tuesday, February 14, 2023 2:48 AM
> To: Simek, Michal <michal.simek@amd.com>; andersson@kernel.org;
> mathieu.poirier@linaro.org; jaswinder.singh@linaro.org; Levinsky, Ben
> <ben.levinsky@amd.com>; Datta, Shubhrajyoti
> <shubhrajyoti.datta@amd.com>
> Cc: linux-kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
> linux-remoteproc@vger.kernel.org; Shah, Tanmay
> <tanmay.shah@amd.com>
> Subject: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child
> nodes
> 
> As of now, only one child node is handled by the zynqmp-ipi mailbox
> driver. While introducing remoteproc R5 core mailbox nodes, a few
> problems were found in the Xilinx zynqmp mailbox driver; they are
> fixed as follows:
> 
> - fix mailbox child node counts
>   If a child mailbox node's status is disabled, it causes a
>   crash in the interrupt handler. Fix this by counting only
>   available child nodes during driver probe.
> 
> - fix typo in IPI documentation %s/12/32/
>   Xilinx IPI message buffers allow a 32-byte data transfer.
>   Fix the documentation that says 12 bytes.
> 
> - fix bug in zynqmp-ipi ISR handling
>   Multiple IPI channels are mapped to the same interrupt handler.
>   The current ISR implementation handles only one channel per
>   interrupt. Fix this by checking the ISR status bit of all child
>   mailbox nodes.

This does multiple things; it would be good if each patch did one thing.

* Re: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-13 21:18 ` [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes Tanmay Shah
  2023-02-22  5:28   ` Datta, Shubhrajyoti
@ 2023-02-22 17:34   ` Mathieu Poirier
  2023-02-23  9:40     ` Michal Simek
  1 sibling, 1 reply; 15+ messages in thread
From: Mathieu Poirier @ 2023-02-22 17:34 UTC (permalink / raw)
  To: Tanmay Shah
  Cc: michal.simek, andersson, jaswinder.singh, ben.levinsky,
	shubhrajyoti.datta, linux-kernel, linux-arm-kernel,
	linux-remoteproc

On Mon, Feb 13, 2023 at 01:18:24PM -0800, Tanmay Shah wrote:
> As of now, only one child node is handled by the zynqmp-ipi
> mailbox driver. While introducing remoteproc R5 core mailbox
> nodes, a few problems were found in the Xilinx zynqmp mailbox
> driver; they are fixed as follows:
> 
> - fix mailbox child node counts
>   If a child mailbox node's status is disabled, it causes a
>   crash in the interrupt handler. Fix this by counting only
>   available child nodes during driver probe.
> 
> - fix typo in IPI documentation %s/12/32/
>   Xilinx IPI message buffers allow a 32-byte data transfer.
>   Fix the documentation that says 12 bytes.
> 
> - fix bug in zynqmp-ipi ISR handling
>   Multiple IPI channels are mapped to the same interrupt handler.
>   The current ISR implementation handles only one channel per
>   interrupt. Fix this by checking the ISR status bit of all child
>   mailbox nodes.
> 
> Fixes: 4981b82ba2ff ("mailbox: ZynqMP IPI mailbox controller")
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
> 
> Changelog:
>   - This is the first version of this change; however, it is posted as
>     part of the series that has version v3.
> 
> v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/
> 
>  drivers/mailbox/zynqmp-ipi-mailbox.c       | 8 ++++----
>  include/linux/mailbox/zynqmp-ipi-message.h | 2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
> index 12e004ff1a14..b1498f6f06e1 100644
> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
> @@ -152,7 +152,7 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>  	struct zynqmp_ipi_message *msg;
>  	u64 arg0, arg3;
>  	struct arm_smccc_res res;
> -	int ret, i;
> +	int ret, i, status = IRQ_NONE;
>  
>  	(void)irq;
>  	arg0 = SMC_IPI_MAILBOX_STATUS_ENQUIRY;
> @@ -170,11 +170,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>  				memcpy_fromio(msg->data, mchan->req_buf,
>  					      msg->len);
>  				mbox_chan_received_data(chan, (void *)msg);
> -				return IRQ_HANDLED;
> +				status = IRQ_HANDLED;
>  			}
>  		}
>  	}
> -	return IRQ_NONE;
> +	return status;
>  }
>  
>  /**
> @@ -634,7 +634,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
>  	struct zynqmp_ipi_mbox *mbox;
>  	int num_mboxes, ret = -EINVAL;
>  
> -	num_mboxes = of_get_child_count(np);
> +	num_mboxes = of_get_available_child_count(np);
>  	pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)),
>  			     GFP_KERNEL);
>  	if (!pdata)
> diff --git a/include/linux/mailbox/zynqmp-ipi-message.h b/include/linux/mailbox/zynqmp-ipi-message.h
> index 35ce84c8ca02..31d8046d945e 100644
> --- a/include/linux/mailbox/zynqmp-ipi-message.h
> +++ b/include/linux/mailbox/zynqmp-ipi-message.h
> @@ -9,7 +9,7 @@
>   * @data: message payload
>   *
>   * This is the structure for data used in mbox_send_message
> - * the maximum length of data buffer is fixed to 12 bytes.
> + * the maximum length of data buffer is fixed to 32 bytes.
>   * Client is supposed to be aware of this.

I agree that this should be split into 3 patches, but the fixes are so small
that it is hardly required.  I'll leave it up to Michal to decide.

Split or not:

Acked-by: Mathieu Poirier <mathieu.poirier@linaro.org>

>   */
>  struct zynqmp_ipi_message {
> -- 
> 2.25.1
> 

* Re: [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names
  2023-02-13 21:18 ` [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names Tanmay Shah
@ 2023-02-22 18:41   ` Mathieu Poirier
  2023-02-22 20:47     ` Tanmay Shah
  0 siblings, 1 reply; 15+ messages in thread
From: Mathieu Poirier @ 2023-02-22 18:41 UTC (permalink / raw)
  To: Tanmay Shah
  Cc: michal.simek, andersson, jaswinder.singh, ben.levinsky,
	shubhrajyoti.datta, linux-kernel, linux-arm-kernel,
	linux-remoteproc

On Mon, Feb 13, 2023 at 01:18:25PM -0800, Tanmay Shah wrote:
> If the unit address is appended to the node name of a memory-region,
> then adding rproc carveouts fails, as the node name and unit address
> are both passed as the carveout name (e.g. vdev0vring0@xxxxxxxx).
> However, only the node name is expected by the remoteproc framework.
> This patch moves memory-region node parsing from driver probe to
> prepare and passes only the node name, not the unit address.
> 
> Fixes: 6b291e8020a8 ("drivers: remoteproc: Add Xilinx r5 remoteproc driver")
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
> 
> Changelog:
>   - This is the first version of this change; however, it is posted as
>     part of the series that has version v3. The v2 of the series can be
>     found at the following link.
> 
> v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/
> 
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 87 ++++++-------------------
>  1 file changed, 20 insertions(+), 67 deletions(-)
> 
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index 2db57d394155..81af2dea56c2 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -61,8 +61,6 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
>   * @np: device node of RPU instance
>   * @tcm_bank_count: number TCM banks accessible to this RPU
>   * @tcm_banks: array of each TCM bank data
> - * @rmem_count: Number of reserved mem regions
> - * @rmem: reserved memory region nodes from device tree
>   * @rproc: rproc handle
>   * @pm_domain_id: RPU CPU power domain id
>   */
> @@ -71,8 +69,6 @@ struct zynqmp_r5_core {
>  	struct device_node *np;
>  	int tcm_bank_count;
>  	struct mem_bank_data **tcm_banks;
> -	int rmem_count;
> -	struct reserved_mem **rmem;
>  	struct rproc *rproc;
>  	u32 pm_domain_id;
>  };
> @@ -239,21 +235,31 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>  {
>  	struct rproc_mem_entry *rproc_mem;
>  	struct zynqmp_r5_core *r5_core;
> +	struct device_node *rmem_np;
>  	struct reserved_mem *rmem;
>  	int i, num_mem_regions;
>  
>  	r5_core = (struct zynqmp_r5_core *)rproc->priv;
> -	num_mem_regions = r5_core->rmem_count;
> +
> +	num_mem_regions = of_property_count_elems_of_size(r5_core->np, "memory-region",
> +							  sizeof(phandle));
>  
>  	for (i = 0; i < num_mem_regions; i++) {
> -		rmem = r5_core->rmem[i];
>

Extra line

Everyone else in the remoteproc subsystem is using of_phandle_iterator_next(),
please do the same.  It is easier to maintain and you don't have to call
of_node_put() after each iteration. 
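
For reference, that pattern looks roughly like this (an editor's sketch
modelled on other remoteproc drivers, not code from this thread):

#include <linux/of.h>
#include <linux/of_reserved_mem.h>

static int add_mem_regions_sketch(struct device_node *np)
{
	struct of_phandle_iterator it;
	struct reserved_mem *rmem;

	of_phandle_iterator_init(&it, np, "memory-region", NULL, 0);
	while (of_phandle_iterator_next(&it) == 0) {
		rmem = of_reserved_mem_lookup(it.node);
		if (!rmem) {
			/* on early exit, drop the ref the iterator holds */
			of_node_put(it.node);
			return -EINVAL;
		}
		/* ... build the carveout from rmem and it.node->name ... */
	}

	return 0;
}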


> -		if (!strncmp(rmem->name, "vdev0buffer", strlen("vdev0buffer"))) {
> +		rmem_np = of_parse_phandle(r5_core->np, "memory-region", i);
> +
> +		rmem = of_reserved_mem_lookup(rmem_np);
> +		if (!rmem) {
> +			of_node_put(rmem_np);
> +			return -EINVAL;
> +		}
> +
> +		if (!strcmp(rmem_np->name, "vdev0buffer")) {
>  			/* Init reserved memory for vdev buffer */
>  			rproc_mem = rproc_of_resm_mem_entry_init(&rproc->dev, i,
>  								 rmem->size,
>  								 rmem->base,
> -								 rmem->name);
> +								 rmem_np->name);
>  		} else {
>  			/* Register associated reserved memory regions */
>  			rproc_mem = rproc_mem_entry_init(&rproc->dev, NULL,
> @@ -261,16 +267,20 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>  							 rmem->size, rmem->base,
>  							 zynqmp_r5_mem_region_map,
>  							 zynqmp_r5_mem_region_unmap,
> -							 rmem->name);
> +							 rmem_np->name);
>  		}
>  
> -		if (!rproc_mem)
> +		if (!rproc_mem) {
> +			of_node_put(rmem_np);

When moving to of_phandle_iterator_next(), of_node_put(it.node) has to be
called on error conditions.  Other drivers don't do it, something I will fix in
the next cycle.

>  			return -ENOMEM;
> +		}
>  
>  		rproc_add_carveout(rproc, rproc_mem);
>  
>  		dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
>  			rmem->name, rmem->base, rmem->size);
> +
> +		of_node_put(rmem_np);
>  	}
>  
>  	return 0;
> @@ -726,59 +736,6 @@ static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
>  	return 0;
>  }
>  
> -/**
> - * zynqmp_r5_get_mem_region_node()
> - * parse memory-region property and get reserved mem regions
> - *
> - * @r5_core: pointer to zynqmp_r5_core type object
> - *
> - * Return: 0 for success and error code for failure.
> - */
> -static int zynqmp_r5_get_mem_region_node(struct zynqmp_r5_core *r5_core)
> -{
> -	struct device_node *np, *rmem_np;
> -	struct reserved_mem **rmem;
> -	int res_mem_count, i;
> -	struct device *dev;
> -
> -	dev = r5_core->dev;
> -	np = r5_core->np;
> -
> -	res_mem_count = of_property_count_elems_of_size(np, "memory-region",
> -							sizeof(phandle));
> -	if (res_mem_count <= 0) {
> -		dev_warn(dev, "failed to get memory-region property %d\n",
> -			 res_mem_count);
> -		return 0;
> -	}
> -
> -	rmem = devm_kcalloc(dev, res_mem_count,
> -			    sizeof(struct reserved_mem *), GFP_KERNEL);
> -	if (!rmem)
> -		return -ENOMEM;
> -
> -	for (i = 0; i < res_mem_count; i++) {
> -		rmem_np = of_parse_phandle(np, "memory-region", i);
> -		if (!rmem_np)
> -			goto release_rmem;
> -
> -		rmem[i] = of_reserved_mem_lookup(rmem_np);
> -		if (!rmem[i]) {
> -			of_node_put(rmem_np);
> -			goto release_rmem;
> -		}
> -
> -		of_node_put(rmem_np);
> -	}
> -
> -	r5_core->rmem_count = res_mem_count;
> -	r5_core->rmem = rmem;
> -	return 0;
> -
> -release_rmem:
> -	return -EINVAL;
> -}
> -
>  /*
>   * zynqmp_r5_core_init()
>   * Create and initialize zynqmp_r5_core type object
> @@ -806,10 +763,6 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
>  	for (i = 0; i < cluster->core_count; i++) {
>  		r5_core = cluster->r5_cores[i];
>  
> -		ret = zynqmp_r5_get_mem_region_node(r5_core);
> -		if (ret)
> -			dev_warn(dev, "memory-region prop failed %d\n", ret);
> -
>  		/* Initialize r5 cores with power-domains parsed from dts */
>  		ret = of_property_read_u32_index(r5_core->np, "power-domains",
>  						 1, &r5_core->pm_domain_id);
> -- 
> 2.25.1
> 

* Re: [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg
  2023-02-13 21:18 ` [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg Tanmay Shah
@ 2023-02-22 19:06   ` Mathieu Poirier
  2023-02-22 20:49     ` Tanmay Shah
  0 siblings, 1 reply; 15+ messages in thread
From: Mathieu Poirier @ 2023-02-22 19:06 UTC (permalink / raw)
  To: Tanmay Shah
  Cc: michal.simek, andersson, jaswinder.singh, ben.levinsky,
	shubhrajyoti.datta, linux-kernel, linux-arm-kernel,
	linux-remoteproc

On Mon, Feb 13, 2023 at 01:18:26PM -0800, Tanmay Shah wrote:
> This patch makes each R5 core a mailbox client and uses TX and RX
> channels to send and receive data to/from the remote processor,
> respectively. This is needed for rpmsg communication with the
> remote processor.
> 
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
> 
> Changes in v3:
>   - fix multi-line comment format
>   - do not mix up mailbox information with memory-regions
>   - fix redundant dev_warn for split mode
>   - return an error code when setting up mailboxes fails
>   - redesign driver to move mailbox setup to driver probe
>   - add .kick function only if mailbox setup succeeds
> 
> v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/
> 
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 228 +++++++++++++++++++++++-
>  1 file changed, 226 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index 81af2dea56c2..f7131fe8fe7e 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -8,16 +8,23 @@
>  #include <linux/dma-mapping.h>
>  #include <linux/firmware/xlnx-zynqmp.h>
>  #include <linux/kernel.h>
> +#include <linux/mailbox_client.h>
> +#include <linux/mailbox/zynqmp-ipi-message.h>
>  #include <linux/module.h>
>  #include <linux/of_address.h>
>  #include <linux/of_platform.h>
>  #include <linux/of_reserved_mem.h>
>  #include <linux/platform_device.h>
>  #include <linux/remoteproc.h>
> -#include <linux/slab.h>
>  
>  #include "remoteproc_internal.h"
>  
> +/* IPI buffer MAX length */
> +#define IPI_BUF_LEN_MAX	32U
> +
> +/* RX mailbox client buffer max length */
> +#define MBOX_CLIENT_BUF_MAX	(IPI_BUF_LEN_MAX + \
> +				 sizeof(struct zynqmp_ipi_message))
>  /*
>   * settings for RPU cluster mode which
>   * reflects possible values of xlnx,cluster-mode dt-property
> @@ -43,6 +50,27 @@ struct mem_bank_data {
>  	char *bank_name;
>  };
>  
> +/**
> + * struct mbox_info
> + *
> + * @rx_mc_buf: to copy data from mailbox rx channel
> + * @tx_mc_buf: to copy data to mailbox tx channel
> + * @r5_core: this mailbox's corresponding r5_core pointer
> + * @mbox_work: schedule work after receiving data from mailbox
> + * @mbox_cl: mailbox client
> + * @tx_chan: mailbox tx channel
> + * @rx_chan: mailbox rx channel
> + */
> +struct mbox_info {
> +	unsigned char rx_mc_buf[MBOX_CLIENT_BUF_MAX];
> +	unsigned char tx_mc_buf[MBOX_CLIENT_BUF_MAX];
> +	struct zynqmp_r5_core *r5_core;
> +	struct work_struct mbox_work;
> +	struct mbox_client mbox_cl;
> +	struct mbox_chan *tx_chan;
> +	struct mbox_chan *rx_chan;
> +};
> +
>  /*
>   * Hardcoded TCM bank values. This will be removed once TCM bindings are
>   * accepted for system-dt specifications and upstreamed in linux kernel
> @@ -63,6 +91,7 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
>   * @tcm_banks: array of each TCM bank data
>   * @rproc: rproc handle
>   * @pm_domain_id: RPU CPU power domain id
> + * @ipi: pointer to mailbox information
>   */
>  struct zynqmp_r5_core {
>  	struct device *dev;
> @@ -71,6 +100,7 @@ struct zynqmp_r5_core {
>  	struct mem_bank_data **tcm_banks;
>  	struct rproc *rproc;
>  	u32 pm_domain_id;
> +	struct mbox_info *ipi;
>  };
>  
>  /**
> @@ -88,6 +118,178 @@ struct zynqmp_r5_cluster {
>  	struct zynqmp_r5_core **r5_cores;
>  };
>  
> +/**
> + * event_notified_idr_cb() - callback for vq_interrupt per notifyid
> + * @id: rproc->notify id
> + * @ptr: pointer to idr private data
> + * @data: data passed to idr_for_each callback
> + *
> + * Pass notification to remoteproc virtio
> + *
> + * Return: 0. having return is to satisfy the idr_for_each() function
> + *          pointer input argument requirement.
> + **/
> +static int event_notified_idr_cb(int id, void *ptr, void *data)
> +{
> +	struct rproc *rproc = data;
> +
> +	if (rproc_vq_interrupt(rproc, id) == IRQ_NONE)
> +		dev_dbg(&rproc->dev, "data not found for vqid=%d\n", id);
> +
> +	return 0;
> +}
> +
> +/**
> + * handle_event_notified() - remoteproc notification work function
> + * @work: pointer to the work structure
> + *
> + * It checks each registered remoteproc notify IDs.
> + */
> +static void handle_event_notified(struct work_struct *work)
> +{
> +	struct mbox_info *ipi;
> +	struct rproc *rproc;
> +
> +	ipi = container_of(work, struct mbox_info, mbox_work);
> +	rproc = ipi->r5_core->rproc;
> +
> +	/*
> +	 * We only use IPI for interrupt. The RPU firmware side may or may
> +	 * not write the notifyid when it trigger IPI.
> +	 * And thus, we scan through all the registered notifyids and
> +	 * find which one is valid to get the message.
> +	 * Even if message from firmware is NULL, we attempt to get vqid
> +	 */
> +	idr_for_each(&rproc->notifyids, event_notified_idr_cb, rproc);
> +}
> +
> +/**
> + * zynqmp_r5_mb_rx_cb() - receive channel mailbox callback
> + * @cl: mailbox client
> + * @msg: message pointer
> + *
> + * Receive data from ipi buffer, ack interrupt and then
> + * it will schedule the R5 notification work.
> + */
> +static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *msg)
> +{
> +	struct zynqmp_ipi_message *ipi_msg, *buf_msg;
> +	struct mbox_info *ipi;
> +	size_t len;
> +
> +	ipi = container_of(cl, struct mbox_info, mbox_cl);
> +
> +	/* copy data from ipi buffer to r5_core */
> +	ipi_msg = (struct zynqmp_ipi_message *)msg;
> +	buf_msg = (struct zynqmp_ipi_message *)ipi->rx_mc_buf;
> +	len = ipi_msg->len;
> +	if (len > IPI_BUF_LEN_MAX) {
> +		dev_warn(cl->dev, "msg size exceeded than %d\n",
> +			 IPI_BUF_LEN_MAX);
> +		len = IPI_BUF_LEN_MAX;
> +	}
> +	buf_msg->len = len;
> +	memcpy(buf_msg->data, ipi_msg->data, len);
> +
> +	/* received and processed interrupt ack */
> +	if (mbox_send_message(ipi->rx_chan, NULL) < 0)
> +		dev_err(cl->dev, "ack failed to mbox rx_chan\n");
> +
> +	schedule_work(&ipi->mbox_work);
> +}
> +
> +/**
> + * zynqmp_r5_setup_mbox() - Setup mailboxes related properties
> + *			    this is used for each individual R5 core
> + *
> + * @cdev: child node device
> + *
> + * Function to setup mailboxes related properties
> + * return : NULL if failed else pointer to mbox_info
> + */
> +static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
> +{
> +	struct mbox_client *mbox_cl;
> +	struct mbox_info *ipi;
> +
> +	ipi = kzalloc(sizeof(*ipi), GFP_KERNEL);
> +	if (!ipi)
> +		return NULL;
> +
> +	mbox_cl = &ipi->mbox_cl;
> +	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
> +	mbox_cl->tx_block = false;
> +	mbox_cl->knows_txdone = false;
> +	mbox_cl->tx_done = NULL;
> +	mbox_cl->dev = cdev;
> +
> +	/* Request TX and RX channels */
> +	ipi->tx_chan = mbox_request_channel_byname(mbox_cl, "tx");
> +	if (IS_ERR(ipi->tx_chan)) {
> +		ipi->tx_chan = NULL;
> +		kfree(ipi);
> +		dev_warn(cdev, "mbox tx channel request failed\n");
> +		return NULL;
> +	}
> +
> +	ipi->rx_chan = mbox_request_channel_byname(mbox_cl, "rx");
> +	if (IS_ERR(ipi->rx_chan)) {
> +		mbox_free_channel(ipi->tx_chan);
> +		ipi->rx_chan = NULL;
> +		ipi->tx_chan = NULL;
> +		kfree(ipi);
> +		dev_warn(cdev, "mbox rx channel request failed\n");
> +		return NULL;
> +	}
> +
> +	INIT_WORK(&ipi->mbox_work, handle_event_notified);
> +
> +	return ipi;
> +}
> +
> +static void zynqmp_r5_free_mbox(struct mbox_info *ipi)
> +{
> +	if (!ipi)
> +		return;
> +
> +	if (ipi->tx_chan) {
> +		mbox_free_channel(ipi->tx_chan);
> +		ipi->tx_chan = NULL;
> +	}
> +
> +	if (ipi->rx_chan) {
> +		mbox_free_channel(ipi->rx_chan);
> +		ipi->rx_chan = NULL;
> +	}
> +
> +	kfree(ipi);
> +}
> +
> +/*
> + * zynqmp_r5_core_kick() - kick a firmware if mbox is provided
> + * @rproc: r5 core's corresponding rproc structure
> + * @vqid: virtqueue ID
> + */
> +static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
> +{
> +	struct zynqmp_r5_core *r5_core = rproc->priv;
> +	struct device *dev = r5_core->dev;
> +	struct zynqmp_ipi_message *mb_msg;
> +	struct mbox_info *ipi;
> +	int ret;
> +
> +	ipi = r5_core->ipi;
> +	if (!ipi)
> +		return;
> +
> +	mb_msg = (struct zynqmp_ipi_message *)ipi->tx_mc_buf;
> +	memcpy(mb_msg->data, &vqid, sizeof(vqid));
> +	mb_msg->len = sizeof(vqid);
> +	ret = mbox_send_message(ipi->tx_chan, mb_msg);
> +	if (ret < 0)
> +		dev_warn(dev, "failed to send message\n");
> +}
> +
>  /*
>   * zynqmp_r5_set_mode()
>   *
> @@ -617,7 +819,7 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
>  	return 0;
>  }
>  
> -static const struct rproc_ops zynqmp_r5_rproc_ops = {
> +static struct rproc_ops zynqmp_r5_rproc_ops = {
>  	.prepare	= zynqmp_r5_rproc_prepare,
>  	.unprepare	= zynqmp_r5_rproc_unprepare,
>  	.start		= zynqmp_r5_rproc_start,
> @@ -642,6 +844,7 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>  {
>  	struct zynqmp_r5_core *r5_core;
>  	struct rproc *r5_rproc;
> +	struct mbox_info *ipi;
>  	int ret;
>  
>  	/* Set up DMA mask */
> @@ -649,12 +852,23 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>  	if (ret)
>  		return ERR_PTR(ret);
>  
> +	/*
> +	 * If mailbox nodes are disabled using "status" property then setting up
> +	 * mailbox channels will be failed. In that case we don't really need
> +	 * kick() operation. Include .kick() only if mbox channels are acquired
> +	 * successfully.
> +	 */
> +	ipi = zynqmp_r5_setup_mbox(cdev);
> +	if (ipi)
> +		zynqmp_r5_rproc_ops.kick = zynqmp_r5_rproc_kick;
> +
>  	/* Allocate remoteproc instance */
>  	r5_rproc = rproc_alloc(cdev, dev_name(cdev),
>  			       &zynqmp_r5_rproc_ops,
>  			       NULL, sizeof(struct zynqmp_r5_core));
>  	if (!r5_rproc) {
>  		dev_err(cdev, "failed to allocate memory for rproc instance\n");
> +		zynqmp_r5_free_mbox(ipi);
>  		return ERR_PTR(-ENOMEM);
>  	}
>  
> @@ -665,6 +879,7 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>  	if (!r5_core->np) {
>  		dev_err(cdev, "can't get device node for r5 core\n");
>  		ret = -EINVAL;
> +		zynqmp_r5_free_mbox(ipi);
>  		goto free_rproc;
>  	}
>  
> @@ -672,10 +887,17 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>  	ret = rproc_add(r5_rproc);
>  	if (ret) {
>  		dev_err(cdev, "failed to add r5 remoteproc\n");
> +		zynqmp_r5_free_mbox(ipi);
>  		goto free_rproc;
>  	}
>  
> +	if (ipi) {
> +		r5_core->ipi = ipi;
> +		ipi->r5_core = r5_core;
> +	}
> +
>  	r5_core->rproc = r5_rproc;
> +
>  	return r5_core;
>  
>  free_rproc:
> @@ -918,6 +1140,7 @@ static int zynqmp_r5_cluster_init(struct zynqmp_r5_cluster *cluster)
>  	while (i >= 0) {
>  		put_device(child_devs[i]);
>  		if (r5_cores[i]) {
> +			zynqmp_r5_free_mbox(r5_cores[i]->ipi);

The mailboxes are initialized in zynqmp_r5_add_rproc_core() but freed here in
case of trouble, which introduces coupling between the two functions.  I suggest
moving zynqmp_r5_setup_mbox() into zynqmp_r5_cluster_init() and initializing
both mailboxes there.
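
Roughly, the suggested shape (an editor's sketch with names from the
driver; error paths are simplified):

#include <linux/err.h>

static int cluster_init_sketch(struct zynqmp_r5_cluster *cluster,
			       struct device **child_devs, int core_count)
{
	int i;

	for (i = 0; i < core_count; i++) {
		cluster->r5_cores[i] = zynqmp_r5_add_rproc_core(child_devs[i]);
		if (IS_ERR(cluster->r5_cores[i]))
			goto release;

		/* mailbox setup now sits next to its teardown below */
		cluster->r5_cores[i]->ipi = zynqmp_r5_setup_mbox(child_devs[i]);
	}

	return 0;

release:
	while (--i >= 0)
		zynqmp_r5_free_mbox(cluster->r5_cores[i]->ipi);
	return -EINVAL;
}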

I am done reviewing this set.

Thanks,
Mathieu

>  			of_reserved_mem_device_release(r5_cores[i]->dev);
>  			rproc_del(r5_cores[i]->rproc);
>  			rproc_free(r5_cores[i]->rproc);
> @@ -942,6 +1165,7 @@ static void zynqmp_r5_cluster_exit(void *data)
>  
>  	for (i = 0; i < cluster->core_count; i++) {
>  		r5_core = cluster->r5_cores[i];
> +		zynqmp_r5_free_mbox(r5_core->ipi);
>  		of_reserved_mem_device_release(r5_core->dev);
>  		put_device(r5_core->dev);
>  		rproc_del(r5_core->rproc);
> -- 
> 2.25.1
> 

* Re: [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names
  2023-02-22 18:41   ` Mathieu Poirier
@ 2023-02-22 20:47     ` Tanmay Shah
  0 siblings, 0 replies; 15+ messages in thread
From: Tanmay Shah @ 2023-02-22 20:47 UTC (permalink / raw)
  To: Mathieu Poirier, Tanmay Shah
  Cc: michal.simek, andersson, jaswinder.singh, ben.levinsky,
	shubhrajyoti.datta, linux-kernel, linux-arm-kernel,
	linux-remoteproc

Hi Mathieu,

Thanks for the reviews. I agree with all the comments and will address
them in the next revision.

Thanks,

Tanmay

On 2/22/23 10:41 AM, Mathieu Poirier wrote:
> On Mon, Feb 13, 2023 at 01:18:25PM -0800, Tanmay Shah wrote:
>> If the unit address is appended to the node name of a memory-region,
>> then adding rproc carveouts fails, as the node name and unit address
>> are both passed as the carveout name (e.g. vdev0vring0@xxxxxxxx).
>> However, only the node name is expected by the remoteproc framework.
>> This patch moves memory-region node parsing from driver probe to
>> prepare and passes only the node name, not the unit address.
>>
>> Fixes: 6b291e8020a8 ("drivers: remoteproc: Add Xilinx r5 remoteproc driver")
>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>> ---
>>
>> Changelog:
>>    - This is the first version of this change; however, it is posted as
>>      part of the series that has version v3. The v2 of the series can be
>>      found at the following link.
>>
>> v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/
>>
>>   drivers/remoteproc/xlnx_r5_remoteproc.c | 87 ++++++-------------------
>>   1 file changed, 20 insertions(+), 67 deletions(-)
>>
>> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> index 2db57d394155..81af2dea56c2 100644
>> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
>> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> @@ -61,8 +61,6 @@ static const struct mem_bank_data zynqmp_tcm_banks[] = {
>>    * @np: device node of RPU instance
>>    * @tcm_bank_count: number TCM banks accessible to this RPU
>>    * @tcm_banks: array of each TCM bank data
>> - * @rmem_count: Number of reserved mem regions
>> - * @rmem: reserved memory region nodes from device tree
>>    * @rproc: rproc handle
>>    * @pm_domain_id: RPU CPU power domain id
>>    */
>> @@ -71,8 +69,6 @@ struct zynqmp_r5_core {
>>   	struct device_node *np;
>>   	int tcm_bank_count;
>>   	struct mem_bank_data **tcm_banks;
>> -	int rmem_count;
>> -	struct reserved_mem **rmem;
>>   	struct rproc *rproc;
>>   	u32 pm_domain_id;
>>   };
>> @@ -239,21 +235,31 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>>   {
>>   	struct rproc_mem_entry *rproc_mem;
>>   	struct zynqmp_r5_core *r5_core;
>> +	struct device_node *rmem_np;
>>   	struct reserved_mem *rmem;
>>   	int i, num_mem_regions;
>>   
>>   	r5_core = (struct zynqmp_r5_core *)rproc->priv;
>> -	num_mem_regions = r5_core->rmem_count;
>> +
>> +	num_mem_regions = of_property_count_elems_of_size(r5_core->np, "memory-region",
>> +							  sizeof(phandle));
>>   
>>   	for (i = 0; i < num_mem_regions; i++) {
>> -		rmem = r5_core->rmem[i];
>>
> Extra line
>
> Everyone else in the remoteproc subsystem is using of_phandle_iterator_next(),
> please do the same.  It is easier to maintain and you don't have to call
> of_node_put() after each iteration.
>
>
>> -		if (!strncmp(rmem->name, "vdev0buffer", strlen("vdev0buffer"))) {
>> +		rmem_np = of_parse_phandle(r5_core->np, "memory-region", i);
>> +
>> +		rmem = of_reserved_mem_lookup(rmem_np);
>> +		if (!rmem) {
>> +			of_node_put(rmem_np);
>> +			return -EINVAL;
>> +		}
>> +
>> +		if (!strcmp(rmem_np->name, "vdev0buffer")) {
>>   			/* Init reserved memory for vdev buffer */
>>   			rproc_mem = rproc_of_resm_mem_entry_init(&rproc->dev, i,
>>   								 rmem->size,
>>   								 rmem->base,
>> -								 rmem->name);
>> +								 rmem_np->name);
>>   		} else {
>>   			/* Register associated reserved memory regions */
>>   			rproc_mem = rproc_mem_entry_init(&rproc->dev, NULL,
>> @@ -261,16 +267,20 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>>   							 rmem->size, rmem->base,
>>   							 zynqmp_r5_mem_region_map,
>>   							 zynqmp_r5_mem_region_unmap,
>> -							 rmem->name);
>> +							 rmem_np->name);
>>   		}
>>   
>> -		if (!rproc_mem)
>> +		if (!rproc_mem) {
>> +			of_node_put(rmem_np);
> When moving to of_phandle_iterator_next(), of_node_put(it.node) has to be
> called on error conditions.  Other drivers don't do it, something I will fix in
> the next cycle.
>
>>   			return -ENOMEM;
>> +		}
>>   
>>   		rproc_add_carveout(rproc, rproc_mem);
>>   
>>   		dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
>>   			rmem->name, rmem->base, rmem->size);
>> +
>> +		of_node_put(rmem_np);
>>   	}
>>   
>>   	return 0;
>> @@ -726,59 +736,6 @@ static int zynqmp_r5_get_tcm_node(struct zynqmp_r5_cluster *cluster)
>>   	return 0;
>>   }
>>   
>> -/**
>> - * zynqmp_r5_get_mem_region_node()
>> - * parse memory-region property and get reserved mem regions
>> - *
>> - * @r5_core: pointer to zynqmp_r5_core type object
>> - *
>> - * Return: 0 for success and error code for failure.
>> - */
>> -static int zynqmp_r5_get_mem_region_node(struct zynqmp_r5_core *r5_core)
>> -{
>> -	struct device_node *np, *rmem_np;
>> -	struct reserved_mem **rmem;
>> -	int res_mem_count, i;
>> -	struct device *dev;
>> -
>> -	dev = r5_core->dev;
>> -	np = r5_core->np;
>> -
>> -	res_mem_count = of_property_count_elems_of_size(np, "memory-region",
>> -							sizeof(phandle));
>> -	if (res_mem_count <= 0) {
>> -		dev_warn(dev, "failed to get memory-region property %d\n",
>> -			 res_mem_count);
>> -		return 0;
>> -	}
>> -
>> -	rmem = devm_kcalloc(dev, res_mem_count,
>> -			    sizeof(struct reserved_mem *), GFP_KERNEL);
>> -	if (!rmem)
>> -		return -ENOMEM;
>> -
>> -	for (i = 0; i < res_mem_count; i++) {
>> -		rmem_np = of_parse_phandle(np, "memory-region", i);
>> -		if (!rmem_np)
>> -			goto release_rmem;
>> -
>> -		rmem[i] = of_reserved_mem_lookup(rmem_np);
>> -		if (!rmem[i]) {
>> -			of_node_put(rmem_np);
>> -			goto release_rmem;
>> -		}
>> -
>> -		of_node_put(rmem_np);
>> -	}
>> -
>> -	r5_core->rmem_count = res_mem_count;
>> -	r5_core->rmem = rmem;
>> -	return 0;
>> -
>> -release_rmem:
>> -	return -EINVAL;
>> -}
>> -
>>   /*
>>    * zynqmp_r5_core_init()
>>    * Create and initialize zynqmp_r5_core type object
>> @@ -806,10 +763,6 @@ static int zynqmp_r5_core_init(struct zynqmp_r5_cluster *cluster,
>>   	for (i = 0; i < cluster->core_count; i++) {
>>   		r5_core = cluster->r5_cores[i];
>>   
>> -		ret = zynqmp_r5_get_mem_region_node(r5_core);
>> -		if (ret)
>> -			dev_warn(dev, "memory-region prop failed %d\n", ret);
>> -
>>   		/* Initialize r5 cores with power-domains parsed from dts */
>>   		ret = of_property_read_u32_index(r5_core->np, "power-domains",
>>   						 1, &r5_core->pm_domain_id);
>> -- 
>> 2.25.1
>>

* Re: [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg
  2023-02-22 19:06   ` Mathieu Poirier
@ 2023-02-22 20:49     ` Tanmay Shah
  0 siblings, 0 replies; 15+ messages in thread
From: Tanmay Shah @ 2023-02-22 20:49 UTC (permalink / raw)
  To: Mathieu Poirier, Tanmay Shah
  Cc: michal.simek, andersson, jaswinder.singh, ben.levinsky,
	shubhrajyoti.datta, linux-kernel, linux-arm-kernel,
	linux-remoteproc


On 2/22/23 11:06 AM, Mathieu Poirier wrote:
> On Mon, Feb 13, 2023 at 01:18:26PM -0800, Tanmay Shah wrote:
>> [ ... ]
>> +
>>   /*
>>    * zynqmp_r5_set_mode()
>>    *
>> @@ -617,7 +819,7 @@ static int zynqmp_r5_rproc_unprepare(struct rproc *rproc)
>>   	return 0;
>>   }
>>   
>> -static const struct rproc_ops zynqmp_r5_rproc_ops = {
>> +static struct rproc_ops zynqmp_r5_rproc_ops = {
>>   	.prepare	= zynqmp_r5_rproc_prepare,
>>   	.unprepare	= zynqmp_r5_rproc_unprepare,
>>   	.start		= zynqmp_r5_rproc_start,
>> @@ -642,6 +844,7 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>>   {
>>   	struct zynqmp_r5_core *r5_core;
>>   	struct rproc *r5_rproc;
>> +	struct mbox_info *ipi;
>>   	int ret;
>>   
>>   	/* Set up DMA mask */
>> @@ -649,12 +852,23 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>>   	if (ret)
>>   		return ERR_PTR(ret);
>>   
>> +	/*
>> +	 * If mailbox nodes are disabled using "status" property then setting up
>> +	 * mailbox channels will be failed. In that case we don't really need
>> +	 * kick() operation. Include .kick() only if mbox channels are acquired
>> +	 * successfully.
>> +	 */
>> +	ipi = zynqmp_r5_setup_mbox(cdev);
>> +	if (ipi)
>> +		zynqmp_r5_rproc_ops.kick = zynqmp_r5_rproc_kick;
>> +
>>   	/* Allocate remoteproc instance */
>>   	r5_rproc = rproc_alloc(cdev, dev_name(cdev),
>>   			       &zynqmp_r5_rproc_ops,
>>   			       NULL, sizeof(struct zynqmp_r5_core));
>>   	if (!r5_rproc) {
>>   		dev_err(cdev, "failed to allocate memory for rproc instance\n");
>> +		zynqmp_r5_free_mbox(ipi);
>>   		return ERR_PTR(-ENOMEM);
>>   	}
>>   
>> @@ -665,6 +879,7 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>>   	if (!r5_core->np) {
>>   		dev_err(cdev, "can't get device node for r5 core\n");
>>   		ret = -EINVAL;
>> +		zynqmp_r5_free_mbox(ipi);
>>   		goto free_rproc;
>>   	}
>>   
>> @@ -672,10 +887,17 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>>   	ret = rproc_add(r5_rproc);
>>   	if (ret) {
>>   		dev_err(cdev, "failed to add r5 remoteproc\n");
>> +		zynqmp_r5_free_mbox(ipi);
>>   		goto free_rproc;
>>   	}
>>   
>> +	if (ipi) {
>> +		r5_core->ipi = ipi;
>> +		ipi->r5_core = r5_core;
>> +	}
>> +
>>   	r5_core->rproc = r5_rproc;
>> +
>>   	return r5_core;
>>   
>>   free_rproc:
>> @@ -918,6 +1140,7 @@ static int zynqmp_r5_cluster_init(struct zynqmp_r5_cluster *cluster)
>>   	while (i >= 0) {
>>   		put_device(child_devs[i]);
>>   		if (r5_cores[i]) {
>> +			zynqmp_r5_free_mbox(r5_cores[i]->ipi);
> The mailboxes are initialized in zynqmp_r5_add_rproc_core() but freed here in
> case of trouble, which introduces coupling between the two functions.  I suggest
> moving zynqmp_r5_setup_mbox() into zynqmp_r5_cluster_init() and initializing both
> mailboxes there.
>
> I am done reviewing this set.


Ack. Yes that makes sense.
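
For reference, a minimal sketch of what that could look like
(hypothetical, not the actual follow-up patch; it assumes
zynqmp_r5_setup_mbox()/zynqmp_r5_free_mbox() keep their current
signatures):

	/* in zynqmp_r5_cluster_init(), once per core */
	for (i = 0; i < cluster->core_count; i++) {
		r5_core = cluster->r5_cores[i];

		/* acquire mbox channels here, not in zynqmp_r5_add_rproc_core() */
		ipi = zynqmp_r5_setup_mbox(r5_core->dev);
		if (ipi) {
			r5_core->ipi = ipi;
			ipi->r5_core = r5_core;
		}

		/* ... existing per-core init ... */
	}

with the error path of the same function calling zynqmp_r5_free_mbox()
for every core that was already set up, and the conditional .kick
assignment getting the same treatment.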

Thanks,

Tanmay


>
> Thanks,
> Mathieu
>
>>   			of_reserved_mem_device_release(r5_cores[i]->dev);
>>   			rproc_del(r5_cores[i]->rproc);
>>   			rproc_free(r5_cores[i]->rproc);
>> @@ -942,6 +1165,7 @@ static void zynqmp_r5_cluster_exit(void *data)
>>   
>>   	for (i = 0; i < cluster->core_count; i++) {
>>   		r5_core = cluster->r5_cores[i];
>> +		zynqmp_r5_free_mbox(r5_core->ipi);
>>   		of_reserved_mem_device_release(r5_core->dev);
>>   		put_device(r5_core->dev);
>>   		rproc_del(r5_core->rproc);
>> -- 
>> 2.25.1
>>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-22 17:34   ` Mathieu Poirier
@ 2023-02-23  9:40     ` Michal Simek
  2023-02-23 14:47       ` Tanmay Shah
  0 siblings, 1 reply; 15+ messages in thread
From: Michal Simek @ 2023-02-23  9:40 UTC (permalink / raw)
  To: Mathieu Poirier, Tanmay Shah
  Cc: andersson, jaswinder.singh, ben.levinsky, shubhrajyoti.datta,
	linux-kernel, linux-arm-kernel, linux-remoteproc



On 2/22/23 18:34, Mathieu Poirier wrote:
> On Mon, Feb 13, 2023 at 01:18:24PM -0800, Tanmay Shah wrote:
>> As of now only one child node is handled by zynqmp-ipi
>> mailbox driver. Upon introducing remoteproc r5 core mailbox
>> nodes, found few enhancements in Xilinx zynqmp mailbox driver
>> as following:
>>
>> - fix mailbox child node counts
>>    If child mailbox node status is disabled it causes
>>    crash in interrupt handler. Fix this by assigning
>>    only available child node during driver probe.
>>
>> - fix typo in IPI documentation %s/12/32/
>>    Xilinx IPI message buffers allows 32-byte data transfer.
>>    Fix documentation that says 12 bytes
>>
>> - fix bug in zynqmp-ipi isr handling
>>    Multiple IPI channels are mapped to same interrupt handler.
>>    Current isr implementation handles only one channel per isr.
>>    Fix this behavior by checking isr status bit of all child
>>    mailbox nodes.
>>
>> Fixes: 4981b82ba2ff ("mailbox: ZynqMP IPI mailbox controller")
>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>> ---
>>
>> Changelog:
>>    - This is first version of this change, however posting as part of the series
>>      that has version v3.
>>
>> v2: https://lore.kernel.org/all/20230126213154.1707300-1-tanmay.shah@amd.com/
>>
>>   drivers/mailbox/zynqmp-ipi-mailbox.c       | 8 ++++----
>>   include/linux/mailbox/zynqmp-ipi-message.h | 2 +-
>>   2 files changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
>> index 12e004ff1a14..b1498f6f06e1 100644
>> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
>> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
>> @@ -152,7 +152,7 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>>   	struct zynqmp_ipi_message *msg;
>>   	u64 arg0, arg3;
>>   	struct arm_smccc_res res;
>> -	int ret, i;
>> +	int ret, i, status = IRQ_NONE;
>>   
>>   	(void)irq;
>>   	arg0 = SMC_IPI_MAILBOX_STATUS_ENQUIRY;
>> @@ -170,11 +170,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>>   				memcpy_fromio(msg->data, mchan->req_buf,
>>   					      msg->len);
>>   				mbox_chan_received_data(chan, (void *)msg);
>> -				return IRQ_HANDLED;
>> +				status = IRQ_HANDLED;
>>   			}
>>   		}
>>   	}
>> -	return IRQ_NONE;
>> +	return status;
>>   }
>>   
>>   /**
>> @@ -634,7 +634,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
>>   	struct zynqmp_ipi_mbox *mbox;
>>   	int num_mboxes, ret = -EINVAL;
>>   
>> -	num_mboxes = of_get_child_count(np);
>> +	num_mboxes = of_get_available_child_count(np);
>>   	pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)),
>>   			     GFP_KERNEL);
>>   	if (!pdata)
>> diff --git a/include/linux/mailbox/zynqmp-ipi-message.h b/include/linux/mailbox/zynqmp-ipi-message.h
>> index 35ce84c8ca02..31d8046d945e 100644
>> --- a/include/linux/mailbox/zynqmp-ipi-message.h
>> +++ b/include/linux/mailbox/zynqmp-ipi-message.h
>> @@ -9,7 +9,7 @@
>>    * @data: message payload
>>    *
>>    * This is the structure for data used in mbox_send_message
>> - * the maximum length of data buffer is fixed to 12 bytes.
>> + * the maximum length of data buffer is fixed to 32 bytes.
>>    * Client is supposed to be aware of this.
> 
> I agree that this should be split in 3 patches but the fixes are so small that
> it is hardly required.  I'll leave it up to Michal to decide.

Generic guidance says that you should split these patches. I personally
prefer to have one patch per change. It is useful for bisecting and
faster to review.
I would expect this patch to go via the mailbox tree and the rest via
the remoteproc tree. That's why the maintainers should say which way
they prefer.

On the mailbox itself: I recently had some time to look at this driver
and I didn't really get why the registers are listed, because all of
those addresses can be calculated from the SoC compatible string and
the xlnx,ipi-id of both sides.

Thanks,
Michal


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-23  9:40     ` Michal Simek
@ 2023-02-23 14:47       ` Tanmay Shah
  2023-02-24  8:37         ` Michal Simek
  0 siblings, 1 reply; 15+ messages in thread
From: Tanmay Shah @ 2023-02-23 14:47 UTC (permalink / raw)
  To: Michal Simek, Mathieu Poirier, Tanmay Shah
  Cc: andersson, jaswinder.singh, ben.levinsky, shubhrajyoti.datta,
	linux-kernel, linux-arm-kernel, linux-remoteproc


On 2/23/23 1:40 AM, Michal Simek wrote:
>
>
> On 2/22/23 18:34, Mathieu Poirier wrote:
>> On Mon, Feb 13, 2023 at 01:18:24PM -0800, Tanmay Shah wrote:
>>> [ ... ]
>>
>> I agree that this should be split in 3 patches but the fixes are so 
>> small that
>> it is hardly required.  I'll leave it up to Michal to decide.
>
> Generic guidance says that you should split these patches. I
> personally prefer to have one patch per change. It is useful for
> bisecting and faster to review.
> I would expect this patch to go via the mailbox tree and the rest
> via the remoteproc tree. That's why the maintainers should say which
> way they prefer.
>

Thanks Michal for the reviews. I will split the patch into three
separate patches.


> On the mailbox itself: I recently had some time to look at this driver
> and I didn't really get why the registers are listed, because all of
> those addresses can be calculated from the SoC compatible string and
> the xlnx,ipi-id of both sides.


Yes, the IPI configuration register addresses are retrieved from TF-A
in the zynqmp-ipi driver using the xlnx,ipi-id property.

Other than that, there are message buffers provided in hardware for IPI
communication. We list those message buffer addresses in the reg
property, and they are expected in the dts. As per the bindings we do
not map message buffers to the IPI ID.

I am not sure which register listing you are referring to?


>
> Thanks,
> Michal
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-23 14:47       ` Tanmay Shah
@ 2023-02-24  8:37         ` Michal Simek
  2023-02-27 15:25           ` Tanmay Shah
  0 siblings, 1 reply; 15+ messages in thread
From: Michal Simek @ 2023-02-24  8:37 UTC (permalink / raw)
  To: Tanmay Shah, Mathieu Poirier, Tanmay Shah
  Cc: andersson, jaswinder.singh, ben.levinsky, shubhrajyoti.datta,
	linux-kernel, linux-arm-kernel, linux-remoteproc



On 2/23/23 15:47, Tanmay Shah wrote:
> 
> On 2/23/23 1:40 AM, Michal Simek wrote:
>>
>>
>> On 2/22/23 18:34, Mathieu Poirier wrote:
>>> On Mon, Feb 13, 2023 at 01:18:24PM -0800, Tanmay Shah wrote:
>>>> [ ... ]
>>>
>>> I agree that this should be split in 3 patches but the fixes are so small that
>>> it is hardly required.  I'll leave it up to Michal to decide.
>>
>> Generic guidance says that you should split these patches. I personally
>> prefer to have one patch per change. It is useful for bisecting and
>> faster to review.
>> I would expect this patch to go via the mailbox tree and the rest via
>> the remoteproc tree. That's why the maintainers should say which way
>> they prefer.
>>
> 
> Thanks Michal for the reviews. I will split the patch into three separate patches.
> 
> 
>> On the mailbox itself: I recently had some time to look at this driver
>> and I didn't really get why the registers are listed, because all of
>> those addresses can be calculated from the SoC compatible string and
>> the xlnx,ipi-id of both sides.
> 
> 
> Yes, the IPI configuration register addresses are retrieved from TF-A
> in the zynqmp-ipi driver using the xlnx,ipi-id property.
> 
> Other than that, there are message buffers provided in hardware for IPI
> communication. We list those message buffer addresses in the reg
> property, and they are expected in the dts. As per the bindings we do
> not map message buffers to the IPI ID.
> 
> I am not sure which register listing you are referring to?

Based on
https://docs.xilinx.com/r/en-US/am011-versal-acap-trm/Message-Buffer

In the Versal case the APU has xlnx,ipi-id = 2, and pmu1 (the PMC) has
xlnx,ipi-id = 1.

The message buffer base on Versal is 0xFF3F0000.

Local buffers for sending from 2 -> 1:
buffer block 2 starts at offset 0x400.

The slot order within a destination block is PSM, PMC, IPI0..., where
each slot has a 32B request buffer and a 32B response buffer.

For 2 -> 1 the target is the PMC, which means offset 0x40 for the
request and 0x60 for the response.

When this is put together:

0xff3f0000 + 0x400 + 0x40 = 0xff3f0440 - local request
0xff3f0000 + 0x400 + 0x60 = 0xff3f0460 - local response

For the way back, from 1 -> 2:
buffer block 1 starts at offset 0x200.
I want to send to the APU, for which we use channel 2.
Channel 2 starts at ID * 0x40 = 0x80 for the request, and
0x80 + 0x20 = 0xa0 for the response.

So for 1 -> 2 the target is the APU at ID 2:

0xff3f0000 + 0x200 + 0x80 = 0xff3f0280 - remote request
0xff3f0000 + 0x200 + 0xa0 = 0xff3f02a0 - remote response

Based on this you can see that the reg/reg-names properties are pretty
much useless and should be removed from the DT binding document: the
base address plus the source and destination ipi-ids already tell you
which addresses should be used.

As far as I know ZynqMP uses the same logic; the only difference is
really just a different base address for the buffers.
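
To illustrate, the whole lookup then collapses to a few lines of
arithmetic. A sketch only: the names are made up, the constants are the
Versal ones from above, and it assumes the destination slot index
equals the xlnx,ipi-id (which holds for the IDs in this example):

	#define VERSAL_IPI_BUF_BASE	0xff3f0000UL	/* message buffer region */
	#define IPI_BUF_STRIDE		0x200UL		/* per-source buffer block */
	#define IPI_SLOT_STRIDE		0x40UL		/* per-destination slot */
	#define IPI_RESP_OFFSET		0x20UL		/* response follows 32B request */

	/* request buffer used when 'src' sends to 'dst' (both xlnx,ipi-id) */
	static unsigned long ipi_req_buf(unsigned long src, unsigned long dst)
	{
		return VERSAL_IPI_BUF_BASE + src * IPI_BUF_STRIDE +
		       dst * IPI_SLOT_STRIDE;
	}

	static unsigned long ipi_resp_buf(unsigned long src, unsigned long dst)
	{
		return ipi_req_buf(src, dst) + IPI_RESP_OFFSET;
	}

	/*
	 * ipi_req_buf(2, 1)  = 0xff3f0440 - local request  (APU -> PMC)
	 * ipi_resp_buf(2, 1) = 0xff3f0460 - local response
	 * ipi_req_buf(1, 2)  = 0xff3f0280 - remote request (PMC -> APU)
	 * ipi_resp_buf(1, 2) = 0xff3f02a0 - remote response
	 */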

That's why I think the DT node should be just like this, with all
addresses derived in the driver:

Versal
         versal_ipi {
                 compatible = "xlnx,versal-ipi-mailbox";
                 interrupt-parent = <&gic>;
                 interrupts = <0 30 4>;
                 xlnx,ipi-id = <2>;

                 ipi_mailbox_pmu1: mailbox {
                         #mbox-cells = <1>;
                         xlnx,ipi-id = <1>;
                 };
         };

where a different compatible string ensures that the right base address
is assigned for each SoC.
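
The probe side could then pick the buffer base purely from the
compatible string. A hypothetical sketch (the ZynqMP base is my reading
of its TRM, please double-check before relying on it):

	static const struct of_device_id zynqmp_ipi_of_match[] = {
		{ .compatible = "xlnx,zynqmp-ipi-mailbox", .data = (void *)0xff990000UL },
		{ .compatible = "xlnx,versal-ipi-mailbox", .data = (void *)0xff3f0000UL },
		{ /* sentinel */ }
	};

	/* in probe: */
	buf_base = (unsigned long)of_device_get_match_data(&pdev->dev);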

Thanks,
Michal

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-24  8:37         ` Michal Simek
@ 2023-02-27 15:25           ` Tanmay Shah
  2023-02-27 15:28             ` Michal Simek
  0 siblings, 1 reply; 15+ messages in thread
From: Tanmay Shah @ 2023-02-27 15:25 UTC (permalink / raw)
  To: Michal Simek, Mathieu Poirier, Tanmay Shah
  Cc: andersson, jaswinder.singh, ben.levinsky, shubhrajyoti.datta,
	linux-kernel, linux-arm-kernel, linux-remoteproc

Thanks Michal for this calculation.

I will send a separate patch after some more analysis to accommodate
this way of deriving the message buffer addresses from the ipi-id.

However, that isn't required for this series; here I will just split
this patch into three separate patches. I hope that's okay.

Thanks,

Tanmay

On 2/24/23 2:37 AM, Michal Simek wrote:
> [ ... ]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes
  2023-02-27 15:25           ` Tanmay Shah
@ 2023-02-27 15:28             ` Michal Simek
  0 siblings, 0 replies; 15+ messages in thread
From: Michal Simek @ 2023-02-27 15:28 UTC (permalink / raw)
  To: Tanmay Shah, Mathieu Poirier, Tanmay Shah
  Cc: andersson, jaswinder.singh, ben.levinsky, shubhrajyoti.datta,
	linux-kernel, linux-arm-kernel, linux-remoteproc



On 2/27/23 16:25, Tanmay Shah wrote:
> Thanks Michal for this calculation.
> 
> I will send a separate patch after some more analysis to accommodate
> this way of deriving the message buffer addresses from the ipi-id.

thx.

> 
> However, that isn't required for this series; here I will just split
> this patch into three separate patches. I hope that's okay.

yes that's fine.

M

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread

Thread overview: 15+ messages
2023-02-13 21:18 [PATCH v3 0/3] drivers: remoteproc: xilinx: add mailbox support Tanmay Shah
2023-02-13 21:18 ` [PATCH v3 1/3] drivers: mailbox: zynqmp: handle multiple child nodes Tanmay Shah
2023-02-22  5:28   ` Datta, Shubhrajyoti
2023-02-22 17:34   ` Mathieu Poirier
2023-02-23  9:40     ` Michal Simek
2023-02-23 14:47       ` Tanmay Shah
2023-02-24  8:37         ` Michal Simek
2023-02-27 15:25           ` Tanmay Shah
2023-02-27 15:28             ` Michal Simek
2023-02-13 21:18 ` [PATCH v3 2/3] drivers: remoteproc: xilinx: fix carveout names Tanmay Shah
2023-02-22 18:41   ` Mathieu Poirier
2023-02-22 20:47     ` Tanmay Shah
2023-02-13 21:18 ` [PATCH v3 3/3] remoteproc: xilinx: add mailbox channels for rpmsg Tanmay Shah
2023-02-22 19:06   ` Mathieu Poirier
2023-02-22 20:49     ` Tanmay Shah
