linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] mailbox: zynqmp: Enable Bufferless IPIs for Versal based SOCs
@ 2023-12-14 21:13 Ben Levinsky
  2023-12-14 21:13 ` [PATCH 1/3] mailbox: zynqmp: Move of_match structure closer to usage Ben Levinsky
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Ben Levinsky @ 2023-12-14 21:13 UTC (permalink / raw)
  To: jassisinghbrar, linux-kernel, michal.simek
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel

On Xilinx-AMD Versal and Versal NET SoCs there also exist
inter-processor interrupts (IPIs) without IPI message buffers. Enable
the IPI mailbox driver to send and receive on these bufferless IPIs as
well.

This is enabled with a new compatible string: "xlnx,versal-ipi-mailbox"

The original, buffered usage on ZynqMP-based SoCs is still supported.

Note that the linked patch provides corresponding bindings.
Depends on: https://lore.kernel.org/all/20231214054224.957336-3-tanmay.shah@amd.com/T/

Ben Levinsky (3):
  mailbox: zynqmp: Move of_match structure closer to usage
  mailbox: zynqmp: Move buffered IPI setup to of_match selected routine
  mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's

 drivers/mailbox/zynqmp-ipi-mailbox.c | 275 ++++++++++++++++++++++-----
 1 file changed, 231 insertions(+), 44 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] mailbox: zynqmp: Move of_match structure closer to usage
  2023-12-14 21:13 [PATCH 0/3] mailbox: zynqmp: Enable Bufferless IPIs for Versal based SOCs Ben Levinsky
@ 2023-12-14 21:13 ` Ben Levinsky
  2023-12-14 21:13 ` [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine Ben Levinsky
  2023-12-14 21:13 ` [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's Ben Levinsky
  2 siblings, 0 replies; 9+ messages in thread
From: Ben Levinsky @ 2023-12-14 21:13 UTC (permalink / raw)
  To: jassisinghbrar, linux-kernel, michal.simek
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel

Move the of_match structure zynqmp_ipi_of_match adjacent to where it is
used in the zynqmp_ipi_driver structure, for readability.

Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
---
 drivers/mailbox/zynqmp-ipi-mailbox.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
index 7fa533e80dd9..2c10aa01b3bb 100644
--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
+++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
@@ -416,11 +416,6 @@ static struct mbox_chan *zynqmp_ipi_of_xlate(struct mbox_controller *mbox,
 	return chan;
 }
 
-static const struct of_device_id zynqmp_ipi_of_match[] = {
-	{ .compatible = "xlnx,zynqmp-ipi-mailbox" },
-	{},
-};
-MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
 
 /**
  * zynqmp_ipi_mbox_get_buf_res - Get buffer resource from the IPI dev node
@@ -698,6 +693,12 @@ static int zynqmp_ipi_remove(struct platform_device *pdev)
 	return 0;
 }
 
+static const struct of_device_id zynqmp_ipi_of_match[] = {
+	{ .compatible = "xlnx,zynqmp-ipi-mailbox" },
+	{},
+};
+MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
+
 static struct platform_driver zynqmp_ipi_driver = {
 	.probe = zynqmp_ipi_probe,
 	.remove = zynqmp_ipi_remove,
-- 
2.25.1



* [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine
  2023-12-14 21:13 [PATCH 0/3] mailbox: zynqmp: Enable Bufferless IPIs for Versal based SOCs Ben Levinsky
  2023-12-14 21:13 ` [PATCH 1/3] mailbox: zynqmp: Move of_match structure closer to usage Ben Levinsky
@ 2023-12-14 21:13 ` Ben Levinsky
  2023-12-20 13:23   ` Michal Simek
  2023-12-14 21:13 ` [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's Ben Levinsky
  2 siblings, 1 reply; 9+ messages in thread
From: Ben Levinsky @ 2023-12-14 21:13 UTC (permalink / raw)
  To: jassisinghbrar, linux-kernel, michal.simek
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel

Move the routine that initializes the mailboxes for send and receive
behind a function pointer that is selected based on the compatible
string.

Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
---
 drivers/mailbox/zynqmp-ipi-mailbox.c | 124 +++++++++++++++++++--------
 1 file changed, 89 insertions(+), 35 deletions(-)

diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
index 2c10aa01b3bb..edefb80a6e47 100644
--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
+++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
@@ -72,6 +72,10 @@ struct zynqmp_ipi_mchan {
 	unsigned int chan_type;
 };
 
+struct zynqmp_ipi_mbox;
+
+typedef int (*setup_ipi_fn)(struct zynqmp_ipi_mbox *ipi_mbox, struct device_node *node);
+
 /**
  * struct zynqmp_ipi_mbox - Description of a ZynqMP IPI mailbox
  *                          platform data.
@@ -82,6 +86,7 @@ struct zynqmp_ipi_mchan {
  * @mbox:                 mailbox Controller
  * @mchans:               array for channels, tx channel and rx channel.
  * @irq:                  IPI agent interrupt ID
+ * @setup_ipi_fn:         Function pointer to set up IPI channels
  */
 struct zynqmp_ipi_mbox {
 	struct zynqmp_ipi_pdata *pdata;
@@ -89,6 +94,7 @@ struct zynqmp_ipi_mbox {
 	u32 remote_id;
 	struct mbox_controller mbox;
 	struct zynqmp_ipi_mchan mchans[2];
+	setup_ipi_fn setup_ipi_fn;
 };
 
 /**
@@ -466,12 +472,9 @@ static void zynqmp_ipi_mbox_dev_release(struct device *dev)
 static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
 				 struct device_node *node)
 {
-	struct zynqmp_ipi_mchan *mchan;
 	struct mbox_chan *chans;
 	struct mbox_controller *mbox;
-	struct resource res;
 	struct device *dev, *mdev;
-	const char *name;
 	int ret;
 
 	dev = ipi_mbox->pdata->dev;
@@ -491,6 +494,75 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
 	}
 	mdev = &ipi_mbox->dev;
 
+	/* Get the IPI remote agent ID */
+	ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
+	if (ret < 0) {
+		dev_err(dev, "No IPI remote ID is specified.\n");
+		return ret;
+	}
+
+	ret = ipi_mbox->setup_ipi_fn(ipi_mbox, node);
+	if (ret) {
+		dev_err(dev, "Failed to set up IPI Buffers.\n");
+		return ret;
+	}
+
+	mbox = &ipi_mbox->mbox;
+	mbox->dev = mdev;
+	mbox->ops = &zynqmp_ipi_chan_ops;
+	mbox->num_chans = 2;
+	mbox->txdone_irq = false;
+	mbox->txdone_poll = true;
+	mbox->txpoll_period = 5;
+	mbox->of_xlate = zynqmp_ipi_of_xlate;
+	chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
+	if (!chans)
+		return -ENOMEM;
+	mbox->chans = chans;
+	chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
+	chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
+	ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
+	ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
+	ret = devm_mbox_controller_register(mdev, mbox);
+	if (ret)
+		dev_err(mdev,
+			"Failed to register mbox_controller(%d)\n", ret);
+	else
+		dev_info(mdev,
+			 "Registered ZynqMP IPI mbox with TX/RX channels.\n");
+	return ret;
+}
+
+/**
+ * zynqmp_ipi_setup - set up IPI Buffers for classic flow
+ *
+ * @ipi_mbox: pointer to IPI mailbox private data structure
+ * @node: IPI mailbox device node
+ *
+ * This will be used to set up IPI Buffers for ZynqMP SOC if user
+ * wishes to use classic driver usage model on new SOC's with only
+ * buffered IPIs.
+ *
+ * Note that bufferless IPIs and mixed usage of buffered and bufferless
+ * IPIs are not supported with this flow.
+ *
+ * This will be invoked with compatible string "xlnx,zynqmp-ipi-mailbox".
+ *
+ * Return: 0 for success, negative value for failure
+ */
+static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
+			    struct device_node *node)
+{
+	struct zynqmp_ipi_mchan *mchan;
+	struct device *mdev;
+	struct resource res;
+	struct device *dev;
+	const char *name;
+	int ret;
+
+	mdev = &ipi_mbox->dev;
+	dev = ipi_mbox->pdata->dev;
+
 	mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
 	name = "local_request_region";
 	ret = zynqmp_ipi_mbox_get_buf_res(node, name, &res);
@@ -565,37 +637,7 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
 	if (!mchan->rx_buf)
 		return -ENOMEM;
 
-	/* Get the IPI remote agent ID */
-	ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
-	if (ret < 0) {
-		dev_err(dev, "No IPI remote ID is specified.\n");
-		return ret;
-	}
-
-	mbox = &ipi_mbox->mbox;
-	mbox->dev = mdev;
-	mbox->ops = &zynqmp_ipi_chan_ops;
-	mbox->num_chans = 2;
-	mbox->txdone_irq = false;
-	mbox->txdone_poll = true;
-	mbox->txpoll_period = 5;
-	mbox->of_xlate = zynqmp_ipi_of_xlate;
-	chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
-	if (!chans)
-		return -ENOMEM;
-	mbox->chans = chans;
-	chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
-	chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
-	ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
-	ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
-	ret = devm_mbox_controller_register(mdev, mbox);
-	if (ret)
-		dev_err(mdev,
-			"Failed to register mbox_controller(%d)\n", ret);
-	else
-		dev_info(mdev,
-			 "Registered ZynqMP IPI mbox with TX/RX channels.\n");
-	return ret;
+	return 0;
 }
 
 /**
@@ -626,6 +668,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
 	struct zynqmp_ipi_pdata *pdata;
 	struct zynqmp_ipi_mbox *mbox;
 	int num_mboxes, ret = -EINVAL;
+	setup_ipi_fn ipi_fn;
 
 	num_mboxes = of_get_available_child_count(np);
 	if (num_mboxes == 0) {
@@ -646,9 +689,18 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
 		return ret;
 	}
 
+	ipi_fn = (setup_ipi_fn)device_get_match_data(&pdev->dev);
+	if (!ipi_fn) {
+		dev_err(dev,
+			"Mbox Compatible String is missing IPI Setup fn.\n");
+		return -ENODEV;
+	}
+
 	pdata->num_mboxes = num_mboxes;
 
 	mbox = pdata->ipi_mboxes;
+	mbox->setup_ipi_fn = ipi_fn;
+
 	for_each_available_child_of_node(np, nc) {
 		mbox->pdata = pdata;
 		ret = zynqmp_ipi_mbox_probe(mbox, nc);
@@ -694,7 +746,9 @@ static int zynqmp_ipi_remove(struct platform_device *pdev)
 }
 
 static const struct of_device_id zynqmp_ipi_of_match[] = {
-	{ .compatible = "xlnx,zynqmp-ipi-mailbox" },
+	{ .compatible = "xlnx,zynqmp-ipi-mailbox",
+	  .data = &zynqmp_ipi_setup,
+	},
 	{},
 };
 MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
-- 
2.25.1



* [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's
  2023-12-14 21:13 [PATCH 0/3] mailbox: zynqmp: Enable Bufferless IPIs for Versal based SOCs Ben Levinsky
  2023-12-14 21:13 ` [PATCH 1/3] mailbox: zynqmp: Move of_match structure closer to usage Ben Levinsky
  2023-12-14 21:13 ` [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine Ben Levinsky
@ 2023-12-14 21:13 ` Ben Levinsky
  2023-12-20 13:29   ` Michal Simek
  2 siblings, 1 reply; 9+ messages in thread
From: Ben Levinsky @ 2023-12-14 21:13 UTC (permalink / raw)
  To: jassisinghbrar, linux-kernel, michal.simek
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel

On Xilinx-AMD Versal and Versal NET SoCs, inter-processor interrupts
(IPIs) exist both with and without corresponding message buffers.

Add a setup routine, selected by the new DT compatible string
"xlnx,versal-ipi-mailbox", so that a Versal-based SoC can use a mailbox
Device Tree entry where both host and remote can use either buffered or
bufferless interrupts.

Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
---
Note that the linked patch provides corresponding bindings.
Depends on: https://lore.kernel.org/all/20231214054224.957336-3-tanmay.shah@amd.com/T/
---
 drivers/mailbox/zynqmp-ipi-mailbox.c | 146 +++++++++++++++++++++++++--
 1 file changed, 139 insertions(+), 7 deletions(-)

diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
index edefb80a6e47..316d9406064e 100644
--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
+++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
@@ -52,6 +52,13 @@
 #define IPI_MB_CHNL_TX	0 /* IPI mailbox TX channel */
 #define IPI_MB_CHNL_RX	1 /* IPI mailbox RX channel */
 
+/* IPI Message Buffer Information */
+#define RESP_OFFSET	0x20U
+#define DEST_OFFSET	0x40U
+#define IPI_BUF_SIZE	0x20U
+#define DST_BIT_POS	9U
+#define SRC_BITMASK	GENMASK(11, 8)
+
 /**
  * struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel
  * @is_opened: indicate if the IPI channel is opened
@@ -170,9 +177,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
 		if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) {
 			if (mchan->is_opened) {
 				msg = mchan->rx_buf;
-				msg->len = mchan->req_buf_size;
-				memcpy_fromio(msg->data, mchan->req_buf,
-					      msg->len);
+				if (msg) {
+					msg->len = mchan->req_buf_size;
+					memcpy_fromio(msg->data, mchan->req_buf,
+						      msg->len);
+				}
 				mbox_chan_received_data(chan, (void *)msg);
 				status = IRQ_HANDLED;
 			}
@@ -282,26 +291,26 @@ static int zynqmp_ipi_send_data(struct mbox_chan *chan, void *data)
 
 	if (mchan->chan_type == IPI_MB_CHNL_TX) {
 		/* Send request message */
-		if (msg && msg->len > mchan->req_buf_size) {
+		if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) {
 			dev_err(dev, "channel %d message length %u > max %lu\n",
 				mchan->chan_type, (unsigned int)msg->len,
 				mchan->req_buf_size);
 			return -EINVAL;
 		}
-		if (msg && msg->len)
+		if (msg && msg->len && mchan->req_buf)
 			memcpy_toio(mchan->req_buf, msg->data, msg->len);
 		/* Kick IPI mailbox to send message */
 		arg0 = SMC_IPI_MAILBOX_NOTIFY;
 		zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res);
 	} else {
 		/* Send response message */
-		if (msg && msg->len > mchan->resp_buf_size) {
+		if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) {
 			dev_err(dev, "channel %d message length %u > max %lu\n",
 				mchan->chan_type, (unsigned int)msg->len,
 				mchan->resp_buf_size);
 			return -EINVAL;
 		}
-		if (msg && msg->len)
+		if (msg && msg->len && mchan->resp_buf)
 			memcpy_toio(mchan->resp_buf, msg->data, msg->len);
 		arg0 = SMC_IPI_MAILBOX_ACK;
 		zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK,
@@ -640,6 +649,126 @@ static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
 	return 0;
 }
 
+/**
+ * versal_ipi_setup - Set up IPIs to support mixed usage of
+ *				 Buffered and Bufferless IPIs.
+ *
+ * @ipi_mbox: pointer to IPI mailbox private data structure
+ * @node: IPI mailbox device node
+ *
+ * Return: 0 for success, negative value for failure
+ */
+static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
+			    struct device_node *node)
+{
+	struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan;
+	struct resource host_res, remote_res;
+	struct device_node *parent_node;
+	int host_idx, remote_idx;
+	struct device *mdev, *dev;
+
+	tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
+	rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
+	parent_node = of_get_parent(node);
+	dev = ipi_mbox->pdata->dev;
+	mdev = &ipi_mbox->dev;
+
+	host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res);
+	remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res);
+
+	/*
+	 * Only set up buffers if both sides claim to have msg buffers.
+	 * This is because each buffered IPI's corresponding msg buffers
+	 * are reserved for use by other buffered IPI's.
+	 */
+	if (!host_idx && !remote_idx) {
+		u32 host_src, host_dst, remote_src, remote_dst;
+		u32 buff_sz;
+
+		buff_sz = resource_size(&host_res);
+
+		host_src = host_res.start & SRC_BITMASK;
+		remote_src = remote_res.start & SRC_BITMASK;
+
+		host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET;
+		remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET;
+
+		/* Validate that IPI IDs are within IPI Message buffer space. */
+		if (host_dst >= buff_sz || remote_dst >= buff_sz) {
+			dev_err(mdev,
+				"Invalid IPI Message buffer values: %x %x\n",
+				host_dst, remote_dst);
+			return -EINVAL;
+		}
+
+		tx_mchan->req_buf = devm_ioremap(mdev,
+						 host_res.start | remote_dst,
+						 IPI_BUF_SIZE);
+		if (!tx_mchan->req_buf) {
+			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+			return -ENOMEM;
+		}
+
+		tx_mchan->resp_buf = devm_ioremap(mdev,
+						  (remote_res.start | host_dst) +
+						  RESP_OFFSET, IPI_BUF_SIZE);
+		if (!tx_mchan->resp_buf) {
+			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+			return -ENOMEM;
+		}
+
+		rx_mchan->req_buf = devm_ioremap(mdev,
+						 remote_res.start | host_dst,
+						 IPI_BUF_SIZE);
+		if (!rx_mchan->req_buf) {
+			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+			return -ENOMEM;
+		}
+
+		rx_mchan->resp_buf = devm_ioremap(mdev,
+						  (host_res.start | remote_dst) +
+						  RESP_OFFSET, IPI_BUF_SIZE);
+		if (!rx_mchan->resp_buf) {
+			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
+			return -ENOMEM;
+		}
+
+		tx_mchan->resp_buf_size = IPI_BUF_SIZE;
+		tx_mchan->req_buf_size = IPI_BUF_SIZE;
+		tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
+						sizeof(struct zynqmp_ipi_message),
+						GFP_KERNEL);
+		if (!tx_mchan->rx_buf)
+			return -ENOMEM;
+
+		rx_mchan->resp_buf_size = IPI_BUF_SIZE;
+		rx_mchan->req_buf_size = IPI_BUF_SIZE;
+		rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
+						sizeof(struct zynqmp_ipi_message),
+						GFP_KERNEL);
+		if (!rx_mchan->rx_buf)
+			return -ENOMEM;
+	} else {
+		/*
+		 * If here, then set up Bufferless IPI Channel because
+		 * one or both of the IPI's is bufferless.
+		 */
+		tx_mchan->req_buf = NULL;
+		tx_mchan->resp_buf = NULL;
+		tx_mchan->rx_buf = NULL;
+		tx_mchan->resp_buf_size = 0;
+		tx_mchan->req_buf_size = 0;
+
+		rx_mchan->req_buf = NULL;
+		rx_mchan->resp_buf = NULL;
+		rx_mchan->rx_buf = NULL;
+		rx_mchan->resp_buf_size = 0;
+		rx_mchan->req_buf_size = 0;
+	}
+
+	return 0;
+}
+
 /**
  * zynqmp_ipi_free_mboxes - Free IPI mailboxes devices
  *
@@ -749,6 +878,9 @@ static const struct of_device_id zynqmp_ipi_of_match[] = {
 	{ .compatible = "xlnx,zynqmp-ipi-mailbox",
 	  .data = &zynqmp_ipi_setup,
 	},
+	{ .compatible = "xlnx,versal-ipi-mailbox",
+	  .data = &versal_ipi_setup,
+	},
 	{},
 };
 MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
-- 
2.25.1



* Re: [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine
  2023-12-14 21:13 ` [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine Ben Levinsky
@ 2023-12-20 13:23   ` Michal Simek
  2023-12-20 17:44     ` Ben Levinsky
  0 siblings, 1 reply; 9+ messages in thread
From: Michal Simek @ 2023-12-20 13:23 UTC (permalink / raw)
  To: Ben Levinsky, jassisinghbrar, linux-kernel
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel



On 12/14/23 22:13, Ben Levinsky wrote:
> Move routine that initializes the mailboxes for send and receive to
> a function pointer that is set based on compatible string.
> 
> Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
> ---
>   drivers/mailbox/zynqmp-ipi-mailbox.c | 124 +++++++++++++++++++--------
>   1 file changed, 89 insertions(+), 35 deletions(-)
> 
> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
> index 2c10aa01b3bb..edefb80a6e47 100644
> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
> @@ -72,6 +72,10 @@ struct zynqmp_ipi_mchan {
>   	unsigned int chan_type;
>   };
>   
> +struct zynqmp_ipi_mbox;
> +
> +typedef int (*setup_ipi_fn)(struct zynqmp_ipi_mbox *ipi_mbox, struct device_node *node);
> +
>   /**
>    * struct zynqmp_ipi_mbox - Description of a ZynqMP IPI mailbox
>    *                          platform data.
> @@ -82,6 +86,7 @@ struct zynqmp_ipi_mchan {
>    * @mbox:                 mailbox Controller
>    * @mchans:               array for channels, tx channel and rx channel.
>    * @irq:                  IPI agent interrupt ID
> + * setup_ipi_fn:          Function Pointer to set up IPI Channels

Here should be @setup_ipi_fn.

>    */
>   struct zynqmp_ipi_mbox {
>   	struct zynqmp_ipi_pdata *pdata;
> @@ -89,6 +94,7 @@ struct zynqmp_ipi_mbox {
>   	u32 remote_id;
>   	struct mbox_controller mbox;
>   	struct zynqmp_ipi_mchan mchans[2];
> +	setup_ipi_fn setup_ipi_fn;
>   };
>   
>   /**
> @@ -466,12 +472,9 @@ static void zynqmp_ipi_mbox_dev_release(struct device *dev)
>   static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
>   				 struct device_node *node)
>   {
> -	struct zynqmp_ipi_mchan *mchan;
>   	struct mbox_chan *chans;
>   	struct mbox_controller *mbox;
> -	struct resource res;
>   	struct device *dev, *mdev;
> -	const char *name;
>   	int ret;
>   
>   	dev = ipi_mbox->pdata->dev;
> @@ -491,6 +494,75 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
>   	}
>   	mdev = &ipi_mbox->dev;
>   
> +	/* Get the IPI remote agent ID */
> +	ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
> +	if (ret < 0) {
> +		dev_err(dev, "No IPI remote ID is specified.\n");
> +		return ret;
> +	}
> +
> +	ret = ipi_mbox->setup_ipi_fn(ipi_mbox, node);
> +	if (ret) {
> +		dev_err(dev, "Failed to set up IPI Buffers.\n");
> +		return ret;
> +	}
> +
> +	mbox = &ipi_mbox->mbox;
> +	mbox->dev = mdev;
> +	mbox->ops = &zynqmp_ipi_chan_ops;
> +	mbox->num_chans = 2;
> +	mbox->txdone_irq = false;
> +	mbox->txdone_poll = true;
> +	mbox->txpoll_period = 5;
> +	mbox->of_xlate = zynqmp_ipi_of_xlate;
> +	chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
> +	if (!chans)
> +		return -ENOMEM;
> +	mbox->chans = chans;
> +	chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
> +	chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
> +	ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
> +	ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
> +	ret = devm_mbox_controller_register(mdev, mbox);
> +	if (ret)
> +		dev_err(mdev,
> +			"Failed to register mbox_controller(%d)\n", ret);
> +	else
> +		dev_info(mdev,
> +			 "Registered ZynqMP IPI mbox with TX/RX channels.\n");
> +	return ret;
> +}
> +
> +/**
> + * zynqmp_ipi_setup - set up IPI Buffers for classic flow
> + *
> + * @ipi_mbox: pointer to IPI mailbox private data structure
> + * @node: IPI mailbox device node
> + *
> + * This will be used to set up IPI Buffers for ZynqMP SOC if user
> + * wishes to use classic driver usage model on new SOC's with only
> + * buffered IPIs.
> + *
> + * Note that bufferless IPIs and mixed usage of buffered and bufferless
> + * IPIs are not supported with this flow.
> + *
> + * This will be invoked with compatible string "xlnx,zynqmp-ipi-mailbox".
> + *
> + * Return: 0 for success, negative value for failure
> + */
> +static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
> +			    struct device_node *node)
> +{
> +	struct zynqmp_ipi_mchan *mchan;
> +	struct device *mdev;
> +	struct resource res;
> +	struct device *dev;

nit: you can put it to the same line mdev, dev.

> +	const char *name;
> +	int ret;
> +
> +	mdev = &ipi_mbox->dev;
> +	dev = ipi_mbox->pdata->dev;
> +
>   	mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
>   	name = "local_request_region";
>   	ret = zynqmp_ipi_mbox_get_buf_res(node, name, &res);
> @@ -565,37 +637,7 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
>   	if (!mchan->rx_buf)
>   		return -ENOMEM;
>   
> -	/* Get the IPI remote agent ID */
> -	ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
> -	if (ret < 0) {
> -		dev_err(dev, "No IPI remote ID is specified.\n");
> -		return ret;
> -	}
> -
> -	mbox = &ipi_mbox->mbox;
> -	mbox->dev = mdev;
> -	mbox->ops = &zynqmp_ipi_chan_ops;
> -	mbox->num_chans = 2;
> -	mbox->txdone_irq = false;
> -	mbox->txdone_poll = true;
> -	mbox->txpoll_period = 5;
> -	mbox->of_xlate = zynqmp_ipi_of_xlate;
> -	chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
> -	if (!chans)
> -		return -ENOMEM;
> -	mbox->chans = chans;
> -	chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
> -	chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
> -	ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
> -	ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
> -	ret = devm_mbox_controller_register(mdev, mbox);
> -	if (ret)
> -		dev_err(mdev,
> -			"Failed to register mbox_controller(%d)\n", ret);
> -	else
> -		dev_info(mdev,
> -			 "Registered ZynqMP IPI mbox with TX/RX channels.\n");
> -	return ret;
> +	return 0;
>   }
>   
>   /**
> @@ -626,6 +668,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
>   	struct zynqmp_ipi_pdata *pdata;
>   	struct zynqmp_ipi_mbox *mbox;
>   	int num_mboxes, ret = -EINVAL;
> +	setup_ipi_fn ipi_fn;
>   
>   	num_mboxes = of_get_available_child_count(np);
>   	if (num_mboxes == 0) {
> @@ -646,9 +689,18 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
>   		return ret;
>   	}
>   
> +	ipi_fn = (setup_ipi_fn)device_get_match_data(&pdev->dev);
> +	if (!ipi_fn) {
> +		dev_err(dev,
> +			"Mbox Compatible String is missing IPI Setup fn.\n");
> +		return -ENODEV;
> +	}
> +
>   	pdata->num_mboxes = num_mboxes;
>   
>   	mbox = pdata->ipi_mboxes;
> +	mbox->setup_ipi_fn = ipi_fn;
> +
>   	for_each_available_child_of_node(np, nc) {
>   		mbox->pdata = pdata;
>   		ret = zynqmp_ipi_mbox_probe(mbox, nc);
> @@ -694,7 +746,9 @@ static int zynqmp_ipi_remove(struct platform_device *pdev)
>   }
>   
>   static const struct of_device_id zynqmp_ipi_of_match[] = {
> -	{ .compatible = "xlnx,zynqmp-ipi-mailbox" },
> +	{ .compatible = "xlnx,zynqmp-ipi-mailbox",
> +	  .data = &zynqmp_ipi_setup,
> +	},
>   	{},
>   };
>   MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);


M


* Re: [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's
  2023-12-14 21:13 ` [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's Ben Levinsky
@ 2023-12-20 13:29   ` Michal Simek
  2023-12-20 17:42     ` Ben Levinsky
  0 siblings, 1 reply; 9+ messages in thread
From: Michal Simek @ 2023-12-20 13:29 UTC (permalink / raw)
  To: Ben Levinsky, jassisinghbrar, linux-kernel
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel



On 12/14/23 22:13, Ben Levinsky wrote:
> On Xilinx-AMD Versal and Versal-NET, there exist both
> inter-processor-interrupts with corresponding message buffers and without
> such buffers.
> 
> Add a routine that, if the corresponding DT compatible
> string "xlnx,versal-ipi-mailbox" is used then a Versal-based SOC
> can use a mailbox Device Tree entry where both host and remote
> can use either of the buffered or bufferless interrupts.
> 
> Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
> ---
> Note that the linked patch provides corresponding bindings.
> Depends on: https://lore.kernel.org/all/20231214054224.957336-3-tanmay.shah@amd.com/T/
> ---
>   drivers/mailbox/zynqmp-ipi-mailbox.c | 146 +++++++++++++++++++++++++--
>   1 file changed, 139 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
> index edefb80a6e47..316d9406064e 100644
> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
> @@ -52,6 +52,13 @@
>   #define IPI_MB_CHNL_TX	0 /* IPI mailbox TX channel */
>   #define IPI_MB_CHNL_RX	1 /* IPI mailbox RX channel */
>   
> +/* IPI Message Buffer Information */
> +#define RESP_OFFSET	0x20U
> +#define DEST_OFFSET	0x40U
> +#define IPI_BUF_SIZE	0x20U
> +#define DST_BIT_POS	9U
> +#define SRC_BITMASK	GENMASK(11, 8)
> +
>   /**
>    * struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel
>    * @is_opened: indicate if the IPI channel is opened
> @@ -170,9 +177,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>   		if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) {
>   			if (mchan->is_opened) {
>   				msg = mchan->rx_buf;
> -				msg->len = mchan->req_buf_size;
> -				memcpy_fromio(msg->data, mchan->req_buf,
> -					      msg->len);
> +				if (msg) {
> +					msg->len = mchan->req_buf_size;
> +					memcpy_fromio(msg->data, mchan->req_buf,
> +						      msg->len);
> +				}
>   				mbox_chan_received_data(chan, (void *)msg);
>   				status = IRQ_HANDLED;
>   			}
> @@ -282,26 +291,26 @@ static int zynqmp_ipi_send_data(struct mbox_chan *chan, void *data)
>   
>   	if (mchan->chan_type == IPI_MB_CHNL_TX) {
>   		/* Send request message */
> -		if (msg && msg->len > mchan->req_buf_size) {
> +		if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) {
>   			dev_err(dev, "channel %d message length %u > max %lu\n",
>   				mchan->chan_type, (unsigned int)msg->len,
>   				mchan->req_buf_size);
>   			return -EINVAL;
>   		}
> -		if (msg && msg->len)
> +		if (msg && msg->len && mchan->req_buf)
>   			memcpy_toio(mchan->req_buf, msg->data, msg->len);
>   		/* Kick IPI mailbox to send message */
>   		arg0 = SMC_IPI_MAILBOX_NOTIFY;
>   		zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res);
>   	} else {
>   		/* Send response message */
> -		if (msg && msg->len > mchan->resp_buf_size) {
> +		if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) {
>   			dev_err(dev, "channel %d message length %u > max %lu\n",
>   				mchan->chan_type, (unsigned int)msg->len,
>   				mchan->resp_buf_size);
>   			return -EINVAL;
>   		}
> -		if (msg && msg->len)
> +		if (msg && msg->len && mchan->resp_buf)
>   			memcpy_toio(mchan->resp_buf, msg->data, msg->len);
>   		arg0 = SMC_IPI_MAILBOX_ACK;
>   		zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK,
> @@ -640,6 +649,126 @@ static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
>   	return 0;
>   }
>   
> +/**
> + * versal_ipi_setup - Set up IPIs to support mixed usage of
> + *				 Buffered and Bufferless IPIs.
> + *
> + * @ipi_mbox: pointer to IPI mailbox private data structure
> + * @node: IPI mailbox device node
> + *
> + * Return: 0 for success, negative value for failure
> + */
> +static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
> +			    struct device_node *node)
> +{
> +	struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan;
> +	struct resource host_res, remote_res;
> +	struct device_node *parent_node;
> +	int host_idx, remote_idx;
> +	struct device *mdev, *dev;
> +
> +	tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
> +	rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
> +	parent_node = of_get_parent(node);
> +	dev = ipi_mbox->pdata->dev;
> +	mdev = &ipi_mbox->dev;
> +
> +	host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res);
> +	remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res);
> +
> +	/*
> +	 * Only set up buffers if both sides claim to have msg buffers.
> +	 * This is because each buffered IPI's corresponding msg buffers
> +	 * are reserved for use by other buffered IPI's.
> +	 */
> +	if (!host_idx && !remote_idx) {
> +		u32 host_src, host_dst, remote_src, remote_dst;
> +		u32 buff_sz;
> +
> +		buff_sz = resource_size(&host_res);
> +
> +		host_src = host_res.start & SRC_BITMASK;
> +		remote_src = remote_res.start & SRC_BITMASK;
> +
> +		host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET;
> +		remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET;
> +
> +		/* Validate that IPI IDs is within IPI Message buffer space. */
> +		if (host_dst >= buff_sz || remote_dst >= buff_sz) {
> +			dev_err(mdev,
> +				"Invalid IPI Message buffer values: %x %x\n",
> +				host_dst, remote_dst);
> +			return -EINVAL;
> +		}
> +
> +		tx_mchan->req_buf = devm_ioremap(mdev,
> +						 host_res.start | remote_dst,
> +						 IPI_BUF_SIZE);
> +		if (!tx_mchan->req_buf) {
> +			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
> +			return -ENOMEM;
> +		}
> +
> +		tx_mchan->resp_buf = devm_ioremap(mdev,
> +						  (remote_res.start | host_dst) +
> +						  RESP_OFFSET, IPI_BUF_SIZE);
> +		if (!tx_mchan->resp_buf) {
> +			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
> +			return -ENOMEM;
> +		}
> +
> +		rx_mchan->req_buf = devm_ioremap(mdev,
> +						 remote_res.start | host_dst,
> +						 IPI_BUF_SIZE);
> +		if (!rx_mchan->req_buf) {
> +			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
> +			return -ENOMEM;
> +		}
> +
> +		rx_mchan->resp_buf = devm_ioremap(mdev,
> +						  (host_res.start | remote_dst) +
> +						  RESP_OFFSET, IPI_BUF_SIZE);
> +		if (!rx_mchan->resp_buf) {
> +			dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
> +			return -ENOMEM;
> +		}
> +
> +		tx_mchan->resp_buf_size = IPI_BUF_SIZE;
> +		tx_mchan->req_buf_size = IPI_BUF_SIZE;
> +		tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
> +						sizeof(struct zynqmp_ipi_message),
> +						GFP_KERNEL);
> +		if (!tx_mchan->rx_buf)
> +			return -ENOMEM;
> +
> +		rx_mchan->resp_buf_size = IPI_BUF_SIZE;
> +		rx_mchan->req_buf_size = IPI_BUF_SIZE;
> +		rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
> +						sizeof(struct zynqmp_ipi_message),
> +						GFP_KERNEL);
> +		if (!rx_mchan->rx_buf)
> +			return -ENOMEM;
> +	} else {
> +		/*
> +		 * If here, then set up Bufferless IPI Channel because
> +		 * one or both of the IPI's is bufferless.
> +		 */
> +		tx_mchan->req_buf = NULL;
> +		tx_mchan->resp_buf = NULL;
> +		tx_mchan->rx_buf = NULL;
> +		tx_mchan->resp_buf_size = 0;
> +		tx_mchan->req_buf_size = 0;
> +
> +		rx_mchan->req_buf = NULL;
> +		rx_mchan->resp_buf = NULL;
> +		rx_mchan->rx_buf = NULL;
> +		rx_mchan->resp_buf_size = 0;
> +		rx_mchan->req_buf_size = 0;

Just curious if this is really needed. If nothing fills in those values, aren't they
actually already 0/NULL, because that location is cleared by kzalloc?

Thanks,
Michal


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's
  2023-12-20 13:29   ` Michal Simek
@ 2023-12-20 17:42     ` Ben Levinsky
  2024-01-03 14:40       ` Ben Levinsky
  0 siblings, 1 reply; 9+ messages in thread
From: Ben Levinsky @ 2023-12-20 17:42 UTC (permalink / raw)
  To: Michal Simek, jassisinghbrar, linux-kernel
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel


On 12/20/23 5:29 AM, Michal Simek wrote:
>
>
> On 12/14/23 22:13, Ben Levinsky wrote:
>> On Xilinx-AMD Versal and Versal-NET, there exist both
>> inter-processor-interrupts with corresponding message buffers and without
>> such buffers.
>>
>> Add a routine that, if the corresponding DT compatible
>> string "xlnx,versal-ipi-mailbox" is used then a Versal-based SOC
>> can use a mailbox Device Tree entry where both host and remote
>> can use either of the buffered or bufferless interrupts.
>>
>> Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
>> ---
>> Note that the linked patch provides corresponding bindings.
>> Depends on: https://lore.kernel.org/all/20231214054224.957336-3-tanmay.shah@amd.com/T/
>> ---
>>   drivers/mailbox/zynqmp-ipi-mailbox.c | 146 +++++++++++++++++++++++++--
>>   1 file changed, 139 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
>> index edefb80a6e47..316d9406064e 100644
>> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
>> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
>> @@ -52,6 +52,13 @@
>>   #define IPI_MB_CHNL_TX    0 /* IPI mailbox TX channel */
>>   #define IPI_MB_CHNL_RX    1 /* IPI mailbox RX channel */
>>   +/* IPI Message Buffer Information */
>> +#define RESP_OFFSET    0x20U
>> +#define DEST_OFFSET    0x40U
>> +#define IPI_BUF_SIZE    0x20U
>> +#define DST_BIT_POS    9U
>> +#define SRC_BITMASK    GENMASK(11, 8)
>> +
>>   /**
>>    * struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel
>>    * @is_opened: indicate if the IPI channel is opened
>> @@ -170,9 +177,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>>           if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) {
>>               if (mchan->is_opened) {
>>                   msg = mchan->rx_buf;
>> -                msg->len = mchan->req_buf_size;
>> -                memcpy_fromio(msg->data, mchan->req_buf,
>> -                          msg->len);
>> +                if (msg) {
>> +                    msg->len = mchan->req_buf_size;
>> +                    memcpy_fromio(msg->data, mchan->req_buf,
>> +                              msg->len);
>> +                }
>>                   mbox_chan_received_data(chan, (void *)msg);
>>                   status = IRQ_HANDLED;
>>               }
>> @@ -282,26 +291,26 @@ static int zynqmp_ipi_send_data(struct mbox_chan *chan, void *data)
>>         if (mchan->chan_type == IPI_MB_CHNL_TX) {
>>           /* Send request message */
>> -        if (msg && msg->len > mchan->req_buf_size) {
>> +        if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) {
>>               dev_err(dev, "channel %d message length %u > max %lu\n",
>>                   mchan->chan_type, (unsigned int)msg->len,
>>                   mchan->req_buf_size);
>>               return -EINVAL;
>>           }
>> -        if (msg && msg->len)
>> +        if (msg && msg->len && mchan->req_buf)
>>               memcpy_toio(mchan->req_buf, msg->data, msg->len);
>>           /* Kick IPI mailbox to send message */
>>           arg0 = SMC_IPI_MAILBOX_NOTIFY;
>>           zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res);
>>       } else {
>>           /* Send response message */
>> -        if (msg && msg->len > mchan->resp_buf_size) {
>> +        if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) {
>>               dev_err(dev, "channel %d message length %u > max %lu\n",
>>                   mchan->chan_type, (unsigned int)msg->len,
>>                   mchan->resp_buf_size);
>>               return -EINVAL;
>>           }
>> -        if (msg && msg->len)
>> +        if (msg && msg->len && mchan->resp_buf)
>>               memcpy_toio(mchan->resp_buf, msg->data, msg->len);
>>           arg0 = SMC_IPI_MAILBOX_ACK;
>>           zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK,
>> @@ -640,6 +649,126 @@ static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
>>       return 0;
>>   }
>>   +/**
>> + * versal_ipi_setup - Set up IPIs to support mixed usage of
>> + *                 Buffered and Bufferless IPIs.
>> + *
>> + * @ipi_mbox: pointer to IPI mailbox private data structure
>> + * @node: IPI mailbox device node
>> + *
>> + * Return: 0 for success, negative value for failure
>> + */
>> +static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
>> +                struct device_node *node)
>> +{
>> +    struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan;
>> +    struct resource host_res, remote_res;
>> +    struct device_node *parent_node;
>> +    int host_idx, remote_idx;
>> +    struct device *mdev, *dev;
>> +
>> +    tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
>> +    rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
>> +    parent_node = of_get_parent(node);
>> +    dev = ipi_mbox->pdata->dev;
>> +    mdev = &ipi_mbox->dev;
>> +
>> +    host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res);
>> +    remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res);
>> +
>> +    /*
>> +     * Only set up buffers if both sides claim to have msg buffers.
>> +     * This is because each buffered IPI's corresponding msg buffers
>> +     * are reserved for use by other buffered IPI's.
>> +     */
>> +    if (!host_idx && !remote_idx) {
>> +        u32 host_src, host_dst, remote_src, remote_dst;
>> +        u32 buff_sz;
>> +
>> +        buff_sz = resource_size(&host_res);
>> +
>> +        host_src = host_res.start & SRC_BITMASK;
>> +        remote_src = remote_res.start & SRC_BITMASK;
>> +
>> +        host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET;
>> +        remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET;
>> +
>> +        /* Validate that IPI IDs is within IPI Message buffer space. */
>> +        if (host_dst >= buff_sz || remote_dst >= buff_sz) {
>> +            dev_err(mdev,
>> +                "Invalid IPI Message buffer values: %x %x\n",
>> +                host_dst, remote_dst);
>> +            return -EINVAL;
>> +        }
>> +
>> +        tx_mchan->req_buf = devm_ioremap(mdev,
>> +                         host_res.start | remote_dst,
>> +                         IPI_BUF_SIZE);
>> +        if (!tx_mchan->req_buf) {
>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>> +            return -ENOMEM;
>> +        }
>> +
>> +        tx_mchan->resp_buf = devm_ioremap(mdev,
>> +                          (remote_res.start | host_dst) +
>> +                          RESP_OFFSET, IPI_BUF_SIZE);
>> +        if (!tx_mchan->resp_buf) {
>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>> +            return -ENOMEM;
>> +        }
>> +
>> +        rx_mchan->req_buf = devm_ioremap(mdev,
>> +                         remote_res.start | host_dst,
>> +                         IPI_BUF_SIZE);
>> +        if (!rx_mchan->req_buf) {
>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>> +            return -ENOMEM;
>> +        }
>> +
>> +        rx_mchan->resp_buf = devm_ioremap(mdev,
>> +                          (host_res.start | remote_dst) +
>> +                          RESP_OFFSET, IPI_BUF_SIZE);
>> +        if (!rx_mchan->resp_buf) {
>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>> +            return -ENOMEM;
>> +        }
>> +
>> +        tx_mchan->resp_buf_size = IPI_BUF_SIZE;
>> +        tx_mchan->req_buf_size = IPI_BUF_SIZE;
>> +        tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
>> +                        sizeof(struct zynqmp_ipi_message),
>> +                        GFP_KERNEL);
>> +        if (!tx_mchan->rx_buf)
>> +            return -ENOMEM;
>> +
>> +        rx_mchan->resp_buf_size = IPI_BUF_SIZE;
>> +        rx_mchan->req_buf_size = IPI_BUF_SIZE;
>> +        rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
>> +                        sizeof(struct zynqmp_ipi_message),
>> +                        GFP_KERNEL);
>> +        if (!rx_mchan->rx_buf)
>> +            return -ENOMEM;
>> +    } else {
>> +        /*
>> +         * If here, then set up Bufferless IPI Channel because
>> +         * one or both of the IPI's is bufferless.
>> +         */
>> +        tx_mchan->req_buf = NULL;
>> +        tx_mchan->resp_buf = NULL;
>> +        tx_mchan->rx_buf = NULL;
>> +        tx_mchan->resp_buf_size = 0;
>> +        tx_mchan->req_buf_size = 0;
>> +
>> +        rx_mchan->req_buf = NULL;
>> +        rx_mchan->resp_buf = NULL;
>> +        rx_mchan->rx_buf = NULL;
>> +        rx_mchan->resp_buf_size = 0;
>> +        rx_mchan->req_buf_size = 0;
>
> Just curious if this is really needed. If nothing fills in those values, aren't they actually already 0/NULL, because that location is cleared by kzalloc?
>
> Thanks,
> Michal

Confirmed. I removed the whole else condition and it still worked.

Will update in the next rev after I receive any pending feedback from Jassi.



* Re: [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine
  2023-12-20 13:23   ` Michal Simek
@ 2023-12-20 17:44     ` Ben Levinsky
  0 siblings, 0 replies; 9+ messages in thread
From: Ben Levinsky @ 2023-12-20 17:44 UTC (permalink / raw)
  To: Michal Simek, jassisinghbrar, linux-kernel
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel


On 12/20/23 5:23 AM, Michal Simek wrote:
>
>
> On 12/14/23 22:13, Ben Levinsky wrote:
>> Move routine that initializes the mailboxes for send and receive to
>> a function pointer that is set based on compatible string.
>>
>> Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
>> ---
>>   drivers/mailbox/zynqmp-ipi-mailbox.c | 124 +++++++++++++++++++--------
>>   1 file changed, 89 insertions(+), 35 deletions(-)
>>
>> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
>> index 2c10aa01b3bb..edefb80a6e47 100644
>> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
>> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
>> @@ -72,6 +72,10 @@ struct zynqmp_ipi_mchan {
>>       unsigned int chan_type;
>>   };
>>   +struct zynqmp_ipi_mbox;
>> +
>> +typedef int (*setup_ipi_fn)(struct zynqmp_ipi_mbox *ipi_mbox, struct device_node *node);
>> +
>>   /**
>>    * struct zynqmp_ipi_mbox - Description of a ZynqMP IPI mailbox
>>    *                          platform data.
>> @@ -82,6 +86,7 @@ struct zynqmp_ipi_mchan {
>>    * @mbox:                 mailbox Controller
>>    * @mchans:               array for channels, tx channel and rx channel.
>>    * @irq:                  IPI agent interrupt ID
>> + * setup_ipi_fn:          Function Pointer to set up IPI Channels
>
> Here should be @setup_ipi_fn.
will fix.
>
>>    */
>>   struct zynqmp_ipi_mbox {
>>       struct zynqmp_ipi_pdata *pdata;
>> @@ -89,6 +94,7 @@ struct zynqmp_ipi_mbox {
>>       u32 remote_id;
>>       struct mbox_controller mbox;
>>       struct zynqmp_ipi_mchan mchans[2];
>> +    setup_ipi_fn setup_ipi_fn;
>>   };
>>     /**
>> @@ -466,12 +472,9 @@ static void zynqmp_ipi_mbox_dev_release(struct device *dev)
>>   static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
>>                    struct device_node *node)
>>   {
>> -    struct zynqmp_ipi_mchan *mchan;
>>       struct mbox_chan *chans;
>>       struct mbox_controller *mbox;
>> -    struct resource res;
>>       struct device *dev, *mdev;
>> -    const char *name;
>>       int ret;
>>         dev = ipi_mbox->pdata->dev;
>> @@ -491,6 +494,75 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
>>       }
>>       mdev = &ipi_mbox->dev;
>>   +    /* Get the IPI remote agent ID */
>> +    ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
>> +    if (ret < 0) {
>> +        dev_err(dev, "No IPI remote ID is specified.\n");
>> +        return ret;
>> +    }
>> +
>> +    ret = ipi_mbox->setup_ipi_fn(ipi_mbox, node);
>> +    if (ret) {
>> +        dev_err(dev, "Failed to set up IPI Buffers.\n");
>> +        return ret;
>> +    }
>> +
>> +    mbox = &ipi_mbox->mbox;
>> +    mbox->dev = mdev;
>> +    mbox->ops = &zynqmp_ipi_chan_ops;
>> +    mbox->num_chans = 2;
>> +    mbox->txdone_irq = false;
>> +    mbox->txdone_poll = true;
>> +    mbox->txpoll_period = 5;
>> +    mbox->of_xlate = zynqmp_ipi_of_xlate;
>> +    chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
>> +    if (!chans)
>> +        return -ENOMEM;
>> +    mbox->chans = chans;
>> +    chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
>> +    chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
>> +    ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
>> +    ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
>> +    ret = devm_mbox_controller_register(mdev, mbox);
>> +    if (ret)
>> +        dev_err(mdev,
>> +            "Failed to register mbox_controller(%d)\n", ret);
>> +    else
>> +        dev_info(mdev,
>> +             "Registered ZynqMP IPI mbox with TX/RX channels.\n");
>> +    return ret;
>> +}
>> +
>> +/**
>> + * zynqmp_ipi_setup - set up IPI Buffers for classic flow
>> + *
>> + * @ipi_mbox: pointer to IPI mailbox private data structure
>> + * @node: IPI mailbox device node
>> + *
>> + * This will be used to set up IPI Buffers for ZynqMP SOC if user
>> + * wishes to use classic driver usage model on new SOC's with only
>> + * buffered IPIs.
>> + *
>> + * Note that bufferless IPIs and mixed usage of buffered and bufferless
>> + * IPIs are not supported with this flow.
>> + *
>> + * This will be invoked with compatible string "xlnx,zynqmp-ipi-mailbox".
>> + *
>> + * Return: 0 for success, negative value for failure
>> + */
>> +static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
>> +                struct device_node *node)
>> +{
>> +    struct zynqmp_ipi_mchan *mchan;
>> +    struct device *mdev;
>> +    struct resource res;
>> +    struct device *dev;
>
> nit: you can put it to the same line mdev, dev.
will fix.
>
>> +    const char *name;
>> +    int ret;
>> +
>> +    mdev = &ipi_mbox->dev;
>> +    dev = ipi_mbox->pdata->dev;
>> +
>>       mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
>>       name = "local_request_region";
>>       ret = zynqmp_ipi_mbox_get_buf_res(node, name, &res);
>> @@ -565,37 +637,7 @@ static int zynqmp_ipi_mbox_probe(struct zynqmp_ipi_mbox *ipi_mbox,
>>       if (!mchan->rx_buf)
>>           return -ENOMEM;
>>   -    /* Get the IPI remote agent ID */
>> -    ret = of_property_read_u32(node, "xlnx,ipi-id", &ipi_mbox->remote_id);
>> -    if (ret < 0) {
>> -        dev_err(dev, "No IPI remote ID is specified.\n");
>> -        return ret;
>> -    }
>> -
>> -    mbox = &ipi_mbox->mbox;
>> -    mbox->dev = mdev;
>> -    mbox->ops = &zynqmp_ipi_chan_ops;
>> -    mbox->num_chans = 2;
>> -    mbox->txdone_irq = false;
>> -    mbox->txdone_poll = true;
>> -    mbox->txpoll_period = 5;
>> -    mbox->of_xlate = zynqmp_ipi_of_xlate;
>> -    chans = devm_kzalloc(mdev, 2 * sizeof(*chans), GFP_KERNEL);
>> -    if (!chans)
>> -        return -ENOMEM;
>> -    mbox->chans = chans;
>> -    chans[IPI_MB_CHNL_TX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
>> -    chans[IPI_MB_CHNL_RX].con_priv = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
>> -    ipi_mbox->mchans[IPI_MB_CHNL_TX].chan_type = IPI_MB_CHNL_TX;
>> -    ipi_mbox->mchans[IPI_MB_CHNL_RX].chan_type = IPI_MB_CHNL_RX;
>> -    ret = devm_mbox_controller_register(mdev, mbox);
>> -    if (ret)
>> -        dev_err(mdev,
>> -            "Failed to register mbox_controller(%d)\n", ret);
>> -    else
>> -        dev_info(mdev,
>> -             "Registered ZynqMP IPI mbox with TX/RX channels.\n");
>> -    return ret;
>> +    return 0;
>>   }
>>     /**
>> @@ -626,6 +668,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
>>       struct zynqmp_ipi_pdata *pdata;
>>       struct zynqmp_ipi_mbox *mbox;
>>       int num_mboxes, ret = -EINVAL;
>> +    setup_ipi_fn ipi_fn;
>>         num_mboxes = of_get_available_child_count(np);
>>       if (num_mboxes == 0) {
>> @@ -646,9 +689,18 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
>>           return ret;
>>       }
>>   +    ipi_fn = (setup_ipi_fn)device_get_match_data(&pdev->dev);
>> +    if (!ipi_fn) {
>> +        dev_err(dev,
>> +            "Mbox Compatible String is missing IPI Setup fn.\n");
>> +        return -ENODEV;
>> +    }
>> +
>>       pdata->num_mboxes = num_mboxes;
>>         mbox = pdata->ipi_mboxes;
>> +    mbox->setup_ipi_fn = ipi_fn;
>> +
>>       for_each_available_child_of_node(np, nc) {
>>           mbox->pdata = pdata;
>>           ret = zynqmp_ipi_mbox_probe(mbox, nc);
>> @@ -694,7 +746,9 @@ static int zynqmp_ipi_remove(struct platform_device *pdev)
>>   }
>>     static const struct of_device_id zynqmp_ipi_of_match[] = {
>> -    { .compatible = "xlnx,zynqmp-ipi-mailbox" },
>> +    { .compatible = "xlnx,zynqmp-ipi-mailbox",
>> +      .data = &zynqmp_ipi_setup,
>> +    },
>>       {},
>>   };
>>   MODULE_DEVICE_TABLE(of, zynqmp_ipi_of_match);
>
>
> M


* Re: [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's
  2023-12-20 17:42     ` Ben Levinsky
@ 2024-01-03 14:40       ` Ben Levinsky
  0 siblings, 0 replies; 9+ messages in thread
From: Ben Levinsky @ 2024-01-03 14:40 UTC (permalink / raw)
  To: Michal Simek, jassisinghbrar, linux-kernel
  Cc: shubhrajyoti.datta, tanmay.shah, linux-arm-kernel

Hi Jassi,


Please review when you can.


Thanks and Kind Regards,

Ben

On 12/20/23 9:42 AM, Ben Levinsky wrote:
> On 12/20/23 5:29 AM, Michal Simek wrote:
>>
>> On 12/14/23 22:13, Ben Levinsky wrote:
>>> On Xilinx-AMD Versal and Versal-NET, there exist both
>>> inter-processor-interrupts with corresponding message buffers and without
>>> such buffers.
>>>
>>> Add a routine that, if the corresponding DT compatible
>>> string "xlnx,versal-ipi-mailbox" is used then a Versal-based SOC
>>> can use a mailbox Device Tree entry where both host and remote
>>> can use either of the buffered or bufferless interrupts.
>>>
>>> Signed-off-by: Ben Levinsky <ben.levinsky@amd.com>
>>> ---
>>> Note that the linked patch provides corresponding bindings.
>>> Depends on: https://lore.kernel.org/all/20231214054224.957336-3-tanmay.shah@amd.com/T/
>>> ---
>>>   drivers/mailbox/zynqmp-ipi-mailbox.c | 146 +++++++++++++++++++++++++--
>>>   1 file changed, 139 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
>>> index edefb80a6e47..316d9406064e 100644
>>> --- a/drivers/mailbox/zynqmp-ipi-mailbox.c
>>> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
>>> @@ -52,6 +52,13 @@
>>>   #define IPI_MB_CHNL_TX    0 /* IPI mailbox TX channel */
>>>   #define IPI_MB_CHNL_RX    1 /* IPI mailbox RX channel */
>>>   +/* IPI Message Buffer Information */
>>> +#define RESP_OFFSET    0x20U
>>> +#define DEST_OFFSET    0x40U
>>> +#define IPI_BUF_SIZE    0x20U
>>> +#define DST_BIT_POS    9U
>>> +#define SRC_BITMASK    GENMASK(11, 8)
>>> +
>>>   /**
>>>    * struct zynqmp_ipi_mchan - Description of a Xilinx ZynqMP IPI mailbox channel
>>>    * @is_opened: indicate if the IPI channel is opened
>>> @@ -170,9 +177,11 @@ static irqreturn_t zynqmp_ipi_interrupt(int irq, void *data)
>>>           if (ret > 0 && ret & IPI_MB_STATUS_RECV_PENDING) {
>>>               if (mchan->is_opened) {
>>>                   msg = mchan->rx_buf;
>>> -                msg->len = mchan->req_buf_size;
>>> -                memcpy_fromio(msg->data, mchan->req_buf,
>>> -                          msg->len);
>>> +                if (msg) {
>>> +                    msg->len = mchan->req_buf_size;
>>> +                    memcpy_fromio(msg->data, mchan->req_buf,
>>> +                              msg->len);
>>> +                }
>>>                   mbox_chan_received_data(chan, (void *)msg);
>>>                   status = IRQ_HANDLED;
>>>               }
>>> @@ -282,26 +291,26 @@ static int zynqmp_ipi_send_data(struct mbox_chan *chan, void *data)
>>>         if (mchan->chan_type == IPI_MB_CHNL_TX) {
>>>           /* Send request message */
>>> -        if (msg && msg->len > mchan->req_buf_size) {
>>> +        if (msg && msg->len > mchan->req_buf_size && mchan->req_buf) {
>>>               dev_err(dev, "channel %d message length %u > max %lu\n",
>>>                   mchan->chan_type, (unsigned int)msg->len,
>>>                   mchan->req_buf_size);
>>>               return -EINVAL;
>>>           }
>>> -        if (msg && msg->len)
>>> +        if (msg && msg->len && mchan->req_buf)
>>>               memcpy_toio(mchan->req_buf, msg->data, msg->len);
>>>           /* Kick IPI mailbox to send message */
>>>           arg0 = SMC_IPI_MAILBOX_NOTIFY;
>>>           zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, &res);
>>>       } else {
>>>           /* Send response message */
>>> -        if (msg && msg->len > mchan->resp_buf_size) {
>>> +        if (msg && msg->len > mchan->resp_buf_size && mchan->resp_buf) {
>>>               dev_err(dev, "channel %d message length %u > max %lu\n",
>>>                   mchan->chan_type, (unsigned int)msg->len,
>>>                   mchan->resp_buf_size);
>>>               return -EINVAL;
>>>           }
>>> -        if (msg && msg->len)
>>> +        if (msg && msg->len && mchan->resp_buf)
>>>               memcpy_toio(mchan->resp_buf, msg->data, msg->len);
>>>           arg0 = SMC_IPI_MAILBOX_ACK;
>>>           zynqmp_ipi_fw_call(ipi_mbox, arg0, IPI_SMC_ACK_EIRQ_MASK,
>>> @@ -640,6 +649,126 @@ static int zynqmp_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
>>>       return 0;
>>>   }
>>>   +/**
>>> + * versal_ipi_setup - Set up IPIs to support mixed usage of
>>> + *                 Buffered and Bufferless IPIs.
>>> + *
>>> + * @ipi_mbox: pointer to IPI mailbox private data structure
>>> + * @node: IPI mailbox device node
>>> + *
>>> + * Return: 0 for success, negative value for failure
>>> + */
>>> +static int versal_ipi_setup(struct zynqmp_ipi_mbox *ipi_mbox,
>>> +                struct device_node *node)
>>> +{
>>> +    struct zynqmp_ipi_mchan *tx_mchan, *rx_mchan;
>>> +    struct resource host_res, remote_res;
>>> +    struct device_node *parent_node;
>>> +    int host_idx, remote_idx;
>>> +    struct device *mdev, *dev;
>>> +
>>> +    tx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_TX];
>>> +    rx_mchan = &ipi_mbox->mchans[IPI_MB_CHNL_RX];
>>> +    parent_node = of_get_parent(node);
>>> +    dev = ipi_mbox->pdata->dev;
>>> +    mdev = &ipi_mbox->dev;
>>> +
>>> +    host_idx = zynqmp_ipi_mbox_get_buf_res(parent_node, "msg", &host_res);
>>> +    remote_idx = zynqmp_ipi_mbox_get_buf_res(node, "msg", &remote_res);
>>> +
>>> +    /*
>>> +     * Only set up buffers if both sides claim to have msg buffers.
>>> +     * This is because each buffered IPI's corresponding msg buffers
>>> +     * are reserved for use by other buffered IPI's.
>>> +     */
>>> +    if (!host_idx && !remote_idx) {
>>> +        u32 host_src, host_dst, remote_src, remote_dst;
>>> +        u32 buff_sz;
>>> +
>>> +        buff_sz = resource_size(&host_res);
>>> +
>>> +        host_src = host_res.start & SRC_BITMASK;
>>> +        remote_src = remote_res.start & SRC_BITMASK;
>>> +
>>> +        host_dst = (host_src >> DST_BIT_POS) * DEST_OFFSET;
>>> +        remote_dst = (remote_src >> DST_BIT_POS) * DEST_OFFSET;
>>> +
>>> +        /* Validate that IPI IDs is within IPI Message buffer space. */
>>> +        if (host_dst >= buff_sz || remote_dst >= buff_sz) {
>>> +            dev_err(mdev,
>>> +                "Invalid IPI Message buffer values: %x %x\n",
>>> +                host_dst, remote_dst);
>>> +            return -EINVAL;
>>> +        }
>>> +
>>> +        tx_mchan->req_buf = devm_ioremap(mdev,
>>> +                         host_res.start | remote_dst,
>>> +                         IPI_BUF_SIZE);
>>> +        if (!tx_mchan->req_buf) {
>>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>>> +            return -ENOMEM;
>>> +        }
>>> +
>>> +        tx_mchan->resp_buf = devm_ioremap(mdev,
>>> +                          (remote_res.start | host_dst) +
>>> +                          RESP_OFFSET, IPI_BUF_SIZE);
>>> +        if (!tx_mchan->resp_buf) {
>>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>>> +            return -ENOMEM;
>>> +        }
>>> +
>>> +        rx_mchan->req_buf = devm_ioremap(mdev,
>>> +                         remote_res.start | host_dst,
>>> +                         IPI_BUF_SIZE);
>>> +        if (!rx_mchan->req_buf) {
>>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>>> +            return -ENOMEM;
>>> +        }
>>> +
>>> +        rx_mchan->resp_buf = devm_ioremap(mdev,
>>> +                          (host_res.start | remote_dst) +
>>> +                          RESP_OFFSET, IPI_BUF_SIZE);
>>> +        if (!rx_mchan->resp_buf) {
>>> +            dev_err(mdev, "Unable to map IPI buffer I/O memory\n");
>>> +            return -ENOMEM;
>>> +        }
>>> +
>>> +        tx_mchan->resp_buf_size = IPI_BUF_SIZE;
>>> +        tx_mchan->req_buf_size = IPI_BUF_SIZE;
>>> +        tx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
>>> +                        sizeof(struct zynqmp_ipi_message),
>>> +                        GFP_KERNEL);
>>> +        if (!tx_mchan->rx_buf)
>>> +            return -ENOMEM;
>>> +
>>> +        rx_mchan->resp_buf_size = IPI_BUF_SIZE;
>>> +        rx_mchan->req_buf_size = IPI_BUF_SIZE;
>>> +        rx_mchan->rx_buf = devm_kzalloc(mdev, IPI_BUF_SIZE +
>>> +                        sizeof(struct zynqmp_ipi_message),
>>> +                        GFP_KERNEL);
>>> +        if (!rx_mchan->rx_buf)
>>> +            return -ENOMEM;
>>> +    } else {
>>> +        /*
>>> +         * If here, then set up Bufferless IPI Channel because
>>> +         * one or both of the IPI's is bufferless.
>>> +         */
>>> +        tx_mchan->req_buf = NULL;
>>> +        tx_mchan->resp_buf = NULL;
>>> +        tx_mchan->rx_buf = NULL;
>>> +        tx_mchan->resp_buf_size = 0;
>>> +        tx_mchan->req_buf_size = 0;
>>> +
>>> +        rx_mchan->req_buf = NULL;
>>> +        rx_mchan->resp_buf = NULL;
>>> +        rx_mchan->rx_buf = NULL;
>>> +        rx_mchan->resp_buf_size = 0;
>>> +        rx_mchan->req_buf_size = 0;
>> Just curious if this is really needed. If nothing fills in those values, aren't they actually already 0/NULL, because that location is cleared by kzalloc?
>>
>> Thanks,
>> Michal
> Confirmed. I removed the whole else condition and it still worked.
>
> Will update in next rev after I receive any pending feedback from Jassi
>


end of thread, other threads:[~2024-01-03 14:40 UTC | newest]

Thread overview: 9+ messages
2023-12-14 21:13 [PATCH 0/3] mailbox: zynqmp: Enable Bufferless IPIs for Versal based SOCs Ben Levinsky
2023-12-14 21:13 ` [PATCH 1/3] mailbox: zynqmp: Move of_match structure closer to usage Ben Levinsky
2023-12-14 21:13 ` [PATCH 2/3] mailbox: zynqmp: Move buffered IPI setup to of_match selected routine Ben Levinsky
2023-12-20 13:23   ` Michal Simek
2023-12-20 17:44     ` Ben Levinsky
2023-12-14 21:13 ` [PATCH 3/3] mailbox: zynqmp: Enable Bufferless IPI usage on Versal-based SOC's Ben Levinsky
2023-12-20 13:29   ` Michal Simek
2023-12-20 17:42     ` Ben Levinsky
2024-01-03 14:40       ` Ben Levinsky
