* [PATCH 0/4] Introduction of rpmsg_rx_done
@ 2022-06-08  1:16 Chris Lew
  2022-06-08  1:16 ` [PATCH 1/4] rpmsg: core: Add rx done hooks Chris Lew
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Chris Lew @ 2022-06-08  1:16 UTC (permalink / raw)
  To: bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel, quic_clew

This series proposes an implementation for the rpmsg framework to do
deferred cleanup of buffers provided in the rx callback. The current
implementation assumes that the client is done with the buffer after
returning from the rx callback.

In some cases where the data size is large, the client may want to
avoid copying the data in the rx callback for later processing. This
series proposes two new facilities that let a client signal that it
wants to hold on to a buffer after the rx callback. They are:
 - New API rpmsg_rx_done() to tell the rpmsg framework the client is
   done with the buffer
 - New return codes for the rx callback to signal that the client will
   hold onto a buffer and later call rpmsg_rx_done()

This series implements the qcom_glink_native backend for these new
facilities.
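
As an illustration of the intended flow, here is a minimal sketch of a
client driver using these facilities; the structure, work item and field
names are hypothetical and only meant to show the RPMSG_DEFER /
rpmsg_rx_done() pairing:

#include <linux/rpmsg.h>
#include <linux/workqueue.h>

/* Hypothetical client that defers buffer release to a worker. */
struct demo_client {
	struct rpmsg_endpoint *ept;	/* saved from rpdev->ept at probe */
	struct work_struct work;
	void *held_buf;
	int held_len;
};

static void demo_work_fn(struct work_struct *work)
{
	struct demo_client *client = container_of(work, struct demo_client, work);

	/* ... process client->held_buf / client->held_len ... */

	/* Done with the buffer: hand it back to the framework/transport. */
	rpmsg_rx_done(client->ept, client->held_buf);
	client->held_buf = NULL;
}

static int demo_rx_cb(struct rpmsg_device *rpdev, void *data, int len,
		      void *priv, u32 addr)
{
	struct demo_client *client = priv;

	/* Transport cannot defer release: copy out of @data and finish here. */
	if (!rpdev->ept->rx_done)
		return RPMSG_HANDLED;

	client->held_buf = data;
	client->held_len = len;
	schedule_work(&client->work);

	/* Keep ownership of @data until rpmsg_rx_done() is called. */
	return RPMSG_DEFER;
}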
 
Chris Lew (4):
  rpmsg: core: Add rx done hooks
  rpmsg: char: Add support to use rpmsg_rx_done
  rpmsg: glink: Try to send rx done in irq
  rpmsg: glink: Add support for rpmsg_rx_done

 drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
 drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
 drivers/rpmsg/rpmsg_core.c        |  20 +++++++
 drivers/rpmsg/rpmsg_internal.h    |   1 +
 include/linux/rpmsg.h             |  24 ++++++++
 5 files changed, 183 insertions(+), 24 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/4] rpmsg: core: Add rx done hooks
  2022-06-08  1:16 [PATCH 0/4] Introduction of rpmsg_rx_done Chris Lew
@ 2022-06-08  1:16 ` Chris Lew
  2022-07-18  8:31   ` Arnaud POULIQUEN
  2022-07-27 17:16   ` Mathieu Poirier
  2022-06-08  1:16 ` [PATCH 2/4] rpmsg: char: Add support to use rpmsg_rx_done Chris Lew
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 13+ messages in thread
From: Chris Lew @ 2022-06-08  1:16 UTC (permalink / raw)
  To: bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel, quic_clew

In order to reduce the number of copies in the rpmsg framework, it is
necessary for clients to take brief ownership of the receive buffer.

Add the capability for clients to notify the rpmsg framework and the
underlying transports when they are going to hold onto a buffer, and to
notify again when they are done with the buffer.

In the .rx_cb of an rpmsg driver, if the client wishes to use the
received buffer at a later point, it should return RPMSG_DEFER.
Otherwise, returning RPMSG_HANDLED (0) signals the framework that the
client is done with the resources and that cleanup can continue.

Clients should check whether their rpmsg endpoint supports the rx_done
operation via the new state variable in rpmsg_endpoint, since not all
endpoints will be able to support this operation.

Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/rpmsg_core.c     | 20 ++++++++++++++++++++
 drivers/rpmsg/rpmsg_internal.h |  1 +
 include/linux/rpmsg.h          | 24 ++++++++++++++++++++++++
 3 files changed, 45 insertions(+)

diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
index 290c1f02da10..359be643060f 100644
--- a/drivers/rpmsg/rpmsg_core.c
+++ b/drivers/rpmsg/rpmsg_core.c
@@ -351,6 +351,26 @@ ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
 }
 EXPORT_SYMBOL(rpmsg_get_mtu);
 
+/**
+ * rpmsg_rx_done() - release resources related to @data from a @rx_cb
+ * @ept:	the rpmsg endpoint
+ * @data:	payload from a message
+ *
+ * Returns 0 on success and an appropriate error value on failure.
+ */
+int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
+{
+	if (WARN_ON(!ept))
+		return -EINVAL;
+	if (!ept->ops->rx_done)
+		return -ENXIO;
+	if (!ept->rx_done)
+		return -EINVAL;
+
+	return ept->ops->rx_done(ept, data);
+}
+EXPORT_SYMBOL(rpmsg_rx_done);
+
 /*
  * match a rpmsg channel with a channel info struct.
  * this is used to make sure we're not creating rpmsg devices for channels
diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
index a22cd4abe7d1..99cb86ce638e 100644
--- a/drivers/rpmsg/rpmsg_internal.h
+++ b/drivers/rpmsg/rpmsg_internal.h
@@ -76,6 +76,7 @@ struct rpmsg_endpoint_ops {
 	__poll_t (*poll)(struct rpmsg_endpoint *ept, struct file *filp,
 			     poll_table *wait);
 	ssize_t (*get_mtu)(struct rpmsg_endpoint *ept);
+	int (*rx_done)(struct rpmsg_endpoint *ept, void *data);
 };
 
 struct device *rpmsg_find_device(struct device *parent,
diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
index 523c98b96cb4..8e34222e8bca 100644
--- a/include/linux/rpmsg.h
+++ b/include/linux/rpmsg.h
@@ -63,6 +63,18 @@ struct rpmsg_device {
 	const struct rpmsg_device_ops *ops;
 };
 
+/**
+ * rpmsg rx callback return definitions
+ * @RPMSG_HANDLED: rpmsg user is done processing data, framework can free the
+ *                 resources related to the buffer
+ * @RPMSG_DEFER:   rpmsg user is not done processing data, framework will hold
+ *                 onto resources related to the buffer until rpmsg_rx_done is
+ *                 called. User should check their endpoint to see if rx_done
+ *                 is a supported operation.
+ */
+#define RPMSG_HANDLED	0
+#define RPMSG_DEFER	1
+
 typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
 
 /**
@@ -71,6 +83,7 @@ typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
  * @refcount: when this drops to zero, the ept is deallocated
  * @cb: rx callback handler
  * @cb_lock: must be taken before accessing/changing @cb
+ * @rx_done: if set, rpmsg endpoint supports rpmsg_rx_done
  * @addr: local rpmsg address
  * @priv: private data for the driver's use
  *
@@ -93,6 +106,7 @@ struct rpmsg_endpoint {
 	struct kref refcount;
 	rpmsg_rx_cb_t cb;
 	struct mutex cb_lock;
+	bool rx_done;
 	u32 addr;
 	void *priv;
 
@@ -192,6 +206,8 @@ __poll_t rpmsg_poll(struct rpmsg_endpoint *ept, struct file *filp,
 
 ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept);
 
+int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data);
+
 #else
 
 static inline int rpmsg_register_device_override(struct rpmsg_device *rpdev,
@@ -316,6 +332,14 @@ static inline ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
 	return -ENXIO;
 }
 
+static inline int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
+{
+	/* This shouldn't be possible */
+	WARN_ON(1);
+
+	return -ENXIO;
+}
+
 #endif /* IS_ENABLED(CONFIG_RPMSG) */
 
 /* use a macro to avoid include chaining to get THIS_MODULE */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/4] rpmsg: char: Add support to use rpmsg_rx_done
  2022-06-08  1:16 [PATCH 0/4] Introduction of rpmsg_rx_done Chris Lew
  2022-06-08  1:16 ` [PATCH 1/4] rpmsg: core: Add rx done hooks Chris Lew
@ 2022-06-08  1:16 ` Chris Lew
  2022-07-27 17:18   ` Mathieu Poirier
  2022-06-08  1:16 ` [PATCH 3/4] rpmsg: glink: Try to send rx done in irq Chris Lew
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Chris Lew @ 2022-06-08  1:16 UTC (permalink / raw)
  To: bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel, quic_clew

Add support to the rpmsg char driver to skip copying the data into an
skb if the endpoint supports rpmsg_rx_done. If the endpoint supports
the rx_done operation, allocate a zero-sized skb and point its data at
the buffer returned in the rx callback. When the packet is read from
the character device, release the memory by calling rpmsg_rx_done().

Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/rpmsg_char.c | 50 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)

diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
index b6183d4f62a2..be62ddcf356c 100644
--- a/drivers/rpmsg/rpmsg_char.c
+++ b/drivers/rpmsg/rpmsg_char.c
@@ -91,8 +91,8 @@ int rpmsg_chrdev_eptdev_destroy(struct device *dev, void *data)
 }
 EXPORT_SYMBOL(rpmsg_chrdev_eptdev_destroy);
 
-static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
-			void *priv, u32 addr)
+static int rpmsg_ept_copy_cb(struct rpmsg_device *rpdev, void *buf, int len,
+			     void *priv, u32 addr)
 {
 	struct rpmsg_eptdev *eptdev = priv;
 	struct sk_buff *skb;
@@ -113,6 +113,43 @@ static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
 	return 0;
 }
 
+static int rpmsg_ept_no_copy_cb(struct rpmsg_device *rpdev, void *buf, int len,
+				void *priv, u32 addr)
+{
+	struct rpmsg_eptdev *eptdev = priv;
+	struct sk_buff *skb;
+
+	skb = alloc_skb(0, GFP_ATOMIC);
+	if (!skb)
+		return -ENOMEM;
+
+	skb->head = buf;
+	skb->data = buf;
+	skb_reset_tail_pointer(skb);
+	skb_set_end_offset(skb, len);
+	skb_put(skb, len);
+
+	spin_lock(&eptdev->queue_lock);
+	skb_queue_tail(&eptdev->queue, skb);
+	spin_unlock(&eptdev->queue_lock);
+
+	/* wake up any blocking processes, waiting for new data */
+	wake_up_interruptible(&eptdev->readq);
+
+	return RPMSG_DEFER;
+}
+
+static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
+			void *priv, u32 addr)
+{
+	struct rpmsg_eptdev *eptdev = priv;
+	rpmsg_rx_cb_t cb;
+
+	cb = (eptdev->ept->rx_done) ? rpmsg_ept_no_copy_cb : rpmsg_ept_copy_cb;
+
+	return cb(rpdev, buf, len, priv, addr);
+}
+
 static int rpmsg_eptdev_open(struct inode *inode, struct file *filp)
 {
 	struct rpmsg_eptdev *eptdev = cdev_to_eptdev(inode->i_cdev);
@@ -210,6 +247,15 @@ static ssize_t rpmsg_eptdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	if (copy_to_iter(skb->data, use, to) != use)
 		use = -EFAULT;
 
+	if (eptdev->ept->rx_done) {
+		rpmsg_rx_done(eptdev->ept, skb->data);
+		/*
+		 * Data memory is freed by rpmsg_rx_done(), reset the skb data
+		 * pointers so kfree_skb() does not try to free a second time.
+		 */
+		skb->head = NULL;
+		skb->data = NULL;
+	}
 	kfree_skb(skb);
 
 	return use;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/4] rpmsg: glink: Try to send rx done in irq
  2022-06-08  1:16 [PATCH 0/4] Introduction of rpmsg_rx_done Chris Lew
  2022-06-08  1:16 ` [PATCH 1/4] rpmsg: core: Add rx done hooks Chris Lew
  2022-06-08  1:16 ` [PATCH 2/4] rpmsg: char: Add support to use rpmsg_rx_done Chris Lew
@ 2022-06-08  1:16 ` Chris Lew
  2022-06-08  1:16 ` [PATCH 4/4] rpmsg: glink: Add support for rpmsg_rx_done Chris Lew
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Chris Lew @ 2022-06-08  1:16 UTC (permalink / raw)
  To: bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel, quic_clew

Some remote processors and use cases, such as audio playback, are
sensitive to the response time of rx done. Try to send the rx done
command from irq context. If the trysend fails, defer the rx done work
as before.

Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/qcom_glink_native.c | 60 ++++++++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 20 deletions(-)

diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 07586514991f..799e602113a1 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -497,12 +497,11 @@ static void qcom_glink_send_close_ack(struct qcom_glink *glink,
 	qcom_glink_tx(glink, &req, sizeof(req), NULL, 0, true);
 }
 
-static void qcom_glink_rx_done_work(struct work_struct *work)
+static int qcom_glink_send_rx_done(struct qcom_glink *glink,
+				struct glink_channel *channel,
+				struct glink_core_rx_intent *intent,
+				bool wait)
 {
-	struct glink_channel *channel = container_of(work, struct glink_channel,
-						     intent_work);
-	struct qcom_glink *glink = channel->glink;
-	struct glink_core_rx_intent *intent, *tmp;
 	struct {
 		u16 id;
 		u16 lcid;
@@ -510,26 +509,41 @@ static void qcom_glink_rx_done_work(struct work_struct *work)
 	} __packed cmd;
 
 	unsigned int cid = channel->lcid;
-	unsigned int iid;
-	bool reuse;
+	unsigned int iid = intent->id;
+	bool reuse = intent->reuse;
+	int ret;
+
+	cmd.id = reuse ? RPM_CMD_RX_DONE_W_REUSE : RPM_CMD_RX_DONE;
+	cmd.lcid = cid;
+	cmd.liid = iid;
+
+	ret = qcom_glink_tx(glink, &cmd, sizeof(cmd), NULL, 0, wait);
+	if (ret)
+		return ret;
+
+	if (!reuse) {
+		kfree(intent->data);
+		kfree(intent);
+	}
+
+	return 0;
+}
+
+static void qcom_glink_rx_done_work(struct work_struct *work)
+{
+	struct glink_channel *channel = container_of(work, struct glink_channel,
+						     intent_work);
+	struct qcom_glink *glink = channel->glink;
+	struct glink_core_rx_intent *intent, *tmp;
 	unsigned long flags;
 
 	spin_lock_irqsave(&channel->intent_lock, flags);
 	list_for_each_entry_safe(intent, tmp, &channel->done_intents, node) {
 		list_del(&intent->node);
 		spin_unlock_irqrestore(&channel->intent_lock, flags);
-		iid = intent->id;
-		reuse = intent->reuse;
 
-		cmd.id = reuse ? RPM_CMD_RX_DONE_W_REUSE : RPM_CMD_RX_DONE;
-		cmd.lcid = cid;
-		cmd.liid = iid;
+		qcom_glink_send_rx_done(glink, channel, intent, true);
 
-		qcom_glink_tx(glink, &cmd, sizeof(cmd), NULL, 0, true);
-		if (!reuse) {
-			kfree(intent->data);
-			kfree(intent);
-		}
 		spin_lock_irqsave(&channel->intent_lock, flags);
 	}
 	spin_unlock_irqrestore(&channel->intent_lock, flags);
@@ -539,6 +553,8 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 			       struct glink_channel *channel,
 			       struct glink_core_rx_intent *intent)
 {
+	int ret = -EAGAIN;
+
 	/* We don't send RX_DONE to intentless systems */
 	if (glink->intentless) {
 		kfree(intent->data);
@@ -555,10 +571,14 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 
 	/* Schedule the sending of a rx_done indication */
 	spin_lock(&channel->intent_lock);
-	list_add_tail(&intent->node, &channel->done_intents);
-	spin_unlock(&channel->intent_lock);
+	if (list_empty(&channel->done_intents))
+		ret = qcom_glink_send_rx_done(glink, channel, intent, false);
 
-	schedule_work(&channel->intent_work);
+	if (ret) {
+		list_add_tail(&intent->node, &channel->done_intents);
+		schedule_work(&channel->intent_work);
+	}
+	spin_unlock(&channel->intent_lock);
 }
 
 /**
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/4] rpmsg: glink: Add support for rpmsg_rx_done
  2022-06-08  1:16 [PATCH 0/4] Introduction of rpmsg_rx_done Chris Lew
                   ` (2 preceding siblings ...)
  2022-06-08  1:16 ` [PATCH 3/4] rpmsg: glink: Try to send rx done in irq Chris Lew
@ 2022-06-08  1:16 ` Chris Lew
  2022-07-27 17:21   ` Mathieu Poirier
  2022-07-18  8:26 ` [PATCH 0/4] Introduction of rpmsg_rx_done Arnaud POULIQUEN
  2022-07-26 17:51 ` Mathieu Poirier
  5 siblings, 1 reply; 13+ messages in thread
From: Chris Lew @ 2022-06-08  1:16 UTC (permalink / raw)
  To: bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel, quic_clew

Add the implementation for the rpmsg_rx_done hooks. If a client
signals that it wants to hold onto a buffer by returning RPMSG_DEFER
from the rx_cb, glink will move that intent to a deferred cleanup list.
On the new rpmsg rx_done call, the glink transport will search this
deferred cleanup list for the matching buffer and release the intent.

Signed-off-by: Chris Lew <quic_clew@quicinc.com>
---
 drivers/rpmsg/qcom_glink_native.c | 54 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 51 insertions(+), 3 deletions(-)

diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 799e602113a1..db0dcc04f393 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -146,6 +146,7 @@ enum {
  * @riids:	idr of all remote intents
  * @intent_work: worker responsible for transmitting rx_done packets
  * @done_intents: list of intents that needs to be announced rx_done
+ * @defer_intents: list of intents held by the client released by rpmsg_rx_done
  * @buf:	receive buffer, for gathering fragments
  * @buf_offset:	write offset in @buf
  * @buf_size:	size of current @buf
@@ -174,6 +175,7 @@ struct glink_channel {
 	struct idr riids;
 	struct work_struct intent_work;
 	struct list_head done_intents;
+	struct list_head defer_intents;
 
 	struct glink_core_rx_intent *buf;
 	int buf_offset;
@@ -232,6 +234,7 @@ static struct glink_channel *qcom_glink_alloc_channel(struct qcom_glink *glink,
 	init_completion(&channel->intent_req_comp);
 
 	INIT_LIST_HEAD(&channel->done_intents);
+	INIT_LIST_HEAD(&channel->defer_intents);
 	INIT_WORK(&channel->intent_work, qcom_glink_rx_done_work);
 
 	idr_init(&channel->liids);
@@ -261,6 +264,12 @@ static void qcom_glink_channel_release(struct kref *ref)
 			kfree(intent);
 		}
 	}
+	list_for_each_entry_safe(intent, tmp, &channel->defer_intents, node) {
+		if (!intent->reuse) {
+			kfree(intent->data);
+			kfree(intent);
+		}
+	}
 
 	idr_for_each_entry(&channel->liids, tmp, iid) {
 		kfree(tmp->data);
@@ -549,9 +558,10 @@ static void qcom_glink_rx_done_work(struct work_struct *work)
 	spin_unlock_irqrestore(&channel->intent_lock, flags);
 }
 
-static void qcom_glink_rx_done(struct qcom_glink *glink,
+static void __qcom_glink_rx_done(struct qcom_glink *glink,
 			       struct glink_channel *channel,
-			       struct glink_core_rx_intent *intent)
+			       struct glink_core_rx_intent *intent,
+			       bool defer)
 {
 	int ret = -EAGAIN;
 
@@ -569,6 +579,14 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 		spin_unlock(&channel->intent_lock);
 	}
 
+	/* Move intent to defer list until client calls rpmsg_rx_done */
+	if (defer) {
+		spin_lock(&channel->intent_lock);
+		list_add_tail(&intent->node, &channel->defer_intents);
+		spin_unlock(&channel->intent_lock);
+		return;
+	}
+
 	/* Schedule the sending of a rx_done indication */
 	spin_lock(&channel->intent_lock);
 	if (list_empty(&channel->done_intents))
@@ -581,6 +599,28 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
 	spin_unlock(&channel->intent_lock);
 }
 
+static int qcom_glink_rx_done(struct rpmsg_endpoint *ept, void *data)
+{
+	struct glink_channel *channel = to_glink_channel(ept);
+	struct qcom_glink *glink = channel->glink;
+	struct glink_core_rx_intent *intent, *tmp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&channel->intent_lock, flags);
+	list_for_each_entry_safe(intent, tmp, &channel->defer_intents, node) {
+		if (intent->data == data) {
+			list_del(&intent->node);
+			spin_unlock_irqrestore(&channel->intent_lock, flags);
+
+			qcom_glink_send_rx_done(glink, channel, intent, true);
+			return 0;
+		}
+	}
+	spin_unlock_irqrestore(&channel->intent_lock, flags);
+
+	return -EINVAL;
+}
+
 /**
  * qcom_glink_receive_version() - receive version/features from remote system
  *
@@ -841,6 +881,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
 	} __packed hdr;
 	unsigned int chunk_size;
 	unsigned int left_size;
+	bool rx_done_defer;
 	unsigned int rcid;
 	unsigned int liid;
 	int ret = 0;
@@ -935,7 +976,12 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
 		intent->offset = 0;
 		channel->buf = NULL;
 
-		qcom_glink_rx_done(glink, channel, intent);
+		if (channel->ept.rx_done && ret == RPMSG_DEFER)
+			rx_done_defer = true;
+		else
+			rx_done_defer = false;
+
+		__qcom_glink_rx_done(glink, channel, intent, rx_done_defer);
 	}
 
 advance_rx:
@@ -1212,6 +1258,7 @@ static struct rpmsg_endpoint *qcom_glink_create_ept(struct rpmsg_device *rpdev,
 	ept->cb = cb;
 	ept->priv = priv;
 	ept->ops = &glink_endpoint_ops;
+	ept->rx_done = true;
 
 	return ept;
 }
@@ -1462,6 +1509,7 @@ static const struct rpmsg_endpoint_ops glink_endpoint_ops = {
 	.sendto = qcom_glink_sendto,
 	.trysend = qcom_glink_trysend,
 	.trysendto = qcom_glink_trysendto,
+	.rx_done = qcom_glink_rx_done,
 };
 
 static void qcom_glink_rpdev_release(struct device *dev)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/4] Introduction of rpmsg_rx_done
  2022-06-08  1:16 [PATCH 0/4] Introduction of rpmsg_rx_done Chris Lew
                   ` (3 preceding siblings ...)
  2022-06-08  1:16 ` [PATCH 4/4] rpmsg: glink: Add support for rpmsg_rx_done Chris Lew
@ 2022-07-18  8:26 ` Arnaud POULIQUEN
  2022-07-18 16:54   ` Mathieu Poirier
  2022-07-26 17:51 ` Mathieu Poirier
  5 siblings, 1 reply; 13+ messages in thread
From: Arnaud POULIQUEN @ 2022-07-18  8:26 UTC (permalink / raw)
  To: Chris Lew, bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel

Hello Chris,

On 6/8/22 03:16, Chris Lew wrote:
> This series proposes an implementation for the rpmsg framework to do
> deferred cleanup of buffers provided in the rx callback. The current
> implementation assumes that the client is done with the buffer after
> returning from the rx callback.
> 
> In some cases where the data size is large, the client may want to
> avoid copying the data in the rx callback for later processing. This
> series proposes two new facilities for signaling that they want to
> hold on to a buffer after the rx callback.
> They are:
>  - New API rpmsg_rx_done() to tell the rpmsg framework the client is
>    done with the buffer
>  - New return codes for the rx callback to signal that the client will
>    hold onto a buffer and later call rpmsg_rx_done()
> 
> This series implements the qcom_glink_native backend for these new
> facilities.

The API you proposed seems to me quite smart and adaptable to the rpmsg
virtio backend.

My main concern is about the release of the buffer when the endpoint
is destroyed.

Should the buffer release be handled by each service or by the core?

I wonder if the buffer list could be managed by the core by adding the
list to the rpmsg_endpoint structure. On destroy, the core could call
rx_done for each remaining buffer in the list...
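
For illustration, a rough sketch of what core-managed tracking could
look like; the rx_hold list, lock and helper names below are
hypothetical, not part of this series:

/* Hypothetical bookkeeping the core could keep per endpoint. */
struct rpmsg_rx_hold {
	struct list_head node;
	void *data;
};

/* Called by the rx path when the client returns RPMSG_DEFER. */
static int rpmsg_ept_hold_buffer(struct rpmsg_endpoint *ept, void *data)
{
	struct rpmsg_rx_hold *hold;

	hold = kzalloc(sizeof(*hold), GFP_ATOMIC);
	if (!hold)
		return -ENOMEM;

	hold->data = data;
	spin_lock(&ept->rx_hold_lock);			/* hypothetical field */
	list_add_tail(&hold->node, &ept->rx_hold_list);	/* hypothetical field */
	spin_unlock(&ept->rx_hold_lock);

	return 0;
}

/* On endpoint destroy, the core releases anything the client never returned. */
static void rpmsg_ept_release_held_buffers(struct rpmsg_endpoint *ept)
{
	struct rpmsg_rx_hold *hold, *tmp;

	list_for_each_entry_safe(hold, tmp, &ept->rx_hold_list, node) {
		list_del(&hold->node);
		ept->ops->rx_done(ept, hold->data);
		kfree(hold);
	}
}

rpmsg_rx_done() in the core would then also drop the matching entry from
this list before calling ept->ops->rx_done().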

I let Bjorn and Mathieu advise on this...

Thanks,
Arnaud

>  
> Chris Lew (4):
>   rpmsg: core: Add rx done hooks
>   rpmsg: char: Add support to use rpmsg_rx_done
>   rpmsg: glink: Try to send rx done in irq
>   rpmsg: glink: Add support for rpmsg_rx_done
> 
>  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
>  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
>  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
>  drivers/rpmsg/rpmsg_internal.h    |   1 +
>  include/linux/rpmsg.h             |  24 ++++++++
>  5 files changed, 183 insertions(+), 24 deletions(-)
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/4] rpmsg: core: Add rx done hooks
  2022-06-08  1:16 ` [PATCH 1/4] rpmsg: core: Add rx done hooks Chris Lew
@ 2022-07-18  8:31   ` Arnaud POULIQUEN
  2022-07-27 17:16   ` Mathieu Poirier
  1 sibling, 0 replies; 13+ messages in thread
From: Arnaud POULIQUEN @ 2022-07-18  8:31 UTC (permalink / raw)
  To: Chris Lew, bjorn.andersson, mathieu.poirier
  Cc: linux-remoteproc, linux-arm-msm, linux-kernel



On 6/8/22 03:16, Chris Lew wrote:
> In order to reduce the amount of copies in the rpmsg framework, it is
> necessary for clients to take brief ownership of the receive buffer.
> 
> Add the capability for clients to notify the rpmsg framework and the
> underlying transports when it is going to hold onto a buffer and also
> notify when the client is done with the buffer.
> 
> In the .rx_cb of the rpmsg drivers, if they wish to use the received
> buffer at a later point, they should return RPMSG_DEFER. Otherwise
> returning RPMSG_HANDLED (0) will signal the framework that the client
> is done with the resources and can continue with cleanup.
> 
> The clients should check if their rpmsg endpoint supports the rx_done
> operation with the new state variable in the rpmsg_endpoint since not
> all endpoints will have the ability to support this operation.
> 
> Signed-off-by: Chris Lew <quic_clew@quicinc.com>
> ---
>  drivers/rpmsg/rpmsg_core.c     | 20 ++++++++++++++++++++
>  drivers/rpmsg/rpmsg_internal.h |  1 +
>  include/linux/rpmsg.h          | 24 ++++++++++++++++++++++++
>  3 files changed, 45 insertions(+)
> 
> diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
> index 290c1f02da10..359be643060f 100644
> --- a/drivers/rpmsg/rpmsg_core.c
> +++ b/drivers/rpmsg/rpmsg_core.c
> @@ -351,6 +351,26 @@ ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
>  }
>  EXPORT_SYMBOL(rpmsg_get_mtu);
>  
> +/**
> + * rpmsg_rx_done() - release resources related to @data from a @rx_cb
> + * @ept:	the rpmsg endpoint
> + * @data:	payload from a message
> + *
> + * Returns 0 on success and an appropriate error value on failure.
> + */
> +int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
> +{
> +	if (WARN_ON(!ept))
> +		return -EINVAL;
> +	if (!ept->ops->rx_done)
> +		return -ENXIO;
> +	if (!ept->rx_done)
> +		return -EINVAL;
> +
> +	return ept->ops->rx_done(ept, data);
> +}
> +EXPORT_SYMBOL(rpmsg_rx_done);
> +
>  /*
>   * match a rpmsg channel with a channel info struct.
>   * this is used to make sure we're not creating rpmsg devices for channels
> diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
> index a22cd4abe7d1..99cb86ce638e 100644
> --- a/drivers/rpmsg/rpmsg_internal.h
> +++ b/drivers/rpmsg/rpmsg_internal.h
> @@ -76,6 +76,7 @@ struct rpmsg_endpoint_ops {
>  	__poll_t (*poll)(struct rpmsg_endpoint *ept, struct file *filp,
>  			     poll_table *wait);
>  	ssize_t (*get_mtu)(struct rpmsg_endpoint *ept);
> +	int (*rx_done)(struct rpmsg_endpoint *ept, void *data);
>  };
>  
>  struct device *rpmsg_find_device(struct device *parent,
> diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
> index 523c98b96cb4..8e34222e8bca 100644
> --- a/include/linux/rpmsg.h
> +++ b/include/linux/rpmsg.h
> @@ -63,6 +63,18 @@ struct rpmsg_device {
>  	const struct rpmsg_device_ops *ops;
>  };
>  
> +/**
> + * rpmsg rx callback return definitions
> + * @RPMSG_HANDLED: rpmsg user is done processing data, framework can free the
> + *                 resources related to the buffer
> + * @RPMSG_DEFER:   rpmsg user is not done processing data, framework will hold
> + *                 onto resources related to the buffer until rpmsg_rx_done is
> + *                 called. User should check their endpoint to see if rx_done
> + *                 is a supported operation.
> + */
> +#define RPMSG_HANDLED	0
> +#define RPMSG_DEFER	1

DEFER or HOLD?
In either case, it would be nice to update the upstreamed RPMSG service
devices to reflect this change, even if compatibility is preserved.

> +
>  typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
>  
>  /**
> @@ -71,6 +83,7 @@ typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
>   * @refcount: when this drops to zero, the ept is deallocated
>   * @cb: rx callback handler
>   * @cb_lock: must be taken before accessing/changing @cb
> + * @rx_done: if set, rpmsg endpoint supports rpmsg_rx_done

Same: done or hold?

Perhaps a bitmap here would be better for the future. I guess a
similar feature could be requested for TX...
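
Purely to illustrate the bitmap idea (flag names and the features field
are hypothetical):

/* Hypothetical per-endpoint feature bitmap instead of a single bool. */
#define RPMSG_EPT_FEAT_RX_DONE	BIT(0)
#define RPMSG_EPT_FEAT_TX_DONE	BIT(1)	/* room for a future TX equivalent */

static inline bool rpmsg_ept_supports_rx_done(struct rpmsg_endpoint *ept)
{
	/* 'features' would replace the proposed 'rx_done' bool. */
	return ept->features & RPMSG_EPT_FEAT_RX_DONE;
}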

Regards,
Arnaud


>   * @addr: local rpmsg address
>   * @priv: private data for the driver's use
>   *
> @@ -93,6 +106,7 @@ struct rpmsg_endpoint {
>  	struct kref refcount;
>  	rpmsg_rx_cb_t cb;
>  	struct mutex cb_lock;
> +	bool rx_done;
>  	u32 addr;
>  	void *priv;
>  
> @@ -192,6 +206,8 @@ __poll_t rpmsg_poll(struct rpmsg_endpoint *ept, struct file *filp,
>  
>  ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept);
>  
> +int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data);
> +
>  #else
>  
>  static inline int rpmsg_register_device_override(struct rpmsg_device *rpdev,
> @@ -316,6 +332,14 @@ static inline ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
>  	return -ENXIO;
>  }
>  
> +static inline int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
> +{
> +	/* This shouldn't be possible */
> +	WARN_ON(1);
> +
> +	return -ENXIO;
> +}
> +
>  #endif /* IS_ENABLED(CONFIG_RPMSG) */
>  
>  /* use a macro to avoid include chaining to get THIS_MODULE */

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/4] Introduction of rpmsg_rx_done
  2022-07-18  8:26 ` [PATCH 0/4] Introduction of rpmsg_rx_done Arnaud POULIQUEN
@ 2022-07-18 16:54   ` Mathieu Poirier
  2022-07-27 17:25     ` Mathieu Poirier
  0 siblings, 1 reply; 13+ messages in thread
From: Mathieu Poirier @ 2022-07-18 16:54 UTC (permalink / raw)
  To: Arnaud POULIQUEN
  Cc: Chris Lew, bjorn.andersson, linux-remoteproc, linux-arm-msm,
	linux-kernel

On Mon, 18 Jul 2022 at 02:26, Arnaud POULIQUEN
<arnaud.pouliquen@foss.st.com> wrote:
>
> Hello Chris,
>
> On 6/8/22 03:16, Chris Lew wrote:
> > This series proposes an implementation for the rpmsg framework to do
> > deferred cleanup of buffers provided in the rx callback. The current
> > implementation assumes that the client is done with the buffer after
> > returning from the rx callback.
> >
> > In some cases where the data size is large, the client may want to
> > avoid copying the data in the rx callback for later processing. This
> > series proposes two new facilities for signaling that they want to
> > hold on to a buffer after the rx callback.
> > They are:
> >  - New API rpmsg_rx_done() to tell the rpmsg framework the client is
> >    done with the buffer
> >  - New return codes for the rx callback to signal that the client will
> >    hold onto a buffer and later call rpmsg_rx_done()
> >
> > This series implements the qcom_glink_native backend for these new
> > facilities.
>
> The API you proposed seems to me quite smart and adaptable to the rpmsg
> virtio backend.
>
> My main concern is about the release of the buffer when the endpoint
> is destroyed.
>
> Does the buffer release should be handled by each services or by the
> core?
>
> I wonder if the buffer list could be managed by the core part by adding
> the list in the rpmsg_endpoint structure. On destroy the core could call
> the rx_done for each remaining buffers in list...
>
> I let Bjorn and Mathieu advise on this...

Thanks for taking a look, Arnaud.  I'll get to this shortly.

>
> Thanks,
> Arnaud
>
> >
> > Chris Lew (4):
> >   rpmsg: core: Add rx done hooks
> >   rpmsg: char: Add support to use rpmsg_rx_done
> >   rpmsg: glink: Try to send rx done in irq
> >   rpmsg: glink: Add support for rpmsg_rx_done
> >
> >  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
> >  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
> >  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
> >  drivers/rpmsg/rpmsg_internal.h    |   1 +
> >  include/linux/rpmsg.h             |  24 ++++++++
> >  5 files changed, 183 insertions(+), 24 deletions(-)
> >

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/4] Introduction of rpmsg_rx_done
  2022-06-08  1:16 [PATCH 0/4] Introduction of rpmsg_rx_done Chris Lew
                   ` (4 preceding siblings ...)
  2022-07-18  8:26 ` [PATCH 0/4] Introduction of rpmsg_rx_done Arnaud POULIQUEN
@ 2022-07-26 17:51 ` Mathieu Poirier
  5 siblings, 0 replies; 13+ messages in thread
From: Mathieu Poirier @ 2022-07-26 17:51 UTC (permalink / raw)
  To: Chris Lew; +Cc: bjorn.andersson, linux-remoteproc, linux-arm-msm, linux-kernel

On Tue, Jun 07, 2022 at 06:16:41PM -0700, Chris Lew wrote:
> This series proposes an implementation for the rpmsg framework to do
> deferred cleanup of buffers provided in the rx callback. The current
> implementation assumes that the client is done with the buffer after
> returning from the rx callback.
> 
> In some cases where the data size is large, the client may want to
> avoid copying the data in the rx callback for later processing. This
> series proposes two new facilities for signaling that they want to
> hold on to a buffer after the rx callback.
> They are:
>  - New API rpmsg_rx_done() to tell the rpmsg framework the client is
>    done with the buffer
>  - New return codes for the rx callback to signal that the client will
>    hold onto a buffer and later call rpmsg_rx_done()
> 
> This series implements the qcom_glink_native backend for these new
> facilities.
>  
> Chris Lew (4):
>   rpmsg: core: Add rx done hooks
>   rpmsg: char: Add support to use rpmsg_rx_done
>   rpmsg: glink: Try to send rx done in irq
>   rpmsg: glink: Add support for rpmsg_rx_done
> 
>  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
>  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
>  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
>  drivers/rpmsg/rpmsg_internal.h    |   1 +
>  include/linux/rpmsg.h             |  24 ++++++++
>  5 files changed, 183 insertions(+), 24 deletions(-)

I have started reviewing this set.  Comments to come later today or tomorrow.

Thanks,
Mathieu

> 
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 1/4] rpmsg: core: Add rx done hooks
  2022-06-08  1:16 ` [PATCH 1/4] rpmsg: core: Add rx done hooks Chris Lew
  2022-07-18  8:31   ` Arnaud POULIQUEN
@ 2022-07-27 17:16   ` Mathieu Poirier
  1 sibling, 0 replies; 13+ messages in thread
From: Mathieu Poirier @ 2022-07-27 17:16 UTC (permalink / raw)
  To: Chris Lew; +Cc: bjorn.andersson, linux-remoteproc, linux-arm-msm, linux-kernel

On Tue, Jun 07, 2022 at 06:16:42PM -0700, Chris Lew wrote:
> In order to reduce the amount of copies in the rpmsg framework, it is
> necessary for clients to take brief ownership of the receive buffer.
> 
> Add the capability for clients to notify the rpmsg framework and the
> underlying transports when it is going to hold onto a buffer and also
> notify when the client is done with the buffer.
> 
> In the .rx_cb of the rpmsg drivers, if they wish to use the received
> buffer at a later point, they should return RPMSG_DEFER. Otherwise
> returning RPMSG_HANDLED (0) will signal the framework that the client
> is done with the resources and can continue with cleanup.
> 
> The clients should check if their rpmsg endpoint supports the rx_done
> operation with the new state variable in the rpmsg_endpoint since not
> all endpoints will have the ability to support this operation.
> 
> Signed-off-by: Chris Lew <quic_clew@quicinc.com>
> ---
>  drivers/rpmsg/rpmsg_core.c     | 20 ++++++++++++++++++++
>  drivers/rpmsg/rpmsg_internal.h |  1 +
>  include/linux/rpmsg.h          | 24 ++++++++++++++++++++++++
>  3 files changed, 45 insertions(+)
> 
> diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
> index 290c1f02da10..359be643060f 100644
> --- a/drivers/rpmsg/rpmsg_core.c
> +++ b/drivers/rpmsg/rpmsg_core.c
> @@ -351,6 +351,26 @@ ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
>  }
>  EXPORT_SYMBOL(rpmsg_get_mtu);
>  
> +/**
> + * rpmsg_rx_done() - release resources related to @data from a @rx_cb
> + * @ept:	the rpmsg endpoint
> + * @data:	payload from a message
> + *
> + * Returns 0 on success and an appropriate error value on failure.
> + */
> +int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
> +{
> +	if (WARN_ON(!ept))
> +		return -EINVAL;
> +	if (!ept->ops->rx_done)
> +		return -ENXIO;
> +	if (!ept->rx_done)
> +		return -EINVAL;
> +
> +	return ept->ops->rx_done(ept, data);
> +}
> +EXPORT_SYMBOL(rpmsg_rx_done);
> +
>  /*
>   * match a rpmsg channel with a channel info struct.
>   * this is used to make sure we're not creating rpmsg devices for channels
> diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
> index a22cd4abe7d1..99cb86ce638e 100644
> --- a/drivers/rpmsg/rpmsg_internal.h
> +++ b/drivers/rpmsg/rpmsg_internal.h
> @@ -76,6 +76,7 @@ struct rpmsg_endpoint_ops {
>  	__poll_t (*poll)(struct rpmsg_endpoint *ept, struct file *filp,
>  			     poll_table *wait);
>  	ssize_t (*get_mtu)(struct rpmsg_endpoint *ept);
> +	int (*rx_done)(struct rpmsg_endpoint *ept, void *data);
>  };
>  
>  struct device *rpmsg_find_device(struct device *parent,
> diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
> index 523c98b96cb4..8e34222e8bca 100644
> --- a/include/linux/rpmsg.h
> +++ b/include/linux/rpmsg.h
> @@ -63,6 +63,18 @@ struct rpmsg_device {
>  	const struct rpmsg_device_ops *ops;
>  };
>  
> +/**
> + * rpmsg rx callback return definitions
> + * @RPMSG_HANDLED: rpmsg user is done processing data, framework can free the
> + *                 resources related to the buffer
> + * @RPMSG_DEFER:   rpmsg user is not done processing data, framework will hold
> + *                 onto resources related to the buffer until rpmsg_rx_done is
> + *                 called. User should check their endpoint to see if rx_done
> + *                 is a supported operation.
> + */
> +#define RPMSG_HANDLED	0
> +#define RPMSG_DEFER	1
> +
>  typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
>  
>  /**
> @@ -71,6 +83,7 @@ typedef int (*rpmsg_rx_cb_t)(struct rpmsg_device *, void *, int, void *, u32);
>   * @refcount: when this drops to zero, the ept is deallocated
>   * @cb: rx callback handler
>   * @cb_lock: must be taken before accessing/changing @cb
> + * @rx_done: if set, rpmsg endpoint supports rpmsg_rx_done
>   * @addr: local rpmsg address
>   * @priv: private data for the driver's use
>   *
> @@ -93,6 +106,7 @@ struct rpmsg_endpoint {
>  	struct kref refcount;
>  	rpmsg_rx_cb_t cb;
>  	struct mutex cb_lock;
> +	bool rx_done;

Do you see a scenario where rpmsg_endpoint_ops::rx_done holds a valid pointer
but rpmsg_endpoint::rx_done is set to false?  If not, please remove.

>  	u32 addr;
>  	void *priv;
>  
> @@ -192,6 +206,8 @@ __poll_t rpmsg_poll(struct rpmsg_endpoint *ept, struct file *filp,
>  
>  ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept);
>  
> +int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data);
> +
>  #else
>  
>  static inline int rpmsg_register_device_override(struct rpmsg_device *rpdev,
> @@ -316,6 +332,14 @@ static inline ssize_t rpmsg_get_mtu(struct rpmsg_endpoint *ept)
>  	return -ENXIO;
>  }
>  
> +static inline int rpmsg_rx_done(struct rpmsg_endpoint *ept, void *data)
> +{
> +	/* This shouldn't be possible */
> +	WARN_ON(1);
> +
> +	return -ENXIO;
> +}
> +
>  #endif /* IS_ENABLED(CONFIG_RPMSG) */
>  
>  /* use a macro to avoid include chaining to get THIS_MODULE */
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] rpmsg: char: Add support to use rpmsg_rx_done
  2022-06-08  1:16 ` [PATCH 2/4] rpmsg: char: Add support to use rpmsg_rx_done Chris Lew
@ 2022-07-27 17:18   ` Mathieu Poirier
  0 siblings, 0 replies; 13+ messages in thread
From: Mathieu Poirier @ 2022-07-27 17:18 UTC (permalink / raw)
  To: Chris Lew; +Cc: bjorn.andersson, linux-remoteproc, linux-arm-msm, linux-kernel

On Tue, Jun 07, 2022 at 06:16:43PM -0700, Chris Lew wrote:
> Add support into the rpmsg char driver to skip copying the data into an
> skb if the endpoint supports rpmsg_rx_done. If the endpoint supports
> the rx_done operation, allocate a zero sized skb and set the data to
> the buffer returned in the rx callback. When the packet is read from
> the character device, release the memory by calling rpmsg_rx_done().
> 
> Signed-off-by: Chris Lew <quic_clew@quicinc.com>
> ---
>  drivers/rpmsg/rpmsg_char.c | 50 ++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 48 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/rpmsg/rpmsg_char.c b/drivers/rpmsg/rpmsg_char.c
> index b6183d4f62a2..be62ddcf356c 100644
> --- a/drivers/rpmsg/rpmsg_char.c
> +++ b/drivers/rpmsg/rpmsg_char.c
> @@ -91,8 +91,8 @@ int rpmsg_chrdev_eptdev_destroy(struct device *dev, void *data)
>  }
>  EXPORT_SYMBOL(rpmsg_chrdev_eptdev_destroy);
>  
> -static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
> -			void *priv, u32 addr)
> +static int rpmsg_ept_copy_cb(struct rpmsg_device *rpdev, void *buf, int len,
> +			     void *priv, u32 addr)
>  {
>  	struct rpmsg_eptdev *eptdev = priv;
>  	struct sk_buff *skb;
> @@ -113,6 +113,43 @@ static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
>  	return 0;
>  }
>  
> +static int rpmsg_ept_no_copy_cb(struct rpmsg_device *rpdev, void *buf, int len,
> +				void *priv, u32 addr)
> +{
> +	struct rpmsg_eptdev *eptdev = priv;
> +	struct sk_buff *skb;
> +
> +	skb = alloc_skb(0, GFP_ATOMIC);
> +	if (!skb)
> +		return -ENOMEM;
> +
> +	skb->head = buf;
> +	skb->data = buf;
> +	skb_reset_tail_pointer(skb);
> +	skb_set_end_offset(skb, len);
> +	skb_put(skb, len);
> +

I was worried about all that open ended code but looking at the sk_buff API I
don't think it is possible to do otherwise.  As such:

Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>


> +	spin_lock(&eptdev->queue_lock);
> +	skb_queue_tail(&eptdev->queue, skb);
> +	spin_unlock(&eptdev->queue_lock);
> +
> +	/* wake up any blocking processes, waiting for new data */
> +	wake_up_interruptible(&eptdev->readq);
> +
> +	return RPMSG_DEFER;
> +}
> +
> +static int rpmsg_ept_cb(struct rpmsg_device *rpdev, void *buf, int len,
> +			void *priv, u32 addr)
> +{
> +	struct rpmsg_eptdev *eptdev = priv;
> +	rpmsg_rx_cb_t cb;
> +
> +	cb = (eptdev->ept->rx_done) ? rpmsg_ept_no_copy_cb : rpmsg_ept_copy_cb;
> +
> +	return cb(rpdev, buf, len, priv, addr);
> +}
> +
>  static int rpmsg_eptdev_open(struct inode *inode, struct file *filp)
>  {
>  	struct rpmsg_eptdev *eptdev = cdev_to_eptdev(inode->i_cdev);
> @@ -210,6 +247,15 @@ static ssize_t rpmsg_eptdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
>  	if (copy_to_iter(skb->data, use, to) != use)
>  		use = -EFAULT;
>  
> +	if (eptdev->ept->rx_done) {
> +		rpmsg_rx_done(eptdev->ept, skb->data);
> +		/*
> +		 * Data memory is freed by rpmsg_rx_done(), reset the skb data
> +		 * pointers so kfree_skb() does not try to free a second time.
> +		 */
> +		skb->head = NULL;
> +		skb->data = NULL;
> +	}
>  	kfree_skb(skb);
>  
>  	return use;
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] rpmsg: glink: Add support for rpmsg_rx_done
  2022-06-08  1:16 ` [PATCH 4/4] rpmsg: glink: Add support for rpmsg_rx_done Chris Lew
@ 2022-07-27 17:21   ` Mathieu Poirier
  0 siblings, 0 replies; 13+ messages in thread
From: Mathieu Poirier @ 2022-07-27 17:21 UTC (permalink / raw)
  To: Chris Lew; +Cc: bjorn.andersson, linux-remoteproc, linux-arm-msm, linux-kernel

On Tue, Jun 07, 2022 at 06:16:45PM -0700, Chris Lew wrote:
> Add the implementation for the hooks of rpmsg_rx_done. If a client
> signals they want to hold onto a buffer with RPMSG_DEFER in the rx_cb,
> glink will move that intent to a deferred cleanup list. On the new
> rpmsg rx_done call, the glink transport will search this deferred
> cleanup list for the matching buffer and release the intent.
> 
> Signed-off-by: Chris Lew <quic_clew@quicinc.com>
> ---
>  drivers/rpmsg/qcom_glink_native.c | 54 ++++++++++++++++++++++++++++++++++++---
>  1 file changed, 51 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
> index 799e602113a1..db0dcc04f393 100644
> --- a/drivers/rpmsg/qcom_glink_native.c
> +++ b/drivers/rpmsg/qcom_glink_native.c
> @@ -146,6 +146,7 @@ enum {
>   * @riids:	idr of all remote intents
>   * @intent_work: worker responsible for transmitting rx_done packets
>   * @done_intents: list of intents that needs to be announced rx_done
> + * @defer_intents: list of intents held by the client released by rpmsg_rx_done
>   * @buf:	receive buffer, for gathering fragments
>   * @buf_offset:	write offset in @buf
>   * @buf_size:	size of current @buf
> @@ -174,6 +175,7 @@ struct glink_channel {
>  	struct idr riids;
>  	struct work_struct intent_work;
>  	struct list_head done_intents;
> +	struct list_head defer_intents;
>  
>  	struct glink_core_rx_intent *buf;
>  	int buf_offset;
> @@ -232,6 +234,7 @@ static struct glink_channel *qcom_glink_alloc_channel(struct qcom_glink *glink,
>  	init_completion(&channel->intent_req_comp);
>  
>  	INIT_LIST_HEAD(&channel->done_intents);
> +	INIT_LIST_HEAD(&channel->defer_intents);
>  	INIT_WORK(&channel->intent_work, qcom_glink_rx_done_work);
>  
>  	idr_init(&channel->liids);
> @@ -261,6 +264,12 @@ static void qcom_glink_channel_release(struct kref *ref)
>  			kfree(intent);
>  		}
>  	}
> +	list_for_each_entry_safe(intent, tmp, &channel->defer_intents, node) {
> +		if (!intent->reuse) {
> +			kfree(intent->data);
> +			kfree(intent);
> +		}
> +	}
>  
>  	idr_for_each_entry(&channel->liids, tmp, iid) {
>  		kfree(tmp->data);
> @@ -549,9 +558,10 @@ static void qcom_glink_rx_done_work(struct work_struct *work)
>  	spin_unlock_irqrestore(&channel->intent_lock, flags);
>  }
>  
> -static void qcom_glink_rx_done(struct qcom_glink *glink,
> +static void __qcom_glink_rx_done(struct qcom_glink *glink,
>  			       struct glink_channel *channel,
> -			       struct glink_core_rx_intent *intent)
> +			       struct glink_core_rx_intent *intent,
> +			       bool defer)
>  {
>  	int ret = -EAGAIN;
>  
> @@ -569,6 +579,14 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
>  		spin_unlock(&channel->intent_lock);
>  	}
>  
> +	/* Move intent to defer list until client calls rpmsg_rx_done */
> +	if (defer) {
> +		spin_lock(&channel->intent_lock);
> +		list_add_tail(&intent->node, &channel->defer_intents);
> +		spin_unlock(&channel->intent_lock);
> +		return;
> +	}
> +
>  	/* Schedule the sending of a rx_done indication */
>  	spin_lock(&channel->intent_lock);
>  	if (list_empty(&channel->done_intents))
> @@ -581,6 +599,28 @@ static void qcom_glink_rx_done(struct qcom_glink *glink,
>  	spin_unlock(&channel->intent_lock);
>  }
>  
> +static int qcom_glink_rx_done(struct rpmsg_endpoint *ept, void *data)
> +{
> +	struct glink_channel *channel = to_glink_channel(ept);
> +	struct qcom_glink *glink = channel->glink;
> +	struct glink_core_rx_intent *intent, *tmp;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&channel->intent_lock, flags);
> +	list_for_each_entry_safe(intent, tmp, &channel->defer_intents, node) {
> +		if (intent->data == data) {
> +			list_del(&intent->node);
> +			spin_unlock_irqrestore(&channel->intent_lock, flags);
> +
> +			qcom_glink_send_rx_done(glink, channel, intent, true);
> +			return 0;
> +		}
> +	}
> +	spin_unlock_irqrestore(&channel->intent_lock, flags);
> +
> +	return -EINVAL;
> +}
> +
>  /**
>   * qcom_glink_receive_version() - receive version/features from remote system
>   *
> @@ -841,6 +881,7 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
>  	} __packed hdr;
>  	unsigned int chunk_size;
>  	unsigned int left_size;
> +	bool rx_done_defer;
>  	unsigned int rcid;
>  	unsigned int liid;
>  	int ret = 0;
> @@ -935,7 +976,12 @@ static int qcom_glink_rx_data(struct qcom_glink *glink, size_t avail)
>  		intent->offset = 0;
>  		channel->buf = NULL;
>  
> -		qcom_glink_rx_done(glink, channel, intent);
> +		if (channel->ept.rx_done && ret == RPMSG_DEFER)

I don't see where @ret could be set to RPMSG_DEFER in this function...

Thanks,
Mathieu


> +			rx_done_defer = true;
> +		else
> +			rx_done_defer = false;
> +
> +		__qcom_glink_rx_done(glink, channel, intent, rx_done_defer);
>  	}
>  
>  advance_rx:
> @@ -1212,6 +1258,7 @@ static struct rpmsg_endpoint *qcom_glink_create_ept(struct rpmsg_device *rpdev,
>  	ept->cb = cb;
>  	ept->priv = priv;
>  	ept->ops = &glink_endpoint_ops;
> +	ept->rx_done = true;
>  
>  	return ept;
>  }
> @@ -1462,6 +1509,7 @@ static const struct rpmsg_endpoint_ops glink_endpoint_ops = {
>  	.sendto = qcom_glink_sendto,
>  	.trysend = qcom_glink_trysend,
>  	.trysendto = qcom_glink_trysendto,
> +	.rx_done = qcom_glink_rx_done,
>  };
>  
>  static void qcom_glink_rpdev_release(struct device *dev)
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/4] Introduction of rpmsg_rx_done
  2022-07-18 16:54   ` Mathieu Poirier
@ 2022-07-27 17:25     ` Mathieu Poirier
  0 siblings, 0 replies; 13+ messages in thread
From: Mathieu Poirier @ 2022-07-27 17:25 UTC (permalink / raw)
  To: Arnaud POULIQUEN
  Cc: Chris Lew, bjorn.andersson, linux-remoteproc, linux-arm-msm,
	linux-kernel

On Mon, Jul 18, 2022 at 10:54:30AM -0600, Mathieu Poirier wrote:
> On Mon, 18 Jul 2022 at 02:26, Arnaud POULIQUEN
> <arnaud.pouliquen@foss.st.com> wrote:
> >
> > Hello Chris,
> >
> > On 6/8/22 03:16, Chris Lew wrote:
> > > This series proposes an implementation for the rpmsg framework to do
> > > deferred cleanup of buffers provided in the rx callback. The current
> > > implementation assumes that the client is done with the buffer after
> > > returning from the rx callback.
> > >
> > > In some cases where the data size is large, the client may want to
> > > avoid copying the data in the rx callback for later processing. This
> > > series proposes two new facilities for signaling that they want to
> > > hold on to a buffer after the rx callback.
> > > They are:
> > >  - New API rpmsg_rx_done() to tell the rpmsg framework the client is
> > >    done with the buffer
> > >  - New return codes for the rx callback to signal that the client will
> > >    hold onto a buffer and later call rpmsg_rx_done()
> > >
> > > This series implements the qcom_glink_native backend for these new
> > > facilities.
> >
> > The API you proposed seems to me quite smart and adaptable to the rpmsg
> > virtio backend.
> >
> > My main concern is about the release of the buffer when the endpoint
> > is destroyed.
> >
> > Does the buffer release should be handled by each services or by the
> > core?
> >
> > I wonder if the buffer list could be managed by the core part by adding
> > the list in the rpmsg_endpoint structure. On destroy the core could call
> > the rx_done for each remaining buffers in list...

Arnaud has a valid point, though rpmsg_endpoint_ops::destroy_ept() is there for
this kind of cleanup (and this patchset is making use of it).

I think we can leave things as they are now and consider moving to the core if
we see a trend in future submissions.

Thanks,
Mathieu

> >
> > I let Bjorn and Mathieu advise on this...
> 
> Thanks for taking a look Arnaud.  I'll get to this sortly.
> 
> >
> > Thanks,
> > Arnaud
> >
> > >
> > > Chris Lew (4):
> > >   rpmsg: core: Add rx done hooks
> > >   rpmsg: char: Add support to use rpmsg_rx_done
> > >   rpmsg: glink: Try to send rx done in irq
> > >   rpmsg: glink: Add support for rpmsg_rx_done
> > >
> > >  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
> > >  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
> > >  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
> > >  drivers/rpmsg/rpmsg_internal.h    |   1 +
> > >  include/linux/rpmsg.h             |  24 ++++++++
> > >  5 files changed, 183 insertions(+), 24 deletions(-)
> > >

^ permalink raw reply	[flat|nested] 13+ messages in thread
