From: Loic Pallardy <loic.pallardy@st.com>
To: <bjorn.andersson@linaro.org>, <ohad@wizery.com>, <lee.jones@linaro.org>
Cc: <loic.pallardy@st.com>, <linux-remoteproc@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <kernel@stlinux.com>,
	<patrice.chotard@st.com>, <hugues.fruchet@st.com>
Subject: [PATCH v4 4/5] rpmsg: virtio_rpmsg: get buffer configuration from virtio
Date: Tue, 28 Mar 2017 13:49:46 +0200	[thread overview]
Message-ID: <1490701787-15205-5-git-send-email-loic.pallardy@st.com> (raw)
In-Reply-To: <1490701787-15205-1-git-send-email-loic.pallardy@st.com>

Some coprocessors have memory mapping constraints that require a
predefined buffer location or a buffer size different from the default
definition. These resources are described in the associated firmware
resource table, and remoteproc exposes that table to virtio drivers
through the virtio config get interface.

This patch modifies the rpmsg_probe() sequence to (a short illustrative
sketch follows the list):
- retrieve the rpmsg buffer configuration, if any
- verify and apply the configuration
- allocate buffers according to the requested configuration
Signed-off-by: Loic Pallardy <loic.pallardy@st.com>
---
Changes since v1:
- Move rpmsg buffer physical address initialization to patch 5 "rpmsg:
virtio_rpmsg: don't allocate buffer if provided by low level driver"
- Remove extra lines

No change since v2.

---
 drivers/rpmsg/virtio_rpmsg_bus.c | 48 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 46 insertions(+), 2 deletions(-)
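
Illustrative sketch (not part of the patch) of the sizing check done by
the new virtio_rpmsg_get_config() in the diff below: the driver
allocates num_bufs buffers of buf_size bytes from a single coherent
region, so a firmware-requested buf_size scales the whole allocation.
The helper and the 128 KiB carveout length are made up for the example;
512 buffers of 512 bytes matches the MAX_RPMSG_NUM_BUFS /
MAX_RPMSG_BUF_SIZE defaults of the driver at the time.

#include <linux/types.h>

/* Example only: the arithmetic behind the total_buf_space check. */
static bool example_rpmsg_cfg_fits(unsigned int num_bufs,
				   unsigned int buf_size,
				   size_t carveout_len)
{
	size_t total_buf_space = (size_t)num_bufs * buf_size;

	return total_buf_space <= carveout_len;
}

/*
 * With the defaults, 512 * 512 bytes = 256 KiB does not fit a 128 KiB
 * carveout and probe would fail with -ENOMEM; a firmware buf_size of
 * 256 bytes (512 * 256 = 128 KiB) passes the check.
 */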

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 1c7cde9..69285c1 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -32,6 +32,7 @@
 #include <linux/sched.h>
 #include <linux/wait.h>
 #include <linux/rpmsg.h>
+#include <linux/rpmsg/virtio_rpmsg.h>
 #include <linux/mutex.h>
 #include <linux/of_device.h>
 
@@ -870,6 +871,44 @@ static int rpmsg_ns_cb(struct rpmsg_device *rpdev, void *data, int len,
 	return 0;
 }
 
+static int virtio_rpmsg_get_config(struct virtio_device *vdev)
+{
+	struct virtio_rpmsg_cfg virtio_cfg;
+	struct virtproc_info *vrp = vdev->priv;
+	size_t total_buf_space;
+	int ret = 0;
+
+	memset(&virtio_cfg, 0, sizeof(virtio_cfg));
+	vdev->config->get(vdev, RPMSG_CONFIG_OFFSET, &virtio_cfg,
+			  sizeof(virtio_cfg));
+
+	if (virtio_cfg.id == VIRTIO_ID_RPMSG && virtio_cfg.version == 1 &&
+	    virtio_cfg.reserved == 0) {
+		if (virtio_cfg.buf_size <= MAX_RPMSG_BUF_SIZE) {
+			vrp->buf_size = virtio_cfg.buf_size;
+		} else {
+			WARN_ON(1);
+			dev_warn(&vdev->dev, "Requested RPMsg buffer size too big: %d\n",
+				 virtio_cfg.buf_size);
+			ret = -EINVAL;
+			goto out;
+		}
+
+		/* Check rpmsg buffer length */
+		total_buf_space = vrp->num_bufs * vrp->buf_size;
+		if ((virtio_cfg.len != -1) &&
+		    (total_buf_space > virtio_cfg.len)) {
+			dev_warn(&vdev->dev, "Not enough memory for buffers: %zu\n",
+				 total_buf_space);
+			ret = -ENOMEM;
+			goto out;
+		}
+		return !ret;
+	}
+out:
+	return ret;
+}
+
 static int rpmsg_probe(struct virtio_device *vdev)
 {
 	vq_callback_t *vq_cbs[] = { rpmsg_recv_done, rpmsg_xmit_done };
@@ -900,6 +939,8 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	vrp->rvq = vqs[0];
 	vrp->svq = vqs[1];
 
+	vdev->priv = vrp;
+
 	/* we expect symmetric tx/rx vrings */
 	WARN_ON(virtqueue_get_vring_size(vrp->rvq) !=
 		virtqueue_get_vring_size(vrp->svq));
@@ -912,6 +953,11 @@ static int rpmsg_probe(struct virtio_device *vdev)
 
 	vrp->buf_size = MAX_RPMSG_BUF_SIZE;
 
+	/* Try to get rpmsg configuration if any */
+	err = virtio_rpmsg_get_config(vdev);
+	if (err < 0)
+		goto free_vrp;
+
 	total_buf_space = vrp->num_bufs * vrp->buf_size;
 
 	/* allocate coherent memory for the buffers */
@@ -947,8 +993,6 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	/* suppress "tx-complete" interrupts */
 	virtqueue_disable_cb(vrp->svq);
 
-	vdev->priv = vrp;
-
 	/* if supported by the remote processor, enable the name service */
 	if (virtio_has_feature(vdev, VIRTIO_RPMSG_F_NS)) {
 		/* a dedicated endpoint handles the name service msgs */
-- 
1.9.1


Thread overview: 9+ messages
2017-03-28 11:49 [PATCH v4 0/5] virtio_rpmsg: make rpmsg channel configurable Loic Pallardy
2017-03-28 11:49 ` [PATCH v4 1/5] rpmsg: virtio_rpmsg: set rpmsg_buf_size customizable Loic Pallardy
2017-05-26 16:24   ` Suman Anna
2017-03-28 11:49 ` [PATCH v4 2/5] rpmsg: virtio_rpmsg_bus: fix sg_set_buf() when addr is not a valid kernel address Loic Pallardy
2017-08-24 22:03   ` Suman Anna
2017-03-28 11:49 ` [PATCH v4 3/5] include: virtio_rpmsg: add virtio rpmsg configuration structure Loic Pallardy
2017-03-28 11:49 ` Loic Pallardy [this message]
2017-03-28 11:49 ` [PATCH v4 5/5] rpmsg: virtio_rpmsg: set buffer configuration to virtio Loic Pallardy
2017-05-03 15:34 ` [PATCH v4 0/5] virtio_rpmsg: make rpmsg channel configurable Loic Pallardy
