* [PATCH v5 0/4] Add a vhost RPMsg API
@ 2020-08-26 17:46 ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-08-26 17:46 UTC (permalink / raw)
  To: kvm
  Cc: linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Mathieu Poirier,
	Vincent Whitchurch

Hi,

Next update:

v5:
- don't hard-code message layout

v4:
- add endianness conversions to comply with the VirtIO standard

v3:
- address several checkpatch warnings
- address comments from Mathieu Poirier

v2:
- update patch #5 with a correct vhost_dev_init() prototype
- drop patch #6 - it depends on a different patch, which is currently
  an RFC
- address comments from Pierre-Louis Bossart:
  * remove "default n" from Kconfig

Linux supports RPMsg over VirtIO for "remote processor" / AMP use
cases. It can however also be used for virtualisation scenarios,
e.g. when using KVM to run Linux on both the host and the guests.
This patch set adds a wrapper API to facilitate writing vhost
drivers for such RPMsg-based solutions. The first use case is an
audio DSP virtualisation project, currently under development and
being prepared for review and submission, available at
https://github.com/thesofproject/linux/pull/1501/commits

Thanks
Guennadi

Guennadi Liakhovetski (4):
  vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  rpmsg: move common structures and defines to headers
  rpmsg: update documentation
  vhost: add an RPMsg API

 Documentation/rpmsg.txt          |   6 +-
 drivers/rpmsg/virtio_rpmsg_bus.c |  78 +------
 drivers/vhost/Kconfig            |   7 +
 drivers/vhost/Makefile           |   3 +
 drivers/vhost/rpmsg.c            | 373 +++++++++++++++++++++++++++++++
 drivers/vhost/vhost_rpmsg.h      |  74 ++++++
 include/linux/virtio_rpmsg.h     |  83 +++++++
 include/uapi/linux/rpmsg.h       |   3 +
 include/uapi/linux/vhost.h       |   4 +-
 9 files changed, 551 insertions(+), 80 deletions(-)
 create mode 100644 drivers/vhost/rpmsg.c
 create mode 100644 drivers/vhost/vhost_rpmsg.h
 create mode 100644 include/linux/virtio_rpmsg.h

-- 
2.28.0



* [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  2020-08-26 17:46 ` Guennadi Liakhovetski
@ 2020-08-26 17:46   ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-08-26 17:46 UTC (permalink / raw)
  To: kvm
  Cc: linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Mathieu Poirier,
	Vincent Whitchurch

VHOST_VSOCK_SET_RUNNING is used by the vhost vsock driver to perform
crucial VirtQueue initialisation, like assigning .private fields and
calling vhost_vq_init_access(), and to clean up afterwards. However,
this ioctl is useful for any vhost driver that doesn't have a side
channel to inform it of a status change, e.g. upon a guest reboot.
This patch makes the ioctl generic, preserving its numeric value and
keeping the original name as an alias.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
---
 include/uapi/linux/vhost.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index 75232185324a..11a4948b6216 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -97,6 +97,8 @@
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
 
+#define VHOST_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
+
 /* VHOST_NET specific defines */
 
 /* Attach virtio net ring to a raw socket, or tap device.
@@ -118,7 +120,7 @@
 /* VHOST_VSOCK specific defines */
 
 #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
-#define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
+#define VHOST_VSOCK_SET_RUNNING		VHOST_SET_RUNNING
 
 /* VHOST_VDPA specific defines */
 
-- 
2.28.0



* [PATCH v5 2/4] rpmsg: move common structures and defines to headers
  2020-08-26 17:46 ` Guennadi Liakhovetski
@ 2020-08-26 17:46   ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-08-26 17:46 UTC (permalink / raw)
  To: kvm
  Cc: linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Mathieu Poirier,
	Vincent Whitchurch

virtio_rpmsg_bus.c keeps RPMsg protocol structure declarations and
common defines, like the ones needed for name-space announcements,
internal to itself. Move them to common headers instead.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
---
 drivers/rpmsg/virtio_rpmsg_bus.c | 78 +-----------------------------
 include/linux/virtio_rpmsg.h     | 83 ++++++++++++++++++++++++++++++++
 include/uapi/linux/rpmsg.h       |  3 ++
 3 files changed, 88 insertions(+), 76 deletions(-)
 create mode 100644 include/linux/virtio_rpmsg.h

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index 9006fc7f73d0..9d5dd3f0a648 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -26,7 +26,9 @@
 #include <linux/virtio_byteorder.h>
 #include <linux/virtio_ids.h>
 #include <linux/virtio_config.h>
+#include <linux/virtio_rpmsg.h>
 #include <linux/wait.h>
+#include <uapi/linux/rpmsg.h>
 
 #include "rpmsg_internal.h"
 
@@ -70,58 +72,6 @@ struct virtproc_info {
 	struct rpmsg_endpoint *ns_ept;
 };
 
-/* The feature bitmap for virtio rpmsg */
-#define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
-
-/**
- * struct rpmsg_hdr - common header for all rpmsg messages
- * @src: source address
- * @dst: destination address
- * @reserved: reserved for future use
- * @len: length of payload (in bytes)
- * @flags: message flags
- * @data: @len bytes of message payload data
- *
- * Every message sent(/received) on the rpmsg bus begins with this header.
- */
-struct rpmsg_hdr {
-	__virtio32 src;
-	__virtio32 dst;
-	__virtio32 reserved;
-	__virtio16 len;
-	__virtio16 flags;
-	u8 data[];
-} __packed;
-
-/**
- * struct rpmsg_ns_msg - dynamic name service announcement message
- * @name: name of remote service that is published
- * @addr: address of remote service that is published
- * @flags: indicates whether service is created or destroyed
- *
- * This message is sent across to publish a new service, or announce
- * about its removal. When we receive these messages, an appropriate
- * rpmsg channel (i.e device) is created/destroyed. In turn, the ->probe()
- * or ->remove() handler of the appropriate rpmsg driver will be invoked
- * (if/as-soon-as one is registered).
- */
-struct rpmsg_ns_msg {
-	char name[RPMSG_NAME_SIZE];
-	__virtio32 addr;
-	__virtio32 flags;
-} __packed;
-
-/**
- * enum rpmsg_ns_flags - dynamic name service announcement flags
- *
- * @RPMSG_NS_CREATE: a new remote service was just created
- * @RPMSG_NS_DESTROY: a known remote service was just destroyed
- */
-enum rpmsg_ns_flags {
-	RPMSG_NS_CREATE		= 0,
-	RPMSG_NS_DESTROY	= 1,
-};
-
 /**
  * @vrp: the remote processor this channel belongs to
  */
@@ -134,27 +84,6 @@ struct virtio_rpmsg_channel {
 #define to_virtio_rpmsg_channel(_rpdev) \
 	container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
 
-/*
- * We're allocating buffers of 512 bytes each for communications. The
- * number of buffers will be computed from the number of buffers supported
- * by the vring, upto a maximum of 512 buffers (256 in each direction).
- *
- * Each buffer will have 16 bytes for the msg header and 496 bytes for
- * the payload.
- *
- * This will utilize a maximum total space of 256KB for the buffers.
- *
- * We might also want to add support for user-provided buffers in time.
- * This will allow bigger buffer size flexibility, and can also be used
- * to achieve zero-copy messaging.
- *
- * Note that these numbers are purely a decision of this driver - we
- * can change this without changing anything in the firmware of the remote
- * processor.
- */
-#define MAX_RPMSG_NUM_BUFS	(512)
-#define MAX_RPMSG_BUF_SIZE	(512)
-
 /*
  * Local addresses are dynamically allocated on-demand.
  * We do not dynamically assign addresses from the low 1024 range,
@@ -162,9 +91,6 @@ struct virtio_rpmsg_channel {
  */
 #define RPMSG_RESERVED_ADDRESSES	(1024)
 
-/* Address 53 is reserved for advertising remote services */
-#define RPMSG_NS_ADDR			(53)
-
 static void virtio_rpmsg_destroy_ept(struct rpmsg_endpoint *ept);
 static int virtio_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);
 static int virtio_rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len,
diff --git a/include/linux/virtio_rpmsg.h b/include/linux/virtio_rpmsg.h
new file mode 100644
index 000000000000..fcb523831e73
--- /dev/null
+++ b/include/linux/virtio_rpmsg.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_VIRTIO_RPMSG_H
+#define _LINUX_VIRTIO_RPMSG_H
+
+#include <linux/mod_devicetable.h>
+#include <linux/types.h>
+#include <linux/virtio_types.h>
+
+/**
+ * struct rpmsg_hdr - common header for all rpmsg messages
+ * @src: source address
+ * @dst: destination address
+ * @reserved: reserved for future use
+ * @len: length of payload (in bytes)
+ * @flags: message flags
+ * @data: @len bytes of message payload data
+ *
+ * Every message sent(/received) on the rpmsg bus begins with this header.
+ */
+struct rpmsg_hdr {
+	__virtio32 src;
+	__virtio32 dst;
+	__virtio32 reserved;
+	__virtio16 len;
+	__virtio16 flags;
+	u8 data[];
+} __packed;
+
+/**
+ * struct rpmsg_ns_msg - dynamic name service announcement message
+ * @name: name of remote service that is published
+ * @addr: address of remote service that is published
+ * @flags: indicates whether service is created or destroyed
+ *
+ * This message is sent across to publish a new service, or announce
+ * about its removal. When we receive these messages, an appropriate
+ * rpmsg channel (i.e device) is created/destroyed. In turn, the ->probe()
+ * or ->remove() handler of the appropriate rpmsg driver will be invoked
+ * (if/as-soon-as one is registered).
+ */
+struct rpmsg_ns_msg {
+	char name[RPMSG_NAME_SIZE];
+	__virtio32 addr;
+	__virtio32 flags;
+} __packed;
+
+/**
+ * enum rpmsg_ns_flags - dynamic name service announcement flags
+ *
+ * @RPMSG_NS_CREATE: a new remote service was just created
+ * @RPMSG_NS_DESTROY: a known remote service was just destroyed
+ */
+enum rpmsg_ns_flags {
+	RPMSG_NS_CREATE		= 0,
+	RPMSG_NS_DESTROY	= 1,
+};
+
+/*
+ * We're allocating buffers of 512 bytes each for communications. The
+ * number of buffers will be computed from the number of buffers supported
+ * by the vring, up to a maximum of 512 buffers (256 in each direction).
+ *
+ * Each buffer will have 16 bytes for the msg header and 496 bytes for
+ * the payload.
+ *
+ * This will utilize a maximum total space of 256KB for the buffers.
+ *
+ * We might also want to add support for user-provided buffers in time.
+ * This will allow bigger buffer size flexibility, and can also be used
+ * to achieve zero-copy messaging.
+ *
+ * Note that these numbers are purely a decision of this driver - we
+ * can change this without changing anything in the firmware of the remote
+ * processor.
+ */
+#define MAX_RPMSG_NUM_BUFS	512
+#define MAX_RPMSG_BUF_SIZE	512
+
+/* Address 53 is reserved for advertising remote services */
+#define RPMSG_NS_ADDR		53
+
+#endif
diff --git a/include/uapi/linux/rpmsg.h b/include/uapi/linux/rpmsg.h
index e14c6dab4223..d669c04ef289 100644
--- a/include/uapi/linux/rpmsg.h
+++ b/include/uapi/linux/rpmsg.h
@@ -24,4 +24,7 @@ struct rpmsg_endpoint_info {
 #define RPMSG_CREATE_EPT_IOCTL	_IOW(0xb5, 0x1, struct rpmsg_endpoint_info)
 #define RPMSG_DESTROY_EPT_IOCTL	_IO(0xb5, 0x2)
 
+/* The feature bitmap for virtio rpmsg */
+#define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
+
 #endif
-- 
2.28.0



* [PATCH v5 3/4] rpmsg: update documentation
  2020-08-26 17:46 ` Guennadi Liakhovetski
@ 2020-08-26 17:46   ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-08-26 17:46 UTC (permalink / raw)
  To: kvm
  Cc: linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Mathieu Poirier,
	Vincent Whitchurch

rpmsg_create_ept() takes struct rpmsg_channel_info chinfo as its last
argument, not a u32 value. The first two arguments are also updated.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
---
 Documentation/rpmsg.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/rpmsg.txt b/Documentation/rpmsg.txt
index 24b7a9e1a5f9..1ce353cb232a 100644
--- a/Documentation/rpmsg.txt
+++ b/Documentation/rpmsg.txt
@@ -192,9 +192,9 @@ Returns 0 on success and an appropriate error value on failure.
 
 ::
 
-  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_channel *rpdev,
-		void (*cb)(struct rpmsg_channel *, void *, int, void *, u32),
-		void *priv, u32 addr);
+  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev,
+					  rpmsg_rx_cb_t cb, void *priv,
+					  struct rpmsg_channel_info chinfo);
 
 every rpmsg address in the system is bound to an rx callback (so when
 inbound messages arrive, they are dispatched by the rpmsg bus using the
-- 
2.28.0



* [PATCH v5 4/4] vhost: add an RPMsg API
  2020-08-26 17:46 ` Guennadi Liakhovetski
@ 2020-08-26 17:46   ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-08-26 17:46 UTC (permalink / raw)
  To: kvm
  Cc: linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Mathieu Poirier,
	Vincent Whitchurch

Linux supports running the RPMsg protocol over the VirtIO transport
protocol, but currently there is only support for VirtIO clients and
no support for a VirtIO server. This patch adds a vhost-based RPMsg
server implementation.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
---
 drivers/vhost/Kconfig       |   7 +
 drivers/vhost/Makefile      |   3 +
 drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
 drivers/vhost/vhost_rpmsg.h |  74 +++++++
 4 files changed, 457 insertions(+)
 create mode 100644 drivers/vhost/rpmsg.c
 create mode 100644 drivers/vhost/vhost_rpmsg.h

diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 587fbae06182..046b948fc411 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -38,6 +38,13 @@ config VHOST_NET
 	  To compile this driver as a module, choose M here: the module will
 	  be called vhost_net.
 
+config VHOST_RPMSG
+	tristate
+	select VHOST
+	help
+	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
+	  drivers, using the RPMsg over VirtIO protocol.
+
 config VHOST_SCSI
 	tristate "VHOST_SCSI TCM fabric driver"
 	depends on TARGET_CORE && EVENTFD
diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
index f3e1897cce85..9cf459d59f97 100644
--- a/drivers/vhost/Makefile
+++ b/drivers/vhost/Makefile
@@ -2,6 +2,9 @@
 obj-$(CONFIG_VHOST_NET) += vhost_net.o
 vhost_net-y := net.o
 
+obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
+vhost_rpmsg-y := rpmsg.o
+
 obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
 vhost_scsi-y := scsi.o
 
diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
new file mode 100644
index 000000000000..c26d7a4afc6d
--- /dev/null
+++ b/drivers/vhost/rpmsg.c
@@ -0,0 +1,373 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright(c) 2020 Intel Corporation. All rights reserved.
+ *
+ * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
+ *
+ * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
+ * guest-side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c.
+ * These functions handle the creation of the two virtual queues, endpoint
+ * address handling, and sending name-space announcements to the guest as
+ * well as any user messages. This API can be used by any vhost driver to
+ * handle RPMsg-specific processing.
+ * Specific vhost drivers using this API will use their own VirtIO device
+ * IDs, which should then also be added to the ID table in
+ * virtio_rpmsg_bus.c.
+
+#include <linux/compat.h>
+#include <linux/file.h>
+#include <linux/miscdevice.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/vhost.h>
+#include <linux/virtio_rpmsg.h>
+#include <uapi/linux/rpmsg.h>
+
+#include "vhost.h"
+#include "vhost_rpmsg.h"
+
+/*
+ * All virtio-rpmsg virtual queue kicks always come with just one buffer -
+ * either input or output, but we can also handle split messages
+ */
+static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
+{
+	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
+	unsigned int out, in;
+	int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
+				     NULL, NULL);
+	if (head < 0) {
+		vq_err(vq, "%s(): error %d getting buffer\n",
+		       __func__, head);
+		return head;
+	}
+
+	/* Nothing new? */
+	if (head == vq->num)
+		return head;
+
+	if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
+		if (out) {
+			vq_err(vq, "%s(): invalid %d output in response queue\n",
+			       __func__, out);
+			goto return_buf;
+		}
+
+		*cnt = in;
+	}
+
+	if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
+		if (in) {
+			vq_err(vq, "%s(): invalid %d input in request queue\n",
+			       __func__, in);
+			goto return_buf;
+		}
+
+		*cnt = out;
+	}
+
+	return head;
+
+return_buf:
+	vhost_add_used(vq, head, 0);
+
+	return -EINVAL;
+}
+
+static const struct vhost_rpmsg_ept *vhost_rpmsg_ept_find(struct vhost_rpmsg *vr, int addr)
+{
+	unsigned int i;
+
+	for (i = 0; i < vr->n_epts; i++)
+		if (vr->ept[i].addr == addr)
+			return vr->ept + i;
+
+	return NULL;
+}
+
+/*
+ * If len < 0, the complete virtual queue buffer size is prepared for reading
+ * a request; for sending a response, the length from the iterator is used
+ */
+int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
+			   unsigned int qid, ssize_t len)
+	__acquires(vq->mutex)
+{
+	struct vhost_virtqueue *vq = vr->vq + qid;
+	unsigned int cnt;
+	ssize_t ret;
+	size_t tmp;
+
+	if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
+		return -EINVAL;
+
+	iter->vq = vq;
+
+	mutex_lock(&vq->mutex);
+	vhost_disable_notify(&vr->dev, vq);
+
+	iter->head = vhost_rpmsg_get_msg(vq, &cnt);
+	if (iter->head == vq->num)
+		iter->head = -EAGAIN;
+
+	if (iter->head < 0) {
+		ret = iter->head;
+		goto unlock;
+	}
+
+	tmp = iov_length(vq->iov, cnt);
+	if (tmp < sizeof(iter->rhdr)) {
+		vq_err(vq, "%s(): size %zu too small\n", __func__, tmp);
+		ret = -ENOBUFS;
+		goto return_buf;
+	}
+
+	switch (qid) {
+	case VIRTIO_RPMSG_REQUEST:
+		if (len >= 0) {
+			if (tmp < sizeof(iter->rhdr) + len) {
+				ret = -ENOBUFS;
+				goto return_buf;
+			}
+
+			tmp = len + sizeof(iter->rhdr);
+		}
+
+		/* len is now the size of the payload */
+		iov_iter_init(&iter->iov_iter, WRITE, vq->iov, cnt, tmp);
+
+		/* Read the RPMSG header with endpoint addresses */
+		tmp = copy_from_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
+		if (tmp != sizeof(iter->rhdr)) {
+			vq_err(vq, "%s(): got %zu instead of %zu\n", __func__,
+			       tmp, sizeof(iter->rhdr));
+			ret = -EIO;
+			goto return_buf;
+		}
+
+		iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.dst));
+		if (!iter->ept) {
+			vq_err(vq, "%s(): no endpoint with address %d\n",
+			       __func__, vhost32_to_cpu(vq, iter->rhdr.dst));
+			ret = -ENOENT;
+			goto return_buf;
+		}
+
+		/* Let the endpoint read the payload */
+		if (iter->ept->read) {
+			ret = iter->ept->read(vr, iter);
+			if (ret < 0)
+				goto return_buf;
+
+			iter->rhdr.len = cpu_to_vhost16(vq, ret);
+		} else {
+			iter->rhdr.len = 0;
+		}
+
+		/* Prepare for the response phase */
+		iter->rhdr.dst = iter->rhdr.src;
+		iter->rhdr.src = cpu_to_vhost32(vq, iter->ept->addr);
+
+		break;
+	case VIRTIO_RPMSG_RESPONSE:
+		if (!iter->ept && iter->rhdr.dst != cpu_to_vhost32(vq, RPMSG_NS_ADDR)) {
+			/*
+			 * Usually the iterator is configured when processing a
+			 * message on the request queue, but it's also possible
+			 * to send a message on the response queue without a
+			 * preceding request, in that case the iterator must
+			 * contain source and destination addresses.
+			 */
+			iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.src));
+			if (!iter->ept) {
+				ret = -ENOENT;
+				goto return_buf;
+			}
+		}
+
+		if (len >= 0) {
+			if (tmp < sizeof(iter->rhdr) + len) {
+				ret = -ENOBUFS;
+				goto return_buf;
+			}
+
+			iter->rhdr.len = cpu_to_vhost16(vq, len);
+			tmp = len + sizeof(iter->rhdr);
+		}
+
+		/* len is now the size of the payload */
+		iov_iter_init(&iter->iov_iter, READ, vq->iov, cnt, tmp);
+
+		/* Write the RPMSG header with endpoint addresses */
+		tmp = copy_to_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
+		if (tmp != sizeof(iter->rhdr)) {
+			ret = -EIO;
+			goto return_buf;
+		}
+
+		/* Let the endpoint write the payload */
+		if (iter->ept && iter->ept->write) {
+			ret = iter->ept->write(vr, iter);
+			if (ret < 0)
+				goto return_buf;
+		}
+
+		break;
+	}
+
+	return 0;
+
+return_buf:
+	vhost_add_used(vq, iter->head, 0);
+unlock:
+	vhost_enable_notify(&vr->dev, vq);
+	mutex_unlock(&vq->mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_start_lock);
+
+size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
+			void *data, size_t size)
+{
+	/*
+	 * We could check for excess data, but copy_{to,from}_iter() don't do
+	 * that either
+	 */
+	if (iter->vq == vr->vq + VIRTIO_RPMSG_RESPONSE)
+		return copy_to_iter(data, size, &iter->iov_iter);
+
+	return copy_from_iter(data, size, &iter->iov_iter);
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_copy);
+
+int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
+			      struct vhost_rpmsg_iter *iter)
+	__releases(vq->mutex)
+{
+	if (iter->head >= 0)
+		vhost_add_used_and_signal(iter->vq->dev, iter->vq, iter->head,
+					  vhost16_to_cpu(iter->vq, iter->rhdr.len) +
+					  sizeof(iter->rhdr));
+
+	vhost_enable_notify(&vr->dev, iter->vq);
+	mutex_unlock(&iter->vq->mutex);
+
+	return iter->head;
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_finish_unlock);
+
+/*
+ * Return false to terminate the external loop only if we fail to obtain either
+ * a request or a response buffer
+ */
+static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
+				    struct vhost_virtqueue *vq)
+{
+	struct vhost_rpmsg_iter iter;
+	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
+	if (!ret)
+		ret = vhost_rpmsg_finish_unlock(vr, &iter);
+	if (ret < 0) {
+		if (ret != -EAGAIN)
+			vq_err(vq, "%s(): RPMSG processing failed %d\n",
+			       __func__, ret);
+		return false;
+	}
+
+	if (!iter.ept->write)
+		return true;
+
+	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
+	if (!ret)
+		ret = vhost_rpmsg_finish_unlock(vr, &iter);
+	if (ret < 0) {
+		vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
+		return false;
+	}
+
+	return true;
+}
+
+static void handle_rpmsg_req_kick(struct vhost_work *work)
+{
+	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
+						  poll.work);
+	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
+
+	while (handle_rpmsg_req_single(vr, vq))
+		;
+}
+
+/*
+ * Initialise the two virtqueues and attach an array of endpoints with their
+ * request and response callbacks
+ */
+void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
+		      unsigned int n_epts)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(vr->vq); i++)
+		vr->vq_p[i] = &vr->vq[i];
+
+	/* vq[0]: host -> guest, vq[1]: host <- guest */
+	vr->vq[VIRTIO_RPMSG_REQUEST].handle_kick = handle_rpmsg_req_kick;
+	vr->vq[VIRTIO_RPMSG_RESPONSE].handle_kick = NULL;
+
+	vr->ept = ept;
+	vr->n_epts = n_epts;
+
+	vhost_dev_init(&vr->dev, vr->vq_p, VIRTIO_RPMSG_NUM_OF_VQS,
+		       UIO_MAXIOV, 0, 0, true, NULL);
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_init);
+
+void vhost_rpmsg_destroy(struct vhost_rpmsg *vr)
+{
+	if (vhost_dev_has_owner(&vr->dev))
+		vhost_poll_flush(&vr->vq[VIRTIO_RPMSG_REQUEST].poll);
+
+	vhost_dev_cleanup(&vr->dev);
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_destroy);
+
+/* Send a name-space announcement to the guest */
+int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name, unsigned int src)
+{
+	struct vhost_virtqueue *vq = &vr->vq[VIRTIO_RPMSG_RESPONSE];
+	struct vhost_rpmsg_iter iter = {
+		.rhdr = {
+			.src = 0,
+			.dst = cpu_to_vhost32(vq, RPMSG_NS_ADDR),
+			.flags = cpu_to_vhost16(vq, RPMSG_NS_CREATE), /* rpmsg_recv_single() */
+		},
+	};
+	struct rpmsg_ns_msg ns = {
+		.addr = cpu_to_vhost32(vq, src),
+		.flags = cpu_to_vhost32(vq, RPMSG_NS_CREATE), /* for rpmsg_ns_cb() */
+	};
+	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, sizeof(ns));
+
+	if (ret < 0)
+		return ret;
+
+	strlcpy(ns.name, name, sizeof(ns.name));
+
+	ret = vhost_rpmsg_copy(vr, &iter, &ns, sizeof(ns));
+	if (ret != sizeof(ns))
+		vq_err(iter.vq, "%s(): added %d instead of %zu bytes\n",
+		       __func__, ret, sizeof(ns));
+
+	ret = vhost_rpmsg_finish_unlock(vr, &iter);
+	if (ret < 0)
+		vq_err(iter.vq, "%s(): namespace announcement failed: %d\n",
+		       __func__, ret);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vhost_rpmsg_ns_announce);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Intel, Inc.");
+MODULE_DESCRIPTION("Vhost RPMsg API");
diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
new file mode 100644
index 000000000000..30072cecb8a0
--- /dev/null
+++ b/drivers/vhost/vhost_rpmsg.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright(c) 2020 Intel Corporation. All rights reserved.
+ *
+ * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
+ */
+
+#ifndef VHOST_RPMSG_H
+#define VHOST_RPMSG_H
+
+#include <linux/uio.h>
+#include <linux/virtio_rpmsg.h>
+
+#include "vhost.h"
+
+/* RPMsg uses two VirtQueues: one for each direction */
+enum {
+	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
+	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
+	/* Keep last */
+	VIRTIO_RPMSG_NUM_OF_VQS,
+};
+
+struct vhost_rpmsg_ept;
+
+struct vhost_rpmsg_iter {
+	struct iov_iter iov_iter;
+	struct rpmsg_hdr rhdr;
+	struct vhost_virtqueue *vq;
+	const struct vhost_rpmsg_ept *ept;
+	int head;
+	void *priv;
+};
+
+struct vhost_rpmsg {
+	struct vhost_dev dev;
+	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
+	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
+	const struct vhost_rpmsg_ept *ept;
+	unsigned int n_epts;
+};
+
+struct vhost_rpmsg_ept {
+	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
+	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
+	int addr;
+};
+
+static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
+{
+	/* rhdr.len is in VirtIO endianness: convert before returning */
+	return vhost16_to_cpu(iter->vq, iter->rhdr.len);
+}
+
+#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
+	.rhdr = {						\
+			.src = cpu_to_vhost32(_vq, _src),	\
+			.dst = cpu_to_vhost32(_vq, _dst),	\
+		},						\
+	}
+
+void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
+		      unsigned int n_epts);
+void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
+int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
+			    unsigned int src);
+int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
+			   struct vhost_rpmsg_iter *iter,
+			   unsigned int qid, ssize_t len);
+size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
+			void *data, size_t size);
+int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
+			      struct vhost_rpmsg_iter *iter);
+
+#endif
-- 
2.28.0
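To illustrate the calling convention of the API above, here is a minimal sketch of a hypothetical vhost driver built on it. The names my_ept_read(), my_send(), my_driver_setup(), MY_EPT_ADDR, MY_GUEST_ADDR and the "my-service" channel name are illustrative only and not part of the patch; the sketch assumes the declarations from vhost_rpmsg.h exactly as posted:

```c
/*
 * Hypothetical example, not part of the patch: a minimal vhost driver
 * built on the vhost RPMsg API declared in drivers/vhost/vhost_rpmsg.h.
 */
#include <linux/kernel.h>

#include "vhost_rpmsg.h"

#define MY_EPT_ADDR	0x400	/* example endpoint address */
#define MY_GUEST_ADDR	0x401	/* example destination on the guest side */

/* Invoked from the request queue: consume the payload sent by the guest */
static ssize_t my_ept_read(struct vhost_rpmsg *vr,
			   struct vhost_rpmsg_iter *iter)
{
	u8 buf[64];
	size_t n = min_t(size_t, vhost_rpmsg_iter_len(iter), sizeof(buf));

	if (vhost_rpmsg_copy(vr, iter, buf, n) != n)
		return -EIO;

	/* ...process buf... */
	return n;	/* payload length carried into the response header */
}

static const struct vhost_rpmsg_ept my_ept = {
	.addr = MY_EPT_ADDR,
	.read = my_ept_read,
	/* .write would be set to also queue a response to each request */
};

/* Send an unsolicited message to the guest on the response queue */
static int my_send(struct vhost_rpmsg *vr, const void *data, size_t len)
{
	struct vhost_virtqueue *vq = &vr->vq[VIRTIO_RPMSG_RESPONSE];
	struct vhost_rpmsg_iter iter = VHOST_RPMSG_ITER(vq, MY_EPT_ADDR,
							MY_GUEST_ADDR);
	int ret, err;

	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, len);
	if (ret < 0)
		return ret;

	if (vhost_rpmsg_copy(vr, &iter, (void *)data, len) != len)
		ret = -EIO;

	/* Always release the queue: this also kicks the guest */
	err = vhost_rpmsg_finish_unlock(vr, &iter);

	return ret < 0 ? ret : min(err, 0);
}

static void my_driver_setup(struct vhost_rpmsg *vr)
{
	vhost_rpmsg_init(vr, &my_ept, 1);
	/*
	 * After the guest has configured the virtqueues, announce the
	 * service so that virtio_rpmsg_bus.c creates a matching channel:
	 */
	if (vhost_rpmsg_ns_announce(vr, "my-service", MY_EPT_ADDR) < 0)
		pr_err("my-service: name-space announcement failed\n");
}
```

Note that my_send() relies on vhost_rpmsg_start_lock() resolving the endpoint from the source address in the iterator (the VIRTIO_RPMSG_RESPONSE branch of the patch), so MY_EPT_ADDR must be one of the addresses registered via vhost_rpmsg_init().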


^ permalink raw reply related	[flat|nested] 34+ messages in thread


* Re: [PATCH v5 2/4] rpmsg: move common structures and defines to headers
  2020-08-26 17:46   ` Guennadi Liakhovetski
  (?)
@ 2020-08-31 19:38   ` Mathieu Poirier
  -1 siblings, 0 replies; 34+ messages in thread
From: Mathieu Poirier @ 2020-08-31 19:38 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Wed, Aug 26, 2020 at 07:46:34PM +0200, Guennadi Liakhovetski wrote:
> virtio_rpmsg_bus.c keeps RPMsg protocol structure declarations and
> common defines, such as the ones needed for name-space announcements,
> internal. Move them to common headers instead.
> 
> Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>

https://www.spinics.net/lists/kvm/msg222766.html

> ---
>  drivers/rpmsg/virtio_rpmsg_bus.c | 78 +-----------------------------
>  include/linux/virtio_rpmsg.h     | 83 ++++++++++++++++++++++++++++++++
>  include/uapi/linux/rpmsg.h       |  3 ++
>  3 files changed, 88 insertions(+), 76 deletions(-)
>  create mode 100644 include/linux/virtio_rpmsg.h
> 
> diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
> index 9006fc7f73d0..9d5dd3f0a648 100644
> --- a/drivers/rpmsg/virtio_rpmsg_bus.c
> +++ b/drivers/rpmsg/virtio_rpmsg_bus.c
> @@ -26,7 +26,9 @@
>  #include <linux/virtio_byteorder.h>
>  #include <linux/virtio_ids.h>
>  #include <linux/virtio_config.h>
> +#include <linux/virtio_rpmsg.h>
>  #include <linux/wait.h>
> +#include <uapi/linux/rpmsg.h>
>  
>  #include "rpmsg_internal.h"
>  
> @@ -70,58 +72,6 @@ struct virtproc_info {
>  	struct rpmsg_endpoint *ns_ept;
>  };
>  
> -/* The feature bitmap for virtio rpmsg */
> -#define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
> -
> -/**
> - * struct rpmsg_hdr - common header for all rpmsg messages
> - * @src: source address
> - * @dst: destination address
> - * @reserved: reserved for future use
> - * @len: length of payload (in bytes)
> - * @flags: message flags
> - * @data: @len bytes of message payload data
> - *
> - * Every message sent(/received) on the rpmsg bus begins with this header.
> - */
> -struct rpmsg_hdr {
> -	__virtio32 src;
> -	__virtio32 dst;
> -	__virtio32 reserved;
> -	__virtio16 len;
> -	__virtio16 flags;
> -	u8 data[];
> -} __packed;
> -
> -/**
> - * struct rpmsg_ns_msg - dynamic name service announcement message
> - * @name: name of remote service that is published
> - * @addr: address of remote service that is published
> - * @flags: indicates whether service is created or destroyed
> - *
> - * This message is sent across to publish a new service, or announce
> - * about its removal. When we receive these messages, an appropriate
> - * rpmsg channel (i.e device) is created/destroyed. In turn, the ->probe()
> - * or ->remove() handler of the appropriate rpmsg driver will be invoked
> - * (if/as-soon-as one is registered).
> - */
> -struct rpmsg_ns_msg {
> -	char name[RPMSG_NAME_SIZE];
> -	__virtio32 addr;
> -	__virtio32 flags;
> -} __packed;
> -
> -/**
> - * enum rpmsg_ns_flags - dynamic name service announcement flags
> - *
> - * @RPMSG_NS_CREATE: a new remote service was just created
> - * @RPMSG_NS_DESTROY: a known remote service was just destroyed
> - */
> -enum rpmsg_ns_flags {
> -	RPMSG_NS_CREATE		= 0,
> -	RPMSG_NS_DESTROY	= 1,
> -};
> -
>  /**
>   * @vrp: the remote processor this channel belongs to
>   */
> @@ -134,27 +84,6 @@ struct virtio_rpmsg_channel {
>  #define to_virtio_rpmsg_channel(_rpdev) \
>  	container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
>  
> -/*
> - * We're allocating buffers of 512 bytes each for communications. The
> - * number of buffers will be computed from the number of buffers supported
> - * by the vring, upto a maximum of 512 buffers (256 in each direction).
> - *
> - * Each buffer will have 16 bytes for the msg header and 496 bytes for
> - * the payload.
> - *
> - * This will utilize a maximum total space of 256KB for the buffers.
> - *
> - * We might also want to add support for user-provided buffers in time.
> - * This will allow bigger buffer size flexibility, and can also be used
> - * to achieve zero-copy messaging.
> - *
> - * Note that these numbers are purely a decision of this driver - we
> - * can change this without changing anything in the firmware of the remote
> - * processor.
> - */
> -#define MAX_RPMSG_NUM_BUFS	(512)
> -#define MAX_RPMSG_BUF_SIZE	(512)
> -
>  /*
>   * Local addresses are dynamically allocated on-demand.
>   * We do not dynamically assign addresses from the low 1024 range,
> @@ -162,9 +91,6 @@ struct virtio_rpmsg_channel {
>   */
>  #define RPMSG_RESERVED_ADDRESSES	(1024)
>  
> -/* Address 53 is reserved for advertising remote services */
> -#define RPMSG_NS_ADDR			(53)
> -
>  static void virtio_rpmsg_destroy_ept(struct rpmsg_endpoint *ept);
>  static int virtio_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);
>  static int virtio_rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len,
> diff --git a/include/linux/virtio_rpmsg.h b/include/linux/virtio_rpmsg.h
> new file mode 100644
> index 000000000000..fcb523831e73
> --- /dev/null
> +++ b/include/linux/virtio_rpmsg.h
> @@ -0,0 +1,83 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _LINUX_VIRTIO_RPMSG_H
> +#define _LINUX_VIRTIO_RPMSG_H
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/types.h>
> +#include <linux/virtio_types.h>
> +
> +/**
> + * struct rpmsg_hdr - common header for all rpmsg messages
> + * @src: source address
> + * @dst: destination address
> + * @reserved: reserved for future use
> + * @len: length of payload (in bytes)
> + * @flags: message flags
> + * @data: @len bytes of message payload data
> + *
> + * Every message sent(/received) on the rpmsg bus begins with this header.
> + */
> +struct rpmsg_hdr {
> +	__virtio32 src;
> +	__virtio32 dst;
> +	__virtio32 reserved;
> +	__virtio16 len;
> +	__virtio16 flags;
> +	u8 data[];
> +} __packed;
> +
> +/**
> + * struct rpmsg_ns_msg - dynamic name service announcement message
> + * @name: name of remote service that is published
> + * @addr: address of remote service that is published
> + * @flags: indicates whether service is created or destroyed
> + *
> + * This message is sent across to publish a new service, or to announce
> + * its removal. When we receive these messages, an appropriate
> + * rpmsg channel (i.e. device) is created/destroyed. In turn, the ->probe()
> + * or ->remove() handler of the appropriate rpmsg driver will be invoked
> + * (if/as-soon-as one is registered).
> + */
> +struct rpmsg_ns_msg {
> +	char name[RPMSG_NAME_SIZE];
> +	__virtio32 addr;
> +	__virtio32 flags;
> +} __packed;
> +
> +/**
> + * enum rpmsg_ns_flags - dynamic name service announcement flags
> + *
> + * @RPMSG_NS_CREATE: a new remote service was just created
> + * @RPMSG_NS_DESTROY: a known remote service was just destroyed
> + */
> +enum rpmsg_ns_flags {
> +	RPMSG_NS_CREATE		= 0,
> +	RPMSG_NS_DESTROY	= 1,
> +};
> +
> +/*
> + * We're allocating buffers of 512 bytes each for communications. The
> + * number of buffers will be computed from the number of buffers supported
> + * by the vring, upto a maximum of 512 buffers (256 in each direction).
> + *
> + * Each buffer will have 16 bytes for the msg header and 496 bytes for
> + * the payload.
> + *
> + * This will utilize a maximum total space of 256KB for the buffers.
> + *
> + * We might also want to add support for user-provided buffers in time.
> + * This will allow bigger buffer size flexibility, and can also be used
> + * to achieve zero-copy messaging.
> + *
> + * Note that these numbers are purely a decision of this driver - we
> + * can change this without changing anything in the firmware of the remote
> + * processor.
> + */
> +#define MAX_RPMSG_NUM_BUFS	512
> +#define MAX_RPMSG_BUF_SIZE	512
> +
> +/* Address 53 is reserved for advertising remote services */
> +#define RPMSG_NS_ADDR		53
> +
> +#endif
> diff --git a/include/uapi/linux/rpmsg.h b/include/uapi/linux/rpmsg.h
> index e14c6dab4223..d669c04ef289 100644
> --- a/include/uapi/linux/rpmsg.h
> +++ b/include/uapi/linux/rpmsg.h
> @@ -24,4 +24,7 @@ struct rpmsg_endpoint_info {
>  #define RPMSG_CREATE_EPT_IOCTL	_IOW(0xb5, 0x1, struct rpmsg_endpoint_info)
>  #define RPMSG_DESTROY_EPT_IOCTL	_IO(0xb5, 0x2)
>  
> +/* The feature bitmap for virtio rpmsg */
> +#define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
> +
>  #endif
> -- 
> 2.28.0
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 0/4] Add a vhost RPMsg API
  2020-08-26 17:46 ` Guennadi Liakhovetski
@ 2020-09-08 14:16   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 34+ messages in thread
From: Michael S. Tsirkin @ 2020-09-08 14:16 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Jason Wang, Ohad Ben-Cohen,
	Bjorn Andersson, Mathieu Poirier, Vincent Whitchurch

On Wed, Aug 26, 2020 at 07:46:32PM +0200, Guennadi Liakhovetski wrote:
> Hi,
> 
> Next update:

OK could we get some acks from rpmsg folks on this please?
It's been quite a while, patchset is not huge.


> v5:
> - don't hard-code message layout
> 
> v4:
> - add endianness conversions to comply with the VirtIO standard
> 
> v3:
> - address several checkpatch warnings
> - address comments from Mathieu Poirier
> 
> v2:
> - update patch #5 with a correct vhost_dev_init() prototype
> - drop patch #6 - it depends on a different patch, that is currently
>   an RFC
> - address comments from Pierre-Louis Bossart:
>   * remove "default n" from Kconfig
> 
> Linux supports RPMsg over VirtIO for "remote processor" / AMP use
> cases. It can however also be used for virtualisation scenarios,
> e.g. when using KVM to run Linux on both the host and the guests.
> This patch set adds a wrapper API to facilitate writing vhost
> drivers for such RPMsg-based solutions. The first use case is an
> audio DSP virtualisation project, currently under development, ready
> for review and submission, available at
> https://github.com/thesofproject/linux/pull/1501/commits
> 
> Thanks
> Guennadi
> 
> Guennadi Liakhovetski (4):
>   vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
>   rpmsg: move common structures and defines to headers
>   rpmsg: update documentation
>   vhost: add an RPMsg API
> 
>  Documentation/rpmsg.txt          |   6 +-
>  drivers/rpmsg/virtio_rpmsg_bus.c |  78 +------
>  drivers/vhost/Kconfig            |   7 +
>  drivers/vhost/Makefile           |   3 +
>  drivers/vhost/rpmsg.c            | 373 +++++++++++++++++++++++++++++++
>  drivers/vhost/vhost_rpmsg.h      |  74 ++++++
>  include/linux/virtio_rpmsg.h     |  83 +++++++
>  include/uapi/linux/rpmsg.h       |   3 +
>  include/uapi/linux/vhost.h       |   4 +-
>  9 files changed, 551 insertions(+), 80 deletions(-)
>  create mode 100644 drivers/vhost/rpmsg.c
>  create mode 100644 drivers/vhost/vhost_rpmsg.h
>  create mode 100644 include/linux/virtio_rpmsg.h
> 
> -- 
> 2.28.0




* Re: [PATCH v5 0/4] Add a vhost RPMsg API
  2020-09-08 14:16   ` Michael S. Tsirkin
  (?)
@ 2020-09-08 22:20   ` Mathieu Poirier
  -1 siblings, 0 replies; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-08 22:20 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Guennadi Liakhovetski, kvm, linux-remoteproc, virtualization,
	sound-open-firmware, Pierre-Louis Bossart, Liam Girdwood,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Tue, Sep 08, 2020 at 10:16:52AM -0400, Michael S. Tsirkin wrote:
> On Wed, Aug 26, 2020 at 07:46:32PM +0200, Guennadi Liakhovetski wrote:
> > Hi,
> > 
> > Next update:
> 
> OK could we get some acks from rpmsg folks on this please?
> It's been quite a while, patchset is not huge.

There is a V6 of this set where Guennadi and I have agreed that patches 2 and 3
have been dealt with. Patch 1 is trivial, leaving only patch 4.  I had initially
decided to skip it because the vhost driver is completely foreign to me and the
cycles to change that are scarce.  But this set [1] from Arnaud has brought to
the fore issues related to the definition struct rpmsg_ns_msg, also used by
Guennadi's work.

As such I don't really have a choice now, I will review this series tomorrow or
Thursday.

[1]. https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335 

> 
> 
> > v5:
> > - don't hard-code message layout
> > 
> > v4:
> > - add endianness conversions to comply with the VirtIO standard
> > 
> > v3:
> > - address several checkpatch warnings
> > - address comments from Mathieu Poirier
> > 
> > v2:
> > - update patch #5 with a correct vhost_dev_init() prototype
> > - drop patch #6 - it depends on a different patch, that is currently
> >   an RFC
> > - address comments from Pierre-Louis Bossart:
> >   * remove "default n" from Kconfig
> > 
> > Linux supports RPMsg over VirtIO for "remote processor" / AMP use
> > cases. It can however also be used for virtualisation scenarios,
> > e.g. when using KVM to run Linux on both the host and the guests.
> > This patch set adds a wrapper API to facilitate writing vhost
> > drivers for such RPMsg-based solutions. The first use case is an
> > audio DSP virtualisation project, currently under development, ready
> > for review and submission, available at
> > https://github.com/thesofproject/linux/pull/1501/commits
> > 
> > Thanks
> > Guennadi
> > 
> > Guennadi Liakhovetski (4):
> >   vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
> >   rpmsg: move common structures and defines to headers
> >   rpmsg: update documentation
> >   vhost: add an RPMsg API
> > 
> >  Documentation/rpmsg.txt          |   6 +-
> >  drivers/rpmsg/virtio_rpmsg_bus.c |  78 +------
> >  drivers/vhost/Kconfig            |   7 +
> >  drivers/vhost/Makefile           |   3 +
> >  drivers/vhost/rpmsg.c            | 373 +++++++++++++++++++++++++++++++
> >  drivers/vhost/vhost_rpmsg.h      |  74 ++++++
> >  include/linux/virtio_rpmsg.h     |  83 +++++++
> >  include/uapi/linux/rpmsg.h       |   3 +
> >  include/uapi/linux/vhost.h       |   4 +-
> >  9 files changed, 551 insertions(+), 80 deletions(-)
> >  create mode 100644 drivers/vhost/rpmsg.c
> >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> >  create mode 100644 include/linux/virtio_rpmsg.h
> > 
> > -- 
> > 2.28.0
> 


* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-08-26 17:46   ` Guennadi Liakhovetski
  (?)
@ 2020-09-09 22:39   ` Mathieu Poirier
  2020-09-10  8:38       ` Guennadi Liakhovetski
  -1 siblings, 1 reply; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-09 22:39 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

Good afternoon,

On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> Linux supports running the RPMsg protocol over the VirtIO transport
> protocol, but currently there is only support for VirtIO clients and
> no support for a VirtIO server. This patch adds a vhost-based RPMsg
> server implementation.

This changelog is very confusing...  At this time the name service in the
remoteproc space runs as a server on the application processor.  But from the
above, the remoteproc use case seems to be treated as a client
configuration.

And I don't see a server implementation per se...  It is more like a client
implementation since vhost_rpmsg_ns_announce() uses the RESPONSE queue, which sends
messages from host to guest.

Perhaps it is my lack of familiarity with vhost terminology.

> 
> Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> ---
>  drivers/vhost/Kconfig       |   7 +
>  drivers/vhost/Makefile      |   3 +
>  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
>  drivers/vhost/vhost_rpmsg.h |  74 +++++++
>  4 files changed, 457 insertions(+)
>  create mode 100644 drivers/vhost/rpmsg.c
>  create mode 100644 drivers/vhost/vhost_rpmsg.h
> 
> diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> index 587fbae06182..046b948fc411 100644
> --- a/drivers/vhost/Kconfig
> +++ b/drivers/vhost/Kconfig
> @@ -38,6 +38,13 @@ config VHOST_NET
>  	  To compile this driver as a module, choose M here: the module will
>  	  be called vhost_net.
>  
> +config VHOST_RPMSG
> +	tristate
> +	select VHOST
> +	help
> +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> +	  drivers, using the RPMsg over VirtIO protocol.

I had to assume vhost drivers are running on the host and virtIO drivers on the
guests.  This may be common knowledge for people familiar with vhosts but
certainly obscure for commoners.  Having a help section that is clear on what is
happening would remove any ambiguity.
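One possible rewording (purely a suggestion, not part of the posted patch) that spells out the host/guest split:

```kconfig
config VHOST_RPMSG
	tristate
	select VHOST
	help
	  The vhost RPMsg API lets vhost drivers, running on the host,
	  exchange RPMsg-over-VirtIO messages with virtio-rpmsg drivers
	  (drivers/rpmsg/virtio_rpmsg_bus.c) running in the guests.
```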

> +
>  config VHOST_SCSI
>  	tristate "VHOST_SCSI TCM fabric driver"
>  	depends on TARGET_CORE && EVENTFD
> diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> index f3e1897cce85..9cf459d59f97 100644
> --- a/drivers/vhost/Makefile
> +++ b/drivers/vhost/Makefile
> @@ -2,6 +2,9 @@
>  obj-$(CONFIG_VHOST_NET) += vhost_net.o
>  vhost_net-y := net.o
>  
> +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> +vhost_rpmsg-y := rpmsg.o
> +
>  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
>  vhost_scsi-y := scsi.o
>  
> diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> new file mode 100644
> index 000000000000..c26d7a4afc6d
> --- /dev/null
> +++ b/drivers/vhost/rpmsg.c
> @@ -0,0 +1,373 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> + *
> + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> + *
> + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c

Again, very confusing.  The changelog refers to a server implementation but to
me this refers to a client implementation, especially if rpmsg_recv_single() and
rpmsg_ns_cb() are used on the other side of the pipe.  

> + * These functions handle creation of 2 virtual queues, handling of endpoint
> + * addresses, sending a name-space announcement to the guest as well as any
> + * user messages. This API can be used by any vhost driver to handle RPMsg
> + * specific processing.
> + * Specific vhost drivers, using this API will use their own VirtIO device
> + * IDs, that should then also be added to the ID table in virtio_rpmsg_bus.c
> + */
> +
> +#include <linux/compat.h>
> +#include <linux/file.h>
> +#include <linux/miscdevice.h>
> +#include <linux/module.h>
> +#include <linux/mutex.h>
> +#include <linux/vhost.h>
> +#include <linux/virtio_rpmsg.h>
> +#include <uapi/linux/rpmsg.h>
> +
> +#include "vhost.h"
> +#include "vhost_rpmsg.h"
> +
> +/*
> + * All virtio-rpmsg virtual queue kicks always come with just one buffer -
> + * either input or output, but we can also handle split messages
> + */
> +static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
> +{
> +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> +	unsigned int out, in;
> +	int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
> +				     NULL, NULL);
> +	if (head < 0) {
> +		vq_err(vq, "%s(): error %d getting buffer\n",
> +		       __func__, head);
> +		return head;
> +	}
> +
> +	/* Nothing new? */
> +	if (head == vq->num)
> +		return head;
> +
> +	if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
> +		if (out) {
> +			vq_err(vq, "%s(): invalid %d output in response queue\n",
> +			       __func__, out);
> +			goto return_buf;
> +		}
> +
> +		*cnt = in;
> +	}
> +
> +	if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
> +		if (in) {
> +			vq_err(vq, "%s(): invalid %d input in request queue\n",
> +		       __func__, in);
> +			goto return_buf;
> +		}
> +
> +		*cnt = out;
> +	}
> +
> +	return head;
> +
> +return_buf:
> +	vhost_add_used(vq, head, 0);
> +
> +	return -EINVAL;
> +}
> +
> +static const struct vhost_rpmsg_ept *vhost_rpmsg_ept_find(struct vhost_rpmsg *vr, int addr)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < vr->n_epts; i++)
> +		if (vr->ept[i].addr == addr)
> +			return vr->ept + i;
> +
> +	return NULL;
> +}
> +
> +/*
> + * if len < 0, then for reading a request, the complete virtual queue buffer
> + * size is prepared, for sending a response, the length in the iterator is used
> + */
> +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> +			   unsigned int qid, ssize_t len)
> +	__acquires(vq->mutex)
> +{
> +	struct vhost_virtqueue *vq = vr->vq + qid;
> +	unsigned int cnt;
> +	ssize_t ret;
> +	size_t tmp;
> +
> +	if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> +		return -EINVAL;
> +
> +	iter->vq = vq;
> +
> +	mutex_lock(&vq->mutex);
> +	vhost_disable_notify(&vr->dev, vq);
> +
> +	iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> +	if (iter->head == vq->num)
> +		iter->head = -EAGAIN;
> +
> +	if (iter->head < 0) {
> +		ret = iter->head;
> +		goto unlock;
> +	}
> +
> +	tmp = iov_length(vq->iov, cnt);
> +	if (tmp < sizeof(iter->rhdr)) {
> +		vq_err(vq, "%s(): size %zu too small\n", __func__, tmp);
> +		ret = -ENOBUFS;
> +		goto return_buf;
> +	}
> +
> +	switch (qid) {
> +	case VIRTIO_RPMSG_REQUEST:
> +		if (len >= 0) {
> +			if (tmp < sizeof(iter->rhdr) + len) {
> +				ret = -ENOBUFS;
> +				goto return_buf;
> +			}
> +
> +			tmp = len + sizeof(iter->rhdr);
> +		}
> +
> +		/* len is now the size of the payload */
> +		iov_iter_init(&iter->iov_iter, WRITE, vq->iov, cnt, tmp);
> +
> +		/* Read the RPMSG header with endpoint addresses */
> +		tmp = copy_from_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> +		if (tmp != sizeof(iter->rhdr)) {
> +			vq_err(vq, "%s(): got %zu instead of %zu\n", __func__,
> +			       tmp, sizeof(iter->rhdr));
> +			ret = -EIO;
> +			goto return_buf;
> +		}
> +
> +		iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.dst));
> +		if (!iter->ept) {
> +			vq_err(vq, "%s(): no endpoint with address %d\n",
> +			       __func__, vhost32_to_cpu(vq, iter->rhdr.dst));
> +			ret = -ENOENT;
> +			goto return_buf;
> +		}
> +
> +		/* Let the endpoint read the payload */
> +		if (iter->ept->read) {
> +			ret = iter->ept->read(vr, iter);
> +			if (ret < 0)
> +				goto return_buf;
> +
> +			iter->rhdr.len = cpu_to_vhost16(vq, ret);
> +		} else {
> +			iter->rhdr.len = 0;
> +		}
> +
> +		/* Prepare for the response phase */
> +		iter->rhdr.dst = iter->rhdr.src;
> +		iter->rhdr.src = cpu_to_vhost32(vq, iter->ept->addr);
> +
> +		break;
> +	case VIRTIO_RPMSG_RESPONSE:
> +		if (!iter->ept && iter->rhdr.dst != cpu_to_vhost32(vq, RPMSG_NS_ADDR)) {
> +			/*
> +			 * Usually the iterator is configured when processing a
> +			 * message on the request queue, but it's also possible
> +			 * to send a message on the response queue without a
> +			 * preceding request, in that case the iterator must
> +			 * contain source and destination addresses.
> +			 */
> +			iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.src));
> +			if (!iter->ept) {
> +				ret = -ENOENT;
> +				goto return_buf;
> +			}
> +		}
> +
> +		if (len >= 0) {
> +			if (tmp < sizeof(iter->rhdr) + len) {
> +				ret = -ENOBUFS;
> +				goto return_buf;
> +			}
> +
> +			iter->rhdr.len = cpu_to_vhost16(vq, len);
> +			tmp = len + sizeof(iter->rhdr);
> +		}
> +
> +		/* len is now the size of the payload */
> +		iov_iter_init(&iter->iov_iter, READ, vq->iov, cnt, tmp);
> +
> +		/* Write the RPMSG header with endpoint addresses */
> +		tmp = copy_to_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> +		if (tmp != sizeof(iter->rhdr)) {
> +			ret = -EIO;
> +			goto return_buf;
> +		}
> +
> +		/* Let the endpoint write the payload */
> +		if (iter->ept && iter->ept->write) {
> +			ret = iter->ept->write(vr, iter);
> +			if (ret < 0)
> +				goto return_buf;
> +		}
> +
> +		break;
> +	}
> +
> +	return 0;
> +
> +return_buf:
> +	vhost_add_used(vq, iter->head, 0);
> +unlock:
> +	vhost_enable_notify(&vr->dev, vq);
> +	mutex_unlock(&vq->mutex);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(vhost_rpmsg_start_lock);
> +
> +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> +			void *data, size_t size)
> +{
> +	/*
> +	 * We could check for excess data, but copy_{to,from}_iter() don't do
> +	 * that either
> +	 */
> +	if (iter->vq == vr->vq + VIRTIO_RPMSG_RESPONSE)
> +		return copy_to_iter(data, size, &iter->iov_iter);
> +
> +	return copy_from_iter(data, size, &iter->iov_iter);
> +}
> +EXPORT_SYMBOL_GPL(vhost_rpmsg_copy);
> +
> +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> +			      struct vhost_rpmsg_iter *iter)
> +	__releases(vq->mutex)
> +{
> +	if (iter->head >= 0)
> +		vhost_add_used_and_signal(iter->vq->dev, iter->vq, iter->head,
> +					  vhost16_to_cpu(iter->vq, iter->rhdr.len) +
> +					  sizeof(iter->rhdr));
> +
> +	vhost_enable_notify(&vr->dev, iter->vq);
> +	mutex_unlock(&iter->vq->mutex);
> +
> +	return iter->head;
> +}
> +EXPORT_SYMBOL_GPL(vhost_rpmsg_finish_unlock);
> +
> +/*
> + * Return false to terminate the external loop only if we fail to obtain either
> + * a request or a response buffer
> + */
> +static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
> +				    struct vhost_virtqueue *vq)
> +{
> +	struct vhost_rpmsg_iter iter;
> +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
> +	if (!ret)
> +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> +	if (ret < 0) {
> +		if (ret != -EAGAIN)
> +			vq_err(vq, "%s(): RPMSG processing failed %d\n",
> +			       __func__, ret);
> +		return false;
> +	}
> +
> +	if (!iter.ept->write)
> +		return true;
> +
> +	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
> +	if (!ret)
> +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> +	if (ret < 0) {
> +		vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
> +		return false;
> +	}
> +
> +	return true;
> +}
> +
> +static void handle_rpmsg_req_kick(struct vhost_work *work)
> +{
> +	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
> +						  poll.work);
> +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> +
> +	while (handle_rpmsg_req_single(vr, vq))
> +		;
> +}
> +
> +/*
> + * initialise two virtqueues with an array of endpoints,
> + * request and response callbacks
> + */
> +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> +		      unsigned int n_epts)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(vr->vq); i++)
> +		vr->vq_p[i] = &vr->vq[i];
> +
> +	/* vq[0]: host -> guest, vq[1]: host <- guest */
> +	vr->vq[VIRTIO_RPMSG_REQUEST].handle_kick = handle_rpmsg_req_kick;
> +	vr->vq[VIRTIO_RPMSG_RESPONSE].handle_kick = NULL;
> +
> +	vr->ept = ept;
> +	vr->n_epts = n_epts;
> +
> +	vhost_dev_init(&vr->dev, vr->vq_p, VIRTIO_RPMSG_NUM_OF_VQS,
> +		       UIO_MAXIOV, 0, 0, true, NULL);
> +}
> +EXPORT_SYMBOL_GPL(vhost_rpmsg_init);
> +
> +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr)
> +{
> +	if (vhost_dev_has_owner(&vr->dev))
> +		vhost_poll_flush(&vr->vq[VIRTIO_RPMSG_REQUEST].poll);
> +
> +	vhost_dev_cleanup(&vr->dev);
> +}
> +EXPORT_SYMBOL_GPL(vhost_rpmsg_destroy);
> +
> +/* send namespace */
> +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name, unsigned int src)
> +{
> +	struct vhost_virtqueue *vq = &vr->vq[VIRTIO_RPMSG_RESPONSE];
> +	struct vhost_rpmsg_iter iter = {
> +		.rhdr = {
> +			.src = 0,
> +			.dst = cpu_to_vhost32(vq, RPMSG_NS_ADDR),
> +			.flags = cpu_to_vhost16(vq, RPMSG_NS_CREATE), /* rpmsg_recv_single() */

Where is the flag used in rpmsg_recv_single()?  It is used for the name space
message (as you have below) but not in the header when doing a name space
announcement.

> +		},
> +	};
> +	struct rpmsg_ns_msg ns = {
> +		.addr = cpu_to_vhost32(vq, src),
> +		.flags = cpu_to_vhost32(vq, RPMSG_NS_CREATE), /* for rpmsg_ns_cb() */
> +	};
> +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, sizeof(ns));
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	strlcpy(ns.name, name, sizeof(ns.name));
> +
> +	ret = vhost_rpmsg_copy(vr, &iter, &ns, sizeof(ns));
> +	if (ret != sizeof(ns))
> +		vq_err(iter.vq, "%s(): added %d instead of %zu bytes\n",
> +		       __func__, ret, sizeof(ns));
> +
> +	ret = vhost_rpmsg_finish_unlock(vr, &iter);
> +	if (ret < 0)
> +		vq_err(iter.vq, "%s(): namespace announcement failed: %d\n",
> +		       __func__, ret);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(vhost_rpmsg_ns_announce);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Intel, Inc.");
> +MODULE_DESCRIPTION("Vhost RPMsg API");
> diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> new file mode 100644
> index 000000000000..30072cecb8a0
> --- /dev/null
> +++ b/drivers/vhost/vhost_rpmsg.h
> @@ -0,0 +1,74 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> + *
> + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> + */
> +
> +#ifndef VHOST_RPMSG_H
> +#define VHOST_RPMSG_H
> +
> +#include <linux/uio.h>
> +#include <linux/virtio_rpmsg.h>
> +
> +#include "vhost.h"
> +
> +/* RPMsg uses two VirtQueues: one for each direction */
> +enum {
> +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> +	/* Keep last */
> +	VIRTIO_RPMSG_NUM_OF_VQS,
> +};
> +
> +struct vhost_rpmsg_ept;
> +
> +struct vhost_rpmsg_iter {
> +	struct iov_iter iov_iter;
> +	struct rpmsg_hdr rhdr;
> +	struct vhost_virtqueue *vq;
> +	const struct vhost_rpmsg_ept *ept;
> +	int head;
> +	void *priv;

I don't see @priv being used anywhere.

> +};
> +
> +struct vhost_rpmsg {
> +	struct vhost_dev dev;
> +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> +	const struct vhost_rpmsg_ept *ept;
> +	unsigned int n_epts;
> +};
> +
> +struct vhost_rpmsg_ept {
> +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> +	int addr;
> +};
> +
> +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> +{
> +	return iter->rhdr.len;
> +}

Again, I don't see where this is used.

> +
> +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> +	.rhdr = {						\
> +			.src = cpu_to_vhost32(_vq, _src),	\
> +			.dst = cpu_to_vhost32(_vq, _dst),	\
> +		},						\
> +	}

Same.

Thanks,
Mathieu

> +
> +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> +		      unsigned int n_epts);
> +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> +			    unsigned int src);
> +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> +			   struct vhost_rpmsg_iter *iter,
> +			   unsigned int qid, ssize_t len);
> +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> +			void *data, size_t size);
> +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> +			      struct vhost_rpmsg_iter *iter);
> +
> +#endif
> -- 
> 2.28.0
> 


* Re: [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  2020-08-26 17:46   ` Guennadi Liakhovetski
  (?)
@ 2020-09-09 22:42   ` Mathieu Poirier
  2020-09-10  7:15       ` Guennadi Liakhovetski
  -1 siblings, 1 reply; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-09 22:42 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Wed, Aug 26, 2020 at 07:46:33PM +0200, Guennadi Liakhovetski wrote:
> VHOST_VSOCK_SET_RUNNING is used by the vhost vsock driver to perform
> crucial VirtQueue initialisation, like assigning .private fields and
> calling vhost_vq_init_access(), and clean up. However, this ioctl is
> actually extremely useful for any vhost driver, that doesn't have a
> side channel to inform it of a status change, e.g. upon a guest
> reboot. This patch makes that ioctl generic, while preserving its
> numeric value and also keeping the original alias.
> 
> Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> ---
>  include/uapi/linux/vhost.h | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> index 75232185324a..11a4948b6216 100644
> --- a/include/uapi/linux/vhost.h
> +++ b/include/uapi/linux/vhost.h
> @@ -97,6 +97,8 @@
>  #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
>  #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
>  
> +#define VHOST_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
> +

I don't see it used in the next patches and as such should be part of another
series.

>  /* VHOST_NET specific defines */
>  
>  /* Attach virtio net ring to a raw socket, or tap device.
> @@ -118,7 +120,7 @@
>  /* VHOST_VSOCK specific defines */
>  
>  #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
> -#define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
> +#define VHOST_VSOCK_SET_RUNNING		VHOST_SET_RUNNING
>  
>  /* VHOST_VDPA specific defines */
>  
> -- 
> 2.28.0
> 


* Re: [PATCH v5 3/4] rpmsg: update documentation
  2020-08-26 17:46   ` Guennadi Liakhovetski
  (?)
@ 2020-09-09 22:45   ` Mathieu Poirier
  2020-09-10  7:18       ` Guennadi Liakhovetski
  -1 siblings, 1 reply; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-09 22:45 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Wed, Aug 26, 2020 at 07:46:35PM +0200, Guennadi Liakhovetski wrote:
> rpmsg_create_ept() takes struct rpmsg_channel_info chinfo as its last
> argument, not a u32 value. The first two arguments are also updated.
> 
> Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> ---
>  Documentation/rpmsg.txt | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/rpmsg.txt b/Documentation/rpmsg.txt
> index 24b7a9e1a5f9..1ce353cb232a 100644
> --- a/Documentation/rpmsg.txt
> +++ b/Documentation/rpmsg.txt
> @@ -192,9 +192,9 @@ Returns 0 on success and an appropriate error value on failure.
>  
>  ::
>  
> -  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_channel *rpdev,
> -		void (*cb)(struct rpmsg_channel *, void *, int, void *, u32),
> -		void *priv, u32 addr);
> +  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev,
> +					  rpmsg_rx_cb_t cb, void *priv,
> +					  struct rpmsg_channel_info chinfo);

Again I don't see this being used in this set...  It should have been sent on
its own to the remoteproc and documentation mailing list.  Note that
Documentation/rpmsg.txt is now Documentation/staging/rpmsg.rst

>  
>  every rpmsg address in the system is bound to an rx callback (so when
>  inbound messages arrive, they are dispatched by the rpmsg bus using the
> -- 
> 2.28.0
> 


* Re: [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  2020-09-09 22:42   ` Mathieu Poirier
@ 2020-09-10  7:15       ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-10  7:15 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

Hi Mathieu,

On Wed, Sep 09, 2020 at 04:42:14PM -0600, Mathieu Poirier wrote:
> On Wed, Aug 26, 2020 at 07:46:33PM +0200, Guennadi Liakhovetski wrote:
> > VHOST_VSOCK_SET_RUNNING is used by the vhost vsock driver to perform
> > crucial VirtQueue initialisation, like assigning .private fields and
> > calling vhost_vq_init_access(), and clean up. However, this ioctl is
> > actually extremely useful for any vhost driver, that doesn't have a
> > side channel to inform it of a status change, e.g. upon a guest
> > reboot. This patch makes that ioctl generic, while preserving its
> > numeric value and also keeping the original alias.
> > 
> > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > ---
> >  include/uapi/linux/vhost.h | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> > index 75232185324a..11a4948b6216 100644
> > --- a/include/uapi/linux/vhost.h
> > +++ b/include/uapi/linux/vhost.h
> > @@ -97,6 +97,8 @@
> >  #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
> >  #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
> >  
> > +#define VHOST_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
> > +
> 
> I don't see it used in the next patches and as such should be part of another
> series.

It isn't used in the next patches; it is used in this patch itself - see below.

Thanks
Guennadi

> >  /* VHOST_NET specific defines */
> >  
> >  /* Attach virtio net ring to a raw socket, or tap device.
> > @@ -118,7 +120,7 @@
> >  /* VHOST_VSOCK specific defines */
> >  
> >  #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
> > -#define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
> > +#define VHOST_VSOCK_SET_RUNNING		VHOST_SET_RUNNING
> >  
> >  /* VHOST_VDPA specific defines */
> >  
> > -- 
> > 2.28.0
> > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 3/4] rpmsg: update documentation
  2020-09-09 22:45   ` Mathieu Poirier
@ 2020-09-10  7:18       ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-10  7:18 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Wed, Sep 09, 2020 at 04:45:21PM -0600, Mathieu Poirier wrote:
> On Wed, Aug 26, 2020 at 07:46:35PM +0200, Guennadi Liakhovetski wrote:
> > rpmsg_create_ept() takes struct rpmsg_channel_info chinfo as its last
> > argument, not a u32 value. The first two arguments are also updated.
> > 
> > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > ---
> >  Documentation/rpmsg.txt | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/Documentation/rpmsg.txt b/Documentation/rpmsg.txt
> > index 24b7a9e1a5f9..1ce353cb232a 100644
> > --- a/Documentation/rpmsg.txt
> > +++ b/Documentation/rpmsg.txt
> > @@ -192,9 +192,9 @@ Returns 0 on success and an appropriate error value on failure.
> >  
> >  ::
> >  
> > -  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_channel *rpdev,
> > -		void (*cb)(struct rpmsg_channel *, void *, int, void *, u32),
> > -		void *priv, u32 addr);
> > +  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev,
> > +					  rpmsg_rx_cb_t cb, void *priv,
> > +					  struct rpmsg_channel_info chinfo);
> 
> Again I don't see this being used in this set...  It should have been sent on
> its own to the remoteproc and documentation mailing list.  Note that
> Documentation/rpmsg.txt is now Documentation/staging/rpmsg.rst

Sure, can send it separately.

Thanks
Guennadi

> >  every rpmsg address in the system is bound to an rx callback (so when
> >  inbound messages arrive, they are dispatched by the rpmsg bus using the
> > -- 
> > 2.28.0
> > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-09-09 22:39   ` Mathieu Poirier
@ 2020-09-10  8:38       ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-10  8:38 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

Hi Mathieu,

On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> Good afternoon,
> 
> On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > Linux supports running the RPMsg protocol over the VirtIO transport
> > protocol, but currently there is only support for VirtIO clients and
> > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > server implementation.
> 
> This changelog is very confusing...  At this time the name service in the
> remoteproc space runs as a server on the application processor.  But from the
> above the remoteproc usecase seems to be considered to be a client
> configuration.

I agree that this isn't very obvious. But I think it is common to call the 
host "a server" and guests "clients," e.g. in the top-of-the-file comment in 
vhost.c:

 * Generic code for virtio server in host kernel.

I think the generic concept behind this notation is that as guests boot, 
they send their requests to the host, e.g. VirtIO device drivers on guests 
send requests over VirtQueues to VirtIO servers on the host, which can run 
either in user- or in kernel-space. And I think you can follow the same 
logic in the case of devices or remote processors too: it's the main CPU(s) 
that boot(s) and start(s) talking to devices and remote processors, so in that 
sense devices are servers and the CPUs are their clients.

And yes, the name-space announcement use-case seems confusing to me too - it 
reverses the relationship in a way: once a guest has booted and established 
connections to any rpmsg "devices," those send their namespace announcements 
back. But I think this can be regarded as server identification: you connect 
to a server and it replies with its identification and capabilities.

> And I don't see a server implementation per se...  It is more like a client
> implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> messages from host to guest.
> 
> Perhaps it is my lack of familiarity with vhost terminology.
> 
> > 
> > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > ---
> >  drivers/vhost/Kconfig       |   7 +
> >  drivers/vhost/Makefile      |   3 +
> >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> >  4 files changed, 457 insertions(+)
> >  create mode 100644 drivers/vhost/rpmsg.c
> >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > 
> > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > index 587fbae06182..046b948fc411 100644
> > --- a/drivers/vhost/Kconfig
> > +++ b/drivers/vhost/Kconfig
> > @@ -38,6 +38,13 @@ config VHOST_NET
> >  	  To compile this driver as a module, choose M here: the module will
> >  	  be called vhost_net.
> >  
> > +config VHOST_RPMSG
> > +	tristate
> > +	select VHOST
> > +	help
> > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > +	  drivers, using the RPMsg over VirtIO protocol.
> 
> I had to assume vhost drivers are running on the host and virtIO drivers on the
> guests.  This may be common knowledge for people familiar with vhosts but
> certainly obscur for commoners  Having a help section that is clear on what is
> happening would remove any ambiguity.

It is the terminology, yes, but you're right, the wording isn't very clear, will 
improve.

> > +
> >  config VHOST_SCSI
> >  	tristate "VHOST_SCSI TCM fabric driver"
> >  	depends on TARGET_CORE && EVENTFD
> > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > index f3e1897cce85..9cf459d59f97 100644
> > --- a/drivers/vhost/Makefile
> > +++ b/drivers/vhost/Makefile
> > @@ -2,6 +2,9 @@
> >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> >  vhost_net-y := net.o
> >  
> > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > +vhost_rpmsg-y := rpmsg.o
> > +
> >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> >  vhost_scsi-y := scsi.o
> >  
> > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > new file mode 100644
> > index 000000000000..c26d7a4afc6d
> > --- /dev/null
> > +++ b/drivers/vhost/rpmsg.c
> > @@ -0,0 +1,373 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > + *
> > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > + *
> > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> 
> Again, very confusing.  The changelog refers to a server implementation but to
> me this refers to a client implementation, especially if rpmsg_recv_single() and
> rpmsg_ns_cb() are used on the other side of the pipe.  

I think the above is correct. "Vhost" indicates that this is running on the host. 
"Match the guest side" means that you can use this API on the host and it is 
designed to work together with the RPMsg VirtIO drivers running on guests, as 
implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
description than "to match?"

> > + * These functions handle creation of 2 virtual queues, handling of endpoint
> > + * addresses, sending a name-space announcement to the guest as well as any
> > + * user messages. This API can be used by any vhost driver to handle RPMsg
> > + * specific processing.
> > + * Specific vhost drivers, using this API will use their own VirtIO device
> > + * IDs, that should then also be added to the ID table in virtio_rpmsg_bus.c
> > + */
> > +
> > +#include <linux/compat.h>
> > +#include <linux/file.h>
> > +#include <linux/miscdevice.h>
> > +#include <linux/module.h>
> > +#include <linux/mutex.h>
> > +#include <linux/vhost.h>
> > +#include <linux/virtio_rpmsg.h>
> > +#include <uapi/linux/rpmsg.h>
> > +
> > +#include "vhost.h"
> > +#include "vhost_rpmsg.h"
> > +
> > +/*
> > + * All virtio-rpmsg virtual queue kicks always come with just one buffer -
> > + * either input or output, but we can also handle split messages
> > + */
> > +static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
> > +{
> > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > +	unsigned int out, in;
> > +	int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
> > +				     NULL, NULL);
> > +	if (head < 0) {
> > +		vq_err(vq, "%s(): error %d getting buffer\n",
> > +		       __func__, head);
> > +		return head;
> > +	}
> > +
> > +	/* Nothing new? */
> > +	if (head == vq->num)
> > +		return head;
> > +
> > +	if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
> > +		if (out) {
> > +			vq_err(vq, "%s(): invalid %d output in response queue\n",
> > +			       __func__, out);
> > +			goto return_buf;
> > +		}
> > +
> > +		*cnt = in;
> > +	}
> > +
> > +	if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
> > +		if (in) {
> > +			vq_err(vq, "%s(): invalid %d input in request queue\n",
> > +		       __func__, in);
> > +			goto return_buf;
> > +		}
> > +
> > +		*cnt = out;
> > +	}
> > +
> > +	return head;
> > +
> > +return_buf:
> > +	vhost_add_used(vq, head, 0);
> > +
> > +	return -EINVAL;
> > +}
> > +
> > +static const struct vhost_rpmsg_ept *vhost_rpmsg_ept_find(struct vhost_rpmsg *vr, int addr)
> > +{
> > +	unsigned int i;
> > +
> > +	for (i = 0; i < vr->n_epts; i++)
> > +		if (vr->ept[i].addr == addr)
> > +			return vr->ept + i;
> > +
> > +	return NULL;
> > +}
> > +
> > +/*
> > + * if len < 0, then for reading a request, the complete virtual queue buffer
> > + * size is prepared, for sending a response, the length in the iterator is used
> > + */
> > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +			   unsigned int qid, ssize_t len)
> > +	__acquires(vq->mutex)
> > +{
> > +	struct vhost_virtqueue *vq = vr->vq + qid;
> > +	unsigned int cnt;
> > +	ssize_t ret;
> > +	size_t tmp;
> > +
> > +	if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> > +		return -EINVAL;
> > +
> > +	iter->vq = vq;
> > +
> > +	mutex_lock(&vq->mutex);
> > +	vhost_disable_notify(&vr->dev, vq);
> > +
> > +	iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> > +	if (iter->head == vq->num)
> > +		iter->head = -EAGAIN;
> > +
> > +	if (iter->head < 0) {
> > +		ret = iter->head;
> > +		goto unlock;
> > +	}
> > +
> > +	tmp = iov_length(vq->iov, cnt);
> > +	if (tmp < sizeof(iter->rhdr)) {
> > +		vq_err(vq, "%s(): size %zu too small\n", __func__, tmp);
> > +		ret = -ENOBUFS;
> > +		goto return_buf;
> > +	}
> > +
> > +	switch (qid) {
> > +	case VIRTIO_RPMSG_REQUEST:
> > +		if (len >= 0) {
> > +			if (tmp < sizeof(iter->rhdr) + len) {
> > +				ret = -ENOBUFS;
> > +				goto return_buf;
> > +			}
> > +
> > +			tmp = len + sizeof(iter->rhdr);
> > +		}
> > +
> > +		/* len is now the size of the payload */
> > +		iov_iter_init(&iter->iov_iter, WRITE, vq->iov, cnt, tmp);
> > +
> > +		/* Read the RPMSG header with endpoint addresses */
> > +		tmp = copy_from_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> > +		if (tmp != sizeof(iter->rhdr)) {
> > +			vq_err(vq, "%s(): got %zu instead of %zu\n", __func__,
> > +			       tmp, sizeof(iter->rhdr));
> > +			ret = -EIO;
> > +			goto return_buf;
> > +		}
> > +
> > +		iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.dst));
> > +		if (!iter->ept) {
> > +			vq_err(vq, "%s(): no endpoint with address %d\n",
> > +			       __func__, vhost32_to_cpu(vq, iter->rhdr.dst));
> > +			ret = -ENOENT;
> > +			goto return_buf;
> > +		}
> > +
> > +		/* Let the endpoint read the payload */
> > +		if (iter->ept->read) {
> > +			ret = iter->ept->read(vr, iter);
> > +			if (ret < 0)
> > +				goto return_buf;
> > +
> > +			iter->rhdr.len = cpu_to_vhost16(vq, ret);
> > +		} else {
> > +			iter->rhdr.len = 0;
> > +		}
> > +
> > +		/* Prepare for the response phase */
> > +		iter->rhdr.dst = iter->rhdr.src;
> > +		iter->rhdr.src = cpu_to_vhost32(vq, iter->ept->addr);
> > +
> > +		break;
> > +	case VIRTIO_RPMSG_RESPONSE:
> > +		if (!iter->ept && iter->rhdr.dst != cpu_to_vhost32(vq, RPMSG_NS_ADDR)) {
> > +			/*
> > +			 * Usually the iterator is configured when processing a
> > +			 * message on the request queue, but it's also possible
> > +			 * to send a message on the response queue without a
> > +			 * preceding request, in that case the iterator must
> > +			 * contain source and destination addresses.
> > +			 */
> > +			iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.src));
> > +			if (!iter->ept) {
> > +				ret = -ENOENT;
> > +				goto return_buf;
> > +			}
> > +		}
> > +
> > +		if (len >= 0) {
> > +			if (tmp < sizeof(iter->rhdr) + len) {
> > +				ret = -ENOBUFS;
> > +				goto return_buf;
> > +			}
> > +
> > +			iter->rhdr.len = cpu_to_vhost16(vq, len);
> > +			tmp = len + sizeof(iter->rhdr);
> > +		}
> > +
> > +		/* len is now the size of the payload */
> > +		iov_iter_init(&iter->iov_iter, READ, vq->iov, cnt, tmp);
> > +
> > +		/* Write the RPMSG header with endpoint addresses */
> > +		tmp = copy_to_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> > +		if (tmp != sizeof(iter->rhdr)) {
> > +			ret = -EIO;
> > +			goto return_buf;
> > +		}
> > +
> > +		/* Let the endpoint write the payload */
> > +		if (iter->ept && iter->ept->write) {
> > +			ret = iter->ept->write(vr, iter);
> > +			if (ret < 0)
> > +				goto return_buf;
> > +		}
> > +
> > +		break;
> > +	}
> > +
> > +	return 0;
> > +
> > +return_buf:
> > +	vhost_add_used(vq, iter->head, 0);
> > +unlock:
> > +	vhost_enable_notify(&vr->dev, vq);
> > +	mutex_unlock(&vq->mutex);
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_start_lock);
> > +
> > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +			void *data, size_t size)
> > +{
> > +	/*
> > +	 * We could check for excess data, but copy_{to,from}_iter() don't do
> > +	 * that either
> > +	 */
> > +	if (iter->vq == vr->vq + VIRTIO_RPMSG_RESPONSE)
> > +		return copy_to_iter(data, size, &iter->iov_iter);
> > +
> > +	return copy_from_iter(data, size, &iter->iov_iter);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_copy);
> > +
> > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > +			      struct vhost_rpmsg_iter *iter)
> > +	__releases(vq->mutex)
> > +{
> > +	if (iter->head >= 0)
> > +		vhost_add_used_and_signal(iter->vq->dev, iter->vq, iter->head,
> > +					  vhost16_to_cpu(iter->vq, iter->rhdr.len) +
> > +					  sizeof(iter->rhdr));
> > +
> > +	vhost_enable_notify(&vr->dev, iter->vq);
> > +	mutex_unlock(&iter->vq->mutex);
> > +
> > +	return iter->head;
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_finish_unlock);
> > +
> > +/*
> > + * Return false to terminate the external loop only if we fail to obtain either
> > + * a request or a response buffer
> > + */
> > +static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
> > +				    struct vhost_virtqueue *vq)
> > +{
> > +	struct vhost_rpmsg_iter iter;
> > +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
> > +	if (!ret)
> > +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > +	if (ret < 0) {
> > +		if (ret != -EAGAIN)
> > +			vq_err(vq, "%s(): RPMSG processing failed %d\n",
> > +			       __func__, ret);
> > +		return false;
> > +	}
> > +
> > +	if (!iter.ept->write)
> > +		return true;
> > +
> > +	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
> > +	if (!ret)
> > +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > +	if (ret < 0) {
> > +		vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static void handle_rpmsg_req_kick(struct vhost_work *work)
> > +{
> > +	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
> > +						  poll.work);
> > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > +
> > +	while (handle_rpmsg_req_single(vr, vq))
> > +		;
> > +}
> > +
> > +/*
> > + * initialise two virtqueues with an array of endpoints,
> > + * request and response callbacks
> > + */
> > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > +		      unsigned int n_epts)
> > +{
> > +	unsigned int i;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(vr->vq); i++)
> > +		vr->vq_p[i] = &vr->vq[i];
> > +
> > +	/* vq[0]: host -> guest, vq[1]: host <- guest */
> > +	vr->vq[VIRTIO_RPMSG_REQUEST].handle_kick = handle_rpmsg_req_kick;
> > +	vr->vq[VIRTIO_RPMSG_RESPONSE].handle_kick = NULL;
> > +
> > +	vr->ept = ept;
> > +	vr->n_epts = n_epts;
> > +
> > +	vhost_dev_init(&vr->dev, vr->vq_p, VIRTIO_RPMSG_NUM_OF_VQS,
> > +		       UIO_MAXIOV, 0, 0, true, NULL);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_init);
> > +
> > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr)
> > +{
> > +	if (vhost_dev_has_owner(&vr->dev))
> > +		vhost_poll_flush(&vr->vq[VIRTIO_RPMSG_REQUEST].poll);
> > +
> > +	vhost_dev_cleanup(&vr->dev);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_destroy);
> > +
> > +/* send namespace */
> > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name, unsigned int src)
> > +{
> > +	struct vhost_virtqueue *vq = &vr->vq[VIRTIO_RPMSG_RESPONSE];
> > +	struct vhost_rpmsg_iter iter = {
> > +		.rhdr = {
> > +			.src = 0,
> > +			.dst = cpu_to_vhost32(vq, RPMSG_NS_ADDR),
> > +			.flags = cpu_to_vhost16(vq, RPMSG_NS_CREATE), /* rpmsg_recv_single() */
> 
> Where is the flag used in rpmsg_recv_single()?  It is used for the name space
> message (as you have below) but not in the header when doing a name space
> announcement.

I think you're right, it isn't needed here, will remove.

> > +		},
> > +	};
> > +	struct rpmsg_ns_msg ns = {
> > +		.addr = cpu_to_vhost32(vq, src),
> > +		.flags = cpu_to_vhost32(vq, RPMSG_NS_CREATE), /* for rpmsg_ns_cb() */
> > +	};
> > +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, sizeof(ns));
> > +
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	strlcpy(ns.name, name, sizeof(ns.name));
> > +
> > +	ret = vhost_rpmsg_copy(vr, &iter, &ns, sizeof(ns));
> > +	if (ret != sizeof(ns))
> > +		vq_err(iter.vq, "%s(): added %d instead of %zu bytes\n",
> > +		       __func__, ret, sizeof(ns));
> > +
> > +	ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > +	if (ret < 0)
> > +		vq_err(iter.vq, "%s(): namespace announcement failed: %d\n",
> > +		       __func__, ret);
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_ns_announce);
> > +
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_AUTHOR("Intel, Inc.");
> > +MODULE_DESCRIPTION("Vhost RPMsg API");
> > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > new file mode 100644
> > index 000000000000..30072cecb8a0
> > --- /dev/null
> > +++ b/drivers/vhost/vhost_rpmsg.h
> > @@ -0,0 +1,74 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > + *
> > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > + */
> > +
> > +#ifndef VHOST_RPMSG_H
> > +#define VHOST_RPMSG_H
> > +
> > +#include <linux/uio.h>
> > +#include <linux/virtio_rpmsg.h>
> > +
> > +#include "vhost.h"
> > +
> > +/* RPMsg uses two VirtQueues: one for each direction */
> > +enum {
> > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > +	/* Keep last */
> > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > +};
> > +
> > +struct vhost_rpmsg_ept;
> > +
> > +struct vhost_rpmsg_iter {
> > +	struct iov_iter iov_iter;
> > +	struct rpmsg_hdr rhdr;
> > +	struct vhost_virtqueue *vq;
> > +	const struct vhost_rpmsg_ept *ept;
> > +	int head;
> > +	void *priv;
> 
> I don't see @priv being used anywhere.

That's logical: this is a field private to the API users, so the core shouldn't 
use it :-) It's used in later patches.

> 
> > +};
> > +
> > +struct vhost_rpmsg {
> > +	struct vhost_dev dev;
> > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > +	const struct vhost_rpmsg_ept *ept;
> > +	unsigned int n_epts;
> > +};
> > +
> > +struct vhost_rpmsg_ept {
> > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > +	int addr;
> > +};
> > +
> > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > +{
> > +	return iter->rhdr.len;
> > +}
> 
> Again, I don't see where this is used.

This is exported API; it's used by the API users.

> > +
> > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > +	.rhdr = {						\
> > +			.src = cpu_to_vhost32(_vq, _src),	\
> > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > +		},						\
> > +	}
> 
> Same.

ditto.

Thanks
Guennadi

> Thanks,
> Mathieu
> 
> > +
> > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > +		      unsigned int n_epts);
> > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > +			    unsigned int src);
> > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > +			   struct vhost_rpmsg_iter *iter,
> > +			   unsigned int qid, ssize_t len);
> > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +			void *data, size_t size);
> > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > +			      struct vhost_rpmsg_iter *iter);
> > +
> > +#endif
> > -- 
> > 2.28.0
> > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
@ 2020-09-10  8:38       ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-10  8:38 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: Ohad Ben-Cohen, kvm, Michael S. Tsirkin, Vincent Whitchurch,
	linux-remoteproc, Pierre-Louis Bossart, virtualization,
	Liam Girdwood, Bjorn Andersson, sound-open-firmware

Hi Mathieu,

On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> Good afternoon,
> 
> On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > Linux supports running the RPMsg protocol over the VirtIO transport
> > protocol, but currently there is only support for VirtIO clients and
> > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > server implementation.
> 
> This changelog is very confusing...  At this time the name service in the
> remoteproc space runs as a server on the application processor.  But from the
> above the remoteproc usecase seems to be considered to be a client
> configuration.

I agree that this isn't very obvious. But I think it is common to call the 
host "a server" and guests "clients." E.g. in vhost.c in the top-of-thefile 
comment:

 * Generic code for virtio server in host kernel.

I think the generic concept behind this notation is, that as guests boot, 
they send their requests to the host, e.g. VirtIO device drivers on guests 
send requests over VirtQueues to VirtIO servers on the host, which can run 
either in the user- or in the kernel-space. And I think you can follow that 
logic in case of devices or remote processors too: it's the main CPU(s) 
that boot(s) and start talking to devices and remote processors, so in that 
sence devices are servers and the CPUs are their clients.

And yes, the name-space announcement use-case seems confusing to me too - it 
reverts the relationship in a way: once a guest has booted and established 
connections to any rpmsg "devices," those send their namespace announcements 
back. But I think this can be regarded as server identification: you connect 
to a server and it replies with its identification and capabilities.

> And I don't see a server implementation per se...  It is more like a client
> implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> messages from host to guest.
> 
> Perhaps it is my lack of familiarity with vhost terminology.
> 
> > 
> > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > ---
> >  drivers/vhost/Kconfig       |   7 +
> >  drivers/vhost/Makefile      |   3 +
> >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> >  4 files changed, 457 insertions(+)
> >  create mode 100644 drivers/vhost/rpmsg.c
> >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > 
> > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > index 587fbae06182..046b948fc411 100644
> > --- a/drivers/vhost/Kconfig
> > +++ b/drivers/vhost/Kconfig
> > @@ -38,6 +38,13 @@ config VHOST_NET
> >  	  To compile this driver as a module, choose M here: the module will
> >  	  be called vhost_net.
> >  
> > +config VHOST_RPMSG
> > +	tristate
> > +	select VHOST
> > +	help
> > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > +	  drivers, using the RPMsg over VirtIO protocol.
> 
> I had to assume vhost drivers are running on the host and virtIO drivers on the
> guests.  This may be common knowledge for people familiar with vhosts but
> certainly obscure for commoners.  Having a help section that is clear on what is
> happening would remove any ambiguity.

It is the terminology, yes, but you're right, the wording isn't very clear, will 
improve.

> > +
> >  config VHOST_SCSI
> >  	tristate "VHOST_SCSI TCM fabric driver"
> >  	depends on TARGET_CORE && EVENTFD
> > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > index f3e1897cce85..9cf459d59f97 100644
> > --- a/drivers/vhost/Makefile
> > +++ b/drivers/vhost/Makefile
> > @@ -2,6 +2,9 @@
> >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> >  vhost_net-y := net.o
> >  
> > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > +vhost_rpmsg-y := rpmsg.o
> > +
> >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> >  vhost_scsi-y := scsi.o
> >  
> > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > new file mode 100644
> > index 000000000000..c26d7a4afc6d
> > --- /dev/null
> > +++ b/drivers/vhost/rpmsg.c
> > @@ -0,0 +1,373 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > + *
> > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > + *
> > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> 
> Again, very confusing.  The changelog refers to a server implementation but to
> me this refers to a client implementation, especially if rpmsg_recv_single() and
> rpmsg_ns_cb() are used on the other side of the pipe.  

I think the above is correct. "Vhost" indicates that this is running on the host. 
"Match the guest side" means that you can use this API on the host and it is 
designed to work together with the RPMsg VirtIO drivers running on guests, as 
implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
description than "to match?"
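
For reference, both sides frame every message with the rpmsg header that this 
patch set exposes in include/linux/virtio_rpmsg.h; a user-space replica (plain 
fixed-width types here instead of the kernel's guest-endian __virtio32/__virtio16 
fields) shows the layout the host-side API has to produce and parse:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * User-space replica of the rpmsg wire header used by
 * drivers/rpmsg/virtio_rpmsg_bus.c. In the kernel the fields are
 * guest-endian; here they are plain fixed-width integers.
 */
struct rpmsg_hdr_replica {
	uint32_t src;      /* source endpoint address */
	uint32_t dst;      /* destination endpoint address */
	uint32_t reserved;
	uint16_t len;      /* payload length, header excluded */
	uint16_t flags;
	uint8_t data[];    /* payload follows immediately */
} __attribute__((packed));
```

The 16-byte header carries the endpoint addresses that the host side matches 
against its endpoint table in vhost_rpmsg_ept_find().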

> > + * These functions handle creation of 2 virtual queues, handling of endpoint
> > + * addresses, sending a name-space announcement to the guest as well as any
> > + * user messages. This API can be used by any vhost driver to handle RPMsg
> > + * specific processing.
> > + * Specific vhost drivers, using this API will use their own VirtIO device
> > + * IDs, that should then also be added to the ID table in virtio_rpmsg_bus.c
> > + */
> > +
> > +#include <linux/compat.h>
> > +#include <linux/file.h>
> > +#include <linux/miscdevice.h>
> > +#include <linux/module.h>
> > +#include <linux/mutex.h>
> > +#include <linux/vhost.h>
> > +#include <linux/virtio_rpmsg.h>
> > +#include <uapi/linux/rpmsg.h>
> > +
> > +#include "vhost.h"
> > +#include "vhost_rpmsg.h"
> > +
> > +/*
> > + * All virtio-rpmsg virtual queue kicks always come with just one buffer -
> > + * either input or output, but we can also handle split messages
> > + */
> > +static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
> > +{
> > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > +	unsigned int out, in;
> > +	int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
> > +				     NULL, NULL);
> > +	if (head < 0) {
> > +		vq_err(vq, "%s(): error %d getting buffer\n",
> > +		       __func__, head);
> > +		return head;
> > +	}
> > +
> > +	/* Nothing new? */
> > +	if (head == vq->num)
> > +		return head;
> > +
> > +	if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
> > +		if (out) {
> > +			vq_err(vq, "%s(): invalid %d output in response queue\n",
> > +			       __func__, out);
> > +			goto return_buf;
> > +		}
> > +
> > +		*cnt = in;
> > +	}
> > +
> > +	if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
> > +		if (in) {
> > +			vq_err(vq, "%s(): invalid %d input in request queue\n",
> > +		       __func__, in);
> > +			goto return_buf;
> > +		}
> > +
> > +		*cnt = out;
> > +	}
> > +
> > +	return head;
> > +
> > +return_buf:
> > +	vhost_add_used(vq, head, 0);
> > +
> > +	return -EINVAL;
> > +}
> > +
> > +static const struct vhost_rpmsg_ept *vhost_rpmsg_ept_find(struct vhost_rpmsg *vr, int addr)
> > +{
> > +	unsigned int i;
> > +
> > +	for (i = 0; i < vr->n_epts; i++)
> > +		if (vr->ept[i].addr == addr)
> > +			return vr->ept + i;
> > +
> > +	return NULL;
> > +}
> > +
> > +/*
> > + * if len < 0, then for reading a request, the complete virtual queue buffer
> > + * size is prepared, for sending a response, the length in the iterator is used
> > + */
> > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +			   unsigned int qid, ssize_t len)
> > +	__acquires(vq->mutex)
> > +{
> > +	struct vhost_virtqueue *vq = vr->vq + qid;
> > +	unsigned int cnt;
> > +	ssize_t ret;
> > +	size_t tmp;
> > +
> > +	if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> > +		return -EINVAL;
> > +
> > +	iter->vq = vq;
> > +
> > +	mutex_lock(&vq->mutex);
> > +	vhost_disable_notify(&vr->dev, vq);
> > +
> > +	iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> > +	if (iter->head == vq->num)
> > +		iter->head = -EAGAIN;
> > +
> > +	if (iter->head < 0) {
> > +		ret = iter->head;
> > +		goto unlock;
> > +	}
> > +
> > +	tmp = iov_length(vq->iov, cnt);
> > +	if (tmp < sizeof(iter->rhdr)) {
> > +		vq_err(vq, "%s(): size %zu too small\n", __func__, tmp);
> > +		ret = -ENOBUFS;
> > +		goto return_buf;
> > +	}
> > +
> > +	switch (qid) {
> > +	case VIRTIO_RPMSG_REQUEST:
> > +		if (len >= 0) {
> > +			if (tmp < sizeof(iter->rhdr) + len) {
> > +				ret = -ENOBUFS;
> > +				goto return_buf;
> > +			}
> > +
> > +			tmp = len + sizeof(iter->rhdr);
> > +		}
> > +
> > +		/* len is now the size of the payload */
> > +		iov_iter_init(&iter->iov_iter, WRITE, vq->iov, cnt, tmp);
> > +
> > +		/* Read the RPMSG header with endpoint addresses */
> > +		tmp = copy_from_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> > +		if (tmp != sizeof(iter->rhdr)) {
> > +			vq_err(vq, "%s(): got %zu instead of %zu\n", __func__,
> > +			       tmp, sizeof(iter->rhdr));
> > +			ret = -EIO;
> > +			goto return_buf;
> > +		}
> > +
> > +		iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.dst));
> > +		if (!iter->ept) {
> > +			vq_err(vq, "%s(): no endpoint with address %d\n",
> > +			       __func__, vhost32_to_cpu(vq, iter->rhdr.dst));
> > +			ret = -ENOENT;
> > +			goto return_buf;
> > +		}
> > +
> > +		/* Let the endpoint read the payload */
> > +		if (iter->ept->read) {
> > +			ret = iter->ept->read(vr, iter);
> > +			if (ret < 0)
> > +				goto return_buf;
> > +
> > +			iter->rhdr.len = cpu_to_vhost16(vq, ret);
> > +		} else {
> > +			iter->rhdr.len = 0;
> > +		}
> > +
> > +		/* Prepare for the response phase */
> > +		iter->rhdr.dst = iter->rhdr.src;
> > +		iter->rhdr.src = cpu_to_vhost32(vq, iter->ept->addr);
> > +
> > +		break;
> > +	case VIRTIO_RPMSG_RESPONSE:
> > +		if (!iter->ept && iter->rhdr.dst != cpu_to_vhost32(vq, RPMSG_NS_ADDR)) {
> > +			/*
> > +			 * Usually the iterator is configured when processing a
> > +			 * message on the request queue, but it's also possible
> > +			 * to send a message on the response queue without a
> > +			 * preceding request, in that case the iterator must
> > +			 * contain source and destination addresses.
> > +			 */
> > +			iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.src));
> > +			if (!iter->ept) {
> > +				ret = -ENOENT;
> > +				goto return_buf;
> > +			}
> > +		}
> > +
> > +		if (len >= 0) {
> > +			if (tmp < sizeof(iter->rhdr) + len) {
> > +				ret = -ENOBUFS;
> > +				goto return_buf;
> > +			}
> > +
> > +			iter->rhdr.len = cpu_to_vhost16(vq, len);
> > +			tmp = len + sizeof(iter->rhdr);
> > +		}
> > +
> > +		/* len is now the size of the payload */
> > +		iov_iter_init(&iter->iov_iter, READ, vq->iov, cnt, tmp);
> > +
> > +		/* Write the RPMSG header with endpoint addresses */
> > +		tmp = copy_to_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> > +		if (tmp != sizeof(iter->rhdr)) {
> > +			ret = -EIO;
> > +			goto return_buf;
> > +		}
> > +
> > +		/* Let the endpoint write the payload */
> > +		if (iter->ept && iter->ept->write) {
> > +			ret = iter->ept->write(vr, iter);
> > +			if (ret < 0)
> > +				goto return_buf;
> > +		}
> > +
> > +		break;
> > +	}
> > +
> > +	return 0;
> > +
> > +return_buf:
> > +	vhost_add_used(vq, iter->head, 0);
> > +unlock:
> > +	vhost_enable_notify(&vr->dev, vq);
> > +	mutex_unlock(&vq->mutex);
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_start_lock);
> > +
> > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +			void *data, size_t size)
> > +{
> > +	/*
> > +	 * We could check for excess data, but copy_{to,from}_iter() don't do
> > +	 * that either
> > +	 */
> > +	if (iter->vq == vr->vq + VIRTIO_RPMSG_RESPONSE)
> > +		return copy_to_iter(data, size, &iter->iov_iter);
> > +
> > +	return copy_from_iter(data, size, &iter->iov_iter);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_copy);
> > +
> > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > +			      struct vhost_rpmsg_iter *iter)
> > +	__releases(vq->mutex)
> > +{
> > +	if (iter->head >= 0)
> > +		vhost_add_used_and_signal(iter->vq->dev, iter->vq, iter->head,
> > +					  vhost16_to_cpu(iter->vq, iter->rhdr.len) +
> > +					  sizeof(iter->rhdr));
> > +
> > +	vhost_enable_notify(&vr->dev, iter->vq);
> > +	mutex_unlock(&iter->vq->mutex);
> > +
> > +	return iter->head;
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_finish_unlock);
> > +
> > +/*
> > + * Return false to terminate the external loop only if we fail to obtain either
> > + * a request or a response buffer
> > + */
> > +static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
> > +				    struct vhost_virtqueue *vq)
> > +{
> > +	struct vhost_rpmsg_iter iter;
> > +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
> > +	if (!ret)
> > +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > +	if (ret < 0) {
> > +		if (ret != -EAGAIN)
> > +			vq_err(vq, "%s(): RPMSG processing failed %d\n",
> > +			       __func__, ret);
> > +		return false;
> > +	}
> > +
> > +	if (!iter.ept->write)
> > +		return true;
> > +
> > +	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
> > +	if (!ret)
> > +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > +	if (ret < 0) {
> > +		vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
> > +		return false;
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static void handle_rpmsg_req_kick(struct vhost_work *work)
> > +{
> > +	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
> > +						  poll.work);
> > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > +
> > +	while (handle_rpmsg_req_single(vr, vq))
> > +		;
> > +}
> > +
> > +/*
> > + * initialise two virtqueues with an array of endpoints,
> > + * request and response callbacks
> > + */
> > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > +		      unsigned int n_epts)
> > +{
> > +	unsigned int i;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(vr->vq); i++)
> > +		vr->vq_p[i] = &vr->vq[i];
> > +
> > +	/* vq[0]: host -> guest, vq[1]: host <- guest */
> > +	vr->vq[VIRTIO_RPMSG_REQUEST].handle_kick = handle_rpmsg_req_kick;
> > +	vr->vq[VIRTIO_RPMSG_RESPONSE].handle_kick = NULL;
> > +
> > +	vr->ept = ept;
> > +	vr->n_epts = n_epts;
> > +
> > +	vhost_dev_init(&vr->dev, vr->vq_p, VIRTIO_RPMSG_NUM_OF_VQS,
> > +		       UIO_MAXIOV, 0, 0, true, NULL);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_init);
> > +
> > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr)
> > +{
> > +	if (vhost_dev_has_owner(&vr->dev))
> > +		vhost_poll_flush(&vr->vq[VIRTIO_RPMSG_REQUEST].poll);
> > +
> > +	vhost_dev_cleanup(&vr->dev);
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_destroy);
> > +
> > +/* send namespace */
> > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name, unsigned int src)
> > +{
> > +	struct vhost_virtqueue *vq = &vr->vq[VIRTIO_RPMSG_RESPONSE];
> > +	struct vhost_rpmsg_iter iter = {
> > +		.rhdr = {
> > +			.src = 0,
> > +			.dst = cpu_to_vhost32(vq, RPMSG_NS_ADDR),
> > +			.flags = cpu_to_vhost16(vq, RPMSG_NS_CREATE), /* rpmsg_recv_single() */
> 
> Where is the flag used in rpmsg_recv_single()?  It is used for the name space
> message (as you have below) but not in the header when doing a name space
> announcement.

I think you're right, it isn't needed here, will remove.

> > +		},
> > +	};
> > +	struct rpmsg_ns_msg ns = {
> > +		.addr = cpu_to_vhost32(vq, src),
> > +		.flags = cpu_to_vhost32(vq, RPMSG_NS_CREATE), /* for rpmsg_ns_cb() */
> > +	};
> > +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, sizeof(ns));
> > +
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	strlcpy(ns.name, name, sizeof(ns.name));
> > +
> > +	ret = vhost_rpmsg_copy(vr, &iter, &ns, sizeof(ns));
> > +	if (ret != sizeof(ns))
> > +		vq_err(iter.vq, "%s(): added %d instead of %zu bytes\n",
> > +		       __func__, ret, sizeof(ns));
> > +
> > +	ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > +	if (ret < 0)
> > +		vq_err(iter.vq, "%s(): namespace announcement failed: %d\n",
> > +		       __func__, ret);
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(vhost_rpmsg_ns_announce);
> > +
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_AUTHOR("Intel, Inc.");
> > +MODULE_DESCRIPTION("Vhost RPMsg API");
> > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > new file mode 100644
> > index 000000000000..30072cecb8a0
> > --- /dev/null
> > +++ b/drivers/vhost/vhost_rpmsg.h
> > @@ -0,0 +1,74 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > + *
> > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > + */
> > +
> > +#ifndef VHOST_RPMSG_H
> > +#define VHOST_RPMSG_H
> > +
> > +#include <linux/uio.h>
> > +#include <linux/virtio_rpmsg.h>
> > +
> > +#include "vhost.h"
> > +
> > +/* RPMsg uses two VirtQueues: one for each direction */
> > +enum {
> > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > +	/* Keep last */
> > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > +};
> > +
> > +struct vhost_rpmsg_ept;
> > +
> > +struct vhost_rpmsg_iter {
> > +	struct iov_iter iov_iter;
> > +	struct rpmsg_hdr rhdr;
> > +	struct vhost_virtqueue *vq;
> > +	const struct vhost_rpmsg_ept *ept;
> > +	int head;
> > +	void *priv;
> 
> I don't see @priv being used anywhere.

That's logical: this is a field private to the API users, so the core shouldn't 
use it :-) It's used in later patches.
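
For illustration, a hypothetical driver (all names here are invented, not part 
of this patch set) could use that field to carry per-request state from its 
->read() callback into the subsequent ->write(); sketched in user space with 
minimal stand-in types:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Stand-ins for the kernel types; only the fields this sketch touches. */
struct vhost_rpmsg;
struct vhost_rpmsg_iter {
	void *priv;        /* private to the API user, untouched by the core */
};

/* Hypothetical per-request state a driver might want to carry over. */
struct my_req_state {
	int stream_id;
};

/* Hypothetical ->read() callback: parse the request, remember the stream. */
static ssize_t my_ept_read(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter)
{
	static struct my_req_state state;

	state.stream_id = 3;   /* would come from the copied-in payload */
	iter->priv = &state;   /* core code never looks at this */
	return 0;
}

/* Hypothetical ->write() callback: reuse the state saved by ->read(). */
static ssize_t my_ept_write(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter)
{
	struct my_req_state *state = iter->priv;

	return state->stream_id;  /* e.g. select the response payload */
}
```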

> 
> > +};
> > +
> > +struct vhost_rpmsg {
> > +	struct vhost_dev dev;
> > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > +	const struct vhost_rpmsg_ept *ept;
> > +	unsigned int n_epts;
> > +};
> > +
> > +struct vhost_rpmsg_ept {
> > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > +	int addr;
> > +};
> > +
> > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > +{
> > +	return iter->rhdr.len;
> > +}
> 
> Again, I don't see where this is used.

This is exported API; it's used by the API users.

> > +
> > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > +	.rhdr = {						\
> > +			.src = cpu_to_vhost32(_vq, _src),	\
> > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > +		},						\
> > +	}
> 
> Same.

ditto.
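
For what it's worth, the macro's effect can be sketched in user space with 
stand-in types and an identity stand-in for cpu_to_vhost32(), which in the 
kernel byte-swaps only for cross-endian guests:

```c
#include <assert.h>
#include <stdint.h>

/* Identity stand-in: the real cpu_to_vhost32() swaps for cross-endian guests. */
#define cpu_to_vhost32(vq, x) ((uint32_t)(x))

/* Minimal stand-ins for the kernel structures. */
struct rpmsg_hdr_stub {
	uint32_t src;
	uint32_t dst;
};

struct vhost_rpmsg_iter_stub {
	struct rpmsg_hdr_stub rhdr;
};

/* Same shape as the VHOST_RPMSG_ITER() macro in vhost_rpmsg.h. */
#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
	.rhdr = {						\
			.src = cpu_to_vhost32(_vq, _src),	\
			.dst = cpu_to_vhost32(_vq, _dst),	\
		},						\
	}
```

A user initializes an iterator with its source and destination endpoint 
addresses before passing it to vhost_rpmsg_start_lock() on the response queue.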

Thanks
Guennadi

> Thanks,
> Mathieu
> 
> > +
> > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > +		      unsigned int n_epts);
> > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > +			    unsigned int src);
> > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > +			   struct vhost_rpmsg_iter *iter,
> > +			   unsigned int qid, ssize_t len);
> > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +			void *data, size_t size);
> > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > +			      struct vhost_rpmsg_iter *iter);
> > +
> > +#endif
> > -- 
> > 2.28.0
> > 
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 3/4] rpmsg: update documentation
  2020-09-10  7:18       ` Guennadi Liakhovetski
@ 2020-09-10 11:19         ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-10 11:19 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Thu, Sep 10, 2020 at 09:18:41AM +0200, Guennadi Liakhovetski wrote:
> On Wed, Sep 09, 2020 at 04:45:21PM -0600, Mathieu Poirier wrote:
> > On Wed, Aug 26, 2020 at 07:46:35PM +0200, Guennadi Liakhovetski wrote:
> > > rpmsg_create_ept() takes struct rpmsg_channel_info chinfo as its last
> > > argument, not a u32 value. The first two arguments are also updated.
> > > 
> > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > ---
> > >  Documentation/rpmsg.txt | 6 +++---
> > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/Documentation/rpmsg.txt b/Documentation/rpmsg.txt
> > > index 24b7a9e1a5f9..1ce353cb232a 100644
> > > --- a/Documentation/rpmsg.txt
> > > +++ b/Documentation/rpmsg.txt
> > > @@ -192,9 +192,9 @@ Returns 0 on success and an appropriate error value on failure.
> > >  
> > >  ::
> > >  
> > > -  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_channel *rpdev,
> > > -		void (*cb)(struct rpmsg_channel *, void *, int, void *, u32),
> > > -		void *priv, u32 addr);
> > > +  struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_device *rpdev,
> > > +					  rpmsg_rx_cb_t cb, void *priv,
> > > +					  struct rpmsg_channel_info chinfo);
> > 
> > Again I don't see this being used in this set...  It should have been sent on
> > its own to the remoteproc and documentation mailing list.  Note that
> > Documentation/rpmsg.txt is now Documentation/staging/rpmsg.rst

But you haven't pulled that change into your tree yet. Should I send as is for now 
or wait for you to cherry-pick that change?

> Sure, can send it separately.
> 
> Thanks
> Guennadi
> 
> > >  every rpmsg address in the system is bound to an rx callback (so when
> > >  inbound messages arrive, they are dispatched by the rpmsg bus using the
> > > -- 
> > > 2.28.0
> > > 



* Re: [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  2020-09-10  7:15       ` Guennadi Liakhovetski
  (?)
@ 2020-09-10 16:46       ` Mathieu Poirier
  2020-09-11  7:59           ` Guennadi Liakhovetski
  -1 siblings, 1 reply; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-10 16:46 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Thu, Sep 10, 2020 at 09:15:13AM +0200, Guennadi Liakhovetski wrote:
> Hi Mathieu,
> 
> On Wed, Sep 09, 2020 at 04:42:14PM -0600, Mathieu Poirier wrote:
> > On Wed, Aug 26, 2020 at 07:46:33PM +0200, Guennadi Liakhovetski wrote:
> > > VHOST_VSOCK_SET_RUNNING is used by the vhost vsock driver to perform
> > > crucial VirtQueue initialisation, like assigning .private fields and
> > > calling vhost_vq_init_access(), and clean up. However, this ioctl is
> > > actually extremely useful for any vhost driver, that doesn't have a
> > > side channel to inform it of a status change, e.g. upon a guest
> > > reboot. This patch makes that ioctl generic, while preserving its
> > > numeric value and also keeping the original alias.
> > > 
> > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > ---
> > >  include/uapi/linux/vhost.h | 4 +++-
> > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> > > index 75232185324a..11a4948b6216 100644
> > > --- a/include/uapi/linux/vhost.h
> > > +++ b/include/uapi/linux/vhost.h
> > > @@ -97,6 +97,8 @@
> > >  #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
> > >  #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
> > >  
> > > +#define VHOST_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
> > > +
> > 
> > I don't see it used in the next patches and as such should be part of another
> > series.
> 
> It isn't used in the next patches, it is used in this patch - see below.
>

Right, but why is this part of this set?  What does it bring?  It should be part
of a patchset where "VHOST_SET_RUNNING" is used.
 
> Thanks
> Guennadi
> 
> > >  /* VHOST_NET specific defines */
> > >  
> > >  /* Attach virtio net ring to a raw socket, or tap device.
> > > @@ -118,7 +120,7 @@
> > >  /* VHOST_VSOCK specific defines */
> > >  
> > >  #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
> > > -#define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
> > > +#define VHOST_VSOCK_SET_RUNNING		VHOST_SET_RUNNING
> > >  
> > >  /* VHOST_VDPA specific defines */
> > >  
> > > -- 
> > > 2.28.0
> > > 


* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-09-10  8:38       ` Guennadi Liakhovetski
  (?)
@ 2020-09-10 17:22       ` Mathieu Poirier
  2020-09-11  7:46         ` Guennadi Liakhovetski
  -1 siblings, 1 reply; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-10 17:22 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

Good morning Guennadi,

On Thu, Sep 10, 2020 at 10:38:54AM +0200, Guennadi Liakhovetski wrote:
> Hi Mathieu,
> 
> On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> > Good afternoon,
> > 
> > On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > protocol, but currently there is only support for VirtIO clients and
> > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > server implementation.
> > 
> > This changelog is very confusing...  At this time the name service in the
> > remoteproc space runs as a server on the application processor.  But from the
> > above the remoteproc usecase seems to be considered to be a client
> > configuration.
> 
> I agree that this isn't very obvious. But I think it is common to call the 
> host "a server" and guests "clients." E.g. in vhost.c in the top-of-the-file 
> comment:

Ok - that part we agree on.

> 
>  * Generic code for virtio server in host kernel.
> 
> I think the generic concept behind this notation is, that as guests boot, 
> they send their requests to the host, e.g. VirtIO device drivers on guests 
> send requests over VirtQueues to VirtIO servers on the host, which can run 
> either in the user- or in the kernel-space. And I think you can follow that 

I can see that process taking place.  After all, virtIO devices on guests are
only stubs that need host support for access to HW.

> logic in case of devices or remote processors too: it's the main CPU(s) 
> that boot(s) and start talking to devices and remote processors, so in that 
> sense devices are servers and the CPUs are their clients.

In the remote processor case, the remoteproc core (application processor) sets up
the name service but does not initiate the communication with a remote
processor.  It simply waits there for a name space request to come in from the
remote processor.

> 
> And yes, the name-space announcement use-case seems confusing to me too - it 
> inverts the relationship in a way: once a guest has booted and established 
> connections to any rpmsg "devices," those send their namespace announcements 
> back. But I think this can be regarded as server identification: you connect 
> to a server and it replies with its identification and capabilities.

Based on the above can I assume vhost_rpmsg_ns_announce() is sent from the
guest?

I saw your V7, something I will look into.  In the meantime I need to bring
your attention to this set [1] from Arnaud.  Please have a look as it will
impact your work.

https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335

> 
> > And I don't see a server implementation per se...  It is more like a client
> > implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> > messages from host to guest.
> > 
> > Perhaps it is my lack of familiarity with vhost terminology.
> > 
> > > 
> > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > ---
> > >  drivers/vhost/Kconfig       |   7 +
> > >  drivers/vhost/Makefile      |   3 +
> > >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > >  4 files changed, 457 insertions(+)
> > >  create mode 100644 drivers/vhost/rpmsg.c
> > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > 
> > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > index 587fbae06182..046b948fc411 100644
> > > --- a/drivers/vhost/Kconfig
> > > +++ b/drivers/vhost/Kconfig
> > > @@ -38,6 +38,13 @@ config VHOST_NET
> > >  	  To compile this driver as a module, choose M here: the module will
> > >  	  be called vhost_net.
> > >  
> > > +config VHOST_RPMSG
> > > +	tristate
> > > +	select VHOST
> > > +	help
> > > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > +	  drivers, using the RPMsg over VirtIO protocol.
> > 
> > I had to assume vhost drivers are running on the host and virtIO drivers on the
> > guests.  This may be common knowledge for people familiar with vhosts but
> > certainly obscure for commoners.  Having a help section that is clear on what is
> > happening would remove any ambiguity.
> 
> It is the terminology, yes, but you're right, the wording isn't very clear, will 
> improve.
> 
> > > +
> > >  config VHOST_SCSI
> > >  	tristate "VHOST_SCSI TCM fabric driver"
> > >  	depends on TARGET_CORE && EVENTFD
> > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > index f3e1897cce85..9cf459d59f97 100644
> > > --- a/drivers/vhost/Makefile
> > > +++ b/drivers/vhost/Makefile
> > > @@ -2,6 +2,9 @@
> > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > >  vhost_net-y := net.o
> > >  
> > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > +vhost_rpmsg-y := rpmsg.o
> > > +
> > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > >  vhost_scsi-y := scsi.o
> > >  
> > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > new file mode 100644
> > > index 000000000000..c26d7a4afc6d
> > > --- /dev/null
> > > +++ b/drivers/vhost/rpmsg.c
> > > @@ -0,0 +1,373 @@
> > > +// SPDX-License-Identifier: GPL-2.0-only
> > > +/*
> > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > + *
> > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > + *
> > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > 
> > Again, very confusing.  The changelog refers to a server implementation but to
> > me this refers to a client implementation, especially if rpmsg_recv_single() and
> > rpmsg_ns_cb() are used on the other side of the pipe.  
> 
> I think the above is correct. "Vhost" indicates that this is running on the host. 
> "Match the guest side" means that you can use this API on the host and it is 
> designed to work together with the RPMsg VirtIO drivers running on guests, as 
> implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
> description than "to match?"

Let's forget about this part now and concentrate on the above conversation.
Things will start to make sense at one point.

> 
> > > + * These functions handle creation of 2 virtual queues, handling of endpoint
> > > + * addresses, sending a name-space announcement to the guest as well as any
> > > + * user messages. This API can be used by any vhost driver to handle RPMsg
> > > + * specific processing.
> > > + * Specific vhost drivers, using this API will use their own VirtIO device
> > > + * IDs, that should then also be added to the ID table in virtio_rpmsg_bus.c
> > > + */
> > > +
> > > +#include <linux/compat.h>
> > > +#include <linux/file.h>
> > > +#include <linux/miscdevice.h>
> > > +#include <linux/module.h>
> > > +#include <linux/mutex.h>
> > > +#include <linux/vhost.h>
> > > +#include <linux/virtio_rpmsg.h>
> > > +#include <uapi/linux/rpmsg.h>
> > > +
> > > +#include "vhost.h"
> > > +#include "vhost_rpmsg.h"
> > > +
> > > +/*
> > > + * All virtio-rpmsg virtual queue kicks always come with just one buffer -
> > > + * either input or output, but we can also handle split messages
> > > + */
> > > +static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
> > > +{
> > > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > > +	unsigned int out, in;
> > > +	int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
> > > +				     NULL, NULL);
> > > +	if (head < 0) {
> > > +		vq_err(vq, "%s(): error %d getting buffer\n",
> > > +		       __func__, head);
> > > +		return head;
> > > +	}
> > > +
> > > +	/* Nothing new? */
> > > +	if (head == vq->num)
> > > +		return head;
> > > +
> > > +	if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
> > > +		if (out) {
> > > +			vq_err(vq, "%s(): invalid %d output in response queue\n",
> > > +			       __func__, out);
> > > +			goto return_buf;
> > > +		}
> > > +
> > > +		*cnt = in;
> > > +	}
> > > +
> > > +	if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
> > > +		if (in) {
> > > +			vq_err(vq, "%s(): invalid %d input in request queue\n",
> > > +			       __func__, in);
> > > +			goto return_buf;
> > > +		}
> > > +
> > > +		*cnt = out;
> > > +	}
> > > +
> > > +	return head;
> > > +
> > > +return_buf:
> > > +	vhost_add_used(vq, head, 0);
> > > +
> > > +	return -EINVAL;
> > > +}
> > > +
> > > +static const struct vhost_rpmsg_ept *vhost_rpmsg_ept_find(struct vhost_rpmsg *vr, int addr)
> > > +{
> > > +	unsigned int i;
> > > +
> > > +	for (i = 0; i < vr->n_epts; i++)
> > > +		if (vr->ept[i].addr == addr)
> > > +			return vr->ept + i;
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > > +/*
> > > + * if len < 0, then for reading a request, the complete virtual queue buffer
> > > + * size is prepared, for sending a response, the length in the iterator is used
> > > + */
> > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > +			   unsigned int qid, ssize_t len)
> > > +	__acquires(vq->mutex)
> > > +{
> > > +	struct vhost_virtqueue *vq = vr->vq + qid;
> > > +	unsigned int cnt;
> > > +	ssize_t ret;
> > > +	size_t tmp;
> > > +
> > > +	if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> > > +		return -EINVAL;
> > > +
> > > +	iter->vq = vq;
> > > +
> > > +	mutex_lock(&vq->mutex);
> > > +	vhost_disable_notify(&vr->dev, vq);
> > > +
> > > +	iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> > > +	if (iter->head == vq->num)
> > > +		iter->head = -EAGAIN;
> > > +
> > > +	if (iter->head < 0) {
> > > +		ret = iter->head;
> > > +		goto unlock;
> > > +	}
> > > +
> > > +	tmp = iov_length(vq->iov, cnt);
> > > +	if (tmp < sizeof(iter->rhdr)) {
> > > +		vq_err(vq, "%s(): size %zu too small\n", __func__, tmp);
> > > +		ret = -ENOBUFS;
> > > +		goto return_buf;
> > > +	}
> > > +
> > > +	switch (qid) {
> > > +	case VIRTIO_RPMSG_REQUEST:
> > > +		if (len >= 0) {
> > > +			if (tmp < sizeof(iter->rhdr) + len) {
> > > +				ret = -ENOBUFS;
> > > +				goto return_buf;
> > > +			}
> > > +
> > > +			tmp = len + sizeof(iter->rhdr);
> > > +		}
> > > +
> > > +		/* len is now the size of the payload */
> > > +		iov_iter_init(&iter->iov_iter, WRITE, vq->iov, cnt, tmp);
> > > +
> > > +		/* Read the RPMSG header with endpoint addresses */
> > > +		tmp = copy_from_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> > > +		if (tmp != sizeof(iter->rhdr)) {
> > > +			vq_err(vq, "%s(): got %zu instead of %zu\n", __func__,
> > > +			       tmp, sizeof(iter->rhdr));
> > > +			ret = -EIO;
> > > +			goto return_buf;
> > > +		}
> > > +
> > > +		iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.dst));
> > > +		if (!iter->ept) {
> > > +			vq_err(vq, "%s(): no endpoint with address %d\n",
> > > +			       __func__, vhost32_to_cpu(vq, iter->rhdr.dst));
> > > +			ret = -ENOENT;
> > > +			goto return_buf;
> > > +		}
> > > +
> > > +		/* Let the endpoint read the payload */
> > > +		if (iter->ept->read) {
> > > +			ret = iter->ept->read(vr, iter);
> > > +			if (ret < 0)
> > > +				goto return_buf;
> > > +
> > > +			iter->rhdr.len = cpu_to_vhost16(vq, ret);
> > > +		} else {
> > > +			iter->rhdr.len = 0;
> > > +		}
> > > +
> > > +		/* Prepare for the response phase */
> > > +		iter->rhdr.dst = iter->rhdr.src;
> > > +		iter->rhdr.src = cpu_to_vhost32(vq, iter->ept->addr);
> > > +
> > > +		break;
> > > +	case VIRTIO_RPMSG_RESPONSE:
> > > +		if (!iter->ept && iter->rhdr.dst != cpu_to_vhost32(vq, RPMSG_NS_ADDR)) {
> > > +			/*
> > > +			 * Usually the iterator is configured when processing a
> > > +			 * message on the request queue, but it's also possible
> > > +			 * to send a message on the response queue without a
> > > +			 * preceding request, in that case the iterator must
> > > +			 * contain source and destination addresses.
> > > +			 */
> > > +			iter->ept = vhost_rpmsg_ept_find(vr, vhost32_to_cpu(vq, iter->rhdr.src));
> > > +			if (!iter->ept) {
> > > +				ret = -ENOENT;
> > > +				goto return_buf;
> > > +			}
> > > +		}
> > > +
> > > +		if (len >= 0) {
> > > +			if (tmp < sizeof(iter->rhdr) + len) {
> > > +				ret = -ENOBUFS;
> > > +				goto return_buf;
> > > +			}
> > > +
> > > +			iter->rhdr.len = cpu_to_vhost16(vq, len);
> > > +			tmp = len + sizeof(iter->rhdr);
> > > +		}
> > > +
> > > +		/* len is now the size of the payload */
> > > +		iov_iter_init(&iter->iov_iter, READ, vq->iov, cnt, tmp);
> > > +
> > > +		/* Write the RPMSG header with endpoint addresses */
> > > +		tmp = copy_to_iter(&iter->rhdr, sizeof(iter->rhdr), &iter->iov_iter);
> > > +		if (tmp != sizeof(iter->rhdr)) {
> > > +			ret = -EIO;
> > > +			goto return_buf;
> > > +		}
> > > +
> > > +		/* Let the endpoint write the payload */
> > > +		if (iter->ept && iter->ept->write) {
> > > +			ret = iter->ept->write(vr, iter);
> > > +			if (ret < 0)
> > > +				goto return_buf;
> > > +		}
> > > +
> > > +		break;
> > > +	}
> > > +
> > > +	return 0;
> > > +
> > > +return_buf:
> > > +	vhost_add_used(vq, iter->head, 0);
> > > +unlock:
> > > +	vhost_enable_notify(&vr->dev, vq);
> > > +	mutex_unlock(&vq->mutex);
> > > +
> > > +	return ret;
> > > +}
> > > +EXPORT_SYMBOL_GPL(vhost_rpmsg_start_lock);
> > > +
> > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > +			void *data, size_t size)
> > > +{
> > > +	/*
> > > +	 * We could check for excess data, but copy_{to,from}_iter() don't do
> > > +	 * that either
> > > +	 */
> > > +	if (iter->vq == vr->vq + VIRTIO_RPMSG_RESPONSE)
> > > +		return copy_to_iter(data, size, &iter->iov_iter);
> > > +
> > > +	return copy_from_iter(data, size, &iter->iov_iter);
> > > +}
> > > +EXPORT_SYMBOL_GPL(vhost_rpmsg_copy);
> > > +
> > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > +			      struct vhost_rpmsg_iter *iter)
> > > +	__releases(vq->mutex)
> > > +{
> > > +	if (iter->head >= 0)
> > > +		vhost_add_used_and_signal(iter->vq->dev, iter->vq, iter->head,
> > > +					  vhost16_to_cpu(iter->vq, iter->rhdr.len) +
> > > +					  sizeof(iter->rhdr));
> > > +
> > > +	vhost_enable_notify(&vr->dev, iter->vq);
> > > +	mutex_unlock(&iter->vq->mutex);
> > > +
> > > +	return iter->head;
> > > +}
> > > +EXPORT_SYMBOL_GPL(vhost_rpmsg_finish_unlock);
> > > +
> > > +/*
> > > + * Return false to terminate the external loop only if we fail to obtain either
> > > + * a request or a response buffer
> > > + */
> > > +static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
> > > +				    struct vhost_virtqueue *vq)
> > > +{
> > > +	struct vhost_rpmsg_iter iter;
> > > +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
> > > +	if (!ret)
> > > +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > > +	if (ret < 0) {
> > > +		if (ret != -EAGAIN)
> > > +			vq_err(vq, "%s(): RPMSG processing failed %d\n",
> > > +			       __func__, ret);
> > > +		return false;
> > > +	}
> > > +
> > > +	if (!iter.ept->write)
> > > +		return true;
> > > +
> > > +	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
> > > +	if (!ret)
> > > +		ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > > +	if (ret < 0) {
> > > +		vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
> > > +		return false;
> > > +	}
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static void handle_rpmsg_req_kick(struct vhost_work *work)
> > > +{
> > > +	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
> > > +						  poll.work);
> > > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > > +
> > > +	while (handle_rpmsg_req_single(vr, vq))
> > > +		;
> > > +}
> > > +
> > > +/*
> > > + * initialise two virtqueues with an array of endpoints,
> > > + * request and response callbacks
> > > + */
> > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > +		      unsigned int n_epts)
> > > +{
> > > +	unsigned int i;
> > > +
> > > +	for (i = 0; i < ARRAY_SIZE(vr->vq); i++)
> > > +		vr->vq_p[i] = &vr->vq[i];
> > > +
> > > +	/* vq[0]: host -> guest, vq[1]: host <- guest */
> > > +	vr->vq[VIRTIO_RPMSG_REQUEST].handle_kick = handle_rpmsg_req_kick;
> > > +	vr->vq[VIRTIO_RPMSG_RESPONSE].handle_kick = NULL;
> > > +
> > > +	vr->ept = ept;
> > > +	vr->n_epts = n_epts;
> > > +
> > > +	vhost_dev_init(&vr->dev, vr->vq_p, VIRTIO_RPMSG_NUM_OF_VQS,
> > > +		       UIO_MAXIOV, 0, 0, true, NULL);
> > > +}
> > > +EXPORT_SYMBOL_GPL(vhost_rpmsg_init);
> > > +
> > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr)
> > > +{
> > > +	if (vhost_dev_has_owner(&vr->dev))
> > > +		vhost_poll_flush(&vr->vq[VIRTIO_RPMSG_REQUEST].poll);
> > > +
> > > +	vhost_dev_cleanup(&vr->dev);
> > > +}
> > > +EXPORT_SYMBOL_GPL(vhost_rpmsg_destroy);
> > > +
> > > +/* send namespace */
> > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name, unsigned int src)
> > > +{
> > > +	struct vhost_virtqueue *vq = &vr->vq[VIRTIO_RPMSG_RESPONSE];
> > > +	struct vhost_rpmsg_iter iter = {
> > > +		.rhdr = {
> > > +			.src = 0,
> > > +			.dst = cpu_to_vhost32(vq, RPMSG_NS_ADDR),
> > > +			.flags = cpu_to_vhost16(vq, RPMSG_NS_CREATE), /* rpmsg_recv_single() */
> > 
> > Where is the flag used in rpmsg_recv_single()?  It is used for the name space
> > message (as you have below) but not in the header when doing a name space
> > announcement.
> 
> I think you're right, it isn't needed here, will remove.
> 
> > > +		},
> > > +	};
> > > +	struct rpmsg_ns_msg ns = {
> > > +		.addr = cpu_to_vhost32(vq, src),
> > > +		.flags = cpu_to_vhost32(vq, RPMSG_NS_CREATE), /* for rpmsg_ns_cb() */
> > > +	};
> > > +	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, sizeof(ns));
> > > +
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	strlcpy(ns.name, name, sizeof(ns.name));
> > > +
> > > +	ret = vhost_rpmsg_copy(vr, &iter, &ns, sizeof(ns));
> > > +	if (ret != sizeof(ns))
> > > +		vq_err(iter.vq, "%s(): added %d instead of %zu bytes\n",
> > > +		       __func__, ret, sizeof(ns));
> > > +
> > > +	ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > > +	if (ret < 0)
> > > +		vq_err(iter.vq, "%s(): namespace announcement failed: %d\n",
> > > +		       __func__, ret);
> > > +
> > > +	return ret;
> > > +}
> > > +EXPORT_SYMBOL_GPL(vhost_rpmsg_ns_announce);
> > > +
> > > +MODULE_LICENSE("GPL v2");
> > > +MODULE_AUTHOR("Intel, Inc.");
> > > +MODULE_DESCRIPTION("Vhost RPMsg API");
> > > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > > new file mode 100644
> > > index 000000000000..30072cecb8a0
> > > --- /dev/null
> > > +++ b/drivers/vhost/vhost_rpmsg.h
> > > @@ -0,0 +1,74 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > + *
> > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > + */
> > > +
> > > +#ifndef VHOST_RPMSG_H
> > > +#define VHOST_RPMSG_H
> > > +
> > > +#include <linux/uio.h>
> > > +#include <linux/virtio_rpmsg.h>
> > > +
> > > +#include "vhost.h"
> > > +
> > > +/* RPMsg uses two VirtQueues: one for each direction */
> > > +enum {
> > > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > > +	/* Keep last */
> > > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > > +};
> > > +
> > > +struct vhost_rpmsg_ept;
> > > +
> > > +struct vhost_rpmsg_iter {
> > > +	struct iov_iter iov_iter;
> > > +	struct rpmsg_hdr rhdr;
> > > +	struct vhost_virtqueue *vq;
> > > +	const struct vhost_rpmsg_ept *ept;
> > > +	int head;
> > > +	void *priv;
> > 
> > I don't see @priv being used anywhere.
> 
> That's logical: this is a field private to the API users, so the core shouldn't 
> use it :-) It's used in later patches.

That is where structure documentation is useful.  I will let Michael decide what
he wants to do.

Thanks for the feedback,
Mathieu

> 
> > 
> > > +};
> > > +
> > > +struct vhost_rpmsg {
> > > +	struct vhost_dev dev;
> > > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > > +	const struct vhost_rpmsg_ept *ept;
> > > +	unsigned int n_epts;
> > > +};
> > > +
> > > +struct vhost_rpmsg_ept {
> > > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > +	int addr;
> > > +};
> > > +
> > > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > > +{
> > > +	return iter->rhdr.len;
> > > +}
> > 
> > Again, I don't see where this is used.
> 
> This is exported API, it's used by users.
>
> > > +
> > > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > > +	.rhdr = {						\
> > > +			.src = cpu_to_vhost32(_vq, _src),	\
> > > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > > +		},						\
> > > +	}
> > 
> > Same.
> 
> ditto.
> 
> Thanks
> Guennadi
> 
> > Thanks,
> > Mathieu
> > 
> > > +
> > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > +		      unsigned int n_epts);
> > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > > +			    unsigned int src);
> > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > > +			   struct vhost_rpmsg_iter *iter,
> > > +			   unsigned int qid, ssize_t len);
> > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > +			void *data, size_t size);
> > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > +			      struct vhost_rpmsg_iter *iter);
> > > +
> > > +#endif
> > > -- 
> > > 2.28.0
> > > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-09-10 17:22       ` Mathieu Poirier
@ 2020-09-11  7:46         ` Guennadi Liakhovetski
  2020-09-11 17:33           ` Mathieu Poirier
  0 siblings, 1 reply; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-11  7:46 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: Ohad Ben-Cohen, kvm, Michael S. Tsirkin, Vincent Whitchurch,
	linux-remoteproc, Pierre-Louis Bossart, virtualization,
	Liam Girdwood, Bjorn Andersson, sound-open-firmware

Hi Mathieu,

On Thu, Sep 10, 2020 at 11:22:11AM -0600, Mathieu Poirier wrote:
> Good morning Guennadi,
> 
> On Thu, Sep 10, 2020 at 10:38:54AM +0200, Guennadi Liakhovetski wrote:
> > Hi Mathieu,
> > 
> > On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> > > Good afternoon,
> > > 
> > > On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > > protocol, but currently there is only support for VirtIO clients and
> > > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > > server implementation.
> > > 
> > > This changelog is very confusing...  At this time the name service in the
> > > remoteproc space runs as a server on the application processor.  But from the
> > > above the remoteproc usecase seems to be considered to be a client
> > > configuration.
> > 
> > I agree that this isn't very obvious. But I think it is common to call the 
> > host "a server" and guests "clients." E.g. in vhost.c in the top-of-the-file 
> > comment:
> 
> Ok - that part we agree on.
> 
> > 
> >  * Generic code for virtio server in host kernel.
> > 
> > I think the generic concept behind this notation is that as guests boot, 
> > they send their requests to the host, e.g. VirtIO device drivers on guests 
> > send requests over VirtQueues to VirtIO servers on the host, which can run 
> > either in user-space or in kernel-space. And I think you can follow that 
> 
> I can see that process taking place.  After all virtIO devices on guests are
> only stubs that need host support for access to HW.
> 
> > logic in the case of devices or remote processors too: it's the main CPU(s) 
> > that boot(s) and start talking to devices and remote processors, so in that 
> > sense devices are servers and the CPUs are their clients.
> 
> In the remote processor case, the remoteproc core (application processor) sets up
> the name service but does not initiate the communication with a remote
> processor.  It simply waits there for a name space request to come in from the
> remote processor.

Hm, I don't see that in the two examples that I looked at: mtk and virtio. In both 
cases the announcement seems to come directly from the application processor, 
maybe after some initialisation.

> > And yes, the name-space announcement use-case seems confusing to me too - it 
> > reverts the relationship in a way: once a guest has booted and established 
> > connections to any rpmsg "devices," those send their namespace announcements 
> > back. But I think this can be regarded as server identification: you connect 
> > to a server and it replies with its identification and capabilities.
> 
> Based on the above can I assume vhost_rpmsg_ns_announce() is sent from the
> guest?

No, it's "vhost_..." so it's running on the host. The host (the server, an 
analogue of the application processor, IIUC) sends NS announcements to guests.

> I saw your V7, something I will look into.  In the mean time I need to bring
> your attention to this set [1] from Arnaud.  Please have a look as it will
> impact your work.
> 
> https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335

Yes, I've had a look at that series, thanks for forwarding it to me. TBH I 
don't quite understand some choices there, e.g. creating a separate driver and 
then having to register devices just for the namespace announcement. I don't 
think creating virtual devices is taken lightly in Linux. But either way I 
don't think our series conflict much, and I do hope that I can merge my 
series first; he'd just have to switch to using the header that I'm adding. 
Hardly many changes otherwise.

> > > And I don't see a server implementation per se...  It is more like a client
> > > implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> > > messages from host to guest.
> > > 
> > > Perhaps it is my lack of familiarity with vhost terminology.
> > > 
> > > > 
> > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > ---
> > > >  drivers/vhost/Kconfig       |   7 +
> > > >  drivers/vhost/Makefile      |   3 +
> > > >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> > > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > > >  4 files changed, 457 insertions(+)
> > > >  create mode 100644 drivers/vhost/rpmsg.c
> > > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > > 
> > > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > > index 587fbae06182..046b948fc411 100644
> > > > --- a/drivers/vhost/Kconfig
> > > > +++ b/drivers/vhost/Kconfig
> > > > @@ -38,6 +38,13 @@ config VHOST_NET
> > > >  	  To compile this driver as a module, choose M here: the module will
> > > >  	  be called vhost_net.
> > > >  
> > > > +config VHOST_RPMSG
> > > > +	tristate
> > > > +	select VHOST
> > > > +	help
> > > > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > > +	  drivers, using the RPMsg over VirtIO protocol.
> > > 
> > > I had to assume vhost drivers are running on the host and virtIO drivers on the
> > > guests.  This may be common knowledge for people familiar with vhosts but
> > > certainly obscure for commoners.  Having a help section that is clear on what is
> > > happening would remove any ambiguity.
> > 
> > It is the terminology, yes, but you're right, the wording isn't very clear, will 
> > improve.
> > 
> > > > +
> > > >  config VHOST_SCSI
> > > >  	tristate "VHOST_SCSI TCM fabric driver"
> > > >  	depends on TARGET_CORE && EVENTFD
> > > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > > index f3e1897cce85..9cf459d59f97 100644
> > > > --- a/drivers/vhost/Makefile
> > > > +++ b/drivers/vhost/Makefile
> > > > @@ -2,6 +2,9 @@
> > > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > > >  vhost_net-y := net.o
> > > >  
> > > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > > +vhost_rpmsg-y := rpmsg.o
> > > > +
> > > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > > >  vhost_scsi-y := scsi.o
> > > >  
> > > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > > new file mode 100644
> > > > index 000000000000..c26d7a4afc6d
> > > > --- /dev/null
> > > > +++ b/drivers/vhost/rpmsg.c
> > > > @@ -0,0 +1,373 @@
> > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > +/*
> > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > + *
> > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > + *
> > > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > > 
> > > Again, very confusing.  The changelog refers to a server implementation but to
> > > me this refers to a client implementation, especially if rpmsg_recv_single() and
> > > rpmsg_ns_cb() are used on the other side of the pipe.  
> > 
> > I think the above is correct. "Vhost" indicates that this is running on the host. 
> > "match the guest side" means that you can use this API on the host and it is 
> > designed to work together with the RPMsg VirtIO drivers running on guests, as 
> > implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
> > description than "to match"?
> 
> Let's forget about this part now and concentrate on the above conversation.
> Things will start to make sense at one point.

I've improved that description a bit, it was indeed rather clumsy.

[snip]

> > > > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > > > new file mode 100644
> > > > index 000000000000..30072cecb8a0
> > > > --- /dev/null
> > > > +++ b/drivers/vhost/vhost_rpmsg.h
> > > > @@ -0,0 +1,74 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > + *
> > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > + */
> > > > +
> > > > +#ifndef VHOST_RPMSG_H
> > > > +#define VHOST_RPMSG_H
> > > > +
> > > > +#include <linux/uio.h>
> > > > +#include <linux/virtio_rpmsg.h>
> > > > +
> > > > +#include "vhost.h"
> > > > +
> > > > +/* RPMsg uses two VirtQueues: one for each direction */
> > > > +enum {
> > > > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > > > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > > > +	/* Keep last */
> > > > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > > > +};
> > > > +
> > > > +struct vhost_rpmsg_ept;
> > > > +
> > > > +struct vhost_rpmsg_iter {
> > > > +	struct iov_iter iov_iter;
> > > > +	struct rpmsg_hdr rhdr;
> > > > +	struct vhost_virtqueue *vq;
> > > > +	const struct vhost_rpmsg_ept *ept;
> > > > +	int head;
> > > > +	void *priv;
> > > 
> > > I don't see @priv being used anywhere.
> > 
> > That's logical: this is a field private to the API users, so the core shouldn't 
> > use it :-) It's used in later patches.
> 
> That is where structure documentation is useful.  I will let Michael decide what
> he wants to do.

I can add some kerneldoc documentation there, no problem.
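
For illustration, such kerneldoc might look roughly like this; the field descriptions are inferred from this thread and are assumptions to be confirmed by the author:

```c
/**
 * struct vhost_rpmsg_iter - accessor for a single vhost RPMsg message
 * @iov_iter:	iterator over the remaining virtqueue buffer space
 * @rhdr:	RPMsg header of the message being processed
 * @vq:		virtual queue (request or response) the message belongs to
 * @ept:	endpoint, looked up by the destination address
 * @head:	descriptor head, as returned by vhost_get_vq_desc()
 * @priv:	private field for API users; not touched by the core
 */
```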

> Thanks for the feedback,

Thanks for your reviews! I'd very much like to close all the still open points 
and merge the series ASAP.

Thanks
Guennadi

> Mathieu
> 
> > 
> > > 
> > > > +};
> > > > +
> > > > +struct vhost_rpmsg {
> > > > +	struct vhost_dev dev;
> > > > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > +	const struct vhost_rpmsg_ept *ept;
> > > > +	unsigned int n_epts;
> > > > +};
> > > > +
> > > > +struct vhost_rpmsg_ept {
> > > > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > +	int addr;
> > > > +};
> > > > +
> > > > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > > > +{
> > > > +	return iter->rhdr.len;
> > > > +}
> > > 
> > > Again, I don't see where this is used.
> > 
> > This is exported API, it's used by users.
> >
> > > > +
> > > > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > > > +	.rhdr = {						\
> > > > +			.src = cpu_to_vhost32(_vq, _src),	\
> > > > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > > > +		},						\
> > > > +	}
> > > 
> > > Same.
> > 
> > ditto.
> > 
> > Thanks
> > Guennadi
> > 
> > > Thanks,
> > > Mathieu
> > > 
> > > > +
> > > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > > +		      unsigned int n_epts);
> > > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > > > +			    unsigned int src);
> > > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > > > +			   struct vhost_rpmsg_iter *iter,
> > > > +			   unsigned int qid, ssize_t len);
> > > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > > +			void *data, size_t size);
> > > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > > +			      struct vhost_rpmsg_iter *iter);
> > > > +
> > > > +#endif
> > > > -- 
> > > > 2.28.0
> > > > 
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
  2020-09-10 16:46       ` Mathieu Poirier
@ 2020-09-11  7:59           ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-11  7:59 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Thu, Sep 10, 2020 at 10:46:43AM -0600, Mathieu Poirier wrote:
> On Thu, Sep 10, 2020 at 09:15:13AM +0200, Guennadi Liakhovetski wrote:
> > Hi Mathieu,
> > 
> > On Wed, Sep 09, 2020 at 04:42:14PM -0600, Mathieu Poirier wrote:
> > > On Wed, Aug 26, 2020 at 07:46:33PM +0200, Guennadi Liakhovetski wrote:
> > > > VHOST_VSOCK_SET_RUNNING is used by the vhost vsock driver to perform
> > > > crucial VirtQueue initialisation, like assigning .private fields and
> > > > calling vhost_vq_init_access(), and to clean up. However, this ioctl is
> > > > actually extremely useful for any vhost driver that doesn't have a
> > > > side channel to inform it of a status change, e.g. upon a guest
> > > > reboot. This patch makes that ioctl generic, while preserving its
> > > > numeric value and also keeping the original alias.
> > > > 
> > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > ---
> > > >  include/uapi/linux/vhost.h | 4 +++-
> > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> > > > index 75232185324a..11a4948b6216 100644
> > > > --- a/include/uapi/linux/vhost.h
> > > > +++ b/include/uapi/linux/vhost.h
> > > > @@ -97,6 +97,8 @@
> > > >  #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
> > > >  #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
> > > >  
> > > > +#define VHOST_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
> > > > +
> > > 
> > > I don't see it used in the next patches and as such should be part of another
> > > series.
> > 
> > It isn't used in the next patches; it is used in this patch - see below.
> >
> 
> Right, but why is this part of this set?  What does it bring?  It should be part
> of a patchset where "VHOST_SET_RUNNING" is used.

Ok, I can remove this patch from this series and make it a part of the series 
containing [1] "vhost: add an SOF Audio DSP driver".

Thanks
Guennadi

[1] https://www.spinics.net/lists/linux-virtualization/msg43309.html

> > > >  /* VHOST_NET specific defines */
> > > >  
> > > >  /* Attach virtio net ring to a raw socket, or tap device.
> > > > @@ -118,7 +120,7 @@
> > > >  /* VHOST_VSOCK specific defines */
> > > >  
> > > >  #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
> > > > -#define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
> > > > +#define VHOST_VSOCK_SET_RUNNING		VHOST_SET_RUNNING
> > > >  
> > > >  /* VHOST_VDPA specific defines */
> > > >  
> > > > -- 
> > > > 2.28.0
> > > > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl
@ 2020-09-11  7:59           ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-11  7:59 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: Ohad Ben-Cohen, kvm, Michael S. Tsirkin, Vincent Whitchurch,
	linux-remoteproc, Pierre-Louis Bossart, virtualization,
	Liam Girdwood, Bjorn Andersson, sound-open-firmware

On Thu, Sep 10, 2020 at 10:46:43AM -0600, Mathieu Poirier wrote:
> On Thu, Sep 10, 2020 at 09:15:13AM +0200, Guennadi Liakhovetski wrote:
> > Hi Mathieu,
> > 
> > On Wed, Sep 09, 2020 at 04:42:14PM -0600, Mathieu Poirier wrote:
> > > On Wed, Aug 26, 2020 at 07:46:33PM +0200, Guennadi Liakhovetski wrote:
> > > > VHOST_VSOCK_SET_RUNNING is used by the vhost vsock driver to perform
> > > > crucial VirtQueue initialisation, like assigning .private fields and
> > > > calling vhost_vq_init_access(), and clean up. However, this ioctl is
> > > > actually extremely useful for any vhost driver, that doesn't have a
> > > > side channel to inform it of a status change, e.g. upon a guest
> > > > reboot. This patch makes that ioctl generic, while preserving its
> > > > numeric value and also keeping the original alias.
> > > > 
> > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > ---
> > > >  include/uapi/linux/vhost.h | 4 +++-
> > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
> > > > index 75232185324a..11a4948b6216 100644
> > > > --- a/include/uapi/linux/vhost.h
> > > > +++ b/include/uapi/linux/vhost.h
> > > > @@ -97,6 +97,8 @@
> > > >  #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
> > > >  #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
> > > >  
> > > > +#define VHOST_SET_RUNNING _IOW(VHOST_VIRTIO, 0x61, int)
> > > > +
> > > 
> > > I don't see it used in the next patches and as such should be part of another
> > > series.
> > 
> > It isn't used in the next patches, it is used in this patch - see below.
> >
> 
> Right, but why is this part of this set?  What does it bring?  It should be part
> of a patchset where "VHOST_SET_RUNNING" is used.

Ok, I can remove this patch from this series and make it part of the series
containing [1] "vhost: add an SOF Audio DSP driver".

Thanks
Guennadi

[1] https://www.spinics.net/lists/linux-virtualization/msg43309.html

> > > >  /* VHOST_NET specific defines */
> > > >  
> > > >  /* Attach virtio net ring to a raw socket, or tap device.
> > > > @@ -118,7 +120,7 @@
> > > >  /* VHOST_VSOCK specific defines */
> > > >  
> > > >  #define VHOST_VSOCK_SET_GUEST_CID	_IOW(VHOST_VIRTIO, 0x60, __u64)
> > > > -#define VHOST_VSOCK_SET_RUNNING		_IOW(VHOST_VIRTIO, 0x61, int)
> > > > +#define VHOST_VSOCK_SET_RUNNING		VHOST_SET_RUNNING
> > > >  
> > > >  /* VHOST_VDPA specific defines */
> > > >  
> > > > -- 
> > > > 2.28.0
> > > > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-09-11  7:46         ` Guennadi Liakhovetski
@ 2020-09-11 17:33           ` Mathieu Poirier
  2020-09-15 12:48               ` Guennadi Liakhovetski
  0 siblings, 1 reply; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-11 17:33 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Fri, Sep 11, 2020 at 09:46:56AM +0200, Guennadi Liakhovetski wrote:
> Hi Mathieu,
> 
> On Thu, Sep 10, 2020 at 11:22:11AM -0600, Mathieu Poirier wrote:
> > Good morning Guennadi,
> > 
> > On Thu, Sep 10, 2020 at 10:38:54AM +0200, Guennadi Liakhovetski wrote:
> > > Hi Mathieu,
> > > 
> > > On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> > > > Good afternoon,
> > > > 
> > > > On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > > > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > > > protocol, but currently there is only support for VirtIO clients and
> > > > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > > > server implementation.
> > > > 
> > > > This changelog is very confusing...  At this time the name service in the
> > > > remoteproc space runs as a server on the application processor.  But from the
> > > > above the remoteproc usecase seems to be considered to be a client
> > > > configuration.
> > > 
> > > I agree that this isn't very obvious. But I think it is common to call the 
> > > host "a server" and guests "clients." E.g. in vhost.c in the top-of-thefile 
> > > comment:
> > 
> > Ok - that part we agree on.
> > 
> > > 
> > >  * Generic code for virtio server in host kernel.
> > > 
> > > I think the generic concept behind this notation is that, as guests boot, 
> > > they send their requests to the host, e.g. VirtIO device drivers on guests 
> > > send requests over VirtQueues to VirtIO servers on the host, which can run 
> > > either in the user- or in the kernel-space. And I think you can follow that 
> > 
> > I can see that process taking place.  After all virtIO devices on guests are
> > only stubs that need host support for access to HW.
> > 
> > > logic in case of devices or remote processors too: it's the main CPU(s) 
> > > that boot(s) and start talking to devices and remote processors, so in that 
> > > sense devices are servers and the CPUs are their clients.
> > 
> > In the remote processor case, the remoteproc core (application processor) sets up
> > the name service but does not initiate the communication with a remote
> > processor.  It simply waits there for a name space request to come in from the
> > remote processor.
> 
> Hm, I don't see that in the two examples that I looked at: mtk and virtio. In both 
> cases the announcement seems to be directly coming from the application processor 
> maybe after some initialisation.
>
 
Can you expand on that part - perhaps point me to the (virtio) code you are
referring to? 

> > > And yes, the name-space announcement use-case seems confusing to me too - it 
> > > reverts the relationship in a way: once a guest has booted and established 
> > > connections to any rpmsg "devices," those send their namespace announcements 
> > > back. But I think this can be regarded as server identification: you connect 
> > > to a server and it replies with its identification and capabilities.
> > 
> > Based on the above can I assume vhost_rpmsg_ns_announce() is sent from the
> > guest?
> 
> No, it's "vhost_..." so it's running on the host.

Ok, that's better and confirms the usage of the VIRTIO_RPMSG_RESPONSE queue.
When reading your explanation above, I thought the term "those" referred to the
guest.  In light of your explanation I now understand that "those" referred to
> the rpmsg devices on the host.

In the above paragraph you write:

... "once a guest has booted and established connections to any rpmsg "devices",
those send their namespace announcements back".  

I'd like to unpack a few things about this sentence:

1) In this context, how is a "connection" established between a guest and a host?

> 2) How does the guest know about the rpmsg devices it has made a connection to?

3) Why is a namespace announcement needed at all when guests are aware of the
rpmsg devices instantiated on the host, and have already connected to them?
  

> The host (the server, an 
> analogue of the application processor, IIUC) sends NS announcements to guests.

I think we have just found the source of the confusion - in the remoteproc world
the application processor receives name announcements, it doesn't send them.

> 
> > I saw your V7, something I will look into.  In the mean time I need to bring
> > your attention to this set [1] from Arnaud.  Please have a look as it will
> > impact your work.
> > 
> > https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> 
> Yes, I've had a look at that series, thanks for forwarding it to me. TBH I 
> don't quite understand some choices there, e.g. creating a separate driver and 
> then having to register devices just for the namespace announcement. I don't 
> think creating virtual devices is easily accepted in Linux. But either way I 
> don't think our series conflict a lot, but I do hope that I can merge my 
> series first, he'd just have to switch to using the header, that I'm adding. 
> Hardly too many changes otherwise.

It is not the conflicts between the series that I wanted to highlight but the
fact that name service is in the process of becoming a driver on its own, and
with no dependence on the transport mechanism.

> 
> > > > And I don't see a server implementation per se...  It is more like a client
> > > > implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> > > > messages from host to guest.
> > > > 
> > > > Perhaps it is my lack of familiarity with vhost terminology.
> > > > 
> > > > > 
> > > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > ---
> > > > >  drivers/vhost/Kconfig       |   7 +
> > > > >  drivers/vhost/Makefile      |   3 +
> > > > >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> > > > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > > > >  4 files changed, 457 insertions(+)
> > > > >  create mode 100644 drivers/vhost/rpmsg.c
> > > > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > > > 
> > > > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > > > index 587fbae06182..046b948fc411 100644
> > > > > --- a/drivers/vhost/Kconfig
> > > > > +++ b/drivers/vhost/Kconfig
> > > > > @@ -38,6 +38,13 @@ config VHOST_NET
> > > > >  	  To compile this driver as a module, choose M here: the module will
> > > > >  	  be called vhost_net.
> > > > >  
> > > > > +config VHOST_RPMSG
> > > > > +	tristate
> > > > > +	select VHOST
> > > > > +	help
> > > > > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > > > +	  drivers, using the RPMsg over VirtIO protocol.
> > > > 
> > > > I had to assume vhost drivers are running on the host and virtIO drivers on the
> > > > guests.  This may be common knowledge for people familiar with vhosts but
> > > > certainly obscure for commoners. Having a help section that is clear on what is
> > > > happening would remove any ambiguity.
> > > 
> > > It is the terminology, yes, but you're right, the wording isn't very clear, will 
> > > improve.
> > > 
> > > > > +
> > > > >  config VHOST_SCSI
> > > > >  	tristate "VHOST_SCSI TCM fabric driver"
> > > > >  	depends on TARGET_CORE && EVENTFD
> > > > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > > > index f3e1897cce85..9cf459d59f97 100644
> > > > > --- a/drivers/vhost/Makefile
> > > > > +++ b/drivers/vhost/Makefile
> > > > > @@ -2,6 +2,9 @@
> > > > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > > > >  vhost_net-y := net.o
> > > > >  
> > > > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > > > +vhost_rpmsg-y := rpmsg.o
> > > > > +
> > > > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > > > >  vhost_scsi-y := scsi.o
> > > > >  
> > > > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > > > new file mode 100644
> > > > > index 000000000000..c26d7a4afc6d
> > > > > --- /dev/null
> > > > > +++ b/drivers/vhost/rpmsg.c
> > > > > @@ -0,0 +1,373 @@
> > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > +/*
> > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > + *
> > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > + *
> > > > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > > > 
> > > > Again, very confusing.  The changelog refers to a server implementation but to
> > > > me this refers to a client implementation, especially if rpmsg_recv_single() and
> > > > rpmsg_ns_cb() are used on the other side of the pipe.  
> > > 
> > > I think the above is correct. "Vhost" indicates that this is running on the host. 
> > > "match the guest side" means, that you can use this API on the host and it is 
> > > designed to work together with the RPMsg VirtIO drivers running on guests, as 
> > > implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
> > > description than "to match?"
> > 
> > Let's forget about this part now and concentrate on the above conversation.
> > Things will start to make sense at one point.
> 
> I've improved that description a bit, it was indeed rather clumsy.

Much appreciated - I'll take a look at V7 next week.

> 
> [snip]
> 
> > > > > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > > > > new file mode 100644
> > > > > index 000000000000..30072cecb8a0
> > > > > --- /dev/null
> > > > > +++ b/drivers/vhost/vhost_rpmsg.h
> > > > > @@ -0,0 +1,74 @@
> > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > +/*
> > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > + *
> > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > + */
> > > > > +
> > > > > +#ifndef VHOST_RPMSG_H
> > > > > +#define VHOST_RPMSG_H
> > > > > +
> > > > > +#include <linux/uio.h>
> > > > > +#include <linux/virtio_rpmsg.h>
> > > > > +
> > > > > +#include "vhost.h"
> > > > > +
> > > > > +/* RPMsg uses two VirtQueues: one for each direction */
> > > > > +enum {
> > > > > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > > > > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > > > > +	/* Keep last */
> > > > > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > > > > +};
> > > > > +
> > > > > +struct vhost_rpmsg_ept;
> > > > > +
> > > > > +struct vhost_rpmsg_iter {
> > > > > +	struct iov_iter iov_iter;
> > > > > +	struct rpmsg_hdr rhdr;
> > > > > +	struct vhost_virtqueue *vq;
> > > > > +	const struct vhost_rpmsg_ept *ept;
> > > > > +	int head;
> > > > > +	void *priv;
> > > > 
> > > > I don't see @priv being used anywhere.
> > > 
> > > That's logical: this is a field, private to the API users, so the core shouldn't 
> > > use it :-) It's used in later patches.
> > 
> > That is where structure documentation is useful.  I will let Michael decide what
> > he wants to do.
> 
> I can add some kerneldoc documentation there, no problem.
> 
> > Thanks for the feedback,
> 
> Thanks for your reviews! I'd very much like to close all the still open points 
> and merge the series ASAP.
> 
> Thanks
> Guennadi
> 
> > Mathieu
> > 
> > > 
> > > > 
> > > > > +};
> > > > > +
> > > > > +struct vhost_rpmsg {
> > > > > +	struct vhost_dev dev;
> > > > > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > +	const struct vhost_rpmsg_ept *ept;
> > > > > +	unsigned int n_epts;
> > > > > +};
> > > > > +
> > > > > +struct vhost_rpmsg_ept {
> > > > > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > +	int addr;
> > > > > +};
> > > > > +
> > > > > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > > > > +{
> > > > > +	return iter->rhdr.len;
> > > > > +}
> > > > 
> > > > Again, I don't see where this is used.
> > > 
> > > This is exported API, it's used by users.
> > >
> > > > > +
> > > > > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > > > > +	.rhdr = {						\
> > > > > +			.src = cpu_to_vhost32(_vq, _src),	\
> > > > > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > > > > +		},						\
> > > > > +	}
> > > > 
> > > > Same.
> > > 
> > > ditto.
> > > 
> > > Thanks
> > > Guennadi
> > > 
> > > > Thanks,
> > > > Mathieu
> > > > 
> > > > > +
> > > > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > > > +		      unsigned int n_epts);
> > > > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > > > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > > > > +			    unsigned int src);
> > > > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > > > > +			   struct vhost_rpmsg_iter *iter,
> > > > > +			   unsigned int qid, ssize_t len);
> > > > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > > > +			void *data, size_t size);
> > > > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > > > +			      struct vhost_rpmsg_iter *iter);
> > > > > +
> > > > > +#endif
> > > > > -- 
> > > > > 2.28.0
> > > > > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-09-11 17:33           ` Mathieu Poirier
@ 2020-09-15 12:48               ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-15 12:48 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Fri, Sep 11, 2020 at 11:33:13AM -0600, Mathieu Poirier wrote:
> On Fri, Sep 11, 2020 at 09:46:56AM +0200, Guennadi Liakhovetski wrote:
> > Hi Mathieu,
> > 
> > On Thu, Sep 10, 2020 at 11:22:11AM -0600, Mathieu Poirier wrote:
> > > Good morning Guennadi,
> > > 
> > > On Thu, Sep 10, 2020 at 10:38:54AM +0200, Guennadi Liakhovetski wrote:
> > > > Hi Mathieu,
> > > > 
> > > > On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> > > > > Good afternoon,
> > > > > 
> > > > > On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > > > > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > > > > protocol, but currently there is only support for VirtIO clients and
> > > > > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > > > > server implementation.
> > > > > 
> > > > > This changelog is very confusing...  At this time the name service in the
> > > > > remoteproc space runs as a server on the application processor.  But from the
> > > > > above the remoteproc usecase seems to be considered to be a client
> > > > > configuration.
> > > > 
> > > > I agree that this isn't very obvious. But I think it is common to call the 
> > > > host "a server" and guests "clients." E.g. in vhost.c in the top-of-thefile 
> > > > comment:
> > > 
> > > Ok - that part we agree on.
> > > 
> > > > 
> > > >  * Generic code for virtio server in host kernel.
> > > > 
> > > > I think the generic concept behind this notation is that, as guests boot, 
> > > > they send their requests to the host, e.g. VirtIO device drivers on guests 
> > > > send requests over VirtQueues to VirtIO servers on the host, which can run 
> > > > either in the user- or in the kernel-space. And I think you can follow that 
> > > 
> > > I can see that process taking place.  After all virtIO devices on guests are
> > > only stubs that need host support for access to HW.
> > > 
> > > > logic in case of devices or remote processors too: it's the main CPU(s) 
> > > > that boot(s) and start talking to devices and remote processors, so in that 
> > > > sense devices are servers and the CPUs are their clients.
> > > 
> > > In the remote processor case, the remoteproc core (application processor) sets up
> > > the name service but does not initiate the communication with a remote
> > > processor.  It simply waits there for a name space request to come in from the
> > > remote processor.
> > 
> > Hm, I don't see that in the two examples that I looked at: mtk and virtio. In both 
> > cases the announcement seems to be directly coming from the application processor 
> > maybe after some initialisation.
>  
> Can you expand on that part - perhaps point me to the (virtio) code you are
> referring to? 

Ok, we're both right: it goes both ways.

Here's my understanding of the control flow of virtio_rpmsg_bus.c:

1. The driver registers a VirtIO driver with the VIRTIO_ID_RPMSG ID.
2. When the driver is probed, if the server / the application processor supports the 
   VIRTIO_RPMSG_F_NS feature, the driver calls __rpmsg_create_ept() to create an 
   endpoint with rpmsg_ns_cb() as a callback.
3. When a namespace announcement arrives from the server, the callback is called, 
   which then registers a new channel (in the CREATE case). That in turn creates an
   rpmsg device.
4. If there's a matching rpmsg driver for that device, its .probe() method is 
   called, so it can then add its own rpmsg endpoints, to be used for its proper 
   communication.

There was indeed something in virtio_rpmsg_bus.c that I didn't fully understand: 
the virtio_rpmsg_announce_create() and virtio_rpmsg_announce_destroy() functions. Now I 
understand that, as the client registers its custom channels, it can also use those 
functions to send name service announcements to the application processor. 
This is also described in [1] as:

<quote>
Name Service sub-component (optional)

This subcomponent is a minimum implementation of the name service which is present 
in the Linux Kernel implementation of RPMsg. It allows the communicating node both 
to send announcements about "named" endpoint (in other words, channel) creation or 
deletion and to receive these announcement taking any user-defined action in an 
application callback. 
</quote>

Also in Documentation/rpmsg.txt

<quote>
...the remote processor announces the existence of a remote rpmsg service by 
sending a name service message (which contains the name and rpmsg addr of the 
remote service, see struct rpmsg_ns_msg).
</quote>

in [2]:

<quote>
In the current protocol, at startup, the master sends notification to remote to let 
it know that it can receive name service announcement.
</quote>

> > > > And yes, the name-space announcement use-case seems confusing to me too - it 
> > > > reverts the relationship in a way: once a guest has booted and established 
> > > > connections to any rpmsg "devices," those send their namespace announcements 
> > > > back. But I think this can be regarded as server identification: you connect 
> > > > to a server and it replies with its identification and capabilities.
> > > 
> > > Based on the above can I assume vhost_rpmsg_ns_announce() is sent from the
> > > guest?
> > 
> > No, it's "vhost_..." so it's running on the host.
> 
> Ok, that's better and confirms the usage of the VIRTIO_RPMSG_RESPONSE queue.
> When reading your explanation above, I thought the term "those" referred to the
> guest.  In light of your explanation I now understand that "those" referred to
> the rpmsg devices on the host.
> 
> In the above paragraph you write:
> 
> ... "once a guest has booted and established connections to any rpmsg "devices",
> those send their namespace announcements back".  
> 
> I'd like to unpack a few things about this sentence:
> 
> 1) In this context, how is a "connection" established between a guest and a host?

That's handled by the VirtIO / VirtQueues in the case of virtio_rpmsg_bus.c, but in 
general, as mentioned in [2]:

<quote>
However, master does not consider the fact that if the remote is ready to handle 
notification at this point in time.
</quote>

> 2) How does the guest know about the rpmsg devices it has made a connection to?

Again, that's the same as with all other VirtIO / KVM / Qemu devices: in a common 
Qemu case, it's the Qemu which emulates the hardware and registers those devices.

> 3) Why is a namespace announcement needed at all when guests are aware of the
> rpmsg devices instantiated on the host, and have already connected to them?

It is indeed optional according to the protocol, but as described above, without 
it the virtio_rpmsg_bus.c driver won't create rpmsg channels / devices, so no 
probing will take place.

> > The host (the server, an 
> > analogue of the application processor, IIUC) sends NS announcements to guests.
> 
> I think we have just found the source of the confusion - in the remoteproc world
> the application processor receives name announcements, it doesn't send them.

Interesting, well, we know now that both directions are possible, but I still 
don't know whether all configurations are valid: only down, only up, none or both.

Thanks
Guennadi

[1] https://nxpmicro.github.io/rpmsg-lite/
[2] https://github.com/OpenAMP/open-amp/wiki/RPMsg-Messaging-Protocol

> > > I saw your V7, something I will look into.  In the mean time I need to bring
> > > your attention to this set [1] from Arnaud.  Please have a look as it will
> > > impact your work.
> > > 
> > > https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> > 
> > Yes, I've had a look at that series, thanks for forwarding it to me. TBH I 
> > don't quite understand some choices there, e.g. creating a separate driver and 
> > then having to register devices just for the namespace announcement. I don't 
> > think creating virtual devices is easily accepted in Linux. But either way I 
> > don't think our series conflict a lot, but I do hope that I can merge my 
> > series first, he'd just have to switch to using the header, that I'm adding. 
> > Hardly too many changes otherwise.
> 
> It is not the conflicts between the series that I wanted to highlight but the
> fact that name service is in the process of becoming a driver on its own, and
> with no dependence on the transport mechanism.
> 
> > 
> > > > > And I don't see a server implementation per se...  It is more like a client
> > > > > implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> > > > > messages from host to guest.
> > > > > 
> > > > > Perhaps it is my lack of familiarity with vhost terminology.
> > > > > 
> > > > > > 
> > > > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > ---
> > > > > >  drivers/vhost/Kconfig       |   7 +
> > > > > >  drivers/vhost/Makefile      |   3 +
> > > > > >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> > > > > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > > > > >  4 files changed, 457 insertions(+)
> > > > > >  create mode 100644 drivers/vhost/rpmsg.c
> > > > > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > > > > 
> > > > > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > > > > index 587fbae06182..046b948fc411 100644
> > > > > > --- a/drivers/vhost/Kconfig
> > > > > > +++ b/drivers/vhost/Kconfig
> > > > > > @@ -38,6 +38,13 @@ config VHOST_NET
> > > > > >  	  To compile this driver as a module, choose M here: the module will
> > > > > >  	  be called vhost_net.
> > > > > >  
> > > > > > +config VHOST_RPMSG
> > > > > > +	tristate
> > > > > > +	select VHOST
> > > > > > +	help
> > > > > > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > > > > +	  drivers, using the RPMsg over VirtIO protocol.
> > > > > 
> > > > > I had to assume vhost drivers are running on the host and virtIO drivers on the
> > > > > guests.  This may be common knowledge for people familiar with vhosts but
> > > > > certainly obscure for commoners. Having a help section that is clear on what is
> > > > > happening would remove any ambiguity.
> > > > 
> > > > It is the terminology, yes, but you're right, the wording isn't very clear, will 
> > > > improve.
> > > > 
> > > > > > +
> > > > > >  config VHOST_SCSI
> > > > > >  	tristate "VHOST_SCSI TCM fabric driver"
> > > > > >  	depends on TARGET_CORE && EVENTFD
> > > > > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > > > > index f3e1897cce85..9cf459d59f97 100644
> > > > > > --- a/drivers/vhost/Makefile
> > > > > > +++ b/drivers/vhost/Makefile
> > > > > > @@ -2,6 +2,9 @@
> > > > > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > > > > >  vhost_net-y := net.o
> > > > > >  
> > > > > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > > > > +vhost_rpmsg-y := rpmsg.o
> > > > > > +
> > > > > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > > > > >  vhost_scsi-y := scsi.o
> > > > > >  
> > > > > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > > > > new file mode 100644
> > > > > > index 000000000000..c26d7a4afc6d
> > > > > > --- /dev/null
> > > > > > +++ b/drivers/vhost/rpmsg.c
> > > > > > @@ -0,0 +1,373 @@
> > > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > > +/*
> > > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > > + *
> > > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > + *
> > > > > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > > > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > > > > 
> > > > > Again, very confusing.  The changelog refers to a server implementation but to
> > > > > me this refers to a client implementation, especially if rpmsg_recv_single() and
> > > > > rpmsg_ns_cb() are used on the other side of the pipe.  
> > > > 
> > > > I think the above is correct. "Vhost" indicates that this is running on the host. 
> > > > "match the guest side" means, that you can use this API on the host and it is 
> > > > designed to work together with the RPMsg VirtIO drivers running on guests, as 
> > > > implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
> > > > description than "to match?"
> > > 
> > > Let's forget about this part now and concentrate on the above conversation.
> > > Things will start to make sense at one point.
> > 
> > I've improved that description a bit, it was indeed rather clumsy.
> 
> Much appreciated - I'll take a look at V7 next week.
> 
> > 
> > [snip]
> > 
> > > > > > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > > > > > new file mode 100644
> > > > > > index 000000000000..30072cecb8a0
> > > > > > --- /dev/null
> > > > > > +++ b/drivers/vhost/vhost_rpmsg.h
> > > > > > @@ -0,0 +1,74 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > +/*
> > > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > > + *
> > > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > + */
> > > > > > +
> > > > > > +#ifndef VHOST_RPMSG_H
> > > > > > +#define VHOST_RPMSG_H
> > > > > > +
> > > > > > +#include <linux/uio.h>
> > > > > > +#include <linux/virtio_rpmsg.h>
> > > > > > +
> > > > > > +#include "vhost.h"
> > > > > > +
> > > > > > +/* RPMsg uses two VirtQueues: one for each direction */
> > > > > > +enum {
> > > > > > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > > > > > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > > > > > +	/* Keep last */
> > > > > > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > > > > > +};
> > > > > > +
> > > > > > +struct vhost_rpmsg_ept;
> > > > > > +
> > > > > > +struct vhost_rpmsg_iter {
> > > > > > +	struct iov_iter iov_iter;
> > > > > > +	struct rpmsg_hdr rhdr;
> > > > > > +	struct vhost_virtqueue *vq;
> > > > > > +	const struct vhost_rpmsg_ept *ept;
> > > > > > +	int head;
> > > > > > +	void *priv;
> > > > > 
> > > > > I don't see @priv being used anywhere.
> > > > 
> > > > That's logical: this is a field, private to the API users, so the core shouldn't 
> > > > use it :-) It's used in later patches.
> > > 
> > > That is where structure documentation is useful.  I will let Michael decide what
> > > he wants to do.
> > 
> > I can add some kerneldoc documentation there, no problem.
> > 
> > > Thanks for the feedback,
> > 
> > Thanks for your reviews! I'd very much like to close all the still open points 
> > and merge the series ASAP.
> > 
> > Thanks
> > Guennadi
> > 
> > > Mathieu
> > > 
> > > > 
> > > > > 
> > > > > > +};
> > > > > > +
> > > > > > +struct vhost_rpmsg {
> > > > > > +	struct vhost_dev dev;
> > > > > > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > > +	const struct vhost_rpmsg_ept *ept;
> > > > > > +	unsigned int n_epts;
> > > > > > +};
> > > > > > +
> > > > > > +struct vhost_rpmsg_ept {
> > > > > > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > > +	int addr;
> > > > > > +};
> > > > > > +
> > > > > > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > > > > > +{
> > > > > > +	return iter->rhdr.len;
> > > > > > +}
> > > > > 
> > > > > Again, I don't see where this is used.
> > > > 
> > > > This is exported API, it's used by users.
> > > >
> > > > > > +
> > > > > > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > > > > > +	.rhdr = {						\
> > > > > > +			.src = cpu_to_vhost32(_vq, _src),	\
> > > > > > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > > > > > +		},						\
> > > > > > +	}
> > > > > 
> > > > > Same.
> > > > 
> > > > ditto.
> > > > 
> > > > Thanks
> > > > Guennadi
> > > > 
> > > > > Thanks,
> > > > > Mathieu
> > > > > 
> > > > > > +
> > > > > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > > > > +		      unsigned int n_epts);
> > > > > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > > > > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > > > > > +			    unsigned int src);
> > > > > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > > > > > +			   struct vhost_rpmsg_iter *iter,
> > > > > > +			   unsigned int qid, ssize_t len);
> > > > > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > > > > +			void *data, size_t size);
> > > > > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > > > > +			      struct vhost_rpmsg_iter *iter);
> > > > > > +
> > > > > > +#endif
> > > > > > -- 
> > > > > > 2.28.0
> > > > > > 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
@ 2020-09-15 12:48               ` Guennadi Liakhovetski
  0 siblings, 0 replies; 34+ messages in thread
From: Guennadi Liakhovetski @ 2020-09-15 12:48 UTC (permalink / raw)
  To: Mathieu Poirier
  Cc: Ohad Ben-Cohen, kvm, Michael S. Tsirkin, Vincent Whitchurch,
	linux-remoteproc, Pierre-Louis Bossart, virtualization,
	Liam Girdwood, Bjorn Andersson, sound-open-firmware

On Fri, Sep 11, 2020 at 11:33:13AM -0600, Mathieu Poirier wrote:
> On Fri, Sep 11, 2020 at 09:46:56AM +0200, Guennadi Liakhovetski wrote:
> > Hi Mathieu,
> > 
> > On Thu, Sep 10, 2020 at 11:22:11AM -0600, Mathieu Poirier wrote:
> > > Good morning Guennadi,
> > > 
> > > On Thu, Sep 10, 2020 at 10:38:54AM +0200, Guennadi Liakhovetski wrote:
> > > > Hi Mathieu,
> > > > 
> > > > On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> > > > > Good afternoon,
> > > > > 
> > > > > On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > > > > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > > > > protocol, but currently there is only support for VirtIO clients and
> > > > > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > > > > server implementation.
> > > > > 
> > > > > This changelog is very confusing...  At this time the name service in the
> > > > > remoteproc space runs as a server on the application processor.  But from the
> > > > > above the remoteproc usecase seems to be considered to be a client
> > > > > configuration.
> > > > 
> > > > I agree that this isn't very obvious. But I think it is common to call the 
> > > > host "a server" and guests "clients." E.g. in vhost.c in the top-of-thefile 
> > > > comment:
> > > 
> > > Ok - that part we agree on.
> > > 
> > > > 
> > > >  * Generic code for virtio server in host kernel.
> > > > 
> > > > I think the generic concept behind this notation is that, as guests boot, 
> > > > they send their requests to the host, e.g. VirtIO device drivers on guests 
> > > > send requests over VirtQueues to VirtIO servers on the host, which can run 
> > > > either in the user- or in the kernel-space. And I think you can follow that 
> > > 
> > > I can see that process taking place.  After all virtIO devices on guests are
> > > only stubs that need host support for access to HW.
> > > 
> > > > logic in case of devices or remote processors too: it's the main CPU(s) 
> > > > that boot(s) and start talking to devices and remote processors, so in that 
> > > > sense devices are servers and the CPUs are their clients.
> > > 
> > > In the remote processor case, the remoteproc core (application processor) sets up
> > > the name service but does not initiate the communication with a remote
> > > processor.  It simply waits there for a name space request to come in from the
> > > remote processor.
> > 
> > Hm, I don't see that in the two examples that I looked at: mtk and virtio. In both 
> > cases the announcement seems to be coming directly from the application processor, 
> > maybe after some initialisation.
>  
> Can you expand on that part - perhaps point me to the (virtio) code you are
> referring to? 

Ok, we're both right: it goes both ways.

Here's my understanding of the control flow of virtio_rpmsg_bus.c:

1. The driver registers a VirtIO driver with the VIRTIO_ID_RPMSG ID.
2. When the driver is probed, if the server / the application processor supports the 
   VIRTIO_RPMSG_F_NS feature, the driver calls __rpmsg_create_ept() to create an 
   endpoint with rpmsg_ns_cb() as a callback.
3. When a namespace announcement arrives from the server, the callback is called, 
   which then registers a new channel (in case of CREATE). That then creates an
   rpmsg device.
4. If there's a matching rpmsg driver for that device, its .probe() method is 
   called, so it can then add its own rpmsg endpoints, to be used for its own 
   communication.

Now there was indeed something in virtio_rpmsg_bus.c that I didn't fully understand: 
the virtio_rpmsg_announce_create() and virtio_rpmsg_announce_destroy() functions. Now I 
understand that, as the client registers its custom channels, it can also 
send name service announcements to the application processor using those functions. 
This is also described in [1] as:

<quote>
Name Service sub-component (optional)

This subcomponent is a minimum implementation of the name service which is present 
in the Linux Kernel implementation of RPMsg. It allows the communicating node both 
to send announcements about "named" endpoint (in other words, channel) creation or 
deletion and to receive these announcement taking any user-defined action in an 
application callback. 
</quote>

Also in Documentation/rpmsg.txt

<quote>
...the remote processor announces the existence of a remote rpmsg service by 
sending a name service message (which contains the name and rpmsg addr of the 
remote service, see struct rpmsg_ns_msg).
</quote>

in [2]:

<quote>
In the current protocol, at startup, the master sends notification to remote to let 
it know that it can receive name service announcement.
</quote>

> > > > And yes, the name-space announcement use-case seems confusing to me too - it 
> > > > reverts the relationship in a way: once a guest has booted and established 
> > > > connections to any rpmsg "devices," those send their namespace announcements 
> > > > back. But I think this can be regarded as server identification: you connect 
> > > > to a server and it replies with its identification and capabilities.
> > > 
> > > Based on the above can I assume vhost_rpmsg_ns_announce() is sent from the
> > > guest?
> > 
> > No, it's "vhost_..." so it's running on the host.
> 
> Ok, that's better and confirms the usage of the VIRTIO_RPMSG_RESPONSE queue.
> When reading your explanation above, I thought the term "those" referred to the
> guest.  In light of your explanation I now understand that "those" referred to
> the rpmsg devices on the host.
> 
> In the above paragraph you write:
> 
> ... "once a guest has booted and established connections to any rpmsg "devices",
> those send their namespace announcements back".  
> 
> I'd like to unpack a few things about this sentence:
> 
> 1) In this context, how is a "connection" established between a guest and a host?

That's handled by the VirtIO / VirtQueues in the case of virtio_rpmsg_bus.c but in 
general, as mentioned in [2]

<quote>
However, master does not consider the fact that if the remote is ready to handle 
notification at this point in time.
</quote>

> 2) How does the guest know about the rpmsg devices it has made a connection to?

Again, that's the same as with all other VirtIO / KVM / Qemu devices: in a common 
Qemu case, it's the Qemu which emulates the hardware and registers those devices.

> 3) Why is a namespace announcement needed at all when guests are aware of the
> rpmsg devices instantiated on the host, and have already connected to them?

It is indeed optional according to the protocol, but, as described above, without 
it the virtio_rpmsg_bus.c driver won't create rpmsg channels / devices, so no 
probing will take place.

> > The host (the server, an 
> > analogue of the application processor, IIUC) sends NS announcements to guests.
> 
> I think we have just found the source of the confusion - in the remoteproc world
> the application processor receives name announcements, it doesn't send them.

Interesting, well, we know now that both directions are possible, but I still 
don't know whether all configurations are valid: only down, only up, none or both.

Thanks
Guennadi

[1] https://nxpmicro.github.io/rpmsg-lite/
[2] https://github.com/OpenAMP/open-amp/wiki/RPMsg-Messaging-Protocol

> > > I saw your V7, something I will look into.  In the mean time I need to bring
> > > your attention to this set [1] from Arnaud.  Please have a look as it will
> > > impact your work.
> > > 
> > > https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> > 
> > Yes, I've had a look at that series, thanks for forwarding it to me. TBH I 
> > don't quite understand some choices there, e.g. creating a separate driver and 
> > then having to register devices just for the namespace announcement. I don't 
> > think creating virtual devices is taken lightly in Linux. Either way, I 
> > don't think our series conflict a lot, but I do hope that I can merge my 
> > series first; he'd just have to switch to using the header that I'm adding. 
> > Hardly too many changes otherwise.
> 
> It is not the conflicts between the series that I wanted to highlight but the
> fact that name service is in the process of becoming a driver on its own, and
> with no dependence on the transport mechanism.
> 
> > 
> > > > > And I don't see a server implementation per se...  It is more like a client
> > > > > implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> > > > > messages from host to guest.
> > > > > 
> > > > > Perhaps it is my lack of familiarity with vhost terminology.
> > > > > 
> > > > > > 
> > > > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > ---
> > > > > >  drivers/vhost/Kconfig       |   7 +
> > > > > >  drivers/vhost/Makefile      |   3 +
> > > > > >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> > > > > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > > > > >  4 files changed, 457 insertions(+)
> > > > > >  create mode 100644 drivers/vhost/rpmsg.c
> > > > > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > > > > 
> > > > > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > > > > index 587fbae06182..046b948fc411 100644
> > > > > > --- a/drivers/vhost/Kconfig
> > > > > > +++ b/drivers/vhost/Kconfig
> > > > > > @@ -38,6 +38,13 @@ config VHOST_NET
> > > > > >  	  To compile this driver as a module, choose M here: the module will
> > > > > >  	  be called vhost_net.
> > > > > >  
> > > > > > +config VHOST_RPMSG
> > > > > > +	tristate
> > > > > > +	select VHOST
> > > > > > +	help
> > > > > > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > > > > +	  drivers, using the RPMsg over VirtIO protocol.
> > > > > 
> > > > > I had to assume vhost drivers are running on the host and virtIO drivers on the
> > > > > guests.  This may be common knowledge for people familiar with vhosts but
> > > > > certainly obscure for commoners. Having a help section that is clear on what is
> > > > > happening would remove any ambiguity.
> > > > 
> > > > It is the terminology, yes, but you're right, the wording isn't very clear, will 
> > > > improve.
> > > > 
> > > > > > +
> > > > > >  config VHOST_SCSI
> > > > > >  	tristate "VHOST_SCSI TCM fabric driver"
> > > > > >  	depends on TARGET_CORE && EVENTFD
> > > > > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > > > > index f3e1897cce85..9cf459d59f97 100644
> > > > > > --- a/drivers/vhost/Makefile
> > > > > > +++ b/drivers/vhost/Makefile
> > > > > > @@ -2,6 +2,9 @@
> > > > > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > > > > >  vhost_net-y := net.o
> > > > > >  
> > > > > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > > > > +vhost_rpmsg-y := rpmsg.o
> > > > > > +
> > > > > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > > > > >  vhost_scsi-y := scsi.o
> > > > > >  
> > > > > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > > > > new file mode 100644
> > > > > > index 000000000000..c26d7a4afc6d
> > > > > > --- /dev/null
> > > > > > +++ b/drivers/vhost/rpmsg.c
> > > > > > @@ -0,0 +1,373 @@
> > > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > > +/*
> > > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > > + *
> > > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > + *
> > > > > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > > > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > > > > 
> > > > > Again, very confusing.  The changelog refers to a server implementation but to
> > > > > me this refers to a client implementation, especially if rpmsg_recv_single() and
> > > > > rpmsg_ns_cb() are used on the other side of the pipe.  
> > > > 
> > > > I think the above is correct. "Vhost" indicates that this is running on the host. 
> > > > "match the guest side" means, that you can use this API on the host and it is 
> > > > designed to work together with the RPMsg VirtIO drivers running on guests, as 
> > > > implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better 
> > > > description than "to match?"
> > > 
> > > Lets forget about this part now and concentrate on the above conversation.
> > > Things will start to make sense at one point.
> > 
> > I've improved that description a bit, it was indeed rather clumsy.
> 
> Much appreciated - I'll take a look a V7 next week.
> 
> > 
> > [snip]
> > 
> > > > > > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > > > > > new file mode 100644
> > > > > > index 000000000000..30072cecb8a0
> > > > > > --- /dev/null
> > > > > > +++ b/drivers/vhost/vhost_rpmsg.h
> > > > > > @@ -0,0 +1,74 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > +/*
> > > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > > + *
> > > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > + */
> > > > > > +
> > > > > > +#ifndef VHOST_RPMSG_H
> > > > > > +#define VHOST_RPMSG_H
> > > > > > +
> > > > > > +#include <linux/uio.h>
> > > > > > +#include <linux/virtio_rpmsg.h>
> > > > > > +
> > > > > > +#include "vhost.h"
> > > > > > +
> > > > > > +/* RPMsg uses two VirtQueues: one for each direction */
> > > > > > +enum {
> > > > > > +	VIRTIO_RPMSG_RESPONSE,	/* RPMsg response (host->guest) buffers */
> > > > > > +	VIRTIO_RPMSG_REQUEST,	/* RPMsg request (guest->host) buffers */
> > > > > > +	/* Keep last */
> > > > > > +	VIRTIO_RPMSG_NUM_OF_VQS,
> > > > > > +};
> > > > > > +
> > > > > > +struct vhost_rpmsg_ept;
> > > > > > +
> > > > > > +struct vhost_rpmsg_iter {
> > > > > > +	struct iov_iter iov_iter;
> > > > > > +	struct rpmsg_hdr rhdr;
> > > > > > +	struct vhost_virtqueue *vq;
> > > > > > +	const struct vhost_rpmsg_ept *ept;
> > > > > > +	int head;
> > > > > > +	void *priv;
> > > > > 
> > > > > I don't see @priv being used anywhere.
> > > > 
> > > > That's logical: this is a field, private to the API users, so the core shouldn't 
> > > > use it :-) It's used in later patches.
> > > 
> > > That is where structure documentation is useful.  I will let Michael decide what
> > > he wants to do.
> > 
> > I can add some kerneldoc documentation there, no problem.
> > 
> > > Thanks for the feedback,
> > 
> > Thanks for your reviews! I'd very much like to close all the still open points 
> > and merge the series ASAP.
> > 
> > Thanks
> > Guennadi
> > 
> > > Mathieu
> > > 
> > > > 
> > > > > 
> > > > > > +};
> > > > > > +
> > > > > > +struct vhost_rpmsg {
> > > > > > +	struct vhost_dev dev;
> > > > > > +	struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > > +	struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > > +	const struct vhost_rpmsg_ept *ept;
> > > > > > +	unsigned int n_epts;
> > > > > > +};
> > > > > > +
> > > > > > +struct vhost_rpmsg_ept {
> > > > > > +	ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > > +	ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > > +	int addr;
> > > > > > +};
> > > > > > +
> > > > > > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > > > > > +{
> > > > > > +	return iter->rhdr.len;
> > > > > > +}
> > > > > 
> > > > > Again, I don't see where this is used.
> > > > 
> > > > This is exported API, it's used by users.
> > > >
> > > > > > +
> > > > > > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {			\
> > > > > > +	.rhdr = {						\
> > > > > > +			.src = cpu_to_vhost32(_vq, _src),	\
> > > > > > +			.dst = cpu_to_vhost32(_vq, _dst),	\
> > > > > > +		},						\
> > > > > > +	}
> > > > > 
> > > > > Same.
> > > > 
> > > > ditto.
> > > > 
> > > > Thanks
> > > > Guennadi
> > > > 
> > > > > Thanks,
> > > > > Mathieu
> > > > > 
> > > > > > +
> > > > > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > > > > +		      unsigned int n_epts);
> > > > > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > > > > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > > > > > +			    unsigned int src);
> > > > > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > > > > > +			   struct vhost_rpmsg_iter *iter,
> > > > > > +			   unsigned int qid, ssize_t len);
> > > > > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > > > > +			void *data, size_t size);
> > > > > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > > > > +			      struct vhost_rpmsg_iter *iter);
> > > > > > +
> > > > > > +#endif
> > > > > > -- 
> > > > > > 2.28.0
> > > > > > 
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v5 4/4] vhost: add an RPMsg API
  2020-09-15 12:48               ` Guennadi Liakhovetski
  (?)
@ 2020-09-16 17:09               ` Mathieu Poirier
  -1 siblings, 0 replies; 34+ messages in thread
From: Mathieu Poirier @ 2020-09-16 17:09 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: kvm, linux-remoteproc, virtualization, sound-open-firmware,
	Pierre-Louis Bossart, Liam Girdwood, Michael S. Tsirkin,
	Jason Wang, Ohad Ben-Cohen, Bjorn Andersson, Vincent Whitchurch

On Tue, 15 Sep 2020 at 06:49, Guennadi Liakhovetski
<guennadi.liakhovetski@linux.intel.com> wrote:
>
> On Fri, Sep 11, 2020 at 11:33:13AM -0600, Mathieu Poirier wrote:
> > On Fri, Sep 11, 2020 at 09:46:56AM +0200, Guennadi Liakhovetski wrote:
> > > Hi Mathieu,
> > >
> > > On Thu, Sep 10, 2020 at 11:22:11AM -0600, Mathieu Poirier wrote:
> > > > Good morning Guennadi,
> > > >
> > > > On Thu, Sep 10, 2020 at 10:38:54AM +0200, Guennadi Liakhovetski wrote:
> > > > > Hi Mathieu,
> > > > >
> > > > > On Wed, Sep 09, 2020 at 04:39:46PM -0600, Mathieu Poirier wrote:
> > > > > > Good afternoon,
> > > > > >
> > > > > > On Wed, Aug 26, 2020 at 07:46:36PM +0200, Guennadi Liakhovetski wrote:
> > > > > > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > > > > > protocol, but currently there is only support for VirtIO clients and
> > > > > > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > > > > > server implementation.
> > > > > >
> > > > > > This changelog is very confusing...  At this time the name service in the
> > > > > > remoteproc space runs as a server on the application processor.  But from the
> > > > > > above the remoteproc usecase seems to be considered to be a client
> > > > > > configuration.
> > > > >
> > > > > I agree that this isn't very obvious. But I think it is common to call the
> > > > > host "a server" and guests "clients." E.g. in vhost.c in the top-of-thefile
> > > > > comment:
> > > >
> > > > Ok - that part we agree on.
> > > >
> > > > >
> > > > >  * Generic code for virtio server in host kernel.
> > > > >
> > > > > I think the generic concept behind this notation is that, as guests boot,
> > > > > they send their requests to the host, e.g. VirtIO device drivers on guests
> > > > > send requests over VirtQueues to VirtIO servers on the host, which can run
> > > > > either in the user- or in the kernel-space. And I think you can follow that
> > > >
> > > > I can see that process taking place.  After all virtIO devices on guests are
> > > > only stubs that need host support for access to HW.
> > > >
> > > > > logic in case of devices or remote processors too: it's the main CPU(s)
> > > > > that boot(s) and start talking to devices and remote processors, so in that
> > > > > sense devices are servers and the CPUs are their clients.
> > > >
> > > > In the remote processor case, the remoteproc core (application processor) sets up
> > > > the name service but does not initiate the communication with a remote
> > > > processor.  It simply waits there for a name space request to come in from the
> > > > remote processor.
> > >
> > > Hm, I don't see that in the two examples that I looked at: mtk and virtio. In both
> > > cases the announcement seems to be coming directly from the application processor,
> > > maybe after some initialisation.
> >
> > Can you expand on that part - perhaps point me to the (virtio) code you are
> > referring to?
>
> Ok, we're both right: it goes both ways.
>
> Here's my understanding of the control flow of virtio_rpmsg_bus.c:
>
> 1. The driver registers a VirtIO driver with the VIRTIO_ID_RPMSG ID.

s/virtIO driver/virtIO device

> 2. When the driver is probed, if the server / the application processor supports the
>    VIRTIO_RPMSG_F_NS feature, the driver calls __rpmsg_create_ept() to create an
>    endpoint with rpmsg_ns_cb() as a callback.
> 3. When a namespace announcement arrives from the server, the callback is called,
>    which then registers a new channel (in case of CREATE). That then creates an
>    rpmsg device.
> 4. If there's a matching rpmsg driver for that device, its .probe() method is
>    called, so it can then add its own rpmsg endpoints, to be used for its own
>    communication.

The above depiction is correct.

>
> Now there was indeed something in virtio_rpmsg_bus.c that I didn't fully understand:
> the virtio_rpmsg_announce_create() and virtio_rpmsg_announce_destroy() functions. Now I
> understand that, as the client registers its custom channels, it can also
> send name service announcements to the application processor using those functions.

The virtio_rpmsg_announce_create/destroy() functions are not used when
channels are created on the application processor after the reception
of a name service announcement from a remote processor.  Indeed, the
address of the remote service is already conveyed in
rpmsg_ns_msg::addr and need not be announced because it is already
known by the remote processor.

Create/destroy() functions are useful when the application processor
registers a channel on its own, i.e. not prodded by the remote
processor (rpmsg_ns_cb()).  It is important to note that at this point
we do not have an rpmsg device (that is virtIO based) that uses this
functionality.

> This is also described in [1] as:
>
> <quote>
> Name Service sub-component (optional)
>
> This subcomponent is a minimum implementation of the name service which is present
> in the Linux Kernel implementation of RPMsg. It allows the communicating node both
> to send announcements about "named" endpoint (in other words, channel) creation or
> deletion and to receive these announcement taking any user-defined action in an
> application callback.

Right, we are talking about the same thing.

> </quote>
>
> Also in Documentation/rpmsg.txt
>
> <quote>
> ...the remote processor announces the existence of a remote rpmsg service by
> sending a name service message (which contains the name and rpmsg addr of the
> remote service, see struct rpmsg_ns_msg).
> </quote>
>
> in [2]:
>
> <quote>
> In the current protocol, at startup, the master sends notification to remote to let
> it know that it can receive name service announcement.

The RPMSG protocol is not involved in that path.  The only
notification sent by the application processor goes through the virtIO
framework with virtio_device_ready() and virtqueue_notify():

https://elixir.bootlin.com/linux/v5.9-rc4/source/drivers/rpmsg/virtio_rpmsg_bus.c#L973

> </quote>
>
> > > > > And yes, the name-space announcement use-case seems confusing to me too - it
> > > > > reverts the relationship in a way: once a guest has booted and established
> > > > > connections to any rpmsg "devices," those send their namespace announcements
> > > > > back. But I think this can be regarded as server identification: you connect
> > > > > to a server and it replies with its identification and capabilities.
> > > >
> > > > Based on the above can I assume vhost_rpmsg_ns_announce() is sent from the
> > > > guest?
> > >
> > > No, it's "vhost_..." so it's running on the host.
> >
> > Ok, that's better and confirms the usage of the VIRTIO_RPMSG_RESPONSE queue.
> > When reading your explanation above, I thought the term "those" referred to the
> > guest.  In light of your explanation I now understand that "those" referred to
> > the rpmsg devices on the host.
> >
> > In the above paragraph you write:
> >
> > ... "once a guest has booted and established connections to any rpmsg "devices",
> > those send their namespace announcements back".
> >
> > I'd like to unpack a few things about this sentence:
> >
> > 1) In this context, how is a "connection" established between a guest and a host?
>
> That's handled by the VirtIO / VirtQueues in the case of virtio_rpmsg_bus.c but in
> general, as mentioned in [2]
>
> <quote>
> However, master does not consider the fact that if the remote is ready to handle
> notification at this point in time.
> </quote>
>
> > 2) How does the guest know about the rpmsg devices it has made a connection to?
>
> Again, that's the same as with all other VirtIO / KVM / Qemu devices: in a common
> Qemu case, it's the Qemu which emulates the hardware and registers those devices.
>
> > 3) Why is a namespace announcement needed at all when guests are aware of the
> > rpmsg devices instantiated on the host, and have already connected to them?
>
> It is indeed optional according to the protocol, but, as described above, without
> it the virtio_rpmsg_bus.c driver won't create rpmsg channels / devices, so no
> probing will take place.

I now have a better understanding of what you are trying to do.  On
the flip side this thread is too long to continue with it - I will
review V7 and we can pickup from there.

Mathieu

>
> > > The host (the server, an
> > > analogue of the application processor, IIUC) sends NS announcements to guests.
> >
> > I think we have just found the source of the confusion - in the remoteproc world
> > the application processor receives name announcements, it doesn't send them.
>
> Interesting, well, we know now that both directions are possible, but I still
> don't know whether all configurations are valid: only down, only up, none or both.
>
> Thanks
> Guennadi
>
> [1] https://nxpmicro.github.io/rpmsg-lite/
> [2] https://github.com/OpenAMP/open-amp/wiki/RPMsg-Messaging-Protocol
>
> > > > I saw your V7, something I will look into.  In the mean time I need to bring
> > > > your attention to this set [1] from Arnaud.  Please have a look as it will
> > > > impact your work.
> > > >
> > > > https://patchwork.kernel.org/project/linux-remoteproc/list/?series=338335
> > >
> > > Yes, I've had a look at that series, thanks for forwarding it to me. TBH I
> > > don't quite understand some choices there, e.g. creating a separate driver and
> > > then having to register devices just for the namespace announcement. I don't
> > > think creating virtual devices is taken lightly in Linux. Either way, I
> > > don't think our series conflict a lot, but I do hope that I can merge my
> > > series first; he'd just have to switch to using the header that I'm adding.
> > > Hardly too many changes otherwise.
> >
> > It is not the conflicts between the series that I wanted to highlight but the
> > fact that name service is in the process of becoming a driver on its own, and
> > with no dependence on the transport mechanism.
> >
> > >
> > > > > > And I don't see a server implementation per se...  It is more like a client
> > > > > > implementation since vhost_rpmsg_announce() uses the RESPONSE queue, which sends
> > > > > > messages from host to guest.
> > > > > >
> > > > > > Perhaps it is my lack of familiarity with vhost terminology.
> > > > > >
> > > > > > >
> > > > > > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > > ---
> > > > > > >  drivers/vhost/Kconfig       |   7 +
> > > > > > >  drivers/vhost/Makefile      |   3 +
> > > > > > >  drivers/vhost/rpmsg.c       | 373 ++++++++++++++++++++++++++++++++++++
> > > > > > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > > > > > >  4 files changed, 457 insertions(+)
> > > > > > >  create mode 100644 drivers/vhost/rpmsg.c
> > > > > > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > > > > >
> > > > > > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > > > > > index 587fbae06182..046b948fc411 100644
> > > > > > > --- a/drivers/vhost/Kconfig
> > > > > > > +++ b/drivers/vhost/Kconfig
> > > > > > > @@ -38,6 +38,13 @@ config VHOST_NET
> > > > > > >       To compile this driver as a module, choose M here: the module will
> > > > > > >       be called vhost_net.
> > > > > > >
> > > > > > > +config VHOST_RPMSG
> > > > > > > +   tristate
> > > > > > > +   select VHOST
> > > > > > > +   help
> > > > > > > +     Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > > > > > +     drivers, using the RPMsg over VirtIO protocol.
> > > > > >
> > > > > > I had to assume vhost drivers are running on the host and virtIO drivers on the
> > > > > > guests.  This may be common knowledge for people familiar with vhosts but
> > > > > > certainly obscure for commoners. Having a help section that is clear on what is
> > > > > > happening would remove any ambiguity.
> > > > >
> > > > > It is the terminology, yes, but you're right, the wording isn't very clear, will
> > > > > improve.
> > > > >
> > > > > > > +
> > > > > > >  config VHOST_SCSI
> > > > > > >     tristate "VHOST_SCSI TCM fabric driver"
> > > > > > >     depends on TARGET_CORE && EVENTFD
> > > > > > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > > > > > index f3e1897cce85..9cf459d59f97 100644
> > > > > > > --- a/drivers/vhost/Makefile
> > > > > > > +++ b/drivers/vhost/Makefile
> > > > > > > @@ -2,6 +2,9 @@
> > > > > > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > > > > > >  vhost_net-y := net.o
> > > > > > >
> > > > > > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > > > > > +vhost_rpmsg-y := rpmsg.o
> > > > > > > +
> > > > > > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > > > > > >  vhost_scsi-y := scsi.o
> > > > > > >
> > > > > > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..c26d7a4afc6d
> > > > > > > --- /dev/null
> > > > > > > +++ b/drivers/vhost/rpmsg.c
> > > > > > > @@ -0,0 +1,373 @@
> > > > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > > > +/*
> > > > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > > > + *
> > > > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > > + *
> > > > > > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > > > > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > > > > >
> > > > > > Again, very confusing.  The changelog refers to a server implementation but to
> > > > > > me this refers to a client implementation, especially if rpmsg_recv_single() and
> > > > > > rpmsg_ns_cb() are used on the other side of the pipe.
> > > > >
> > > > > I think the above is correct. "Vhost" indicates that this is running on the host.
> > > > > "match the guest side" means, that you can use this API on the host and it is
> > > > > designed to work together with the RPMsg VirtIO drivers running on guests, as
> > > > > implemented *on guests* by virtio_rpmsg_bus.c. Would "to work together" be a better
> > > > > description than "to match?"
> > > >
> > > > Let's forget about this part for now and concentrate on the above conversation.
> > > > Things will start to make sense at some point.
> > >
> > > I've improved that description a bit, it was indeed rather clumsy.
> >
> > Much appreciated - I'll take a look at v7 next week.
> >
> > >
> > > [snip]
> > >
> > > > > > > diff --git a/drivers/vhost/vhost_rpmsg.h b/drivers/vhost/vhost_rpmsg.h
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..30072cecb8a0
> > > > > > > --- /dev/null
> > > > > > > +++ b/drivers/vhost/vhost_rpmsg.h
> > > > > > > @@ -0,0 +1,74 @@
> > > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > > +/*
> > > > > > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > > > > > + *
> > > > > > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
> > > > > > > + */
> > > > > > > +
> > > > > > > +#ifndef VHOST_RPMSG_H
> > > > > > > +#define VHOST_RPMSG_H
> > > > > > > +
> > > > > > > +#include <linux/uio.h>
> > > > > > > +#include <linux/virtio_rpmsg.h>
> > > > > > > +
> > > > > > > +#include "vhost.h"
> > > > > > > +
> > > > > > > +/* RPMsg uses two VirtQueues: one for each direction */
> > > > > > > +enum {
> > > > > > > +   VIRTIO_RPMSG_RESPONSE,  /* RPMsg response (host->guest) buffers */
> > > > > > > +   VIRTIO_RPMSG_REQUEST,   /* RPMsg request (guest->host) buffers */
> > > > > > > +   /* Keep last */
> > > > > > > +   VIRTIO_RPMSG_NUM_OF_VQS,
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct vhost_rpmsg_ept;
> > > > > > > +
> > > > > > > +struct vhost_rpmsg_iter {
> > > > > > > +   struct iov_iter iov_iter;
> > > > > > > +   struct rpmsg_hdr rhdr;
> > > > > > > +   struct vhost_virtqueue *vq;
> > > > > > > +   const struct vhost_rpmsg_ept *ept;
> > > > > > > +   int head;
> > > > > > > +   void *priv;
> > > > > >
> > > > > > I don't see @priv being used anywhere.
> > > > >
> > > > > That's logical: this is a field private to the API users, so the core shouldn't
> > > > > use it :-) It's used in later patches.
> > > >
> > > > That is where structure documentation is useful.  I will let Michael decide what
> > > > he wants to do.
> > >
> > > I can add some kerneldoc documentation there, no problem.
> > >
> > > > Thanks for the feedback,
> > >
> > > Thanks for your reviews! I'd very much like to close all the still-open points
> > > and merge the series ASAP.
> > >
> > > Thanks
> > > Guennadi
> > >
> > > > Mathieu
> > > >
> > > > >
> > > > > >
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct vhost_rpmsg {
> > > > > > > +   struct vhost_dev dev;
> > > > > > > +   struct vhost_virtqueue vq[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > > > +   struct vhost_virtqueue *vq_p[VIRTIO_RPMSG_NUM_OF_VQS];
> > > > > > > +   const struct vhost_rpmsg_ept *ept;
> > > > > > > +   unsigned int n_epts;
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct vhost_rpmsg_ept {
> > > > > > > +   ssize_t (*read)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > > > +   ssize_t (*write)(struct vhost_rpmsg *, struct vhost_rpmsg_iter *);
> > > > > > > +   int addr;
> > > > > > > +};
> > > > > > > +
> > > > > > > +static inline size_t vhost_rpmsg_iter_len(const struct vhost_rpmsg_iter *iter)
> > > > > > > +{
> > > > > > > +   return iter->rhdr.len;
> > > > > > > +}
> > > > > >
> > > > > > Again, I don't see where this is used.
> > > > >
> > > > > This is exported API, it's used by users.
> > > > >
> > > > > > > +
> > > > > > > +#define VHOST_RPMSG_ITER(_vq, _src, _dst) {                        \
> > > > > > > +   .rhdr = {                                               \
> > > > > > > +                   .src = cpu_to_vhost32(_vq, _src),       \
> > > > > > > +                   .dst = cpu_to_vhost32(_vq, _dst),       \
> > > > > > > +           },                                              \
> > > > > > > +   }
> > > > > >
> > > > > > Same.
> > > > >
> > > > > ditto.
> > > > >
> > > > > Thanks
> > > > > Guennadi
> > > > >
> > > > > > Thanks,
> > > > > > Mathieu
> > > > > >
> > > > > > > +
> > > > > > > +void vhost_rpmsg_init(struct vhost_rpmsg *vr, const struct vhost_rpmsg_ept *ept,
> > > > > > > +                 unsigned int n_epts);
> > > > > > > +void vhost_rpmsg_destroy(struct vhost_rpmsg *vr);
> > > > > > > +int vhost_rpmsg_ns_announce(struct vhost_rpmsg *vr, const char *name,
> > > > > > > +                       unsigned int src);
> > > > > > > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr,
> > > > > > > +                      struct vhost_rpmsg_iter *iter,
> > > > > > > +                      unsigned int qid, ssize_t len);
> > > > > > > +size_t vhost_rpmsg_copy(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > > > > > > +                   void *data, size_t size);
> > > > > > > +int vhost_rpmsg_finish_unlock(struct vhost_rpmsg *vr,
> > > > > > > +                         struct vhost_rpmsg_iter *iter);
> > > > > > > +
> > > > > > > +#endif
> > > > > > > --
> > > > > > > 2.28.0
> > > > > > >


end of thread, other threads:[~2020-09-16 20:37 UTC | newest]

Thread overview: 34+ messages
2020-08-26 17:46 [PATCH v5 0/4] Add a vhost RPMsg API Guennadi Liakhovetski
2020-08-26 17:46 ` [PATCH v5 1/4] vhost: convert VHOST_VSOCK_SET_RUNNING to a generic ioctl Guennadi Liakhovetski
2020-09-09 22:42   ` Mathieu Poirier
2020-09-10  7:15     ` Guennadi Liakhovetski
2020-09-10 16:46       ` Mathieu Poirier
2020-09-11  7:59         ` Guennadi Liakhovetski
2020-08-26 17:46 ` [PATCH v5 2/4] rpmsg: move common structures and defines to headers Guennadi Liakhovetski
2020-08-31 19:38   ` Mathieu Poirier
2020-08-26 17:46 ` [PATCH v5 3/4] rpmsg: update documentation Guennadi Liakhovetski
2020-09-09 22:45   ` Mathieu Poirier
2020-09-10  7:18     ` Guennadi Liakhovetski
2020-09-10 11:19       ` Guennadi Liakhovetski
2020-08-26 17:46 ` [PATCH v5 4/4] vhost: add an RPMsg API Guennadi Liakhovetski
2020-09-09 22:39   ` Mathieu Poirier
2020-09-10  8:38     ` Guennadi Liakhovetski
2020-09-10 17:22       ` Mathieu Poirier
2020-09-11  7:46         ` Guennadi Liakhovetski
2020-09-11 17:33           ` Mathieu Poirier
2020-09-15 12:48             ` Guennadi Liakhovetski
2020-09-16 17:09               ` Mathieu Poirier
2020-09-08 14:16 ` [PATCH v5 0/4] Add a vhost RPMsg API Michael S. Tsirkin
2020-09-08 22:20   ` Mathieu Poirier
