* [PATCH v3 0/2] drm/virtio: Add window server support
@ 2018-01-26 13:58 Tomeu Vizoso
  2018-01-26 13:58 ` [PATCH v3 1/2] " Tomeu Vizoso
  2018-01-26 13:58 ` [PATCH v3 2/2] drm/virtio: Handle buffers from the compositor Tomeu Vizoso
  0 siblings, 2 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2018-01-26 13:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tomeu Vizoso, Michael S. Tsirkin, David Airlie, dri-devel,
	virtualization, Zach Reizner, kernel

Hi,

this work is based on the virtio_wl driver in the ChromeOS kernel by
Zach Reizner, currently at:

https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.4/drivers/virtio/virtio_wl.c

One feature is currently missing: letting clients write directly to the
host part of a resource, so that the extra copy in TRANSFER_TO_HOST isn't
needed.

I have pushed the QEMU counterpart to this branch, though it isn't as
polished yet:

https://gitlab.collabora.com/tomeu/qemu/commits/winsrv-wip

Thanks,

Tomeu


Tomeu Vizoso (2):
  drm/virtio: Add window server support
  drm/virtio: Handle buffers from the compositor

 drivers/gpu/drm/virtio/virtgpu_drv.c   |   1 +
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  39 ++++-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 219 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c   |  66 ++++++--
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 285 ++++++++++++++++++++++++++++++++-
 include/uapi/drm/virtgpu_drm.h         |  29 ++++
 include/uapi/linux/virtio_gpu.h        |  43 +++++
 7 files changed, 667 insertions(+), 15 deletions(-)

-- 
2.14.3


* [PATCH v3 1/2] drm/virtio: Add window server support
  2018-01-26 13:58 [PATCH v3 0/2] drm/virtio: Add window server support Tomeu Vizoso
@ 2018-01-26 13:58 ` Tomeu Vizoso
  2018-02-01 16:36   ` Gerd Hoffmann
  2018-01-26 13:58 ` [PATCH v3 2/2] drm/virtio: Handle buffers from the compositor Tomeu Vizoso
  1 sibling, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-01-26 13:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tomeu Vizoso, Michael S. Tsirkin, David Airlie, dri-devel,
	virtualization, Zach Reizner, kernel

This allows clients running within VMs to communicate with a compositor
in the host. Clients use the communication protocol that the compositor
supports, and virtio-gpu assists with making buffers available on both
sides and with copying content as needed.

It is expected that a service in the guest will act as a proxy,
interacting with virtio-gpu to support unmodified clients. For some
features of the protocol, the hypervisor might have to intervene and
also parse the protocol data to properly bridge resources. The following
IOCTLs have been added to this effect:

*_WINSRV_CONNECT: Opens a connection to the compositor in the host and
returns an FD that represents this connection and on which the following
IOCTLs can be used. Callers are expected to poll this FD for new
messages from the compositor.

*_WINSRV_TX: Asks the hypervisor to forward a message to the compositor.

*_WINSRV_RX: Returns all queued messages.

Alongside protocol data that is opaque to the kernel, the client can
send file descriptors that reference GEM buffers allocated by
virtio-gpu. The guest proxy is expected to figure out when a client is
passing a FD that refers to shared memory in the guest and allocate a
virtio-gpu buffer of the same size with DRM_VIRTGPU_RESOURCE_CREATE.

When the client notifies the compositor that it can read from that buffer,
the proxy should copy the contents from the SHM region to the virtio-gpu
resource and call DRM_VIRTGPU_TRANSFER_TO_HOST.

This has been tested with Wayland clients that make use of wl_shm to
pass buffers to the compositor, but is expected to work similarly for X
clients that make use of MIT-SHM with FD passing.
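
For illustration only, a minimal sketch of how a guest proxy might drive
these ioctls (the device node path, buffer sizes and error handling are
simplified here and are not part of the patch):

#include <fcntl.h>
#include <poll.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

#include <drm/virtgpu_drm.h>

static void winsrv_proxy_sketch(void)
{
	/* example path; the proxy opens the virtio-gpu render node */
	int card = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
	struct drm_virtgpu_winsrv_connect connect = { 0 };
	struct drm_virtgpu_winsrv cmd = { 0 };
	struct pollfd pfd = { .events = POLLIN };
	const char tx_buf[] = "opaque protocol bytes from a client";
	char rx_buf[4096];

	/* open a connection to the compositor in the host */
	ioctl(card, DRM_IOCTL_VIRTGPU_WINSRV_CONNECT, &connect);
	pfd.fd = connect.fd;

	/* forward a message; fds[] may carry GEM-backed buffer FDs */
	memset(cmd.fds, -1, sizeof(cmd.fds));
	cmd.data = (uintptr_t)tx_buf;
	cmd.len = sizeof(tx_buf);
	ioctl(connect.fd, DRM_IOCTL_VIRTGPU_WINSRV_TX, &cmd);

	/* wait for the compositor, then drain everything it has queued */
	poll(&pfd, 1, -1);
	cmd.data = (uintptr_t)rx_buf;
	cmd.len = sizeof(rx_buf);
	ioctl(connect.fd, DRM_IOCTL_VIRTGPU_WINSRV_RX, &cmd);
	/* cmd.len now holds the number of bytes placed in rx_buf */
}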

v2: * Add padding to two virtio command structs
    * Properly cast two __user pointers (kbuild test robot)

v3: * Handle absence of winsrv support in QEMU (Dave Airlie)

Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
---

 drivers/gpu/drm/virtio/virtgpu_drv.c   |   1 +
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  39 ++++-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 165 +++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c   |  66 ++++++--
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 285 ++++++++++++++++++++++++++++++++-
 include/uapi/drm/virtgpu_drm.h         |  29 ++++
 include/uapi/linux/virtio_gpu.h        |  43 +++++
 7 files changed, 613 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 49a3d8d5a249..a528ddedd09f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -79,6 +79,7 @@ static unsigned int features[] = {
 	 */
 	VIRTIO_GPU_F_VIRGL,
 #endif
+	VIRTIO_GPU_F_WINSRV,
 };
 static struct virtio_driver virtio_gpu_driver = {
 	.feature_table = features,
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index da2fb585fea4..268b386e1232 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -178,6 +178,8 @@ struct virtio_gpu_device {
 
 	struct virtio_gpu_queue ctrlq;
 	struct virtio_gpu_queue cursorq;
+	struct virtio_gpu_queue winsrv_rxq;
+	struct virtio_gpu_queue winsrv_txq;
 	struct kmem_cache *vbufs;
 	bool vqs_ready;
 
@@ -205,10 +207,32 @@ struct virtio_gpu_device {
 
 struct virtio_gpu_fpriv {
 	uint32_t ctx_id;
+
+	struct list_head winsrv_conns; /* list of virtio_gpu_winsrv_conn */
+	spinlock_t winsrv_lock;
+};
+
+struct virtio_gpu_winsrv_rx_qentry {
+	struct virtio_gpu_winsrv_rx *cmd;
+	struct list_head next;
+};
+
+struct virtio_gpu_winsrv_conn {
+	struct virtio_gpu_device *vgdev;
+
+	spinlock_t lock;
+
+	int fd;
+	struct drm_file *drm_file;
+
+	struct list_head cmdq; /* queue of virtio_gpu_winsrv_rx_qentry */
+	wait_queue_head_t cmdwq;
+
+	struct list_head next;
 };
 
 /* virtio_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 10
+#define DRM_VIRTIO_NUM_IOCTLS 11
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 
 /* virtio_kms.c */
@@ -318,9 +342,22 @@ virtio_gpu_cmd_resource_create_3d(struct virtio_gpu_device *vgdev,
 void virtio_gpu_ctrl_ack(struct virtqueue *vq);
 void virtio_gpu_cursor_ack(struct virtqueue *vq);
 void virtio_gpu_fence_ack(struct virtqueue *vq);
+void virtio_gpu_winsrv_tx_ack(struct virtqueue *vq);
+void virtio_gpu_winsrv_rx_read(struct virtqueue *vq);
 void virtio_gpu_dequeue_ctrl_func(struct work_struct *work);
 void virtio_gpu_dequeue_cursor_func(struct work_struct *work);
+void virtio_gpu_dequeue_winsrv_rx_func(struct work_struct *work);
+void virtio_gpu_dequeue_winsrv_tx_func(struct work_struct *work);
 void virtio_gpu_dequeue_fence_func(struct work_struct *work);
+void virtio_gpu_fill_winsrv_rx(struct virtio_gpu_device *vgdev);
+void virtio_gpu_queue_winsrv_rx_in(struct virtio_gpu_device *vgdev,
+				   struct virtio_gpu_winsrv_rx *cmd);
+int virtio_gpu_cmd_winsrv_connect(struct virtio_gpu_device *vgdev, int fd);
+void virtio_gpu_cmd_winsrv_disconnect(struct virtio_gpu_device *vgdev, int fd);
+int virtio_gpu_cmd_winsrv_tx(struct virtio_gpu_device *vgdev,
+			     const char __user *buffer, u32 len,
+			     int *fds, struct virtio_gpu_winsrv_conn *conn,
+			     bool nonblock);
 
 /* virtio_gpu_display.c */
 int virtio_gpu_framebuffer_init(struct drm_device *dev,
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 0528edb4a2bf..d4230b1fa91d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -25,6 +25,9 @@
  * OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/anon_inodes.h>
+#include <linux/syscalls.h>
+
 #include <drm/drmP.h>
 #include <drm/virtgpu_drm.h>
 #include <drm/ttm/ttm_execbuf_util.h>
@@ -527,6 +530,165 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev,
 	return 0;
 }
 
+static unsigned int winsrv_poll(struct file *filp,
+				struct poll_table_struct *wait)
+{
+	struct virtio_gpu_winsrv_conn *conn = filp->private_data;
+	unsigned int mask = 0;
+
+	spin_lock(&conn->lock);
+	poll_wait(filp, &conn->cmdwq, wait);
+	if (!list_empty(&conn->cmdq))
+		mask |= POLLIN | POLLRDNORM;
+	spin_unlock(&conn->lock);
+
+	return mask;
+}
+
+static int winsrv_ioctl_rx(struct virtio_gpu_device *vgdev,
+			   struct virtio_gpu_winsrv_conn *conn,
+			   struct drm_virtgpu_winsrv *cmd)
+{
+	struct virtio_gpu_winsrv_rx_qentry *qentry, *tmp;
+	struct virtio_gpu_winsrv_rx *virtio_cmd;
+	int available_len = cmd->len;
+	int read_count = 0;
+
+	list_for_each_entry_safe(qentry, tmp, &conn->cmdq, next) {
+		virtio_cmd = qentry->cmd;
+		if (virtio_cmd->len > available_len)
+			return 0;
+
+		if (copy_to_user((void __user *)cmd->data + read_count,
+				 virtio_cmd->data,
+				 virtio_cmd->len)) {
+			/* return error unless we have some data to return */
+			if (read_count == 0)
+				return -EFAULT;
+		}
+
+		available_len -= virtio_cmd->len;
+		read_count += virtio_cmd->len;
+
+		virtio_gpu_queue_winsrv_rx_in(vgdev, virtio_cmd);
+
+		list_del(&qentry->next);
+		kfree(qentry);
+	}
+
+	cmd->len = read_count;
+
+	return 0;
+}
+
+static long winsrv_ioctl(struct file *filp, unsigned int cmd,
+			 unsigned long arg)
+{
+	struct virtio_gpu_winsrv_conn *conn = filp->private_data;
+	struct virtio_gpu_device *vgdev = conn->vgdev;
+	struct drm_virtgpu_winsrv winsrv_cmd;
+	int ret;
+
+	if (_IOC_SIZE(cmd) > sizeof(winsrv_cmd))
+		return -EINVAL;
+
+	if (copy_from_user(&winsrv_cmd, (void __user *)arg,
+			   _IOC_SIZE(cmd)) != 0)
+		return -EFAULT;
+
+	switch (cmd) {
+	case DRM_IOCTL_VIRTGPU_WINSRV_RX:
+		ret = winsrv_ioctl_rx(vgdev, conn, &winsrv_cmd);
+		if (copy_to_user((void __user *)arg, &winsrv_cmd,
+				 _IOC_SIZE(cmd)) != 0)
+			return -EFAULT;
+
+		break;
+
+	case DRM_IOCTL_VIRTGPU_WINSRV_TX:
+		ret = virtio_gpu_cmd_winsrv_tx(vgdev,
+				u64_to_user_ptr(winsrv_cmd.data),
+				winsrv_cmd.len,
+				winsrv_cmd.fds,
+				conn,
+				filp->f_flags & O_NONBLOCK);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int winsrv_release(struct inode *inodep, struct file *filp)
+{
+	struct virtio_gpu_winsrv_conn *conn = filp->private_data;
+	struct virtio_gpu_device *vgdev = conn->vgdev;
+
+	virtio_gpu_cmd_winsrv_disconnect(vgdev, conn->fd);
+
+	list_del(&conn->next);
+	kfree(conn);
+
+	return 0;
+}
+
+static const struct file_operations winsrv_fops = {
+	.poll = winsrv_poll,
+	.unlocked_ioctl = winsrv_ioctl,
+	.release = winsrv_release,
+};
+
+static int virtio_gpu_winsrv_connect(struct drm_device *dev, void *data,
+				     struct drm_file *file)
+{
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
+	struct drm_virtgpu_winsrv_connect *args = data;
+	struct virtio_gpu_winsrv_conn *conn;
+	int ret;
+
+	if (!virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_WINSRV))
+		return -ENODEV;
+
+	conn = kzalloc(sizeof(*conn), GFP_KERNEL);
+	if (!conn)
+		return -ENOMEM;
+
+	conn->vgdev = vgdev;
+	conn->drm_file = file;
+	spin_lock_init(&conn->lock);
+	INIT_LIST_HEAD(&conn->cmdq);
+	init_waitqueue_head(&conn->cmdwq);
+
+	ret = anon_inode_getfd("[virtgpu_winsrv]", &winsrv_fops, conn,
+			       O_CLOEXEC | O_RDWR);
+	if (ret < 0)
+		goto free_conn;
+
+	conn->fd = ret;
+
+	ret = virtio_gpu_cmd_winsrv_connect(vgdev, conn->fd);
+	if (ret < 0)
+		goto close_fd;
+
+	spin_lock(&vfpriv->winsrv_lock);
+	list_add_tail(&conn->next, &vfpriv->winsrv_conns);
+	spin_unlock(&vfpriv->winsrv_lock);
+
+	args->fd = conn->fd;
+
+	return 0;
+
+close_fd:
+	sys_close(conn->fd);
+
+free_conn:
+	kfree(conn);
+
+	return ret;
+}
+
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 	DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
 			  DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
@@ -558,4 +720,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_GET_CAPS, virtio_gpu_get_caps_ioctl,
 			  DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
+
+	DRM_IOCTL_DEF_DRV(VIRTGPU_WINSRV_CONNECT, virtio_gpu_winsrv_connect,
+			  DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW),
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 6400506a06b0..87b118d4b13c 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -128,13 +128,16 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)
 {
 	static vq_callback_t *callbacks[] = {
-		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
+		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack,
+		virtio_gpu_winsrv_rx_read, virtio_gpu_winsrv_tx_ack
 	};
-	static const char * const names[] = { "control", "cursor" };
+	static const char * const names[] = { "control", "cursor",
+					      "winsrv-rx", "winsrv-tx" };
 
 	struct virtio_gpu_device *vgdev;
 	/* this will expand later */
-	struct virtqueue *vqs[2];
+	struct virtqueue *vqs[4];
+	int nr_queues = 2;
 	u32 num_scanouts, num_capsets;
 	int ret;
 
@@ -158,6 +161,10 @@ int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)
 	init_waitqueue_head(&vgdev->resp_wq);
 	virtio_gpu_init_vq(&vgdev->ctrlq, virtio_gpu_dequeue_ctrl_func);
 	virtio_gpu_init_vq(&vgdev->cursorq, virtio_gpu_dequeue_cursor_func);
+	virtio_gpu_init_vq(&vgdev->winsrv_rxq,
+			   virtio_gpu_dequeue_winsrv_rx_func);
+	virtio_gpu_init_vq(&vgdev->winsrv_txq,
+			   virtio_gpu_dequeue_winsrv_tx_func);
 
 	vgdev->fence_drv.context = dma_fence_context_alloc(1);
 	spin_lock_init(&vgdev->fence_drv.lock);
@@ -175,13 +182,21 @@ int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)
 	DRM_INFO("virgl 3d acceleration not supported by guest\n");
 #endif
 
-	ret = virtio_find_vqs(vgdev->vdev, 2, vqs, callbacks, names, NULL);
+	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_WINSRV))
+		nr_queues += 2;
+
+	ret = virtio_find_vqs(vgdev->vdev, nr_queues, vqs, callbacks, names,
+			      NULL);
 	if (ret) {
 		DRM_ERROR("failed to find virt queues\n");
 		goto err_vqs;
 	}
 	vgdev->ctrlq.vq = vqs[0];
 	vgdev->cursorq.vq = vqs[1];
+	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_WINSRV)) {
+		vgdev->winsrv_rxq.vq = vqs[2];
+		vgdev->winsrv_txq.vq = vqs[3];
+	}
 	ret = virtio_gpu_alloc_vbufs(vgdev);
 	if (ret) {
 		DRM_ERROR("failed to alloc vbufs\n");
@@ -215,6 +230,10 @@ int virtio_gpu_driver_load(struct drm_device *dev, unsigned long flags)
 		goto err_modeset;
 
 	virtio_device_ready(vgdev->vdev);
+
+	if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_WINSRV))
+		virtio_gpu_fill_winsrv_rx(vgdev);
+
 	vgdev->vqs_ready = true;
 
 	if (num_capsets)
@@ -256,6 +275,8 @@ void virtio_gpu_driver_unload(struct drm_device *dev)
 	vgdev->vqs_ready = false;
 	flush_work(&vgdev->ctrlq.dequeue_work);
 	flush_work(&vgdev->cursorq.dequeue_work);
+	flush_work(&vgdev->winsrv_rxq.dequeue_work);
+	flush_work(&vgdev->winsrv_txq.dequeue_work);
 	flush_work(&vgdev->config_changed_work);
 	vgdev->vdev->config->del_vqs(vgdev->vdev);
 
@@ -274,25 +295,43 @@ int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file)
 	uint32_t id;
 	char dbgname[64], tmpname[TASK_COMM_LEN];
 
-	/* can't create contexts without 3d renderer */
-	if (!vgdev->has_virgl_3d)
-		return 0;
-
-	get_task_comm(tmpname, current);
-	snprintf(dbgname, sizeof(dbgname), "%s", tmpname);
-	dbgname[63] = 0;
 	/* allocate a virt GPU context for this opener */
 	vfpriv = kzalloc(sizeof(*vfpriv), GFP_KERNEL);
 	if (!vfpriv)
 		return -ENOMEM;
 
-	virtio_gpu_context_create(vgdev, strlen(dbgname), dbgname, &id);
+	/* can't create contexts without 3d renderer */
+	if (vgdev->has_virgl_3d) {
+		get_task_comm(tmpname, current);
+		snprintf(dbgname, sizeof(dbgname), "%s", tmpname);
+		dbgname[63] = 0;
+
+		virtio_gpu_context_create(vgdev, strlen(dbgname), dbgname, &id);
+
+		vfpriv->ctx_id = id;
+	}
+
+	spin_lock_init(&vfpriv->winsrv_lock);
+	INIT_LIST_HEAD(&vfpriv->winsrv_conns);
 
-	vfpriv->ctx_id = id;
 	file->driver_priv = vfpriv;
+
 	return 0;
 }
 
+static void virtio_gpu_cleanup_conns(struct virtio_gpu_fpriv *vfpriv)
+{
+	struct virtio_gpu_winsrv_conn *conn, *tmp;
+	struct virtio_gpu_winsrv_rx_qentry *qentry, *tmp2;
+
+	list_for_each_entry_safe(conn, tmp, &vfpriv->winsrv_conns, next) {
+		list_for_each_entry_safe(qentry, tmp2, &conn->cmdq, next) {
+			kfree(qentry);
+		}
+		kfree(conn);
+	}
+}
+
 void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct virtio_gpu_device *vgdev = dev->dev_private;
@@ -303,6 +342,7 @@ void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file)
 
 	vfpriv = file->driver_priv;
 
+	virtio_gpu_cleanup_conns(vfpriv);
 	virtio_gpu_context_destroy(vgdev, vfpriv->ctx_id);
 	kfree(vfpriv);
 	file->driver_priv = NULL;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 9eb96fb2c147..ea5f9352d364 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -32,7 +32,7 @@
 #include <linux/virtio_config.h>
 #include <linux/virtio_ring.h>
 
-#define MAX_INLINE_CMD_SIZE   96
+#define MAX_INLINE_CMD_SIZE   144
 #define MAX_INLINE_RESP_SIZE  24
 #define VBUFFER_SIZE          (sizeof(struct virtio_gpu_vbuffer) \
 			       + MAX_INLINE_CMD_SIZE		 \
@@ -72,6 +72,67 @@ void virtio_gpu_cursor_ack(struct virtqueue *vq)
 	schedule_work(&vgdev->cursorq.dequeue_work);
 }
 
+void virtio_gpu_winsrv_rx_read(struct virtqueue *vq)
+{
+	struct drm_device *dev = vq->vdev->priv;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+
+	schedule_work(&vgdev->winsrv_rxq.dequeue_work);
+}
+
+void virtio_gpu_winsrv_tx_ack(struct virtqueue *vq)
+{
+	struct drm_device *dev = vq->vdev->priv;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+
+	schedule_work(&vgdev->winsrv_txq.dequeue_work);
+}
+
+void virtio_gpu_queue_winsrv_rx_in(struct virtio_gpu_device *vgdev,
+				   struct virtio_gpu_winsrv_rx *cmd)
+{
+	struct virtqueue *vq = vgdev->winsrv_rxq.vq;
+	struct scatterlist sg[1];
+	int ret;
+
+	sg_init_one(sg, cmd, sizeof(*cmd));
+
+	spin_lock(&vgdev->winsrv_rxq.qlock);
+retry:
+	ret = virtqueue_add_inbuf(vq, sg, 1, cmd, GFP_KERNEL);
+	if (ret == -ENOSPC) {
+		spin_unlock(&vgdev->winsrv_rxq.qlock);
+		wait_event(vgdev->winsrv_rxq.ack_queue, vq->num_free);
+		spin_lock(&vgdev->winsrv_rxq.qlock);
+		goto retry;
+	}
+	virtqueue_kick(vq);
+	spin_unlock(&vgdev->winsrv_rxq.qlock);
+}
+
+void virtio_gpu_fill_winsrv_rx(struct virtio_gpu_device *vgdev)
+{
+	struct virtqueue *vq = vgdev->winsrv_rxq.vq;
+	struct virtio_gpu_winsrv_rx *cmd;
+	int ret = 0;
+
+	while (vq->num_free > 0) {
+		cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
+		if (!cmd) {
+			ret = -ENOMEM;
+			goto clear_queue;
+		}
+
+		virtio_gpu_queue_winsrv_rx_in(vgdev, cmd);
+	}
+
+	return;
+
+clear_queue:
+	while ((cmd = virtqueue_detach_unused_buf(vq)))
+		kfree(cmd);
+}
+
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev)
 {
 	vgdev->vbufs = kmem_cache_create("virtio-gpu-vbufs",
@@ -258,6 +319,96 @@ void virtio_gpu_dequeue_cursor_func(struct work_struct *work)
 	wake_up(&vgdev->cursorq.ack_queue);
 }
 
+void virtio_gpu_dequeue_winsrv_tx_func(struct work_struct *work)
+{
+	struct virtio_gpu_device *vgdev =
+		container_of(work, struct virtio_gpu_device,
+			     winsrv_txq.dequeue_work);
+	struct virtio_gpu_vbuffer *vbuf;
+	int len;
+
+	spin_lock(&vgdev->winsrv_txq.qlock);
+	do {
+		while ((vbuf = virtqueue_get_buf(vgdev->winsrv_txq.vq, &len)))
+			free_vbuf(vgdev, vbuf);
+	} while (!virtqueue_enable_cb(vgdev->winsrv_txq.vq));
+	spin_unlock(&vgdev->winsrv_txq.qlock);
+
+	wake_up(&vgdev->winsrv_txq.ack_queue);
+}
+
+static struct virtio_gpu_winsrv_conn *find_conn(struct virtio_gpu_device *vgdev,
+						int fd)
+{
+	struct virtio_gpu_winsrv_conn *conn;
+	struct drm_device *ddev = vgdev->ddev;
+	struct drm_file *file;
+	struct virtio_gpu_fpriv *vfpriv;
+
+	mutex_lock(&ddev->filelist_mutex);
+	list_for_each_entry(file, &ddev->filelist, lhead) {
+		vfpriv = file->driver_priv;
+		spin_lock(&vfpriv->winsrv_lock);
+		list_for_each_entry(conn, &vfpriv->winsrv_conns, next) {
+			if (conn->fd == fd) {
+				spin_lock(&conn->lock);
+				spin_unlock(&vfpriv->winsrv_lock);
+				mutex_unlock(&ddev->filelist_mutex);
+				return conn;
+			}
+		}
+		spin_unlock(&vfpriv->winsrv_lock);
+	}
+	mutex_unlock(&ddev->filelist_mutex);
+
+	return NULL;
+}
+
+static void handle_rx_cmd(struct virtio_gpu_device *vgdev,
+			  struct virtio_gpu_winsrv_rx *cmd)
+{
+	struct virtio_gpu_winsrv_conn *conn;
+	struct virtio_gpu_winsrv_rx_qentry *qentry;
+
+	conn = find_conn(vgdev, cmd->client_fd);
+	if (!conn) {
+		DRM_DEBUG("recv for unknown client fd %u\n", cmd->client_fd);
+		return;
+	}
+
+	qentry = kzalloc(sizeof(*qentry), GFP_ATOMIC); /* conn->lock held */
+	if (!qentry) {
+		spin_unlock(&conn->lock);
+		DRM_DEBUG("failed to allocate qentry for winsrv connection\n");
+		return;
+	}
+
+	qentry->cmd = cmd;
+
+	list_add_tail(&qentry->next, &conn->cmdq);
+	wake_up_interruptible(&conn->cmdwq);
+	spin_unlock(&conn->lock);
+}
+
+void virtio_gpu_dequeue_winsrv_rx_func(struct work_struct *work)
+{
+	struct virtio_gpu_device *vgdev =
+		container_of(work, struct virtio_gpu_device,
+			     winsrv_rxq.dequeue_work);
+	struct virtio_gpu_winsrv_rx *cmd;
+	unsigned int len;
+
+	spin_lock(&vgdev->winsrv_rxq.qlock);
+	while ((cmd = virtqueue_get_buf(vgdev->winsrv_rxq.vq, &len)) != NULL) {
+		spin_unlock(&vgdev->winsrv_rxq.qlock);
+		handle_rx_cmd(vgdev, cmd);
+		spin_lock(&vgdev->winsrv_rxq.qlock);
+	}
+	spin_unlock(&vgdev->winsrv_rxq.qlock);
+
+	virtqueue_kick(vgdev->winsrv_rxq.vq);
+}
+
 static int virtio_gpu_queue_ctrl_buffer_locked(struct virtio_gpu_device *vgdev,
 					       struct virtio_gpu_vbuffer *vbuf)
 		__releases(&vgdev->ctrlq.qlock)
@@ -380,6 +531,41 @@ static int virtio_gpu_queue_cursor(struct virtio_gpu_device *vgdev,
 	return ret;
 }
 
+static int virtio_gpu_queue_winsrv_tx(struct virtio_gpu_device *vgdev,
+				      struct virtio_gpu_vbuffer *vbuf)
+{
+	struct virtqueue *vq = vgdev->winsrv_txq.vq;
+	struct scatterlist *sgs[2], vcmd, vout;
+	int ret;
+
+	if (!vgdev->vqs_ready)
+		return -ENODEV;
+
+	sg_init_one(&vcmd, vbuf->buf, vbuf->size);
+	sgs[0] = &vcmd;
+
+	sg_init_one(&vout, vbuf->data_buf, vbuf->data_size);
+	sgs[1] = &vout;
+
+	spin_lock(&vgdev->winsrv_txq.qlock);
+retry:
+	ret = virtqueue_add_sgs(vq, sgs, 2, 0, vbuf, GFP_ATOMIC);
+	if (ret == -ENOSPC) {
+		spin_unlock(&vgdev->winsrv_txq.qlock);
+		wait_event(vgdev->winsrv_txq.ack_queue, vq->num_free);
+		spin_lock(&vgdev->winsrv_txq.qlock);
+		goto retry;
+	}
+
+	virtqueue_kick(vq);
+
+	spin_unlock(&vgdev->winsrv_txq.qlock);
+
+	if (!ret)
+		ret = vq->num_free;
+	return ret;
+}
+
 /* just create gem objects for userspace and long lived objects,
    just use dma_alloced pages for the queue objects? */
 
@@ -890,3 +1076,100 @@ void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 	memcpy(cur_p, &output->cursor, sizeof(output->cursor));
 	virtio_gpu_queue_cursor(vgdev, vbuf);
 }
+
+int virtio_gpu_cmd_winsrv_connect(struct virtio_gpu_device *vgdev, int fd)
+{
+	struct virtio_gpu_winsrv_connect *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_WINSRV_CONNECT);
+	cmd_p->client_fd = cpu_to_le32(fd);
+	return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+}
+
+void virtio_gpu_cmd_winsrv_disconnect(struct virtio_gpu_device *vgdev, int fd)
+{
+	struct virtio_gpu_winsrv_disconnect *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_WINSRV_DISCONNECT);
+	cmd_p->client_fd = cpu_to_le32(fd);
+	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+}
+
+int virtio_gpu_cmd_winsrv_tx(struct virtio_gpu_device *vgdev,
+			     const char __user *buffer, u32 len,
+			     int *fds, struct virtio_gpu_winsrv_conn *conn,
+			     bool nonblock)
+{
+	int client_fd = conn->fd;
+	struct drm_file *file = conn->drm_file;
+	struct virtio_gpu_winsrv_tx *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+	uint32_t gem_handle;
+	struct drm_gem_object *gobj = NULL;
+	struct virtio_gpu_object *qobj = NULL;
+	int ret, i, fd;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_WINSRV_TX);
+
+	for (i = 0; i < VIRTIO_GPU_WINSRV_MAX_ALLOCS; i++) {
+		cmd_p->resource_ids[i] = -1;
+
+		fd = fds[i];
+		if (fd < 0)
+			break;
+
+		ret = drm_gem_prime_fd_to_handle(vgdev->ddev, file, fd,
+						 &gem_handle);
+		if (ret != 0)
+			goto err_free_vbuf;
+
+		gobj = drm_gem_object_lookup(file, gem_handle);
+		if (gobj == NULL) {
+			ret = -ENOENT;
+			goto err_free_vbuf;
+		}
+
+		qobj = gem_to_virtio_gpu_obj(gobj);
+		cmd_p->resource_ids[i] = qobj->hw_res_handle;
+	}
+
+	cmd_p->client_fd = client_fd;
+	cmd_p->len = cpu_to_le32(len);
+
+	/* gets freed when the ring has consumed it */
+	vbuf->data_buf = kmalloc(len, GFP_KERNEL);
+	if (!vbuf->data_buf) {
+		DRM_ERROR("failed to allocate winsrv tx buffer\n");
+		ret = -ENOMEM;
+		goto err_free_vbuf;
+	}
+
+	vbuf->data_size = len;
+
+	if (copy_from_user(vbuf->data_buf, buffer, len)) {
+		ret = -EFAULT;
+		goto err_free_databuf;
+	}
+
+	virtio_gpu_queue_winsrv_tx(vgdev, vbuf);
+
+	return 0;
+
+err_free_databuf:
+	kfree(vbuf->data_buf);
+err_free_vbuf:
+	free_vbuf(vgdev, vbuf);
+
+	return ret;
+}
diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 91a31ffed828..89b0a1a707a7 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -46,6 +46,11 @@ extern "C" {
 #define DRM_VIRTGPU_TRANSFER_TO_HOST 0x07
 #define DRM_VIRTGPU_WAIT     0x08
 #define DRM_VIRTGPU_GET_CAPS  0x09
+#define DRM_VIRTGPU_WINSRV_CONNECT  0x0a
+#define DRM_VIRTGPU_WINSRV_TX  0x0b
+#define DRM_VIRTGPU_WINSRV_RX  0x0c
+
+#define VIRTGPU_WINSRV_MAX_ALLOCS 28
 
 struct drm_virtgpu_map {
 	__u64 offset; /* use for mmap system call */
@@ -132,6 +137,18 @@ struct drm_virtgpu_get_caps {
 	__u32 pad;
 };
 
+struct drm_virtgpu_winsrv {
+	__s32 fds[VIRTGPU_WINSRV_MAX_ALLOCS];
+	__u64 data;
+	__u32 len;
+	__u32 pad;
+};
+
+struct drm_virtgpu_winsrv_connect {
+	__u32 fd;   /* returned by kernel */
+	__u32 pad;
+};
+
 #define DRM_IOCTL_VIRTGPU_MAP \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)
 
@@ -167,6 +184,18 @@ struct drm_virtgpu_get_caps {
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_GET_CAPS, \
 	struct drm_virtgpu_get_caps)
 
+#define DRM_IOCTL_VIRTGPU_WINSRV_CONNECT \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_WINSRV_CONNECT, \
+		struct drm_virtgpu_winsrv_connect)
+
+#define DRM_IOCTL_VIRTGPU_WINSRV_TX \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_WINSRV_TX, \
+		struct drm_virtgpu_winsrv)
+
+#define DRM_IOCTL_VIRTGPU_WINSRV_RX \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_WINSRV_RX, \
+		struct drm_virtgpu_winsrv)
+
 #if defined(__cplusplus)
 }
 #endif
diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index 4b04ead26cd9..3567f84d03e9 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -41,6 +41,7 @@
 #include <linux/types.h>
 
 #define VIRTIO_GPU_F_VIRGL 0
+#define VIRTIO_GPU_F_WINSRV 1
 
 enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_UNDEFINED = 0,
@@ -71,6 +72,12 @@ enum virtio_gpu_ctrl_type {
 	VIRTIO_GPU_CMD_UPDATE_CURSOR = 0x0300,
 	VIRTIO_GPU_CMD_MOVE_CURSOR,
 
+	/* window server commands */
+	VIRTIO_GPU_CMD_WINSRV_CONNECT = 0x0400,
+	VIRTIO_GPU_CMD_WINSRV_DISCONNECT,
+	VIRTIO_GPU_CMD_WINSRV_TX,
+	VIRTIO_GPU_CMD_WINSRV_RX,
+
 	/* success responses */
 	VIRTIO_GPU_RESP_OK_NODATA = 0x1100,
 	VIRTIO_GPU_RESP_OK_DISPLAY_INFO,
@@ -290,6 +297,42 @@ struct virtio_gpu_resp_capset {
 	__u8 capset_data[];
 };
 
+/* VIRTIO_GPU_CMD_WINSRV_CONNECT */
+struct virtio_gpu_winsrv_connect {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 client_fd;
+	__le32 padding;
+};
+
+/* VIRTIO_GPU_CMD_WINSRV_DISCONNECT */
+struct virtio_gpu_winsrv_disconnect {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 client_fd;
+	__le32 padding;
+};
+
+#define VIRTIO_GPU_WINSRV_MAX_ALLOCS 28
+#define VIRTIO_GPU_WINSRV_TX_MAX_DATA 4096
+
+/* VIRTIO_GPU_CMD_WINSRV_TX */
+/* these commands are followed in the queue descriptor by protocol buffers */
+struct virtio_gpu_winsrv_tx {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__u32 client_fd;
+	__u32 len;
+	__le32 resource_ids[VIRTIO_GPU_WINSRV_MAX_ALLOCS];
+};
+
+/* VIRTIO_GPU_CMD_WINSRV_RX */
+struct virtio_gpu_winsrv_rx {
+	struct virtio_gpu_ctrl_hdr hdr;
+	__le32 client_fd;
+	__u8 data[VIRTIO_GPU_WINSRV_TX_MAX_DATA];
+	__u32 len;
+	__u64 pfns[VIRTIO_GPU_WINSRV_MAX_ALLOCS];
+	__u32 lens[VIRTIO_GPU_WINSRV_MAX_ALLOCS];
+};
+
 #define VIRTIO_GPU_EVENT_DISPLAY (1 << 0)
 
 struct virtio_gpu_config {
-- 
2.14.3


* [PATCH v3 2/2] drm/virtio: Handle buffers from the compositor
  2018-01-26 13:58 [PATCH v3 0/2] drm/virtio: Add window server support Tomeu Vizoso
  2018-01-26 13:58 ` [PATCH v3 1/2] " Tomeu Vizoso
@ 2018-01-26 13:58 ` Tomeu Vizoso
  1 sibling, 0 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2018-01-26 13:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tomeu Vizoso, David Airlie, dri-devel, virtualization,
	Zach Reizner, kernel

When retrieving queued messages from the compositor in the host for
clients in the guest, handle buffers that may be passed.

These buffers should have been mapped to the guest's address space, for
example via the KVM_SET_USER_MEMORY_REGION ioctl.
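
As a rough sketch of what that host-side mapping could look like (the
slot number and guest physical address below are placeholders that the
VMM's memory allocator would choose; error handling is trimmed):

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#include <linux/kvm.h>

/*
 * Map a compositor-provided buffer FD into the guest's physical address
 * space; guest_phys_addr >> PAGE_SHIFT would then be reported back in
 * the pfns[] array of VIRTIO_GPU_CMD_WINSRV_RX.
 */
static int map_compositor_buffer(int vm_fd, int buf_fd, __u64 size,
				 __u64 guest_phys_addr, __u32 slot)
{
	struct kvm_userspace_memory_region region;
	void *hva;

	hva = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, buf_fd, 0);
	if (hva == MAP_FAILED)
		return -1;

	region.slot = slot;
	region.flags = 0;
	region.guest_phys_addr = guest_phys_addr;
	region.memory_size = size;
	region.userspace_addr = (uintptr_t)hva;

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}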

Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
---

 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 54 ++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index d4230b1fa91d..57b1ad51d251 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -545,14 +545,58 @@ static unsigned int winsrv_poll(struct file *filp,
 	return mask;
 }
 
+struct virtio_gpu_winsrv_region {
+	uint64_t pfn;
+	size_t size;
+};
+
+static int winsrv_fd_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct virtio_gpu_winsrv_region *region = filp->private_data;
+	unsigned long vm_size = vma->vm_end - vma->vm_start;
+	int ret = 0;
+
+	if (vm_size +
+	    (vma->vm_pgoff << PAGE_SHIFT) > PAGE_ALIGN(region->size))
+		return -EINVAL;
+
+	ret = io_remap_pfn_range(vma, vma->vm_start, region->pfn, vm_size,
+				 vma->vm_page_prot);
+	if (ret)
+		return ret;
+
+	vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+
+	return ret;
+}
+
+static int winsrv_fd_release(struct inode *inodep, struct file *filp)
+{
+	struct virtio_gpu_winsrv_region *region = filp->private_data;
+
+	kfree(region);
+
+	return 0;
+}
+
+static const struct file_operations winsrv_fd_fops = {
+	.mmap = winsrv_fd_mmap,
+	.release = winsrv_fd_release,
+};
+
 static int winsrv_ioctl_rx(struct virtio_gpu_device *vgdev,
 			   struct virtio_gpu_winsrv_conn *conn,
 			   struct drm_virtgpu_winsrv *cmd)
 {
 	struct virtio_gpu_winsrv_rx_qentry *qentry, *tmp;
 	struct virtio_gpu_winsrv_rx *virtio_cmd;
+	struct virtio_gpu_winsrv_region *region;
 	int available_len = cmd->len;
 	int read_count = 0;
+	int i;
+
+	for (i = 0; i < VIRTGPU_WINSRV_MAX_ALLOCS; i++)
+		cmd->fds[i] = -1;
 
 	list_for_each_entry_safe(qentry, tmp, &conn->cmdq, next) {
 		virtio_cmd = qentry->cmd;
@@ -567,6 +611,16 @@ static int winsrv_ioctl_rx(struct virtio_gpu_device *vgdev,
 				return -EFAULT;
 		}
 
+		for (i = 0; virtio_cmd->pfns[i]; i++) {
+			region = kmalloc(sizeof(*region), GFP_KERNEL);
+			region->pfn = virtio_cmd->pfns[i];
+			region->size = virtio_cmd->lens[i];
+			cmd->fds[i] = anon_inode_getfd("[winsrv_fd]",
+						       &winsrv_fd_fops,
+						       region,
+						       O_CLOEXEC | O_RDWR);
+		}
+
 		available_len -= virtio_cmd->len;
 		read_count += virtio_cmd->len;
 
-- 
2.14.3


* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-01-26 13:58 ` [PATCH v3 1/2] " Tomeu Vizoso
@ 2018-02-01 16:36   ` Gerd Hoffmann
  2018-02-05  8:19     ` Tomeu Vizoso
  0 siblings, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-01 16:36 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: linux-kernel, Zach Reizner, kernel, dri-devel, virtualization,
	Michael S. Tsirkin, David Airlie, Jason Wang

  Hi,

Sorry for joining the party late.  Had a broken finger and was offline
for a bunch of weeks (and a big backlog afterwards ...).

> This is to allow clients running within VMs to be able to communicate
> with a compositor in the host. Clients will use the communication
> protocol that the compositor supports, and virtio-gpu will assist with
> making buffers available in both sides, and copying content as needed.

Why not use virtio-vsock to run the wayland protocol?  I don't like the
idea of duplicating something with very similar functionality in
virtio-gpu.

> It is expected that a service in the guest will act as a proxy,
> interacting with virtio-gpu to support unmodified clients.

If you have a guest proxy anyway, using virtio-vsock for the protocol
stream and virtio-gpu for buffer sharing (and some day 3d rendering too)
should work fine, I think.

> When the client notifies the compositor that it can read from that buffer,
> the proxy should copy the contents from the SHM region to the virtio-gpu
> resource and call DRM_VIRTGPU_TRANSFER_TO_HOST.

What is the plan for the host side?  I see basically two options.
Either implement the host wayland proxy directly in qemu.  Or implement
it as a separate process, which then needs some help from qemu to get
access to the buffers.  The latter would allow qemu to run independently
from the desktop session.

cheers,
  Gerd


* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-01 16:36   ` Gerd Hoffmann
@ 2018-02-05  8:19     ` Tomeu Vizoso
  2018-02-05 12:20       ` Gerd Hoffmann
  2019-03-18 12:47       ` Tomeu Vizoso
  0 siblings, 2 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-05  8:19 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Jason Wang, linux-kernel,
	dri-devel, virtualization, kernel

On 1 February 2018 at 17:36, Gerd Hoffmann <kraxel@redhat.com> wrote:
> Hi,
> 
> Sorry for joining the party late. Had a broken finger and was
> offline for a bunch of weeks (and a buif backlog afterwards ...).

Hi, no problem, hope it's fine now.

>> This is to allow clients running within VMs to be able to
>> communicate with a compositor in the host. Clients will use the
>> communication protocol that the compositor supports, and virtio-gpu
>> will assist with making buffers available in both sides, and
>> copying content as needed.
> 
> Why not use virtio-vsock to run the wayland protocol? I don't like
> the idea to duplicate something with very simliar functionality in 
> virtio-gpu.

The reason for abandoning that approach was that the type of objects
that could be shared via virtio-vsock would be extremely limited. Besides
that being potentially confusing to users, it would mean from the
implementation side that either virtio-vsock would gain a dependency on
the drm subsystem, or an appropriate abstraction for shareable buffers
would need to be added for little gain.

Another factor that was taken into account was that the complexity
required to implement passing protocol data around was very small
compared with the buffer sharing mechanism.

>> It is expected that a service in the guest will act as a proxy, 
>> interacting with virtio-gpu to support unmodified clients.
> 
> If you have a guest proxy anyway using virtio-sock for the protocol 
> stream and virtio-gpu for buffer sharing (and some day 3d rendering
> too) should work fine I think.

If I understand correctly your proposal, virtio-gpu would be used for
creating buffers that could be shared across domains, but something
equivalent to SCM_RIGHTS would still be needed in virtio-vsock?

If so, that's what was planned initially, with the concern being that we
would be adding a bunch of complexity to virtio-vsock that would be only
used in this specific use case. Then we would also need to figure out
how virtio-vsock would be able to work with buffers from virtio-gpu
(either direct dependency or a new abstraction).

If the mechanics of passing presentation data were very complex, I think
this approach would have more merit. But as you can see from the code,
it isn't that bad.

>> When the client notifies the compositor that it can read from that
>> buffer,
>> the proxy should copy the contents from the SHM region to the
>> virtio-gpu resource and call DRM_VIRTGPU_TRANSFER_TO_HOST.
> 
> What is the plan for the host side? I see basically two options. 
> Either implement the host wayland proxy directly in qemu. Or
> implement it as separate process, which then needs some help from
> qemu to get access to the buffers. The later would allow qemu running
> independant from the desktop session.

Regarding synchronizing buffers, this will stop being needed in
subsequent commits, as all shared memory is allocated in the host and
mapped to the guest via KVM_SET_USER_MEMORY_REGION.

This is already the case for buffers passed from the compositor to the
clients (see patch 2/2), and I'm working on the equivalent for buffers
from the guest to the host (clients still have to create buffers with
DRM_VIRTGPU_RESOURCE_CREATE but they will be only backed by host memory
so no calls to DRM_VIRTGPU_TRANSFER_TO_HOST are needed).

But in the case that we still need a proxy for some reason on the host
side, I think it would be better to have it in the same process where
virtio-gpu is implemented. In crosvm's case it would be in a process
separate from the main VMM, as device processes are isolated from each
other with minijail (see
https://chromium.googlesource.com/chromiumos/platform/crosvm/ ).

Regards,

Tomeu


* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-05  8:19     ` Tomeu Vizoso
@ 2018-02-05 12:20       ` Gerd Hoffmann
  2018-02-05 14:46         ` Tomeu Vizoso
  2019-03-18 12:47       ` Tomeu Vizoso
  1 sibling, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-05 12:20 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: linux-kernel, Zach Reizner, kernel, dri-devel, virtualization,
	Michael S. Tsirkin, David Airlie, Jason Wang

  Hi,

> > Why not use virtio-vsock to run the wayland protocol? I don't like
> > the idea to duplicate something with very simliar functionality in
> > virtio-gpu.
> 
> The reason for abandoning that approach was the type of objects that
> could be shared via virtio-vsock would be extremely limited. Besides
> that being potentially confusing to users, it would mean from the
> implementation side that either virtio-vsock would gain a dependency on
> the drm subsystem, or an appropriate abstraction for shareable buffers
> would need to be added for little gain.

Well, no.  The idea is that virtio-vsock and virtio-gpu are used largely
as-is, without knowing about each other.  The guest wayland proxy which
does the buffer management talks to both devices.

> > If you have a guest proxy anyway using virtio-sock for the protocol
> > stream and virtio-gpu for buffer sharing (and some day 3d rendering
> > too) should work fine I think.
> 
> If I understand correctly your proposal, virtio-gpu would be used for
> creating buffers that could be shared across domains, but something
> equivalent to SCM_RIGHTS would still be needed in virtio-vsock?

Yes, the proxy would send a reference to the buffer over virtio-vsock.
I was more thinking about a struct specifying something like
"ressource-id 42 on virtio-gpu-pci device in slot 1:23.0" instead of
using SCM_RIGHTS.
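
Purely to illustrate the idea (nothing like this exists today, all names
and fields below are made up), such a reference travelling over the
vsock stream could be a small struct along these lines:

/* hypothetical wire format, for discussion only */
struct winsrv_buffer_ref {
	__u16 domain;		/* PCI address of the virtio-gpu device */
	__u8  bus;
	__u8  devfn;		/* e.g. slot 23, function 0 */
	__u32 resource_id;	/* e.g. 42, as allocated by virtio-gpu */
	__u64 offset;		/* region within the resource */
	__u64 length;
};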

> If the mechanics of passing presentation data were very complex, I think
> this approach would have more merit. But as you can see from the code,
> it isn't that bad.

Well, the devil is in the details.  If you have multiple connections you
don't want one being able to stall the others, for example.  There are
reasons it took quite a while to bring virtio-vsock to the state where it
is today.

> > What is the plan for the host side? I see basically two options. Either
> > implement the host wayland proxy directly in qemu. Or
> > implement it as separate process, which then needs some help from
> > qemu to get access to the buffers. The later would allow qemu running
> > independant from the desktop session.
> 
> Regarding synchronizing buffers, this will stop becoming needed in
> subsequent commits as all shared memory is allocated in the host and
> mapped to the guest via KVM_SET_USER_MEMORY_REGION.

--verbose please.  The qemu patches linked from the cover letter are not
exactly helpful in understanding how all this is supposed to work.

> This is already the case for buffers passed from the compositor to the
> clients (see patch 2/2), and I'm working on the equivalent for buffers
> from the guest to the host (clients still have to create buffers with
> DRM_VIRTGPU_RESOURCE_CREATE but they will be only backend by host memory
> so no calls to DRM_VIRTGPU_TRANSFER_TO_HOST are needed).

Same here.  --verbose please.

cheers,
  Gerd


* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-05 12:20       ` Gerd Hoffmann
@ 2018-02-05 14:46         ` Tomeu Vizoso
  2018-02-05 16:03           ` Gerd Hoffmann
  0 siblings, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-05 14:46 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On 02/05/2018 01:20 PM, Gerd Hoffmann wrote:
>    Hi,
> 
>>> Why not use virtio-vsock to run the wayland protocol? I don't like
>>> the idea to duplicate something with very simliar functionality in
>>> virtio-gpu.
>>
>> The reason for abandoning that approach was the type of objects that
>> could be shared via virtio-vsock would be extremely limited. Besides
>> that being potentially confusing to users, it would mean from the
>> implementation side that either virtio-vsock would gain a dependency on
>> the drm subsystem, or an appropriate abstraction for shareable buffers
>> would need to be added for little gain.
> 
> Well, no.  The idea is that virtio-vsock and virtio-gpu are used largely
> as-is, without knowing about each other.  The guest wayland proxy which
> does the buffer management talks to both devices.

Note that the proxy won't know anything about buffers if clients opt-in 
for zero-copy support (they allocate the buffers in a way that allows 
for sharing with the host).

>>> If you have a guest proxy anyway using virtio-sock for the protocol
>>> stream and virtio-gpu for buffer sharing (and some day 3d rendering
>>> too) should work fine I think.
>>
>> If I understand correctly your proposal, virtio-gpu would be used for
>> creating buffers that could be shared across domains, but something
>> equivalent to SCM_RIGHTS would still be needed in virtio-vsock?
> 
> Yes, the proxy would send a reference to the buffer over virtio-vsock.
> I was more thinking about a struct specifying something like
> "ressource-id 42 on virtio-gpu-pci device in slot 1:23.0" instead of
> using SCM_RIGHTS.

Can you extend on this? I'm having trouble figuring out how this could 
work in a way that keeps protocol data together with the resources it 
refers to.

>> If the mechanics of passing presentation data were very complex, I think
>> this approach would have more merit. But as you can see from the code,
>> it isn't that bad.
> 
> Well, the devil is in the details.  If you have multiple connections you
> don't want one being able to stall the others for example.  There are
> reasons took quite a while to bring virtio-vsock to the state where it
> is today.

Yes, but at the same time there are use cases that virtio-vsock has to 
support but aren't important in this scenario.

>>> What is the plan for the host side? I see basically two options. Either
>>> implement the host wayland proxy directly in qemu. Or
>>> implement it as separate process, which then needs some help from
>>> qemu to get access to the buffers. The later would allow qemu running
>>> independant from the desktop session.
>>
>> Regarding synchronizing buffers, this will stop becoming needed in
>> subsequent commits as all shared memory is allocated in the host and
>> mapped to the guest via KVM_SET_USER_MEMORY_REGION.
>
> --verbose please.  The qemu patches linked from the cover letter not
> exactly helpful in understanding how all this is supposed to work.

A client will allocate a buffer with DRM_VIRTGPU_RESOURCE_CREATE, export 
it and pass the FD to the compositor (via the proxy).

During resource creation, QEMU would allocate a shmem buffer and map it 
into the guest with KVM_SET_USER_MEMORY_REGION.

The client would mmap that resource and render to it. Because it's 
backed by host memory, the compositor would be able to read it without 
any further copies.

>> This is already the case for buffers passed from the compositor to the
>> clients (see patch 2/2), and I'm working on the equivalent for buffers
>> from the guest to the host (clients still have to create buffers with
>> DRM_VIRTGPU_RESOURCE_CREATE but they will be only backend by host memory
>> so no calls to DRM_VIRTGPU_TRANSFER_TO_HOST are needed).
> 
> Same here.  --verbose please.

When a FD comes from the compositor, QEMU mmaps it and maps that virtual 
address to the guest via KVM_SET_USER_MEMORY_REGION.

When the guest proxy reads from the winsrv socket, it will get a FD that 
wraps the buffer referenced above.

When the client reads from the guest proxy, it would get a FD that 
references that same buffer and would mmap it. At that point, the client 
is reading from the same physical pages where the compositor wrote to.

To be clear, I'm not against solving this via some form of restricted FD 
passing in virtio-vsock, but Stefan (added to CC) thought that it would 
be cleaner to do it all within virtio-gpu. This is the thread where it 
was discussed:

https://lkml.kernel.org/r/<2d73a3e1-af70-83a1-0e84-98b5932ea20c@collabora.com>

Thanks,

Tomeu


* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-05 14:46         ` Tomeu Vizoso
@ 2018-02-05 16:03           ` Gerd Hoffmann
  2018-02-06 12:41             ` Tomeu Vizoso
  2018-02-06 15:00             ` Pekka Paalanen
  0 siblings, 2 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-05 16:03 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On Mon, Feb 05, 2018 at 03:46:17PM +0100, Tomeu Vizoso wrote:
> On 02/05/2018 01:20 PM, Gerd Hoffmann wrote:
> >    Hi,
> > 
> > > > Why not use virtio-vsock to run the wayland protocol? I don't like
> > > > the idea to duplicate something with very simliar functionality in
> > > > virtio-gpu.
> > > 
> > > The reason for abandoning that approach was the type of objects that
> > > could be shared via virtio-vsock would be extremely limited. Besides
> > > that being potentially confusing to users, it would mean from the
> > > implementation side that either virtio-vsock would gain a dependency on
> > > the drm subsystem, or an appropriate abstraction for shareable buffers
> > > would need to be added for little gain.
> > 
> > Well, no.  The idea is that virtio-vsock and virtio-gpu are used largely
> > as-is, without knowing about each other.  The guest wayland proxy which
> > does the buffer management talks to both devices.
> 
> Note that the proxy won't know anything about buffers if clients opt-in for
> zero-copy support (they allocate the buffers in a way that allows for
> sharing with the host).

Hmm?  I'm assuming the wayland client (in the guest) talks to the
wayland proxy, using the wayland protocol, like it would talk to a
wayland display server.  Buffers must be passed from client to
server/proxy somehow, probably using fd passing, so where is the
problem?

Or did I misunderstand the role of the proxy?

> > > > If you have a guest proxy anyway using virtio-sock for the protocol
> > > > stream and virtio-gpu for buffer sharing (and some day 3d rendering
> > > > too) should work fine I think.
> > > 
> > > If I understand correctly your proposal, virtio-gpu would be used for
> > > creating buffers that could be shared across domains, but something
> > > equivalent to SCM_RIGHTS would still be needed in virtio-vsock?
> > 
> > Yes, the proxy would send a reference to the buffer over virtio-vsock.
> > I was more thinking about a struct specifying something like
> > "ressource-id 42 on virtio-gpu-pci device in slot 1:23.0" instead of
> > using SCM_RIGHTS.
> 
> Can you extend on this? I'm having trouble figuring out how this could work
> in a way that keeps protocol data together with the resources it refers to.

Don't know much about the wayland protocol.  Assuming you are passing
buffers as file handles, together with some information about what kind of
buffer this is (sysv shm, dma-buf, ...).

We have a proxy on both ends.  One running in the guest, one on the host
(be it qemu or some external one).  So these two have to agree on how to
pass buffers from one to the other.  One way would be to have them talk
a simple meta protocol to each other, with "here comes a chunk of wayland
protocol to pass along" and "here is a buffer mgmt message".  Possibly
it is better to extend the wayland protocol to also cover this new kind
of buffer, so you don't need the meta protocol.

The proxies would talk normal wayland protocol to the client (in the
guest) and the server (on the host).  They will have to transform the
buffer into something they can pass along using the wayland protocol.

> > > > What is the plan for the host side? I see basically two options. Either
> > > > implement the host wayland proxy directly in qemu. Or
> > > > implement it as separate process, which then needs some help from
> > > > qemu to get access to the buffers. The later would allow qemu running
> > > > independant from the desktop session.
> > > 
> > > Regarding synchronizing buffers, this will stop becoming needed in
> > > subsequent commits as all shared memory is allocated in the host and
> > > mapped to the guest via KVM_SET_USER_MEMORY_REGION.
> > 
> > --verbose please.  The qemu patches linked from the cover letter not
> > exactly helpful in understanding how all this is supposed to work.
> 
> A client will allocate a buffer with DRM_VIRTGPU_RESOURCE_CREATE, export it
> and pass the FD to the compositor (via the proxy).
> 
> During resource creation, QEMU would allocate a shmem buffer and map it into
> the guest with KVM_SET_USER_MEMORY_REGION.

So the buffer magically shows up somewhere in the physical address space
of the guest?  That kind of magic usually isn't a very good idea.

> When a FD comes from the compositor, QEMU mmaps it and maps that virtual
> address to the guest via KVM_SET_USER_MEMORY_REGION.
> 
> When the guest proxy reads from the winsrv socket, it will get a FD that
> wraps the buffer referenced above.
> 
> When the client reads from the guest proxy, it would get a FD that
> references that same buffer and would mmap it. At that point, the client is
> reading from the same physical pages where the compositor wrote to.

Hmm.  I always assumed the wayland client allocates the buffers, not
the server.  Is that wrong?

What is your plan for 3d acceleration support?

> To be clear, I'm not against solving this via some form of restricted FD
> passing in virtio-vsock, but Stefan (added to CC) thought that it would be
> cleaner to do it all within virtio-gpu.

Well, when targeting 3d acceleration it makes a lot of sense to use
virtio-gpu.  And it makes sense to have 2d and 3d modes work as similarly
as possible.  That is not the direction you are taking with your
proposal though ...

If you don't plan for 3d support I'm wondering whether virtio-gpu is a
good pick.  Mapping trickery aside, you wouldn't get linear buffers
which can easily be shared between host and guest, because guest buffers
are not required to be linear in guest physical memory.  One copy will
be needed, from (scattered) guest physical memory buffer to (linear)
host buffer.

One possible alternative would be to build on stdvga.  It has a pci
memory bar, it has a drm driver (bochs) which allows allocating drm
buffers in that bar.  They are linear buffers in both guest physical and
host virtual memory.  If we add an option to qemu to allocate the memory
bar in sysv shared memory it can easily be exported to other processes
on the host.  The wayland client in the guest can map it directly too,
it only needs to create a drm buffer and mmap it.  You can get zero-copy
without having to play mapping tricks.

cheers,
  Gerd



* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-05 16:03           ` Gerd Hoffmann
@ 2018-02-06 12:41             ` Tomeu Vizoso
  2018-02-06 14:23               ` Gerd Hoffmann
  2018-02-06 15:00             ` Pekka Paalanen
  1 sibling, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-06 12:41 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On 02/05/2018 05:03 PM, Gerd Hoffmann wrote:
> On Mon, Feb 05, 2018 at 03:46:17PM +0100, Tomeu Vizoso wrote:
>> On 02/05/2018 01:20 PM, Gerd Hoffmann wrote:
>>>     Hi,
>>>
>>>>> Why not use virtio-vsock to run the wayland protocol? I don't like
>>>>> the idea to duplicate something with very simliar functionality in
>>>>> virtio-gpu.
>>>>
>>>> The reason for abandoning that approach was the type of objects that
>>>> could be shared via virtio-vsock would be extremely limited. Besides
>>>> that being potentially confusing to users, it would mean from the
>>>> implementation side that either virtio-vsock would gain a dependency on
>>>> the drm subsystem, or an appropriate abstraction for shareable buffers
>>>> would need to be added for little gain.
>>>
>>> Well, no.  The idea is that virtio-vsock and virtio-gpu are used largely
>>> as-is, without knowing about each other.  The guest wayland proxy which
>>> does the buffer management talks to both devices.
>>
>> Note that the proxy won't know anything about buffers if clients opt-in for
>> zero-copy support (they allocate the buffers in a way that allows for
>> sharing with the host).
> 
> Hmm?  I'm assuming the wayland client (in the guest) talks to the
> wayland proxy, using the wayland protocol, like it would talk to a
> wayland display server.  Buffers must be passed from client to
> server/proxy somehow, probably using fd passing, so where is the
> problem?
> 
> Or did I misunderstand the role of the proxy?

Hi Gerd,

it's starting to look to me like we're talking a bit past each other, so 
I have pasted below a few words describing my current plan regarding the 
3 key scenarios that I'm addressing.

I mention below KVM_SET_USER_MEMORY_REGION, but I guess we can discuss 
alternatives such as the one you are proposing using PCI BARs at a later 
stage.

I really think that whatever we come up with needs to support 3D clients 
as well.


Creation of shareable buffer by guest
-------------------------------------------------

1. Client requests virtio driver to create a buffer suitable for sharing 
with host (DRM_VIRTGPU_RESOURCE_CREATE)

2. Virtio driver creates a new resource ID and passes the request to 
QEMU (VIRTIO_GPU_CMD_RESOURCE_CREATE_2D)

3. QEMU creates a shmem file (for example with mkostemp), associates 
that FD with the ID of this resource

4. QEMU maps that buffer to the guest's address space 
(KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver

5. DRM_VIRTGPU_RESOURCE_CREATE returns the resource id just created

6. Client mmaps it with DRM_IOCTL_VIRTGPU_MAP+mmap and can render to it

7. Client gets a FD with DRM_IOCTL_PRIME_HANDLE_TO_FD that can be sent 
around (see the sketch below)
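
For reference, a minimal sketch of the client side of this flow (steps 1, 
6 and 7), assuming the existing virtgpu and PRIME ioctls and installed 
kernel uapi headers (include paths may differ with libdrm); the 
target/format/bind values below are just placeholders that a real client 
would fill in for its pixel format:

    /* sketch: guest client allocating a shareable buffer and exporting it */
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <drm/drm.h>
    #include <drm/virtgpu_drm.h>

    static int create_and_export(int drm_fd, uint32_t width, uint32_t height)
    {
        struct drm_virtgpu_resource_create create = {
            .target = 2,                 /* placeholder: 2D texture target */
            .format = 1,                 /* placeholder pixel format */
            .bind   = (1 << 1),          /* placeholder bind flag (render target) */
            .width  = width,
            .height = height,
            .depth  = 1,
            .array_size = 1,
            .size   = width * height * 4,
            .stride = width * 4,
        };
        if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &create))
            return -1;

        /* step 6: get an mmap offset for the BO and map it */
        struct drm_virtgpu_map map = { .handle = create.bo_handle };
        if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MAP, &map))
            return -1;
        void *ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, drm_fd, map.offset);
        if (ptr == MAP_FAILED)
            return -1;
        memset(ptr, 0, create.size);     /* client renders here */

        /* step 7: export the BO as a FD that can be sent around */
        struct drm_prime_handle prime = {
            .handle = create.bo_handle,
            .flags  = DRM_CLOEXEC,
        };
        if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime))
            return -1;
        return prime.fd;
    }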


Send of shareable buffer by guest
---------------------------------------------

1. Client sends the host a message that refers to this buffer, passing 
the FD using SCM_RIGHTS (fd passing sketched after this list)

2. Guest proxy passes the message (serialized data + FDs) on to the 
virtio driver responsible for winsrv support

3. virtio driver puts the data and the resource ids corresponding to the 
FDs in a virtqueue, kicks it

4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each 
resource, sends data + FDs to the compositor with SCM_RIGHTS
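
Steps 1 and 4 are plain SCM_RIGHTS fd passing over a unix socket; for 
reference, a minimal sketch of the sending side (the receiving side is 
the usual recvmsg/CMSG_FIRSTHDR mirror image):

    /* sketch: send a protocol message plus one fd over a unix socket */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_with_fd(int sock, const void *data, size_t len, int fd)
    {
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;        /* ensures proper alignment */
        } ctrl;
        struct iovec iov = { .iov_base = (void *)data, .iov_len = len };
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = ctrl.buf,
            .msg_controllen = sizeof(ctrl.buf),
        };

        memset(&ctrl, 0, sizeof(ctrl));
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }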


Reception of buffer from the compositor
-----------------------------------------------------

1. QEMU reads from the socket and gets a FD via SCM_RIGHTS

2. QEMU mmaps the FD and maps the resulting pointer to the guest via 
KVM_SET_USER_MEMORY_REGION (sketched after this list)

3. QEMU sends the guest PFN along with the presentation data to the virtio 
driver (VIRTIO_GPU_CMD_WINSRV_RX)

4. Virtio driver wraps a FD around that PFN, puts it in a queue

5. Guest proxy calls DRM_IOCTL_VIRTGPU_WINSRV_RX and gets data plus that FD

6. Guest proxy sends that data + FD to the client via SCM_RIGHTS

7. Client gets FD, mmaps it and reads the data from the compositor
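
To make step 2 concrete, this is roughly the sequence I have in mind, 
sketched against the raw KVM API (QEMU would go through its memory region 
code instead; picking a free slot and a guest physical address is left to 
the caller):

    /* sketch: map a compositor-provided fd into the guest physical address space */
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    static int map_into_guest(int vm_fd, int buf_fd, size_t size,
                              uint32_t slot, uint64_t guest_phys_addr)
    {
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, buf_fd, 0);
        if (ptr == MAP_FAILED)
            return -1;

        struct kvm_userspace_memory_region region = {
            .slot            = slot,
            .guest_phys_addr = guest_phys_addr,
            .memory_size     = size,
            .userspace_addr  = (uint64_t)(uintptr_t)ptr,
        };
        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }

The PFN handed back to the virtio driver in step 3 would then just be 
guest_phys_addr >> PAGE_SHIFT.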



Thanks,

Tomeu


>>>>> If you have a guest proxy anyway using virtio-sock for the protocol
>>>>> stream and virtio-gpu for buffer sharing (and some day 3d rendering
>>>>> too) should work fine I think.
>>>>
>>>> If I understand correctly your proposal, virtio-gpu would be used for
>>>> creating buffers that could be shared across domains, but something
>>>> equivalent to SCM_RIGHTS would still be needed in virtio-vsock?
>>>
>>> Yes, the proxy would send a reference to the buffer over virtio-vsock.
>>> I was more thinking about a struct specifying something like
>>> "ressource-id 42 on virtio-gpu-pci device in slot 1:23.0" instead of
>>> using SCM_RIGHTS.
>>
>> Can you expand on this? I'm having trouble figuring out how this could work
>> in a way that keeps protocol data together with the resources it refers to.
> 
> Don't know much about the wayland protocol.  Assuming you are passing
> buffers as file handles, together with some information about what kind of
> buffer this is (sysv shm, dma-buf, ...).
> 
> We have a proxy on both ends.  One running in the guest, one on the host
> (be it qemu or some external one).  So these two have to agree on how to
> pass buffers from one to the other.  One way would be to have them talk
> a simple meta protocol to each other, with "here comes a chunk of wayland
> protocol to pass along" and "here is a buffer mgmt message".  Possibly
> it is better to extend the wayland protocol to also cover this new kind
> of buffer, so you don't need the meta protocol.
> 
> The proxies would talk normal wayland protocol to the client (in the
> guest) and the server (on the host).  They will have to transform the
> buffer into something they can pass along using the wayland protocol.
> 
>>>>> What is the plan for the host side? I see basically two options. Either
>>>>> implement the host wayland proxy directly in qemu. Or
>>>>> implement it as a separate process, which then needs some help from
>>>>> qemu to get access to the buffers. The latter would allow qemu to run
>>>>> independently from the desktop session.
>>>>
>>>> Regarding synchronizing buffers, this will no longer be needed in
>>>> subsequent commits as all shared memory is allocated in the host and
>>>> mapped to the guest via KVM_SET_USER_MEMORY_REGION.
>>>
>>> --verbose please.  The qemu patches linked from the cover letter are not
>>> exactly helpful in understanding how all this is supposed to work.
>>
>> A client will allocate a buffer with DRM_VIRTGPU_RESOURCE_CREATE, export it
>> and pass the FD to the compositor (via the proxy).
>>
>> During resource creation, QEMU would allocate a shmem buffer and map it into
>> the guest with KVM_SET_USER_MEMORY_REGION.
> 
> So the buffer magically shows up somewhere in the physical address space
> of the guest?  That kind of magic usually isn't a very good idea.
> 
>> When a FD comes from the compositor, QEMU mmaps it and maps that virtual
>> address to the guest via KVM_SET_USER_MEMORY_REGION.
>>
>> When the guest proxy reads from the winsrv socket, it will get a FD that
>> wraps the buffer referenced above.
>>
>> When the client reads from the guest proxy, it would get a FD that
>> references that same buffer and would mmap it. At that point, the client is
>> reading from the same physical pages where the compositor wrote to.
> 
> Hmm.  I always assumed the wayland client allocates the buffers, not
> the server.  Is that wrong?
> 
> What is your plan for 3d acceleration support?
> 
>> To be clear, I'm not against solving this via some form of restricted FD
>> passing in virtio-vsock, but Stefan (added to CC) thought that it would be
>> cleaner to do it all within virtio-gpu.
> 
> Well, when targeting 3d acceleration it makes a lot of sense to use
> virtio-gpu.  And it makes sense to have 2d and 3d modes work as similarly
> as possible.  That is not the direction you are taking with your
> proposal though ...
> 
> If you don't plan for 3d support I'm wondering whether virtio-gpu is a
> good pick.  Mapping trickery aside, you wouldn't get linear buffers
> which can easily be shared between host and guest, because guest buffers
> are not required to be linear in guest physical memory.  One copy will
> be needed, from (scattered) guest physical memory buffer to (linear)
> host buffer.
> 
> One possible alternative would be to build on stdvga.  It has a pci
> memory bar, it has a drm driver (bochs) which allows allocating drm
> buffers in that bar.  They are linear buffers in both guest physical and
> host virtual memory.  If we add an option to qemu to allocate the memory
> bar in sysv shared memory it can easily be exported to other processes
> on the host.  The wayland client in the guest can map it directly too,
> it only needs to create a drm buffer and mmap it.  You can get zero-copy
> without having to play mapping tricks.
> 
> cheers,
>    Gerd
> 
> 
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-06 12:41             ` Tomeu Vizoso
@ 2018-02-06 14:23               ` Gerd Hoffmann
  2018-02-07  1:09                 ` Michael S. Tsirkin
  2018-02-07  9:49                 ` Tomeu Vizoso
  0 siblings, 2 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-06 14:23 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

  Hi,

> > Hmm?  I'm assuming the wayland client (in the guest) talks to the
> > wayland proxy, using the wayland protocol, like it would talk to a
> > wayland display server.  Buffers must be passed from client to
> > server/proxy somehow, probably using fd passing, so where is the
> > problem?
> > 
> > Or did I misunderstand the role of the proxy?
> 
> Hi Gerd,
> 
> it's starting to look to me that we're talking a bit past each other, so I
> have pasted below a few words describing my current plan regarding the 3 key
> scenarios that I'm addressing.

You are describing the details, but I'm missing the big picture ...

So, virtualization aside, how do buffers work in wayland?  As far as I know
it goes like this:

  (a) software rendering: client allocates shared memory buffer, renders
      into it, then passes a file handle for that shmem block together
      with some meta data (size, format, ...) to the wayland server.

  (b) gpu rendering: client opens a render node, allocates a buffer,
      asks the gpu to render into it, exports the buffer as dma-buf
      (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
      (again including meta data of course).

Is that correct?

Now, with virtualization added to the mix it becomes a bit more
complicated.  Client and server are unmodified.  The client talks to the
guest proxy (wayland protocol).  The guest proxy talks to the host proxy
(protocol to be defined). The host proxy talks to the server (wayland
protocol).

Buffers must be managed along the way, and we want to avoid copying the
buffers around.  The host proxy could be implemented directly in qemu, or
as separate process which cooperates with qemu for buffer management.

Fine so far?

> I really think that whatever we come up with needs to support 3D clients as
> well.

Let's start with 3d clients, I think these are easier.  They simply use
virtio-gpu for 3d rendering as usual.  When they are done the rendered
buffer already lives in a host drm buffer (because virgl runs the actual
rendering on the host gpu).  So the client passes the dma-buf to the
guest proxy, the guest proxy imports it to look up the resource-id,
passes the resource-id to the host proxy, the host proxy looks up the
drm buffer and exports it as dma-buf, then passes it to the server.
Done, without any extra data copies.
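
The guest proxy's half of that lookup can be sketched with the existing 
uapi, roughly like this (error handling and the actual message to the 
host proxy omitted):

    /* sketch: guest proxy resolving a client dma-buf to a virtio-gpu resource id */
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>
    #include <drm/virtgpu_drm.h>

    static int dmabuf_to_resource_id(int drm_fd, int dmabuf_fd, uint32_t *res_id)
    {
        struct drm_prime_handle prime = { .fd = dmabuf_fd };
        if (ioctl(drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &prime))
            return -1;

        struct drm_virtgpu_resource_info info = { .bo_handle = prime.handle };
        if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_INFO, &info))
            return -1;

        *res_id = info.res_handle;       /* this is what goes to the host proxy */
        return 0;
    }

The host proxy then only needs the reverse lookup (resource-id -> host drm 
buffer -> dma-buf) before talking to the server.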

> Creation of shareable buffer by guest
> -------------------------------------------------
> 
> 1. Client requests virtio driver to create a buffer suitable for sharing
> with host (DRM_VIRTGPU_RESOURCE_CREATE)

client or guest proxy?

> 4. QEMU maps that buffer to the guest's address space
> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver

That part is problematic.  The host can't simply allocate something in
the physical address space, because most physical address space
management is done by the guest.  All pci bars are mapped by the guest
firmware for example (or by the guest OS in case of hotplug).

> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
> resource, sends data + FDs to the compositor with SCM_RIGHTS

BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
does the wayland protocol allow for offsets in buffer meta data, so you
can place multiple buffers in a single shmem block?

cheers,
  Gerd

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-05 16:03           ` Gerd Hoffmann
  2018-02-06 12:41             ` Tomeu Vizoso
@ 2018-02-06 15:00             ` Pekka Paalanen
  1 sibling, 0 replies; 26+ messages in thread
From: Pekka Paalanen @ 2018-02-06 15:00 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Michael S. Tsirkin, David Airlie, Stefan Hajnoczi,
	Jason Wang, linux-kernel, dri-devel, virtualization, kernel



On Mon, 5 Feb 2018 17:03:22 +0100
Gerd Hoffmann <kraxel@redhat.com> wrote:

> On Mon, Feb 05, 2018 at 03:46:17PM +0100, Tomeu Vizoso wrote:
> > On 02/05/2018 01:20 PM, Gerd Hoffmann wrote:  
> > >    Hi,
> > >   
> 
> Hmm.  I always assumed the wayland client allocates the buffers, not
> the server.  Is that wrong?

Hi Gerd,

a fly-by comment here:

The standard operation mode on Wayland indeed is that the client
allocates any pixel buffers. It is not the whole story though.

Server allocated buffers passed to a client also exist:
- core protocol uses this to pass keymaps to clients
- people are not forbidden from writing Wayland extensions that do this
  for whatever reason

The latter server-allocated case could probably be overlooked, but the
keymap case not really. Furthermore, the copy&paste and drag&drop protocols
pass pipe file descriptors via Wayland to establish client-to-client
pipes.


Thanks,
pq


_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-06 14:23               ` Gerd Hoffmann
@ 2018-02-07  1:09                 ` Michael S. Tsirkin
  2018-02-07  7:41                   ` Tomeu Vizoso
  2018-02-07  9:49                 ` Tomeu Vizoso
  1 sibling, 1 reply; 26+ messages in thread
From: Michael S. Tsirkin @ 2018-02-07  1:09 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, linux-kernel, Zach Reizner, kernel, dri-devel,
	virtualization, David Airlie, Jason Wang, Stefan Hajnoczi

On Tue, Feb 06, 2018 at 03:23:02PM +0100, Gerd Hoffmann wrote:
> > Creation of shareable buffer by guest
> > -------------------------------------------------
> > 
> > 1. Client requests virtio driver to create a buffer suitable for sharing
> > with host (DRM_VIRTGPU_RESOURCE_CREATE)
> 
> client or guest proxy?
> 
> > 4. QEMU maps that buffer to the guest's address space
> > (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
> 
> That part is problematic.  The host can't simply allocate something in
> the physical address space, because most physical address space
> management is done by the guest.  All pci bars are mapped by the guest
> firmware for example (or by the guest OS in case of hotplug).
> 
> > 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
> > resource, sends data + FDs to the compositor with SCM_RIGHTS

If you squint hard, this sounds a bit like a use-case for vhost-user-gpu, does it not?

> BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
> does the wayland protocol allow for offsets in buffer meta data, so you
> can place multiple buffers in a single shmem block?
> 
> cheers,
>   Gerd

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-07  1:09                 ` Michael S. Tsirkin
@ 2018-02-07  7:41                   ` Tomeu Vizoso
  0 siblings, 0 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-07  7:41 UTC (permalink / raw)
  To: Michael S. Tsirkin, Gerd Hoffmann
  Cc: David Airlie, linux-kernel, dri-devel, virtualization,
	Zach Reizner, kernel

On 02/07/2018 02:09 AM, Michael S. Tsirkin wrote:
> On Tue, Feb 06, 2018 at 03:23:02PM +0100, Gerd Hoffmann wrote:
>>> Creation of shareable buffer by guest
>>> -------------------------------------------------
>>>
>>> 1. Client requests virtio driver to create a buffer suitable for sharing
>>> with host (DRM_VIRTGPU_RESOURCE_CREATE)
>>
>> client or guest proxy?
>>
>>> 4. QEMU maps that buffer to the guest's address space
>>> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
>>
>> That part is problematic.  The host can't simply allocate something in
>> the physical address space, because most physical address space
>> management is done by the guest.  All pci bars are mapped by the guest
>> firmware for example (or by the guest OS in case of hotplug).
>>
>>> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
>>> resource, sends data + FDs to the compositor with SCM_RIGHTS
> 
> If you squint hard, this sounds a bit like a use-case for vhost-user-gpu, does it not?

Can you expand on what makes you think that?

As an aside, crosvm runs the virtio-gpu device in a separate, jailed
process, among other virtual devices.

https://chromium.googlesource.com/chromiumos/platform/crosvm/

Regards,

Tomeu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-06 14:23               ` Gerd Hoffmann
  2018-02-07  1:09                 ` Michael S. Tsirkin
@ 2018-02-07  9:49                 ` Tomeu Vizoso
  2018-02-09 11:14                   ` Tomeu Vizoso
  2018-02-12 11:45                   ` Gerd Hoffmann
  1 sibling, 2 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-07  9:49 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: linux-kernel, Zach Reizner, kernel, dri-devel, virtualization,
	Michael S. Tsirkin, David Airlie, Jason Wang, Stefan Hajnoczi

On 02/06/2018 03:23 PM, Gerd Hoffmann wrote:
>    Hi,
> 
>>> Hmm?  I'm assuming the wayland client (in the guest) talks to the
>>> wayland proxy, using the wayland protocol, like it would talk to a
>>> wayland display server.  Buffers must be passed from client to
>>> server/proxy somehow, probably using fd passing, so where is the
>>> problem?
>>>
>>> Or did I misunderstand the role of the proxy?
>>
>> Hi Gerd,
>>
>> it's starting to look to me that we're talking a bit past each other, so I
>> have pasted below a few words describing my current plan regarding the 3 key
>> scenarios that I'm addressing.
> 
> You are describing the details, but I'm missing the big picture ...
> 
> So, virtualization aside, how do buffers work in wayland?  As far as I know
> it goes like this:
> 
>    (a) software rendering: client allocates shared memory buffer, renders
>        into it, then passes a file handle for that shmem block together
>        with some meta data (size, format, ...) to the wayland server.
> 
>    (b) gpu rendering: client opens a render node, allocates a buffer,
>        asks the gpu to render into it, exports the buffer as dma-buf
>        (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
>        (again including meta data of course).
> 
> Is that correct?

Both are correct descriptions of typical behaviors. But it isn't spec'ed 
anywhere who has to do the buffer allocation.

In practical terms, the buffer allocation happens in either the 2D GUI 
toolkit (gtk+, for example), or the EGL implementation. Someone using 
this in a real product would most probably be interested in avoiding any 
extra copies and making sure that both allocate buffers via virtio-gpu, for 
example.

Depending on the use case, they could be also interested in supporting 
unmodified clients with an extra copy per buffer presentation.

That's to say that if we cannot come up with a zero-copy solution for 
unmodified clients, we should at least support zero-copy for cooperative 
clients.

> Now, with virtualization added to the mix it becomes a bit more
> complicated.  Client and server are unmodified.  The client talks to the
> guest proxy (wayland protocol).  The guest proxy talks to the host proxy
> (protocol to be defined). The host proxy talks to the server (wayland
> protocol).
> 
> Buffers must be managed along the way, and we want to avoid copying the
> buffers around.  The host proxy could be implemented directly in qemu, or
> as separate process which cooperates with qemu for buffer management.
> 
> Fine so far?

Yep.

>> I really think that whatever we come up with needs to support 3D clients as
>> well.
> 
> Let's start with 3d clients, I think these are easier.  They simply use
> virtio-gpu for 3d rendering as usual.  When they are done the rendered
> buffer already lives in a host drm buffer (because virgl runs the actual
> rendering on the host gpu).  So the client passes the dma-buf to the
> guest proxy, the guest proxy imports it to look up the resource-id,
> passes the resource-id to the host proxy, the host proxy looks up the
> drm buffer and exports it as dma-buf, then passes it to the server.
> Done, without any extra data copies.

Yep.

>> Creation of shareable buffer by guest
>> -------------------------------------------------
>>
>> 1. Client requests virtio driver to create a buffer suitable for sharing
>> with host (DRM_VIRTGPU_RESOURCE_CREATE)
> 
> client or guest proxy?

As per the above, the GUI toolkit could have been modified so the client 
directly creates a shareable buffer, and renders directly to it without 
any extra copies.

If clients cannot be modified, then it's the guest proxy that has to 
create the shareable buffer and keep it in sync with the client's 
non-shareable buffer at the right times, by intercepting 
wl_surface.commit messages and copying buffer contents.

>> 4. QEMU maps that buffer to the guest's address space
>> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
> 
> That part is problematic.  The host can't simply allocate something in
> the physical address space, because most physical address space
> management is done by the guest.  All pci bars are mapped by the guest
> firmware for example (or by the guest OS in case of hotplug).

How can KVM_SET_USER_MEMORY_REGION ever be safely used then? I would have 
expected that callers of that ioctl have enough knowledge to be able to 
choose a physical address that won't conflict with the guest's kernel.

I see that the ivshmem device in QEMU registers the memory region in BAR 
2 of a PCI device instead. Would that be better in your opinion?

>> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
>> resource, sends data + FDs to the compositor with SCM_RIGHTS
> 
> BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
> does the wayland protocol allow for offsets in buffer meta data, so you
> can place multiple buffers in a single shmem block?

The latter: 
https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool
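
For illustration, carving several buffers out of a single pool is just a 
matter of offsets with the stock client API; a minimal sketch, assuming an 
ARGB8888 format and an fd already sized for two buffers:

    /* sketch: two wl_buffers backed by a single shmem block, via offsets */
    #include <wayland-client.h>

    static void make_two_buffers(struct wl_shm *shm, int fd, int width, int height)
    {
        int stride = width * 4;
        int size = stride * height * 2;              /* room for two buffers */

        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        struct wl_buffer *front = wl_shm_pool_create_buffer(pool, 0,
                width, height, stride, WL_SHM_FORMAT_ARGB8888);
        struct wl_buffer *back = wl_shm_pool_create_buffer(pool,
                stride * height,                     /* second buffer's offset */
                width, height, stride, WL_SHM_FORMAT_ARGB8888);
        (void)front; (void)back;
    }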

Regards,

Tomeu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-07  9:49                 ` Tomeu Vizoso
@ 2018-02-09 11:14                   ` Tomeu Vizoso
  2018-02-12 11:52                     ` Gerd Hoffmann
  2018-02-12 11:45                   ` Gerd Hoffmann
  1 sibling, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-09 11:14 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, linux-kernel, dri-devel,
	virtualization, Zach Reizner, kernel

Hi Gerd and Stefan,

can we reach agreement on whether vsock should be involved in this?

Thanks,

Tomeu

On 02/07/2018 10:49 AM, Tomeu Vizoso wrote:
> On 02/06/2018 03:23 PM, Gerd Hoffmann wrote:
>>    Hi,
>>
>>>> Hmm?  I'm assuming the wayland client (in the guest) talks to the
>>>> wayland proxy, using the wayland protocol, like it would talk to a
>>>> wayland display server.  Buffers must be passed from client to
>>>> server/proxy somehow, probably using fd passing, so where is the
>>>> problem?
>>>>
>>>> Or did I misunderstand the role of the proxy?
>>>
>>> Hi Gerd,
>>>
>>> it's starting to look to me that we're talking a bit past each other, so I
>>> have pasted below a few words describing my current plan regarding the 
>>> 3 key
>>> scenarios that I'm addressing.
>>
>> You are describing the details, but I'm missing the big picture ...
>>
>> So, virtualization aside, how do buffers work in wayland?  As far as I know
>> it goes like this:
>>
>>    (a) software rendering: client allocates shared memory buffer, renders
>>        into it, then passes a file handle for that shmem block together
>>        with some meta data (size, format, ...) to the wayland server.
>>
>>    (b) gpu rendering: client opens a render node, allocates a buffer,
>>        asks the gpu to render into it, exports the buffer as dma-buf
>>        (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
>>        (again including meta data of course).
>>
>> Is that correct?
> 
> Both are correct descriptions of typical behaviors. But it isn't spec'ed 
> anywhere who has to do the buffer allocation.
> 
> In practical terms, the buffer allocation happens in either the 2D GUI 
> toolkit (gtk+, for example), or the EGL implementation. Someone using 
> this in a real product would most probably be interested in avoiding any 
> extra copies and making sure that both allocate buffers via virtio-gpu, for 
> example.
> 
> Depending on the use case, they could be also interested in supporting 
> unmodified clients with an extra copy per buffer presentation.
> 
> That's to say that if we cannot come up with a zero-copy solution for 
> unmodified clients, we should at least support zero-copy for cooperative 
> clients.
> 
>> Now, with virtualization added to the mix it becomes a bit more
>> complicated.  Client and server are unmodified.  The client talks to the
>> guest proxy (wayland protocol).  The guest proxy talks to the host proxy
>> (protocol to be defined). The host proxy talks to the server (wayland
>> protocol).
>>
>> Buffers must be managed along the way, and we want to avoid copying the
>> buffers around.  The host proxy could be implemented directly in qemu, or
>> as separate process which cooperates with qemu for buffer management.
>>
>> Fine so far?
> 
> Yep.
> 
>>> I really think that whatever we come up with needs to support 3D 
>>> clients as
>>> well.
>>
>> Let's start with 3d clients, I think these are easier.  They simply use
>> virtio-gpu for 3d rendering as usual.  When they are done the rendered
>> buffer already lives in a host drm buffer (because virgl runs the actual
>> rendering on the host gpu).  So the client passes the dma-buf to the
>> guest proxy, the guest proxy imports it to look up the resource-id,
>> passes the resource-id to the host proxy, the host proxy looks up the
>> drm buffer and exports it as dma-buf, then passes it to the server.
>> Done, without any extra data copies.
> 
> Yep.
> 
>>> Creation of shareable buffer by guest
>>> -------------------------------------------------
>>>
>>> 1. Client requests virtio driver to create a buffer suitable for sharing
>>> with host (DRM_VIRTGPU_RESOURCE_CREATE)
>>
>> client or guest proxy?
> 
> As per the above, the GUI toolkit could have been modified so the client 
> directly creates a shareable buffer, and renders directly to it without 
> any extra copies.
> 
> If clients cannot be modified, then it's the guest proxy that has to 
> create the shareable buffer and keep it in sync with the client's 
> non-shareable buffer at the right times, by intercepting 
> wl_surface.commit messages and copying buffer contents.
> 
>>> 4. QEMU maps that buffer to the guest's address space
>>> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
>>
>> That part is problematic.  The host can't simply allocate something in
>> the physical address space, because most physical address space
>> management is done by the guest.  All pci bars are mapped by the guest
>> firmware for example (or by the guest OS in case of hotplug).
> 
> How can KVM_SET_USER_MEMORY_REGION ever be safely used then? I would have 
> expected that callers of that ioctl have enough knowledge to be able to 
> choose a physical address that won't conflict with the guest's kernel.
> 
> I see that the ivshmem device in QEMU registers the memory region in BAR 
> 2 of a PCI device instead. Would that be better in your opinion?
> 
>>> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
>>> resource, sends data + FDs to the compositor with SCM_RIGHTS
>>
>> BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
>> does the wayland protocol allow for offsets in buffer meta data, so you
>> can place multiple buffers in a single shmem block?
> 
> The latter: 
> https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool
> 
> Regards,
> 
> Tomeu
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-07  9:49                 ` Tomeu Vizoso
  2018-02-09 11:14                   ` Tomeu Vizoso
@ 2018-02-12 11:45                   ` Gerd Hoffmann
  2018-02-13  7:41                     ` Pekka Paalanen
                                       ` (2 more replies)
  1 sibling, 3 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-12 11:45 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

  Hi,

> >    (a) software rendering: client allocates shared memory buffer, renders
> >        into it, then passes a file handle for that shmem block together
> >        with some meta data (size, format, ...) to the wayland server.
> > 
> >    (b) gpu rendering: client opens a render node, allocates a buffer,
> >        asks the gpu to render into it, exports the buffer as dma-buf
> >        (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
> >        (again including meta data of course).
> > 
> > Is that correct?
> 
> Both are correct descriptions of typical behaviors. But it isn't spec'ed
> anywhere who has to do the buffer allocation.

Well, according to Pekka's reply it is spec'ed that way, for the
existing buffer types.  So for server allocated buffers you need
(a) a wayland protocol extension and (b) support for the extension
in the clients.

> That's to say that if we cannot come up with a zero-copy solution for
> unmodified clients, we should at least support zero-copy for cooperative
> clients.

"cooperative clients" == "client which has support for the wayland
protocol extension", correct?

> > > Creation of shareable buffer by guest
> > > -------------------------------------------------
> > > 
> > > 1. Client requests virtio driver to create a buffer suitable for sharing
> > > with host (DRM_VIRTGPU_RESOURCE_CREATE)
> > 
> > client or guest proxy?
> 
> As per the above, the GUI toolkit could have been modified so the client
> directly creates a shareable buffer, and renders directly to it without any
> extra copies.
> 
> If clients cannot be modified, then it's the guest proxy that has to create
> the shareable buffer and keep it in sync with the client's non-shareable
> buffer at the right times, by intercepting wl_surface.commit messages and
> copying buffer contents.

Ok.

> > > 4. QEMU maps that buffer to the guest's address space
> > > (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
> > 
> > That part is problematic.  The host can't simply allocate something in
> > the physical address space, because most physical address space
> > management is done by the guest.  All pci bars are mapped by the guest
> > firmware for example (or by the guest OS in case of hotplug).
> 
> How can KVM_SET_USER_MEMORY_REGION ever be safely used then? I would have
> expected that callers of that ioctl have enough knowledge to be able to
> choose a physical address that won't conflict with the guest's kernel.

Depends on the kind of region.  Guest RAM is allocated and mapped by
qemu, guest firmware can query qemu about RAM mappings using a special
interface, then create an e820 memory map for the guest os.  PCI device
bars are mapped according to the pci config space registers, which in
turn are initialized by the guest firmware, so it is basically in the
guests hand where they show up.

> I see that the ivshmem device in QEMU registers the memory region in BAR 2
> of a PCI device instead. Would that be better in your opinion?

Yes.

> > > 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
> > > resource, sends data + FDs to the compositor with SCM_RIGHTS
> > 
> > BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
> > does the wayland protocol allow for offsets in buffer meta data, so you
> > can place multiple buffers in a single shmem block?
> 
> The latter:
> https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool

Ah, good, that makes it a lot easier.

So, yes, using ivshmem would be one option.  Tricky part here is the
buffer management though.  It's just a raw piece of memory.  The guest
proxy could mmap the pci bar and manage it.  But then it is again either
unmodified guest + copying the data, or modified client (which requests
buffers from guest proxy) for zero-copy.

Another idea would be extending stdvga.  Basically qemu would have to
use shmem as backing storage for vga memory instead of anonymous memory,
so it would be very similar to ivshmem on the host side.  But on the
guest side we have a drm driver for it (bochs-drm).  So clients can
allocate dumb drm buffers for software rendering, and the buffer would
already be backed by a host shmem segment.  Given that wayland already
supports drm buffers for 3d rendering that could work without extending
the wayland protocol.  The client proxy would have to translate the drm
buffer into a pci bar offset and pass it to the host side.  The host
proxy could register the pci bar as wl_shm_pool, then just pass through
the offset to reference the individual buffers.

Drawback of both approaches would be that software rendering and gpu
rendering would use quite different code paths.

We also need a solution for the keymap shmem block.  I guess the keymap
doesn't change all that often, so maybe it is easiest to just copy it
over (host proxy -> guest proxy) instead of trying to map the host shmem
into the guest?

cheers,
  Gerd

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-09 11:14                   ` Tomeu Vizoso
@ 2018-02-12 11:52                     ` Gerd Hoffmann
  2018-02-12 14:00                       ` Tomeu Vizoso
  0 siblings, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-12 11:52 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, linux-kernel, dri-devel,
	virtualization, Zach Reizner, kernel

  Hi,

> can we reach agreement on whether vsock should be involved in this?

I think the best approach would be to have guest proxy and host proxy
use vsock for the wayland protocol.  Use a wayland protocol extension to
reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
proxies need to understand the extension, the client <=> guest proxy and
host proxy <=> server communication would be standard wayland protocol.

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 11:52                     ` Gerd Hoffmann
@ 2018-02-12 14:00                       ` Tomeu Vizoso
  2018-02-12 14:27                         ` Gerd Hoffmann
  0 siblings, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-12 14:00 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On 02/12/2018 12:52 PM, Gerd Hoffmann wrote:
>    Hi,
> 
>> can we reach agreement on whether vsock should be involved in this?
> 
> I think the best approach would be to have guest proxy and host proxy
> use vsock for the wayland protocol.  Use a wayland protocol extension to
> reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
> proxies need to understand the extension, the client <=> guest proxy and
> host proxy <=> server communication would be standard wayland protocol.

Thanks for the ideas. What I haven't understood yet is how you see the 
actual passing of buffers via vsock. Are you thinking of using ancillary 
data to pass FDs, or something else?

Thanks,

Tomeu
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 14:00                       ` Tomeu Vizoso
@ 2018-02-12 14:27                         ` Gerd Hoffmann
  2018-02-12 14:42                           ` Tomeu Vizoso
  0 siblings, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-12 14:27 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, linux-kernel, dri-devel,
	virtualization, Zach Reizner, kernel

On Mon, Feb 12, 2018 at 03:00:24PM +0100, Tomeu Vizoso wrote:
> On 02/12/2018 12:52 PM, Gerd Hoffmann wrote:
> >    Hi,
> > 
> > > can we reach agreement on whether vsock should be involved in this?
> > 
> > I think the best approach would be to have guest proxy and host proxy
> > use vsock for the wayland protocol.  Use a wayland protocol extension to
> > reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
> > proxies need to understand the extension, the client <=> guest proxy and
> > host proxy <=> server communication would be standard wayland protocol.
> 
> Thanks for the ideas. What I haven't understood yet is how you see the
> actual passing of buffers via vsock. Are you thinking of using ancillary
> data to pass FDs, or something else?

I was more thinking about a struct containing enough info to allow the
proxy on the host side to find the buffer, something like:

   struct {
      enum type { stdvga, virtio-gpu, ... }
      pcislot device;
      union {
         int stdvga_pcibar_offset;
         int virtio_gpu_resource_id;
      }
   }

So when the guest proxy gets a message with a fd referencing a buffer it
would have to figure out where the buffer is, and rewrite the message into the
struct above for the host proxy.  The host proxy would rewrite the
message again for the server.

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 14:27                         ` Gerd Hoffmann
@ 2018-02-12 14:42                           ` Tomeu Vizoso
  2018-02-12 15:29                             ` Gerd Hoffmann
  0 siblings, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-12 14:42 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On 02/12/2018 03:27 PM, Gerd Hoffmann wrote:
> On Mon, Feb 12, 2018 at 03:00:24PM +0100, Tomeu Vizoso wrote:
>> On 02/12/2018 12:52 PM, Gerd Hoffmann wrote:
>>>     Hi,
>>>
>>>> can we reach agreement on whether vsock should be involved in this?
>>>
>>> I think the best approach would be to have guest proxy and host proxy
>>> use vsock for the wayland protocol.  Use a wayland protocol extension to
>>> reference the buffers in stdvga / ivshmem / virtio-gpu.  Only the two
>>> proxies need to understand the extension, the client <=> guest proxy and
>>> host proxy <=> server communication would be standard wayland protocol.
>>
>> Thanks for the ideas. What I haven't understood yet is how you see the
>> actual passing of buffers via vsock. Are you thinking of using ancillary
>> data to pass FDs, or something else?
> 
> I was more thinking about a struct containing enough info to allow the
> proxy on the host side to find the buffer, something like:
> 
>     struct {
>        enum type { stdvga, virtio-gpu, ... }
>        pcislot device;
>        union {
>           int stdvga_pcibar_offset;
>           int virtio_gpu_resource_id;
>        }
>     }
> 
> So when the guest proxy gets a message with a fd referencing a buffer it
> would have to figure out where the buffer is, and rewrite the message into the
> struct above for the host proxy.  The host proxy would rewrite the
> message again for the server.

What I don't understand yet is how we can keep the buffer descriptions 
together with the protocol data that references them.

With SCM_RIGHTS, the FDs are placed in the ancillary data that "travels" 
together with the protocol data that references them.

With the present series, the DRM_IOCTL_VIRTGPU_WINSRV_TX ioctl struct has 
a field for the protocol data and an array of FDs.

How are you proposing to pass instances of that struct from above along with 
the protocol data that refers to them?

Thanks,

Tomeu
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 14:42                           ` Tomeu Vizoso
@ 2018-02-12 15:29                             ` Gerd Hoffmann
  0 siblings, 0 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-12 15:29 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

> > I was more thinking about a struct containing enough info to allow the
> > proxy on the host side to find the buffer, something like:
> > 
> >     struct {
> >        enum type { stdvga, virtio-gpu, ... }
> >        pcislot device;
> >        union {
> >           int stdvga_pcibar_offset;
> >           int virtio_gpu_resource_id;
> >        }
> >     }
> > 
> > So when the guest proxy gets a message with a fd referencing a buffer it
> > would have to figure out where the buffer is, and rewrite the message into the
> > struct above for the host proxy.  The host proxy would rewrite the
> > message again for the server.
> 
> What I don't understand yet is how we can keep the buffer descriptions
> together with the protocol data that references them.
> 
> With SCM_RIGHTS, the FDs are placed in the ancillary data that "travels"
> together with the protocol data that references them.

Place the buffer description into the wayland extension protocol messages?

i.e. have some wl_virt_proxy protocol extension.  Then, for the stdvga case:

  (1) client sends wl_drm/create_prime_buffer request to the guest proxy
  (2) guest proxy rewrites this into wl_virt_proxy/create_buffer, or
      maybe a create_stdvga_buffer request, carrying all the information
      listed above, and sends it to the host proxy.
  (3) host proxy rewrites it again into a wl_shm_pool/create_buffer and
      forwards it to the server.

cheers,
  Gerd

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 11:45                   ` Gerd Hoffmann
@ 2018-02-13  7:41                     ` Pekka Paalanen
  2018-02-13 14:27                     ` Tomeu Vizoso
  2018-02-15 15:28                     ` Tomeu Vizoso
  2 siblings, 0 replies; 26+ messages in thread
From: Pekka Paalanen @ 2018-02-13  7:41 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Michael S. Tsirkin, David Airlie, Stefan Hajnoczi,
	Jason Wang, linux-kernel, dri-devel, virtualization, kernel



On Mon, 12 Feb 2018 12:45:40 +0100
Gerd Hoffmann <kraxel@redhat.com> wrote:

>   Hi,
> 
> > >    (a) software rendering: client allocates shared memory buffer, renders
> > >        into it, then passes a file handle for that shmem block together
> > >        with some meta data (size, format, ...) to the wayland server.
> > > 
> > >    (b) gpu rendering: client opens a render node, allocates a buffer,
> > >        asks the gpu to render into it, exports the buffer as dma-buf
> > >        (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
> > >        (again including meta data of course).
> > > 
> > > Is that correct?  
> > 
> > Both are correct descriptions of typical behaviors. But it isn't spec'ed
> > anywhere who has to do the buffer allocation.  
> 
> Well, according to Pekka's reply it is spec'ed that way, for the
> existing buffer types.  So for server allocated buffers you need
> (a) a wayland protocol extension and (b) support for the extension
> in the clients.

Correct. Or simply a libEGL that uses such a Wayland extension behind
everyone's back. I believe such things did at least exist, but are
probably not relevant for this discussion.

(If there is a standard library, like libEGL, loaded and used by both a
server and a client, that library can advertise custom private Wayland
protocol extensions and the client side can take advantage of them,
both without needing any code changes on either the server or the
client.)

> We also need a solution for the keymap shmem block.  I guess the keymap
> doesn't change all that often, so maybe it is easiest to just copy it
> over (host proxy -> guest proxy) instead of trying to map the host shmem
> into the guest?

Yes, I believe that would be a perfectly valid solution for that
particular case.


Thanks,
pq


_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 11:45                   ` Gerd Hoffmann
  2018-02-13  7:41                     ` Pekka Paalanen
@ 2018-02-13 14:27                     ` Tomeu Vizoso
  2018-02-16 10:48                       ` Gerd Hoffmann
  2018-02-15 15:28                     ` Tomeu Vizoso
  2 siblings, 1 reply; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-13 14:27 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On 02/12/2018 12:45 PM, Gerd Hoffmann wrote:
>    Hi,
> 
>>>     (a) software rendering: client allocates shared memory buffer, renders
>>>         into it, then passes a file handle for that shmem block together
>>>         with some meta data (size, format, ...) to the wayland server.
>>>
>>>     (b) gpu rendering: client opens a render node, allocates a buffer,
>>>         asks the gpu to render into it, exports the buffer as dma-buf
>>>         (DRM_IOCTL_PRIME_HANDLE_TO_FD), passes this to the wayland server
>>>         (again including meta data of course).
>>>
>>> Is that correct?
>>
>> Both are correct descriptions of typical behaviors. But it isn't spec'ed
>> anywhere who has to do the buffer allocation.
> 
> Well, according to Pekka's reply it is spec'ed that way, for the
> existing buffer types.  So for server allocated buffers you need
> (a) a wayland protocol extension and (b) support for the extension
> in the clients.
> 
>> That's to say that if we cannot come up with a zero-copy solution for
>> unmodified clients, we should at least support zero-copy for cooperative
>> clients.
> 
> "cooperative clients" == "client which has support for the wayland
> protocol extension", correct?

Guess it could be that, but I was rather thinking of clients that would 
allocate the buffer for wl_shm_pool with DRM_VIRTGPU_RESOURCE_CREATE or 
equivalent. Then that buffer would be exported and the fd passed using 
the standard wl_shm protocol.
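
A sketch of that cooperative path from the client's point of view, 
assuming the prime fd exported from the virtio-gpu BO can be handed to 
wl_shm like any other shmem fd (bo_handle would come from 
DRM_VIRTGPU_RESOURCE_CREATE, and drm_fd/shm/width/height are assumed to 
be set up already):

    /* sketch: buffer allocated via virtio-gpu, shared through standard wl_shm */
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>
    #include <wayland-client.h>

    static struct wl_buffer *share_bo_via_wl_shm(int drm_fd, uint32_t bo_handle,
                                                 struct wl_shm *shm,
                                                 int width, int height)
    {
        struct drm_prime_handle prime = {
            .handle = bo_handle,         /* from DRM_VIRTGPU_RESOURCE_CREATE */
            .flags  = DRM_CLOEXEC,
        };
        if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime))
            return NULL;

        struct wl_shm_pool *pool =
                wl_shm_create_pool(shm, prime.fd, width * height * 4);
        return wl_shm_pool_create_buffer(pool, 0, width, height, width * 4,
                                         WL_SHM_FORMAT_ARGB8888);
    }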

>>>> 4. QEMU maps that buffer to the guest's address space
>>>> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio driver
>>>
>>> That part is problematic.  The host can't simply allocate something in
>>> the physical address space, because most physical address space
>>> management is done by the guest.  All pci bars are mapped by the guest
>>> firmware for example (or by the guest OS in case of hotplug).
>>
>> How can KVM_SET_USER_MEMORY_REGION ever be safely used then? I would have
>> expected that callers of that ioctl have enough knowledge to be able to
>> choose a physical address that won't conflict with the guest's kernel.
> 
> Depends on the kind of region.  Guest RAM is allocated and mapped by
> qemu, guest firmware can query qemu about RAM mappings using a special
> interface, then create an e820 memory map for the guest os.  PCI device
> bars are mapped according to the pci config space registers, which in
> turn are initialized by the guest firmware, so it is basically in the
> guests hand where they show up.
> 
>> I see that the ivshmem device in QEMU registers the memory region in BAR 2
>> of a PCI device instead. Would that be better in your opinion?
> 
> Yes.

Would it make sense for virtio-gpu to map buffers to the guest via PCI 
BARs? So we can use a single drm driver for both 2d and 3d.

>>>> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
>>>> resource, sends data + FDs to the compositor with SCM_RIGHTS
>>>
>>> BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
>>> does the wayland protocol allow for offsets in buffer meta data, so you
>>> can place multiple buffers in a single shmem block?
>>
>> The latter:
>> https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool
> 
> Ah, good, that makes it a lot easier.
> 
> So, yes, using ivshmem would be one option.  Tricky part here is the
> buffer management though.  It's just a raw piece of memory.  The guest
> proxy could mmap the pci bar and manage it.  But then it is again either
> unmodified guest + copying the data, or modified client (which requests
> buffers from guest proxy) for zero-copy.
> 
> Another idea would be extending stdvga.  Basically qemu would have to
> use shmem as backing storage for vga memory instead of anonymous memory,
> > so it would be very similar to ivshmem on the host side.  But on the
> guest side we have a drm driver for it (bochs-drm).  So clients can
> allocate dumb drm buffers for software rendering, and the buffer would
> already be backed by a host shmem segment.  Given that wayland already
> supports drm buffers for 3d rendering that could work without extending
> the wayland protocol.  The client proxy would have to translate the drm
> buffer into a pci bar offset and pass it to the host side.  The host
> proxy could register the pci bar as wl_shm_pool, then just pass through
> the offset to reference the individual buffers.
> 
> Drawback of both approaches would be that software rendering and gpu
> rendering would use quite different code paths.

Yeah, would be great if we could find a way to avoid that.

> We also need a solution for the keymap shmem block.  I guess the keymap
> doesn't change all that often, so maybe it is easiest to just copy it
> over (host proxy -> guest proxy) instead of trying to map the host shmem
> into the guest?

I think that should be fine for now. Something similar will have to 
happen for the clipboard, which currently uses pipes to exchange data.

Thanks,

Tomeu
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-12 11:45                   ` Gerd Hoffmann
  2018-02-13  7:41                     ` Pekka Paalanen
  2018-02-13 14:27                     ` Tomeu Vizoso
@ 2018-02-15 15:28                     ` Tomeu Vizoso
  2 siblings, 0 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2018-02-15 15:28 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

On 02/12/2018 12:45 PM, Gerd Hoffmann wrote:
 >>>> 4. QEMU pops data+buffers from the virtqueue, looks up shmem FD for each
 >>>> resource, sends data + FDs to the compositor with SCM_RIGHTS
 >>>
 >>> BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
 >>> does the wayland protocol allow for offsets in buffer meta data, so you
 >>> can place multiple buffers in a single shmem block?
 >>
 >> The latter:
 >> 
https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool
 >
 > Ah, good, that makes it a lot easier.
 >
 > So, yes, using ivshmem would be one option.  Tricky part here is the
 > buffer management though.  It's just a raw piece of memory.  The guest
 > proxy could mmap the pci bar and manage it.  But then it is again either
 > unmodified guest + copying the data, or modified client (which requests
 > buffers from guest proxy) for zero-copy.

What if, at VIRTIO_GPU_CMD_RESOURCE_CREATE_2D time, we created an ivshmem 
device to back that resource? The ivshmem device would in turn be backed 
by a hostmem device that wraps a shmem FD.

The guest client can then export that resource/BO and pass the FD to the 
guest proxy. The guest proxy would import it and put the resource_id in 
the equivalent message in our protocol extension.

QEMU would get that resource id from vsock, look up which hostmem device 
is associated with that resource, and pass its FD to the compositor.

 > We also need a solution for the keymap shmem block.  I guess the keymap
 > doesn't change all that often, so maybe it is easiest to just copy it
 > over (host proxy -> guest proxy) instead of trying to map the host shmem
 > into the guest?

Not sure if that would be much simpler than creating an ivshmem+hostmem 
combo that wraps the incoming shmem FD and then having virtio-gpu create 
a BO that imports it.

Regards,

Tomeu
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-13 14:27                     ` Tomeu Vizoso
@ 2018-02-16 10:48                       ` Gerd Hoffmann
  0 siblings, 0 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2018-02-16 10:48 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Michael S. Tsirkin, David Airlie, Stefan Hajnoczi, Jason Wang,
	linux-kernel, dri-devel, virtualization, kernel

> > Yes.
> 
> Would it make sense for virtio-gpu to map buffers to the guest via PCI BARs?
> So we can use a single drm driver for both 2d and 3d.

Should be doable.

I'm wondering two things though:

(1) Will shmem actually help avoid a copy?

virtio-gpu with virgl will (even if the guest doesn't use opengl) store
the resources in gpu memory.  So the VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D
copy goes from guest memory directly to gpu memory, and if we export
that as dma-buf and pass it to the wayland server it should be able to
render it without doing another copy.

What does the wl_shm_pool workflow look like inside the wayland server?
Can it ask the gpu to render directly from the pool?  Or is a copy to
gpu memory needed here?  If the latter we would effectively trade one
copy for another ...

(2) Could we handle the mapping without needing shmem?

Possibly we could extend the vgem driver.  So we pass in an iov (which
qemu gets from guest via VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING), get
back a drm object.  That effectively creates drm objects on the host
which match the drm object in the guest (both backed by the same set of
physical pages).

cheers,
  Gerd

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 1/2] drm/virtio: Add window server support
  2018-02-05  8:19     ` Tomeu Vizoso
  2018-02-05 12:20       ` Gerd Hoffmann
@ 2019-03-18 12:47       ` Tomeu Vizoso
  1 sibling, 0 replies; 26+ messages in thread
From: Tomeu Vizoso @ 2019-03-18 12:47 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Michael S. Tsirkin, David Airlie, Jason Wang, linux-kernel,
	dri-devel, virtualization, kernel

[Tomasz wants to comment, adding him to CC]

On 2/5/18 9:19 AM, Tomeu Vizoso wrote:
> On 1 February 2018 at 17:36, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> Hi,
>>
>> Sorry for joining the party late. Had a broken finger and was
>> offline for a bunch of weeks (and a big backlog afterwards ...).
> 
> Hi, no problem, hope it's fine now.
> 
>>> This is to allow clients running within VMs to be able to
>>> communicate with a compositor in the host. Clients will use the
>>> communication protocol that the compositor supports, and virtio-gpu
>>> will assist with making buffers available in both sides, and
>>> copying content as needed.
>>
>> Why not use virtio-vsock to run the wayland protocol? I don't like
>> the idea of duplicating something with very similar functionality in 
>> virtio-gpu.
> 
> The reason for abandoning that approach was that the type of objects that
> could be shared via virtio-vsock would be extremely limited. Besides
> that being potentially confusing to users, it would mean from the
> implementation side that either virtio-vsock would gain a dependency on
> the drm subsystem, or an appropriate abstraction for shareable buffers
> would need to be added for little gain.
> 
> Another factor that was taken into account was that the complexity
> required for implementing passing protocol data around was very small
> when compared with the buffer sharing mechanism.
> 
>>> It is expected that a service in the guest will act as a proxy, 
>>> interacting with virtio-gpu to support unmodified clients.
>>
>> If you have a guest proxy anyway, using virtio-vsock for the protocol
>> stream and virtio-gpu for buffer sharing (and some day 3d rendering
>> too) should work fine, I think.
> 
> If I understand your proposal correctly, virtio-gpu would be used for
> creating buffers that could be shared across domains, but something
> equivalent to SCM_RIGHTS would still be needed in virtio-vsock?
> 
> If so, that's what was planned initially, with the concern being that we
> would be adding a bunch of complexity to virtio-vsock that would be only
> used in this specific use case. Then we would also need to figure out
> how virtio-vsock would be able to work with buffers from virtio-gpu
> (either direct dependency or a new abstraction).
> 
> If the mechanics of passing presentation data were very complex, I think
> this approach would have more merit. But as you can see from the code,
> it isn't that bad.
> 
>>> When the client notifies the compositor that it can read from that
> buffer,
>>> the proxy should copy the contents from the SHM region to the
>>> virtio-gpu resource and call DRM_VIRTGPU_TRANSFER_TO_HOST.
>>
>> What is the plan for the host side? I see basically two options: either
>> implement the host wayland proxy directly in qemu, or
>> implement it as a separate process, which then needs some help from
>> qemu to get access to the buffers. The latter would allow qemu to run
>> independently of the desktop session.
> 
> Regarding synchronizing buffers, this will no longer be needed in
> subsequent commits, as all shared memory is allocated in the host and
> mapped to the guest via KVM_SET_USER_MEMORY_REGION.
> 
> This is already the case for buffers passed from the compositor to the
> clients (see patch 2/2), and I'm working on the equivalent for buffers
> from the guest to the host (clients will still have to create buffers with
> DRM_VIRTGPU_RESOURCE_CREATE, but they will only be backed by host memory,
> so no calls to DRM_VIRTGPU_TRANSFER_TO_HOST are needed).
> 
> But in the case that we still need a proxy for some reason on the host
> side, I think it would be better to have it in the same process where
> virtio-gpu is implemented. In crosvm's case it would be in a process
> separate from the main VMM, as device processes are isolated from each
> other with minijail (see
> https://chromium.googlesource.com/chromiumos/platform/crosvm/ ).
> 
> Regards,
> 
> Tomeu
> 
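
For readers unfamiliar with the mechanism referred to in the quoted
mail: "something equivalent to SCM_RIGHTS" means the standard AF_UNIX
file-descriptor-passing pattern sketched below; the open question in
the thread is whether virtio-vsock should grow a counterpart (sketch
only, error handling omitted):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one file descriptor over an AF_UNIX socket using SCM_RIGHTS. */
static int send_fd(int sock, int fd)
{
    char data = 0;
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    union {
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    struct msghdr msg = { 0 };
    struct cmsghdr *cmsg;

    memset(&ctrl, 0, sizeof(ctrl));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    /* The ancillary data carries the FD; the kernel installs a duplicate
     * of it in the receiving process. */
    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}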

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2019-03-18 12:47 UTC | newest]

Thread overview: 26+ messages
2018-01-26 13:58 [PATCH v3 0/2] drm/virtio: Add window server support Tomeu Vizoso
2018-01-26 13:58 ` [PATCH v3 1/2] " Tomeu Vizoso
2018-02-01 16:36   ` Gerd Hoffmann
2018-02-05  8:19     ` Tomeu Vizoso
2018-02-05 12:20       ` Gerd Hoffmann
2018-02-05 14:46         ` Tomeu Vizoso
2018-02-05 16:03           ` Gerd Hoffmann
2018-02-06 12:41             ` Tomeu Vizoso
2018-02-06 14:23               ` Gerd Hoffmann
2018-02-07  1:09                 ` Michael S. Tsirkin
2018-02-07  7:41                   ` Tomeu Vizoso
2018-02-07  9:49                 ` Tomeu Vizoso
2018-02-09 11:14                   ` Tomeu Vizoso
2018-02-12 11:52                     ` Gerd Hoffmann
2018-02-12 14:00                       ` Tomeu Vizoso
2018-02-12 14:27                         ` Gerd Hoffmann
2018-02-12 14:42                           ` Tomeu Vizoso
2018-02-12 15:29                             ` Gerd Hoffmann
2018-02-12 11:45                   ` Gerd Hoffmann
2018-02-13  7:41                     ` Pekka Paalanen
2018-02-13 14:27                     ` Tomeu Vizoso
2018-02-16 10:48                       ` Gerd Hoffmann
2018-02-15 15:28                     ` Tomeu Vizoso
2018-02-06 15:00             ` Pekka Paalanen
2019-03-18 12:47       ` Tomeu Vizoso
2018-01-26 13:58 ` [PATCH v3 2/2] drm/virtio: Handle buffers from the compositor Tomeu Vizoso
