* [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Hi,

vhost-user allows a virtio device to be driven by a separate process. So
far, it has mainly been used with virtio-net. It can work with other
devices, such as input and gpu, as shown in this series.

Some of the benefits of using vhost-user are:
- parallelism, since the backend runs in a different process
- flexibility, since backends may be implemented by various parties
- some process isolation (virgl is a fairly recent project in which a
number of security issues have already been found, and OpenGL isn't
particularly safe either, since it may rely on large, closed-source GL
libraries), although limiting the backend's access to guest memory
would make it safer still.

The vhost-user-gpu backend can currently be run only with Spice (since
importing a dmabuf requires an EGL context, it's not easy to do with the
SDL/GTK UIs). It provides basic cursor/2d/3d rendering, but still lacks
some features (such as resize):
-object vhost-user-backend,id=vug,cmd="./vhost-user-gpu"
-device virtio-vga,virgl=true,vhost-user=vug
-spice disable-ticketing,gl=on,unix,addr=/tmp/spice.soc

As for vhost-user-input, it takes an input device path:
-object vhost-user-backend,id=vuid,cmd="vhost-user-input /dev/input/event0"
-device virtio-input-host-pci,vhost-user=vuid

This is based on top of the libvhost-user series sent earlier to the
list. For convenience, the branch is also available on github:
https://github.com/elmarco/qemu/ vhost-user-gpu

Comments welcome!

Marc-André Lureau (14):
  Add qemu_chr_open_socket()
  Add vhost-user-backend
  vhost-user: split vhost_user_read()
  vhost-user: add vhost_user_input_get_config()
  Add vhost-user backend to virtio-input-host
  contrib: add vhost-user-input
  misc: rename virtio-gpu.h header guard
  vhost: make sure call fd has been received
  qemu-char: use READ_RETRIES
  qemu-char: block during sync read
  console: add dpy_gl_scanout2()
  contrib: add vhost-user-gpu
  vhost-user: add vhost_user_gpu_set_socket()
  Add virtio-gpu vhost-user backend

 Makefile                               |    6 +
 Makefile.objs                          |    2 +
 backends/Makefile.objs                 |    2 +
 backends/vhost-user.c                  |  262 +++++++++
 configure                              |    5 +
 contrib/libvhost-user/libvhost-user.h  |    1 +
 contrib/vhost-user-gpu/Makefile.objs   |    7 +
 contrib/vhost-user-gpu/main.c          | 1012 ++++++++++++++++++++++++++++++++
 contrib/vhost-user-gpu/virgl.c         |  545 +++++++++++++++++
 contrib/vhost-user-gpu/virgl.h         |   24 +
 contrib/vhost-user-gpu/vugpu.h         |  155 +++++
 contrib/vhost-user-input/Makefile.objs |    1 +
 contrib/vhost-user-input/main.c        |  369 ++++++++++++
 docs/specs/vhost-user.txt              |    9 +
 hw/display/Makefile.objs               |    2 +-
 hw/display/vhost-gpu.c                 |  264 +++++++++
 hw/display/virtio-gpu-pci.c            |    6 +
 hw/display/virtio-gpu.c                |   75 ++-
 hw/display/virtio-vga.c                |    5 +
 hw/input/virtio-input-host.c           |   67 ++-
 hw/input/virtio-input.c                |    4 +
 hw/virtio/vhost-user.c                 |   97 ++-
 hw/virtio/vhost.c                      |    5 +
 hw/virtio/virtio-pci.c                 |    5 +
 include/hw/virtio/vhost-backend.h      |    5 +
 include/hw/virtio/virtio-gpu.h         |   11 +-
 include/hw/virtio/virtio-input.h       |    2 +
 include/sysemu/char.h                  |    2 +
 include/sysemu/vhost-user-backend.h    |   65 ++
 include/ui/console.h                   |   10 +
 qemu-char.c                            |   43 +-
 ui/console.c                           |   12 +
 ui/spice-display.c                     |   19 +
 33 files changed, 3073 insertions(+), 26 deletions(-)
 create mode 100644 backends/vhost-user.c
 create mode 100644 contrib/vhost-user-gpu/Makefile.objs
 create mode 100644 contrib/vhost-user-gpu/main.c
 create mode 100644 contrib/vhost-user-gpu/virgl.c
 create mode 100644 contrib/vhost-user-gpu/virgl.h
 create mode 100644 contrib/vhost-user-gpu/vugpu.h
 create mode 100644 contrib/vhost-user-input/Makefile.objs
 create mode 100644 contrib/vhost-user-input/main.c
 create mode 100644 hw/display/vhost-gpu.c
 create mode 100644 include/sysemu/vhost-user-backend.h

-- 
2.7.4

* [Qemu-devel] [RFC 01/14] Add qemu_chr_open_socket()
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Create a CharDriver from an existing socket fd. Is there a better way to
do that?
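
For context, a minimal sketch of the intended usage (this is how the
vhost-user-backend object added later in this series consumes it):

    int sv[2];
    Error *err = NULL;
    CharDriverState *chr;

    if (socketpair(PF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        /* report errno */
        return;
    }
    /* QEMU keeps one end of the pair as a chardev... */
    chr = qemu_chr_open_socket(sv[0], &err);
    /* ...and hands the other end (sv[1]) to the backend process */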

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/sysemu/char.h |  2 ++
 qemu-char.c           | 37 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/include/sysemu/char.h b/include/sysemu/char.h
index 307fd8f..bd97031 100644
--- a/include/sysemu/char.h
+++ b/include/sysemu/char.h
@@ -106,6 +106,8 @@ struct CharDriverState {
  */
 CharDriverState *qemu_chr_alloc(ChardevCommon *backend, Error **errp);
 
+CharDriverState *qemu_chr_open_socket(int fd, Error **errp);
+
 /**
  * @qemu_chr_new_from_opts:
  *
diff --git a/qemu-char.c b/qemu-char.c
index b597ee1..caa737d 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -96,6 +96,10 @@
 static char *SocketAddress_to_str(const char *prefix, SocketAddress *addr,
                                   bool is_listen, bool is_telnet)
 {
+    if (!addr) {
+        return g_strdup(prefix);
+    }
+
     switch (addr->type) {
     case SOCKET_ADDRESS_KIND_INET:
         return g_strdup_printf("%s%s:%s:%s%s", prefix,
@@ -166,7 +170,7 @@ CharDriverState *qemu_chr_alloc(ChardevCommon *backend, Error **errp)
     CharDriverState *chr = g_malloc0(sizeof(CharDriverState));
     qemu_mutex_init(&chr->chr_write_lock);
 
-    if (backend->has_logfile) {
+    if (backend && backend->has_logfile) {
         int flags = O_WRONLY | O_CREAT;
         if (backend->has_logappend &&
             backend->logappend) {
@@ -4466,6 +4470,37 @@ static CharDriverState *qmp_chardev_open_socket(const char *id,
     return NULL;
 }
 
+CharDriverState *qemu_chr_open_socket(int fd, Error **errp)
+{
+    CharDriverState *chr;
+    TCPCharDriver *s;
+
+    chr = qemu_chr_alloc(NULL, errp);
+    if (!chr) {
+        return NULL;
+    }
+
+    qemu_set_nonblock(fd);
+
+    s = g_new0(TCPCharDriver, 1);
+    chr->opaque = s;
+    chr->chr_write = tcp_chr_write;
+    chr->chr_sync_read = tcp_chr_sync_read;
+    chr->chr_close = tcp_chr_close;
+    chr->get_msgfds = tcp_get_msgfds;
+    chr->set_msgfds = tcp_set_msgfds;
+    chr->chr_add_watch = tcp_chr_add_watch;
+    chr->chr_update_read_handler = tcp_chr_update_read_handler;
+
+    if (tcp_chr_add_client(chr, fd) < 0) {
+        g_free(s);
+        qemu_chr_free_common(chr);
+        return NULL;
+    }
+
+    return chr;
+}
+
 static CharDriverState *qmp_chardev_open_udp(const char *id,
                                              ChardevBackend *backend,
                                              ChardevReturn *ret,
-- 
2.7.4

* [Qemu-devel] [RFC 02/14] Add vhost-user-backend
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Create a vhost-user-backend object that holds a connection to a
vhost-user backend and can be referenced from virtio devices that
support it.
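
A device supporting it holds a link property to the backend object and
initializes the vhost device at realize time; roughly (a sketch based on
the virtio-input patch later in this series, which uses two queues):

    object_property_add_link(obj, "vhost-user", TYPE_VHOST_USER_BACKEND,
                             (Object **)&vinput->vhost,
                             qdev_prop_allow_set_link_before_realize,
                             OBJ_PROP_LINK_UNREF_ON_RELEASE,
                             &error_abort);

    /* then, in the device realize function: */
    if (vinput->vhost) {
        vhost_user_backend_dev_init(vinput->vhost, vdev, 2, errp);
    }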

Currently, you may specify the executable to spawn directly on the
command line, e.g.: -object vhost-user-backend,id=vui,cmd="./vhost-user-input
/dev/input.."

It may be considered a security risk to allow spawning processes that
may execute arbitrary executables, so this may be restricted to some
known executables or a directory. If that is not acceptable, the object
may just use a socket chardev instead (like vhost-user-net today).
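
(For reference, vhost-user-net today is set up along these lines, with
/tmp/vubr.sock being an example path:

-chardev socket,id=chr0,path=/tmp/vubr.sock
-netdev type=vhost-user,id=net0,chardev=chr0
-device virtio-net-pci,netdev=net0

so the chardev variant of this object would take a chardev property
instead of cmd.)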

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 backends/Makefile.objs              |   2 +
 backends/vhost-user.c               | 262 ++++++++++++++++++++++++++++++++++++
 include/sysemu/vhost-user-backend.h |  65 +++++++++
 3 files changed, 329 insertions(+)
 create mode 100644 backends/vhost-user.c
 create mode 100644 include/sysemu/vhost-user-backend.h

diff --git a/backends/Makefile.objs b/backends/Makefile.objs
index 31a3a89..902addd 100644
--- a/backends/Makefile.objs
+++ b/backends/Makefile.objs
@@ -9,3 +9,5 @@ common-obj-$(CONFIG_TPM) += tpm.o
 
 common-obj-y += hostmem.o hostmem-ram.o
 common-obj-$(CONFIG_LINUX) += hostmem-file.o
+
+common-obj-y += vhost-user.o
diff --git a/backends/vhost-user.c b/backends/vhost-user.c
new file mode 100644
index 0000000..89121ed
--- /dev/null
+++ b/backends/vhost-user.c
@@ -0,0 +1,262 @@
+/*
+ * QEMU vhost-user backend
+ *
+ * Copyright (C) 2016 Red Hat Inc
+ *
+ * Authors:
+ *  Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+
+#include "qemu/osdep.h"
+#include "hw/qdev.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "qom/object_interfaces.h"
+#include "sysemu/vhost-user-backend.h"
+#include "sysemu/char.h"
+#include "sysemu/kvm.h"
+#include "io/channel-command.h"
+#include "hw/virtio/virtio-bus.h"
+
+static bool
+ioeventfd_enabled(void)
+{
+    return kvm_enabled() && kvm_eventfds_enabled();
+}
+
+int
+vhost_user_backend_dev_init(VhostUserBackend *b, VirtIODevice *vdev,
+                            unsigned nvqs, Error **errp)
+{
+    int ret;
+
+    assert(!b->vdev);
+
+    if (!ioeventfd_enabled()) {
+        error_setg(errp, "vhost initialization failed: requires kvm");
+        return -1;
+    }
+
+    b->vdev = vdev;
+    b->dev.nvqs = nvqs;
+    b->dev.vqs = g_new(struct vhost_virtqueue, nvqs);
+
+    ret = vhost_dev_init(&b->dev, b->chr, VHOST_BACKEND_TYPE_USER);
+    if (ret < 0) {
+        error_setg(errp, "vhost initialization failed: %s", strerror(-ret));
+        return -1;
+    }
+
+    return 0;
+}
+
+void
+vhost_user_backend_start(VhostUserBackend *b)
+{
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(b->vdev)));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    int ret;
+
+    if (b->started)
+        return;
+
+    if (!k->set_guest_notifiers) {
+        error_report("binding does not support guest notifiers");
+        return;
+    }
+
+    ret = vhost_dev_enable_notifiers(&b->dev, b->vdev);
+    if (ret < 0) {
+        return;
+    }
+
+    b->dev.acked_features = b->vdev->guest_features;
+    ret = vhost_dev_start(&b->dev, b->vdev);
+    if (ret < 0) {
+        error_report("Error start vhost dev");
+        goto err_notifiers;
+    }
+
+    ret = k->set_guest_notifiers(qbus->parent, b->dev.nvqs, true);
+    if (ret < 0) {
+        error_report("Error binding guest notifier");
+        goto err_vhost_stop;
+    }
+
+    b->started = true;
+    return;
+
+err_vhost_stop:
+    vhost_dev_stop(&b->dev, b->vdev);
+err_notifiers:
+    vhost_dev_disable_notifiers(&b->dev, b->vdev);
+}
+
+void
+vhost_user_backend_stop(VhostUserBackend *b)
+{
+    BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(b->vdev)));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    int ret = 0;
+
+    if (!b->started)
+        return;
+
+    if (k->set_guest_notifiers) {
+        ret = k->set_guest_notifiers(qbus->parent,
+                                     b->dev.nvqs, false);
+        if (ret < 0) {
+            error_report("vhost guest notifier cleanup failed: %d", ret);
+        }
+    }
+
+    vhost_dev_stop(&b->dev, b->vdev);
+    vhost_dev_disable_notifiers(&b->dev, b->vdev);
+
+    b->started = false;
+}
+
+static int
+vhost_user_backend_spawn_cmd(VhostUserBackend *b, int vhostfd, Error **errp)
+{
+    int devnull = open("/dev/null", O_RDWR);
+    pid_t pid;
+
+    assert(b->cmd);
+    assert(!b->child);
+
+    if (devnull < 0) {
+        error_setg_errno(errp, errno, "Unable to open /dev/null");
+        return -1;
+    }
+
+    pid = qemu_fork(errp);
+    if (pid < 0) {
+        close(devnull);
+        return -1;
+    }
+
+    if (pid == 0) { /* child */
+        int fd, maxfd = sysconf(_SC_OPEN_MAX);
+
+        dup2(devnull, STDIN_FILENO);
+        dup2(devnull, STDOUT_FILENO);
+        dup2(vhostfd, 3);
+
+        for (fd = 4; fd < maxfd; fd++) {
+            close(fd);
+        }
+
+        execlp("/bin/sh", "sh", "-c", b->cmd, NULL);
+        _exit(1);
+    }
+
+    b->child = QIO_CHANNEL(qio_channel_command_new_pid(devnull, devnull, pid));
+
+    return 0;
+}
+
+static void
+vhost_user_backend_complete(UserCreatable *uc, Error **errp)
+{
+    VhostUserBackend *b = VHOST_USER_BACKEND(uc);
+    int sv[2];
+
+    if (socketpair(PF_UNIX, SOCK_STREAM, 0, sv) == -1) {
+        error_setg_errno(errp, errno, "socketpair() failed");
+        return;
+    }
+
+    b->chr = qemu_chr_open_socket(sv[0], errp);
+    if (!b->chr) {
+        return;
+    }
+
+    vhost_user_backend_spawn_cmd(b, sv[1], errp);
+
+    close(sv[1]);
+
+    /* vhost_dev_init() could happen here, so that early vhost-user
+     * messages can be exchanged */
+    b->dev.opaque = b->chr;
+}
+
+static char *get_cmd(Object *obj, Error **errp)
+{
+    VhostUserBackend *b = VHOST_USER_BACKEND(obj);
+
+    return g_strdup(b->cmd);
+}
+
+static void set_cmd(Object *obj, const char *str, Error **errp)
+{
+    VhostUserBackend *b = VHOST_USER_BACKEND(obj);
+
+    if (b->child) {
+        error_setg(errp, "cannot change property value");
+        return;
+    }
+
+    g_free(b->cmd);
+    b->cmd = g_strdup(str);
+}
+
+static void vhost_user_backend_init(Object *obj)
+{
+    object_property_add_str(obj, "cmd", get_cmd, set_cmd, NULL);
+}
+
+static void vhost_user_backend_finalize(Object *obj)
+{
+    VhostUserBackend *b = VHOST_USER_BACKEND(obj);
+
+    g_free(b->cmd);
+
+    if (b->chr) {
+        qemu_chr_delete(b->chr);
+    }
+
+    if (b->child) {
+        object_unref(OBJECT(b->child));
+    }
+}
+
+static bool
+vhost_user_backend_can_be_deleted(UserCreatable *uc, Error **errp)
+{
+    return true;
+}
+
+static void
+vhost_user_backend_class_init(ObjectClass *oc, void *data)
+{
+    UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
+
+    ucc->complete = vhost_user_backend_complete;
+    ucc->can_be_deleted = vhost_user_backend_can_be_deleted;
+}
+
+static const TypeInfo vhost_user_backend_info = {
+    .name = TYPE_VHOST_USER_BACKEND,
+    .parent = TYPE_OBJECT,
+    .instance_size = sizeof(VhostUserBackend),
+    .instance_init = vhost_user_backend_init,
+    .instance_finalize = vhost_user_backend_finalize,
+    .class_size = sizeof(VhostUserBackendClass),
+    .class_init = vhost_user_backend_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        { TYPE_USER_CREATABLE },
+        { }
+    }
+};
+
+static void register_types(void)
+{
+    type_register_static(&vhost_user_backend_info);
+}
+
+type_init(register_types);
diff --git a/include/sysemu/vhost-user-backend.h b/include/sysemu/vhost-user-backend.h
new file mode 100644
index 0000000..ab7cdbc
--- /dev/null
+++ b/include/sysemu/vhost-user-backend.h
@@ -0,0 +1,65 @@
+/*
+ * QEMU vhost-user backend
+ *
+ * Copyright (C) 2016 Red Hat Inc
+ *
+ * Authors:
+ *  Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef QEMU_VHOST_USER_BACKEND_H
+#define QEMU_VHOST_USER_BACKEND_H
+
+#include "qom/object.h"
+#include "exec/memory.h"
+#include "qemu/option.h"
+#include "qemu/bitmap.h"
+#include "hw/virtio/vhost.h"
+#include "io/channel.h"
+
+#define TYPE_VHOST_USER_BACKEND "vhost-user-backend"
+#define VHOST_USER_BACKEND(obj) \
+    OBJECT_CHECK(VhostUserBackend, (obj), TYPE_VHOST_USER_BACKEND)
+#define VHOST_USER_BACKEND_GET_CLASS(obj) \
+    OBJECT_GET_CLASS(VhostUserBackendClass, (obj), TYPE_VHOST_USER_BACKEND)
+#define VHOST_USER_BACKEND_CLASS(klass) \
+    OBJECT_CLASS_CHECK(VhostUserBackendClass, (klass), TYPE_VHOST_USER_BACKEND)
+
+typedef struct VhostUserBackend VhostUserBackend;
+typedef struct VhostUserBackendClass VhostUserBackendClass;
+
+/**
+ * VhostUserBackendClass:
+ * @parent_class: opaque parent class container
+ */
+struct VhostUserBackendClass {
+    ObjectClass parent_class;
+};
+
+/**
+ * @VhostUserBackend
+ *
+ * @parent: opaque parent object container
+ */
+struct VhostUserBackend {
+    /* private */
+    Object parent;
+
+    char *cmd;
+
+    CharDriverState *chr;
+    struct vhost_dev dev;
+    QIOChannel *child;
+    VirtIODevice *vdev;
+
+    bool started;
+};
+
+int vhost_user_backend_dev_init(VhostUserBackend *b, VirtIODevice *vdev,
+                                unsigned nvqs, Error **errp);
+void vhost_user_backend_start(VhostUserBackend *b);
+void vhost_user_backend_stop(VhostUserBackend *b);
+
+#endif
-- 
2.7.4

* [Qemu-devel] [RFC 03/14] vhost-user: split vhost_user_read()
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Split vhost_user_read() so that the header alone can be read with
vhost_user_read_header().
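
This lets a caller read a variable-sized payload separately from the
header; a sketch of the pattern (the message type and the payload-read
helper are added in the next patch):

    VhostUserMsg msg = {
        .request = VHOST_USER_INPUT_GET_CONFIG,
        .flags = VHOST_USER_VERSION,
    };

    if (vhost_user_read_header(dev, &msg) < 0) {
        return -1;
    }
    /* msg.size now holds the payload length announced by the backend */
    p = vhost_user_read_size(dev, msg.size);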

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 hw/virtio/vhost-user.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 495e09f..63ffe48 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -112,7 +112,7 @@ static bool ioeventfd_enabled(void)
     return kvm_enabled() && kvm_eventfds_enabled();
 }
 
-static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
+static int vhost_user_read_header(struct vhost_dev *dev, VhostUserMsg *msg)
 {
     CharDriverState *chr = dev->opaque;
     uint8_t *p = (uint8_t *) msg;
@@ -122,7 +122,7 @@ static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
     if (r != size) {
         error_report("Failed to read msg header. Read %d instead of %d."
                      " Original request %d.", r, size, msg->request);
-        goto fail;
+        return -1;
     }
 
     /* validate received flags */
@@ -130,7 +130,20 @@ static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
         error_report("Failed to read msg header."
                 " Flags 0x%x instead of 0x%x.", msg->flags,
                 VHOST_USER_REPLY_MASK | VHOST_USER_VERSION);
-        goto fail;
+        return -1;
+    }
+
+    return 0;
+}
+
+static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
+{
+    CharDriverState *chr = dev->opaque;
+    uint8_t *p = (uint8_t *) msg;
+    int r, size;
+
+    if (vhost_user_read_header(dev, msg) < 0) {
+        return -1;
     }
 
     /* validate message size is sane */
@@ -138,7 +151,7 @@ static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
         error_report("Failed to read msg header."
                 " Size %d exceeds the maximum %zu.", msg->size,
                 VHOST_USER_PAYLOAD_SIZE);
-        goto fail;
+        return -1;
     }
 
     if (msg->size) {
@@ -148,14 +161,11 @@ static int vhost_user_read(struct vhost_dev *dev, VhostUserMsg *msg)
         if (r != size) {
             error_report("Failed to read msg payload."
                          " Read %d instead of %d.", r, msg->size);
-            goto fail;
+            return -1;
         }
     }
 
     return 0;
-
-fail:
-    return -1;
 }
 
 static bool vhost_user_one_time_request(VhostUserRequest request)
@@ -458,7 +468,7 @@ static int vhost_user_get_u64(struct vhost_dev *dev, int request, uint64_t *u64)
     vhost_user_write(dev, &msg, NULL, 0);
 
     if (vhost_user_read(dev, &msg) < 0) {
-        return 0;
+        return -1;
     }
 
     if (msg.request != request) {
-- 
2.7.4

* [Qemu-devel] [RFC 04/14] vhost-user: add vhost_user_input_get_config()
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 docs/specs/vhost-user.txt         |  9 ++++++
 hw/virtio/vhost-user.c            | 58 +++++++++++++++++++++++++++++++++++++++
 include/hw/virtio/vhost-backend.h |  4 +++
 3 files changed, 71 insertions(+)

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 777c49c..fab67a5 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -464,3 +464,12 @@ Message types
       is present in VHOST_USER_GET_PROTOCOL_FEATURES.
       The first 6 bytes of the payload contain the mac address of the guest to
       allow the vhost user backend to construct and broadcast the fake RARP.
+
+ * VHOST_USER_INPUT_GET_CONFIG
+
+      Id: 20
+      Equivalent ioctl: N/A
+      Master payload: N/A
+      Slave payload: (struct virtio_input_config)*
+
+      Ask the vhost-user input backend for the list of virtio_input_config.
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 63ffe48..8072c16 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -58,6 +58,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_GET_QUEUE_NUM = 17,
     VHOST_USER_SET_VRING_ENABLE = 18,
     VHOST_USER_SEND_RARP = 19,
+    VHOST_USER_INPUT_GET_CONFIG = 20,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -205,6 +206,63 @@ static int vhost_user_write(struct vhost_dev *dev, VhostUserMsg *msg,
             0 : -1;
 }
 
+static void *vhost_user_read_size(struct vhost_dev *dev, uint32_t size)
+{
+    CharDriverState *chr = dev->opaque;
+    int r;
+    uint8_t *p = g_malloc(size);
+
+    r = qemu_chr_fe_read_all(chr, p, size);
+    if (r != size) {
+        error_report("Failed to read msg payload."
+                     " Read %d instead of %d.", r, size);
+        return NULL;
+    }
+
+    return p;
+}
+
+int vhost_user_input_get_config(struct vhost_dev *dev,
+                                struct virtio_input_config **config)
+{
+    void *p = NULL;
+    VhostUserMsg msg = {
+        .request = VHOST_USER_INPUT_GET_CONFIG,
+        .flags = VHOST_USER_VERSION,
+    };
+
+    if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
+        goto err;
+    }
+
+    if (vhost_user_read_header(dev, &msg) < 0) {
+        goto err;
+    }
+
+    p = vhost_user_read_size(dev, msg.size);
+    if (!p) {
+        goto err;
+    }
+
+    if (msg.request != VHOST_USER_INPUT_GET_CONFIG) {
+        error_report("Received unexpected msg type. Expected %d received %d",
+                     VHOST_USER_INPUT_GET_CONFIG, msg.request);
+        goto err;
+    }
+
+    if (msg.size % sizeof(struct virtio_input_config)) {
+        error_report("Invalid msg size");
+        goto err;
+    }
+
+    *config = p;
+    return msg.size / sizeof(struct virtio_input_config);
+
+err:
+    g_free(p);
+    return -1;
+}
+
 static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
                                    struct vhost_log *log)
 {
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index 95fcc96..08d34db 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -11,6 +11,7 @@
 #ifndef VHOST_BACKEND_H_
 #define VHOST_BACKEND_H_
 
+#include "standard-headers/linux/virtio_input.h"
 
 typedef enum VhostBackendType {
     VHOST_BACKEND_TYPE_NONE = 0,
@@ -107,4 +108,7 @@ extern const VhostOps user_ops;
 int vhost_set_backend_type(struct vhost_dev *dev,
                            VhostBackendType backend_type);
 
+int vhost_user_input_get_config(struct vhost_dev *dev,
+                                struct virtio_input_config **config);
+
 #endif /* VHOST_BACKEND_H_ */
-- 
2.7.4

* [Qemu-devel] [RFC 05/14] Add vhost-user backend to virtio-input-host
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Teach virtio-input-host to use a vhost-user backend. Usage:

-object vhost-user-backend,id=vuid -device virtio-input-host-pci,vhost-user=vuid

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 hw/input/virtio-input-host.c     | 67 ++++++++++++++++++++++++++++++++++------
 hw/input/virtio-input.c          |  4 +++
 hw/virtio/virtio-pci.c           |  5 +++
 include/hw/virtio/virtio-input.h |  2 ++
 4 files changed, 69 insertions(+), 9 deletions(-)

diff --git a/hw/input/virtio-input-host.c b/hw/input/virtio-input-host.c
index cb79e80..c2c48bd 100644
--- a/hw/input/virtio-input-host.c
+++ b/hw/input/virtio-input-host.c
@@ -95,20 +95,14 @@ static void virtio_input_abs_config(VirtIOInputHost *vih, int axis)
     virtio_input_add_config(VIRTIO_INPUT(vih), &config);
 }
 
-static void virtio_input_host_realize(DeviceState *dev, Error **errp)
+static void virtio_input_evdev_init(VirtIOInputHost *vih, Error **errp)
 {
-    VirtIOInputHost *vih = VIRTIO_INPUT_HOST(dev);
-    VirtIOInput *vinput = VIRTIO_INPUT(dev);
+    VirtIOInput *vinput = VIRTIO_INPUT(vih);
     virtio_input_config id, *abs;
     struct input_id ids;
     int rc, ver, i, axis;
     uint8_t byte;
 
-    if (!vih->evdev) {
-        error_setg(errp, "evdev property is required");
-        return;
-    }
-
     vih->fd = open(vih->evdev, O_RDWR);
     if (vih->fd < 0)  {
         error_setg_file_open(errp, errno, vih->evdev);
@@ -175,7 +169,34 @@ static void virtio_input_host_realize(DeviceState *dev, Error **errp)
 err_close:
     close(vih->fd);
     vih->fd = -1;
-    return;
+}
+
+static void virtio_input_host_realize(DeviceState *dev, Error **errp)
+{
+    VirtIOInputHost *vih = VIRTIO_INPUT_HOST(dev);
+    VirtIOInput *vinput = VIRTIO_INPUT(vih);
+
+    if (!!vih->evdev + !!vinput->vhost != 1) {
+        error_setg(errp, "'evdev' or 'vhost-user' property is required");
+        return;
+    }
+
+    if (vih->evdev) {
+        virtio_input_evdev_init(vih, errp);
+    } else {
+        virtio_input_config *config;
+        int i, ret;
+
+        ret = vhost_user_input_get_config(&vinput->vhost->dev, &config);
+        if (ret < 0) {
+            error_setg(errp, "failed to get input config");
+            return;
+        }
+        for (i = 0; i < ret; i++) {
+            virtio_input_add_config(vinput, &config[i]);
+        }
+        g_free(config);
+    }
 }
 
 static void virtio_input_host_unrealize(DeviceState *dev, Error **errp)
@@ -210,6 +231,15 @@ static void virtio_input_host_handle_status(VirtIOInput *vinput,
     }
 }
 
+static void virtio_input_host_change_active(VirtIOInput *vinput)
+{
+    if (vinput->active) {
+        vhost_user_backend_start(vinput->vhost);
+    } else {
+        vhost_user_backend_stop(vinput->vhost);
+    }
+}
+
 static const VMStateDescription vmstate_virtio_input_host = {
     .name = "virtio-input-host",
     .unmigratable = 1,
@@ -230,6 +260,19 @@ static void virtio_input_host_class_init(ObjectClass *klass, void *data)
     vic->realize       = virtio_input_host_realize;
     vic->unrealize     = virtio_input_host_unrealize;
     vic->handle_status = virtio_input_host_handle_status;
+    vic->change_active = virtio_input_host_change_active;
+}
+
+static void virtio_input_host_user_is_busy(Object *obj, const char *name,
+                                           Object *val, Error **errp)
+{
+    VirtIOInput *vinput = VIRTIO_INPUT(obj);
+
+    if (vinput->vhost) {
+        error_setg(errp, "can't use already busy vhost-user");
+    } else {
+        qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
+    }
 }
 
 static void virtio_input_host_init(Object *obj)
@@ -237,6 +280,12 @@ static void virtio_input_host_init(Object *obj)
     VirtIOInput *vinput = VIRTIO_INPUT(obj);
 
     virtio_input_init_config(vinput, virtio_input_host_config);
+
+    object_property_add_link(obj, "vhost-user", TYPE_VHOST_USER_BACKEND,
+                             (Object **)&vinput->vhost,
+                             virtio_input_host_user_is_busy,
+                             OBJ_PROP_LINK_UNREF_ON_RELEASE,
+                             &error_abort);
 }
 
 static const TypeInfo virtio_input_host_info = {
diff --git a/hw/input/virtio-input.c b/hw/input/virtio-input.c
index f59749a..3c9ac46 100644
--- a/hw/input/virtio-input.c
+++ b/hw/input/virtio-input.c
@@ -280,6 +280,10 @@ static void virtio_input_device_realize(DeviceState *dev, Error **errp)
     vinput->evt = virtio_add_queue(vdev, 64, virtio_input_handle_evt);
     vinput->sts = virtio_add_queue(vdev, 64, virtio_input_handle_sts);
 
+    if (vinput->vhost) {
+        vhost_user_backend_dev_init(vinput->vhost, vdev, 2, errp);
+    }
+
     register_savevm(dev, "virtio-input", -1, VIRTIO_INPUT_VM_VERSION,
                     virtio_input_save, virtio_input_load, vinput);
 }
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index bfedbbf..0a14fd4 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -2454,6 +2454,11 @@ static void virtio_host_initfn(Object *obj)
 
     virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
                                 TYPE_VIRTIO_INPUT_HOST);
+
+    /* could eventually be included in qdev_alias_all_properties? */
+    object_property_add_alias(obj, "vhost-user",
+                              OBJECT(&dev->vdev), "vhost-user",
+                              &error_abort);
 }
 
 static const TypeInfo virtio_host_pci_info = {
diff --git a/include/hw/virtio/virtio-input.h b/include/hw/virtio/virtio-input.h
index bddbd4b..05694b1 100644
--- a/include/hw/virtio/virtio-input.h
+++ b/include/hw/virtio/virtio-input.h
@@ -2,6 +2,7 @@
 #define _QEMU_VIRTIO_INPUT_H
 
 #include "ui/input.h"
+#include "sysemu/vhost-user-backend.h"
 
 /* ----------------------------------------------------------------- */
 /* virtio input protocol                                             */
@@ -66,6 +67,7 @@ struct VirtIOInput {
     uint32_t                          qindex, qsize;
 
     bool                              active;
+    VhostUserBackend                  *vhost;
 };
 
 struct VirtIOInputClass {
-- 
2.7.4

* [Qemu-devel] [RFC 06/14] contrib: add vhost-user-input
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Add a vhost-user input backend example, based on the virtio-input-host
device. It takes an evdev path as its argument, speaks the vhost-user
protocol on the socket it inherits on file descriptor 3 (matching the
vhost-user-backend spawn convention), and can be associated with a
vhost-user-backend object, e.g.:

-object vhost-user-backend,id=vuid,cmd="vhost-user-input /dev/input/event0"

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 Makefile                               |   3 +
 Makefile.objs                          |   1 +
 configure                              |   1 +
 contrib/vhost-user-input/Makefile.objs |   1 +
 contrib/vhost-user-input/main.c        | 369 +++++++++++++++++++++++++++++++++
 5 files changed, 375 insertions(+)
 create mode 100644 contrib/vhost-user-input/Makefile.objs
 create mode 100644 contrib/vhost-user-input/main.c

diff --git a/Makefile b/Makefile
index 251217c..ae054de 100644
--- a/Makefile
+++ b/Makefile
@@ -152,6 +152,7 @@ dummy := $(call unnest-vars,, \
                 ivshmem-client-obj-y \
                 ivshmem-server-obj-y \
                 libvhost-user-obj-y \
+                vhost-user-input-obj-y \
                 qga-vss-dll-obj-y \
                 block-obj-y \
                 block-obj-m \
@@ -332,6 +333,8 @@ ivshmem-client$(EXESUF): $(ivshmem-client-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
 ivshmem-server$(EXESUF): $(ivshmem-server-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
+vhost-user-input$(EXESUF): $(vhost-user-input-obj-y) $(libvhost-user-obj-y) libqemuutil.a libqemustub.a
+	$(call LINK, $^)
 
 clean:
 # avoid old build problems by removing potentially incorrect old files
diff --git a/Makefile.objs b/Makefile.objs
index 5812da7..cdd48ca 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -116,3 +116,4 @@ qga-vss-dll-obj-y = qga/
 ivshmem-client-obj-y = contrib/ivshmem-client/
 ivshmem-server-obj-y = contrib/ivshmem-server/
 libvhost-user-obj-y = contrib/libvhost-user/
+vhost-user-input-obj-y = contrib/vhost-user-input/
diff --git a/configure b/configure
index b5aab72..b02c0f4 100755
--- a/configure
+++ b/configure
@@ -4602,6 +4602,7 @@ if test "$want_tools" = "yes" ; then
   if [ "$linux" = "yes" -o "$bsd" = "yes" -o "$solaris" = "yes" ] ; then
     tools="qemu-nbd\$(EXESUF) $tools"
     tools="ivshmem-client\$(EXESUF) ivshmem-server\$(EXESUF) $tools"
+    tools="vhost-user-input\$(EXESUF) $tools"
   fi
 fi
 if test "$softmmu" = yes ; then
diff --git a/contrib/vhost-user-input/Makefile.objs b/contrib/vhost-user-input/Makefile.objs
new file mode 100644
index 0000000..b1fad90
--- /dev/null
+++ b/contrib/vhost-user-input/Makefile.objs
@@ -0,0 +1 @@
+vhost-user-input-obj-y = main.o
diff --git a/contrib/vhost-user-input/main.c b/contrib/vhost-user-input/main.c
new file mode 100644
index 0000000..31eecb8
--- /dev/null
+++ b/contrib/vhost-user-input/main.c
@@ -0,0 +1,369 @@
+#include <glib.h>
+#include <linux/input.h>
+
+#include "qemu/osdep.h"
+#include "qemu/iov.h"
+#include "qemu/bswap.h"
+#include "contrib/libvhost-user/libvhost-user.h"
+#include "standard-headers/linux/virtio_input.h"
+
+typedef struct virtio_input_event virtio_input_event;
+typedef struct virtio_input_config virtio_input_config;
+
+typedef struct VuInput {
+    VuDev dev;
+    GSource *watches[16];
+    int evdevfd;
+    GArray *config;
+    virtio_input_event *queue;
+    uint32_t qindex, qsize;
+} VuInput;
+
+static void vi_input_send(VuInput *vi, struct virtio_input_event *event)
+{
+    VuDev *dev = &vi->dev;
+    VuVirtq *vq = vu_get_queue(dev, 0);
+    VuVirtqElement *elem;
+    unsigned have, need;
+    int i, len;
+
+    /* queue up events ... */
+    if (vi->qindex == vi->qsize) {
+        vi->qsize++;
+        vi->queue = realloc(vi->queue, vi->qsize *
+                                sizeof(virtio_input_event));
+    }
+    vi->queue[vi->qindex++] = *event;
+
+    /* ... until we see a report sync ... */
+    if (event->type != htole16(EV_SYN) ||
+        event->code != htole16(SYN_REPORT)) {
+        return;
+    }
+
+    /* ... then check available space ... */
+    need = sizeof(virtio_input_event) * vi->qindex;
+    vu_queue_get_avail_bytes(dev, vq, &have, NULL, need, 0);
+    if (have < need) {
+        vi->qindex = 0;
+        g_warning("ENOSPC in vq, dropping events");
+        return;
+    }
+
+    /* ... and finally pass them to the guest */
+    for (i = 0; i < vi->qindex; i++) {
+        elem = vu_queue_pop(dev, vq, sizeof(VuVirtqElement));
+        if (!elem) {
+            /* should not happen, we've checked for space beforehand */
+            g_warning("%s: Huh?  No vq elem available ...\n", __func__);
+            return;
+        }
+        len = iov_from_buf(elem->in_sg, elem->in_num,
+                           0, vi->queue + i, sizeof(virtio_input_event));
+        vu_queue_push(dev, vq, elem, len);
+        g_free(elem);
+    }
+    vu_queue_notify(&vi->dev, vq);
+    vi->qindex = 0;
+}
+
+static void
+vi_evdev_watch(VuDev *dev, int condition, void *data)
+{
+    VuInput *vi = data;
+    int fd = vi->evdevfd;
+
+    g_debug("Got evdev condition %x", condition);
+
+    struct virtio_input_event virtio;
+    struct input_event evdev;
+    int rc;
+
+    for (;;) {
+        rc = read(fd, &evdev, sizeof(evdev));
+        if (rc != sizeof(evdev)) {
+            break;
+        }
+
+        g_debug("input %d %d %d", evdev.type, evdev.code, evdev.value);
+
+        virtio.type  = htole16(evdev.type);
+        virtio.code  = htole16(evdev.code);
+        virtio.value = htole32(evdev.value);
+        vi_input_send(vi, &virtio);
+    }
+}
+
+static void vi_handle_sts(VuDev *dev, int qidx)
+{
+    VuInput *vi = container_of(dev, VuInput, dev);
+    VuVirtq *vq = vu_get_queue(dev, qidx);
+    virtio_input_event event;
+    VuVirtqElement *elem;
+    int len;
+
+    g_debug("%s", __func__);
+
+    for (;;) {
+        elem = vu_queue_pop(dev, vq, sizeof(VuVirtqElement));
+        if (!elem) {
+            break;
+        }
+
+        memset(&event, 0, sizeof(event));
+        len = iov_to_buf(elem->out_sg, elem->out_num,
+                         0, &event, sizeof(event));
+        g_debug("TODO handle status %d %p", len, elem);
+        vu_queue_push(dev, vq, elem, len);
+        g_free(elem);
+    }
+
+    vu_queue_notify(&vi->dev, vq);
+}
+
+static void
+vi_panic(VuDev *dev, const char *msg)
+{
+    g_critical("%s\n", msg);
+    exit(1);
+}
+
+typedef struct Watch {
+    GSource       source;
+    GIOCondition  condition;
+    gpointer      tag;
+    VuDev        *dev;
+    guint         id;
+} Watch;
+
+static GIOCondition
+vu_to_gio_condition(int condition)
+{
+    return (condition & VU_WATCH_IN ? G_IO_IN : 0) |
+           (condition & VU_WATCH_OUT ? G_IO_OUT : 0) |
+           (condition & VU_WATCH_PRI ? G_IO_PRI : 0) |
+           (condition & VU_WATCH_ERR ? G_IO_ERR : 0) |
+           (condition & VU_WATCH_HUP ? G_IO_HUP : 0);
+}
+
+static GIOCondition
+vu_from_gio_condition(int condition)
+{
+    return (condition & G_IO_IN ? VU_WATCH_IN : 0) |
+           (condition & G_IO_OUT ? VU_WATCH_OUT : 0) |
+           (condition & G_IO_PRI ? VU_WATCH_PRI : 0) |
+           (condition & G_IO_ERR ? VU_WATCH_ERR : 0) |
+           (condition & G_IO_HUP ? VU_WATCH_HUP : 0);
+}
+
+static gboolean
+watch_check(GSource *source)
+{
+    Watch *watch = (Watch *)source;
+    GIOCondition poll_condition = g_source_query_unix_fd(source, watch->tag);
+
+    return poll_condition & watch->condition;
+}
+
+static gboolean
+watch_dispatch(GSource *source,
+               GSourceFunc callback,
+               gpointer user_data)
+
+{
+    vu_watch_cb func = (vu_watch_cb)callback;
+    Watch *watch = (Watch *)source;
+    GIOCondition poll_condition = g_source_query_unix_fd(source, watch->tag);
+    int cond = vu_from_gio_condition(poll_condition & watch->condition);
+
+    (*func) (watch->dev, cond, user_data);
+
+    return G_SOURCE_CONTINUE;
+}
+
+static GSourceFuncs watch_funcs = {
+    .check = watch_check,
+    .dispatch = watch_dispatch,
+};
+
+static void
+set_fd_handler(VuDev *dev, int fd, GIOCondition condition,
+               vu_watch_cb cb, void *data)
+{
+    VuInput *vi = container_of(dev, VuInput, dev);
+    Watch *watch;
+    GSource *s;
+
+    g_assert_cmpint(fd, <, G_N_ELEMENTS(vi->watches));
+
+    s = vi->watches[fd];
+    if (cb) {
+        if (!s) {
+            s = g_source_new(&watch_funcs, sizeof(Watch));
+            watch = (Watch *)s;
+            watch->dev = dev;
+            watch->condition = condition;
+            watch->tag =
+                g_source_add_unix_fd(s, fd, condition);
+            watch->id = g_source_attach(s, NULL);
+            vi->watches[fd] = s;
+        } else {
+            watch = (Watch *)s;
+            g_source_modify_unix_fd(s, watch->tag, condition);
+        }
+
+        g_source_set_callback(s, (GSourceFunc)cb, data, NULL);
+    } else if (s) {
+        watch = (Watch *)s;
+        g_source_remove_unix_fd(s, watch->tag);
+        g_source_unref(s);
+        g_source_remove(watch->id);
+        vi->watches[fd] = NULL;
+    }
+}
+
+static void
+vi_add_watch(VuDev *dev, int fd, int condition,
+             vu_watch_cb cb, void *data)
+{
+    set_fd_handler(dev, fd, vu_to_gio_condition(condition), cb, data);
+}
+
+static void
+vi_remove_watch(VuDev *dev, int fd)
+{
+    set_fd_handler(dev, fd, 0, NULL, NULL);
+}
+
+static void
+vi_queue_set_started(VuDev *dev, int qidx, bool started)
+{
+    VuInput *vi = container_of(dev, VuInput, dev);
+    VuVirtq *vq = vu_get_queue(dev, qidx);
+
+    g_debug("queue started %d:%d", qidx, started);
+
+    if (qidx == 0) {
+        set_fd_handler(dev, vi->evdevfd, G_IO_IN,
+                       started ? vi_evdev_watch : NULL, vi);
+    } else {
+        vu_set_queue_handler(dev, vq, started ? vi_handle_sts : NULL);
+    }
+}
+
+static void
+vi_vhost_watch(VuDev *dev, int condition, void *data)
+{
+    vu_dispatch(dev);
+}
+
+static int
+vi_process_msg(VuDev *dev, VhostUserMsg *vmsg, int *do_reply)
+{
+    VuInput *vi = container_of(dev, VuInput, dev);
+
+    switch (vmsg->request) {
+    case VHOST_USER_INPUT_GET_CONFIG:
+        vmsg->size = vi->config->len * sizeof(virtio_input_config);
+        vmsg->data = g_memdup(vi->config->data, vmsg->size);
+        *do_reply = true;
+        return 1;
+    default:
+        return 0;
+    }
+}
+
+static const VuDevIface vuiface = {
+    .queue_set_started = vi_queue_set_started,
+    .process_msg = vi_process_msg,
+};
+
+static void
+vi_bits_config(VuInput *vi, int type, int count)
+{
+    virtio_input_config bits;
+    int rc, i, size = 0;
+
+    memset(&bits, 0, sizeof(bits));
+    rc = ioctl(vi->evdevfd, EVIOCGBIT(type, count / 8), bits.u.bitmap);
+    if (rc < 0) {
+        return;
+    }
+
+    for (i = 0; i < count / 8; i++) {
+        if (bits.u.bitmap[i]) {
+            size = i + 1;
+        }
+    }
+    if (size == 0) {
+        return;
+    }
+
+    bits.select = VIRTIO_INPUT_CFG_EV_BITS;
+    bits.subsel = type;
+    bits.size   = size;
+    g_array_append_val(vi->config, bits);
+}
+
+int
+main(int argc, char *argv[])
+{
+    GMainLoop *loop = NULL;
+    VuInput vi = { 0, };
+    int rc, ver;
+    virtio_input_config id;
+    struct input_id ids;
+
+    if (argc != 2) {
+        g_error("evdev path argument required");
+    }
+
+    vi.evdevfd = open(argv[1], O_RDWR);
+    if (vi.evdevfd < 0) {
+        g_error("Failed to open evdev: %s", g_strerror(errno));
+    }
+
+    rc = ioctl(vi.evdevfd, EVIOCGVERSION, &ver);
+    if (rc < 0) {
+        g_error("%s: is not an evdev device", argv[1]);
+    }
+
+    rc = ioctl(vi.evdevfd, EVIOCGRAB, 1);
+    if (rc < 0) {
+        g_error("Failed to grab device");
+    }
+
+    vi.config = g_array_new(false, false, sizeof(virtio_input_config));
+    memset(&id, 0, sizeof(id));
+    ioctl(vi.evdevfd, EVIOCGNAME(sizeof(id.u.string) - 1), id.u.string);
+    id.select = VIRTIO_INPUT_CFG_ID_NAME;
+    id.size = strlen(id.u.string);
+    g_array_append_val(vi.config, id);
+
+    if (ioctl(vi.evdevfd, EVIOCGID, &ids) == 0) {
+        memset(&id, 0, sizeof(id));
+        id.select = VIRTIO_INPUT_CFG_ID_DEVIDS;
+        id.size = sizeof(struct virtio_input_devids);
+        id.u.ids.bustype = cpu_to_le16(ids.bustype);
+        id.u.ids.vendor  = cpu_to_le16(ids.vendor);
+        id.u.ids.product = cpu_to_le16(ids.product);
+        id.u.ids.version = cpu_to_le16(ids.version);
+        g_array_append_val(vi.config, id);
+    }
+
+    vi_bits_config(&vi, EV_KEY, KEY_CNT);
+    vi_bits_config(&vi, EV_REL, REL_CNT);
+    vi_bits_config(&vi, EV_ABS, ABS_CNT);
+    vi_bits_config(&vi, EV_MSC, MSC_CNT);
+    vi_bits_config(&vi, EV_SW,  SW_CNT);
+    g_debug("config length: %u", vi.config->len);
+
+    vu_init(&vi.dev, 3, vi_panic, vi_add_watch, vi_remove_watch, &vuiface);
+    set_fd_handler(&vi.dev, 3, G_IO_IN | G_IO_HUP, vi_vhost_watch, NULL);
+
+    loop = g_main_loop_new(NULL, FALSE);
+    g_main_loop_run(loop);
+    g_main_loop_unref(loop);
+
+    return 0;
+}
-- 
2.7.4

* [Qemu-devel] [RFC 07/14] misc: rename virtio-gpu.h header guard
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/hw/virtio/virtio-gpu.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 1602a13..0cc8e67 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -11,8 +11,8 @@
  * See the COPYING file in the top-level directory.
  */
 
-#ifndef _QEMU_VIRTIO_VGA_H
-#define _QEMU_VIRTIO_VGA_H
+#ifndef _QEMU_VIRTIO_GPU_H
+#define _QEMU_VIRTIO_GPU_H
 
 #include "qemu/queue.h"
 #include "ui/qemu-pixman.h"
-- 
2.7.4

* [Qemu-devel] [RFC 08/14] vhost: make sure call fd has been received
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

vhost switches between the masked notifier and the guest notifier when
unmasking. virtio_pci_vq_vector_unmask() checks whether there is a
pending notification, but at the time of the check the vhost-user
backend doesn't guarantee that the switch has happened yet, so some
events may be lost.

To solve this vhost-user race, introduce an extra "sync" call (waiting
for a reply). Alternatively, one may want to make all/many vhost-user
replies mandatory by adding a new capability.
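
A sketch of the assumed problematic ordering:

    /*
     * QEMU (unmask path)                 vhost-user backend
     * ------------------                 ------------------
     * SET_VRING_CALL(guest notifier) --> message still in socket buffer
     * virtio_pci_vq_vector_unmask()
     *   sees no pending notification
     *                                    signals the old (masked) fd:
     *                                    event lost
     * GET_FEATURES, wait for reply   --> forces SET_VRING_CALL to be
     *                                    processed first
     */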

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 hw/virtio/vhost.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 4400718..692c38f 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1181,6 +1181,7 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
     struct VirtQueue *vvq = virtio_get_queue(vdev, n);
     int r, index = n - hdev->vq_index;
     struct vhost_vring_file file;
+    uint64_t features;
 
     if (mask) {
         assert(vdev->use_guest_notifier_mask);
@@ -1192,6 +1193,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
     file.index = hdev->vhost_ops->vhost_get_vq_index(hdev, n);
     r = hdev->vhost_ops->vhost_set_vring_call(hdev, &file);
     assert(r >= 0);
+
+    /* silly sync call to make sure the call fd has been received */
+    r = hdev->vhost_ops->vhost_get_features(hdev, &features);
+    assert(r >= 0);
 }
 
 uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
-- 
2.7.4

* [Qemu-devel] [RFC 09/14] qemu-char: use READ_RETRIES
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

The define was introduced along with qemu_chr_fe_read_all() in commit
7b0bfdf52d, but was never used.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 qemu-char.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qemu-char.c b/qemu-char.c
index caa737d..efa1e2a 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -325,13 +325,13 @@ int qemu_chr_fe_write_all(CharDriverState *s, const uint8_t *buf, int len)
 
 int qemu_chr_fe_read_all(CharDriverState *s, uint8_t *buf, int len)
 {
-    int offset = 0, counter = 10;
+    int offset = 0, counter = READ_RETRIES;
     int res;
 
     if (!s->chr_sync_read) {
         return 0;
     }
-    
+
     if (s->replay && replay_mode == REPLAY_MODE_PLAY) {
         return replay_char_read_all_load(buf);
     }
-- 
2.7.4

* [Qemu-devel] [RFC 10/14] qemu-char: block during sync read
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

A sync read should block until data is available instead of retrying,
so switch the channel to blocking mode for the duration of the read.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 qemu-char.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/qemu-char.c b/qemu-char.c
index efa1e2a..ce09226 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -2911,7 +2911,9 @@ static int tcp_chr_sync_read(CharDriverState *chr, const uint8_t *buf, int len)
         return 0;
     }
 
+    qio_channel_set_blocking(s->ioc, true, NULL);
     size = tcp_chr_recv(chr, (void *) buf, len);
+    qio_channel_set_blocking(s->ioc, false, NULL);
     if (size == 0) {
         /* connection closed */
         tcp_chr_disconnect(chr);
-- 
2.7.4

* [Qemu-devel] [RFC 11/14] console: add dpy_gl_scanout2()
From: marcandre.lureau @ 2016-06-04 21:33 UTC
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Add a new scanout callback that doesn't require any GL context in
QEMU (importing a dmabuf fd would require QEMU EGL & GL contexts, which
would be unnecessary when using Spice anyway).
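
A caller (the vhost-user gpu device added later in this series; the
variable names here are illustrative) simply forwards the dmabuf fd and
its metadata received from the backend:

    dpy_gl_scanout2(con, fd, y_0_top,
                    x, y, w, h,
                    fd_w, fd_h, fd_stride, fd_fourcc);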

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 include/ui/console.h | 10 ++++++++++
 ui/console.c         | 12 ++++++++++++
 ui/spice-display.c   | 19 +++++++++++++++++++
 3 files changed, 41 insertions(+)

diff --git a/include/ui/console.h b/include/ui/console.h
index 52a5f65..14fd5ad 100644
--- a/include/ui/console.h
+++ b/include/ui/console.h
@@ -218,6 +218,11 @@ typedef struct DisplayChangeListenerOps {
     void (*dpy_gl_scanout)(DisplayChangeListener *dcl,
                            uint32_t backing_id, bool backing_y_0_top,
                            uint32_t x, uint32_t y, uint32_t w, uint32_t h);
+    void (*dpy_gl_scanout2)(DisplayChangeListener *dcl,
+                            int fd, bool backing_y_0_top,
+                            uint32_t x, uint32_t y, uint32_t w, uint32_t h,
+                            uint32_t fd_w, uint32_t fd_h, uint32_t fd_stride,
+                            int fd_fourcc);
     void (*dpy_gl_update)(DisplayChangeListener *dcl,
                           uint32_t x, uint32_t y, uint32_t w, uint32_t h);
 
@@ -286,6 +291,11 @@ bool dpy_gfx_check_format(QemuConsole *con,
 void dpy_gl_scanout(QemuConsole *con,
                     uint32_t backing_id, bool backing_y_0_top,
                     uint32_t x, uint32_t y, uint32_t w, uint32_t h);
+void dpy_gl_scanout2(QemuConsole *con,
+                     int fd, bool backing_y_0_top,
+                     uint32_t x, uint32_t y, uint32_t w, uint32_t h,
+                     uint32_t fd_w, uint32_t fd_h, uint32_t fd_stride,
+                     int fd_fourcc);
 void dpy_gl_update(QemuConsole *con,
                    uint32_t x, uint32_t y, uint32_t w, uint32_t h);
 
diff --git a/ui/console.c b/ui/console.c
index bf38579..c36f742 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1712,6 +1712,18 @@ void dpy_gl_scanout(QemuConsole *con,
                                  x, y, width, height);
 }
 
+void dpy_gl_scanout2(QemuConsole *con,
+                     int fd, bool backing_y_0_top,
+                     uint32_t x, uint32_t y, uint32_t w, uint32_t h,
+                     uint32_t fd_w, uint32_t fd_h, uint32_t fd_stride,
+                     int fd_fourcc)
+{
+    assert(con->gl);
+    con->gl->ops->dpy_gl_scanout2(con->gl, fd, backing_y_0_top,
+                                  x, y, w, h, fd_w, fd_h, fd_stride,
+                                  fd_fourcc);
+}
+
 void dpy_gl_update(QemuConsole *con,
                    uint32_t x, uint32_t y, uint32_t w, uint32_t h)
 {
diff --git a/ui/spice-display.c b/ui/spice-display.c
index 0553c5e..06d2e4e 100644
--- a/ui/spice-display.c
+++ b/ui/spice-display.c
@@ -888,6 +888,24 @@ static void qemu_spice_gl_scanout(DisplayChangeListener *dcl,
     qemu_spice_gl_monitor_config(ssd, x, y, w, h);
 }
 
+static void
+qemu_spice_gl_scanout2(DisplayChangeListener *dcl,
+                       int fd, bool y_0_top,
+                       uint32_t x, uint32_t y, uint32_t w, uint32_t h,
+                       uint32_t fd_w, uint32_t fd_h, uint32_t fd_stride,
+                       int fd_fourcc)
+{
+    SimpleSpiceDisplay *ssd = container_of(dcl, SimpleSpiceDisplay, dcl);
+
+    /* note: spice server will close the fd */
+    spice_qxl_gl_scanout(&ssd->qxl, fd,
+                         fd_w,
+                         fd_h,
+                         fd_stride, fd_fourcc, y_0_top);
+
+    qemu_spice_gl_monitor_config(ssd, x, y, w, h);
+}
+
 static void qemu_spice_gl_update(DisplayChangeListener *dcl,
                                  uint32_t x, uint32_t y, uint32_t w, uint32_t h)
 {
@@ -915,6 +933,7 @@ static const DisplayChangeListenerOps display_listener_gl_ops = {
     .dpy_gl_ctx_get_current  = qemu_egl_get_current_context,
 
     .dpy_gl_scanout          = qemu_spice_gl_scanout,
+    .dpy_gl_scanout2         = qemu_spice_gl_scanout2,
     .dpy_gl_update           = qemu_spice_gl_update,
 };
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [RFC 12/14] contrib: add vhost-user-gpu
  2016-06-04 21:33 [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices marcandre.lureau
                   ` (10 preceding siblings ...)
  2016-06-04 21:33 ` [Qemu-devel] [RFC 11/14] console: add dpy_gl_scanout2() marcandre.lureau
@ 2016-06-04 21:33 ` marcandre.lureau
  2016-06-04 21:33 ` [Qemu-devel] [RFC 13/14] vhost-user: add vhost_user_gpu_set_socket() marcandre.lureau
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: marcandre.lureau @ 2016-06-04 21:33 UTC (permalink / raw)
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Add an example vhost-user gpu backend, based on the virtio-gpu/3d
device. It is to be associated with a vhost-user-backend object, e.g.:

-object vhost-user-backend,id=vug,cmd="vhost-user-gpu"
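
Besides the vhost-user socket it inherits, the backend sends display
updates to qemu on a separate socket. As a minimal sketch (types from
vugpu.h below), announcing a scanout change on that socket looks like:

    VhostGpuMsg msg = {
        .request = VHOST_GPU_SCANOUT,
        .size = sizeof(VhostGpuScanout),
        .payload.scanout = { .scanout_id = 0, .width = 1024, .height = 768 },
    };
    /* header (request + size) first, then the payload; fds, when
     * needed, travel as SCM_RIGHTS ancillary data */
    vg_sock_fd_write(sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, -1);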

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 Makefile                             |    3 +
 Makefile.objs                        |    1 +
 configure                            |    4 +
 contrib/vhost-user-gpu/Makefile.objs |    7 +
 contrib/vhost-user-gpu/main.c        | 1012 ++++++++++++++++++++++++++++++++++
 contrib/vhost-user-gpu/virgl.c       |  545 ++++++++++++++++++
 contrib/vhost-user-gpu/virgl.h       |   24 +
 contrib/vhost-user-gpu/vugpu.h       |  155 ++++++
 8 files changed, 1751 insertions(+)
 create mode 100644 contrib/vhost-user-gpu/Makefile.objs
 create mode 100644 contrib/vhost-user-gpu/main.c
 create mode 100644 contrib/vhost-user-gpu/virgl.c
 create mode 100644 contrib/vhost-user-gpu/virgl.h
 create mode 100644 contrib/vhost-user-gpu/vugpu.h

diff --git a/Makefile b/Makefile
index ae054de..a97aa94 100644
--- a/Makefile
+++ b/Makefile
@@ -153,6 +153,7 @@ dummy := $(call unnest-vars,, \
                 ivshmem-server-obj-y \
                 libvhost-user-obj-y \
                 vhost-user-input-obj-y \
+                vhost-user-gpu-obj-y \
                 qga-vss-dll-obj-y \
                 block-obj-y \
                 block-obj-m \
@@ -335,6 +336,8 @@ ivshmem-server$(EXESUF): $(ivshmem-server-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
 vhost-user-input$(EXESUF): $(vhost-user-input-obj-y) $(libvhost-user-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
+vhost-user-gpu$(EXESUF): $(vhost-user-gpu-obj-y) $(libvhost-user-obj-y) libqemuutil.a libqemustub.a
+	$(call LINK, $^)
 
 clean:
 # avoid old build problems by removing potentially incorrect old files
diff --git a/Makefile.objs b/Makefile.objs
index cdd48ca..c8a949c 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -117,3 +117,4 @@ ivshmem-client-obj-y = contrib/ivshmem-client/
 ivshmem-server-obj-y = contrib/ivshmem-server/
 libvhost-user-obj-y = contrib/libvhost-user/
 vhost-user-input-obj-y = contrib/vhost-user-input/
+vhost-user-gpu-obj-y = contrib/vhost-user-gpu/
diff --git a/configure b/configure
index b02c0f4..bf120f0 100755
--- a/configure
+++ b/configure
@@ -4603,6 +4603,7 @@ if test "$want_tools" = "yes" ; then
     tools="qemu-nbd\$(EXESUF) $tools"
     tools="ivshmem-client\$(EXESUF) ivshmem-server\$(EXESUF) $tools"
     tools="vhost-user-input\$(EXESUF) $tools"
+    tools="vhost-user-gpu\$(EXESUF) $tools"
   fi
 fi
 if test "$softmmu" = yes ; then
@@ -5924,6 +5925,9 @@ if [ "$pixman" = "internal" ]; then
   echo "config-host.h: subdir-pixman" >> $config_host_mak
 fi
 
+echo "PIXMAN_CFLAGS=$pixman_cflags" >> $config_host_mak
+echo "PIXMAN_LIBS=$pixman_libs" >> $config_host_mak
+
 if [ "$dtc_internal" = "yes" ]; then
   echo "config-host.h: subdir-dtc" >> $config_host_mak
 fi
diff --git a/contrib/vhost-user-gpu/Makefile.objs b/contrib/vhost-user-gpu/Makefile.objs
new file mode 100644
index 0000000..da7c4b8
--- /dev/null
+++ b/contrib/vhost-user-gpu/Makefile.objs
@@ -0,0 +1,7 @@
+vhost-user-gpu-obj-y = main.o virgl.o
+
+main.o-cflags := $(PIXMAN_CFLAGS)
+main.o-libs := $(PIXMAN_LIBS)
+
+virgl.o-cflags := $(VIRGL_CFLAGS)
+virgl.o-libs := $(VIRGL_LIBS)
diff --git a/contrib/vhost-user-gpu/main.c b/contrib/vhost-user-gpu/main.c
new file mode 100644
index 0000000..047fb45
--- /dev/null
+++ b/contrib/vhost-user-gpu/main.c
@@ -0,0 +1,1012 @@
+/*
+ * Virtio vhost-user GPU Device
+ *
+ * Copyright Red Hat, Inc. 2013-2016
+ *
+ * Authors:
+ *     Dave Airlie <airlied@redhat.com>
+ *     Gerd Hoffmann <kraxel@redhat.com>
+ *     Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include <glib.h>
+#include <pixman.h>
+
+#include "vugpu.h"
+#include "virgl.h"
+
+struct virtio_gpu_simple_resource {
+    uint32_t resource_id;
+    uint32_t width;
+    uint32_t height;
+    uint32_t format;
+    struct iovec *iov;
+    unsigned int iov_cnt;
+    uint32_t scanout_bitmask;
+    pixman_image_t *image;
+    QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
+};
+
+#define VG_DEBUG 0
+
+#define DPRINT(...)                             \
+    do {                                        \
+        if (VG_DEBUG) {                         \
+            fprintf(stderr, __VA_ARGS__);       \
+        }                                       \
+    } while (0)
+
+static const char *
+vg_cmd_to_string(int cmd)
+{
+#define CMD(cmd) [cmd] = #cmd
+    static const char *vg_cmd_str[] = {
+        CMD(VIRTIO_GPU_UNDEFINED),
+
+        /* 2d commands */
+        CMD(VIRTIO_GPU_CMD_GET_DISPLAY_INFO),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_UNREF),
+        CMD(VIRTIO_GPU_CMD_SET_SCANOUT),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_FLUSH),
+        CMD(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING),
+        CMD(VIRTIO_GPU_CMD_GET_CAPSET_INFO),
+        CMD(VIRTIO_GPU_CMD_GET_CAPSET),
+
+        /* 3d commands */
+        CMD(VIRTIO_GPU_CMD_CTX_CREATE),
+        CMD(VIRTIO_GPU_CMD_CTX_DESTROY),
+        CMD(VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE),
+        CMD(VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE),
+        CMD(VIRTIO_GPU_CMD_RESOURCE_CREATE_3D),
+        CMD(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D),
+        CMD(VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D),
+        CMD(VIRTIO_GPU_CMD_SUBMIT_3D),
+
+        /* cursor commands */
+        CMD(VIRTIO_GPU_CMD_UPDATE_CURSOR),
+        CMD(VIRTIO_GPU_CMD_MOVE_CURSOR),
+    };
+#undef CMD
+
+    if (cmd >= 0 && cmd < G_N_ELEMENTS(vg_cmd_str) && vg_cmd_str[cmd]) {
+        return vg_cmd_str[cmd];
+    } else {
+        return "unknown";
+    }
+}
+
+ssize_t
+vg_sock_fd_write(int sock, void *buf, ssize_t buflen, int fd)
+{
+    ssize_t size;
+    struct msghdr msg;
+    struct iovec iov;
+    union {
+        struct cmsghdr cmsghdr;
+        char control[CMSG_SPACE(sizeof(int))];
+    } cmsgu;
+    struct cmsghdr *cmsg;
+
+    iov.iov_base = buf;
+    iov.iov_len = buflen;
+
+    msg.msg_name = NULL;
+    msg.msg_namelen = 0;
+    msg.msg_iov = &iov;
+    msg.msg_iovlen = 1;
+
+    if (fd != -1) {
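+        /* attach the fd as SCM_RIGHTS ancillary data */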
+        msg.msg_control = cmsgu.control;
+        msg.msg_controllen = sizeof(cmsgu.control);
+
+        cmsg = CMSG_FIRSTHDR(&msg);
+        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
+        cmsg->cmsg_level = SOL_SOCKET;
+        cmsg->cmsg_type = SCM_RIGHTS;
+
+        *((int *)CMSG_DATA(cmsg)) = fd;
+    } else {
+        msg.msg_control = NULL;
+        msg.msg_controllen = 0;
+    }
+
+    size = sendmsg(sock, &msg, 0);
+
+    return size;
+}
+
+static struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VuGpu *g, uint32_t resource_id)
+{
+    struct virtio_gpu_simple_resource *res;
+
+    QTAILQ_FOREACH(res, &g->reslist, next) {
+        if (res->resource_id == resource_id) {
+            return res;
+        }
+    }
+    return NULL;
+}
+
+void
+vg_ctrl_response(VuGpu *g,
+                 struct virtio_gpu_ctrl_command *cmd,
+                 struct virtio_gpu_ctrl_hdr *resp,
+                 size_t resp_len)
+{
+    size_t s;
+
+    if (cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE) {
+        resp->flags |= VIRTIO_GPU_FLAG_FENCE;
+        resp->fence_id = cmd->cmd_hdr.fence_id;
+        resp->ctx_id = cmd->cmd_hdr.ctx_id;
+    }
+    /* qemu_hexdump(resp, stderr, "re:", resp_len); */
+    s = iov_from_buf(cmd->elem.in_sg, cmd->elem.in_num, 0, resp, resp_len);
+    if (s != resp_len) {
+        g_critical("%s: response size incorrect %zu vs %zu",
+                   __func__, s, resp_len);
+    }
+    vu_queue_push(&g->dev, cmd->vq, &cmd->elem, s);
+    vu_queue_notify(&g->dev, cmd->vq);
+    cmd->finished = true;
+}
+
+void
+vg_ctrl_response_nodata(VuGpu *g,
+                        struct virtio_gpu_ctrl_command *cmd,
+                        enum virtio_gpu_ctrl_type type)
+{
+    struct virtio_gpu_ctrl_hdr resp = {
+        .type = type,
+    };
+
+    vg_ctrl_response(g, cmd, &resp, sizeof(resp));
+}
+
+void
+vg_get_display_info(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resp_display_info dpy_info = { 0 };
+    int i;
+
+    dpy_info.hdr.type = VIRTIO_GPU_RESP_OK_DISPLAY_INFO;
+    for (i = 0; i < 1 /* g->conf.max_outputs */; i++) {
+        /* if (g->enabled_output_bitmask & (1 << i)) { */
+        dpy_info.pmodes[i].enabled = 1;
+        dpy_info.pmodes[i].r.width = 1024;
+        dpy_info.pmodes[i].r.height = 768;
+        /* } */
+    }
+
+    vg_ctrl_response(vg, cmd, &dpy_info.hdr, sizeof(dpy_info));
+}
+
+static pixman_format_code_t
+get_pixman_format(uint32_t virtio_gpu_format)
+{
+    switch (virtio_gpu_format) {
+#ifdef HOST_WORDS_BIGENDIAN
+    case VIRTIO_GPU_FORMAT_B8G8R8X8_UNORM:
+        return PIXMAN_b8g8r8x8;
+    case VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM:
+        return PIXMAN_b8g8r8a8;
+    case VIRTIO_GPU_FORMAT_X8R8G8B8_UNORM:
+        return PIXMAN_x8r8g8b8;
+    case VIRTIO_GPU_FORMAT_A8R8G8B8_UNORM:
+        return PIXMAN_a8r8g8b8;
+    case VIRTIO_GPU_FORMAT_R8G8B8X8_UNORM:
+        return PIXMAN_r8g8b8x8;
+    case VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM:
+        return PIXMAN_r8g8b8a8;
+    case VIRTIO_GPU_FORMAT_X8B8G8R8_UNORM:
+        return PIXMAN_x8b8g8r8;
+    case VIRTIO_GPU_FORMAT_A8B8G8R8_UNORM:
+        return PIXMAN_a8b8g8r8;
+#else
+    case VIRTIO_GPU_FORMAT_B8G8R8X8_UNORM:
+        return PIXMAN_x8r8g8b8;
+    case VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM:
+        return PIXMAN_a8r8g8b8;
+    case VIRTIO_GPU_FORMAT_X8R8G8B8_UNORM:
+        return PIXMAN_b8g8r8x8;
+    case VIRTIO_GPU_FORMAT_A8R8G8B8_UNORM:
+        return PIXMAN_b8g8r8a8;
+    case VIRTIO_GPU_FORMAT_R8G8B8X8_UNORM:
+        return PIXMAN_x8b8g8r8;
+    case VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM:
+        return PIXMAN_a8b8g8r8;
+    case VIRTIO_GPU_FORMAT_X8B8G8R8_UNORM:
+        return PIXMAN_r8g8b8x8;
+    case VIRTIO_GPU_FORMAT_A8B8G8R8_UNORM:
+        return PIXMAN_r8g8b8a8;
+#endif
+    default:
+        return 0;
+    }
+}
+
+static void
+vg_resource_create_2d(VuGpu *g,
+                      struct virtio_gpu_ctrl_command *cmd)
+{
+    pixman_format_code_t pformat;
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_create_2d c2d;
+
+    VUGPU_FILL_CMD(c2d);
+
+    if (c2d.resource_id == 0) {
+        g_critical("%s: resource id 0 is not allowed", __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, c2d.resource_id);
+    if (res) {
+        g_critical("%s: resource already exists %d", __func__, c2d.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    res->width = c2d.width;
+    res->height = c2d.height;
+    res->format = c2d.format;
+    res->resource_id = c2d.resource_id;
+
+    pformat = get_pixman_format(c2d.format);
+    if (!pformat) {
+        g_critical("%s: host couldn't handle guest format %d",
+                   __func__, c2d.format);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+    res->image = pixman_image_create_bits(pformat,
+                                          c2d.width,
+                                          c2d.height,
+                                          NULL, 0);
+    if (!res->image) {
+        g_critical("%s: resource creation failed %d %d %d",
+                   __func__, c2d.resource_id, c2d.width, c2d.height);
+        g_free(res);
+        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+        return;
+    }
+
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+vg_resource_destroy(VuGpu *g,
+                    struct virtio_gpu_simple_resource *res)
+{
+    pixman_image_unref(res->image);
+    QTAILQ_REMOVE(&g->reslist, res, next);
+    g_free(res);
+}
+
+static void
+vg_resource_unref(VuGpu *g,
+                  struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_unref unref;
+
+    VUGPU_FILL_CMD(unref);
+
+    res = virtio_gpu_find_resource(g, unref.resource_id);
+    if (!res) {
+        g_critical("%s: illegal resource specified %d",
+                   __func__, unref.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+    vg_resource_destroy(g, res);
+}
+
+int
+vg_create_mapping_iov(VuGpu *g,
+                      struct virtio_gpu_resource_attach_backing *ab,
+                      struct virtio_gpu_ctrl_command *cmd,
+                      struct iovec **iov)
+{
+    struct virtio_gpu_mem_entry *ents;
+    size_t esize, s;
+    int i;
+
+    if (ab->nr_entries > 16384) {
+        g_critical("%s: nr_entries is too big (%d > 16384)",
+                   __func__, ab->nr_entries);
+        return -1;
+    }
+
+    esize = sizeof(*ents) * ab->nr_entries;
+    ents = g_malloc(esize);
+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
+                   sizeof(*ab), ents, esize);
+    if (s != esize) {
+        g_critical("%s: command data size incorrect %zu vs %zu",
+                   __func__, s, esize);
+        g_free(ents);
+        return -1;
+    }
+
+    *iov = g_malloc0(sizeof(struct iovec) * ab->nr_entries);
+    for (i = 0; i < ab->nr_entries; i++) {
+        (*iov)[i].iov_len = ents[i].length;
+        (*iov)[i].iov_base = vu_gpa_to_va(&g->dev, ents[i].addr);
+        if (!(*iov)[i].iov_base) {
+            g_critical("%s: resource %d element %d",
+                       __func__, ab->resource_id, i);
+            g_free(*iov);
+            g_free(ents);
+            *iov = NULL;
+            return -1;
+        }
+    }
+    g_free(ents);
+    return 0;
+}
+
+static void
+vg_resource_attach_backing(VuGpu *g,
+                           struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_attach_backing ab;
+    int ret;
+
+    VUGPU_FILL_CMD(ab);
+
+    res = virtio_gpu_find_resource(g, ab.resource_id);
+    if (!res) {
+        g_critical("%s: illegal resource specified %d",
+                   __func__, ab.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    ret = vg_create_mapping_iov(g, &ab, cmd, &res->iov);
+    if (ret != 0) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+        return;
+    }
+
+    res->iov_cnt = ab.nr_entries;
+}
+
+static void
+vg_resource_detach_backing(VuGpu *g,
+                           struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_detach_backing detach;
+
+    VUGPU_FILL_CMD(detach);
+
+    res = virtio_gpu_find_resource(g, detach.resource_id);
+    if (!res || !res->iov) {
+        g_critical("%s: illegal resource specified %d",
+                   __func__, detach.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    g_free(res->iov);
+    res->iov = NULL;
+    res->iov_cnt = 0;
+}
+
+static void
+vg_transfer_to_host_2d(VuGpu *g,
+                       struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    int h;
+    uint32_t src_offset, dst_offset, stride;
+    int bpp;
+    pixman_format_code_t format;
+    struct virtio_gpu_transfer_to_host_2d t2d;
+
+    VUGPU_FILL_CMD(t2d);
+
+    res = virtio_gpu_find_resource(g, t2d.resource_id);
+    if (!res || !res->iov) {
+        g_critical("%s: illegal resource specified %d",
+                   __func__, t2d.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    if (t2d.r.x > res->width ||
+        t2d.r.y > res->height ||
+        t2d.r.width > res->width ||
+        t2d.r.height > res->height ||
+        t2d.r.x + t2d.r.width > res->width ||
+        t2d.r.y + t2d.r.height > res->height) {
+        g_critical("%s: transfer bounds outside resource"
+                   " bounds for resource %d: %d %d %d %d vs %d %d",
+                   __func__, t2d.resource_id, t2d.r.x, t2d.r.y,
+                   t2d.r.width, t2d.r.height, res->width, res->height);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    format = pixman_image_get_format(res->image);
+    bpp = (PIXMAN_FORMAT_BPP(format) + 7) / 8;
+    stride = pixman_image_get_stride(res->image);
+
+    if (t2d.offset || t2d.r.x || t2d.r.y ||
+        t2d.r.width != pixman_image_get_width(res->image)) {
+        void *img_data = pixman_image_get_data(res->image);
+        for (h = 0; h < t2d.r.height; h++) {
+            src_offset = t2d.offset + stride * h;
+            dst_offset = (t2d.r.y + h) * stride + (t2d.r.x * bpp);
+
+            iov_to_buf(res->iov, res->iov_cnt, src_offset,
+                       (uint8_t *)img_data
+                       + dst_offset, t2d.r.width * bpp);
+        }
+    } else {
+        iov_to_buf(res->iov, res->iov_cnt, 0,
+                   pixman_image_get_data(res->image),
+                   pixman_image_get_stride(res->image)
+                   * pixman_image_get_height(res->image));
+    }
+}
+
+static void
+vg_set_scanout(VuGpu *g,
+               struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_scanout *scanout;
+    struct virtio_gpu_set_scanout ss;
+
+    VUGPU_FILL_CMD(ss);
+    DPRINT("set scanout %d:%d\n", ss.scanout_id, ss.resource_id);
+
+    if (ss.scanout_id >= VIRTIO_GPU_MAX_SCANOUTS) {
+        g_critical("%s: illegal scanout id specified %d",
+                   __func__, ss.scanout_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
+        return;
+    }
+
+    if (ss.resource_id == 0) {
+        scanout = &g->scanout[ss.scanout_id];
+        if (scanout->resource_id) {
+            res = virtio_gpu_find_resource(g, scanout->resource_id);
+            if (res) {
+                res->scanout_bitmask &= ~(1 << ss.scanout_id);
+            }
+        }
+        if (ss.scanout_id == 0) {
+            g_critical("%s: disabling scanout 0 is not allowed", __func__);
+            cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
+            return;
+        }
+        /* dpy_gfx_replace_surface(g->scanout[ss.scanout_id].con, NULL); */
+        scanout->width = 0;
+        scanout->height = 0;
+        return;
+    }
+
+    /* create a surface for this scanout */
+    res = virtio_gpu_find_resource(g, ss.resource_id);
+    if (!res) {
+        g_critical("%s: illegal resource specified %d",
+                   __func__, ss.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    if (ss.r.x > res->width ||
+        ss.r.y > res->height ||
+        ss.r.width > res->width ||
+        ss.r.height > res->height ||
+        ss.r.x + ss.r.width > res->width ||
+        ss.r.y + ss.r.height > res->height) {
+        g_critical("%s: illegal scanout %d bounds for"
+                   " resource %d, (%d,%d)+%d,%d vs %d %d",
+                   __func__, ss.scanout_id, ss.resource_id, ss.r.x, ss.r.y,
+                   ss.r.width, ss.r.height, res->width, res->height);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    scanout = &g->scanout[ss.scanout_id];
+
+    res->scanout_bitmask |= (1 << ss.scanout_id);
+    scanout->resource_id = ss.resource_id;
+    scanout->x = ss.r.x;
+    scanout->y = ss.r.y;
+    scanout->width = ss.r.width;
+    scanout->height = ss.r.height;
+
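+    /* tell the qemu side about the new scanout size over the update socket */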
+    VhostGpuMsg msg = {
+        .request = VHOST_GPU_SCANOUT,
+        .size = sizeof(VhostGpuScanout),
+        .payload.scanout.scanout_id = ss.scanout_id,
+        .payload.scanout.width = scanout->width,
+        .payload.scanout.height = scanout->height
+    };
+    vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, -1);
+}
+
+static void
+vg_resource_flush(VuGpu *g,
+                  struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_flush rf;
+    pixman_region16_t flush_region;
+    int i;
+
+    VUGPU_FILL_CMD(rf);
+
+    res = virtio_gpu_find_resource(g, rf.resource_id);
+    if (!res) {
+        g_critical("%s: illegal resource specified %d\n",
+                   __func__, rf.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    if (rf.r.x > res->width ||
+        rf.r.y > res->height ||
+        rf.r.width > res->width ||
+        rf.r.height > res->height ||
+        rf.r.x + rf.r.width > res->width ||
+        rf.r.y + rf.r.height > res->height) {
+        g_critical("%s: flush bounds outside resource"
+                   " bounds for resource %d: %d %d %d %d vs %d %d\n",
+                   __func__, rf.resource_id, rf.r.x, rf.r.y,
+                   rf.r.width, rf.r.height, res->width, res->height);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        return;
+    }
+
+    pixman_region_init_rect(&flush_region,
+                            rf.r.x, rf.r.y, rf.r.width, rf.r.height);
+    for (i = 0; i < VIRTIO_GPU_MAX_SCANOUTS; i++) {
+        struct virtio_gpu_scanout *scanout;
+        pixman_region16_t region, finalregion;
+        pixman_box16_t *extents;
+
+        if (!(res->scanout_bitmask & (1 << i))) {
+            continue;
+        }
+        scanout = &g->scanout[i];
+
+        pixman_region_init(&finalregion);
+        pixman_region_init_rect(&region, scanout->x, scanout->y,
+                                scanout->width, scanout->height);
+
+        pixman_region_intersect(&finalregion, &flush_region, &region);
+
+        extents = pixman_region_extents(&finalregion);
+        size_t width = extents->x2 - extents->x1;
+        size_t height = extents->y2 - extents->y1;
+        size_t bpp = PIXMAN_FORMAT_BPP(pixman_image_get_format(res->image)) / 8;
+        size_t size = width * height * bpp;
+
+        VhostGpuMsg *msg = g_malloc(VHOST_GPU_HDR_SIZE +
+                                    sizeof(VhostGpuUpdate) + size);
+        msg->request = VHOST_GPU_UPDATE;
+        msg->size = sizeof(VhostGpuUpdate) + size;
+        msg->payload.update.scanout_id = i;
+        msg->payload.update.x = extents->x1;
+        msg->payload.update.y = extents->y1;
+        msg->payload.update.width = width;
+        msg->payload.update.height = height;
+        /* copy the intersected region out of the resource into the message;
+         * named 'img' so it doesn't shadow the scanout index 'i' above */
+        pixman_image_t *img =
+            pixman_image_create_bits(pixman_image_get_format(res->image),
+                                     msg->payload.update.width,
+                                     msg->payload.update.height,
+                                     (uint32_t *)msg->payload.update.data,
+                                     width * bpp);
+        pixman_image_composite(PIXMAN_OP_SRC,
+                               res->image, NULL, img,
+                               extents->x1, extents->y1,
+                               0, 0, 0, 0,
+                               width, height);
+        pixman_image_unref(img);
+        vg_sock_fd_write(g->sock_fd, msg, VHOST_GPU_HDR_SIZE + msg->size, -1);
+        g_free(msg);
+
+        pixman_region_fini(&region);
+        pixman_region_fini(&finalregion);
+    }
+    pixman_region_fini(&flush_region);
+}
+
+static void
+vg_process_cmd(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd)
+{
+    switch (cmd->cmd_hdr.type) {
+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
+        vg_get_display_info(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
+        vg_resource_create_2d(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
+        vg_resource_unref(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
+        vg_resource_flush(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
+        vg_transfer_to_host_2d(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_SET_SCANOUT:
+        vg_set_scanout(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
+        vg_resource_attach_backing(vg, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
+        vg_resource_detach_backing(vg, cmd);
+        break;
+    default:
+        DPRINT("TODO handle ctrl %x\n", cmd->cmd_hdr.type);
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+        break;
+    }
+    if (!cmd->finished) {
+        vg_ctrl_response_nodata(vg, cmd, cmd->error ? cmd->error :
+                                VIRTIO_GPU_RESP_OK_NODATA);
+    }
+}
+
+static void
+vg_handle_ctrl(VuDev *dev, int qidx)
+{
+    VuGpu *vg = container_of(dev, VuGpu, dev);
+    VuVirtq *vq = vu_get_queue(dev, qidx);
+    struct virtio_gpu_ctrl_command *cmd = NULL;
+    size_t len;
+
+    DPRINT("%s\n", __func__);
+
+    for (;;) {
+        cmd = vu_queue_pop(dev, vq, sizeof(struct virtio_gpu_ctrl_command));
+        if (!cmd) {
+            break;
+        }
+        cmd->vq = vq;
+        cmd->error = 0;
+        cmd->finished = false;
+
+        len = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
+                         0, &cmd->cmd_hdr, sizeof(cmd->cmd_hdr));
+        if (len != sizeof(cmd->cmd_hdr)) {
+            g_warning("%s: command size incorrect %zu vs %zu\n",
+                      __func__, len, sizeof(cmd->cmd_hdr));
+        }
+
+        DPRINT("%d %s\n", cmd->cmd_hdr.type,
+               vg_cmd_to_string(cmd->cmd_hdr.type));
+
+        if (vg->virgl) {
+            vg_virgl_process_cmd(vg, cmd);
+        } else {
+            vg_process_cmd(vg, cmd);
+        }
+
+        if (!cmd->finished) {
+            QTAILQ_INSERT_TAIL(&vg->fenceq, cmd, next);
+            vg->inflight++;
+        } else {
+            g_free(cmd);
+        }
+    }
+}
+
+static void
+update_cursor_data_simple(VuGpu *g, uint32_t resource_id, gpointer data)
+{
+    struct virtio_gpu_simple_resource *res;
+
+    res = virtio_gpu_find_resource(g, resource_id);
+    g_return_if_fail(res != NULL);
+    g_return_if_fail(pixman_image_get_width(res->image) == 64);
+    g_return_if_fail(pixman_image_get_height(res->image) == 64);
+    g_return_if_fail(
+        PIXMAN_FORMAT_BPP(pixman_image_get_format(res->image)) == 32);
+
+    memcpy(data, pixman_image_get_data(res->image), 64 * 64 * sizeof(uint32_t));
+}
+
+static void
+vg_handle_cursor(VuDev *dev, int qidx)
+{
+    VuGpu *g = container_of(dev, VuGpu, dev);
+    VuVirtq *vq = vu_get_queue(dev, qidx);
+    VuVirtqElement *elem;
+    size_t len;
+    struct virtio_gpu_update_cursor cursor;
+
+    for (;;) {
+        elem = vu_queue_pop(dev, vq, sizeof(VuVirtqElement));
+        if (!elem) {
+            break;
+        }
+        DPRINT("cursor out:%d in:%d\n", elem->out_num, elem->in_num);
+
+        len = iov_to_buf(elem->out_sg, elem->out_num,
+                         0, &cursor, sizeof(cursor));
+        if (len != sizeof(cursor)) {
+            g_warning("%s: cursor size incorrect %zu vs %zu\n",
+                      __func__, len, sizeof(cursor));
+        }
+        bool move = cursor.hdr.type == VIRTIO_GPU_CMD_MOVE_CURSOR;
+        DPRINT("%s move:%d\n", __func__, move);
+
+        if (move) {
+            VhostGpuMsg msg = {
+                .request = cursor.resource_id ?
+                    VHOST_GPU_CURSOR_POS : VHOST_GPU_CURSOR_POS_HIDE,
+                .size = sizeof(VhostGpuCursorPos),
+                .payload.cursor_pos = {
+                    .scanout_id = cursor.pos.scanout_id,
+                    .x = cursor.pos.x,
+                    .y = cursor.pos.y,
+                }
+            };
+            vg_sock_fd_write(g->sock_fd, &msg,
+                             VHOST_GPU_HDR_SIZE + msg.size, -1);
+        } else {
+            VhostGpuMsg msg = {
+                .request = VHOST_GPU_CURSOR_UPDATE,
+                .size = sizeof(VhostGpuCursorUpdate),
+                .payload.cursor_update = {
+                    .pos = {
+                        .scanout_id = cursor.pos.scanout_id,
+                        .x = cursor.pos.x,
+                        .y = cursor.pos.y,
+                    },
+                    .hot_x = cursor.hot_x,
+                    .hot_y = cursor.hot_y,
+                }
+            };
+            if (g->virgl) {
+                vg_virgl_update_cursor_data(g, cursor.resource_id,
+                                            msg.payload.cursor_update.data);
+            } else {
+                update_cursor_data_simple(g, cursor.resource_id,
+                                          msg.payload.cursor_update.data);
+            }
+            vg_sock_fd_write(g->sock_fd, &msg,
+                             VHOST_GPU_HDR_SIZE + msg.size, -1);
+        }
+
+        vu_queue_push(dev, vq, elem, 0);
+        vu_queue_notify(&g->dev, vq);
+        g_free(elem);
+    }
+}
+
+static void
+vg_panic(VuDev *dev, const char *msg)
+{
+    g_critical("%s\n", msg);
+    exit(1);
+}
+
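+/* glue a libvhost-user fd watch into the GLib main loop as a GSource */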
+typedef struct Watch {
+    GSource       source;
+    GIOCondition  condition;
+    gpointer      tag;
+    VuDev        *dev;
+    guint         id;
+} Watch;
+
+static GIOCondition
+vu_to_gio_condition(int condition)
+{
+    return (condition & VU_WATCH_IN ? G_IO_IN : 0) |
+           (condition & VU_WATCH_OUT ? G_IO_OUT : 0) |
+           (condition & VU_WATCH_PRI ? G_IO_PRI : 0) |
+           (condition & VU_WATCH_ERR ? G_IO_ERR : 0) |
+           (condition & VU_WATCH_HUP ? G_IO_HUP : 0);
+}
+
+static GIOCondition
+vu_from_gio_condition(int condition)
+{
+    return (condition & G_IO_IN ? VU_WATCH_IN : 0) |
+           (condition & G_IO_OUT ? VU_WATCH_OUT : 0) |
+           (condition & G_IO_PRI ? VU_WATCH_PRI : 0) |
+           (condition & G_IO_ERR ? VU_WATCH_ERR : 0) |
+           (condition & G_IO_HUP ? VU_WATCH_HUP : 0);
+}
+
+static gboolean
+watch_check(GSource *source)
+{
+    Watch *watch = (Watch *)source;
+    GIOCondition poll_condition = g_source_query_unix_fd(source, watch->tag);
+
+    return poll_condition & watch->condition;
+}
+
+static gboolean
+watch_dispatch(GSource *source,
+               GSourceFunc callback,
+               gpointer user_data)
+{
+    vu_watch_cb func = (vu_watch_cb)callback;
+    Watch *watch = (Watch *)source;
+    GIOCondition poll_condition = g_source_query_unix_fd(source, watch->tag);
+    int cond = vu_from_gio_condition(poll_condition & watch->condition);
+
+    (*func) (watch->dev, cond, user_data);
+
+    return G_SOURCE_CONTINUE;
+}
+
+static GSourceFuncs watch_funcs = {
+    .check = watch_check,
+    .dispatch = watch_dispatch,
+};
+
+void
+vg_set_fd_handler(VuDev *dev, int fd, GIOCondition condition,
+                  vu_watch_cb cb, void *data)
+{
+    VuGpu *vg = container_of(dev, VuGpu, dev);
+    Watch *watch;
+    GSource *s;
+
+    g_assert_cmpint(fd, <, G_N_ELEMENTS(vg->watches));
+
+    s = vg->watches[fd];
+    if (cb) {
+        if (!s) {
+            s = g_source_new(&watch_funcs, sizeof(Watch));
+            watch = (Watch *)s;
+            watch->dev = dev;
+            watch->condition = condition;
+            watch->tag =
+                g_source_add_unix_fd(s, fd, condition);
+            watch->id = g_source_attach(s, NULL);
+            vg->watches[fd] = s;
+        } else {
+            watch = (Watch *)s;
+            g_source_modify_unix_fd(s, watch->tag, condition);
+        }
+
+        g_source_set_callback(s, (GSourceFunc)cb, data, NULL);
+    } else if (s) {
+        watch = (Watch *)s;
+        g_source_remove_unix_fd(s, watch->tag);
+        g_source_unref(s);
+        g_source_remove(watch->id);
+        vg->watches[fd] = NULL;
+    }
+}
+
+static void
+vg_add_watch(VuDev *dev, int fd, int condition,
+             vu_watch_cb cb, void *data)
+{
+    vg_set_fd_handler(dev, fd, vu_to_gio_condition(condition), cb, data);
+}
+
+static void
+vg_remove_watch(VuDev *dev, int fd)
+{
+    vg_set_fd_handler(dev, fd, 0, NULL, NULL);
+}
+
+static void
+vg_queue_set_started(VuDev *dev, int qidx, bool started)
+{
+    VuVirtq *vq = vu_get_queue(dev, qidx);
+
+    DPRINT("queue started %d:%d\n", qidx, started);
+
+    switch (qidx) {
+    case 0:
+        vu_set_queue_handler(dev, vq, started ? vg_handle_ctrl : NULL);
+        break;
+    case 1:
+        vu_set_queue_handler(dev, vq, started ? vg_handle_cursor : NULL);
+        break;
+    default:
+        break;
+    }
+}
+
+static int
+vg_process_msg(VuDev *dev, VhostUserMsg *msg, int *do_reply)
+{
+    VuGpu *g = container_of(dev, VuGpu, dev);
+
+    switch (msg->request) {
+    case VHOST_USER_GPU_SET_SOCKET:
+        g_return_val_if_fail(msg->fd_num == 1, 1);
+        g_return_val_if_fail(g->sock_fd == -1, 1);
+        g->sock_fd = msg->fds[0];
+        return 1;
+    default:
+        return 0;
+    }
+}
+
+static void
+vg_set_features(VuDev *dev, uint64_t features)
+{
+    VuGpu *g = container_of(dev, VuGpu, dev);
+    bool virgl = features & (1 << VIRTIO_GPU_F_VIRGL);
+
+    if (virgl && !g->virgl_inited) {
+        vg_virgl_init(g);
+        g->virgl_inited = true;
+    }
+
+    g->virgl = virgl;
+}
+
+static const VuDevIface vuiface = {
+    .set_features = vg_set_features,
+    .queue_set_started = vg_queue_set_started,
+    .process_msg = vg_process_msg,
+};
+
+static void
+vg_vhost_watch(VuDev *dev, int condition, void *data)
+{
+    vu_dispatch(dev);
+}
+
+static void
+vg_reset(VuGpu *g)
+{
+    struct virtio_gpu_simple_resource *res, *tmp;
+
+    vu_deinit(&g->dev);
+
+    if (g->sock_fd != -1) {
+        close(g->sock_fd);
+        g->sock_fd = -1;
+    }
+
+    QTAILQ_FOREACH_SAFE(res, &g->reslist, next, tmp) {
+        vg_resource_destroy(g, res);
+    }
+}
+
+int
+main(int argc, char *argv[])
+{
+    GMainLoop *loop = NULL;
+    VuGpu g = { .sock_fd = -1 };
+
+    QTAILQ_INIT(&g.reslist);
+    QTAILQ_INIT(&g.fenceq);
+
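+    /* the vhost-user socket is inherited from qemu as fd 3 */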
+    vu_init(&g.dev, 3, vg_panic, vg_add_watch, vg_remove_watch, &vuiface);
+    vg_set_fd_handler(&g.dev, 3, G_IO_IN | G_IO_HUP, vg_vhost_watch, NULL);
+
+    loop = g_main_loop_new(NULL, FALSE);
+    g_main_loop_run(loop);
+    g_main_loop_unref(loop);
+
+    vg_reset(&g);
+
+    return 0;
+}
diff --git a/contrib/vhost-user-gpu/virgl.c b/contrib/vhost-user-gpu/virgl.c
new file mode 100644
index 0000000..a047531
--- /dev/null
+++ b/contrib/vhost-user-gpu/virgl.c
@@ -0,0 +1,545 @@
+/*
+ * Virtio vhost-user GPU Device
+ *
+ * Copyright Red Hat, Inc. 2013-2016
+ *
+ * Authors:
+ *     Dave Airlie <airlied@redhat.com>
+ *     Gerd Hoffmann <kraxel@redhat.com>
+ *     Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include <virglrenderer.h>
+#include "virgl.h"
+
+#define VG_DEBUG 0
+
+#define DPRINT(...)                             \
+    do {                                        \
+        if (VG_DEBUG) {                         \
+            fprintf(stderr, __VA_ARGS__);       \
+        }                                       \
+    } while (0)
+
+void
+vg_virgl_update_cursor_data(VuGpu *g, uint32_t resource_id,
+                            gpointer data)
+{
+    uint32_t width, height;
+    uint32_t *cursor;
+
+    cursor = virgl_renderer_get_cursor_data(resource_id, &width, &height);
+    g_return_if_fail(cursor != NULL);
+    g_return_if_fail(width == 64);
+    g_return_if_fail(height == 64);
+
+    memcpy(data, cursor, 64 * 64 * sizeof(uint32_t));
+    free(cursor);
+}
+
+static void
+virgl_cmd_context_create(VuGpu *g,
+                         struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_ctx_create cc;
+
+    VUGPU_FILL_CMD(cc);
+
+    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen,
+                                  cc.debug_name);
+}
+
+static void
+virgl_cmd_context_destroy(VuGpu *g,
+                          struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_ctx_destroy cd;
+
+    VUGPU_FILL_CMD(cd);
+
+    virgl_renderer_context_destroy(cd.hdr.ctx_id);
+}
+
+static void
+virgl_cmd_create_resource_2d(VuGpu *g,
+                             struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_create_2d c2d;
+    struct virgl_renderer_resource_create_args args;
+
+    VUGPU_FILL_CMD(c2d);
+
+    args.handle = c2d.resource_id;
+    args.target = 2;
+    args.format = c2d.format;
+    args.bind = (1 << 1);
+    args.width = c2d.width;
+    args.height = c2d.height;
+    args.depth = 1;
+    args.array_size = 1;
+    args.last_level = 0;
+    args.nr_samples = 0;
+    args.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
+    virgl_renderer_resource_create(&args, NULL, 0);
+}
+
+static void
+virgl_cmd_create_resource_3d(VuGpu *g,
+                             struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_create_3d c3d;
+    struct virgl_renderer_resource_create_args args;
+
+    VUGPU_FILL_CMD(c3d);
+
+    args.handle = c3d.resource_id;
+    args.target = c3d.target;
+    args.format = c3d.format;
+    args.bind = c3d.bind;
+    args.width = c3d.width;
+    args.height = c3d.height;
+    args.depth = c3d.depth;
+    args.array_size = c3d.array_size;
+    args.last_level = c3d.last_level;
+    args.nr_samples = c3d.nr_samples;
+    args.flags = c3d.flags;
+    virgl_renderer_resource_create(&args, NULL, 0);
+}
+
+static void
+virgl_cmd_resource_unref(VuGpu *g,
+                         struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_unref unref;
+
+    VUGPU_FILL_CMD(unref);
+
+    virgl_renderer_resource_unref(unref.resource_id);
+}
+
+static void
+virgl_cmd_get_capset_info(VuGpu *g,
+                          struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_get_capset_info info;
+    struct virtio_gpu_resp_capset_info resp;
+
+    DPRINT("%s\n", __func__);
+    VUGPU_FILL_CMD(info);
+
+    /* zero the response so uninitialized stack bytes don't reach the guest */
+    memset(&resp, 0, sizeof(resp));
+    if (info.capset_index == 0) {
+        resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL;
+        virgl_renderer_get_cap_set(resp.capset_id,
+                                   &resp.capset_max_version,
+                                   &resp.capset_max_size);
+    } else {
+        resp.capset_max_version = 0;
+        resp.capset_max_size = 0;
+    }
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
+    vg_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+virgl_cmd_get_capset(VuGpu *g,
+                     struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_get_capset gc;
+    struct virtio_gpu_resp_capset *resp;
+    uint32_t max_ver, max_size;
+
+    VUGPU_FILL_CMD(gc);
+
+    virgl_renderer_get_cap_set(gc.capset_id, &max_ver,
+                               &max_size);
+    resp = g_malloc0(sizeof(*resp) + max_size);
+
+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
+    virgl_renderer_fill_caps(gc.capset_id,
+                             gc.capset_version,
+                             (void *)resp->capset_data);
+    vg_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + max_size);
+    g_free(resp);
+}
+
+static void
+virgl_cmd_submit_3d(VuGpu *g,
+                    struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_cmd_submit cs;
+    void *buf;
+    size_t s;
+
+    VUGPU_FILL_CMD(cs);
+
+    buf = g_malloc(cs.size);
+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
+                   sizeof(cs), buf, cs.size);
+    if (s != cs.size) {
+        g_critical("%s: size mismatch (%zu/%d)", __func__, s, cs.size);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+        goto out;
+    }
+
+    virgl_renderer_submit_cmd(buf, cs.hdr.ctx_id, cs.size / 4);
+
+out:
+    g_free(buf);
+}
+
+static void
+virgl_cmd_transfer_to_host_2d(VuGpu *g,
+                              struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_transfer_to_host_2d t2d;
+    struct virtio_gpu_box box;
+
+    VUGPU_FILL_CMD(t2d);
+
+    box.x = t2d.r.x;
+    box.y = t2d.r.y;
+    box.z = 0;
+    box.w = t2d.r.width;
+    box.h = t2d.r.height;
+    box.d = 1;
+
+    virgl_renderer_transfer_write_iov(t2d.resource_id,
+                                      0,
+                                      0,
+                                      0,
+                                      0,
+                                      (struct virgl_box *)&box,
+                                      t2d.offset, NULL, 0);
+}
+
+static void
+virgl_cmd_transfer_to_host_3d(VuGpu *g,
+                              struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_transfer_host_3d t3d;
+
+    VUGPU_FILL_CMD(t3d);
+
+    virgl_renderer_transfer_write_iov(t3d.resource_id,
+                                      t3d.hdr.ctx_id,
+                                      t3d.level,
+                                      t3d.stride,
+                                      t3d.layer_stride,
+                                      (struct virgl_box *)&t3d.box,
+                                      t3d.offset, NULL, 0);
+}
+
+static void
+virgl_cmd_transfer_from_host_3d(VuGpu *g,
+                                struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_transfer_host_3d tf3d;
+
+    VUGPU_FILL_CMD(tf3d);
+
+    virgl_renderer_transfer_read_iov(tf3d.resource_id,
+                                     tf3d.hdr.ctx_id,
+                                     tf3d.level,
+                                     tf3d.stride,
+                                     tf3d.layer_stride,
+                                     (struct virgl_box *)&tf3d.box,
+                                     tf3d.offset, NULL, 0);
+}
+
+static void
+virgl_resource_attach_backing(VuGpu *g,
+                              struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_attach_backing att_rb;
+    struct iovec *res_iovs;
+    int ret;
+
+    VUGPU_FILL_CMD(att_rb);
+
+    ret = vg_create_mapping_iov(g, &att_rb, cmd, &res_iovs);
+    if (ret != 0) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+        return;
+    }
+
+    virgl_renderer_resource_attach_iov(att_rb.resource_id,
+                                       res_iovs, att_rb.nr_entries);
+}
+
+static void
+virgl_resource_detach_backing(VuGpu *g,
+                              struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_detach_backing detach_rb;
+    struct iovec *res_iovs = NULL;
+    int num_iovs = 0;
+
+    VUGPU_FILL_CMD(detach_rb);
+
+    virgl_renderer_resource_detach_iov(detach_rb.resource_id,
+                                       &res_iovs,
+                                       &num_iovs);
+    if (res_iovs == NULL || num_iovs == 0) {
+        return;
+    }
+    g_free(res_iovs);
+}
+
+static void
+virgl_cmd_set_scanout(VuGpu *g,
+                      struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_set_scanout ss;
+    struct virgl_renderer_resource_info info;
+    int ret;
+
+    VUGPU_FILL_CMD(ss);
+
+    if (ss.scanout_id >= VIRTIO_GPU_MAX_SCANOUTS) {
+        g_critical("%s: illegal scanout id specified %d",
+                   __func__, ss.scanout_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
+        return;
+    }
+
+    memset(&info, 0, sizeof(info));
+
+    if (ss.resource_id && ss.r.width && ss.r.height) {
+        ret = virgl_renderer_resource_get_info(ss.resource_id, &info);
+        if (ret == -1) {
+            g_critical("%s: illegal resource specified %d\n",
+                       __func__, ss.resource_id);
+            cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+            return;
+        }
+
+        int fd = -1;
+        if (virgl_renderer_get_fd_for_texture(info.tex_id, &fd) < 0) {
+            g_critical("%s: failed to get fd for texture\n", __func__);
+            cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+            return;
+        }
+
+        VhostGpuMsg msg = {
+            .request = VHOST_GPU_GL_SCANOUT,
+            .size = sizeof(VhostGpuGlScanout),
+            .payload.gl_scanout.scanout_id = ss.scanout_id,
+            .payload.gl_scanout.x =  ss.r.x,
+            .payload.gl_scanout.y =  ss.r.y,
+            .payload.gl_scanout.width = ss.r.width,
+            .payload.gl_scanout.height = ss.r.height,
+            .payload.gl_scanout.fd_width = info.width,
+            .payload.gl_scanout.fd_height = info.height,
+            .payload.gl_scanout.fd_stride = info.stride,
+            .payload.gl_scanout.fd_flags = info.flags,
+            .payload.gl_scanout.fd_drm_fourcc = info.drm_fourcc
+        };
+        vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, fd);
+        close(fd);
+    } else {
+        VhostGpuMsg msg = {
+            .request = VHOST_GPU_GL_SCANOUT,
+            .size = sizeof(VhostGpuGlScanout),
+            .payload.gl_scanout.scanout_id = ss.scanout_id,
+        };
+        vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, -1);
+    }
+    g->scanout[ss.scanout_id].resource_id = ss.resource_id;
+}
+
+static void
+virgl_cmd_resource_flush(VuGpu *g,
+                         struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_resource_flush rf;
+    int i, ret;
+    uint32_t ok;
+
+    VUGPU_FILL_CMD(rf);
+
+    for (i = 0; i < VIRTIO_GPU_MAX_SCANOUTS; i++) {
+        if (g->scanout[i].resource_id != rf.resource_id) {
+            continue;
+        }
+        VhostGpuMsg msg = {
+            .request = VHOST_GPU_GL_UPDATE,
+            .size = sizeof(VhostGpuUpdate),
+            .payload.update.scanout_id = i,
+            .payload.update.x = rf.r.x,
+            .payload.update.y = rf.r.y,
+            .payload.update.width = rf.r.width,
+            .payload.update.height = rf.r.height
+        };
+        ret = vg_sock_fd_write(g->sock_fd, &msg,
+                               VHOST_GPU_HDR_SIZE + msg.size, -1);
+        g_return_if_fail(ret == VHOST_GPU_HDR_SIZE + msg.size);
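+        /* wait for qemu to ack the update before continuing */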
+        ret = read(g->sock_fd, &ok, sizeof(ok));
+        g_return_if_fail(ret == sizeof(ok));
+    }
+}
+
+static void
+virgl_cmd_ctx_attach_resource(VuGpu *g,
+                              struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_ctx_resource att_res;
+
+    VUGPU_FILL_CMD(att_res);
+
+    virgl_renderer_ctx_attach_resource(att_res.hdr.ctx_id, att_res.resource_id);
+}
+
+static void
+virgl_cmd_ctx_detach_resource(VuGpu *g,
+                              struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_ctx_resource det_res;
+
+    VUGPU_FILL_CMD(det_res);
+
+    virgl_renderer_ctx_detach_resource(det_res.hdr.ctx_id, det_res.resource_id);
+}
+
+void vg_virgl_process_cmd(VuGpu *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    virgl_renderer_force_ctx_0();
+    switch (cmd->cmd_hdr.type) {
+    case VIRTIO_GPU_CMD_CTX_CREATE:
+        virgl_cmd_context_create(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_CTX_DESTROY:
+        virgl_cmd_context_destroy(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
+        virgl_cmd_create_resource_2d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
+        virgl_cmd_create_resource_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_SUBMIT_3D:
+        virgl_cmd_submit_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
+        virgl_cmd_transfer_to_host_2d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
+        virgl_cmd_transfer_to_host_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
+        virgl_cmd_transfer_from_host_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
+        virgl_resource_attach_backing(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
+        virgl_resource_detach_backing(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_SET_SCANOUT:
+        virgl_cmd_set_scanout(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
+        virgl_cmd_resource_flush(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
+        virgl_cmd_resource_unref(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
+        /* TODO add security */
+        virgl_cmd_ctx_attach_resource(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
+        /* TODO add security */
+        virgl_cmd_ctx_detach_resource(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
+        virgl_cmd_get_capset_info(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_CAPSET:
+        virgl_cmd_get_capset(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
+        vg_get_display_info(g, cmd);
+        break;
+    default:
+        DPRINT("TODO handle ctrl %x\n", cmd->cmd_hdr.type);
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+        break;
+    }
+
+    if (cmd->finished) {
+        return;
+    }
+
+    if (cmd->error) {
+        g_warning("%s: ctrl 0x%x, error 0x%x\n", __func__,
+                  cmd->cmd_hdr.type, cmd->error);
+        vg_ctrl_response_nodata(g, cmd, cmd->error);
+        return;
+    }
+
+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
+        vg_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+        return;
+    }
+
+    DPRINT("Creating fence id:%" PRIu64 " type:%d\n",
+           cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
+    virgl_renderer_create_fence(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
+}
+
+static void
+virgl_write_fence(void *opaque, uint32_t fence)
+{
+    VuGpu *g = opaque;
+    struct virtio_gpu_ctrl_command *cmd, *tmp;
+
+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
+        /*
+         * the guest can end up emitting fences out of order
+         * so we should check all fenced cmds not just the first one.
+         */
+        if (cmd->cmd_hdr.fence_id > fence) {
+            continue;
+        }
+        DPRINT("FENCE %" PRIu64 "\n", cmd->cmd_hdr.fence_id);
+        vg_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
+        g_free(cmd);
+        g->inflight--;
+    }
+}
+
+static struct virgl_renderer_callbacks virgl_cbs = {
+    .version     = 1,
+    .write_fence = virgl_write_fence,
+};
+
+static void
+vg_virgl_poll(VuDev *dev, int condition, void *data)
+{
+    virgl_renderer_poll();
+}
+
+int
+vg_virgl_init(VuGpu *g)
+{
+    int ret;
+
+    ret = virgl_renderer_init(g,
+                              VIRGL_RENDERER_USE_EGL |
+                              VIRGL_RENDERER_THREAD_SYNC,
+                              &virgl_cbs);
+    if (ret != 0) {
+        return ret;
+    }
+
+    ret = virgl_renderer_get_poll_fd();
+    if (ret != -1) {
+        vg_set_fd_handler(&g->dev, ret, G_IO_IN, vg_virgl_poll, g);
+    }
+
+    return 0;
+}
diff --git a/contrib/vhost-user-gpu/virgl.h b/contrib/vhost-user-gpu/virgl.h
new file mode 100644
index 0000000..76402d4
--- /dev/null
+++ b/contrib/vhost-user-gpu/virgl.h
@@ -0,0 +1,24 @@
+/*
+ * Virtio vhost-user GPU Device
+ *
+ * Copyright Red Hat, Inc. 2013-2016
+ *
+ * Authors:
+ *     Dave Airlie <airlied@redhat.com>
+ *     Gerd Hoffmann <kraxel@redhat.com>
+ *     Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef VUGPU_VIRGL_H_
+#define VUGPU_VIRGL_H_
+
+#include "vugpu.h"
+
+int vg_virgl_init(VuGpu *g);
+void vg_virgl_process_cmd(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd);
+void vg_virgl_update_cursor_data(VuGpu *g, uint32_t resource_id,
+                                 gpointer data);
+
+#endif
diff --git a/contrib/vhost-user-gpu/vugpu.h b/contrib/vhost-user-gpu/vugpu.h
new file mode 100644
index 0000000..5d718fd
--- /dev/null
+++ b/contrib/vhost-user-gpu/vugpu.h
@@ -0,0 +1,155 @@
+/*
+ * Virtio vhost-user GPU Device
+ *
+ * Copyright Red Hat, Inc. 2013-2016
+ *
+ * Authors:
+ *     Dave Airlie <airlied@redhat.com>
+ *     Gerd Hoffmann <kraxel@redhat.com>
+ *     Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef VUGPU_H_
+#define VUGPU_H_
+
+#include "contrib/libvhost-user/libvhost-user.h"
+#include "standard-headers/linux/virtio_gpu.h"
+
+#include "qemu/osdep.h"
+#include "qemu/queue.h"
+#include "qemu/iov.h"
+#include "qemu/bswap.h"
+
+typedef enum VhostGpuRequest {
+    VHOST_GPU_NONE = 0,
+    VHOST_GPU_CURSOR_POS,
+    VHOST_GPU_CURSOR_POS_HIDE,
+    VHOST_GPU_CURSOR_UPDATE,
+    VHOST_GPU_SCANOUT,
+    VHOST_GPU_UPDATE,
+    VHOST_GPU_GL_SCANOUT,
+    VHOST_GPU_GL_UPDATE,
+} VhostGpuRequest;
+
+typedef struct VhostGpuCursorPos {
+    uint32_t scanout_id;
+    uint32_t x;
+    uint32_t y;
+} VhostGpuCursorPos;
+
+typedef struct VhostGpuCursorUpdate {
+    VhostGpuCursorPos pos;
+    uint32_t hot_x;
+    uint32_t hot_y;
+    uint32_t data[64 * 64];
+} VhostGpuCursorUpdate;
+
+typedef struct VhostGpuScanout {
+    uint32_t scanout_id;
+    uint32_t width;
+    uint32_t height;
+} VhostGpuScanout;
+
+typedef struct VhostGpuUpdate {
+    uint32_t scanout_id;
+    uint32_t x;
+    uint32_t y;
+    uint32_t width;
+    uint32_t height;
+    uint8_t data[];
+} VhostGpuUpdate;
+
+typedef struct VhostGpuGlScanout {
+    uint32_t scanout_id;
+    uint32_t x;
+    uint32_t y;
+    uint32_t width;
+    uint32_t height;
+    uint32_t fd_width;
+    uint32_t fd_height;
+    uint32_t fd_stride;
+    uint32_t fd_flags;
+    int fd_drm_fourcc;
+} VhostGpuGlScanout;
+
+typedef struct VhostGpuMsg {
+    VhostGpuRequest request;
+    uint32_t size; /* the following payload size */
+    union {
+        VhostGpuCursorPos cursor_pos;
+        VhostGpuCursorUpdate cursor_update;
+        VhostGpuScanout scanout;
+        VhostGpuUpdate update;
+        VhostGpuGlScanout gl_scanout;
+    } payload;
+} QEMU_PACKED VhostGpuMsg;
+
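+/* the wire header is just the request and size fields; payload follows */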
+static VhostGpuMsg m __attribute__ ((unused));
+#define VHOST_GPU_HDR_SIZE (sizeof(m.request) + sizeof(m.size))
+
+struct virtio_gpu_scanout {
+    uint32_t width, height;
+    int x, y;
+    int invalidate;
+    uint32_t resource_id;
+};
+
+typedef struct VuGpu {
+    VuDev dev;
+    int sock_fd;
+    GSource *watches[16];
+
+    bool virgl;
+    bool virgl_inited;
+    uint32_t inflight;
+
+    struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
+    QTAILQ_HEAD(, virtio_gpu_simple_resource) reslist;
+    QTAILQ_HEAD(, virtio_gpu_ctrl_command) fenceq;
+} VuGpu;
+
+struct virtio_gpu_ctrl_command {
+    VuVirtqElement elem;
+    VuVirtq *vq;
+    struct virtio_gpu_ctrl_hdr cmd_hdr;
+    uint32_t error;
+    bool finished;
+    QTAILQ_ENTRY(virtio_gpu_ctrl_command) next;
+};
+
+#define VUGPU_FILL_CMD(out) do {                                \
+        size_t s;                                               \
+        s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, 0,  \
+                       &out, sizeof(out));                      \
+        if (s != sizeof(out)) {                                 \
+            g_critical("%s: command size incorrect %zu vs %zu", \
+                       __func__, s, sizeof(out));               \
+            return;                                             \
+        }                                                       \
+    } while (0)
+
+
+void    vg_ctrl_response(VuGpu *g,
+                         struct virtio_gpu_ctrl_command *cmd,
+                         struct virtio_gpu_ctrl_hdr *resp,
+                         size_t resp_len);
+
+void    vg_ctrl_response_nodata(VuGpu *g,
+                                struct virtio_gpu_ctrl_command *cmd,
+                                enum virtio_gpu_ctrl_type type);
+
+int     vg_create_mapping_iov(VuGpu *g,
+                              struct virtio_gpu_resource_attach_backing *ab,
+                              struct virtio_gpu_ctrl_command *cmd,
+                              struct iovec **iov);
+
+void    vg_get_display_info(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd);
+
+void    vg_set_fd_handler(VuDev *dev, int fd, GIOCondition condition,
+                          vu_watch_cb cb, void *data);
+
+ssize_t vg_sock_fd_write(int sock, void *buf, ssize_t buflen, int fd);
+
+#endif
-- 
2.7.4

* [Qemu-devel] [RFC 13/14] vhost-user: add vhost_user_gpu_set_socket()
  2016-06-04 21:33 [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices marcandre.lureau
                   ` (11 preceding siblings ...)
  2016-06-04 21:33 ` [Qemu-devel] [RFC 12/14] contrib: add vhost-user-gpu marcandre.lureau
@ 2016-06-04 21:33 ` marcandre.lureau
  2016-06-06  6:36   ` Gerd Hoffmann
  2016-06-04 21:33 ` [Qemu-devel] [RFC 14/14] Add virtio-gpu vhost-user backend marcandre.lureau
  2016-06-06 13:54 ` [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices Marc-André Lureau
  14 siblings, 1 reply; 26+ messages in thread
From: marcandre.lureau @ 2016-06-04 21:33 UTC (permalink / raw)
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Add a new vhost-user message to give a unix socket for gpu updates to a
vhost-user backend.
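
To illustrate the intended flow (a sketch only, not part of the patch:
the handler name and fd bookkeeping below are made up, loosely
following the contrib vhost-user-gpu code): qemu sends one end of a
socketpair as ancillary data with this message, and the backend keeps
the fd around for the VHOST_GPU_* updates it will emit later.

/* Backend side, sketched. The fd arrives via SCM_RIGHTS along with
 * the VHOST_USER_GPU_SET_SOCKET message; hypothetical handler. */
static int vg_handle_gpu_set_socket(VuGpu *g, int *fds, int nfds)
{
    if (nfds != 1) {
        return -1;           /* exactly one socket fd is expected */
    }
    g->sock_fd = fds[0];     /* kept for later VHOST_GPU_* messages */
    return 0;
}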

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 contrib/libvhost-user/libvhost-user.h |  1 +
 hw/virtio/vhost-user.c                | 11 +++++++++++
 include/hw/virtio/vhost-backend.h     |  1 +
 3 files changed, 13 insertions(+)

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index 9733b1a..4bcbae3 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -61,6 +61,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_SET_VRING_ENABLE = 18,
     VHOST_USER_SEND_RARP = 19,
     VHOST_USER_INPUT_GET_CONFIG = 20,
+    VHOST_USER_GPU_SET_SOCKET = 21,
     VHOST_USER_MAX
 } VhostUserRequest;
 
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 8072c16..5e6091d 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -59,6 +59,7 @@ typedef enum VhostUserRequest {
     VHOST_USER_SET_VRING_ENABLE = 18,
     VHOST_USER_SEND_RARP = 19,
     VHOST_USER_INPUT_GET_CONFIG = 20,
+    VHOST_USER_GPU_SET_SOCKET = 21,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -263,6 +264,16 @@ err:
     return -1;
 }
 
+int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd)
+{
+    VhostUserMsg msg = {
+        .request = VHOST_USER_GPU_SET_SOCKET,
+        .flags = VHOST_USER_VERSION,
+    };
+
+    return vhost_user_write(dev, &msg, &fd, 1);
+}
+
 static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
                                    struct vhost_log *log)
 {
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index 08d34db..e12930c 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -110,5 +110,6 @@ int vhost_set_backend_type(struct vhost_dev *dev,
 
 int vhost_user_input_get_config(struct vhost_dev *dev,
                                 struct virtio_input_config **config);
+int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd);
 
 #endif /* VHOST_BACKEND_H_ */
-- 
2.7.4

* [Qemu-devel] [RFC 14/14] Add virtio-gpu vhost-user backend
  2016-06-04 21:33 [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices marcandre.lureau
                   ` (12 preceding siblings ...)
  2016-06-04 21:33 ` [Qemu-devel] [RFC 13/14] vhost-user: add vhost_user_gpu_set_socket() marcandre.lureau
@ 2016-06-04 21:33 ` marcandre.lureau
  2016-06-06  6:54   ` Gerd Hoffmann
  2016-06-06 13:54 ` [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices Marc-André Lureau
  14 siblings, 1 reply; 26+ messages in thread
From: marcandre.lureau @ 2016-06-04 21:33 UTC (permalink / raw)
  To: qemu-devel; +Cc: kraxel, Marc-André Lureau

From: Marc-André Lureau <marcandre.lureau@redhat.com>

Add a "vhost-user" property to virtio-gpu devices. When set, the
associated vhost-user backend is used to handle the virtio rings.

For now, a socketpair is created for the backend to share the rendering
results with qemu via a simple VHOST_GPU protocol.

Example usage:
-object vhost-user-backend,id=vug,cmd="./vhost-user-gpu"
-device virtio-vga,virgl=true,vhost-user=vug

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 hw/display/Makefile.objs       |   2 +-
 hw/display/vhost-gpu.c         | 264 +++++++++++++++++++++++++++++++++++++++++
 hw/display/virtio-gpu-pci.c    |   6 +
 hw/display/virtio-gpu.c        |  75 +++++++++++-
 hw/display/virtio-vga.c        |   5 +
 include/hw/virtio/virtio-gpu.h |   7 ++
 6 files changed, 356 insertions(+), 3 deletions(-)
 create mode 100644 hw/display/vhost-gpu.c

diff --git a/hw/display/Makefile.objs b/hw/display/Makefile.objs
index d99780e..f889730 100644
--- a/hw/display/Makefile.objs
+++ b/hw/display/Makefile.objs
@@ -36,7 +36,7 @@ obj-$(CONFIG_VGA) += vga.o
 
 common-obj-$(CONFIG_QXL) += qxl.o qxl-logger.o qxl-render.o
 
-obj-$(CONFIG_VIRTIO) += virtio-gpu.o virtio-gpu-3d.o
+obj-$(CONFIG_VIRTIO) += virtio-gpu.o virtio-gpu-3d.o vhost-gpu.o
 obj-$(CONFIG_VIRTIO_PCI) += virtio-gpu-pci.o
 obj-$(CONFIG_VIRTIO_VGA) += virtio-vga.o
 virtio-gpu.o-cflags := $(VIRGL_CFLAGS)
diff --git a/hw/display/vhost-gpu.c b/hw/display/vhost-gpu.c
new file mode 100644
index 0000000..9dc8b13
--- /dev/null
+++ b/hw/display/vhost-gpu.c
@@ -0,0 +1,264 @@
+/*
+ * Virtio vhost GPU Device
+ *
+ * Copyright Red Hat, Inc. 2016
+ *
+ * Authors:
+ *     Marc-André Lureau <marcandre.lureau@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "sysemu/char.h"
+
+typedef enum VhostGpuRequest {
+    VHOST_GPU_NONE = 0,
+    VHOST_GPU_CURSOR_POS,
+    VHOST_GPU_CURSOR_POS_HIDE,
+    VHOST_GPU_CURSOR_UPDATE,
+    VHOST_GPU_SCANOUT,
+    VHOST_GPU_UPDATE,
+    VHOST_GPU_GL_SCANOUT,
+    VHOST_GPU_GL_UPDATE,
+} VhostGpuRequest;
+
+typedef struct VhostGpuCursorPos {
+    uint32_t scanout_id;
+    uint32_t x;
+    uint32_t y;
+} VhostGpuCursorPos;
+
+typedef struct VhostGpuCursorUpdate {
+    VhostGpuCursorPos pos;
+    uint32_t hot_x;
+    uint32_t hot_y;
+    uint32_t data[64 * 64];
+} VhostGpuCursorUpdate;
+
+typedef struct VhostGpuScanout {
+    uint32_t scanout_id;
+    uint32_t width;
+    uint32_t height;
+} VhostGpuScanout;
+
+typedef struct VhostGpuGlScanout {
+    uint32_t scanout_id;
+    uint32_t x;
+    uint32_t y;
+    uint32_t width;
+    uint32_t height;
+    uint32_t fd_width;
+    uint32_t fd_height;
+    uint32_t fd_stride;
+    uint32_t fd_flags;
+    int fd_drm_fourcc;
+} VhostGpuGlScanout;
+
+typedef struct VhostGpuUpdate {
+    uint32_t scanout_id;
+    uint32_t x;
+    uint32_t y;
+    uint32_t width;
+    uint32_t height;
+    uint8_t data[];
+} VhostGpuUpdate;
+
+typedef struct VhostGpuMsg {
+    VhostGpuRequest request;
+    uint32_t size; /* the following payload size */
+    union {
+        VhostGpuCursorPos cursor_pos;
+        VhostGpuCursorUpdate cursor_update;
+        VhostGpuScanout scanout;
+        VhostGpuUpdate update;
+        VhostGpuGlScanout gl_scanout;
+    } payload;
+} QEMU_PACKED VhostGpuMsg;
+
+static VhostGpuMsg m __attribute__ ((unused));
+#define VHOST_GPU_HDR_SIZE (sizeof(m.request) + sizeof(m.size))
+
+static void vhost_gpu_handle_cursor(VirtIOGPU *g, VhostGpuMsg *msg)
+{
+    VhostGpuCursorPos *pos = &msg->payload.cursor_pos;
+    struct virtio_gpu_scanout *s;
+
+    if (pos->scanout_id >= g->conf.max_outputs) {
+        return;
+    }
+    s = &g->scanout[pos->scanout_id];
+
+    if (msg->request == VHOST_GPU_CURSOR_UPDATE) {
+        VhostGpuCursorUpdate *up = &msg->payload.cursor_update;
+        if (!s->current_cursor) {
+            s->current_cursor = cursor_alloc(64, 64);
+        }
+
+        s->current_cursor->hot_x = up->hot_x;
+        s->current_cursor->hot_y = up->hot_y;
+
+        memcpy(s->current_cursor->data, up->data,
+               64 * 64 * sizeof(uint32_t));
+
+        dpy_cursor_define(s->con, s->current_cursor);
+    }
+
+    dpy_mouse_set(s->con, pos->x, pos->y,
+                  msg->request != VHOST_GPU_CURSOR_POS_HIDE);
+}
+
+static void vhost_gpu_handle_display(VirtIOGPU *g, VhostGpuMsg *msg)
+{
+    struct virtio_gpu_scanout *s;
+
+    switch (msg->request) {
+    case VHOST_GPU_SCANOUT: {
+        VhostGpuScanout *m = &msg->payload.scanout;
+
+        if (m->scanout_id >= g->conf.max_outputs) {
+            return;
+        }
+        s = &g->scanout[m->scanout_id];
+
+        s->ds = qemu_create_displaysurface(m->width, m->height);
+        if (!s->ds) {
+            return;
+        }
+
+        dpy_gfx_replace_surface(s->con, s->ds);
+        break;
+    }
+    case VHOST_GPU_GL_SCANOUT: {
+        VhostGpuGlScanout *m = &msg->payload.gl_scanout;
+        int fd = qemu_chr_fe_get_msgfd(g->vhost_chr);
+
+        if (m->scanout_id >= g->conf.max_outputs) {
+            close(fd);
+            break;
+        }
+
+        g->enable = 1;
+        dpy_gl_scanout2(g->scanout[m->scanout_id].con, fd,
+                        m->fd_flags & 1 /* FIXME: Y_0_TOP */,
+                        m->x, m->y, m->width, m->height,
+                        m->fd_width, m->fd_height, m->fd_stride,
+                        m->fd_drm_fourcc);
+        break;
+    }
+    case VHOST_GPU_GL_UPDATE: {
+        VhostGpuUpdate *m = &msg->payload.update;
+
+        if (m->scanout_id >= g->conf.max_outputs ||
+            !g->scanout[m->scanout_id].con) {
+            break;
+        }
+
+        dpy_gl_update(g->scanout[m->scanout_id].con,
+                      m->x, m->y, m->width, m->height);
+        break;
+    }
+    case VHOST_GPU_UPDATE: {
+        VhostGpuUpdate *m = &msg->payload.update;
+
+        if (m->scanout_id >= g->conf.max_outputs) {
+            break;
+        }
+        s = &g->scanout[m->scanout_id];
+
+        pixman_image_t *image =
+            pixman_image_create_bits(PIXMAN_x8r8g8b8,
+                                     m->width,
+                                     m->height,
+                                     (uint32_t *)m->data,
+                                     m->width * 4);
+
+        pixman_image_composite(PIXMAN_OP_SRC,
+                               image, NULL, s->ds->image,
+                               0, 0, 0, 0, m->x, m->y, m->width, m->height);
+
+        pixman_image_unref(image);
+        dpy_gfx_update(s->con, m->x, m->y, m->width, m->height);
+        break;
+    }
+    default:
+        g_warn_if_reached();
+    }
+}
+
+static void vhost_gpu_chr_read(void *opaque)
+{
+    VirtIOGPU *g = opaque;
+    VhostGpuMsg *msg = NULL;
+    VhostGpuRequest request;
+    uint32_t size;
+    int r;
+
+    r = qemu_chr_fe_read_all(g->vhost_chr,
+                             (uint8_t *)&request, sizeof(uint32_t));
+    if (r != sizeof(uint32_t)) {
+        error_report("failed to read msg header");
+        goto end;
+    }
+
+    r = qemu_chr_fe_read_all(g->vhost_chr,
+                             (uint8_t *)&size, sizeof(uint32_t));
+    if (r != sizeof(uint32_t)) {
+        error_report("failed to read msg size");
+        goto end;
+    }
+
+    msg = g_malloc(VHOST_GPU_HDR_SIZE + size);
+    g_return_if_fail(msg != NULL);
+
+    r = qemu_chr_fe_read_all(g->vhost_chr,
+                             (uint8_t *)&msg->payload, size);
+    if (r != size) {
+        error_report("failed to read msg payload %d != %u", r, size);
+        goto end;
+    }
+
+    msg->request = request;
+    msg->size = size;
+
+    if (request == VHOST_GPU_CURSOR_UPDATE ||
+        request == VHOST_GPU_CURSOR_POS ||
+        request == VHOST_GPU_CURSOR_POS_HIDE) {
+        vhost_gpu_handle_cursor(g, msg);
+    } else {
+        vhost_gpu_handle_display(g, msg);
+    }
+
+end:
+    g_free(msg);
+}
+
+int vhost_gpu_init(VirtIOGPU *g, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(g);
+    int sv[2];
+
+    if (vhost_user_backend_dev_init(g->vhost, vdev, 2, errp) < 0) {
+        return -1;
+    }
+
+    if (socketpair(PF_UNIX, SOCK_STREAM, 0, sv) == -1) {
+        error_setg_errno(errp, errno, "socketpair() failed");
+        return -1;
+    }
+
+    g->vhost_chr = qemu_chr_open_socket(sv[0], errp);
+    if (!g->vhost_chr) {
+        return -1;
+    }
+
+    qemu_set_fd_handler(sv[0], vhost_gpu_chr_read, NULL, g);
+
+    vhost_user_gpu_set_socket(&g->vhost->dev, sv[1]);
+
+    close(sv[1]);
+
+    return 0;
+}
diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index a71b230..2331d87 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -16,6 +16,7 @@
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-pci.h"
 #include "hw/virtio/virtio-gpu.h"
+#include "qapi/error.h"
 
 static Property virtio_gpu_pci_properties[] = {
     DEFINE_VIRTIO_GPU_PCI_PROPERTIES(VirtIOPCIProxy),
@@ -60,6 +61,11 @@ static void virtio_gpu_initfn(Object *obj)
 
     virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
                                 TYPE_VIRTIO_GPU);
+
+    /* could eventually be included in qdev_alias_all_properties? */
+    object_property_add_alias(obj, "vhost-user",
+                              OBJECT(&dev->vdev), "vhost-user",
+                              &error_abort);
 }
 
 static const TypeInfo virtio_gpu_pci_info = {
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index f3b0f14..b92f493 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -21,6 +21,8 @@
 #include "hw/virtio/virtio-bus.h"
 #include "qemu/log.h"
 #include "qapi/error.h"
+#include "sysemu/char.h"
+#include "qemu/error-report.h"
 
 static struct virtio_gpu_simple_resource*
 virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
@@ -905,7 +907,12 @@ static void virtio_gpu_gl_block(void *opaque, bool block)
 
     g->renderer_blocked = block;
     if (!block) {
-        virtio_gpu_process_cmdq(g);
+        if (g->vhost_chr) {
+            uint32_t ok = 0;
+            qemu_chr_fe_write(g->vhost_chr, (uint8_t *)&ok, sizeof(ok));
+        } else {
+            virtio_gpu_process_cmdq(g);
+        }
     }
 }
 
@@ -962,6 +969,10 @@ static void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
         g->cursor_vq = virtio_add_queue(vdev, 16, virtio_gpu_handle_cursor_cb);
     }
 
+    if (g->vhost && vhost_gpu_init(g, errp) < 0) {
+        return;
+    }
+
     g->ctrl_bh = qemu_bh_new(virtio_gpu_ctrl_bh, g);
     g->cursor_bh = qemu_bh_new(virtio_gpu_cursor_bh, g);
     QTAILQ_INIT(&g->reslist);
@@ -982,8 +993,27 @@ static void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
     vmstate_register(qdev, -1, &vmstate_virtio_gpu_unmigratable, g);
 }
 
+static void virtio_gpu_host_user_is_busy(Object *obj, const char *name,
+                                         Object *val, Error **errp)
+{
+    VirtIOGPU *g = VIRTIO_GPU(obj);
+
+    if (g->vhost) {
+        error_setg(errp, "can't use already busy vhost-user");
+    } else {
+        qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
+    }
+}
+
 static void virtio_gpu_instance_init(Object *obj)
 {
+    VirtIOGPU *g = VIRTIO_GPU(obj);
+
+    object_property_add_link(obj, "vhost-user", TYPE_VHOST_USER_BACKEND,
+                             (Object **)&g->vhost,
+                             virtio_gpu_host_user_is_busy,
+                             OBJ_PROP_LINK_UNREF_ON_RELEASE,
+                             &error_abort);
 }
 
 static void virtio_gpu_reset(VirtIODevice *vdev)
@@ -993,7 +1023,9 @@ static void virtio_gpu_reset(VirtIODevice *vdev)
     int i;
 
     g->enable = 0;
-
+    if (g->vhost) {
+        vhost_user_backend_stop(g->vhost);
+    }
     QTAILQ_FOREACH_SAFE(res, &g->reslist, next, tmp) {
         virtio_gpu_resource_destroy(g, res);
     }
@@ -1026,6 +1058,42 @@ static void virtio_gpu_reset(VirtIODevice *vdev)
 #endif
 }
 
+static void virtio_gpu_set_status(VirtIODevice *vdev, uint8_t val)
+{
+    VirtIOGPU *g = VIRTIO_GPU(vdev);
+
+    if (g->vhost) {
+        if (val & VIRTIO_CONFIG_S_DRIVER_OK) {
+            vhost_user_backend_start(g->vhost);
+        } else {
+            vhost_user_backend_stop(g->vhost);
+        }
+    }
+}
+
+static bool virtio_gpu_guest_notifier_pending(VirtIODevice *vdev, int idx)
+{
+    VirtIOGPU *g = VIRTIO_GPU(vdev);
+
+    if (!g->vhost) {
+        return false;
+    }
+
+    return vhost_virtqueue_pending(&g->vhost->dev, idx);
+}
+
+static void virtio_gpu_guest_notifier_mask(VirtIODevice *vdev, int idx,
+                                           bool mask)
+{
+    VirtIOGPU *g = VIRTIO_GPU(vdev);
+
+    if (!g->vhost) {
+        return;
+    }
+
+    vhost_virtqueue_mask(&g->vhost->dev, vdev, idx, mask);
+}
+
 static Property virtio_gpu_properties[] = {
     DEFINE_PROP_UINT32("max_outputs", VirtIOGPU, conf.max_outputs, 1),
 #ifdef CONFIG_VIRGL
@@ -1047,6 +1115,9 @@ static void virtio_gpu_class_init(ObjectClass *klass, void *data)
     vdc->set_config = virtio_gpu_set_config;
     vdc->get_features = virtio_gpu_get_features;
     vdc->set_features = virtio_gpu_set_features;
+    vdc->set_status   = virtio_gpu_set_status;
+    vdc->guest_notifier_mask = virtio_gpu_guest_notifier_mask;
+    vdc->guest_notifier_pending = virtio_gpu_guest_notifier_pending;
 
     vdc->reset = virtio_gpu_reset;
 
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index f49f8de..6b233bb 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -181,6 +181,11 @@ static void virtio_vga_inst_initfn(Object *obj)
 
     virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
                                 TYPE_VIRTIO_GPU);
+
+    /* could eventually be included in qdev_alias_all_properties? */
+    object_property_add_alias(obj, "vhost-user",
+                              OBJECT(&dev->vdev), "vhost-user",
+                              &error_abort);
 }
 
 static TypeInfo virtio_vga_info = {
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 0cc8e67..a1e9fe5 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -19,6 +19,7 @@
 #include "ui/console.h"
 #include "hw/virtio/virtio.h"
 #include "hw/pci/pci.h"
+#include "sysemu/vhost-user-backend.h"
 
 #include "standard-headers/linux/virtio_gpu.h"
 #define TYPE_VIRTIO_GPU "virtio-gpu-device"
@@ -82,6 +83,9 @@ struct virtio_gpu_ctrl_command {
 typedef struct VirtIOGPU {
     VirtIODevice parent_obj;
 
+    VhostUserBackend *vhost;
+    CharDriverState *vhost_chr;
+
     QEMUBH *ctrl_bh;
     QEMUBH *cursor_bh;
     VirtQueue *ctrl_vq;
@@ -161,4 +165,7 @@ void virtio_gpu_virgl_fence_poll(VirtIOGPU *g);
 void virtio_gpu_virgl_reset(VirtIOGPU *g);
 int virtio_gpu_virgl_init(VirtIOGPU *g);
 
+/* vhost-gpu.c */
+int vhost_gpu_init(VirtIOGPU *g, Error **errp);
+
 #endif
-- 
2.7.4

* Re: [Qemu-devel] [RFC 05/14] Add vhost-user backend to virtio-input-host
  2016-06-04 21:33 ` [Qemu-devel] [RFC 05/14] Add vhost-user backend to virtio-input-host marcandre.lureau
@ 2016-06-06  6:22   ` Gerd Hoffmann
  0 siblings, 0 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-06  6:22 UTC (permalink / raw)
  To: marcandre.lureau; +Cc: qemu-devel

On Sat, 2016-06-04 at 23:33 +0200, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> Learn to use a vhost-user as a virtio-input backend. Usage:
> 
> -object vhost-user-backend,id=vuid -device virtio-input-host-pci,vhost-user=vuid

IMO this should be a separate device, named "virtio-input-vhost" or
"virtio-input-user".  There isn't really any common code between this
and virtio-input-host.

cheers,
  Gerd

* Re: [Qemu-devel] [RFC 11/14] console: add dpy_gl_scanout2()
  2016-06-04 21:33 ` [Qemu-devel] [RFC 11/14] console: add dpy_gl_scanout2() marcandre.lureau
@ 2016-06-06  6:35   ` Gerd Hoffmann
  2016-06-06 13:18     ` Marc-André Lureau
  0 siblings, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-06  6:35 UTC (permalink / raw)
  To: marcandre.lureau; +Cc: qemu-devel

  Hi,

> @@ -218,6 +218,11 @@ typedef struct DisplayChangeListenerOps {
>      void (*dpy_gl_scanout)(DisplayChangeListener *dcl,
>                             uint32_t backing_id, bool backing_y_0_top,
>                             uint32_t x, uint32_t y, uint32_t w, uint32_t h);
> +    void (*dpy_gl_scanout2)(DisplayChangeListener *dcl,
> +                            int fd, bool backing_y_0_top,
> +                            uint32_t x, uint32_t y, uint32_t w, uint32_t h,
> +                            uint32_t fd_w, uint32_t fd_h, uint32_t fd_stride,
> +                            int fd_fourcc);

Interface looks sane.  I'd like to see a more descriptive name than just
"2" though.  Maybe "dpy_gl_scanout_dmabuf"?  And while being at it
rename the other one to "dpy_gl_scanout_texture"?

Also: please put the spice update into a separate patch.

Adding gtk (or sdl2, or both) support would be nice, to see whether the
interface works if qemu needs to import the dma-buf for display.

cheers,
  Gerd

* Re: [Qemu-devel] [RFC 13/14] vhost-user: add vhost_user_gpu_set_socket()
  2016-06-04 21:33 ` [Qemu-devel] [RFC 13/14] vhost-user: add vhost_user_gpu_set_socket() marcandre.lureau
@ 2016-06-06  6:36   ` Gerd Hoffmann
  0 siblings, 0 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-06  6:36 UTC (permalink / raw)
  To: marcandre.lureau; +Cc: qemu-devel

On Sat, 2016-06-04 at 23:33 +0200, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> Add a new vhost-user message to give a unix socket for gpu updates to a
> vhost-user backend.

--verbose please.

cheers,
  Gerd

* Re: [Qemu-devel] [RFC 14/14] Add virtio-gpu vhost-user backend
  2016-06-04 21:33 ` [Qemu-devel] [RFC 14/14] Add virtio-gpu vhost-user backend marcandre.lureau
@ 2016-06-06  6:54   ` Gerd Hoffmann
  0 siblings, 0 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-06  6:54 UTC (permalink / raw)
  To: marcandre.lureau; +Cc: qemu-devel

On Sat, 2016-06-04 at 23:33 +0200, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau <marcandre.lureau@redhat.com>
> 
> Add a "vhost-user" property to virtio-gpu devices. When set, the
> associated vhost-user backend is used to handle the virtio rings.
> 
> For now, a socketpair is created for the backend to share the rendering
> results with qemu via a simple VHOST_GPU protocol.

Can you give a design overview?

On a first look this seems to not share much code with virtio-gpu
either, so I guess it makes sense to put this into a separate
virtio-gpu-vhost device too.

cheers,
  Gerd

* Re: [Qemu-devel] [RFC 11/14] console: add dpy_gl_scanout2()
  2016-06-06  6:35   ` Gerd Hoffmann
@ 2016-06-06 13:18     ` Marc-André Lureau
  2016-06-06 14:04       ` Gerd Hoffmann
  0 siblings, 1 reply; 26+ messages in thread
From: Marc-André Lureau @ 2016-06-06 13:18 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: marcandre lureau, qemu-devel

Hi

----- Original Message -----
> Hi,
> 
> > @@ -218,6 +218,11 @@ typedef struct DisplayChangeListenerOps {
> >      void (*dpy_gl_scanout)(DisplayChangeListener *dcl,
> >                             uint32_t backing_id, bool backing_y_0_top,
> >                             uint32_t x, uint32_t y, uint32_t w, uint32_t
> >                             h);
> > +    void (*dpy_gl_scanout2)(DisplayChangeListener *dcl,
> > +                            int fd, bool backing_y_0_top,
> > +                            uint32_t x, uint32_t y, uint32_t w, uint32_t
> > h,
> > +                            uint32_t fd_w, uint32_t fd_h, uint32_t
> > fd_stride,
> > +                            int fd_fourcc);
> 
> Interface looks sane.  I'd like to see a more descriptive name than just
> "2" though.  Maybe "dpy_gl_scanout_dmabuf"?  And while being at it
> rename the other one to "dpy_gl_scanout_texture"?

sounds good

> 
> Also: please put the spice update into a separate patch.

ok

> 
> Adding gtk (or sdl2, or both) support would be nice, to see whether the
> interface works if qemu needs to import the dma-buf for display.

As I explained in the cover letter, it's not easily doable since gtk/sdl2 use glx and can't import dmabufs (that requires egl). I could make it work with gtk/egl (but not gtkglarea; sigh, so many UIs and subtle issues)
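
(For reference, the dependency is that dma-buf import goes through the
EGL_EXT_image_dma_buf_import extension, which has no glx counterpart.
A rough sketch, single-plane buffer, inside a function with fd, the
fd_* values from the scanout message, and an EGLDisplay dpy in scope;
both entry points are resolved via eglGetProcAddress, error handling
omitted:)

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

EGLint attrs[] = {
    EGL_WIDTH, fd_width,
    EGL_HEIGHT, fd_height,
    EGL_LINUX_DRM_FOURCC_EXT, fd_drm_fourcc,
    EGL_DMA_BUF_PLANE0_FD_EXT, fd,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
    EGL_DMA_BUF_PLANE0_PITCH_EXT, fd_stride,
    EGL_NONE
};
EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                      EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
/* bind the imported buffer to the current texture for drawing */
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);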

* Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
  2016-06-04 21:33 [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices marcandre.lureau
                   ` (13 preceding siblings ...)
  2016-06-04 21:33 ` [Qemu-devel] [RFC 14/14] Add virtio-gpu vhost-user backend marcandre.lureau
@ 2016-06-06 13:54 ` Marc-André Lureau
  2016-06-07 14:47   ` Gerd Hoffmann
  14 siblings, 1 reply; 26+ messages in thread
From: Marc-André Lureau @ 2016-06-06 13:54 UTC (permalink / raw)
  To: QEMU; +Cc: Gerd Hoffmann, Marc-André Lureau

Hi Gerd

Thanks for your feedback on the series. Your remarks are all valid,
but before doing more work I would like to know if there is enough
interest. It duplicates work and adds some complexity. Also, some
general feedback on design would be welcome.

What is proposed in this series:
- the vhost-user-backend is a helper object spawning, setting up and
holding a connection to a backend
- the vhost-user socket is set to be fd 3 in the child process (see
the sketch after this list)
- we may want to only use, or at least allow specifying, a unix socket
chardev for the backend (like vhost-net), in which case management of
the backend would be left outside of qemu
- "add vhost-user backend to virtio-input-host" patch shows how little
is required for a virtio device to use vhost-user-backend, and is
quite a neat use case imho (allowing various input backends)
- there are device-specific vhost-user messages to be added, such as
VHOST_USER_INPUT_GET_CONFIG, or we may use an extra fd for communication
to pass to the child during fork
- when there is a whole set of messages to add, like the VHOST_GPU*, I
decided to use a different socket, given to backend with
VHOST_USER_GPU_SET_SOCKET.
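
To make the fd 3 convention concrete, the spawning boils down to
something like this (simplified sketch with illustrative names; the
actual object goes through qemu's spawn helpers and proper error
handling):

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* qemu side: run "cmd" with one end of a socketpair as fd 3 */
static pid_t spawn_vhost_backend(const char *cmd, int *master_fd)
{
    int sv[2];
    pid_t pid;

    if (socketpair(PF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        return -1;
    }
    pid = fork();
    if (pid == 0) {              /* child: exec the backend */
        dup2(sv[1], 3);          /* the vhost-user socket becomes fd 3 */
        close(sv[0]);
        execlp("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(1);
    }
    close(sv[1]);
    *master_fd = sv[0];          /* qemu keeps this end */
    return pid;
}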

I am not sold that we need to develop a new vhost protocol for the gpu
though. I am considering the Spice worker thread (handling cursor and
display) to actually run in the vhost backend. That would make the
solution Spice specific though (unless qemu implements some of the
Spice protocol ...). Having the spice worker run in the backend has
similar robustness advantages, reducing the attack surface qemu
exposes to a spice user.

I am also wondering if several virtio backends could be combined in
the same process. This would quite easily allow running the qemu
gtk/sdl UI in a subprocess.

Going further, once we have proper reconnect & reset support in
vhost-user & virtio, one can imagine running/stopping different UIs
too.

(so after this initial rfc, which all looks nice to me, the question I
ask myself is: what do we actually want?)

-- 
Marc-André Lureau

* Re: [Qemu-devel] [RFC 11/14] console: add dpy_gl_scanout2()
  2016-06-06 13:18     ` Marc-André Lureau
@ 2016-06-06 14:04       ` Gerd Hoffmann
  0 siblings, 0 replies; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-06 14:04 UTC (permalink / raw)
  To: Marc-André Lureau; +Cc: marcandre lureau, qemu-devel

  Hi,

> > Adding gtk (or sdl2, or both) support would be nice, to see whether the
> > interface works if qemu needs to import the dma-buf for display.
> 
> As I explained in the cover letter, it's not easily doable since
> gtk/sdl2 use glx and can't import dmabufs (that requires egl). I could
> make it work with gtk/egl (but not gtkglarea; sigh, so many UIs and
> subtle issues)

Ah, ok.  Was reading a bit too fast it seems, didn't notice the subtle
egl vs. glx thing.

cheers,
  Gerd

* Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
  2016-06-06 13:54 ` [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices Marc-André Lureau
@ 2016-06-07 14:47   ` Gerd Hoffmann
  2016-06-07 15:01     ` Marc-André Lureau
  0 siblings, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-07 14:47 UTC (permalink / raw)
  To: Marc-André Lureau; +Cc: QEMU, Marc-André Lureau

On Mon, 2016-06-06 at 15:54 +0200, Marc-André Lureau wrote:
> Hi Gerd
> 
> Thanks for your feedback on the series. Your remarks are all valid,
> but before doing more work I would like to know if there is enough
> interest. It duplicates work and adds some complexity. Also, some
> general feedback on design would be welcome.
> 
> What is proposed in this series:
> - the vhost-user-backend is a helper object spawning, setting up and
> holding a connection to a backend
> - the vhost-user socket is set to be fd 3 in child process

Which implies a 1:1 relationship between object and backend.  Which
isn't that great if we want to allow for multiple backends in one process
(your idea below, and I think it can be useful).

> - "add vhost-user backend to virtio-input-host" patch shows how little
> is required for a virtio device to use vhost-user-backend, and is
> quite a neat use case imho (allowing various input backends)

Indeed.  Doing a "mouse wiggler" would be a pretty minimal backend.
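
(Sketching that: a wiggler only has to queue a relative motion plus a
sync event, using the virtio_input_event layout; vi_queue_event() is a
hypothetical helper that puts one event on the event virtqueue:)

#include <endian.h>
#include <linux/input.h>
#include "standard-headers/linux/virtio_input.h"

static void wiggle_once(VuDev *dev, int32_t dx)
{
    struct virtio_input_event move = {
        .type  = htole16(EV_REL),
        .code  = htole16(REL_X),
        .value = htole32(dx),
    };
    struct virtio_input_event syn = {
        .type  = htole16(EV_SYN),
        .code  = htole16(SYN_REPORT),
        .value = 0,
    };

    vi_queue_event(dev, &move);   /* hypothetical: push onto eventq */
    vi_queue_event(dev, &syn);
}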

> - there are device-specific vhost-user messages to be added, such as
> VHOST_USER_INPUT_GET_CONFIG, or we may use an extra fd for communication
> to pass to the child during fork

Is that needed?  I think it should be possible to create device-agnostic
messages for config access.

> - when there is a whole set of messages to add, like the VHOST_GPU*, I
> decided to use a different socket, given to backend with
> VHOST_USER_GPU_SET_SOCKET.

I would tend to send it all over the same socket.

> I am not sold that we need to develop a new vhost protocol for the gpu
> though. I am considering the Spice worker thread (handling cursor and
> display) to actually run in the vhost backend.

Interesting idea, would save quite a few context switches for dma-buf
passing.  But it also brings new challenges, vga compatibility for
example.  Also spice channel management.  vdagent, ...

I'd suggest putting it aside for now though.  Get the other stuff done
first.  Running virglrenderer in a separate process is certainly very
useful from a security point of view, and that is a big enough project
for a while I suspect.

> Going further, once we have proper reconnect & reset support in
> vhost-user & virtio, one can imagine running/stopping different UIs
> too.

That'll be quite difficult for virtio-gpu too.
virtio-input should be easy though.

cheers,
  Gerd

* Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
  2016-06-07 14:47   ` Gerd Hoffmann
@ 2016-06-07 15:01     ` Marc-André Lureau
  2016-06-08  6:11       ` Gerd Hoffmann
  0 siblings, 1 reply; 26+ messages in thread
From: Marc-André Lureau @ 2016-06-07 15:01 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: Marc-André Lureau, QEMU, Marc-André Lureau

Hi

----- Original Message -----
> On Mon, 2016-06-06 at 15:54 +0200, Marc-André Lureau wrote:
> > Hi Gerd
> > 
> > Thanks for your feedback on the series. Your remarks are all valid,
> > but before doing more work I would like to know if there is enough
> > interest. It duplicates work and adds some complexity. Also, some
> > general feedback on design would be welcome.
> > 
> > What is proposed in this series:
> > - the vhost-user-backend is a helper object spawning, setting up and
> > holding a connection to a backend
> > - the vhost-user socket is set to be fd 3 in child process
> 
> Which implies a 1:1 relationship between object and backend.  Which
> isn't that great if we want to allow for multiple backends in one process
> (your idea below, and I think it can be useful).
> 

That socket could use a different protocol to instantiate vhost-user device/backends (passing vhost-user sockets per device)?

> > - "add vhost-user backend to virtio-input-host" patch shows how little
> > is required for a virtio device to use vhost-user-backend, and is
> > quite a neat use case imho (allowing various input backends)
> 
> Indeed.  Doing a "mouse wiggler" would be a pretty minimal backend.
> 
> > - there are device-specific vhost-user messages to be added, such as
> > VHOST_USER_INPUT_GET_CONFIG, or we may use an extra fd for communication
> > to pass to the child during fork
> 
> Is that needed?  I think it should be possible to create device-agnostic
> messages for config access.

VHOST_USER_INPUT_GET_CONFIG is quite virtio-input specific, since it returns the array of virtio_input_config, which is later read via virtio config selection. Can this be generalized?

> > - when there is a whole set of messages to add, like the VHOST_GPU*, I
> > decided to use a different socket, given to backend with
> > VHOST_USER_GPU_SET_SOCKET.
> 
> I would tend to send it all over the same socket.

It's possible, but currently the vhost-user protocol is unidirectional (master/slave request/reply relationship). The backend cannot easily send messages on its own. So short of reinventing some display protocol, it is hard to fit this in the vhost-user socket today.
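
For example, with the dedicated socket the backend can push an update
whenever it likes, using the VhostGpuMsg framing from the gpu patches
(sketch; vg_sock_fd_write() is the helper declared in vugpu.h, -1
standing for "no fd to attach", and g is the backend's VuGpu):

VhostGpuMsg msg = {
    .request = VHOST_GPU_SCANOUT,
    .size = sizeof(VhostGpuScanout),
    .payload.scanout = {
        .scanout_id = 0,
        .width = 1024,
        .height = 768,
    },
};

/* header (request + size) followed by the payload */
vg_sock_fd_write(g->sock_fd, &msg, VHOST_GPU_HDR_SIZE + msg.size, -1);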

> 
> > I am not sold that we need to develop a new vhost protocol for the gpu
> > though. I am considering the Spice worker thread (handling cursor and
> > display) to actually run in the vhost backend.
> 
> Interesting idea, would save quite a few context switches for dma-buf
> passing.  But it also brings new challenges, vga compatibility for
> example.  Also spice channel management.  vdagent, ...

What I had in mind is to hand off only the cursor and display channel to the vhost-gpu backend once the channel is up and the gpu is active. Eventually hand it back to qemu when switching back to VGA (sounds like it should be doable to me, but perhaps not worth it like this?)
 
> I'd suggest putting it aside for now though.  Get the other stuff done
> first.  Running virglrenderer in a separate process is certainly very
> useful from a security point of view, and that is a big enough project
> for a while I suspect.

Agreed; it will then require the VHOST_GPU_* messages to update the qemu & spice process.

> > Going further, once we have proper reconnect & reset support in
> > vhost-user & virtio, one can imagine running/stopping different UIs
> > too.
> 
> That'll be quite difficult for virtio-gpu too.
> virtio-input should be easy though.

Right, I wasn't thinking about 3d in this case ;) Although I still hope we can get there some day.

thanks

* Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
  2016-06-07 15:01     ` Marc-André Lureau
@ 2016-06-08  6:11       ` Gerd Hoffmann
  2016-06-08 12:53         ` Marc-André Lureau
  0 siblings, 1 reply; 26+ messages in thread
From: Gerd Hoffmann @ 2016-06-08  6:11 UTC (permalink / raw)
  To: Marc-André Lureau
  Cc: Marc-André Lureau, QEMU, Marc-André Lureau

On Tue, 2016-06-07 at 11:01 -0400, Marc-André Lureau wrote:
> Hi
> 
> ----- Original Message -----
> > On Mon, 2016-06-06 at 15:54 +0200, Marc-André Lureau wrote:
> > > Hi Gerd
> > > 
> > > Thanks for your feedback on the series. Your remarks are all valid,
> > > but before doing more work I would like to know if there is enough
> > > interest. It duplicates work and adds some complexity. Also, some
> > > general feedback on design would be welcome.
> > > 
> > > What is proposed in this series:
> > > - the vhost-user-backend is a helper object spawning, setting up and
> > > holding a connection to a backend
> > > - the vhost-user socket is set to be fd 3 in child process
> > 
> > Which implies a 1:1 relationship between object and backend.  Which
> > isn't that great if we want to allow for multiple backends in one process
> > (your idea below, and I think it can be useful).
> > 
> 
> That socket could use a different protocol to instantiate vhost-user device/backends (passing vhost-user sockets per device)?

I'd tend to simply hand the backend process one unix socket path per
device.  Maybe also allow libvirt to link things using monitor fd
passing.

It's a little less automatic, but more flexible.

> > > - there are device-specific vhost-user messages to be added, such as
> > > VHOST_USER_INPUT_GET_CONFIG, or we may use an extra fd for communication
> > > to pass to the child during fork
> > 
> > Is that needed?  I think it should be possible to create device-agnostic
> > messages for config access.
> 
> VHOST_USER_INPUT_GET_CONFIG is quite virtio-input specific, since it
> returns the array of virtio_input_config, which is later read via
> virtio config selection. Can this be generalized?

Well, not as a one-time init call.  You have to forward every write access
to the backend.  For read access the easiest would be to forward every
access too.  Or have a shadow copy for read access which is updated
after every write.
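
Something device-agnostic could look like this (just a sketch, not
part of this series: offset/size addressing into the virtio config
space would cover the input array as well as other devices):

typedef struct VhostUserConfig {
    uint32_t offset;      /* offset into the device config space */
    uint32_t size;        /* number of bytes to read or write */
    uint32_t flags;       /* e.g. distinguish read from write */
    uint8_t  region[256]; /* the config bytes being transferred */
} VhostUserConfig;

/* hypothetical requests: VHOST_USER_GET_CONFIG forwards a read to the
 * backend, VHOST_USER_SET_CONFIG forwards a write and lets the master
 * refresh its shadow copy afterwards */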

> > > - when there is a whole set of messages to add, like the VHOST_GPU*, I
> > > decided to use a different socket, given to backend with
> > > VHOST_USER_GPU_SET_SOCKET.
> > 
> > I would tend to send it all over the same socket.
> 
> It's possible, but currently the vhost-user protocol is unidirectional
> (master/slave request/reply relationship). The backend cannot easily
> send messages on its own. So short of reinventing some display
> protocol, it is hard to fit this in the vhost-user socket today.

Ok.  So maybe it isn't that useful to use vhost-user for the gpu?  The
fundamental issue here is that qemu needs to process some of the
messages.  So you send those back to qemu via VHOST_GPU*.

So maybe it works better if we continue to terminate the rings in
qemu, then forward messages relevant for virglrenderer to the external
process.

> > > I am not sold that we need to develop a new vhost protocol for the gpu
> > > though. I am considering the Spice worker thread (handling cursor and
> > > display) to actually run in the vhost backend.
> > 
> > Interesting idea, would save quite a few context switches for dma-buf
> > passing.  But it also brings new challenges, vga compatibility for
> > example.  Also spice channel management.  vdagent, ...
> 
> What I had in mind is to hand off only the cursor and display channel
> to the vhost-gpu backend once the channel is up and the gpu is active.
> Eventually hand it back to qemu when switching back to VGA (sounds
> like it should be doable to me, but perhaps not worth it like this?)

It's not clear to me how you want to hand over the display channel from
qemu (and spice-server running as a thread in qemu process context) to the
vhost backend (running in a separate process).

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
  2016-06-08  6:11       ` Gerd Hoffmann
@ 2016-06-08 12:53         ` Marc-André Lureau
  0 siblings, 0 replies; 26+ messages in thread
From: Marc-André Lureau @ 2016-06-08 12:53 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: Marc-André Lureau, QEMU, Marc-André Lureau

Hi

----- Original Message -----
> > > > - the vhost-user-backend is a helper object spawning, setting up and
> > > > holding a connection to a backend
> > > > - the vhost-user socket is set to be fd 3 in child process
> > > 
> > > Which implies a 1:1 relationship between object and backend.  Which
> > > isn't that great if we want to allow for multiple backends in one process
> > > (your idea below, and I think it can be useful).
> > > 
> > 
> > That socket could use a different protocol to instantiate vhost-user
> > device/backends (passing vhost-user sockets per device)?
> 
> I'd tend to simply hand the backend process one unix socket path per
> device.  Maybe also allow libvirt to link things using monitor fd
> passing.
> 
> It's a little less automatic, but more flexible.

Having an explicit socket path is closer to the current vhost-user-net approach:

-chardev socket,id=char0,path=/tmp/vubr.sock -netdev type=vhost-user,id=mynet1,chardev=char0

so we could have:

-chardev socket,id=char0,path=/tmp/vgpu.sock
-object vhost-user-backend,id=vug,chardev=char0
-device virtio-vga,virgl=true,vhost-user=vug

This is not incompatible with what I proposed, and I think that would be enough to allow libvirt to link things using monitor fd passing.

Another option is to hide the vhost-user-backend object behind a property and use only a chardev:

-chardev socket,id=char0,path=/tmp/vgpu.sock
-device virtio-vga,virgl=true,vhost-user=char0

But I found it more convenient to allow qemu to manage the backend process, if only for development.

> 
> > > > - there are device-specific vhost-user messages to be added, such as
> > > > VHOST_USER_INPUT_GET_CONFIG, or we may use an extra fd for communication
> > > > to pass to the child during fork
> > > 
> > > Is that needed?  I think it should be possible to create device-agnostic
> > > messages for config access.
> > 
> > VHOST_USER_INPUT_GET_CONFIG is quite virtio-input specific, since it
> > returns the array of virtio_input_config, which is later read via
> > virtio config selection. Can this be generalized?
> 
> Well, not as a one-time init call.  You have to forward every write access
> to the backend.  For read access the easiest would be to forward every
> access too.  Or have a shadow copy for read access which is updated
> after every write.

I see. But it would have to be explicit which devices require read/write config and which do not, and many config details would have to be specified on the backend side. So far, only input requires config data; gpu and net have "static" qemu-side config.

> > > > - when there is a whole set of messages to add, like the VHOST_GPU*, I
> > > > decided to use a different socket, given to backend with
> > > > VHOST_USER_GPU_SET_SOCKET.
> > > 
> > > I would tend to send it all over the same socket.
> > 
> > It's possible, but currently the vhost-user protocol is unidirectional
> > (master/slave request/reply relationship). The backend cannot easily
> > send messages on its own. So short of reinventing some display
> > protocol, it is hard to fit this in the vhost-user socket today.
> 
> Ok.  So maybe it isn't that useful to use vhost-user for the gpu?  The
> fundamental issue here is that qemu needs to process some of the
> messages.  So you send those back to qemu via VHOST_GPU*.
> 
> So maybe it works better if we continue to terminate the rings in
> qemu, then forward messages relevant for virglrenderer to the external
> process.

I would have to think about it; I am not sure how this would impact performance. I would rather teach the vhost-user protocol to be bidirectional (and async); there would be benefits to doing that for the protocol in general (a graceful shutdown request would benefit from such backend-side request support)

> 
> > > > I am not sold that we need to develop a new vhost protocol for the gpu
> > > > though. I am considering the Spice worker thread (handling cursor and
> > > > display) to actually run in the vhost backend.
> > > 
> > > Interesting idea, would save quite a few context switches for dma-buf
> > > passing.  But it also brings new challenges, vga compatibility for
> > > example.  Also spice channel management.  vdagent, ...
> > 
> > What I had in mind is to hand off only the cursor and display channel
> > to the vhost-gpu backend once the channel is up and the gpu is active.
> > Eventually hand it back to qemu when switching back to VGA (sounds
> > like it should be doable to me, but perhaps not worth it like this?)
> 
> It's not clear to me how you want to hand over the display channel from
> qemu (and spice-server running as a thread in qemu process context) to the
> vhost backend (running in a separate process).

The 10000ft view would be a qemu call like spice_qxl_steal(&state, &statesize, &fds, &nfds) that would gather all config- and state-related data and client fds for cursor and display (the qxl instance), and stop the worker thread. Then it would send this over to the backend, which would resume a worker thread with a call like spice_qxl_resume(state, fds). The server is not ready for this sort of operation today, though.
