qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v11 0/9] rutabaga_gfx + gfxstream
@ 2023-08-23  1:25 Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 1/9] virtio: Add shared memory capability Gurchetan Singh
                   ` (9 more replies)
  0 siblings, 10 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

From: Gurchetan Singh <gurchetansingh@google.com>

Changes since v10:
- Licensing and comment fixes

- Official "release commits" issued for rutabaga_gfx_ffi,
  gfxstream, aemu-base.  For example, see crrev.com/c/4778941

- The release commits can make packaging easier, though once
  again all known users will likely just build from sources
  anyways

How to build both rutabaga and gfxstream guest/host libs:

https://crosvm.dev/book/appendix/rutabaga_gfx.html

Branch containing this patch series:

https://gitlab.freedesktop.org/gurchetansingh/qemu-gfxstream/-/commits/qemu-gfxstream-v11

Antonio Caggiano (2):
  virtio-gpu: CONTEXT_INIT feature
  virtio-gpu: blob prep

Dr. David Alan Gilbert (1):
  virtio: Add shared memory capability

Gerd Hoffmann (1):
  virtio-gpu: hostmem

Gurchetan Singh (5):
  gfxstream + rutabaga prep: added need defintions, fields, and options
  gfxstream + rutabaga: add initial support for gfxstream
  gfxstream + rutabaga: meson support
  gfxstream + rutabaga: enable rutabaga
  docs/system: add basic virtio-gpu documentation

 docs/system/device-emulation.rst     |    1 +
 docs/system/devices/virtio-gpu.rst   |  112 +++
 hw/display/meson.build               |   22 +
 hw/display/virtio-gpu-base.c         |    6 +-
 hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
 hw/display/virtio-gpu-pci.c          |   14 +
 hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
 hw/display/virtio-gpu.c              |   16 +-
 hw/display/virtio-vga-rutabaga.c     |   53 ++
 hw/display/virtio-vga.c              |   33 +-
 hw/virtio/virtio-pci.c               |   18 +
 include/hw/virtio/virtio-gpu-bswap.h |   18 +
 include/hw/virtio/virtio-gpu.h       |   41 +
 include/hw/virtio/virtio-pci.h       |    4 +
 meson.build                          |    7 +
 meson_options.txt                    |    2 +
 scripts/meson-buildoptions.sh        |    3 +
 softmmu/qdev-monitor.c               |    3 +
 softmmu/vl.c                         |    1 +
 19 files changed, 1506 insertions(+), 19 deletions(-)
 create mode 100644 docs/system/devices/virtio-gpu.rst
 create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
 create mode 100644 hw/display/virtio-gpu-rutabaga.c
 create mode 100644 hw/display/virtio-vga-rutabaga.c

-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v11 1/9] virtio: Add shared memory capability
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 2/9] virtio-gpu: CONTEXT_INIT feature Gurchetan Singh
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Define a new capability type 'VIRTIO_PCI_CAP_SHARED_MEMORY_CFG' to allow
defining shared memory regions with sizes and offsets of 2^32 and more.
Multiple instances of the capability are allowed and distinguished
by a device-specific 'id'.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Acked-by: Huang Rui <ray.huang@amd.com>
Tested-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 hw/virtio/virtio-pci.c         | 18 ++++++++++++++++++
 include/hw/virtio/virtio-pci.h |  4 ++++
 2 files changed, 22 insertions(+)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index edbc0daa18..da8c9ea12d 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1435,6 +1435,24 @@ static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
     return offset;
 }
 
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
+                           uint8_t bar, uint64_t offset, uint64_t length,
+                           uint8_t id)
+{
+    struct virtio_pci_cap64 cap = {
+        .cap.cap_len = sizeof cap,
+        .cap.cfg_type = VIRTIO_PCI_CAP_SHARED_MEMORY_CFG,
+    };
+
+    cap.cap.bar = bar;
+    cap.cap.length = cpu_to_le32(length);
+    cap.length_hi = cpu_to_le32(length >> 32);
+    cap.cap.offset = cpu_to_le32(offset);
+    cap.offset_hi = cpu_to_le32(offset >> 32);
+    cap.cap.id = id;
+    return virtio_pci_add_mem_cap(proxy, &cap.cap);
+}
+
 static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
                                        unsigned size)
 {
diff --git a/include/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
index ab2051b64b..5a3f182f99 100644
--- a/include/hw/virtio/virtio-pci.h
+++ b/include/hw/virtio/virtio-pci.h
@@ -264,4 +264,8 @@ unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
 void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
                                               int n, bool assign,
                                               bool with_irqfd);
+
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy, uint8_t bar, uint64_t offset,
+                           uint64_t length, uint8_t id);
+
 #endif
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 2/9] virtio-gpu: CONTEXT_INIT feature
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 1/9] virtio: Add shared memory capability Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 3/9] virtio-gpu: hostmem Gurchetan Singh
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

From: Antonio Caggiano <antonio.caggiano@collabora.com>

The feature can be enabled when a backend wants it.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
 hw/display/virtio-gpu-base.c   | 3 +++
 include/hw/virtio/virtio-gpu.h | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index ca1fb7b16f..4f2b0ba1f3 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -232,6 +232,9 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
     if (virtio_gpu_blob_enabled(g->conf)) {
         features |= (1 << VIRTIO_GPU_F_RESOURCE_BLOB);
     }
+    if (virtio_gpu_context_init_enabled(g->conf)) {
+        features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
+    }
 
     return features;
 }
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 390c4642b8..8377c365ef 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -93,6 +93,7 @@ enum virtio_gpu_base_conf_flags {
     VIRTIO_GPU_FLAG_EDID_ENABLED,
     VIRTIO_GPU_FLAG_DMABUF_ENABLED,
     VIRTIO_GPU_FLAG_BLOB_ENABLED,
+    VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
 };
 
 #define virtio_gpu_virgl_enabled(_cfg) \
@@ -105,6 +106,8 @@ enum virtio_gpu_base_conf_flags {
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_DMABUF_ENABLED))
 #define virtio_gpu_blob_enabled(_cfg) \
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
+#define virtio_gpu_context_init_enabled(_cfg) \
+    (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
 
 struct virtio_gpu_base_conf {
     uint32_t max_outputs;
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 3/9] virtio-gpu: hostmem
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 1/9] virtio: Add shared memory capability Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 2/9] virtio-gpu: CONTEXT_INIT feature Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 4/9] virtio-gpu: blob prep Gurchetan Singh
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

From: Gerd Hoffmann <kraxel@redhat.com>

Use VIRTIO_GPU_SHM_ID_HOST_VISIBLE as id for virtio-gpu.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Tested-by: Alyssa Ross <hi@alyssa.is>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/display/virtio-gpu-pci.c    | 14 ++++++++++++++
 hw/display/virtio-gpu.c        |  1 +
 hw/display/virtio-vga.c        | 33 ++++++++++++++++++++++++---------
 include/hw/virtio/virtio-gpu.h |  5 +++++
 4 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index 93f214ff58..da6a99f038 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -33,6 +33,20 @@ static void virtio_gpu_pci_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
     DeviceState *vdev = DEVICE(g);
     int i;
 
+    if (virtio_gpu_hostmem_enabled(g->conf)) {
+        vpci_dev->msix_bar_idx = 1;
+        vpci_dev->modern_mem_bar_idx = 2;
+        memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+                           g->conf.hostmem);
+        pci_register_bar(&vpci_dev->pci_dev, 4,
+                         PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_PREFETCH |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                         &g->hostmem);
+        virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+                               VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+    }
+
     virtio_pci_force_virtio_1(vpci_dev);
     if (!qdev_realize(vdev, BUS(&vpci_dev->bus), errp)) {
         return;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index bbd5c6561a..48ef0d9fad 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1509,6 +1509,7 @@ static Property virtio_gpu_properties[] = {
                      256 * MiB),
     DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
                     VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
+    DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index e6fb0aa876..c8552ff760 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -115,17 +115,32 @@ static void virtio_vga_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
     pci_register_bar(&vpci_dev->pci_dev, 0,
                      PCI_BASE_ADDRESS_MEM_PREFETCH, &vga->vram);
 
-    /*
-     * Configure virtio bar and regions
-     *
-     * We use bar #2 for the mmio regions, to be compatible with stdvga.
-     * virtio regions are moved to the end of bar #2, to make room for
-     * the stdvga mmio registers at the start of bar #2.
-     */
-    vpci_dev->modern_mem_bar_idx = 2;
-    vpci_dev->msix_bar_idx = 4;
     vpci_dev->modern_io_bar_idx = 5;
 
+    if (!virtio_gpu_hostmem_enabled(g->conf)) {
+        /*
+         * Configure virtio bar and regions
+         *
+         * We use bar #2 for the mmio regions, to be compatible with stdvga.
+         * virtio regions are moved to the end of bar #2, to make room for
+         * the stdvga mmio registers at the start of bar #2.
+         */
+        vpci_dev->modern_mem_bar_idx = 2;
+        vpci_dev->msix_bar_idx = 4;
+    } else {
+        vpci_dev->msix_bar_idx = 1;
+        vpci_dev->modern_mem_bar_idx = 2;
+        memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+                           g->conf.hostmem);
+        pci_register_bar(&vpci_dev->pci_dev, 4,
+                         PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_PREFETCH |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                         &g->hostmem);
+        virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+                               VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+    }
+
     if (!(vpci_dev->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ)) {
         /*
          * with page-per-vq=off there is no padding space we can use
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 8377c365ef..de4f624e94 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -108,12 +108,15 @@ enum virtio_gpu_base_conf_flags {
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
 #define virtio_gpu_context_init_enabled(_cfg) \
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
+#define virtio_gpu_hostmem_enabled(_cfg) \
+    (_cfg.hostmem > 0)
 
 struct virtio_gpu_base_conf {
     uint32_t max_outputs;
     uint32_t flags;
     uint32_t xres;
     uint32_t yres;
+    uint64_t hostmem;
 };
 
 struct virtio_gpu_ctrl_command {
@@ -137,6 +140,8 @@ struct VirtIOGPUBase {
     int renderer_blocked;
     int enable;
 
+    MemoryRegion hostmem;
+
     struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
 
     int enabled_output_bitmask;
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 4/9] virtio-gpu: blob prep
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (2 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 3/9] virtio-gpu: hostmem Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 5/9] gfxstream + rutabaga prep: added need defintions, fields, and options Gurchetan Singh
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

From: Antonio Caggiano <antonio.caggiano@collabora.com>

This adds preparatory functions needed to:

     - decode blob cmds
     - tracking iovecs

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
---
 hw/display/virtio-gpu.c              | 10 +++-------
 include/hw/virtio/virtio-gpu-bswap.h | 18 ++++++++++++++++++
 include/hw/virtio/virtio-gpu.h       |  5 +++++
 3 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 48ef0d9fad..3e658f1fef 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -33,15 +33,11 @@
 
 #define VIRTIO_GPU_VM_VERSION 1
 
-static struct virtio_gpu_simple_resource*
-virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
 static struct virtio_gpu_simple_resource *
 virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
                                bool require_backing,
                                const char *caller, uint32_t *error);
 
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
-                                       struct virtio_gpu_simple_resource *res);
 static void virtio_gpu_reset_bh(void *opaque);
 
 void virtio_gpu_update_cursor_data(VirtIOGPU *g,
@@ -116,7 +112,7 @@ static void update_cursor(VirtIOGPU *g, struct virtio_gpu_update_cursor *cursor)
                   cursor->resource_id ? 1 : 0);
 }
 
-static struct virtio_gpu_simple_resource *
+struct virtio_gpu_simple_resource *
 virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id)
 {
     struct virtio_gpu_simple_resource *res;
@@ -904,8 +900,8 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
     g_free(iov);
 }
 
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
-                                       struct virtio_gpu_simple_resource *res)
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+                                struct virtio_gpu_simple_resource *res)
 {
     virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
     res->iov = NULL;
diff --git a/include/hw/virtio/virtio-gpu-bswap.h b/include/hw/virtio/virtio-gpu-bswap.h
index 9124108485..dd1975e2d4 100644
--- a/include/hw/virtio/virtio-gpu-bswap.h
+++ b/include/hw/virtio/virtio-gpu-bswap.h
@@ -63,10 +63,28 @@ virtio_gpu_create_blob_bswap(struct virtio_gpu_resource_create_blob *cblob)
 {
     virtio_gpu_ctrl_hdr_bswap(&cblob->hdr);
     le32_to_cpus(&cblob->resource_id);
+    le32_to_cpus(&cblob->blob_mem);
     le32_to_cpus(&cblob->blob_flags);
+    le32_to_cpus(&cblob->nr_entries);
+    le64_to_cpus(&cblob->blob_id);
     le64_to_cpus(&cblob->size);
 }
 
+static inline void
+virtio_gpu_map_blob_bswap(struct virtio_gpu_resource_map_blob *mblob)
+{
+    virtio_gpu_ctrl_hdr_bswap(&mblob->hdr);
+    le32_to_cpus(&mblob->resource_id);
+    le64_to_cpus(&mblob->offset);
+}
+
+static inline void
+virtio_gpu_unmap_blob_bswap(struct virtio_gpu_resource_unmap_blob *ublob)
+{
+    virtio_gpu_ctrl_hdr_bswap(&ublob->hdr);
+    le32_to_cpus(&ublob->resource_id);
+}
+
 static inline void
 virtio_gpu_scanout_blob_bswap(struct virtio_gpu_set_scanout_blob *ssb)
 {
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index de4f624e94..55973e112f 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -257,6 +257,9 @@ void virtio_gpu_base_fill_display_info(VirtIOGPUBase *g,
 void virtio_gpu_base_generate_edid(VirtIOGPUBase *g, int scanout,
                                    struct virtio_gpu_resp_edid *edid);
 /* virtio-gpu.c */
+struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
+
 void virtio_gpu_ctrl_response(VirtIOGPU *g,
                               struct virtio_gpu_ctrl_command *cmd,
                               struct virtio_gpu_ctrl_hdr *resp,
@@ -275,6 +278,8 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
                                   uint32_t *niov);
 void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
                                     struct iovec *iov, uint32_t count);
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+                                struct virtio_gpu_simple_resource *res);
 void virtio_gpu_process_cmdq(VirtIOGPU *g);
 void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
 void virtio_gpu_reset(VirtIODevice *vdev);
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 5/9] gfxstream + rutabaga prep: added need defintions, fields, and options
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (3 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 4/9] virtio-gpu: blob prep Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

This modifies the common virtio-gpu.h file have the fields and
defintions needed by gfxstream/rutabaga, by VirtioGpuRutabaga.

Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
---
v1: void *rutabaga --> struct rutabaga *rutabaga (Akihiko)
    have a separate rutabaga device instead of using GL device (Bernard)

v2: VirtioGpuRutabaga --> VirtIOGPURutabaga (Akihiko)
    move MemoryRegionInfo into VirtIOGPURutabaga (Akihiko)
    remove 'ctx' field (Akihiko)
    remove 'rutabaga_active'

v6: remove command from commit message, refer to docs instead (Manos)

 include/hw/virtio/virtio-gpu.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 55973e112f..e2a07e68d9 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -38,6 +38,9 @@ OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPUGL, VIRTIO_GPU_GL)
 #define TYPE_VHOST_USER_GPU "vhost-user-gpu"
 OBJECT_DECLARE_SIMPLE_TYPE(VhostUserGPU, VHOST_USER_GPU)
 
+#define TYPE_VIRTIO_GPU_RUTABAGA "virtio-gpu-rutabaga-device"
+OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPURutabaga, VIRTIO_GPU_RUTABAGA)
+
 struct virtio_gpu_simple_resource {
     uint32_t resource_id;
     uint32_t width;
@@ -94,6 +97,7 @@ enum virtio_gpu_base_conf_flags {
     VIRTIO_GPU_FLAG_DMABUF_ENABLED,
     VIRTIO_GPU_FLAG_BLOB_ENABLED,
     VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
+    VIRTIO_GPU_FLAG_RUTABAGA_ENABLED,
 };
 
 #define virtio_gpu_virgl_enabled(_cfg) \
@@ -108,6 +112,8 @@ enum virtio_gpu_base_conf_flags {
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
 #define virtio_gpu_context_init_enabled(_cfg) \
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
+#define virtio_gpu_rutabaga_enabled(_cfg) \
+    (_cfg.flags & (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED))
 #define virtio_gpu_hostmem_enabled(_cfg) \
     (_cfg.hostmem > 0)
 
@@ -232,6 +238,28 @@ struct VhostUserGPU {
     bool backend_blocked;
 };
 
+#define MAX_SLOTS 4096
+
+struct MemoryRegionInfo {
+    int used;
+    MemoryRegion mr;
+    uint32_t resource_id;
+};
+
+struct rutabaga;
+
+struct VirtIOGPURutabaga {
+    struct VirtIOGPU parent_obj;
+
+    struct MemoryRegionInfo memory_regions[MAX_SLOTS];
+    char *capset_names;
+    char *wayland_socket_path;
+    char *wsi;
+    bool headless;
+    uint32_t num_capsets;
+    struct rutabaga *rutabaga;
+};
+
 #define VIRTIO_GPU_FILL_CMD(out) do {                                   \
         size_t s;                                                       \
         s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, 0,          \
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (4 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 5/9] gfxstream + rutabaga prep: added need defintions, fields, and options Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  9:59   ` Akihiko Odaki
  2023-09-13 11:57   ` Bernhard Beschow
  2023-08-23  1:25 ` [PATCH v11 7/9] gfxstream + rutabaga: meson support Gurchetan Singh
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

This adds initial support for gfxstream and cross-domain.  Both
features rely on virtio-gpu blob resources and context types, which
are also implemented in this patch.

gfxstream has a long and illustrious history in Android graphics
paravirtualization.  It has been powering graphics in the Android
Studio Emulator for more than a decade, which is the main developer
platform.

Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
The key design characteristic was a 1:1 threading model and
auto-generation, which fit nicely with the OpenGLES spec.  It also
allowed easy layering with ANGLE on the host, which provides the GLES
implementations on Windows or MacOS enviroments.

gfxstream has traditionally been maintained by a single engineer, and
between 2015 to 2021, the goldfish throne passed to Frank Yang.
Historians often remark this glorious reign ("pax gfxstreama" is the
academic term) was comparable to that of Augustus and both Queen
Elizabeths.  Just to name a few accomplishments in a resplendent
panoply: higher versions of GLES, address space graphics, snapshot
support and CTS compliant Vulkan [b].

One major drawback was the use of out-of-tree goldfish drivers.
Android engineers didn't know much about DRM/KMS and especially TTM so
a simple guest to host pipe was conceived.

Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
It was a symbol compatible replacement of virglrenderer [c] and named
"AVDVirglrenderer".  This implementation forms the basis of the
current gfxstream host implementation still in use today.

cross-domain support follows a similar arc.  Originally conceived by
Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
2018, it initially relied on the downstream "virtio-wl" device.

In 2020 and 2021, virtio-gpu was extended to include blob resources
and multiple timelines by yours truly, features gfxstream/cross-domain
both require to function correctly.

Right now, we stand at the precipice of a truly fantastic possibility:
the Android Emulator powered by upstream QEMU and upstream Linux
kernel.  gfxstream will then be packaged properfully, and app
developers can even fix gfxstream bugs on their own if they encounter
them.

It's been quite the ride, my friends.  Where will gfxstream head next,
nobody really knows.  I wouldn't be surprised if it's around for
another decade, maintained by a new generation of Android graphics
enthusiasts.

Technical details:
  - Very simple initial display integration: just used Pixman
  - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
    calls

Next steps for Android VMs:
  - The next step would be improving display integration and UI interfaces
    with the goal of the QEMU upstream graphics being in an emulator
    release [d].

Next steps for Linux VMs for display virtualization:
  - For widespread distribution, someone needs to package Sommelier or the
    wayland-proxy-virtwl [e] ideally into Debian main. In addition, newer
    versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
    which allows disabling KMS hypercalls.  If anyone cares enough, it'll
    probably be possible to build a custom VM variant that uses this display
    virtualization strategy.

[a] https://android-review.googlesource.com/c/platform/development/+/34470
[b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
[c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
[d] https://developer.android.com/studio/releases/emulator
[e] https://github.com/talex5/wayland-proxy-virtwl

Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
---
v1: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
    - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
    - Used error_report(..)
    - Used g_autofree to fix leaks on error paths
    - Removed unnecessary casts
    - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files

v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau and
    Bernard Berschow:
    - Parenthesis in CHECK macro
    - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
    - delay until g->parent_obj.enable = 1
    - Additional cast fixes
    - initialize directly in virtio_gpu_rutabaga_realize(..)
    - add debug callback to hook into QEMU error's APIs

v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
    - Autodetect Wayland socket when not explicitly specified
    - Fix map_blob error paths
    - Add comment why we need both `res` and `resource` in create blob
    - Cast and whitespace fixes
    - Big endian check comes before virtio_gpu_rutabaga_init().
    - VirtIOVGARUTABAGA --> VirtIOVGARutabaga

v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
    - Double checked all casts
    - Remove unnecessary parenthesis
    - Removed `resource` in create_blob
    - Added comment about failure case
    - Pass user-provided socket as-is
    - Use stack variable rather than heap allocation
    - Future-proofed map info API to give access flags as well

v5: Incorporated feedback from Akihiko Odaki:
    - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
    - Simplify num_capsets check
    - Call cleanup mapping on error paths
    - uint64_t --> void* for rutabaga_map(..)
    - Removed unnecessary parenthesis
    - Removed unnecessary cast
    - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
    - Reuse result variable

v6: Incorporated feedback from Akihiko Odaki:
    - Remove unnecessary #ifndef
    - Disable scanout when appropriate
    - CHECK capset index within range outside loop
    - Add capset_version

v7: Incorporated feedback from Akihiko Odaki:
    - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot

v9: Incorportated feedback from Akihiko Odaki:
    - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
    - Add error_setg(..) after rutabaga_init(..)

v10: Incorportated feedback from Akihiko Odaki:
    - error_setg(..) --> error_setg_errno(..) when appropriate
    - virtio_gpu_rutabaga_init returns a bool instead of an int

v11: Incorportated feedback from Philippe Mathieu-Daudé:
    - C-style /* */ comments and avoid // comments.
    - GPL-2.0 --> GPL-2.0-or-later

 hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
 hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
 hw/display/virtio-vga-rutabaga.c     |   53 ++
 3 files changed, 1224 insertions(+)
 create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
 create mode 100644 hw/display/virtio-gpu-rutabaga.c
 create mode 100644 hw/display/virtio-vga-rutabaga.c

diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
new file mode 100644
index 0000000000..311eff308a
--- /dev/null
+++ b/hw/display/virtio-gpu-pci-rutabaga.c
@@ -0,0 +1,50 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-gpu-pci.h"
+#include "qom/object.h"
+
+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
+typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
+DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI, VIRTIO_GPU_RUTABAGA_PCI,
+                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
+
+struct VirtIOGPURutabagaPCI {
+    VirtIOGPUPCIBase parent_obj;
+    VirtIOGPURutabaga vdev;
+};
+
+static void virtio_gpu_rutabaga_initfn(Object *obj)
+{
+    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VIRTIO_GPU_RUTABAGA);
+    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
+    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
+    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
+    .instance_size = sizeof(VirtIOGPURutabagaPCI),
+    .instance_init = virtio_gpu_rutabaga_initfn,
+};
+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
+module_kconfig(VIRTIO_PCI);
+
+static void virtio_gpu_rutabaga_pci_register_types(void)
+{
+    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
+}
+
+type_init(virtio_gpu_rutabaga_pci_register_types)
+
+module_dep("hw-display-virtio-gpu-pci");
diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
new file mode 100644
index 0000000000..9018e5a702
--- /dev/null
+++ b/hw/display/virtio-gpu-rutabaga.c
@@ -0,0 +1,1121 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "qemu/iov.h"
+#include "trace.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-pixman.h"
+#include "hw/virtio/virtio-iommu.h"
+
+#include <glib/gmem.h>
+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
+
+#define CHECK(condition, cmd)                                                 \
+    do {                                                                      \
+        if (!(condition)) {                                                   \
+            error_report("CHECK failed in %s() %s:" "%d", __func__,           \
+                         __FILE__, __LINE__);                                 \
+            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;                        \
+            return;                                                           \
+       }                                                                      \
+    } while (0)
+
+/*
+ * This is the size of the char array in struct sock_addr_un. No Wayland socket
+ * can be created with a path longer than this, including the null terminator.
+ */
+#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
+
+struct rutabaga_aio_data {
+    struct VirtIOGPURutabaga *vr;
+    struct rutabaga_fence fence;
+};
+
+static void
+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
+                                  uint32_t resource_id)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct rutabaga_transfer transfer = { 0 };
+    struct iovec transfer_iovec;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    res = virtio_gpu_find_resource(g, resource_id);
+    if (!res) {
+        return;
+    }
+
+    if (res->width != s->current_cursor->width ||
+        res->height != s->current_cursor->height) {
+        return;
+    }
+
+    transfer.x = 0;
+    transfer.y = 0;
+    transfer.z = 0;
+    transfer.w = res->width;
+    transfer.h = res->height;
+    transfer.d = 1;
+
+    transfer_iovec.iov_base = s->current_cursor->data;
+    transfer_iovec.iov_len = res->width * res->height * 4;
+
+    rutabaga_resource_transfer_read(vr->rutabaga, 0,
+                                    resource_id, &transfer,
+                                    &transfer_iovec);
+}
+
+static void
+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
+{
+    VirtIOGPU *g = VIRTIO_GPU(b);
+    virtio_gpu_process_cmdq(g);
+}
+
+static void
+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
+                                struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct rutabaga_create_3d rc_3d = { 0 };
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_create_2d c2d;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(c2d);
+    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
+                                       c2d.width, c2d.height);
+
+    rc_3d.target = 2;
+    rc_3d.format = c2d.format;
+    rc_3d.bind = (1 << 1);
+    rc_3d.width = c2d.width;
+    rc_3d.height = c2d.height;
+    rc_3d.depth = 1;
+    rc_3d.array_size = 1;
+    rc_3d.last_level = 0;
+    rc_3d.nr_samples = 0;
+    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
+
+    result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
+    CHECK(!result, cmd);
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    res->width = c2d.width;
+    res->height = c2d.height;
+    res->format = c2d.format;
+    res->resource_id = c2d.resource_id;
+
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
+                                struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct rutabaga_create_3d rc_3d = { 0 };
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_create_3d c3d;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(c3d);
+
+    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
+                                       c3d.width, c3d.height, c3d.depth);
+
+    rc_3d.target = c3d.target;
+    rc_3d.format = c3d.format;
+    rc_3d.bind = c3d.bind;
+    rc_3d.width = c3d.width;
+    rc_3d.height = c3d.height;
+    rc_3d.depth = c3d.depth;
+    rc_3d.array_size = c3d.array_size;
+    rc_3d.last_level = c3d.last_level;
+    rc_3d.nr_samples = c3d.nr_samples;
+    rc_3d.flags = c3d.flags;
+
+    result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
+    CHECK(!result, cmd);
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    res->width = c3d.width;
+    res->height = c3d.height;
+    res->format = c3d.format;
+    res->resource_id = c3d.resource_id;
+
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+rutabaga_cmd_resource_unref(VirtIOGPU *g,
+                            struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_unref unref;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(unref);
+
+    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
+
+    res = virtio_gpu_find_resource(g, unref.resource_id);
+    CHECK(res, cmd);
+
+    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
+    CHECK(!result, cmd);
+
+    if (res->image) {
+        pixman_image_unref(res->image);
+    }
+
+    QTAILQ_REMOVE(&g->reslist, res, next);
+    g_free(res);
+}
+
+static void
+rutabaga_cmd_context_create(VirtIOGPU *g,
+                            struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_ctx_create cc;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(cc);
+    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
+                                    cc.debug_name);
+
+    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
+                                     cc.context_init, cc.debug_name, cc.nlen);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_context_destroy(VirtIOGPU *g,
+                             struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_ctx_destroy cd;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(cd);
+    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
+
+    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result, i;
+    struct virtio_gpu_scanout *scanout = NULL;
+    struct virtio_gpu_simple_resource *res;
+    struct rutabaga_transfer transfer = { 0 };
+    struct iovec transfer_iovec;
+    struct virtio_gpu_resource_flush rf;
+    bool found = false;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+    if (vr->headless) {
+        return;
+    }
+
+    VIRTIO_GPU_FILL_CMD(rf);
+    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
+                                   rf.r.width, rf.r.height, rf.r.x, rf.r.y);
+
+    res = virtio_gpu_find_resource(g, rf.resource_id);
+    CHECK(res, cmd);
+
+    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
+        scanout = &g->parent_obj.scanout[i];
+        if (i == res->scanout_bitmask) {
+            found = true;
+            break;
+        }
+    }
+
+    if (!found) {
+        return;
+    }
+
+    transfer.x = 0;
+    transfer.y = 0;
+    transfer.z = 0;
+    transfer.w = res->width;
+    transfer.h = res->height;
+    transfer.d = 1;
+
+    transfer_iovec.iov_base = pixman_image_get_data(res->image);
+    transfer_iovec.iov_len = res->width * res->height * 4;
+
+    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
+                                             rf.resource_id, &transfer,
+                                             &transfer_iovec);
+    CHECK(!result, cmd);
+    dpy_gfx_update_full(scanout->con);
+}
+
+static void
+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_scanout *scanout = NULL;
+    struct virtio_gpu_set_scanout ss;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+    if (vr->headless) {
+        return;
+    }
+
+    VIRTIO_GPU_FILL_CMD(ss);
+    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
+                                     ss.r.width, ss.r.height, ss.r.x, ss.r.y);
+
+    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
+    scanout = &g->parent_obj.scanout[ss.scanout_id];
+
+    if (ss.resource_id == 0) {
+        dpy_gfx_replace_surface(scanout->con, NULL);
+        dpy_gl_scanout_disable(scanout->con);
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, ss.resource_id);
+    CHECK(res, cmd);
+
+    if (!res->image) {
+        pixman_format_code_t pformat;
+        pformat = virtio_gpu_get_pixman_format(res->format);
+        CHECK(pformat, cmd);
+
+        res->image = pixman_image_create_bits(pformat,
+                                              res->width,
+                                              res->height,
+                                              NULL, 0);
+        CHECK(res->image, cmd);
+        pixman_image_ref(res->image);
+    }
+
+    g->parent_obj.enable = 1;
+
+    /* realloc the surface ptr */
+    scanout->ds = qemu_create_displaysurface_pixman(res->image);
+    dpy_gfx_replace_surface(scanout->con, NULL);
+    dpy_gfx_replace_surface(scanout->con, scanout->ds);
+    res->scanout_bitmask = ss.scanout_id;
+}
+
+static void
+rutabaga_cmd_submit_3d(VirtIOGPU *g,
+                       struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_cmd_submit cs;
+    struct rutabaga_command rutabaga_cmd = { 0 };
+    g_autofree uint8_t *buf = NULL;
+    size_t s;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(cs);
+    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
+
+    buf = g_new0(uint8_t, cs.size);
+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
+                   sizeof(cs), buf, cs.size);
+    CHECK(s == cs.size, cmd);
+
+    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
+    rutabaga_cmd.cmd = buf;
+    rutabaga_cmd.cmd_size = cs.size;
+
+    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
+                                 struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct rutabaga_transfer transfer = { 0 };
+    struct virtio_gpu_transfer_to_host_2d t2d;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(t2d);
+    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
+
+    transfer.x = t2d.r.x;
+    transfer.y = t2d.r.y;
+    transfer.z = 0;
+    transfer.w = t2d.r.width;
+    transfer.h = t2d.r.height;
+    transfer.d = 1;
+
+    result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
+                                              &transfer);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
+                                 struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct rutabaga_transfer transfer = { 0 };
+    struct virtio_gpu_transfer_host_3d t3d;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(t3d);
+    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
+
+    transfer.x = t3d.box.x;
+    transfer.y = t3d.box.y;
+    transfer.z = t3d.box.z;
+    transfer.w = t3d.box.w;
+    transfer.h = t3d.box.h;
+    transfer.d = t3d.box.d;
+    transfer.level = t3d.level;
+    transfer.stride = t3d.stride;
+    transfer.layer_stride = t3d.layer_stride;
+    transfer.offset = t3d.offset;
+
+    result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
+                                              t3d.resource_id, &transfer);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
+                                   struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct rutabaga_transfer transfer = { 0 };
+    struct virtio_gpu_transfer_host_3d t3d;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(t3d);
+    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
+
+    transfer.x = t3d.box.x;
+    transfer.y = t3d.box.y;
+    transfer.z = t3d.box.z;
+    transfer.w = t3d.box.w;
+    transfer.h = t3d.box.h;
+    transfer.d = t3d.box.d;
+    transfer.level = t3d.level;
+    transfer.stride = t3d.stride;
+    transfer.layer_stride = t3d.layer_stride;
+    transfer.offset = t3d.offset;
+
+    result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
+                                             t3d.resource_id, &transfer, NULL);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    struct rutabaga_iovecs vecs = { 0 };
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_attach_backing att_rb;
+    int ret;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(att_rb);
+    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
+
+    res = virtio_gpu_find_resource(g, att_rb.resource_id);
+    CHECK(res, cmd);
+    CHECK(!res->iov, cmd);
+
+    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
+                                        cmd, NULL, &res->iov, &res->iov_cnt);
+    CHECK(!ret, cmd);
+
+    vecs.iovecs = res->iov;
+    vecs.num_iovecs = res->iov_cnt;
+
+    ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
+                                           &vecs);
+    if (ret != 0) {
+        virtio_gpu_cleanup_mapping(g, res);
+    }
+
+    CHECK(!ret, cmd);
+}
+
+static void
+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_detach_backing detach_rb;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(detach_rb);
+    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
+
+    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
+    CHECK(res, cmd);
+
+    rutabaga_resource_detach_backing(vr->rutabaga,
+                                     detach_rb.resource_id);
+
+    virtio_gpu_cleanup_mapping(g, res);
+}
+
+static void
+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
+                                 struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_ctx_resource att_res;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(att_res);
+    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
+                                        att_res.resource_id);
+
+    result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
+                                              att_res.resource_id);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
+                                 struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_ctx_resource det_res;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(det_res);
+    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
+                                        det_res.resource_id);
+
+    result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
+                                              det_res.resource_id);
+    CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_get_capset_info info;
+    struct virtio_gpu_resp_capset_info resp;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(info);
+
+    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
+                                      &resp.capset_id, &resp.capset_max_version,
+                                      &resp.capset_max_size);
+    CHECK(!result, cmd);
+
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    struct virtio_gpu_get_capset gc;
+    struct virtio_gpu_resp_capset *resp;
+    uint32_t capset_size, capset_version;
+    uint32_t current_id, i;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(gc);
+    for (i = 0; i < vr->num_capsets; i++) {
+        result = rutabaga_get_capset_info(vr->rutabaga, i,
+                                          &current_id, &capset_version,
+                                          &capset_size);
+        CHECK(!result, cmd);
+
+        if (current_id == gc.capset_id) {
+            break;
+        }
+    }
+
+    CHECK(i < vr->num_capsets, cmd);
+
+    resp = g_malloc0(sizeof(*resp) + capset_size);
+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
+    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
+                        resp->capset_data, capset_size);
+
+    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
+    g_free(resp);
+}
+
+static void
+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
+                                  struct virtio_gpu_ctrl_command *cmd)
+{
+    int result;
+    struct rutabaga_iovecs vecs = { 0 };
+    g_autofree struct virtio_gpu_simple_resource *res = NULL;
+    struct virtio_gpu_resource_create_blob cblob;
+    struct rutabaga_create_blob rc_blob = { 0 };
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(cblob);
+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+    CHECK(cblob.resource_id != 0, cmd);
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+
+    res->resource_id = cblob.resource_id;
+    res->blob_size = cblob.size;
+
+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
+                                               sizeof(cblob), cmd, &res->addrs,
+                                               &res->iov, &res->iov_cnt);
+        CHECK(!result, cmd);
+    }
+
+    rc_blob.blob_id = cblob.blob_id;
+    rc_blob.blob_mem = cblob.blob_mem;
+    rc_blob.blob_flags = cblob.blob_flags;
+    rc_blob.size = cblob.size;
+
+    vecs.iovecs = res->iov;
+    vecs.num_iovecs = res->iov_cnt;
+
+    result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
+                                           cblob.resource_id, &rc_blob, &vecs,
+                                           NULL);
+
+    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+        virtio_gpu_cleanup_mapping(g, res);
+    }
+
+    CHECK(!result, cmd);
+
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+    res = NULL;
+}
+
+static void
+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
+                               struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    uint32_t map_info = 0;
+    uint32_t slot = 0;
+    struct virtio_gpu_simple_resource *res;
+    struct rutabaga_mapping mapping = { 0 };
+    struct virtio_gpu_resource_map_blob mblob;
+    struct virtio_gpu_resp_map_info resp = { 0 };
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(mblob);
+
+    CHECK(mblob.resource_id != 0, cmd);
+
+    res = virtio_gpu_find_resource(g, mblob.resource_id);
+    CHECK(res, cmd);
+
+    result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
+                                        &map_info);
+    CHECK(!result, cmd);
+
+    /*
+     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu spec, but do
+     * exist to potentially allow the hypervisor to restrict write access to
+     * memory. QEMU does not need to use this functionality at the moment.
+     */
+    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
+
+    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
+    CHECK(!result, cmd);
+
+    for (slot = 0; slot < MAX_SLOTS; slot++) {
+        if (vr->memory_regions[slot].used) {
+            continue;
+        }
+
+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
+        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
+                                   mapping.ptr);
+        memory_region_add_subregion(&g->parent_obj.hostmem,
+                                    mblob.offset, mr);
+        vr->memory_regions[slot].resource_id = mblob.resource_id;
+        vr->memory_regions[slot].used = 1;
+        break;
+    }
+
+    if (slot >= MAX_SLOTS) {
+        result = rutabaga_resource_unmap(vr->rutabaga, mblob.resource_id);
+        CHECK(!result, cmd);
+    }
+
+    CHECK(slot < MAX_SLOTS, cmd);
+
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
+                                 struct virtio_gpu_ctrl_command *cmd)
+{
+    int32_t result;
+    uint32_t slot = 0;
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_unmap_blob ublob;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(ublob);
+
+    CHECK(ublob.resource_id != 0, cmd);
+
+    res = virtio_gpu_find_resource(g, ublob.resource_id);
+    CHECK(res, cmd);
+
+    for (slot = 0; slot < MAX_SLOTS; slot++) {
+        if (vr->memory_regions[slot].resource_id != ublob.resource_id) {
+            continue;
+        }
+
+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
+        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
+
+        vr->memory_regions[slot].resource_id = 0;
+        vr->memory_regions[slot].used = 0;
+        break;
+    }
+
+    CHECK(slot < MAX_SLOTS, cmd);
+    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
+    CHECK(!result, cmd);
+}
+
+static void
+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
+                                struct virtio_gpu_ctrl_command *cmd)
+{
+    struct rutabaga_fence fence = { 0 };
+    int32_t result;
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
+
+    switch (cmd->cmd_hdr.type) {
+    case VIRTIO_GPU_CMD_CTX_CREATE:
+        rutabaga_cmd_context_create(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_CTX_DESTROY:
+        rutabaga_cmd_context_destroy(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
+        rutabaga_cmd_create_resource_2d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
+        rutabaga_cmd_create_resource_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_SUBMIT_3D:
+        rutabaga_cmd_submit_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
+        rutabaga_cmd_transfer_to_host_2d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
+        rutabaga_cmd_transfer_to_host_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
+        rutabaga_cmd_transfer_from_host_3d(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
+        rutabaga_cmd_attach_backing(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
+        rutabaga_cmd_detach_backing(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_SET_SCANOUT:
+        rutabaga_cmd_set_scanout(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
+        rutabaga_cmd_resource_flush(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
+        rutabaga_cmd_resource_unref(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
+        rutabaga_cmd_ctx_attach_resource(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
+        rutabaga_cmd_ctx_detach_resource(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
+        rutabaga_cmd_get_capset_info(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_CAPSET:
+        rutabaga_cmd_get_capset(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
+        virtio_gpu_get_display_info(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_GET_EDID:
+        virtio_gpu_get_edid(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+        rutabaga_cmd_resource_create_blob(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+        rutabaga_cmd_resource_map_blob(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+        rutabaga_cmd_resource_unmap_blob(g, cmd);
+        break;
+    default:
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+        break;
+    }
+
+    if (cmd->finished) {
+        return;
+    }
+    if (cmd->error) {
+        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
+                     cmd->cmd_hdr.type, cmd->error);
+        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
+        return;
+    }
+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
+        virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+        return;
+    }
+
+    fence.flags = cmd->cmd_hdr.flags;
+    fence.ctx_id = cmd->cmd_hdr.ctx_id;
+    fence.fence_id = cmd->cmd_hdr.fence_id;
+    fence.ring_idx = cmd->cmd_hdr.ring_idx;
+
+    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
+
+    result = rutabaga_create_fence(vr->rutabaga, &fence);
+    CHECK(!result, cmd);
+}
+
+static void
+virtio_gpu_rutabaga_aio_cb(void *opaque)
+{
+    struct rutabaga_aio_data *data = opaque;
+    VirtIOGPU *g = VIRTIO_GPU(data->vr);
+    struct rutabaga_fence fence_data = data->fence;
+    struct virtio_gpu_ctrl_command *cmd, *tmp;
+
+    uint32_t signaled_ctx_specific = fence_data.flags &
+                                     RUTABAGA_FLAG_INFO_RING_IDX;
+
+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
+        /*
+         * Due to context specific timelines.
+         */
+        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
+                                       RUTABAGA_FLAG_INFO_RING_IDX;
+
+        if (signaled_ctx_specific != target_ctx_specific) {
+            continue;
+        }
+
+        if (signaled_ctx_specific &&
+           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
+            continue;
+        }
+
+        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
+            continue;
+        }
+
+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
+        virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
+        g_free(cmd);
+    }
+
+    g_free(data);
+}
+
+static void
+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
+                             const struct rutabaga_fence *fence) {
+    struct rutabaga_aio_data *data;
+    VirtIOGPU *g = (VirtIOGPU *)user_data;
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    /*
+     * gfxstream and both cross-domain (and even newer versions virglrenderer:
+     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence completion on
+     * threads ("callback threads") that are different from the thread that
+     * processes the command queue ("main thread").
+     *
+     * crosvm and other virtio-gpu 1.1 implementations enable callback threads
+     * via locking.  However, on QEMU a deadlock is observed if
+     * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is used
+     * from a thread that is not the main thread.
+     *
+     * The reason is QEMU's internal locking is designed to work with QEMU
+     * threads (see rcu_register_thread()) and not generic C/C++/Rust threads.
+     * For now, we can workaround this by scheduling the return of the
+     * fence descriptors on the main thread.
+     */
+
+    data = g_new0(struct rutabaga_aio_data, 1);
+    data->vr = vr;
+    data->fence = *fence;
+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
+                            virtio_gpu_rutabaga_aio_cb,
+                            data);
+}
+
+static void
+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
+                             const struct rutabaga_debug *debug) {
+
+    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
+        error_report("%s", debug->message);
+    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
+        warn_report("%s", debug->message);
+    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
+        info_report("%s", debug->message);
+    }
+}
+
+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
+{
+    int result;
+    uint64_t capset_mask;
+    struct rutabaga_builder builder = { 0 };
+    char wayland_socket_path[UNIX_PATH_MAX];
+    struct rutabaga_channel channel = { 0 };
+    struct rutabaga_channels channels = { 0 };
+
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+    vr->rutabaga = NULL;
+
+    if (!vr->capset_names) {
+        error_setg(errp, "a capset name from the virtio-gpu spec is required");
+        return false;
+    }
+
+    builder.wsi = RUTABAGA_WSI_SURFACELESS;
+    /*
+     * Currently, if WSI is specified, the only valid strings are "surfaceless"
+     * or "headless".  Surfaceless doesn't create a native window surface, but
+     * does copy from the render target to the Pixman buffer if a virtio-gpu
+     * 2D hypercall is issued.  Surfacless is the default.
+     *
+     * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
+     * use case is automated testing environments where there is no need to view
+     * results.
+     *
+     * In the future, more performant virtio-gpu 2D UI integration may be added.
+     */
+    if (vr->wsi) {
+        if (g_str_equal(vr->wsi, "surfaceless")) {
+            vr->headless = false;
+        } else if (g_str_equal(vr->wsi, "headless")) {
+            vr->headless = true;
+        } else {
+            error_setg(errp, "invalid wsi option selected");
+            return false;
+        }
+    }
+
+    result = rutabaga_calculate_capset_mask(vr->capset_names, &capset_mask);
+    if (result) {
+        error_setg_errno(errp, -result, "invalid capset names: %s",
+                         vr->capset_names);
+        return false;
+    }
+
+    builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
+    builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
+    builder.capset_mask = capset_mask;
+    builder.user_data = (uint64_t)g;
+
+    /*
+     * If the user doesn't specify the wayland socket path, we try to infer
+     * the socket via a process similar to the one used by libwayland.
+     * libwayland does the following:
+     *
+     * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
+     *    $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
+     * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
+     * 3) Otherwise, don't pass a wayland socket to rutabaga. If a guest
+     *    wayland proxy is launched, it will fail to work.
+     */
+    channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
+    if (!vr->wayland_socket_path) {
+        const char *runtime_dir = getenv("XDG_RUNTIME_DIR");
+        const char *display = getenv("WAYLAND_DISPLAY");
+        if (!display) {
+            display = "wayland-0";
+        }
+
+        if (runtime_dir) {
+            result = snprintf(wayland_socket_path, UNIX_PATH_MAX,
+                              "%s/%s", runtime_dir, display);
+            if (result > 0 && result < UNIX_PATH_MAX) {
+                channel.channel_name = wayland_socket_path;
+            }
+        }
+    } else {
+        channel.channel_name = vr->wayland_socket_path;
+    }
+
+    if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
+        if (channel.channel_name) {
+            channels.channels = &channel;
+            channels.num_channels = 1;
+            builder.channels = &channels;
+        }
+    }
+
+    result = rutabaga_init(&builder, &vr->rutabaga);
+    if (result) {
+        error_setg_errno(errp, -result, "Failed to init rutabaga");
+        return result;
+    }
+
+    return true;
+}
+
+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
+{
+    int result;
+    uint32_t num_capsets;
+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+    result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
+    if (result) {
+        error_report("Failed to get capsets");
+        return 0;
+    }
+    vr->num_capsets = num_capsets;
+    return num_capsets;
+}
+
+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOGPU *g = VIRTIO_GPU(vdev);
+    struct virtio_gpu_ctrl_command *cmd;
+
+    if (!virtio_queue_ready(vq)) {
+        return;
+    }
+
+    cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
+    while (cmd) {
+        cmd->vq = vq;
+        cmd->error = 0;
+        cmd->finished = false;
+        QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
+        cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
+    }
+
+    virtio_gpu_process_cmdq(g);
+}
+
+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
+{
+    int num_capsets;
+    VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
+    VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
+
+#if HOST_BIG_ENDIAN
+    error_setg(errp, "rutabaga is not supported on bigendian platforms");
+    return;
+#endif
+
+    if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
+        return;
+    }
+
+    num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
+    if (!num_capsets) {
+        return;
+    }
+
+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
+
+    bdev->virtio_config.num_capsets = num_capsets;
+    virtio_gpu_device_realize(qdev, errp);
+}
+
+static Property virtio_gpu_rutabaga_properties[] = {
+    DEFINE_PROP_STRING("capset_names", VirtIOGPURutabaga, capset_names),
+    DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
+                       wayland_socket_path),
+    DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+    VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
+    VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
+
+    vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
+    vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
+    vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
+    vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
+
+    vdc->realize = virtio_gpu_rutabaga_realize;
+    device_class_set_props(dc, virtio_gpu_rutabaga_properties);
+}
+
+static const TypeInfo virtio_gpu_rutabaga_info = {
+    .name = TYPE_VIRTIO_GPU_RUTABAGA,
+    .parent = TYPE_VIRTIO_GPU,
+    .instance_size = sizeof(VirtIOGPURutabaga),
+    .class_init = virtio_gpu_rutabaga_class_init,
+};
+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
+module_kconfig(VIRTIO_GPU);
+
+static void virtio_register_types(void)
+{
+    type_register_static(&virtio_gpu_rutabaga_info);
+}
+
+type_init(virtio_register_types)
+
+module_dep("hw-display-virtio-gpu");
diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
new file mode 100644
index 0000000000..b5b43e3b90
--- /dev/null
+++ b/hw/display/virtio-vga-rutabaga.c
@@ -0,0 +1,53 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/display/vga.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "virtio-vga.h"
+#include "qom/object.h"
+
+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
+
+typedef struct VirtIOVGARutabaga VirtIOVGARutabaga;
+DECLARE_INSTANCE_CHECKER(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA,
+                         TYPE_VIRTIO_VGA_RUTABAGA)
+
+struct VirtIOVGARutabaga {
+    VirtIOVGABase parent_obj;
+    VirtIOGPURutabaga vdev;
+};
+
+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
+{
+    VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VIRTIO_GPU_RUTABAGA);
+    VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
+    .generic_name  = TYPE_VIRTIO_VGA_RUTABAGA,
+    .parent        = TYPE_VIRTIO_VGA_BASE,
+    .instance_size = sizeof(VirtIOVGARutabaga),
+    .instance_init = virtio_vga_rutabaga_inst_initfn,
+};
+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
+module_kconfig(VIRTIO_VGA);
+
+static void virtio_vga_register_types(void)
+{
+    if (have_vga) {
+        virtio_pci_types_register(&virtio_vga_rutabaga_info);
+    }
+}
+
+type_init(virtio_vga_register_types)
+
+module_dep("hw-display-virtio-vga");
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 7/9] gfxstream + rutabaga: meson support
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (5 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 8/9] gfxstream + rutabaga: enable rutabaga Gurchetan Singh
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

- Add meson detection of rutabaga_gfx
- Build virtio-gpu-rutabaga.c + associated vga/pci files when
  present

Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
---
v3: Fix alignment issues (Akihiko)

 hw/display/meson.build        | 22 ++++++++++++++++++++++
 meson.build                   |  7 +++++++
 meson_options.txt             |  2 ++
 scripts/meson-buildoptions.sh |  3 +++
 4 files changed, 34 insertions(+)

diff --git a/hw/display/meson.build b/hw/display/meson.build
index 413ba4ab24..e362d625dd 100644
--- a/hw/display/meson.build
+++ b/hw/display/meson.build
@@ -79,6 +79,13 @@ if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
                          if_true: [files('virtio-gpu-gl.c', 'virtio-gpu-virgl.c'), pixman, virgl])
     hw_display_modules += {'virtio-gpu-gl': virtio_gpu_gl_ss}
   endif
+
+  if rutabaga.found()
+    virtio_gpu_rutabaga_ss = ss.source_set()
+    virtio_gpu_rutabaga_ss.add(when: ['CONFIG_VIRTIO_GPU', rutabaga],
+                               if_true: [files('virtio-gpu-rutabaga.c'), pixman])
+    hw_display_modules += {'virtio-gpu-rutabaga': virtio_gpu_rutabaga_ss}
+  endif
 endif
 
 if config_all_devices.has_key('CONFIG_VIRTIO_PCI')
@@ -95,6 +102,12 @@ if config_all_devices.has_key('CONFIG_VIRTIO_PCI')
                              if_true: [files('virtio-gpu-pci-gl.c'), pixman])
     hw_display_modules += {'virtio-gpu-pci-gl': virtio_gpu_pci_gl_ss}
   endif
+  if rutabaga.found()
+    virtio_gpu_pci_rutabaga_ss = ss.source_set()
+    virtio_gpu_pci_rutabaga_ss.add(when: ['CONFIG_VIRTIO_GPU', 'CONFIG_VIRTIO_PCI', rutabaga],
+                                   if_true: [files('virtio-gpu-pci-rutabaga.c'), pixman])
+    hw_display_modules += {'virtio-gpu-pci-rutabaga': virtio_gpu_pci_rutabaga_ss}
+  endif
 endif
 
 if config_all_devices.has_key('CONFIG_VIRTIO_VGA')
@@ -113,6 +126,15 @@ if config_all_devices.has_key('CONFIG_VIRTIO_VGA')
   virtio_vga_gl_ss.add(when: 'CONFIG_ACPI', if_true: files('acpi-vga.c'),
                                             if_false: files('acpi-vga-stub.c'))
   hw_display_modules += {'virtio-vga-gl': virtio_vga_gl_ss}
+
+  if rutabaga.found()
+    virtio_vga_rutabaga_ss = ss.source_set()
+    virtio_vga_rutabaga_ss.add(when: ['CONFIG_VIRTIO_VGA', rutabaga],
+                               if_true: [files('virtio-vga-rutabaga.c'), pixman])
+    virtio_vga_rutabaga_ss.add(when: 'CONFIG_ACPI', if_true: files('acpi-vga.c'),
+                                                    if_false: files('acpi-vga-stub.c'))
+    hw_display_modules += {'virtio-vga-rutabaga': virtio_vga_rutabaga_ss}
+  endif
 endif
 
 system_ss.add(when: 'CONFIG_OMAP', if_true: files('omap_lcdc.c'))
diff --git a/meson.build b/meson.build
index 98e68ef0b1..293f388e53 100644
--- a/meson.build
+++ b/meson.build
@@ -1069,6 +1069,12 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
                                        dependencies: virgl))
   endif
 endif
+rutabaga = not_found
+if not get_option('rutabaga_gfx').auto() or have_system or have_vhost_user_gpu
+  rutabaga = dependency('rutabaga_gfx_ffi',
+                         method: 'pkg-config',
+                         required: get_option('rutabaga_gfx'))
+endif
 blkio = not_found
 if not get_option('blkio').auto() or have_block
   blkio = dependency('blkio',
@@ -4272,6 +4278,7 @@ summary_info += {'libtasn1':          tasn1}
 summary_info += {'PAM':               pam}
 summary_info += {'iconv support':     iconv}
 summary_info += {'virgl support':     virgl}
+summary_info += {'rutabaga support':  rutabaga}
 summary_info += {'blkio support':     blkio}
 summary_info += {'curl support':      curl}
 summary_info += {'Multipath support': mpathpersist}
diff --git a/meson_options.txt b/meson_options.txt
index aaea5ddd77..dea3bf7d9c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -224,6 +224,8 @@ option('vmnet', type : 'feature', value : 'auto',
        description: 'vmnet.framework network backend support')
 option('virglrenderer', type : 'feature', value : 'auto',
        description: 'virgl rendering support')
+option('rutabaga_gfx', type : 'feature', value : 'auto',
+       description: 'rutabaga_gfx support')
 option('png', type : 'feature', value : 'auto',
        description: 'PNG support with libpng')
 option('vnc', type : 'feature', value : 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 9da3fe299b..9a95b4f782 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -154,6 +154,7 @@ meson_options_help() {
   printf "%s\n" '  rbd             Ceph block device driver'
   printf "%s\n" '  rdma            Enable RDMA-based migration'
   printf "%s\n" '  replication     replication support'
+  printf "%s\n" '  rutabaga-gfx    rutabaga_gfx support'
   printf "%s\n" '  sdl             SDL user interface'
   printf "%s\n" '  sdl-image       SDL Image support for icons'
   printf "%s\n" '  seccomp         seccomp support'
@@ -419,6 +420,8 @@ _meson_option_parse() {
     --disable-replication) printf "%s" -Dreplication=disabled ;;
     --enable-rng-none) printf "%s" -Drng_none=true ;;
     --disable-rng-none) printf "%s" -Drng_none=false ;;
+    --enable-rutabaga-gfx) printf "%s" -Drutabaga_gfx=enabled ;;
+    --disable-rutabaga-gfx) printf "%s" -Drutabaga_gfx=disabled ;;
     --enable-safe-stack) printf "%s" -Dsafe_stack=true ;;
     --disable-safe-stack) printf "%s" -Dsafe_stack=false ;;
     --enable-sanitizers) printf "%s" -Dsanitizers=true ;;
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 8/9] gfxstream + rutabaga: enable rutabaga
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (6 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 7/9] gfxstream + rutabaga: meson support Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23  1:25 ` [PATCH v11 9/9] docs/system: add basic virtio-gpu documentation Gurchetan Singh
  2023-08-23 11:07 ` [PATCH v11 0/9] rutabaga_gfx + gfxstream Alyssa Ross
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

This change enables rutabaga to receive virtio-gpu-3d hypercalls
when it is active.

Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
---
v3: Whitespace fix (Akihiko)
v9: reorder virtio_gpu_have_udmabuf() after checking if rutabaga
    is enabled to avoid spurious warnings (Akihiko)

 hw/display/virtio-gpu-base.c | 3 ++-
 hw/display/virtio-gpu.c      | 5 +++--
 softmmu/qdev-monitor.c       | 3 +++
 softmmu/vl.c                 | 1 +
 4 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 4f2b0ba1f3..50c5373b65 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -223,7 +223,8 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
 {
     VirtIOGPUBase *g = VIRTIO_GPU_BASE(vdev);
 
-    if (virtio_gpu_virgl_enabled(g->conf)) {
+    if (virtio_gpu_virgl_enabled(g->conf) ||
+        virtio_gpu_rutabaga_enabled(g->conf)) {
         features |= (1 << VIRTIO_GPU_F_VIRGL);
     }
     if (virtio_gpu_edid_enabled(g->conf)) {
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 3e658f1fef..fe094addef 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1361,8 +1361,9 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
     VirtIOGPU *g = VIRTIO_GPU(qdev);
 
     if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
-        if (!virtio_gpu_have_udmabuf()) {
-            error_setg(errp, "cannot enable blob resources without udmabuf");
+        if (!virtio_gpu_rutabaga_enabled(g->parent_obj.conf) &&
+            !virtio_gpu_have_udmabuf()) {
+            error_setg(errp, "need rutabaga or udmabuf for blob resources");
             return;
         }
 
diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
index 74f4e41338..1b8005ae55 100644
--- a/softmmu/qdev-monitor.c
+++ b/softmmu/qdev-monitor.c
@@ -86,6 +86,9 @@ static const QDevAlias qdev_alias_table[] = {
     { "virtio-gpu-pci", "virtio-gpu", QEMU_ARCH_VIRTIO_PCI },
     { "virtio-gpu-gl-device", "virtio-gpu-gl", QEMU_ARCH_VIRTIO_MMIO },
     { "virtio-gpu-gl-pci", "virtio-gpu-gl", QEMU_ARCH_VIRTIO_PCI },
+    { "virtio-gpu-rutabaga-device", "virtio-gpu-rutabaga",
+      QEMU_ARCH_VIRTIO_MMIO },
+    { "virtio-gpu-rutabaga-pci", "virtio-gpu-rutabaga", QEMU_ARCH_VIRTIO_PCI },
     { "virtio-input-host-device", "virtio-input-host", QEMU_ARCH_VIRTIO_MMIO },
     { "virtio-input-host-ccw", "virtio-input-host", QEMU_ARCH_VIRTIO_CCW },
     { "virtio-input-host-pci", "virtio-input-host", QEMU_ARCH_VIRTIO_PCI },
diff --git a/softmmu/vl.c b/softmmu/vl.c
index b0b96f67fa..2f98eefdf3 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -216,6 +216,7 @@ static struct {
     { .driver = "ati-vga",              .flag = &default_vga       },
     { .driver = "vhost-user-vga",       .flag = &default_vga       },
     { .driver = "virtio-vga-gl",        .flag = &default_vga       },
+    { .driver = "virtio-vga-rutabaga",  .flag = &default_vga       },
 };
 
 static QemuOptsList qemu_rtc_opts = {
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v11 9/9] docs/system: add basic virtio-gpu documentation
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (7 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 8/9] gfxstream + rutabaga: enable rutabaga Gurchetan Singh
@ 2023-08-23  1:25 ` Gurchetan Singh
  2023-08-23 11:07 ` [PATCH v11 0/9] rutabaga_gfx + gfxstream Alyssa Ross
  9 siblings, 0 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-23  1:25 UTC (permalink / raw)
  To: qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	hi, ernunes, manos.pitsidianakis, philmd

This adds basic documentation for virtio-gpu.

Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
---
v2: - Incorporated suggestions by Akihiko Odaki
    - Listed the currently supported capset_names (Bernard)

v3: - Incorporated suggestions by Akihiko Odaki and Alyssa Ross

v4: - Incorporated suggestions by Akihiko Odaki

v5: - Removed pci suffix from examples
    - Verified that -device virtio-gpu-rutabaga works.  Strangely
      enough, I don't remember changing anything, and I remember
      it not working.  I did rebase to top of tree though.
    - Fixed meson examples in crosvm docs

v8: - Remove different links for "rutabaga_gfx" and
      "gfxstream-enabled rutabaga" (Akihiko)

v11: - GPL-2.0-or-later license (Phillippe)

 docs/system/device-emulation.rst   |   1 +
 docs/system/devices/virtio-gpu.rst | 112 +++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)
 create mode 100644 docs/system/devices/virtio-gpu.rst

diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 4491c4cbf7..1167f3a9f2 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -91,6 +91,7 @@ Emulated Devices
    devices/nvme.rst
    devices/usb.rst
    devices/vhost-user.rst
+   devices/virtio-gpu.rst
    devices/virtio-pmem.rst
    devices/vhost-user-rng.rst
    devices/canokey.rst
diff --git a/docs/system/devices/virtio-gpu.rst b/docs/system/devices/virtio-gpu.rst
new file mode 100644
index 0000000000..21465e4ce2
--- /dev/null
+++ b/docs/system/devices/virtio-gpu.rst
@@ -0,0 +1,112 @@
+..
+   SPDX-License-Identifier: GPL-2.0-or-later
+
+virtio-gpu
+==========
+
+This document explains the setup and usage of the virtio-gpu device.
+The virtio-gpu device paravirtualizes the GPU and display controller.
+
+Linux kernel support
+--------------------
+
+virtio-gpu requires a guest Linux kernel built with the
+``CONFIG_DRM_VIRTIO_GPU`` option.
+
+QEMU virtio-gpu variants
+------------------------
+
+QEMU virtio-gpu device variants come in the following form:
+
+ * ``virtio-vga[-BACKEND]``
+ * ``virtio-gpu[-BACKEND][-INTERFACE]``
+ * ``vhost-user-vga``
+ * ``vhost-user-pci``
+
+**Backends:** QEMU provides a 2D virtio-gpu backend, and two accelerated
+backends: virglrenderer ('gl' device label) and rutabaga_gfx ('rutabaga'
+device label).  There is a vhost-user backend that runs the graphics stack
+in a separate process for improved isolation.
+
+**Interfaces:** QEMU further categorizes virtio-gpu device variants based
+on the interface exposed to the guest. The interfaces can be classified
+into VGA and non-VGA variants. The VGA ones are prefixed with virtio-vga
+or vhost-user-vga while the non-VGA ones are prefixed with virtio-gpu or
+vhost-user-gpu.
+
+The VGA ones always use the PCI interface, but for the non-VGA ones, the
+user can further pick between MMIO or PCI. For MMIO, the user can suffix
+the device name with -device, though vhost-user-gpu does not support MMIO.
+For PCI, the user can suffix it with -pci. Without these suffixes, the
+platform default will be chosen.
+
+virtio-gpu 2d
+-------------
+
+The default 2D backend only performs 2D operations. The guest needs to
+employ a software renderer for 3D graphics.
+
+Typically, the software renderer is provided by `Mesa`_ or `SwiftShader`_.
+Mesa's implementations (LLVMpipe, Lavapipe and virgl below) work out of box
+on typical modern Linux distributions.
+
+.. parsed-literal::
+    -device virtio-gpu
+
+.. _Mesa: https://www.mesa3d.org/
+.. _SwiftShader: https://github.com/google/swiftshader
+
+virtio-gpu virglrenderer
+------------------------
+
+When using virgl accelerated graphics mode in the guest, OpenGL API calls
+are translated into an intermediate representation (see `Gallium3D`_). The
+intermediate representation is communicated to the host and the
+`virglrenderer`_ library on the host translates the intermediate
+representation back to OpenGL API calls.
+
+.. parsed-literal::
+    -device virtio-gpu-gl
+
+.. _Gallium3D: https://www.freedesktop.org/wiki/Software/gallium/
+.. _virglrenderer: https://gitlab.freedesktop.org/virgl/virglrenderer/
+
+virtio-gpu rutabaga
+-------------------
+
+virtio-gpu can also leverage rutabaga_gfx to provide `gfxstream`_
+rendering and `Wayland display passthrough`_.  With the gfxstream rendering
+mode, GLES and Vulkan calls are forwarded to the host with minimal
+modification.
+
+The crosvm book provides directions on how to build a `gfxstream-enabled
+rutabaga`_ and launch a `guest Wayland proxy`_.
+
+This device does require host blob support (``hostmem`` field below). The
+``hostmem`` field specifies the size of virtio-gpu host memory window.
+This is typically between 256M and 8G.
+
+At least one capset (see colon separated ``capset_names`` below) must be
+specified when starting the device.  The currently supported
+``capset_names`` are ``gfxstream-vulkan`` and ``cross-domain`` on Linux
+guests. For Android guests, ``gfxstream-gles`` is also supported.
+
+The device will try to auto-detect the wayland socket path if the
+``cross-domain`` capset name is set.  The user may optionally specify
+``wayland_socket_path`` for non-standard paths.
+
+The ``wsi`` option can be set to ``surfaceless`` or ``headless``.
+Surfaceless doesn't create a native window surface, but does copy from the
+render target to the Pixman buffer if a virtio-gpu 2D hypercall is issued.
+Headless is like surfaceless, but doesn't copy to the Pixman buffer.
+Surfaceless is the default if ``wsi`` is not specified.
+
+.. parsed-literal::
+    -device virtio-gpu-rutabaga,capset_names=gfxstream-vulkan:cross-domain,
+       hostmem=8G,wayland_socket_path=/tmp/nonstandard/mock_wayland.sock,
+       wsi=headless
+
+.. _gfxstream: https://android.googlesource.com/platform/hardware/google/gfxstream/
+.. _Wayland display passthrough: https://www.youtube.com/watch?v=OZJiHMtIQ2M
+.. _gfxstream-enabled rutabaga: https://crosvm.dev/book/appendix/rutabaga_gfx.html
+.. _guest Wayland proxy: https://crosvm.dev/book/devices/wayland.html
-- 
2.42.0.rc1.204.g551eb34607-goog



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-08-23  1:25 ` [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
@ 2023-08-23  9:59   ` Akihiko Odaki
  2023-09-13 11:57   ` Bernhard Beschow
  1 sibling, 0 replies; 34+ messages in thread
From: Akihiko Odaki @ 2023-08-23  9:59 UTC (permalink / raw)
  To: Gurchetan Singh, qemu-devel
  Cc: marcandre.lureau, ray.huang, alex.bennee, shentey, hi, ernunes,
	manos.pitsidianakis, philmd

On 2023/08/23 10:25, Gurchetan Singh wrote:
> This adds initial support for gfxstream and cross-domain.  Both
> features rely on virtio-gpu blob resources and context types, which
> are also implemented in this patch.
> 
> gfxstream has a long and illustrious history in Android graphics
> paravirtualization.  It has been powering graphics in the Android
> Studio Emulator for more than a decade, which is the main developer
> platform.
> 
> Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
> The key design characteristic was a 1:1 threading model and
> auto-generation, which fit nicely with the OpenGLES spec.  It also
> allowed easy layering with ANGLE on the host, which provides the GLES
> implementations on Windows or MacOS enviroments.
> 
> gfxstream has traditionally been maintained by a single engineer, and
> between 2015 to 2021, the goldfish throne passed to Frank Yang.
> Historians often remark this glorious reign ("pax gfxstreama" is the
> academic term) was comparable to that of Augustus and both Queen
> Elizabeths.  Just to name a few accomplishments in a resplendent
> panoply: higher versions of GLES, address space graphics, snapshot
> support and CTS compliant Vulkan [b].
> 
> One major drawback was the use of out-of-tree goldfish drivers.
> Android engineers didn't know much about DRM/KMS and especially TTM so
> a simple guest to host pipe was conceived.
> 
> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
> the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
> It was a symbol compatible replacement of virglrenderer [c] and named
> "AVDVirglrenderer".  This implementation forms the basis of the
> current gfxstream host implementation still in use today.
> 
> cross-domain support follows a similar arc.  Originally conceived by
> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
> 2018, it initially relied on the downstream "virtio-wl" device.
> 
> In 2020 and 2021, virtio-gpu was extended to include blob resources
> and multiple timelines by yours truly, features gfxstream/cross-domain
> both require to function correctly.
> 
> Right now, we stand at the precipice of a truly fantastic possibility:
> the Android Emulator powered by upstream QEMU and upstream Linux
> kernel.  gfxstream will then be packaged properfully, and app
> developers can even fix gfxstream bugs on their own if they encounter
> them.
> 
> It's been quite the ride, my friends.  Where will gfxstream head next,
> nobody really knows.  I wouldn't be surprised if it's around for
> another decade, maintained by a new generation of Android graphics
> enthusiasts.
> 
> Technical details:
>    - Very simple initial display integration: just used Pixman
>    - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>      calls
> 
> Next steps for Android VMs:
>    - The next step would be improving display integration and UI interfaces
>      with the goal of the QEMU upstream graphics being in an emulator
>      release [d].
> 
> Next steps for Linux VMs for display virtualization:
>    - For widespread distribution, someone needs to package Sommelier or the
>      wayland-proxy-virtwl [e] ideally into Debian main. In addition, newer
>      versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
>      which allows disabling KMS hypercalls.  If anyone cares enough, it'll
>      probably be possible to build a custom VM variant that uses this display
>      virtualization strategy.
> 
> [a] https://android-review.googlesource.com/c/platform/development/+/34470
> [b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
> [c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
> [d] https://developer.android.com/studio/releases/emulator
> [e] https://github.com/talex5/wayland-proxy-virtwl
> 
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> Tested-by: Alyssa Ross <hi@alyssa.is>
> Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> ---
> v1: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
>      - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>      - Used error_report(..)
>      - Used g_autofree to fix leaks on error paths
>      - Removed unnecessary casts
>      - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
> 
> v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau and
>      Bernard Berschow:
>      - Parenthesis in CHECK macro
>      - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>      - delay until g->parent_obj.enable = 1
>      - Additional cast fixes
>      - initialize directly in virtio_gpu_rutabaga_realize(..)
>      - add debug callback to hook into QEMU error's APIs
> 
> v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>      - Autodetect Wayland socket when not explicitly specified
>      - Fix map_blob error paths
>      - Add comment why we need both `res` and `resource` in create blob
>      - Cast and whitespace fixes
>      - Big endian check comes before virtio_gpu_rutabaga_init().
>      - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
> 
> v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>      - Double checked all casts
>      - Remove unnecessary parenthesis
>      - Removed `resource` in create_blob
>      - Added comment about failure case
>      - Pass user-provided socket as-is
>      - Use stack variable rather than heap allocation
>      - Future-proofed map info API to give access flags as well
> 
> v5: Incorporated feedback from Akihiko Odaki:
>      - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>      - Simplify num_capsets check
>      - Call cleanup mapping on error paths
>      - uint64_t --> void* for rutabaga_map(..)
>      - Removed unnecessary parenthesis
>      - Removed unnecessary cast
>      - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>      - Reuse result variable
> 
> v6: Incorporated feedback from Akihiko Odaki:
>      - Remove unnecessary #ifndef
>      - Disable scanout when appropriate
>      - CHECK capset index within range outside loop
>      - Add capset_version
> 
> v7: Incorporated feedback from Akihiko Odaki:
>      - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
> 
> v9: Incorportated feedback from Akihiko Odaki:
>      - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>      - Add error_setg(..) after rutabaga_init(..)
> 
> v10: Incorportated feedback from Akihiko Odaki:
>      - error_setg(..) --> error_setg_errno(..) when appropriate
>      - virtio_gpu_rutabaga_init returns a bool instead of an int
> 
> v11: Incorportated feedback from Philippe Mathieu-Daudé:
>      - C-style /* */ comments and avoid // comments.
>      - GPL-2.0 --> GPL-2.0-or-later
> 
>   hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>   hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
>   hw/display/virtio-vga-rutabaga.c     |   53 ++
>   3 files changed, 1224 insertions(+)
>   create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>   create mode 100644 hw/display/virtio-gpu-rutabaga.c
>   create mode 100644 hw/display/virtio-vga-rutabaga.c
> 
> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
> new file mode 100644
> index 0000000000..311eff308a
> --- /dev/null
> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
> @@ -0,0 +1,50 @@
> +/*
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */

For an one-line comment, just do:
/* SPDX-License-Identifier: GPL-2.0-or-later */


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
                   ` (8 preceding siblings ...)
  2023-08-23  1:25 ` [PATCH v11 9/9] docs/system: add basic virtio-gpu documentation Gurchetan Singh
@ 2023-08-23 11:07 ` Alyssa Ross
  2023-08-24 23:56   ` Gurchetan Singh
  9 siblings, 1 reply; 34+ messages in thread
From: Alyssa Ross @ 2023-08-23 11:07 UTC (permalink / raw)
  To: Gurchetan Singh, qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, shentey,
	ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 489 bytes --]

Gurchetan Singh <gurchetansingh@chromium.org> writes:

> - Official "release commits" issued for rutabaga_gfx_ffi,
>   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
>
> - The release commits can make packaging easier, though once
>   again all known users will likely just build from sources
>   anyways

It's a small thing, but could there be actual tags, rather than just
blessed commits?  It'd just make them easier to find, and save a bit of
time in review for packages.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-23 11:07 ` [PATCH v11 0/9] rutabaga_gfx + gfxstream Alyssa Ross
@ 2023-08-24 23:56   ` Gurchetan Singh
  2023-08-25  7:11     ` Alyssa Ross
  0 siblings, 1 reply; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-24 23:56 UTC (permalink / raw)
  To: Alyssa Ross
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 986 bytes --]

On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:

> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>
> > - Official "release commits" issued for rutabaga_gfx_ffi,
> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
> >
> > - The release commits can make packaging easier, though once
> >   again all known users will likely just build from sources
> >   anyways
>
> It's a small thing, but could there be actual tags, rather than just
> blessed commits?  It'd just make them easier to find, and save a bit of
> time in review for packages.
>

I added:

https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging

Tags are possible, but I want to clarify the use case before packaging.
Where are you thinking of packaging it for (Debian??)? Are you mostly
interested in Wayland passthrough (my guess) or gfxstream too?  Depending
your use case, we may be able to minimize the work involved.

[-- Attachment #2: Type: text/html, Size: 1685 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-24 23:56   ` Gurchetan Singh
@ 2023-08-25  7:11     ` Alyssa Ross
  2023-08-25 19:05       ` Gurchetan Singh
  0 siblings, 1 reply; 34+ messages in thread
From: Alyssa Ross @ 2023-08-25  7:11 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 1734 bytes --]

Gurchetan Singh <gurchetansingh@chromium.org> writes:

> On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
>
>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>>
>> > - Official "release commits" issued for rutabaga_gfx_ffi,
>> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
>> >
>> > - The release commits can make packaging easier, though once
>> >   again all known users will likely just build from sources
>> >   anyways
>>
>> It's a small thing, but could there be actual tags, rather than just
>> blessed commits?  It'd just make them easier to find, and save a bit of
>> time in review for packages.
>>
>
> I added:
>
> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
>
> Tags are possible, but I want to clarify the use case before packaging.
> Where are you thinking of packaging it for (Debian??)? Are you mostly
> interested in Wayland passthrough (my guess) or gfxstream too?  Depending
> your use case, we may be able to minimize the work involved.

Packaging for Nixpkgs (where I already maintain what to my knowledge is
the only crosvm distro package).  I'm personally mostly interested in
Wayland passthroug, but I wouldn't be surprised if others are interested
in gfxstream.  The packaging work is already done, I've just been
holding off actually pushing the packages waiting for the stable
releases.

The reason that tags would be useful is that it allows a reviewer of the
package to see at a glance that the package is built from a stable
release.  If it's just built from a commit hash, they have to go and
verify that it's a stable release, which is mildly annoying and
unconventional.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-25  7:11     ` Alyssa Ross
@ 2023-08-25 19:05       ` Gurchetan Singh
  2023-08-25 19:29         ` Alyssa Ross
  0 siblings, 1 reply; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-25 19:05 UTC (permalink / raw)
  To: Alyssa Ross
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 2226 bytes --]

On Fri, Aug 25, 2023 at 12:11 AM Alyssa Ross <hi@alyssa.is> wrote:

> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>
> > On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
> >
> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >>
> >> > - Official "release commits" issued for rutabaga_gfx_ffi,
> >> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
> >> >
> >> > - The release commits can make packaging easier, though once
> >> >   again all known users will likely just build from sources
> >> >   anyways
> >>
> >> It's a small thing, but could there be actual tags, rather than just
> >> blessed commits?  It'd just make them easier to find, and save a bit of
> >> time in review for packages.
> >>
> >
> > I added:
> >
> >
> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
> >
> > Tags are possible, but I want to clarify the use case before packaging.
> > Where are you thinking of packaging it for (Debian??)? Are you mostly
> > interested in Wayland passthrough (my guess) or gfxstream too?  Depending
> > your use case, we may be able to minimize the work involved.
>
> Packaging for Nixpkgs (where I already maintain what to my knowledge is
> the only crosvm distro package).  I'm personally mostly interested in
> Wayland passthroug, but I wouldn't be surprised if others are interested
> in gfxstream.  The packaging work is already done, I've just been
> holding off actually pushing the packages waiting for the stable
> releases.
>
> The reason that tags would be useful is that it allows a reviewer of the
> package to see at a glance that the package is built from a stable
> release.  If it's just built from a commit hash, they have to go and
> verify that it's a stable release, which is mildly annoying and
> unconventional.
>

Understood.  Request to have gfxstream and AEMU v0.1.2 release tags made.

For rutabaga_gfx_ffi, is the crates.io upload sufficient?

https://crates.io/crates/rutabaga_gfx_ffi

Debian, for example, treats crates.io as the source of truth and builds
tooling around that.  I wonder if Nixpkgs as similar tooling around
crates.io.

[-- Attachment #2: Type: text/html, Size: 3440 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-25 19:05       ` Gurchetan Singh
@ 2023-08-25 19:29         ` Alyssa Ross
  2023-08-25 19:37           ` Alyssa Ross
  0 siblings, 1 reply; 34+ messages in thread
From: Alyssa Ross @ 2023-08-25 19:29 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 2786 bytes --]

Gurchetan Singh <gurchetansingh@chromium.org> writes:

> On Fri, Aug 25, 2023 at 12:11 AM Alyssa Ross <hi@alyssa.is> wrote:
>
>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>>
>> > On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
>> >
>> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>> >>
>> >> > - Official "release commits" issued for rutabaga_gfx_ffi,
>> >> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
>> >> >
>> >> > - The release commits can make packaging easier, though once
>> >> >   again all known users will likely just build from sources
>> >> >   anyways
>> >>
>> >> It's a small thing, but could there be actual tags, rather than just
>> >> blessed commits?  It'd just make them easier to find, and save a bit of
>> >> time in review for packages.
>> >>
>> >
>> > I added:
>> >
>> >
>> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
>> >
>> > Tags are possible, but I want to clarify the use case before packaging.
>> > Where are you thinking of packaging it for (Debian??)? Are you mostly
>> > interested in Wayland passthrough (my guess) or gfxstream too?  Depending
>> > your use case, we may be able to minimize the work involved.
>>
>> Packaging for Nixpkgs (where I already maintain what to my knowledge is
>> the only crosvm distro package).  I'm personally mostly interested in
>> Wayland passthroug, but I wouldn't be surprised if others are interested
>> in gfxstream.  The packaging work is already done, I've just been
>> holding off actually pushing the packages waiting for the stable
>> releases.
>>
>> The reason that tags would be useful is that it allows a reviewer of the
>> package to see at a glance that the package is built from a stable
>> release.  If it's just built from a commit hash, they have to go and
>> verify that it's a stable release, which is mildly annoying and
>> unconventional.
>>
>
> Understood.  Request to have gfxstream and AEMU v0.1.2 release tags made.
>
> For rutabaga_gfx_ffi, is the crates.io upload sufficient?
>
> https://crates.io/crates/rutabaga_gfx_ffi
>
> Debian, for example, treats crates.io as the source of truth and builds
> tooling around that.  I wonder if Nixpkgs as similar tooling around
> crates.io.

We do, and I'll use the crates.io release for the package — good
suggestion, but it's still useful to also have a tag in a git repo.  It
makes it easier if I need to do a bisect, for example.  As a distro
developer, I'm frequently jumping across codebases I am not very
familiar with to try to track down regressions, etc., and it's much
easier when I don't have to learn some special quirk of the package like
not having git tags.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-25 19:29         ` Alyssa Ross
@ 2023-08-25 19:37           ` Alyssa Ross
  2023-08-29  0:43             ` Gurchetan Singh
  0 siblings, 1 reply; 34+ messages in thread
From: Alyssa Ross @ 2023-08-25 19:37 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 3959 bytes --]

Alyssa Ross <hi@alyssa.is> writes:

> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>
>> On Fri, Aug 25, 2023 at 12:11 AM Alyssa Ross <hi@alyssa.is> wrote:
>>
>>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>>>
>>> > On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
>>> >
>>> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>>> >>
>>> >> > - Official "release commits" issued for rutabaga_gfx_ffi,
>>> >> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
>>> >> >
>>> >> > - The release commits can make packaging easier, though once
>>> >> >   again all known users will likely just build from sources
>>> >> >   anyways
>>> >>
>>> >> It's a small thing, but could there be actual tags, rather than just
>>> >> blessed commits?  It'd just make them easier to find, and save a bit of
>>> >> time in review for packages.
>>> >>
>>> >
>>> > I added:
>>> >
>>> >
>>> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
>>> >
>>> > Tags are possible, but I want to clarify the use case before packaging.
>>> > Where are you thinking of packaging it for (Debian??)? Are you mostly
>>> > interested in Wayland passthrough (my guess) or gfxstream too?  Depending
>>> > your use case, we may be able to minimize the work involved.
>>>
>>> Packaging for Nixpkgs (where I already maintain what to my knowledge is
>>> the only crosvm distro package).  I'm personally mostly interested in
>>> Wayland passthroug, but I wouldn't be surprised if others are interested
>>> in gfxstream.  The packaging work is already done, I've just been
>>> holding off actually pushing the packages waiting for the stable
>>> releases.
>>>
>>> The reason that tags would be useful is that it allows a reviewer of the
>>> package to see at a glance that the package is built from a stable
>>> release.  If it's just built from a commit hash, they have to go and
>>> verify that it's a stable release, which is mildly annoying and
>>> unconventional.
>>>
>>
>> Understood.  Request to have gfxstream and AEMU v0.1.2 release tags made.
>>
>> For rutabaga_gfx_ffi, is the crates.io upload sufficient?
>>
>> https://crates.io/crates/rutabaga_gfx_ffi
>>
>> Debian, for example, treats crates.io as the source of truth and builds
>> tooling around that.  I wonder if Nixpkgs as similar tooling around
>> crates.io.
>
> We do, and I'll use the crates.io release for the package — good
> suggestion, but it's still useful to also have a tag in a git repo.  It
> makes it easier if I need to do a bisect, for example.  As a distro
> developer, I'm frequently jumping across codebases I am not very
> familiar with to try to track down regressions, etc., and it's much
> easier when I don't have to learn some special quirk of the package like
> not having git tags.

Aha, trying to switch my package over to it has revealed that there is
actually a reason not to use the crates.io release.  It doesn't include
a Cargo.lock, which would mean we'd have to obtain one from elsewhere.
Either from the crosvm git repo, at which point we might just get all
the sources from there, or by vendoring a Cargo.lock into our own git
tree for packages, which we try to avoid because when you have a lot of
them, they become quite a large proportion of the overall size of the
repo.

(This probably differs from Debian, etc., because in Nixpkgs, we don't
package each crate dependency separately.  We only have packages for
applications (or occasionally, C ABI libraries written in Rust), and
each of those gets to bring in whatever crate dependencies it wants as
part of its build.  This means we use the upstream Cargo.lock, and
accept that different Rust packages will use lots of different versions
of dependencies, which I don't believe is the case with other distros
that take a more purist approach to Rust packaging.)

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-25 19:37           ` Alyssa Ross
@ 2023-08-29  0:43             ` Gurchetan Singh
  2023-09-12  8:53               ` Alyssa Ross
  0 siblings, 1 reply; 34+ messages in thread
From: Gurchetan Singh @ 2023-08-29  0:43 UTC (permalink / raw)
  To: Alyssa Ross
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 4328 bytes --]

On Fri, Aug 25, 2023 at 12:37 PM Alyssa Ross <hi@alyssa.is> wrote:

> Alyssa Ross <hi@alyssa.is> writes:
>
> > Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >
> >> On Fri, Aug 25, 2023 at 12:11 AM Alyssa Ross <hi@alyssa.is> wrote:
> >>
> >>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >>>
> >>> > On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
> >>> >
> >>> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >>> >>
> >>> >> > - Official "release commits" issued for rutabaga_gfx_ffi,
> >>> >> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
> >>> >> >
> >>> >> > - The release commits can make packaging easier, though once
> >>> >> >   again all known users will likely just build from sources
> >>> >> >   anyways
> >>> >>
> >>> >> It's a small thing, but could there be actual tags, rather than just
> >>> >> blessed commits?  It'd just make them easier to find, and save a
> bit of
> >>> >> time in review for packages.
> >>> >>
> >>> >
> >>> > I added:
> >>> >
> >>> >
> >>>
> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
> >>> >
> >>> > Tags are possible, but I want to clarify the use case before
> packaging.
> >>> > Where are you thinking of packaging it for (Debian??)? Are you mostly
> >>> > interested in Wayland passthrough (my guess) or gfxstream too?
> Depending
> >>> > your use case, we may be able to minimize the work involved.
> >>>
> >>> Packaging for Nixpkgs (where I already maintain what to my knowledge is
> >>> the only crosvm distro package).  I'm personally mostly interested in
> >>> Wayland passthroug, but I wouldn't be surprised if others are
> interested
> >>> in gfxstream.  The packaging work is already done, I've just been
> >>> holding off actually pushing the packages waiting for the stable
> >>> releases.
> >>>
> >>> The reason that tags would be useful is that it allows a reviewer of
> the
> >>> package to see at a glance that the package is built from a stable
> >>> release.  If it's just built from a commit hash, they have to go and
> >>> verify that it's a stable release, which is mildly annoying and
> >>> unconventional.
> >>>
> >>
> >> Understood.  Request to have gfxstream and AEMU v0.1.2 release tags
> made.
> >>
> >> For rutabaga_gfx_ffi, is the crates.io upload sufficient?
> >>
> >> https://crates.io/crates/rutabaga_gfx_ffi
> >>
> >> Debian, for example, treats crates.io as the source of truth and builds
> >> tooling around that.  I wonder if Nixpkgs as similar tooling around
> >> crates.io.
> >
> > We do, and I'll use the crates.io release for the package — good
> > suggestion, but it's still useful to also have a tag in a git repo.  It
> > makes it easier if I need to do a bisect, for example.  As a distro
> > developer, I'm frequently jumping across codebases I am not very
> > familiar with to try to track down regressions, etc., and it's much
> > easier when I don't have to learn some special quirk of the package like
> > not having git tags.
>
> Aha, trying to switch my package over to it has revealed that there is
> actually a reason not to use the crates.io release.  It doesn't include
> a Cargo.lock, which would mean we'd have to obtain one from elsewhere.
> Either from the crosvm git repo, at which point we might just get all
> the sources from there, or by vendoring a Cargo.lock into our own git
> tree for packages, which we try to avoid because when you have a lot of
> them, they become quite a large proportion of the overall size of the
> repo.
>

Ack.  Request to have a rutabaga release tag in crosvm also made, should be
complete in a few days.


>
> (This probably differs from Debian, etc., because in Nixpkgs, we don't
> package each crate dependency separately.  We only have packages for
> applications (or occasionally, C ABI libraries written in Rust), and
> each of those gets to bring in whatever crate dependencies it wants as
> part of its build.  This means we use the upstream Cargo.lock, and
> accept that different Rust packages will use lots of different versions
> of dependencies, which I don't believe is the case with other distros
> that take a more purist approach to Rust packaging.)
>

[-- Attachment #2: Type: text/html, Size: 6628 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-08-29  0:43             ` Gurchetan Singh
@ 2023-09-12  8:53               ` Alyssa Ross
  2023-09-13  1:14                 ` Gurchetan Singh
  0 siblings, 1 reply; 34+ messages in thread
From: Alyssa Ross @ 2023-09-12  8:53 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 4021 bytes --]

Gurchetan Singh <gurchetansingh@chromium.org> writes:

> On Fri, Aug 25, 2023 at 12:37 PM Alyssa Ross <hi@alyssa.is> wrote:
>
>> Alyssa Ross <hi@alyssa.is> writes:
>>
>> > Gurchetan Singh <gurchetansingh@chromium.org> writes:
>> >
>> >> On Fri, Aug 25, 2023 at 12:11 AM Alyssa Ross <hi@alyssa.is> wrote:
>> >>
>> >>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>> >>>
>> >>> > On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
>> >>> >
>> >>> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>> >>> >>
>> >>> >> > - Official "release commits" issued for rutabaga_gfx_ffi,
>> >>> >> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
>> >>> >> >
>> >>> >> > - The release commits can make packaging easier, though once
>> >>> >> >   again all known users will likely just build from sources
>> >>> >> >   anyways
>> >>> >>
>> >>> >> It's a small thing, but could there be actual tags, rather than just
>> >>> >> blessed commits?  It'd just make them easier to find, and save a
>> bit of
>> >>> >> time in review for packages.
>> >>> >>
>> >>> >
>> >>> > I added:
>> >>> >
>> >>> >
>> >>>
>> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
>> >>> >
>> >>> > Tags are possible, but I want to clarify the use case before
>> packaging.
>> >>> > Where are you thinking of packaging it for (Debian??)? Are you mostly
>> >>> > interested in Wayland passthrough (my guess) or gfxstream too?
>> Depending
>> >>> > your use case, we may be able to minimize the work involved.
>> >>>
>> >>> Packaging for Nixpkgs (where I already maintain what to my knowledge is
>> >>> the only crosvm distro package).  I'm personally mostly interested in
>> >>> Wayland passthroug, but I wouldn't be surprised if others are
>> interested
>> >>> in gfxstream.  The packaging work is already done, I've just been
>> >>> holding off actually pushing the packages waiting for the stable
>> >>> releases.
>> >>>
>> >>> The reason that tags would be useful is that it allows a reviewer of
>> the
>> >>> package to see at a glance that the package is built from a stable
>> >>> release.  If it's just built from a commit hash, they have to go and
>> >>> verify that it's a stable release, which is mildly annoying and
>> >>> unconventional.
>> >>>
>> >>
>> >> Understood.  Request to have gfxstream and AEMU v0.1.2 release tags
>> made.
>> >>
>> >> For rutabaga_gfx_ffi, is the crates.io upload sufficient?
>> >>
>> >> https://crates.io/crates/rutabaga_gfx_ffi
>> >>
>> >> Debian, for example, treats crates.io as the source of truth and builds
>> >> tooling around that.  I wonder if Nixpkgs as similar tooling around
>> >> crates.io.
>> >
>> > We do, and I'll use the crates.io release for the package — good
>> > suggestion, but it's still useful to also have a tag in a git repo.  It
>> > makes it easier if I need to do a bisect, for example.  As a distro
>> > developer, I'm frequently jumping across codebases I am not very
>> > familiar with to try to track down regressions, etc., and it's much
>> > easier when I don't have to learn some special quirk of the package like
>> > not having git tags.
>>
>> Aha, trying to switch my package over to it has revealed that there is
>> actually a reason not to use the crates.io release.  It doesn't include
>> a Cargo.lock, which would mean we'd have to obtain one from elsewhere.
>> Either from the crosvm git repo, at which point we might just get all
>> the sources from there, or by vendoring a Cargo.lock into our own git
>> tree for packages, which we try to avoid because when you have a lot of
>> them, they become quite a large proportion of the overall size of the
>> repo.
>>
>
> Ack.  Request to have a rutabaga release tag in crosvm also made, should be
> complete in a few days.

Thanks!  I've found the rutabaga tag, but I still don't see any relevant
tags for aemu or gfxstream.  Any news there?

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-09-12  8:53               ` Alyssa Ross
@ 2023-09-13  1:14                 ` Gurchetan Singh
  2023-09-13 10:10                   ` Alyssa Ross
  0 siblings, 1 reply; 34+ messages in thread
From: Gurchetan Singh @ 2023-09-13  1:14 UTC (permalink / raw)
  To: Alyssa Ross
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 4592 bytes --]

On Tue, Sep 12, 2023 at 1:53 AM Alyssa Ross <hi@alyssa.is> wrote:

> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>
> > On Fri, Aug 25, 2023 at 12:37 PM Alyssa Ross <hi@alyssa.is> wrote:
> >
> >> Alyssa Ross <hi@alyssa.is> writes:
> >>
> >> > Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >> >
> >> >> On Fri, Aug 25, 2023 at 12:11 AM Alyssa Ross <hi@alyssa.is> wrote:
> >> >>
> >> >>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >> >>>
> >> >>> > On Wed, Aug 23, 2023 at 4:07 AM Alyssa Ross <hi@alyssa.is> wrote:
> >> >>> >
> >> >>> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >> >>> >>
> >> >>> >> > - Official "release commits" issued for rutabaga_gfx_ffi,
> >> >>> >> >   gfxstream, aemu-base.  For example, see crrev.com/c/4778941
> >> >>> >> >
> >> >>> >> > - The release commits can make packaging easier, though once
> >> >>> >> >   again all known users will likely just build from sources
> >> >>> >> >   anyways
> >> >>> >>
> >> >>> >> It's a small thing, but could there be actual tags, rather than
> just
> >> >>> >> blessed commits?  It'd just make them easier to find, and save a
> >> bit of
> >> >>> >> time in review for packages.
> >> >>> >>
> >> >>> >
> >> >>> > I added:
> >> >>> >
> >> >>> >
> >> >>>
> >>
> https://crosvm.dev/book/appendix/rutabaga_gfx.html#latest-releases-for-potential-packaging
> >> >>> >
> >> >>> > Tags are possible, but I want to clarify the use case before
> >> packaging.
> >> >>> > Where are you thinking of packaging it for (Debian??)? Are you
> mostly
> >> >>> > interested in Wayland passthrough (my guess) or gfxstream too?
> >> Depending
> >> >>> > your use case, we may be able to minimize the work involved.
> >> >>>
> >> >>> Packaging for Nixpkgs (where I already maintain what to my
> knowledge is
> >> >>> the only crosvm distro package).  I'm personally mostly interested
> in
> >> >>> Wayland passthroug, but I wouldn't be surprised if others are
> >> interested
> >> >>> in gfxstream.  The packaging work is already done, I've just been
> >> >>> holding off actually pushing the packages waiting for the stable
> >> >>> releases.
> >> >>>
> >> >>> The reason that tags would be useful is that it allows a reviewer of
> >> the
> >> >>> package to see at a glance that the package is built from a stable
> >> >>> release.  If it's just built from a commit hash, they have to go and
> >> >>> verify that it's a stable release, which is mildly annoying and
> >> >>> unconventional.
> >> >>>
> >> >>
> >> >> Understood.  Request to have gfxstream and AEMU v0.1.2 release tags
> >> made.
> >> >>
> >> >> For rutabaga_gfx_ffi, is the crates.io upload sufficient?
> >> >>
> >> >> https://crates.io/crates/rutabaga_gfx_ffi
> >> >>
> >> >> Debian, for example, treats crates.io as the source of truth and
> builds
> >> >> tooling around that.  I wonder if Nixpkgs as similar tooling around
> >> >> crates.io.
> >> >
> >> > We do, and I'll use the crates.io release for the package — good
> >> > suggestion, but it's still useful to also have a tag in a git repo.
> It
> >> > makes it easier if I need to do a bisect, for example.  As a distro
> >> > developer, I'm frequently jumping across codebases I am not very
> >> > familiar with to try to track down regressions, etc., and it's much
> >> > easier when I don't have to learn some special quirk of the package
> like
> >> > not having git tags.
> >>
> >> Aha, trying to switch my package over to it has revealed that there is
> >> actually a reason not to use the crates.io release.  It doesn't include
> >> a Cargo.lock, which would mean we'd have to obtain one from elsewhere.
> >> Either from the crosvm git repo, at which point we might just get all
> >> the sources from there, or by vendoring a Cargo.lock into our own git
> >> tree for packages, which we try to avoid because when you have a lot of
> >> them, they become quite a large proportion of the overall size of the
> >> repo.
> >>
> >
> > Ack.  Request to have a rutabaga release tag in crosvm also made, should
> be
> > complete in a few days.
>
> Thanks!  I've found the rutabaga tag, but I still don't see any relevant
> tags for aemu or gfxstream.  Any news there?
>

It's harder to get the attention of the Android build team than the Chrome
build team.  Though, there are a few issues with AEMU/gfxstream packaging
we also need to figure out -- see "[PATCH v13 0/9] rutabaga_gfx +
gfxstream" for details -- interested in your opinion on the matter!

[-- Attachment #2: Type: text/html, Size: 7436 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 0/9] rutabaga_gfx + gfxstream
  2023-09-13  1:14                 ` Gurchetan Singh
@ 2023-09-13 10:10                   ` Alyssa Ross
  0 siblings, 0 replies; 34+ messages in thread
From: Alyssa Ross @ 2023-09-13 10:10 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, shentey, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 1265 bytes --]

Gurchetan Singh <gurchetansingh@chromium.org> writes:

> It's harder to get the attention of the Android build team than the Chrome
> build team.  Though, there are a few issues with AEMU/gfxstream packaging
> we also need to figure out -- see "[PATCH v13 0/9] rutabaga_gfx +
> gfxstream" for details -- interested in your opinion on the matter!

None of the other points there are issues for me — in Nixpkgs, every
package is installed to a unique prefix (different versions of the same
package, or even just different build recipes for the same version, or
different dependencies result in different prefixes), so library
versioning and the /usr/include directories are not blockers.  Static
libraries are also fine for Nixpkgs — any change to a library, static or
dynamic, causes all dependents to be rebuild against the new library, so
the only real disadvantage to static libraries is the duplication on
disk, which isn't a big deal.

All that's to say, I'm ready to have rutabaga support, including
gfxstream, in our QEMU package, as soon as a release of QEMU including
it is made.  Everything Marc-André has identified would still be nice to
have fixed, but for us specifically, none of it is a blocker, even the
tags I asked for.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 832 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-08-23  1:25 ` [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
  2023-08-23  9:59   ` Akihiko Odaki
@ 2023-09-13 11:57   ` Bernhard Beschow
  2023-09-14  4:38     ` Gurchetan Singh
  1 sibling, 1 reply; 34+ messages in thread
From: Bernhard Beschow @ 2023-09-13 11:57 UTC (permalink / raw)
  To: Gurchetan Singh, qemu-devel
  Cc: marcandre.lureau, akihiko.odaki, ray.huang, alex.bennee, hi,
	ernunes, manos.pitsidianakis, philmd



Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>This adds initial support for gfxstream and cross-domain.  Both
>features rely on virtio-gpu blob resources and context types, which
>are also implemented in this patch.
>
>gfxstream has a long and illustrious history in Android graphics
>paravirtualization.  It has been powering graphics in the Android
>Studio Emulator for more than a decade, which is the main developer
>platform.
>
>Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>The key design characteristic was a 1:1 threading model and
>auto-generation, which fit nicely with the OpenGLES spec.  It also
>allowed easy layering with ANGLE on the host, which provides the GLES
>implementations on Windows or MacOS enviroments.
>
>gfxstream has traditionally been maintained by a single engineer, and
>between 2015 to 2021, the goldfish throne passed to Frank Yang.
>Historians often remark this glorious reign ("pax gfxstreama" is the
>academic term) was comparable to that of Augustus and both Queen
>Elizabeths.  Just to name a few accomplishments in a resplendent
>panoply: higher versions of GLES, address space graphics, snapshot
>support and CTS compliant Vulkan [b].
>
>One major drawback was the use of out-of-tree goldfish drivers.
>Android engineers didn't know much about DRM/KMS and especially TTM so
>a simple guest to host pipe was conceived.
>
>Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
>port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>It was a symbol compatible replacement of virglrenderer [c] and named
>"AVDVirglrenderer".  This implementation forms the basis of the
>current gfxstream host implementation still in use today.
>
>cross-domain support follows a similar arc.  Originally conceived by
>Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>2018, it initially relied on the downstream "virtio-wl" device.
>
>In 2020 and 2021, virtio-gpu was extended to include blob resources
>and multiple timelines by yours truly, features gfxstream/cross-domain
>both require to function correctly.
>
>Right now, we stand at the precipice of a truly fantastic possibility:
>the Android Emulator powered by upstream QEMU and upstream Linux
>kernel.  gfxstream will then be packaged properfully, and app
>developers can even fix gfxstream bugs on their own if they encounter
>them.
>
>It's been quite the ride, my friends.  Where will gfxstream head next,
>nobody really knows.  I wouldn't be surprised if it's around for
>another decade, maintained by a new generation of Android graphics
>enthusiasts.
>
>Technical details:
>  - Very simple initial display integration: just used Pixman
>  - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>    calls
>
>Next steps for Android VMs:
>  - The next step would be improving display integration and UI interfaces
>    with the goal of the QEMU upstream graphics being in an emulator
>    release [d].
>
>Next steps for Linux VMs for display virtualization:
>  - For widespread distribution, someone needs to package Sommelier or the
>    wayland-proxy-virtwl [e] ideally into Debian main. In addition, newer
>    versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
>    which allows disabling KMS hypercalls.  If anyone cares enough, it'll
>    probably be possible to build a custom VM variant that uses this display
>    virtualization strategy.
>
>[a] https://android-review.googlesource.com/c/platform/development/+/34470
>[b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>[c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>[d] https://developer.android.com/studio/releases/emulator
>[e] https://github.com/talex5/wayland-proxy-virtwl
>
>Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>Tested-by: Alyssa Ross <hi@alyssa.is>
>Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>---
>v1: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
>    - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>    - Used error_report(..)
>    - Used g_autofree to fix leaks on error paths
>    - Removed unnecessary casts
>    - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>
>v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau and
>    Bernard Berschow:
>    - Parenthesis in CHECK macro
>    - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>    - delay until g->parent_obj.enable = 1
>    - Additional cast fixes
>    - initialize directly in virtio_gpu_rutabaga_realize(..)
>    - add debug callback to hook into QEMU error's APIs
>
>v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>    - Autodetect Wayland socket when not explicitly specified
>    - Fix map_blob error paths
>    - Add comment why we need both `res` and `resource` in create blob
>    - Cast and whitespace fixes
>    - Big endian check comes before virtio_gpu_rutabaga_init().
>    - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>
>v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>    - Double checked all casts
>    - Remove unnecessary parenthesis
>    - Removed `resource` in create_blob
>    - Added comment about failure case
>    - Pass user-provided socket as-is
>    - Use stack variable rather than heap allocation
>    - Future-proofed map info API to give access flags as well
>
>v5: Incorporated feedback from Akihiko Odaki:
>    - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>    - Simplify num_capsets check
>    - Call cleanup mapping on error paths
>    - uint64_t --> void* for rutabaga_map(..)
>    - Removed unnecessary parenthesis
>    - Removed unnecessary cast
>    - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>    - Reuse result variable
>
>v6: Incorporated feedback from Akihiko Odaki:
>    - Remove unnecessary #ifndef
>    - Disable scanout when appropriate
>    - CHECK capset index within range outside loop
>    - Add capset_version
>
>v7: Incorporated feedback from Akihiko Odaki:
>    - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>
>v9: Incorportated feedback from Akihiko Odaki:
>    - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>    - Add error_setg(..) after rutabaga_init(..)
>
>v10: Incorportated feedback from Akihiko Odaki:
>    - error_setg(..) --> error_setg_errno(..) when appropriate
>    - virtio_gpu_rutabaga_init returns a bool instead of an int
>
>v11: Incorportated feedback from Philippe Mathieu-Daudé:
>    - C-style /* */ comments and avoid // comments.
>    - GPL-2.0 --> GPL-2.0-or-later
>
> hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
> hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
> hw/display/virtio-vga-rutabaga.c     |   53 ++
> 3 files changed, 1224 insertions(+)
> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> create mode 100644 hw/display/virtio-gpu-rutabaga.c
> create mode 100644 hw/display/virtio-vga-rutabaga.c
>
>diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
>new file mode 100644
>index 0000000000..311eff308a
>--- /dev/null
>+++ b/hw/display/virtio-gpu-pci-rutabaga.c
>@@ -0,0 +1,50 @@
>+/*
>+ * SPDX-License-Identifier: GPL-2.0-or-later
>+ */
>+
>+#include "qemu/osdep.h"
>+#include "qapi/error.h"
>+#include "qemu/module.h"
>+#include "hw/pci/pci.h"
>+#include "hw/qdev-properties.h"
>+#include "hw/virtio/virtio.h"
>+#include "hw/virtio/virtio-bus.h"
>+#include "hw/virtio/virtio-gpu-pci.h"
>+#include "qom/object.h"
>+
>+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>+typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>+DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI, VIRTIO_GPU_RUTABAGA_PCI,
>+                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>+
>+struct VirtIOGPURutabagaPCI {
>+    VirtIOGPUPCIBase parent_obj;
>+    VirtIOGPURutabaga vdev;
>+};
>+
>+static void virtio_gpu_rutabaga_initfn(Object *obj)
>+{
>+    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>+
>+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>+                                TYPE_VIRTIO_GPU_RUTABAGA);
>+    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>+}
>+
>+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>+    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>+    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>+    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>+    .instance_init = virtio_gpu_rutabaga_initfn,
>+};
>+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>+module_kconfig(VIRTIO_PCI);
>+
>+static void virtio_gpu_rutabaga_pci_register_types(void)
>+{
>+    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>+}
>+
>+type_init(virtio_gpu_rutabaga_pci_register_types)
>+
>+module_dep("hw-display-virtio-gpu-pci");
>diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
>new file mode 100644
>index 0000000000..9018e5a702
>--- /dev/null
>+++ b/hw/display/virtio-gpu-rutabaga.c
>@@ -0,0 +1,1121 @@
>+/*
>+ * SPDX-License-Identifier: GPL-2.0-or-later
>+ */
>+
>+#include "qemu/osdep.h"
>+#include "qapi/error.h"
>+#include "qemu/error-report.h"
>+#include "qemu/iov.h"
>+#include "trace.h"
>+#include "hw/virtio/virtio.h"
>+#include "hw/virtio/virtio-gpu.h"
>+#include "hw/virtio/virtio-gpu-pixman.h"
>+#include "hw/virtio/virtio-iommu.h"
>+
>+#include <glib/gmem.h>
>+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>+
>+#define CHECK(condition, cmd)                                                 \
>+    do {                                                                      \
>+        if (!(condition)) {                                                   \
>+            error_report("CHECK failed in %s() %s:" "%d", __func__,           \
>+                         __FILE__, __LINE__);                                 \
>+            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;                        \
>+            return;                                                           \
>+       }                                                                      \
>+    } while (0)
>+
>+/*
>+ * This is the size of the char array in struct sock_addr_un. No Wayland socket
>+ * can be created with a path longer than this, including the null terminator.
>+ */
>+#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>+
>+struct rutabaga_aio_data {
>+    struct VirtIOGPURutabaga *vr;
>+    struct rutabaga_fence fence;
>+};
>+
>+static void
>+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
>+                                  uint32_t resource_id)
>+{
>+    struct virtio_gpu_simple_resource *res;
>+    struct rutabaga_transfer transfer = { 0 };
>+    struct iovec transfer_iovec;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    res = virtio_gpu_find_resource(g, resource_id);
>+    if (!res) {
>+        return;
>+    }
>+
>+    if (res->width != s->current_cursor->width ||
>+        res->height != s->current_cursor->height) {
>+        return;
>+    }
>+
>+    transfer.x = 0;
>+    transfer.y = 0;
>+    transfer.z = 0;
>+    transfer.w = res->width;
>+    transfer.h = res->height;
>+    transfer.d = 1;
>+
>+    transfer_iovec.iov_base = s->current_cursor->data;
>+    transfer_iovec.iov_len = res->width * res->height * 4;
>+
>+    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>+                                    resource_id, &transfer,
>+                                    &transfer_iovec);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>+{
>+    VirtIOGPU *g = VIRTIO_GPU(b);
>+    virtio_gpu_process_cmdq(g);
>+}
>+
>+static void
>+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>+                                struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct rutabaga_create_3d rc_3d = { 0 };
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_resource_create_2d c2d;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(c2d);
>+    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>+                                       c2d.width, c2d.height);
>+
>+    rc_3d.target = 2;
>+    rc_3d.format = c2d.format;
>+    rc_3d.bind = (1 << 1);
>+    rc_3d.width = c2d.width;
>+    rc_3d.height = c2d.height;
>+    rc_3d.depth = 1;
>+    rc_3d.array_size = 1;
>+    rc_3d.last_level = 0;
>+    rc_3d.nr_samples = 0;
>+    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>+
>+    result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
>+    CHECK(!result, cmd);
>+
>+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>+    res->width = c2d.width;
>+    res->height = c2d.height;
>+    res->format = c2d.format;
>+    res->resource_id = c2d.resource_id;
>+
>+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>+}
>+
>+static void
>+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>+                                struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct rutabaga_create_3d rc_3d = { 0 };
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_resource_create_3d c3d;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(c3d);
>+
>+    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>+                                       c3d.width, c3d.height, c3d.depth);
>+
>+    rc_3d.target = c3d.target;
>+    rc_3d.format = c3d.format;
>+    rc_3d.bind = c3d.bind;
>+    rc_3d.width = c3d.width;
>+    rc_3d.height = c3d.height;
>+    rc_3d.depth = c3d.depth;
>+    rc_3d.array_size = c3d.array_size;
>+    rc_3d.last_level = c3d.last_level;
>+    rc_3d.nr_samples = c3d.nr_samples;
>+    rc_3d.flags = c3d.flags;
>+
>+    result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
>+    CHECK(!result, cmd);
>+
>+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>+    res->width = c3d.width;
>+    res->height = c3d.height;
>+    res->format = c3d.format;
>+    res->resource_id = c3d.resource_id;
>+
>+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>+}
>+
>+static void
>+rutabaga_cmd_resource_unref(VirtIOGPU *g,
>+                            struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_resource_unref unref;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(unref);
>+
>+    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>+
>+    res = virtio_gpu_find_resource(g, unref.resource_id);
>+    CHECK(res, cmd);
>+
>+    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>+    CHECK(!result, cmd);
>+
>+    if (res->image) {
>+        pixman_image_unref(res->image);
>+    }
>+
>+    QTAILQ_REMOVE(&g->reslist, res, next);
>+    g_free(res);
>+}
>+
>+static void
>+rutabaga_cmd_context_create(VirtIOGPU *g,
>+                            struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_ctx_create cc;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(cc);
>+    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>+                                    cc.debug_name);
>+
>+    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>+                                     cc.context_init, cc.debug_name, cc.nlen);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_context_destroy(VirtIOGPU *g,
>+                             struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_ctx_destroy cd;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(cd);
>+    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>+
>+    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result, i;
>+    struct virtio_gpu_scanout *scanout = NULL;
>+    struct virtio_gpu_simple_resource *res;
>+    struct rutabaga_transfer transfer = { 0 };
>+    struct iovec transfer_iovec;
>+    struct virtio_gpu_resource_flush rf;
>+    bool found = false;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+    if (vr->headless) {
>+        return;
>+    }
>+
>+    VIRTIO_GPU_FILL_CMD(rf);
>+    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>+                                   rf.r.width, rf.r.height, rf.r.x, rf.r.y);
>+
>+    res = virtio_gpu_find_resource(g, rf.resource_id);
>+    CHECK(res, cmd);
>+
>+    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>+        scanout = &g->parent_obj.scanout[i];
>+        if (i == res->scanout_bitmask) {
>+            found = true;
>+            break;
>+        }
>+    }
>+
>+    if (!found) {
>+        return;
>+    }
>+
>+    transfer.x = 0;
>+    transfer.y = 0;
>+    transfer.z = 0;
>+    transfer.w = res->width;
>+    transfer.h = res->height;
>+    transfer.d = 1;
>+
>+    transfer_iovec.iov_base = pixman_image_get_data(res->image);
>+    transfer_iovec.iov_len = res->width * res->height * 4;
>+
>+    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>+                                             rf.resource_id, &transfer,
>+                                             &transfer_iovec);
>+    CHECK(!result, cmd);
>+    dpy_gfx_update_full(scanout->con);
>+}
>+
>+static void
>+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_scanout *scanout = NULL;
>+    struct virtio_gpu_set_scanout ss;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+    if (vr->headless) {
>+        return;
>+    }
>+
>+    VIRTIO_GPU_FILL_CMD(ss);
>+    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>+                                     ss.r.width, ss.r.height, ss.r.x, ss.r.y);
>+
>+    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>+    scanout = &g->parent_obj.scanout[ss.scanout_id];
>+
>+    if (ss.resource_id == 0) {
>+        dpy_gfx_replace_surface(scanout->con, NULL);
>+        dpy_gl_scanout_disable(scanout->con);
>+        return;
>+    }
>+
>+    res = virtio_gpu_find_resource(g, ss.resource_id);
>+    CHECK(res, cmd);
>+
>+    if (!res->image) {
>+        pixman_format_code_t pformat;
>+        pformat = virtio_gpu_get_pixman_format(res->format);
>+        CHECK(pformat, cmd);
>+
>+        res->image = pixman_image_create_bits(pformat,
>+                                              res->width,
>+                                              res->height,
>+                                              NULL, 0);
>+        CHECK(res->image, cmd);
>+        pixman_image_ref(res->image);
>+    }
>+
>+    g->parent_obj.enable = 1;
>+
>+    /* realloc the surface ptr */
>+    scanout->ds = qemu_create_displaysurface_pixman(res->image);
>+    dpy_gfx_replace_surface(scanout->con, NULL);
>+    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>+    res->scanout_bitmask = ss.scanout_id;
>+}
>+
>+static void
>+rutabaga_cmd_submit_3d(VirtIOGPU *g,
>+                       struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_cmd_submit cs;
>+    struct rutabaga_command rutabaga_cmd = { 0 };
>+    g_autofree uint8_t *buf = NULL;
>+    size_t s;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(cs);
>+    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>+
>+    buf = g_new0(uint8_t, cs.size);
>+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>+                   sizeof(cs), buf, cs.size);
>+    CHECK(s == cs.size, cmd);
>+
>+    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>+    rutabaga_cmd.cmd = buf;
>+    rutabaga_cmd.cmd_size = cs.size;
>+
>+    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>+                                 struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct rutabaga_transfer transfer = { 0 };
>+    struct virtio_gpu_transfer_to_host_2d t2d;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(t2d);
>+    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>+
>+    transfer.x = t2d.r.x;
>+    transfer.y = t2d.r.y;
>+    transfer.z = 0;
>+    transfer.w = t2d.r.width;
>+    transfer.h = t2d.r.height;
>+    transfer.d = 1;
>+
>+    result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
>+                                              &transfer);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>+                                 struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct rutabaga_transfer transfer = { 0 };
>+    struct virtio_gpu_transfer_host_3d t3d;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(t3d);
>+    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>+
>+    transfer.x = t3d.box.x;
>+    transfer.y = t3d.box.y;
>+    transfer.z = t3d.box.z;
>+    transfer.w = t3d.box.w;
>+    transfer.h = t3d.box.h;
>+    transfer.d = t3d.box.d;
>+    transfer.level = t3d.level;
>+    transfer.stride = t3d.stride;
>+    transfer.layer_stride = t3d.layer_stride;
>+    transfer.offset = t3d.offset;
>+
>+    result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
>+                                              t3d.resource_id, &transfer);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>+                                   struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct rutabaga_transfer transfer = { 0 };
>+    struct virtio_gpu_transfer_host_3d t3d;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(t3d);
>+    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>+
>+    transfer.x = t3d.box.x;
>+    transfer.y = t3d.box.y;
>+    transfer.z = t3d.box.z;
>+    transfer.w = t3d.box.w;
>+    transfer.h = t3d.box.h;
>+    transfer.d = t3d.box.d;
>+    transfer.level = t3d.level;
>+    transfer.stride = t3d.stride;
>+    transfer.layer_stride = t3d.layer_stride;
>+    transfer.offset = t3d.offset;
>+
>+    result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
>+                                             t3d.resource_id, &transfer, NULL);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+    struct rutabaga_iovecs vecs = { 0 };
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_resource_attach_backing att_rb;
>+    int ret;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(att_rb);
>+    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>+
>+    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>+    CHECK(res, cmd);
>+    CHECK(!res->iov, cmd);
>+
>+    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
>+                                        cmd, NULL, &res->iov, &res->iov_cnt);
>+    CHECK(!ret, cmd);
>+
>+    vecs.iovecs = res->iov;
>+    vecs.num_iovecs = res->iov_cnt;
>+
>+    ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
>+                                           &vecs);
>+    if (ret != 0) {
>+        virtio_gpu_cleanup_mapping(g, res);
>+    }
>+
>+    CHECK(!ret, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_resource_detach_backing detach_rb;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(detach_rb);
>+    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>+
>+    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>+    CHECK(res, cmd);
>+
>+    rutabaga_resource_detach_backing(vr->rutabaga,
>+                                     detach_rb.resource_id);
>+
>+    virtio_gpu_cleanup_mapping(g, res);
>+}
>+
>+static void
>+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>+                                 struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_ctx_resource att_res;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(att_res);
>+    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>+                                        att_res.resource_id);
>+
>+    result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
>+                                              att_res.resource_id);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>+                                 struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_ctx_resource det_res;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(det_res);
>+    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>+                                        det_res.resource_id);
>+
>+    result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
>+                                              det_res.resource_id);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_get_capset_info info;
>+    struct virtio_gpu_resp_capset_info resp;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(info);
>+
>+    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>+                                      &resp.capset_id, &resp.capset_max_version,
>+                                      &resp.capset_max_size);
>+    CHECK(!result, cmd);
>+
>+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>+}
>+
>+static void
>+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    struct virtio_gpu_get_capset gc;
>+    struct virtio_gpu_resp_capset *resp;
>+    uint32_t capset_size, capset_version;
>+    uint32_t current_id, i;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(gc);
>+    for (i = 0; i < vr->num_capsets; i++) {
>+        result = rutabaga_get_capset_info(vr->rutabaga, i,
>+                                          &current_id, &capset_version,
>+                                          &capset_size);
>+        CHECK(!result, cmd);
>+
>+        if (current_id == gc.capset_id) {
>+            break;
>+        }
>+    }
>+
>+    CHECK(i < vr->num_capsets, cmd);
>+
>+    resp = g_malloc0(sizeof(*resp) + capset_size);
>+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>+    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>+                        resp->capset_data, capset_size);
>+
>+    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
>+    g_free(resp);
>+}
>+
>+static void
>+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>+                                  struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int result;
>+    struct rutabaga_iovecs vecs = { 0 };
>+    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>+    struct virtio_gpu_resource_create_blob cblob;
>+    struct rutabaga_create_blob rc_blob = { 0 };
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(cblob);
>+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
>+
>+    CHECK(cblob.resource_id != 0, cmd);
>+
>+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>+
>+    res->resource_id = cblob.resource_id;
>+    res->blob_size = cblob.size;
>+
>+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>+        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>+                                               sizeof(cblob), cmd, &res->addrs,
>+                                               &res->iov, &res->iov_cnt);
>+        CHECK(!result, cmd);
>+    }
>+
>+    rc_blob.blob_id = cblob.blob_id;
>+    rc_blob.blob_mem = cblob.blob_mem;
>+    rc_blob.blob_flags = cblob.blob_flags;
>+    rc_blob.size = cblob.size;
>+
>+    vecs.iovecs = res->iov;
>+    vecs.num_iovecs = res->iov_cnt;
>+
>+    result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
>+                                           cblob.resource_id, &rc_blob, &vecs,
>+                                           NULL);
>+
>+    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>+        virtio_gpu_cleanup_mapping(g, res);
>+    }
>+
>+    CHECK(!result, cmd);
>+
>+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>+    res = NULL;
>+}
>+
>+static void
>+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>+                               struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    uint32_t map_info = 0;
>+    uint32_t slot = 0;
>+    struct virtio_gpu_simple_resource *res;
>+    struct rutabaga_mapping mapping = { 0 };
>+    struct virtio_gpu_resource_map_blob mblob;
>+    struct virtio_gpu_resp_map_info resp = { 0 };
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(mblob);
>+
>+    CHECK(mblob.resource_id != 0, cmd);
>+
>+    res = virtio_gpu_find_resource(g, mblob.resource_id);
>+    CHECK(res, cmd);
>+
>+    result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
>+                                        &map_info);
>+    CHECK(!result, cmd);
>+
>+    /*
>+     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu spec, but do
>+     * exist to potentially allow the hypervisor to restrict write access to
>+     * memory. QEMU does not need to use this functionality at the moment.
>+     */
>+    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>+
>+    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
>+    CHECK(!result, cmd);
>+
>+    for (slot = 0; slot < MAX_SLOTS; slot++) {
>+        if (vr->memory_regions[slot].used) {
>+            continue;
>+        }
>+
>+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>+        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>+                                   mapping.ptr);
>+        memory_region_add_subregion(&g->parent_obj.hostmem,
>+                                    mblob.offset, mr);
>+        vr->memory_regions[slot].resource_id = mblob.resource_id;
>+        vr->memory_regions[slot].used = 1;
>+        break;
>+    }
>+
>+    if (slot >= MAX_SLOTS) {
>+        result = rutabaga_resource_unmap(vr->rutabaga, mblob.resource_id);
>+        CHECK(!result, cmd);
>+    }
>+
>+    CHECK(slot < MAX_SLOTS, cmd);
>+
>+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>+}
>+
>+static void
>+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>+                                 struct virtio_gpu_ctrl_command *cmd)
>+{
>+    int32_t result;
>+    uint32_t slot = 0;
>+    struct virtio_gpu_simple_resource *res;
>+    struct virtio_gpu_resource_unmap_blob ublob;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(ublob);
>+
>+    CHECK(ublob.resource_id != 0, cmd);
>+
>+    res = virtio_gpu_find_resource(g, ublob.resource_id);
>+    CHECK(res, cmd);
>+
>+    for (slot = 0; slot < MAX_SLOTS; slot++) {
>+        if (vr->memory_regions[slot].resource_id != ublob.resource_id) {
>+            continue;
>+        }
>+
>+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>+        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>+
>+        vr->memory_regions[slot].resource_id = 0;
>+        vr->memory_regions[slot].used = 0;
>+        break;
>+    }
>+
>+    CHECK(slot < MAX_SLOTS, cmd);
>+    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>+                                struct virtio_gpu_ctrl_command *cmd)
>+{
>+    struct rutabaga_fence fence = { 0 };
>+    int32_t result;
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>+
>+    switch (cmd->cmd_hdr.type) {
>+    case VIRTIO_GPU_CMD_CTX_CREATE:
>+        rutabaga_cmd_context_create(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_CTX_DESTROY:
>+        rutabaga_cmd_context_destroy(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>+        rutabaga_cmd_create_resource_2d(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>+        rutabaga_cmd_create_resource_3d(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_SUBMIT_3D:
>+        rutabaga_cmd_submit_3d(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>+        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>+        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>+        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>+        rutabaga_cmd_attach_backing(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>+        rutabaga_cmd_detach_backing(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_SET_SCANOUT:
>+        rutabaga_cmd_set_scanout(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>+        rutabaga_cmd_resource_flush(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>+        rutabaga_cmd_resource_unref(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>+        rutabaga_cmd_ctx_attach_resource(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>+        rutabaga_cmd_ctx_detach_resource(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>+        rutabaga_cmd_get_capset_info(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_GET_CAPSET:
>+        rutabaga_cmd_get_capset(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>+        virtio_gpu_get_display_info(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_GET_EDID:
>+        virtio_gpu_get_edid(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>+        rutabaga_cmd_resource_create_blob(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>+        rutabaga_cmd_resource_map_blob(g, cmd);
>+        break;
>+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>+        rutabaga_cmd_resource_unmap_blob(g, cmd);
>+        break;
>+    default:
>+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>+        break;
>+    }
>+
>+    if (cmd->finished) {
>+        return;
>+    }
>+    if (cmd->error) {
>+        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>+                     cmd->cmd_hdr.type, cmd->error);
>+        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>+        return;
>+    }
>+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>+        virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
>+        return;
>+    }
>+
>+    fence.flags = cmd->cmd_hdr.flags;
>+    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>+    fence.fence_id = cmd->cmd_hdr.fence_id;
>+    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>+
>+    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
>+
>+    result = rutabaga_create_fence(vr->rutabaga, &fence);
>+    CHECK(!result, cmd);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_aio_cb(void *opaque)
>+{
>+    struct rutabaga_aio_data *data = opaque;
>+    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>+    struct rutabaga_fence fence_data = data->fence;
>+    struct virtio_gpu_ctrl_command *cmd, *tmp;
>+
>+    uint32_t signaled_ctx_specific = fence_data.flags &
>+                                     RUTABAGA_FLAG_INFO_RING_IDX;
>+
>+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>+        /*
>+         * Due to context specific timelines.
>+         */
>+        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>+                                       RUTABAGA_FLAG_INFO_RING_IDX;
>+
>+        if (signaled_ctx_specific != target_ctx_specific) {
>+            continue;
>+        }
>+
>+        if (signaled_ctx_specific &&
>+           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>+            continue;
>+        }
>+
>+        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>+            continue;
>+        }
>+
>+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>+        virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
>+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>+        g_free(cmd);
>+    }
>+
>+    g_free(data);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>+                             const struct rutabaga_fence *fence) {
>+    struct rutabaga_aio_data *data;
>+    VirtIOGPU *g = (VirtIOGPU *)user_data;
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    /*
>+     * gfxstream and both cross-domain (and even newer versions virglrenderer:
>+     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence completion on
>+     * threads ("callback threads") that are different from the thread that
>+     * processes the command queue ("main thread").
>+     *
>+     * crosvm and other virtio-gpu 1.1 implementations enable callback threads
>+     * via locking.  However, on QEMU a deadlock is observed if
>+     * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is used
>+     * from a thread that is not the main thread.
>+     *
>+     * The reason is QEMU's internal locking is designed to work with QEMU
>+     * threads (see rcu_register_thread()) and not generic C/C++/Rust threads.
>+     * For now, we can workaround this by scheduling the return of the
>+     * fence descriptors on the main thread.
>+     */
>+
>+    data = g_new0(struct rutabaga_aio_data, 1);
>+    data->vr = vr;
>+    data->fence = *fence;
>+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>+                            virtio_gpu_rutabaga_aio_cb,
>+                            data);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>+                             const struct rutabaga_debug *debug) {
>+
>+    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>+        error_report("%s", debug->message);
>+    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>+        warn_report("%s", debug->message);
>+    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>+        info_report("%s", debug->message);
>+    }
>+}
>+
>+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
>+{
>+    int result;
>+    uint64_t capset_mask;
>+    struct rutabaga_builder builder = { 0 };
>+    char wayland_socket_path[UNIX_PATH_MAX];
>+    struct rutabaga_channel channel = { 0 };
>+    struct rutabaga_channels channels = { 0 };
>+
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+    vr->rutabaga = NULL;
>+
>+    if (!vr->capset_names) {
>+        error_setg(errp, "a capset name from the virtio-gpu spec is required");
>+        return false;
>+    }
>+
>+    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>+    /*
>+     * Currently, if WSI is specified, the only valid strings are "surfaceless"
>+     * or "headless".  Surfaceless doesn't create a native window surface, but
>+     * does copy from the render target to the Pixman buffer if a virtio-gpu
>+     * 2D hypercall is issued.  Surfacless is the default.
>+     *
>+     * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
>+     * use case is automated testing environments where there is no need to view
>+     * results.
>+     *
>+     * In the future, more performant virtio-gpu 2D UI integration may be added.
>+     */
>+    if (vr->wsi) {
>+        if (g_str_equal(vr->wsi, "surfaceless")) {
>+            vr->headless = false;
>+        } else if (g_str_equal(vr->wsi, "headless")) {
>+            vr->headless = true;
>+        } else {
>+            error_setg(errp, "invalid wsi option selected");
>+            return false;
>+        }
>+    }
>+
>+    result = rutabaga_calculate_capset_mask(vr->capset_names, &capset_mask);

First, sorry for responding after such a long time. I've been busy with work and I'm doing QEMU in my free time.

In iteration 1 I've raised the topic on capset_names [1] and I haven't seen it answered properly. Perhaps I need to rephrase a bit so here we go: capset_names seems to be colon-separated list of bit options managed by rutabaga. This introduces yet another way of options handling. There have been talks about harmonizing options handling in QEMU since apparently it is considered too complex [2,3].

Why not pass the "capset" as a bitfield like capset_mask and have QEMU create "capset" from QOM properties? IIUC these flags could come from virtio_gpu.h which is already present in the QEMU tree. This would not inly shortcut the dependency on rutabaga here but would also be more idiomatic QEMU (since it makes the options more introspectable by internal machinery).

Of course the bitfield approach would require modifications in QEMU whenever rutabaga gains new features. However, I figure that in the long term rutabaga will be quite feature complete such that the benefits of idiomatic QEMU handling will outweigh the decoupling of the projects.

What do you think?

Best regards,
Bernhard

[1] https://lore.kernel.org/qemu-devel/D15471EC-D1D1-4DAA-A6E7-19827C36AEC8@gmail.com/
[2] https://m.youtube.com/watch?v=gtpOLQgnwug
[3] https://m.youtube.com/watch?v=FMQtog6KUlo

>+    if (result) {
>+        error_setg_errno(errp, -result, "invalid capset names: %s",
>+                         vr->capset_names);
>+        return false;
>+    }
>+
>+    builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
>+    builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
>+    builder.capset_mask = capset_mask;
>+    builder.user_data = (uint64_t)g;
>+
>+    /*
>+     * If the user doesn't specify the wayland socket path, we try to infer
>+     * the socket via a process similar to the one used by libwayland.
>+     * libwayland does the following:
>+     *
>+     * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
>+     *    $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
>+     * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
>+     * 3) Otherwise, don't pass a wayland socket to rutabaga. If a guest
>+     *    wayland proxy is launched, it will fail to work.
>+     */
>+    channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
>+    if (!vr->wayland_socket_path) {
>+        const char *runtime_dir = getenv("XDG_RUNTIME_DIR");
>+        const char *display = getenv("WAYLAND_DISPLAY");
>+        if (!display) {
>+            display = "wayland-0";
>+        }
>+
>+        if (runtime_dir) {
>+            result = snprintf(wayland_socket_path, UNIX_PATH_MAX,
>+                              "%s/%s", runtime_dir, display);
>+            if (result > 0 && result < UNIX_PATH_MAX) {
>+                channel.channel_name = wayland_socket_path;
>+            }
>+        }
>+    } else {
>+        channel.channel_name = vr->wayland_socket_path;
>+    }
>+
>+    if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
>+        if (channel.channel_name) {
>+            channels.channels = &channel;
>+            channels.num_channels = 1;
>+            builder.channels = &channels;
>+        }
>+    }
>+
>+    result = rutabaga_init(&builder, &vr->rutabaga);
>+    if (result) {
>+        error_setg_errno(errp, -result, "Failed to init rutabaga");
>+        return result;
>+    }
>+
>+    return true;
>+}
>+
>+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
>+{
>+    int result;
>+    uint32_t num_capsets;
>+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+    result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
>+    if (result) {
>+        error_report("Failed to get capsets");
>+        return 0;
>+    }
>+    vr->num_capsets = num_capsets;
>+    return num_capsets;
>+}
>+
>+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>+{
>+    VirtIOGPU *g = VIRTIO_GPU(vdev);
>+    struct virtio_gpu_ctrl_command *cmd;
>+
>+    if (!virtio_queue_ready(vq)) {
>+        return;
>+    }
>+
>+    cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>+    while (cmd) {
>+        cmd->vq = vq;
>+        cmd->error = 0;
>+        cmd->finished = false;
>+        QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
>+        cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>+    }
>+
>+    virtio_gpu_process_cmdq(g);
>+}
>+
>+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
>+{
>+    int num_capsets;
>+    VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
>+    VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
>+
>+#if HOST_BIG_ENDIAN
>+    error_setg(errp, "rutabaga is not supported on bigendian platforms");
>+    return;
>+#endif
>+
>+    if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
>+        return;
>+    }
>+
>+    num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
>+    if (!num_capsets) {
>+        return;
>+    }
>+
>+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
>+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
>+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
>+
>+    bdev->virtio_config.num_capsets = num_capsets;
>+    virtio_gpu_device_realize(qdev, errp);
>+}
>+
>+static Property virtio_gpu_rutabaga_properties[] = {
>+    DEFINE_PROP_STRING("capset_names", VirtIOGPURutabaga, capset_names),
>+    DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
>+                       wayland_socket_path),
>+    DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
>+    DEFINE_PROP_END_OF_LIST(),
>+};
>+
>+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
>+{
>+    DeviceClass *dc = DEVICE_CLASS(klass);
>+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
>+    VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
>+    VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
>+
>+    vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
>+    vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
>+    vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
>+    vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
>+
>+    vdc->realize = virtio_gpu_rutabaga_realize;
>+    device_class_set_props(dc, virtio_gpu_rutabaga_properties);
>+}
>+
>+static const TypeInfo virtio_gpu_rutabaga_info = {
>+    .name = TYPE_VIRTIO_GPU_RUTABAGA,
>+    .parent = TYPE_VIRTIO_GPU,
>+    .instance_size = sizeof(VirtIOGPURutabaga),
>+    .class_init = virtio_gpu_rutabaga_class_init,
>+};
>+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
>+module_kconfig(VIRTIO_GPU);
>+
>+static void virtio_register_types(void)
>+{
>+    type_register_static(&virtio_gpu_rutabaga_info);
>+}
>+
>+type_init(virtio_register_types)
>+
>+module_dep("hw-display-virtio-gpu");
>diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
>new file mode 100644
>index 0000000000..b5b43e3b90
>--- /dev/null
>+++ b/hw/display/virtio-vga-rutabaga.c
>@@ -0,0 +1,53 @@
>+/*
>+ * SPDX-License-Identifier: GPL-2.0-or-later
>+ */
>+
>+#include "qemu/osdep.h"
>+#include "hw/pci/pci.h"
>+#include "hw/qdev-properties.h"
>+#include "hw/virtio/virtio-gpu.h"
>+#include "hw/display/vga.h"
>+#include "qapi/error.h"
>+#include "qemu/module.h"
>+#include "virtio-vga.h"
>+#include "qom/object.h"
>+
>+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
>+
>+typedef struct VirtIOVGARutabaga VirtIOVGARutabaga;
>+DECLARE_INSTANCE_CHECKER(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA,
>+                         TYPE_VIRTIO_VGA_RUTABAGA)
>+
>+struct VirtIOVGARutabaga {
>+    VirtIOVGABase parent_obj;
>+    VirtIOGPURutabaga vdev;
>+};
>+
>+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
>+{
>+    VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
>+
>+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>+                                TYPE_VIRTIO_GPU_RUTABAGA);
>+    VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>+}
>+
>+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
>+    .generic_name  = TYPE_VIRTIO_VGA_RUTABAGA,
>+    .parent        = TYPE_VIRTIO_VGA_BASE,
>+    .instance_size = sizeof(VirtIOVGARutabaga),
>+    .instance_init = virtio_vga_rutabaga_inst_initfn,
>+};
>+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
>+module_kconfig(VIRTIO_VGA);
>+
>+static void virtio_vga_register_types(void)
>+{
>+    if (have_vga) {
>+        virtio_pci_types_register(&virtio_vga_rutabaga_info);
>+    }
>+}
>+
>+type_init(virtio_vga_register_types)
>+
>+module_dep("hw-display-virtio-vga");


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-13 11:57   ` Bernhard Beschow
@ 2023-09-14  4:38     ` Gurchetan Singh
  2023-09-14  7:23       ` Bernhard Beschow
  0 siblings, 1 reply; 34+ messages in thread
From: Gurchetan Singh @ 2023-09-14  4:38 UTC (permalink / raw)
  To: Bernhard Beschow
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 55267 bytes --]

On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com> wrote:

>
>
> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
> gurchetansingh@chromium.org>:
> >This adds initial support for gfxstream and cross-domain.  Both
> >features rely on virtio-gpu blob resources and context types, which
> >are also implemented in this patch.
> >
> >gfxstream has a long and illustrious history in Android graphics
> >paravirtualization.  It has been powering graphics in the Android
> >Studio Emulator for more than a decade, which is the main developer
> >platform.
> >
> >Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
> >The key design characteristic was a 1:1 threading model and
> >auto-generation, which fit nicely with the OpenGLES spec.  It also
> >allowed easy layering with ANGLE on the host, which provides the GLES
> >implementations on Windows or MacOS enviroments.
> >
> >gfxstream has traditionally been maintained by a single engineer, and
> >between 2015 to 2021, the goldfish throne passed to Frank Yang.
> >Historians often remark this glorious reign ("pax gfxstreama" is the
> >academic term) was comparable to that of Augustus and both Queen
> >Elizabeths.  Just to name a few accomplishments in a resplendent
> >panoply: higher versions of GLES, address space graphics, snapshot
> >support and CTS compliant Vulkan [b].
> >
> >One major drawback was the use of out-of-tree goldfish drivers.
> >Android engineers didn't know much about DRM/KMS and especially TTM so
> >a simple guest to host pipe was conceived.
> >
> >Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
> >the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
> >port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
> >It was a symbol compatible replacement of virglrenderer [c] and named
> >"AVDVirglrenderer".  This implementation forms the basis of the
> >current gfxstream host implementation still in use today.
> >
> >cross-domain support follows a similar arc.  Originally conceived by
> >Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
> >2018, it initially relied on the downstream "virtio-wl" device.
> >
> >In 2020 and 2021, virtio-gpu was extended to include blob resources
> >and multiple timelines by yours truly, features gfxstream/cross-domain
> >both require to function correctly.
> >
> >Right now, we stand at the precipice of a truly fantastic possibility:
> >the Android Emulator powered by upstream QEMU and upstream Linux
> >kernel.  gfxstream will then be packaged properfully, and app
> >developers can even fix gfxstream bugs on their own if they encounter
> >them.
> >
> >It's been quite the ride, my friends.  Where will gfxstream head next,
> >nobody really knows.  I wouldn't be surprised if it's around for
> >another decade, maintained by a new generation of Android graphics
> >enthusiasts.
> >
> >Technical details:
> >  - Very simple initial display integration: just used Pixman
> >  - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
> >    calls
> >
> >Next steps for Android VMs:
> >  - The next step would be improving display integration and UI interfaces
> >    with the goal of the QEMU upstream graphics being in an emulator
> >    release [d].
> >
> >Next steps for Linux VMs for display virtualization:
> >  - For widespread distribution, someone needs to package Sommelier or the
> >    wayland-proxy-virtwl [e] ideally into Debian main. In addition, newer
> >    versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
> >    which allows disabling KMS hypercalls.  If anyone cares enough, it'll
> >    probably be possible to build a custom VM variant that uses this
> display
> >    virtualization strategy.
> >
> >[a]
> https://android-review.googlesource.com/c/platform/development/+/34470
> >[b]
> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
> >[c]
> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
> >[d] https://developer.android.com/studio/releases/emulator
> >[e] https://github.com/talex5/wayland-proxy-virtwl
> >
> >Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> >Tested-by: Alyssa Ross <hi@alyssa.is>
> >Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> >Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> >---
> >v1: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
> >    - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
> >    - Used error_report(..)
> >    - Used g_autofree to fix leaks on error paths
> >    - Removed unnecessary casts
> >    - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
> >
> >v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau
> and
> >    Bernard Berschow:
> >    - Parenthesis in CHECK macro
> >    - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
> >    - delay until g->parent_obj.enable = 1
> >    - Additional cast fixes
> >    - initialize directly in virtio_gpu_rutabaga_realize(..)
> >    - add debug callback to hook into QEMU error's APIs
> >
> >v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
> >    - Autodetect Wayland socket when not explicitly specified
> >    - Fix map_blob error paths
> >    - Add comment why we need both `res` and `resource` in create blob
> >    - Cast and whitespace fixes
> >    - Big endian check comes before virtio_gpu_rutabaga_init().
> >    - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
> >
> >v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
> >    - Double checked all casts
> >    - Remove unnecessary parenthesis
> >    - Removed `resource` in create_blob
> >    - Added comment about failure case
> >    - Pass user-provided socket as-is
> >    - Use stack variable rather than heap allocation
> >    - Future-proofed map info API to give access flags as well
> >
> >v5: Incorporated feedback from Akihiko Odaki:
> >    - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
> >    - Simplify num_capsets check
> >    - Call cleanup mapping on error paths
> >    - uint64_t --> void* for rutabaga_map(..)
> >    - Removed unnecessary parenthesis
> >    - Removed unnecessary cast
> >    - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
> >    - Reuse result variable
> >
> >v6: Incorporated feedback from Akihiko Odaki:
> >    - Remove unnecessary #ifndef
> >    - Disable scanout when appropriate
> >    - CHECK capset index within range outside loop
> >    - Add capset_version
> >
> >v7: Incorporated feedback from Akihiko Odaki:
> >    - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
> >
> >v9: Incorportated feedback from Akihiko Odaki:
> >    - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
> >    - Add error_setg(..) after rutabaga_init(..)
> >
> >v10: Incorportated feedback from Akihiko Odaki:
> >    - error_setg(..) --> error_setg_errno(..) when appropriate
> >    - virtio_gpu_rutabaga_init returns a bool instead of an int
> >
> >v11: Incorportated feedback from Philippe Mathieu-Daudé:
> >    - C-style /* */ comments and avoid // comments.
> >    - GPL-2.0 --> GPL-2.0-or-later
> >
> > hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
> > hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
> > hw/display/virtio-vga-rutabaga.c     |   53 ++
> > 3 files changed, 1224 insertions(+)
> > create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> > create mode 100644 hw/display/virtio-gpu-rutabaga.c
> > create mode 100644 hw/display/virtio-vga-rutabaga.c
> >
> >diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
> b/hw/display/virtio-gpu-pci-rutabaga.c
> >new file mode 100644
> >index 0000000000..311eff308a
> >--- /dev/null
> >+++ b/hw/display/virtio-gpu-pci-rutabaga.c
> >@@ -0,0 +1,50 @@
> >+/*
> >+ * SPDX-License-Identifier: GPL-2.0-or-later
> >+ */
> >+
> >+#include "qemu/osdep.h"
> >+#include "qapi/error.h"
> >+#include "qemu/module.h"
> >+#include "hw/pci/pci.h"
> >+#include "hw/qdev-properties.h"
> >+#include "hw/virtio/virtio.h"
> >+#include "hw/virtio/virtio-bus.h"
> >+#include "hw/virtio/virtio-gpu-pci.h"
> >+#include "qom/object.h"
> >+
> >+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
> >+typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
> >+DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI, VIRTIO_GPU_RUTABAGA_PCI,
> >+                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
> >+
> >+struct VirtIOGPURutabagaPCI {
> >+    VirtIOGPUPCIBase parent_obj;
> >+    VirtIOGPURutabaga vdev;
> >+};
> >+
> >+static void virtio_gpu_rutabaga_initfn(Object *obj)
> >+{
> >+    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
> >+
> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
> >+    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> >+}
> >+
> >+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
> >+    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
> >+    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
> >+    .instance_size = sizeof(VirtIOGPURutabagaPCI),
> >+    .instance_init = virtio_gpu_rutabaga_initfn,
> >+};
> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
> >+module_kconfig(VIRTIO_PCI);
> >+
> >+static void virtio_gpu_rutabaga_pci_register_types(void)
> >+{
> >+    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
> >+}
> >+
> >+type_init(virtio_gpu_rutabaga_pci_register_types)
> >+
> >+module_dep("hw-display-virtio-gpu-pci");
> >diff --git a/hw/display/virtio-gpu-rutabaga.c
> b/hw/display/virtio-gpu-rutabaga.c
> >new file mode 100644
> >index 0000000000..9018e5a702
> >--- /dev/null
> >+++ b/hw/display/virtio-gpu-rutabaga.c
> >@@ -0,0 +1,1121 @@
> >+/*
> >+ * SPDX-License-Identifier: GPL-2.0-or-later
> >+ */
> >+
> >+#include "qemu/osdep.h"
> >+#include "qapi/error.h"
> >+#include "qemu/error-report.h"
> >+#include "qemu/iov.h"
> >+#include "trace.h"
> >+#include "hw/virtio/virtio.h"
> >+#include "hw/virtio/virtio-gpu.h"
> >+#include "hw/virtio/virtio-gpu-pixman.h"
> >+#include "hw/virtio/virtio-iommu.h"
> >+
> >+#include <glib/gmem.h>
> >+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
> >+
> >+#define CHECK(condition, cmd)
>      \
> >+    do {
>       \
> >+        if (!(condition)) {
>      \
> >+            error_report("CHECK failed in %s() %s:" "%d", __func__,
>      \
> >+                         __FILE__, __LINE__);
>      \
> >+            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>       \
> >+            return;
>      \
> >+       }
>       \
> >+    } while (0)
> >+
> >+/*
> >+ * This is the size of the char array in struct sock_addr_un. No Wayland
> socket
> >+ * can be created with a path longer than this, including the null
> terminator.
> >+ */
> >+#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
> >+
> >+struct rutabaga_aio_data {
> >+    struct VirtIOGPURutabaga *vr;
> >+    struct rutabaga_fence fence;
> >+};
> >+
> >+static void
> >+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
> virtio_gpu_scanout *s,
> >+                                  uint32_t resource_id)
> >+{
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct rutabaga_transfer transfer = { 0 };
> >+    struct iovec transfer_iovec;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    res = virtio_gpu_find_resource(g, resource_id);
> >+    if (!res) {
> >+        return;
> >+    }
> >+
> >+    if (res->width != s->current_cursor->width ||
> >+        res->height != s->current_cursor->height) {
> >+        return;
> >+    }
> >+
> >+    transfer.x = 0;
> >+    transfer.y = 0;
> >+    transfer.z = 0;
> >+    transfer.w = res->width;
> >+    transfer.h = res->height;
> >+    transfer.d = 1;
> >+
> >+    transfer_iovec.iov_base = s->current_cursor->data;
> >+    transfer_iovec.iov_len = res->width * res->height * 4;
> >+
> >+    rutabaga_resource_transfer_read(vr->rutabaga, 0,
> >+                                    resource_id, &transfer,
> >+                                    &transfer_iovec);
> >+}
> >+
> >+static void
> >+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
> >+{
> >+    VirtIOGPU *g = VIRTIO_GPU(b);
> >+    virtio_gpu_process_cmdq(g);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
> >+                                struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct rutabaga_create_3d rc_3d = { 0 };
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_resource_create_2d c2d;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(c2d);
> >+    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> >+                                       c2d.width, c2d.height);
> >+
> >+    rc_3d.target = 2;
> >+    rc_3d.format = c2d.format;
> >+    rc_3d.bind = (1 << 1);
> >+    rc_3d.width = c2d.width;
> >+    rc_3d.height = c2d.height;
> >+    rc_3d.depth = 1;
> >+    rc_3d.array_size = 1;
> >+    rc_3d.last_level = 0;
> >+    rc_3d.nr_samples = 0;
> >+    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
> >+
> >+    result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id,
> &rc_3d);
> >+    CHECK(!result, cmd);
> >+
> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >+    res->width = c2d.width;
> >+    res->height = c2d.height;
> >+    res->format = c2d.format;
> >+    res->resource_id = c2d.resource_id;
> >+
> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
> >+                                struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct rutabaga_create_3d rc_3d = { 0 };
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_resource_create_3d c3d;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(c3d);
> >+
> >+    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> >+                                       c3d.width, c3d.height, c3d.depth);
> >+
> >+    rc_3d.target = c3d.target;
> >+    rc_3d.format = c3d.format;
> >+    rc_3d.bind = c3d.bind;
> >+    rc_3d.width = c3d.width;
> >+    rc_3d.height = c3d.height;
> >+    rc_3d.depth = c3d.depth;
> >+    rc_3d.array_size = c3d.array_size;
> >+    rc_3d.last_level = c3d.last_level;
> >+    rc_3d.nr_samples = c3d.nr_samples;
> >+    rc_3d.flags = c3d.flags;
> >+
> >+    result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id,
> &rc_3d);
> >+    CHECK(!result, cmd);
> >+
> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >+    res->width = c3d.width;
> >+    res->height = c3d.height;
> >+    res->format = c3d.format;
> >+    res->resource_id = c3d.resource_id;
> >+
> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_resource_unref(VirtIOGPU *g,
> >+                            struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_resource_unref unref;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(unref);
> >+
> >+    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> >+
> >+    res = virtio_gpu_find_resource(g, unref.resource_id);
> >+    CHECK(res, cmd);
> >+
> >+    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
> >+    CHECK(!result, cmd);
> >+
> >+    if (res->image) {
> >+        pixman_image_unref(res->image);
> >+    }
> >+
> >+    QTAILQ_REMOVE(&g->reslist, res, next);
> >+    g_free(res);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_context_create(VirtIOGPU *g,
> >+                            struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_ctx_create cc;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(cc);
> >+    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
> >+                                    cc.debug_name);
> >+
> >+    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
> >+                                     cc.context_init, cc.debug_name,
> cc.nlen);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_context_destroy(VirtIOGPU *g,
> >+                             struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_ctx_destroy cd;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(cd);
> >+    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
> >+
> >+    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> >+{
> >+    int32_t result, i;
> >+    struct virtio_gpu_scanout *scanout = NULL;
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct rutabaga_transfer transfer = { 0 };
> >+    struct iovec transfer_iovec;
> >+    struct virtio_gpu_resource_flush rf;
> >+    bool found = false;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+    if (vr->headless) {
> >+        return;
> >+    }
> >+
> >+    VIRTIO_GPU_FILL_CMD(rf);
> >+    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
> >+                                   rf.r.width, rf.r.height, rf.r.x,
> rf.r.y);
> >+
> >+    res = virtio_gpu_find_resource(g, rf.resource_id);
> >+    CHECK(res, cmd);
> >+
> >+    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
> >+        scanout = &g->parent_obj.scanout[i];
> >+        if (i == res->scanout_bitmask) {
> >+            found = true;
> >+            break;
> >+        }
> >+    }
> >+
> >+    if (!found) {
> >+        return;
> >+    }
> >+
> >+    transfer.x = 0;
> >+    transfer.y = 0;
> >+    transfer.z = 0;
> >+    transfer.w = res->width;
> >+    transfer.h = res->height;
> >+    transfer.d = 1;
> >+
> >+    transfer_iovec.iov_base = pixman_image_get_data(res->image);
> >+    transfer_iovec.iov_len = res->width * res->height * 4;
> >+
> >+    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
> >+                                             rf.resource_id, &transfer,
> >+                                             &transfer_iovec);
> >+    CHECK(!result, cmd);
> >+    dpy_gfx_update_full(scanout->con);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> >+{
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_scanout *scanout = NULL;
> >+    struct virtio_gpu_set_scanout ss;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+    if (vr->headless) {
> >+        return;
> >+    }
> >+
> >+    VIRTIO_GPU_FILL_CMD(ss);
> >+    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
> >+                                     ss.r.width, ss.r.height, ss.r.x,
> ss.r.y);
> >+
> >+    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
> >+    scanout = &g->parent_obj.scanout[ss.scanout_id];
> >+
> >+    if (ss.resource_id == 0) {
> >+        dpy_gfx_replace_surface(scanout->con, NULL);
> >+        dpy_gl_scanout_disable(scanout->con);
> >+        return;
> >+    }
> >+
> >+    res = virtio_gpu_find_resource(g, ss.resource_id);
> >+    CHECK(res, cmd);
> >+
> >+    if (!res->image) {
> >+        pixman_format_code_t pformat;
> >+        pformat = virtio_gpu_get_pixman_format(res->format);
> >+        CHECK(pformat, cmd);
> >+
> >+        res->image = pixman_image_create_bits(pformat,
> >+                                              res->width,
> >+                                              res->height,
> >+                                              NULL, 0);
> >+        CHECK(res->image, cmd);
> >+        pixman_image_ref(res->image);
> >+    }
> >+
> >+    g->parent_obj.enable = 1;
> >+
> >+    /* realloc the surface ptr */
> >+    scanout->ds = qemu_create_displaysurface_pixman(res->image);
> >+    dpy_gfx_replace_surface(scanout->con, NULL);
> >+    dpy_gfx_replace_surface(scanout->con, scanout->ds);
> >+    res->scanout_bitmask = ss.scanout_id;
> >+}
> >+
> >+static void
> >+rutabaga_cmd_submit_3d(VirtIOGPU *g,
> >+                       struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_cmd_submit cs;
> >+    struct rutabaga_command rutabaga_cmd = { 0 };
> >+    g_autofree uint8_t *buf = NULL;
> >+    size_t s;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(cs);
> >+    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
> >+
> >+    buf = g_new0(uint8_t, cs.size);
> >+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
> >+                   sizeof(cs), buf, cs.size);
> >+    CHECK(s == cs.size, cmd);
> >+
> >+    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
> >+    rutabaga_cmd.cmd = buf;
> >+    rutabaga_cmd.cmd_size = cs.size;
> >+
> >+    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct rutabaga_transfer transfer = { 0 };
> >+    struct virtio_gpu_transfer_to_host_2d t2d;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(t2d);
> >+    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
> >+
> >+    transfer.x = t2d.r.x;
> >+    transfer.y = t2d.r.y;
> >+    transfer.z = 0;
> >+    transfer.w = t2d.r.width;
> >+    transfer.h = t2d.r.height;
> >+    transfer.d = 1;
> >+
> >+    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
> t2d.resource_id,
> >+                                              &transfer);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct rutabaga_transfer transfer = { 0 };
> >+    struct virtio_gpu_transfer_host_3d t3d;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(t3d);
> >+    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
> >+
> >+    transfer.x = t3d.box.x;
> >+    transfer.y = t3d.box.y;
> >+    transfer.z = t3d.box.z;
> >+    transfer.w = t3d.box.w;
> >+    transfer.h = t3d.box.h;
> >+    transfer.d = t3d.box.d;
> >+    transfer.level = t3d.level;
> >+    transfer.stride = t3d.stride;
> >+    transfer.layer_stride = t3d.layer_stride;
> >+    transfer.offset = t3d.offset;
> >+
> >+    result = rutabaga_resource_transfer_write(vr->rutabaga,
> t3d.hdr.ctx_id,
> >+                                              t3d.resource_id,
> &transfer);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
> >+                                   struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct rutabaga_transfer transfer = { 0 };
> >+    struct virtio_gpu_transfer_host_3d t3d;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(t3d);
> >+    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
> >+
> >+    transfer.x = t3d.box.x;
> >+    transfer.y = t3d.box.y;
> >+    transfer.z = t3d.box.z;
> >+    transfer.w = t3d.box.w;
> >+    transfer.h = t3d.box.h;
> >+    transfer.d = t3d.box.d;
> >+    transfer.level = t3d.level;
> >+    transfer.stride = t3d.stride;
> >+    transfer.layer_stride = t3d.layer_stride;
> >+    transfer.offset = t3d.offset;
> >+
> >+    result = rutabaga_resource_transfer_read(vr->rutabaga,
> t3d.hdr.ctx_id,
> >+                                             t3d.resource_id, &transfer,
> NULL);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> >+{
> >+    struct rutabaga_iovecs vecs = { 0 };
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_resource_attach_backing att_rb;
> >+    int ret;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(att_rb);
> >+    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
> >+
> >+    res = virtio_gpu_find_resource(g, att_rb.resource_id);
> >+    CHECK(res, cmd);
> >+    CHECK(!res->iov, cmd);
> >+
> >+    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
> sizeof(att_rb),
> >+                                        cmd, NULL, &res->iov,
> &res->iov_cnt);
> >+    CHECK(!ret, cmd);
> >+
> >+    vecs.iovecs = res->iov;
> >+    vecs.num_iovecs = res->iov_cnt;
> >+
> >+    ret = rutabaga_resource_attach_backing(vr->rutabaga,
> att_rb.resource_id,
> >+                                           &vecs);
> >+    if (ret != 0) {
> >+        virtio_gpu_cleanup_mapping(g, res);
> >+    }
> >+
> >+    CHECK(!ret, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> >+{
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_resource_detach_backing detach_rb;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(detach_rb);
> >+    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
> >+
> >+    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
> >+    CHECK(res, cmd);
> >+
> >+    rutabaga_resource_detach_backing(vr->rutabaga,
> >+                                     detach_rb.resource_id);
> >+
> >+    virtio_gpu_cleanup_mapping(g, res);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_ctx_resource att_res;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(att_res);
> >+    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
> >+                                        att_res.resource_id);
> >+
> >+    result = rutabaga_context_attach_resource(vr->rutabaga,
> att_res.hdr.ctx_id,
> >+                                              att_res.resource_id);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_ctx_resource det_res;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(det_res);
> >+    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
> >+                                        det_res.resource_id);
> >+
> >+    result = rutabaga_context_detach_resource(vr->rutabaga,
> det_res.hdr.ctx_id,
> >+                                              det_res.resource_id);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
> virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_get_capset_info info;
> >+    struct virtio_gpu_resp_capset_info resp;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(info);
> >+
> >+    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
> >+                                      &resp.capset_id,
> &resp.capset_max_version,
> >+                                      &resp.capset_max_size);
> >+    CHECK(!result, cmd);
> >+
> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> >+}
> >+
> >+static void
> >+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> >+{
> >+    int32_t result;
> >+    struct virtio_gpu_get_capset gc;
> >+    struct virtio_gpu_resp_capset *resp;
> >+    uint32_t capset_size, capset_version;
> >+    uint32_t current_id, i;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(gc);
> >+    for (i = 0; i < vr->num_capsets; i++) {
> >+        result = rutabaga_get_capset_info(vr->rutabaga, i,
> >+                                          &current_id, &capset_version,
> >+                                          &capset_size);
> >+        CHECK(!result, cmd);
> >+
> >+        if (current_id == gc.capset_id) {
> >+            break;
> >+        }
> >+    }
> >+
> >+    CHECK(i < vr->num_capsets, cmd);
> >+
> >+    resp = g_malloc0(sizeof(*resp) + capset_size);
> >+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
> >+    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
> >+                        resp->capset_data, capset_size);
> >+
> >+    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
> capset_size);
> >+    g_free(resp);
> >+}
> >+
> >+static void
> >+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
> >+                                  struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int result;
> >+    struct rutabaga_iovecs vecs = { 0 };
> >+    g_autofree struct virtio_gpu_simple_resource *res = NULL;
> >+    struct virtio_gpu_resource_create_blob cblob;
> >+    struct rutabaga_create_blob rc_blob = { 0 };
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(cblob);
> >+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> >+
> >+    CHECK(cblob.resource_id != 0, cmd);
> >+
> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >+
> >+    res->resource_id = cblob.resource_id;
> >+    res->blob_size = cblob.size;
> >+
> >+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >+        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
> >+                                               sizeof(cblob), cmd,
> &res->addrs,
> >+                                               &res->iov, &res->iov_cnt);
> >+        CHECK(!result, cmd);
> >+    }
> >+
> >+    rc_blob.blob_id = cblob.blob_id;
> >+    rc_blob.blob_mem = cblob.blob_mem;
> >+    rc_blob.blob_flags = cblob.blob_flags;
> >+    rc_blob.size = cblob.size;
> >+
> >+    vecs.iovecs = res->iov;
> >+    vecs.num_iovecs = res->iov_cnt;
> >+
> >+    result = rutabaga_resource_create_blob(vr->rutabaga,
> cblob.hdr.ctx_id,
> >+                                           cblob.resource_id, &rc_blob,
> &vecs,
> >+                                           NULL);
> >+
> >+    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >+        virtio_gpu_cleanup_mapping(g, res);
> >+    }
> >+
> >+    CHECK(!result, cmd);
> >+
> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >+    res = NULL;
> >+}
> >+
> >+static void
> >+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
> >+                               struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    uint32_t map_info = 0;
> >+    uint32_t slot = 0;
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct rutabaga_mapping mapping = { 0 };
> >+    struct virtio_gpu_resource_map_blob mblob;
> >+    struct virtio_gpu_resp_map_info resp = { 0 };
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(mblob);
> >+
> >+    CHECK(mblob.resource_id != 0, cmd);
> >+
> >+    res = virtio_gpu_find_resource(g, mblob.resource_id);
> >+    CHECK(res, cmd);
> >+
> >+    result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
> >+                                        &map_info);
> >+    CHECK(!result, cmd);
> >+
> >+    /*
> >+     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu spec,
> but do
> >+     * exist to potentially allow the hypervisor to restrict write
> access to
> >+     * memory. QEMU does not need to use this functionality at the
> moment.
> >+     */
> >+    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
> >+
> >+    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
> &mapping);
> >+    CHECK(!result, cmd);
> >+
> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
> >+        if (vr->memory_regions[slot].used) {
> >+            continue;
> >+        }
> >+
> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
> >+        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
> >+                                   mapping.ptr);
> >+        memory_region_add_subregion(&g->parent_obj.hostmem,
> >+                                    mblob.offset, mr);
> >+        vr->memory_regions[slot].resource_id = mblob.resource_id;
> >+        vr->memory_regions[slot].used = 1;
> >+        break;
> >+    }
> >+
> >+    if (slot >= MAX_SLOTS) {
> >+        result = rutabaga_resource_unmap(vr->rutabaga,
> mblob.resource_id);
> >+        CHECK(!result, cmd);
> >+    }
> >+
> >+    CHECK(slot < MAX_SLOTS, cmd);
> >+
> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> >+}
> >+
> >+static void
> >+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    int32_t result;
> >+    uint32_t slot = 0;
> >+    struct virtio_gpu_simple_resource *res;
> >+    struct virtio_gpu_resource_unmap_blob ublob;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(ublob);
> >+
> >+    CHECK(ublob.resource_id != 0, cmd);
> >+
> >+    res = virtio_gpu_find_resource(g, ublob.resource_id);
> >+    CHECK(res, cmd);
> >+
> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
> >+        if (vr->memory_regions[slot].resource_id != ublob.resource_id) {
> >+            continue;
> >+        }
> >+
> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
> >+        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
> >+
> >+        vr->memory_regions[slot].resource_id = 0;
> >+        vr->memory_regions[slot].used = 0;
> >+        break;
> >+    }
> >+
> >+    CHECK(slot < MAX_SLOTS, cmd);
> >+    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
> >+                                struct virtio_gpu_ctrl_command *cmd)
> >+{
> >+    struct rutabaga_fence fence = { 0 };
> >+    int32_t result;
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
> >+
> >+    switch (cmd->cmd_hdr.type) {
> >+    case VIRTIO_GPU_CMD_CTX_CREATE:
> >+        rutabaga_cmd_context_create(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_CTX_DESTROY:
> >+        rutabaga_cmd_context_destroy(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
> >+        rutabaga_cmd_create_resource_2d(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
> >+        rutabaga_cmd_create_resource_3d(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_SUBMIT_3D:
> >+        rutabaga_cmd_submit_3d(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
> >+        rutabaga_cmd_transfer_to_host_2d(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
> >+        rutabaga_cmd_transfer_to_host_3d(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
> >+        rutabaga_cmd_transfer_from_host_3d(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
> >+        rutabaga_cmd_attach_backing(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
> >+        rutabaga_cmd_detach_backing(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_SET_SCANOUT:
> >+        rutabaga_cmd_set_scanout(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
> >+        rutabaga_cmd_resource_flush(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
> >+        rutabaga_cmd_resource_unref(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
> >+        rutabaga_cmd_ctx_attach_resource(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
> >+        rutabaga_cmd_ctx_detach_resource(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
> >+        rutabaga_cmd_get_capset_info(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_GET_CAPSET:
> >+        rutabaga_cmd_get_capset(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
> >+        virtio_gpu_get_display_info(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_GET_EDID:
> >+        virtio_gpu_get_edid(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
> >+        rutabaga_cmd_resource_create_blob(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
> >+        rutabaga_cmd_resource_map_blob(g, cmd);
> >+        break;
> >+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
> >+        rutabaga_cmd_resource_unmap_blob(g, cmd);
> >+        break;
> >+    default:
> >+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >+        break;
> >+    }
> >+
> >+    if (cmd->finished) {
> >+        return;
> >+    }
> >+    if (cmd->error) {
> >+        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
> >+                     cmd->cmd_hdr.type, cmd->error);
> >+        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
> >+        return;
> >+    }
> >+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
> VIRTIO_GPU_RESP_OK_NODATA);
> >+        return;
> >+    }
> >+
> >+    fence.flags = cmd->cmd_hdr.flags;
> >+    fence.ctx_id = cmd->cmd_hdr.ctx_id;
> >+    fence.fence_id = cmd->cmd_hdr.fence_id;
> >+    fence.ring_idx = cmd->cmd_hdr.ring_idx;
> >+
> >+    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
> cmd->cmd_hdr.type);
> >+
> >+    result = rutabaga_create_fence(vr->rutabaga, &fence);
> >+    CHECK(!result, cmd);
> >+}
> >+
> >+static void
> >+virtio_gpu_rutabaga_aio_cb(void *opaque)
> >+{
> >+    struct rutabaga_aio_data *data = opaque;
> >+    VirtIOGPU *g = VIRTIO_GPU(data->vr);
> >+    struct rutabaga_fence fence_data = data->fence;
> >+    struct virtio_gpu_ctrl_command *cmd, *tmp;
> >+
> >+    uint32_t signaled_ctx_specific = fence_data.flags &
> >+                                     RUTABAGA_FLAG_INFO_RING_IDX;
> >+
> >+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
> >+        /*
> >+         * Due to context specific timelines.
> >+         */
> >+        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
> >+                                       RUTABAGA_FLAG_INFO_RING_IDX;
> >+
> >+        if (signaled_ctx_specific != target_ctx_specific) {
> >+            continue;
> >+        }
> >+
> >+        if (signaled_ctx_specific &&
> >+           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
> >+            continue;
> >+        }
> >+
> >+        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
> >+            continue;
> >+        }
> >+
> >+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
> VIRTIO_GPU_RESP_OK_NODATA);
> >+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
> >+        g_free(cmd);
> >+    }
> >+
> >+    g_free(data);
> >+}
> >+
> >+static void
> >+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> >+                             const struct rutabaga_fence *fence) {
> >+    struct rutabaga_aio_data *data;
> >+    VirtIOGPU *g = (VirtIOGPU *)user_data;
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    /*
> >+     * gfxstream and both cross-domain (and even newer versions
> virglrenderer:
> >+     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
> completion on
> >+     * threads ("callback threads") that are different from the thread
> that
> >+     * processes the command queue ("main thread").
> >+     *
> >+     * crosvm and other virtio-gpu 1.1 implementations enable callback
> threads
> >+     * via locking.  However, on QEMU a deadlock is observed if
> >+     * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback]
> is used
> >+     * from a thread that is not the main thread.
> >+     *
> >+     * The reason is QEMU's internal locking is designed to work with
> QEMU
> >+     * threads (see rcu_register_thread()) and not generic C/C++/Rust
> threads.
> >+     * For now, we can workaround this by scheduling the return of the
> >+     * fence descriptors on the main thread.
> >+     */
> >+
> >+    data = g_new0(struct rutabaga_aio_data, 1);
> >+    data->vr = vr;
> >+    data->fence = *fence;
> >+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> >+                            virtio_gpu_rutabaga_aio_cb,
> >+                            data);
> >+}
> >+
> >+static void
> >+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
> >+                             const struct rutabaga_debug *debug) {
> >+
> >+    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
> >+        error_report("%s", debug->message);
> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
> >+        warn_report("%s", debug->message);
> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
> >+        info_report("%s", debug->message);
> >+    }
> >+}
> >+
> >+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
> >+{
> >+    int result;
> >+    uint64_t capset_mask;
> >+    struct rutabaga_builder builder = { 0 };
> >+    char wayland_socket_path[UNIX_PATH_MAX];
> >+    struct rutabaga_channel channel = { 0 };
> >+    struct rutabaga_channels channels = { 0 };
> >+
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+    vr->rutabaga = NULL;
> >+
> >+    if (!vr->capset_names) {
> >+        error_setg(errp, "a capset name from the virtio-gpu spec is
> required");
> >+        return false;
> >+    }
> >+
> >+    builder.wsi = RUTABAGA_WSI_SURFACELESS;
> >+    /*
> >+     * Currently, if WSI is specified, the only valid strings are
> "surfaceless"
> >+     * or "headless".  Surfaceless doesn't create a native window
> surface, but
> >+     * does copy from the render target to the Pixman buffer if a
> virtio-gpu
> >+     * 2D hypercall is issued.  Surfacless is the default.
> >+     *
> >+     * Headless is like surfaceless, but doesn't copy to the Pixman
> buffer. The
> >+     * use case is automated testing environments where there is no need
> to view
> >+     * results.
> >+     *
> >+     * In the future, more performant virtio-gpu 2D UI integration may
> be added.
> >+     */
> >+    if (vr->wsi) {
> >+        if (g_str_equal(vr->wsi, "surfaceless")) {
> >+            vr->headless = false;
> >+        } else if (g_str_equal(vr->wsi, "headless")) {
> >+            vr->headless = true;
> >+        } else {
> >+            error_setg(errp, "invalid wsi option selected");
> >+            return false;
> >+        }
> >+    }
> >+
> >+    result = rutabaga_calculate_capset_mask(vr->capset_names,
> &capset_mask);
>
> First, sorry for responding after such a long time. I've been busy with
> work and I'm doing QEMU in my free time.
>
> In iteration 1 I've raised the topic on capset_names [1] and I haven't
> seen it answered properly. Perhaps I need to rephrase a bit so here we go:
> capset_names seems to be colon-separated list of bit options managed by
> rutabaga. This introduces yet another way of options handling. There have
> been talks about harmonizing options handling in QEMU since apparently it
> is considered too complex [2,3].


> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
> create "capset" from QOM properties?

IIUC these flags could come from virtio_gpu.h which is already present in
> the QEMU tree. This would not inly shortcut the dependency on rutabaga here
> but would also be more idiomatic QEMU (since it makes the options more
> introspectable by internal machinery).


> Of course the bitfield approach would require modifications in QEMU
> whenever rutabaga gains new features. However, I figure that in the long
> term rutabaga will be quite feature complete such that the benefits of
> idiomatic QEMU handling will outweigh the decoupling of the projects.
>
> What do you think?
>

I think what you're suggesting is something like -device
virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
gfxstream_vulkan + cross_domain]?

We actually did consider something like that when adding the
--context-types flag [with crosvm], but there was a desire for a
human-readable format rather than numbers [even if they are in the
virtio-gpu spec].

Additionally, there are quite a few context types that people are playing
around with [gfxstream-gles, gfxstream-composer] that are launchable and
aren't in the spec just yet.

Also, a key feature we want to explicitly **not** turn on all available
context-types and let the user decide.  That'll allow guest Mesa in
particular to do its magic in its loader.  So one may run Zink + ANV with
ioctl forwarding, or Iris + ioctl forwarding and compare performance with
the same guest image.

And another thing is one needs some knowledge of the host system to choose
the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
MacOS.  So I think the task of choosing the right context type will fall to
projects that depend on QEMU (such as Android Emulator) which have some
knowledge of the host environment.

We actually have a graphics detector somewhere that calls VK/OpenGL before
launching the VM and sets the right options.  Plan is to port into
gfxstream, maybe we could use that.

So given the desire for human readable formats, being portable across VMMs
(crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
conversion was put in the rutabaga API.  So I wouldn't change it for those
reasons.


>
> Best regards,
> Bernhard
>
> [1]
> https://lore.kernel.org/qemu-devel/D15471EC-D1D1-4DAA-A6E7-19827C36AEC8@gmail.com/
> [2] https://m.youtube.com/watch?v=gtpOLQgnwug
> [3] https://m.youtube.com/watch?v=FMQtog6KUlo
>
> >+    if (result) {
> >+        error_setg_errno(errp, -result, "invalid capset names: %s",
> >+                         vr->capset_names);
> >+        return false;
> >+    }
> >+
> >+    builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
> >+    builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
> >+    builder.capset_mask = capset_mask;
> >+    builder.user_data = (uint64_t)g;
> >+
> >+    /*
> >+     * If the user doesn't specify the wayland socket path, we try to
> infer
> >+     * the socket via a process similar to the one used by libwayland.
> >+     * libwayland does the following:
> >+     *
> >+     * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
> >+     *    $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
> >+     * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
> >+     * 3) Otherwise, don't pass a wayland socket to rutabaga. If a guest
> >+     *    wayland proxy is launched, it will fail to work.
> >+     */
> >+    channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
> >+    if (!vr->wayland_socket_path) {
> >+        const char *runtime_dir = getenv("XDG_RUNTIME_DIR");
> >+        const char *display = getenv("WAYLAND_DISPLAY");
> >+        if (!display) {
> >+            display = "wayland-0";
> >+        }
> >+
> >+        if (runtime_dir) {
> >+            result = snprintf(wayland_socket_path, UNIX_PATH_MAX,
> >+                              "%s/%s", runtime_dir, display);
> >+            if (result > 0 && result < UNIX_PATH_MAX) {
> >+                channel.channel_name = wayland_socket_path;
> >+            }
> >+        }
> >+    } else {
> >+        channel.channel_name = vr->wayland_socket_path;
> >+    }
> >+
> >+    if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
> >+        if (channel.channel_name) {
> >+            channels.channels = &channel;
> >+            channels.num_channels = 1;
> >+            builder.channels = &channels;
> >+        }
> >+    }
> >+
> >+    result = rutabaga_init(&builder, &vr->rutabaga);
> >+    if (result) {
> >+        error_setg_errno(errp, -result, "Failed to init rutabaga");
> >+        return result;
> >+    }
> >+
> >+    return true;
> >+}
> >+
> >+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
> >+{
> >+    int result;
> >+    uint32_t num_capsets;
> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >+
> >+    result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
> >+    if (result) {
> >+        error_report("Failed to get capsets");
> >+        return 0;
> >+    }
> >+    vr->num_capsets = num_capsets;
> >+    return num_capsets;
> >+}
> >+
> >+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev,
> VirtQueue *vq)
> >+{
> >+    VirtIOGPU *g = VIRTIO_GPU(vdev);
> >+    struct virtio_gpu_ctrl_command *cmd;
> >+
> >+    if (!virtio_queue_ready(vq)) {
> >+        return;
> >+    }
> >+
> >+    cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> >+    while (cmd) {
> >+        cmd->vq = vq;
> >+        cmd->error = 0;
> >+        cmd->finished = false;
> >+        QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
> >+        cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> >+    }
> >+
> >+    virtio_gpu_process_cmdq(g);
> >+}
> >+
> >+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
> >+{
> >+    int num_capsets;
> >+    VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
> >+    VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
> >+
> >+#if HOST_BIG_ENDIAN
> >+    error_setg(errp, "rutabaga is not supported on bigendian platforms");
> >+    return;
> >+#endif
> >+
> >+    if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
> >+        return;
> >+    }
> >+
> >+    num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
> >+    if (!num_capsets) {
> >+        return;
> >+    }
> >+
> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
> >+
> >+    bdev->virtio_config.num_capsets = num_capsets;
> >+    virtio_gpu_device_realize(qdev, errp);
> >+}
> >+
> >+static Property virtio_gpu_rutabaga_properties[] = {
> >+    DEFINE_PROP_STRING("capset_names", VirtIOGPURutabaga, capset_names),
> >+    DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
> >+                       wayland_socket_path),
> >+    DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
> >+    DEFINE_PROP_END_OF_LIST(),
> >+};
> >+
> >+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void
> *data)
> >+{
> >+    DeviceClass *dc = DEVICE_CLASS(klass);
> >+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> >+    VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
> >+    VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
> >+
> >+    vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
> >+    vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
> >+    vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
> >+    vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
> >+
> >+    vdc->realize = virtio_gpu_rutabaga_realize;
> >+    device_class_set_props(dc, virtio_gpu_rutabaga_properties);
> >+}
> >+
> >+static const TypeInfo virtio_gpu_rutabaga_info = {
> >+    .name = TYPE_VIRTIO_GPU_RUTABAGA,
> >+    .parent = TYPE_VIRTIO_GPU,
> >+    .instance_size = sizeof(VirtIOGPURutabaga),
> >+    .class_init = virtio_gpu_rutabaga_class_init,
> >+};
> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
> >+module_kconfig(VIRTIO_GPU);
> >+
> >+static void virtio_register_types(void)
> >+{
> >+    type_register_static(&virtio_gpu_rutabaga_info);
> >+}
> >+
> >+type_init(virtio_register_types)
> >+
> >+module_dep("hw-display-virtio-gpu");
> >diff --git a/hw/display/virtio-vga-rutabaga.c
> b/hw/display/virtio-vga-rutabaga.c
> >new file mode 100644
> >index 0000000000..b5b43e3b90
> >--- /dev/null
> >+++ b/hw/display/virtio-vga-rutabaga.c
> >@@ -0,0 +1,53 @@
> >+/*
> >+ * SPDX-License-Identifier: GPL-2.0-or-later
> >+ */
> >+
> >+#include "qemu/osdep.h"
> >+#include "hw/pci/pci.h"
> >+#include "hw/qdev-properties.h"
> >+#include "hw/virtio/virtio-gpu.h"
> >+#include "hw/display/vga.h"
> >+#include "qapi/error.h"
> >+#include "qemu/module.h"
> >+#include "virtio-vga.h"
> >+#include "qom/object.h"
> >+
> >+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
> >+
> >+typedef struct VirtIOVGARutabaga VirtIOVGARutabaga;
> >+DECLARE_INSTANCE_CHECKER(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA,
> >+                         TYPE_VIRTIO_VGA_RUTABAGA)
> >+
> >+struct VirtIOVGARutabaga {
> >+    VirtIOVGABase parent_obj;
> >+    VirtIOGPURutabaga vdev;
> >+};
> >+
> >+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
> >+{
> >+    VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
> >+
> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
> >+    VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> >+}
> >+
> >+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
> >+    .generic_name  = TYPE_VIRTIO_VGA_RUTABAGA,
> >+    .parent        = TYPE_VIRTIO_VGA_BASE,
> >+    .instance_size = sizeof(VirtIOVGARutabaga),
> >+    .instance_init = virtio_vga_rutabaga_inst_initfn,
> >+};
> >+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
> >+module_kconfig(VIRTIO_VGA);
> >+
> >+static void virtio_vga_register_types(void)
> >+{
> >+    if (have_vga) {
> >+        virtio_pci_types_register(&virtio_vga_rutabaga_info);
> >+    }
> >+}
> >+
> >+type_init(virtio_vga_register_types)
> >+
> >+module_dep("hw-display-virtio-vga");
>

[-- Attachment #2: Type: text/html, Size: 69624 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-14  4:38     ` Gurchetan Singh
@ 2023-09-14  7:23       ` Bernhard Beschow
  2023-09-15  2:38         ` Gurchetan Singh
  0 siblings, 1 reply; 34+ messages in thread
From: Bernhard Beschow @ 2023-09-14  7:23 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd



Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com> wrote:
>
>>
>>
>> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
>> gurchetansingh@chromium.org>:
>> >This adds initial support for gfxstream and cross-domain.  Both
>> >features rely on virtio-gpu blob resources and context types, which
>> >are also implemented in this patch.
>> >
>> >gfxstream has a long and illustrious history in Android graphics
>> >paravirtualization.  It has been powering graphics in the Android
>> >Studio Emulator for more than a decade, which is the main developer
>> >platform.
>> >
>> >Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>> >The key design characteristic was a 1:1 threading model and
>> >auto-generation, which fit nicely with the OpenGLES spec.  It also
>> >allowed easy layering with ANGLE on the host, which provides the GLES
>> >implementations on Windows or MacOS enviroments.
>> >
>> >gfxstream has traditionally been maintained by a single engineer, and
>> >between 2015 to 2021, the goldfish throne passed to Frank Yang.
>> >Historians often remark this glorious reign ("pax gfxstreama" is the
>> >academic term) was comparable to that of Augustus and both Queen
>> >Elizabeths.  Just to name a few accomplishments in a resplendent
>> >panoply: higher versions of GLES, address space graphics, snapshot
>> >support and CTS compliant Vulkan [b].
>> >
>> >One major drawback was the use of out-of-tree goldfish drivers.
>> >Android engineers didn't know much about DRM/KMS and especially TTM so
>> >a simple guest to host pipe was conceived.
>> >
>> >Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>> >the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
>> >port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>> >It was a symbol compatible replacement of virglrenderer [c] and named
>> >"AVDVirglrenderer".  This implementation forms the basis of the
>> >current gfxstream host implementation still in use today.
>> >
>> >cross-domain support follows a similar arc.  Originally conceived by
>> >Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>> >2018, it initially relied on the downstream "virtio-wl" device.
>> >
>> >In 2020 and 2021, virtio-gpu was extended to include blob resources
>> >and multiple timelines by yours truly, features gfxstream/cross-domain
>> >both require to function correctly.
>> >
>> >Right now, we stand at the precipice of a truly fantastic possibility:
>> >the Android Emulator powered by upstream QEMU and upstream Linux
>> >kernel.  gfxstream will then be packaged properfully, and app
>> >developers can even fix gfxstream bugs on their own if they encounter
>> >them.
>> >
>> >It's been quite the ride, my friends.  Where will gfxstream head next,
>> >nobody really knows.  I wouldn't be surprised if it's around for
>> >another decade, maintained by a new generation of Android graphics
>> >enthusiasts.
>> >
>> >Technical details:
>> >  - Very simple initial display integration: just used Pixman
>> >  - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>> >    calls
>> >
>> >Next steps for Android VMs:
>> >  - The next step would be improving display integration and UI interfaces
>> >    with the goal of the QEMU upstream graphics being in an emulator
>> >    release [d].
>> >
>> >Next steps for Linux VMs for display virtualization:
>> >  - For widespread distribution, someone needs to package Sommelier or the
>> >    wayland-proxy-virtwl [e] ideally into Debian main. In addition, newer
>> >    versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
>> >    which allows disabling KMS hypercalls.  If anyone cares enough, it'll
>> >    probably be possible to build a custom VM variant that uses this
>> display
>> >    virtualization strategy.
>> >
>> >[a]
>> https://android-review.googlesource.com/c/platform/development/+/34470
>> >[b]
>> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>> >[c]
>> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>> >[d] https://developer.android.com/studio/releases/emulator
>> >[e] https://github.com/talex5/wayland-proxy-virtwl
>> >
>> >Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>> >Tested-by: Alyssa Ross <hi@alyssa.is>
>> >Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>> >Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>> >---
>> >v1: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
>> >    - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>> >    - Used error_report(..)
>> >    - Used g_autofree to fix leaks on error paths
>> >    - Removed unnecessary casts
>> >    - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>> >
>> >v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau
>> and
>> >    Bernard Berschow:
>> >    - Parenthesis in CHECK macro
>> >    - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>> >    - delay until g->parent_obj.enable = 1
>> >    - Additional cast fixes
>> >    - initialize directly in virtio_gpu_rutabaga_realize(..)
>> >    - add debug callback to hook into QEMU error's APIs
>> >
>> >v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>> >    - Autodetect Wayland socket when not explicitly specified
>> >    - Fix map_blob error paths
>> >    - Add comment why we need both `res` and `resource` in create blob
>> >    - Cast and whitespace fixes
>> >    - Big endian check comes before virtio_gpu_rutabaga_init().
>> >    - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>> >
>> >v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>> >    - Double checked all casts
>> >    - Remove unnecessary parenthesis
>> >    - Removed `resource` in create_blob
>> >    - Added comment about failure case
>> >    - Pass user-provided socket as-is
>> >    - Use stack variable rather than heap allocation
>> >    - Future-proofed map info API to give access flags as well
>> >
>> >v5: Incorporated feedback from Akihiko Odaki:
>> >    - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>> >    - Simplify num_capsets check
>> >    - Call cleanup mapping on error paths
>> >    - uint64_t --> void* for rutabaga_map(..)
>> >    - Removed unnecessary parenthesis
>> >    - Removed unnecessary cast
>> >    - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>> >    - Reuse result variable
>> >
>> >v6: Incorporated feedback from Akihiko Odaki:
>> >    - Remove unnecessary #ifndef
>> >    - Disable scanout when appropriate
>> >    - CHECK capset index within range outside loop
>> >    - Add capset_version
>> >
>> >v7: Incorporated feedback from Akihiko Odaki:
>> >    - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>> >
>> >v9: Incorportated feedback from Akihiko Odaki:
>> >    - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>> >    - Add error_setg(..) after rutabaga_init(..)
>> >
>> >v10: Incorportated feedback from Akihiko Odaki:
>> >    - error_setg(..) --> error_setg_errno(..) when appropriate
>> >    - virtio_gpu_rutabaga_init returns a bool instead of an int
>> >
>> >v11: Incorportated feedback from Philippe Mathieu-Daudé:
>> >    - C-style /* */ comments and avoid // comments.
>> >    - GPL-2.0 --> GPL-2.0-or-later
>> >
>> > hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>> > hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
>> > hw/display/virtio-vga-rutabaga.c     |   53 ++
>> > 3 files changed, 1224 insertions(+)
>> > create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>> > create mode 100644 hw/display/virtio-gpu-rutabaga.c
>> > create mode 100644 hw/display/virtio-vga-rutabaga.c
>> >
>> >diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
>> b/hw/display/virtio-gpu-pci-rutabaga.c
>> >new file mode 100644
>> >index 0000000000..311eff308a
>> >--- /dev/null
>> >+++ b/hw/display/virtio-gpu-pci-rutabaga.c
>> >@@ -0,0 +1,50 @@
>> >+/*
>> >+ * SPDX-License-Identifier: GPL-2.0-or-later
>> >+ */
>> >+
>> >+#include "qemu/osdep.h"
>> >+#include "qapi/error.h"
>> >+#include "qemu/module.h"
>> >+#include "hw/pci/pci.h"
>> >+#include "hw/qdev-properties.h"
>> >+#include "hw/virtio/virtio.h"
>> >+#include "hw/virtio/virtio-bus.h"
>> >+#include "hw/virtio/virtio-gpu-pci.h"
>> >+#include "qom/object.h"
>> >+
>> >+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>> >+typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>> >+DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI, VIRTIO_GPU_RUTABAGA_PCI,
>> >+                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>> >+
>> >+struct VirtIOGPURutabagaPCI {
>> >+    VirtIOGPUPCIBase parent_obj;
>> >+    VirtIOGPURutabaga vdev;
>> >+};
>> >+
>> >+static void virtio_gpu_rutabaga_initfn(Object *obj)
>> >+{
>> >+    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>> >+
>> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
>> >+    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> >+}
>> >+
>> >+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>> >+    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>> >+    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>> >+    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>> >+    .instance_init = virtio_gpu_rutabaga_initfn,
>> >+};
>> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>> >+module_kconfig(VIRTIO_PCI);
>> >+
>> >+static void virtio_gpu_rutabaga_pci_register_types(void)
>> >+{
>> >+    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>> >+}
>> >+
>> >+type_init(virtio_gpu_rutabaga_pci_register_types)
>> >+
>> >+module_dep("hw-display-virtio-gpu-pci");
>> >diff --git a/hw/display/virtio-gpu-rutabaga.c
>> b/hw/display/virtio-gpu-rutabaga.c
>> >new file mode 100644
>> >index 0000000000..9018e5a702
>> >--- /dev/null
>> >+++ b/hw/display/virtio-gpu-rutabaga.c
>> >@@ -0,0 +1,1121 @@
>> >+/*
>> >+ * SPDX-License-Identifier: GPL-2.0-or-later
>> >+ */
>> >+
>> >+#include "qemu/osdep.h"
>> >+#include "qapi/error.h"
>> >+#include "qemu/error-report.h"
>> >+#include "qemu/iov.h"
>> >+#include "trace.h"
>> >+#include "hw/virtio/virtio.h"
>> >+#include "hw/virtio/virtio-gpu.h"
>> >+#include "hw/virtio/virtio-gpu-pixman.h"
>> >+#include "hw/virtio/virtio-iommu.h"
>> >+
>> >+#include <glib/gmem.h>
>> >+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>> >+
>> >+#define CHECK(condition, cmd)
>>      \
>> >+    do {
>>       \
>> >+        if (!(condition)) {
>>      \
>> >+            error_report("CHECK failed in %s() %s:" "%d", __func__,
>>      \
>> >+                         __FILE__, __LINE__);
>>      \
>> >+            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>       \
>> >+            return;
>>      \
>> >+       }
>>       \
>> >+    } while (0)
>> >+
>> >+/*
>> >+ * This is the size of the char array in struct sock_addr_un. No Wayland
>> socket
>> >+ * can be created with a path longer than this, including the null
>> terminator.
>> >+ */
>> >+#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>> >+
>> >+struct rutabaga_aio_data {
>> >+    struct VirtIOGPURutabaga *vr;
>> >+    struct rutabaga_fence fence;
>> >+};
>> >+
>> >+static void
>> >+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
>> virtio_gpu_scanout *s,
>> >+                                  uint32_t resource_id)
>> >+{
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct rutabaga_transfer transfer = { 0 };
>> >+    struct iovec transfer_iovec;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    res = virtio_gpu_find_resource(g, resource_id);
>> >+    if (!res) {
>> >+        return;
>> >+    }
>> >+
>> >+    if (res->width != s->current_cursor->width ||
>> >+        res->height != s->current_cursor->height) {
>> >+        return;
>> >+    }
>> >+
>> >+    transfer.x = 0;
>> >+    transfer.y = 0;
>> >+    transfer.z = 0;
>> >+    transfer.w = res->width;
>> >+    transfer.h = res->height;
>> >+    transfer.d = 1;
>> >+
>> >+    transfer_iovec.iov_base = s->current_cursor->data;
>> >+    transfer_iovec.iov_len = res->width * res->height * 4;
>> >+
>> >+    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> >+                                    resource_id, &transfer,
>> >+                                    &transfer_iovec);
>> >+}
>> >+
>> >+static void
>> >+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>> >+{
>> >+    VirtIOGPU *g = VIRTIO_GPU(b);
>> >+    virtio_gpu_process_cmdq(g);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>> >+                                struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct rutabaga_create_3d rc_3d = { 0 };
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_resource_create_2d c2d;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(c2d);
>> >+    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>> >+                                       c2d.width, c2d.height);
>> >+
>> >+    rc_3d.target = 2;
>> >+    rc_3d.format = c2d.format;
>> >+    rc_3d.bind = (1 << 1);
>> >+    rc_3d.width = c2d.width;
>> >+    rc_3d.height = c2d.height;
>> >+    rc_3d.depth = 1;
>> >+    rc_3d.array_size = 1;
>> >+    rc_3d.last_level = 0;
>> >+    rc_3d.nr_samples = 0;
>> >+    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>> >+
>> >+    result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id,
>> &rc_3d);
>> >+    CHECK(!result, cmd);
>> >+
>> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >+    res->width = c2d.width;
>> >+    res->height = c2d.height;
>> >+    res->format = c2d.format;
>> >+    res->resource_id = c2d.resource_id;
>> >+
>> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>> >+                                struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct rutabaga_create_3d rc_3d = { 0 };
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_resource_create_3d c3d;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(c3d);
>> >+
>> >+    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>> >+                                       c3d.width, c3d.height, c3d.depth);
>> >+
>> >+    rc_3d.target = c3d.target;
>> >+    rc_3d.format = c3d.format;
>> >+    rc_3d.bind = c3d.bind;
>> >+    rc_3d.width = c3d.width;
>> >+    rc_3d.height = c3d.height;
>> >+    rc_3d.depth = c3d.depth;
>> >+    rc_3d.array_size = c3d.array_size;
>> >+    rc_3d.last_level = c3d.last_level;
>> >+    rc_3d.nr_samples = c3d.nr_samples;
>> >+    rc_3d.flags = c3d.flags;
>> >+
>> >+    result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id,
>> &rc_3d);
>> >+    CHECK(!result, cmd);
>> >+
>> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >+    res->width = c3d.width;
>> >+    res->height = c3d.height;
>> >+    res->format = c3d.format;
>> >+    res->resource_id = c3d.resource_id;
>> >+
>> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_resource_unref(VirtIOGPU *g,
>> >+                            struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_resource_unref unref;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(unref);
>> >+
>> >+    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>> >+
>> >+    res = virtio_gpu_find_resource(g, unref.resource_id);
>> >+    CHECK(res, cmd);
>> >+
>> >+    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>> >+    CHECK(!result, cmd);
>> >+
>> >+    if (res->image) {
>> >+        pixman_image_unref(res->image);
>> >+    }
>> >+
>> >+    QTAILQ_REMOVE(&g->reslist, res, next);
>> >+    g_free(res);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_context_create(VirtIOGPU *g,
>> >+                            struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_ctx_create cc;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(cc);
>> >+    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>> >+                                    cc.debug_name);
>> >+
>> >+    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>> >+                                     cc.context_init, cc.debug_name,
>> cc.nlen);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_context_destroy(VirtIOGPU *g,
>> >+                             struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_ctx_destroy cd;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(cd);
>> >+    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>> >+
>> >+    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> *cmd)
>> >+{
>> >+    int32_t result, i;
>> >+    struct virtio_gpu_scanout *scanout = NULL;
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct rutabaga_transfer transfer = { 0 };
>> >+    struct iovec transfer_iovec;
>> >+    struct virtio_gpu_resource_flush rf;
>> >+    bool found = false;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+    if (vr->headless) {
>> >+        return;
>> >+    }
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(rf);
>> >+    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>> >+                                   rf.r.width, rf.r.height, rf.r.x,
>> rf.r.y);
>> >+
>> >+    res = virtio_gpu_find_resource(g, rf.resource_id);
>> >+    CHECK(res, cmd);
>> >+
>> >+    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>> >+        scanout = &g->parent_obj.scanout[i];
>> >+        if (i == res->scanout_bitmask) {
>> >+            found = true;
>> >+            break;
>> >+        }
>> >+    }
>> >+
>> >+    if (!found) {
>> >+        return;
>> >+    }
>> >+
>> >+    transfer.x = 0;
>> >+    transfer.y = 0;
>> >+    transfer.z = 0;
>> >+    transfer.w = res->width;
>> >+    transfer.h = res->height;
>> >+    transfer.d = 1;
>> >+
>> >+    transfer_iovec.iov_base = pixman_image_get_data(res->image);
>> >+    transfer_iovec.iov_len = res->width * res->height * 4;
>> >+
>> >+    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> >+                                             rf.resource_id, &transfer,
>> >+                                             &transfer_iovec);
>> >+    CHECK(!result, cmd);
>> >+    dpy_gfx_update_full(scanout->con);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> *cmd)
>> >+{
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_scanout *scanout = NULL;
>> >+    struct virtio_gpu_set_scanout ss;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+    if (vr->headless) {
>> >+        return;
>> >+    }
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(ss);
>> >+    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>> >+                                     ss.r.width, ss.r.height, ss.r.x,
>> ss.r.y);
>> >+
>> >+    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>> >+    scanout = &g->parent_obj.scanout[ss.scanout_id];
>> >+
>> >+    if (ss.resource_id == 0) {
>> >+        dpy_gfx_replace_surface(scanout->con, NULL);
>> >+        dpy_gl_scanout_disable(scanout->con);
>> >+        return;
>> >+    }
>> >+
>> >+    res = virtio_gpu_find_resource(g, ss.resource_id);
>> >+    CHECK(res, cmd);
>> >+
>> >+    if (!res->image) {
>> >+        pixman_format_code_t pformat;
>> >+        pformat = virtio_gpu_get_pixman_format(res->format);
>> >+        CHECK(pformat, cmd);
>> >+
>> >+        res->image = pixman_image_create_bits(pformat,
>> >+                                              res->width,
>> >+                                              res->height,
>> >+                                              NULL, 0);
>> >+        CHECK(res->image, cmd);
>> >+        pixman_image_ref(res->image);
>> >+    }
>> >+
>> >+    g->parent_obj.enable = 1;
>> >+
>> >+    /* realloc the surface ptr */
>> >+    scanout->ds = qemu_create_displaysurface_pixman(res->image);
>> >+    dpy_gfx_replace_surface(scanout->con, NULL);
>> >+    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>> >+    res->scanout_bitmask = ss.scanout_id;
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_submit_3d(VirtIOGPU *g,
>> >+                       struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_cmd_submit cs;
>> >+    struct rutabaga_command rutabaga_cmd = { 0 };
>> >+    g_autofree uint8_t *buf = NULL;
>> >+    size_t s;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(cs);
>> >+    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>> >+
>> >+    buf = g_new0(uint8_t, cs.size);
>> >+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>> >+                   sizeof(cs), buf, cs.size);
>> >+    CHECK(s == cs.size, cmd);
>> >+
>> >+    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>> >+    rutabaga_cmd.cmd = buf;
>> >+    rutabaga_cmd.cmd_size = cs.size;
>> >+
>> >+    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct rutabaga_transfer transfer = { 0 };
>> >+    struct virtio_gpu_transfer_to_host_2d t2d;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(t2d);
>> >+    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>> >+
>> >+    transfer.x = t2d.r.x;
>> >+    transfer.y = t2d.r.y;
>> >+    transfer.z = 0;
>> >+    transfer.w = t2d.r.width;
>> >+    transfer.h = t2d.r.height;
>> >+    transfer.d = 1;
>> >+
>> >+    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
>> t2d.resource_id,
>> >+                                              &transfer);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct rutabaga_transfer transfer = { 0 };
>> >+    struct virtio_gpu_transfer_host_3d t3d;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(t3d);
>> >+    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>> >+
>> >+    transfer.x = t3d.box.x;
>> >+    transfer.y = t3d.box.y;
>> >+    transfer.z = t3d.box.z;
>> >+    transfer.w = t3d.box.w;
>> >+    transfer.h = t3d.box.h;
>> >+    transfer.d = t3d.box.d;
>> >+    transfer.level = t3d.level;
>> >+    transfer.stride = t3d.stride;
>> >+    transfer.layer_stride = t3d.layer_stride;
>> >+    transfer.offset = t3d.offset;
>> >+
>> >+    result = rutabaga_resource_transfer_write(vr->rutabaga,
>> t3d.hdr.ctx_id,
>> >+                                              t3d.resource_id,
>> &transfer);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>> >+                                   struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct rutabaga_transfer transfer = { 0 };
>> >+    struct virtio_gpu_transfer_host_3d t3d;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(t3d);
>> >+    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>> >+
>> >+    transfer.x = t3d.box.x;
>> >+    transfer.y = t3d.box.y;
>> >+    transfer.z = t3d.box.z;
>> >+    transfer.w = t3d.box.w;
>> >+    transfer.h = t3d.box.h;
>> >+    transfer.d = t3d.box.d;
>> >+    transfer.level = t3d.level;
>> >+    transfer.stride = t3d.stride;
>> >+    transfer.layer_stride = t3d.layer_stride;
>> >+    transfer.offset = t3d.offset;
>> >+
>> >+    result = rutabaga_resource_transfer_read(vr->rutabaga,
>> t3d.hdr.ctx_id,
>> >+                                             t3d.resource_id, &transfer,
>> NULL);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> *cmd)
>> >+{
>> >+    struct rutabaga_iovecs vecs = { 0 };
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_resource_attach_backing att_rb;
>> >+    int ret;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(att_rb);
>> >+    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>> >+
>> >+    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>> >+    CHECK(res, cmd);
>> >+    CHECK(!res->iov, cmd);
>> >+
>> >+    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
>> sizeof(att_rb),
>> >+                                        cmd, NULL, &res->iov,
>> &res->iov_cnt);
>> >+    CHECK(!ret, cmd);
>> >+
>> >+    vecs.iovecs = res->iov;
>> >+    vecs.num_iovecs = res->iov_cnt;
>> >+
>> >+    ret = rutabaga_resource_attach_backing(vr->rutabaga,
>> att_rb.resource_id,
>> >+                                           &vecs);
>> >+    if (ret != 0) {
>> >+        virtio_gpu_cleanup_mapping(g, res);
>> >+    }
>> >+
>> >+    CHECK(!ret, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> *cmd)
>> >+{
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_resource_detach_backing detach_rb;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(detach_rb);
>> >+    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>> >+
>> >+    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>> >+    CHECK(res, cmd);
>> >+
>> >+    rutabaga_resource_detach_backing(vr->rutabaga,
>> >+                                     detach_rb.resource_id);
>> >+
>> >+    virtio_gpu_cleanup_mapping(g, res);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_ctx_resource att_res;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(att_res);
>> >+    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>> >+                                        att_res.resource_id);
>> >+
>> >+    result = rutabaga_context_attach_resource(vr->rutabaga,
>> att_res.hdr.ctx_id,
>> >+                                              att_res.resource_id);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_ctx_resource det_res;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(det_res);
>> >+    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>> >+                                        det_res.resource_id);
>> >+
>> >+    result = rutabaga_context_detach_resource(vr->rutabaga,
>> det_res.hdr.ctx_id,
>> >+                                              det_res.resource_id);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
>> virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_get_capset_info info;
>> >+    struct virtio_gpu_resp_capset_info resp;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(info);
>> >+
>> >+    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>> >+                                      &resp.capset_id,
>> &resp.capset_max_version,
>> >+                                      &resp.capset_max_size);
>> >+    CHECK(!result, cmd);
>> >+
>> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> *cmd)
>> >+{
>> >+    int32_t result;
>> >+    struct virtio_gpu_get_capset gc;
>> >+    struct virtio_gpu_resp_capset *resp;
>> >+    uint32_t capset_size, capset_version;
>> >+    uint32_t current_id, i;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(gc);
>> >+    for (i = 0; i < vr->num_capsets; i++) {
>> >+        result = rutabaga_get_capset_info(vr->rutabaga, i,
>> >+                                          &current_id, &capset_version,
>> >+                                          &capset_size);
>> >+        CHECK(!result, cmd);
>> >+
>> >+        if (current_id == gc.capset_id) {
>> >+            break;
>> >+        }
>> >+    }
>> >+
>> >+    CHECK(i < vr->num_capsets, cmd);
>> >+
>> >+    resp = g_malloc0(sizeof(*resp) + capset_size);
>> >+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>> >+    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>> >+                        resp->capset_data, capset_size);
>> >+
>> >+    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
>> capset_size);
>> >+    g_free(resp);
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>> >+                                  struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int result;
>> >+    struct rutabaga_iovecs vecs = { 0 };
>> >+    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>> >+    struct virtio_gpu_resource_create_blob cblob;
>> >+    struct rutabaga_create_blob rc_blob = { 0 };
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(cblob);
>> >+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
>> >+
>> >+    CHECK(cblob.resource_id != 0, cmd);
>> >+
>> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >+
>> >+    res->resource_id = cblob.resource_id;
>> >+    res->blob_size = cblob.size;
>> >+
>> >+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> >+        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>> >+                                               sizeof(cblob), cmd,
>> &res->addrs,
>> >+                                               &res->iov, &res->iov_cnt);
>> >+        CHECK(!result, cmd);
>> >+    }
>> >+
>> >+    rc_blob.blob_id = cblob.blob_id;
>> >+    rc_blob.blob_mem = cblob.blob_mem;
>> >+    rc_blob.blob_flags = cblob.blob_flags;
>> >+    rc_blob.size = cblob.size;
>> >+
>> >+    vecs.iovecs = res->iov;
>> >+    vecs.num_iovecs = res->iov_cnt;
>> >+
>> >+    result = rutabaga_resource_create_blob(vr->rutabaga,
>> cblob.hdr.ctx_id,
>> >+                                           cblob.resource_id, &rc_blob,
>> &vecs,
>> >+                                           NULL);
>> >+
>> >+    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> >+        virtio_gpu_cleanup_mapping(g, res);
>> >+    }
>> >+
>> >+    CHECK(!result, cmd);
>> >+
>> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >+    res = NULL;
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>> >+                               struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    uint32_t map_info = 0;
>> >+    uint32_t slot = 0;
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct rutabaga_mapping mapping = { 0 };
>> >+    struct virtio_gpu_resource_map_blob mblob;
>> >+    struct virtio_gpu_resp_map_info resp = { 0 };
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(mblob);
>> >+
>> >+    CHECK(mblob.resource_id != 0, cmd);
>> >+
>> >+    res = virtio_gpu_find_resource(g, mblob.resource_id);
>> >+    CHECK(res, cmd);
>> >+
>> >+    result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
>> >+                                        &map_info);
>> >+    CHECK(!result, cmd);
>> >+
>> >+    /*
>> >+     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu spec,
>> but do
>> >+     * exist to potentially allow the hypervisor to restrict write
>> access to
>> >+     * memory. QEMU does not need to use this functionality at the
>> moment.
>> >+     */
>> >+    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>> >+
>> >+    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
>> &mapping);
>> >+    CHECK(!result, cmd);
>> >+
>> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
>> >+        if (vr->memory_regions[slot].used) {
>> >+            continue;
>> >+        }
>> >+
>> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>> >+        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>> >+                                   mapping.ptr);
>> >+        memory_region_add_subregion(&g->parent_obj.hostmem,
>> >+                                    mblob.offset, mr);
>> >+        vr->memory_regions[slot].resource_id = mblob.resource_id;
>> >+        vr->memory_regions[slot].used = 1;
>> >+        break;
>> >+    }
>> >+
>> >+    if (slot >= MAX_SLOTS) {
>> >+        result = rutabaga_resource_unmap(vr->rutabaga,
>> mblob.resource_id);
>> >+        CHECK(!result, cmd);
>> >+    }
>> >+
>> >+    CHECK(slot < MAX_SLOTS, cmd);
>> >+
>> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> >+}
>> >+
>> >+static void
>> >+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    int32_t result;
>> >+    uint32_t slot = 0;
>> >+    struct virtio_gpu_simple_resource *res;
>> >+    struct virtio_gpu_resource_unmap_blob ublob;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(ublob);
>> >+
>> >+    CHECK(ublob.resource_id != 0, cmd);
>> >+
>> >+    res = virtio_gpu_find_resource(g, ublob.resource_id);
>> >+    CHECK(res, cmd);
>> >+
>> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
>> >+        if (vr->memory_regions[slot].resource_id != ublob.resource_id) {
>> >+            continue;
>> >+        }
>> >+
>> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>> >+        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>> >+
>> >+        vr->memory_regions[slot].resource_id = 0;
>> >+        vr->memory_regions[slot].used = 0;
>> >+        break;
>> >+    }
>> >+
>> >+    CHECK(slot < MAX_SLOTS, cmd);
>> >+    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>> >+                                struct virtio_gpu_ctrl_command *cmd)
>> >+{
>> >+    struct rutabaga_fence fence = { 0 };
>> >+    int32_t result;
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>> >+
>> >+    switch (cmd->cmd_hdr.type) {
>> >+    case VIRTIO_GPU_CMD_CTX_CREATE:
>> >+        rutabaga_cmd_context_create(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_CTX_DESTROY:
>> >+        rutabaga_cmd_context_destroy(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>> >+        rutabaga_cmd_create_resource_2d(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>> >+        rutabaga_cmd_create_resource_3d(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_SUBMIT_3D:
>> >+        rutabaga_cmd_submit_3d(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>> >+        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>> >+        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>> >+        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>> >+        rutabaga_cmd_attach_backing(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>> >+        rutabaga_cmd_detach_backing(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_SET_SCANOUT:
>> >+        rutabaga_cmd_set_scanout(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>> >+        rutabaga_cmd_resource_flush(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>> >+        rutabaga_cmd_resource_unref(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>> >+        rutabaga_cmd_ctx_attach_resource(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>> >+        rutabaga_cmd_ctx_detach_resource(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>> >+        rutabaga_cmd_get_capset_info(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_GET_CAPSET:
>> >+        rutabaga_cmd_get_capset(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>> >+        virtio_gpu_get_display_info(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_GET_EDID:
>> >+        virtio_gpu_get_edid(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>> >+        rutabaga_cmd_resource_create_blob(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>> >+        rutabaga_cmd_resource_map_blob(g, cmd);
>> >+        break;
>> >+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>> >+        rutabaga_cmd_resource_unmap_blob(g, cmd);
>> >+        break;
>> >+    default:
>> >+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>> >+        break;
>> >+    }
>> >+
>> >+    if (cmd->finished) {
>> >+        return;
>> >+    }
>> >+    if (cmd->error) {
>> >+        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>> >+                     cmd->cmd_hdr.type, cmd->error);
>> >+        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>> >+        return;
>> >+    }
>> >+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
>> VIRTIO_GPU_RESP_OK_NODATA);
>> >+        return;
>> >+    }
>> >+
>> >+    fence.flags = cmd->cmd_hdr.flags;
>> >+    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>> >+    fence.fence_id = cmd->cmd_hdr.fence_id;
>> >+    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>> >+
>> >+    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
>> cmd->cmd_hdr.type);
>> >+
>> >+    result = rutabaga_create_fence(vr->rutabaga, &fence);
>> >+    CHECK(!result, cmd);
>> >+}
>> >+
>> >+static void
>> >+virtio_gpu_rutabaga_aio_cb(void *opaque)
>> >+{
>> >+    struct rutabaga_aio_data *data = opaque;
>> >+    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>> >+    struct rutabaga_fence fence_data = data->fence;
>> >+    struct virtio_gpu_ctrl_command *cmd, *tmp;
>> >+
>> >+    uint32_t signaled_ctx_specific = fence_data.flags &
>> >+                                     RUTABAGA_FLAG_INFO_RING_IDX;
>> >+
>> >+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>> >+        /*
>> >+         * Due to context specific timelines.
>> >+         */
>> >+        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>> >+                                       RUTABAGA_FLAG_INFO_RING_IDX;
>> >+
>> >+        if (signaled_ctx_specific != target_ctx_specific) {
>> >+            continue;
>> >+        }
>> >+
>> >+        if (signaled_ctx_specific &&
>> >+           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>> >+            continue;
>> >+        }
>> >+
>> >+        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>> >+            continue;
>> >+        }
>> >+
>> >+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
>> VIRTIO_GPU_RESP_OK_NODATA);
>> >+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>> >+        g_free(cmd);
>> >+    }
>> >+
>> >+    g_free(data);
>> >+}
>> >+
>> >+static void
>> >+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>> >+                             const struct rutabaga_fence *fence) {
>> >+    struct rutabaga_aio_data *data;
>> >+    VirtIOGPU *g = (VirtIOGPU *)user_data;
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    /*
>> >+     * gfxstream and both cross-domain (and even newer versions
>> virglrenderer:
>> >+     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
>> completion on
>> >+     * threads ("callback threads") that are different from the thread
>> that
>> >+     * processes the command queue ("main thread").
>> >+     *
>> >+     * crosvm and other virtio-gpu 1.1 implementations enable callback
>> threads
>> >+     * via locking.  However, on QEMU a deadlock is observed if
>> >+     * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback]
>> is used
>> >+     * from a thread that is not the main thread.
>> >+     *
>> >+     * The reason is QEMU's internal locking is designed to work with
>> QEMU
>> >+     * threads (see rcu_register_thread()) and not generic C/C++/Rust
>> threads.
>> >+     * For now, we can workaround this by scheduling the return of the
>> >+     * fence descriptors on the main thread.
>> >+     */
>> >+
>> >+    data = g_new0(struct rutabaga_aio_data, 1);
>> >+    data->vr = vr;
>> >+    data->fence = *fence;
>> >+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>> >+                            virtio_gpu_rutabaga_aio_cb,
>> >+                            data);
>> >+}
>> >+
>> >+static void
>> >+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>> >+                             const struct rutabaga_debug *debug) {
>> >+
>> >+    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>> >+        error_report("%s", debug->message);
>> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>> >+        warn_report("%s", debug->message);
>> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>> >+        info_report("%s", debug->message);
>> >+    }
>> >+}
>> >+
>> >+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
>> >+{
>> >+    int result;
>> >+    uint64_t capset_mask;
>> >+    struct rutabaga_builder builder = { 0 };
>> >+    char wayland_socket_path[UNIX_PATH_MAX];
>> >+    struct rutabaga_channel channel = { 0 };
>> >+    struct rutabaga_channels channels = { 0 };
>> >+
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+    vr->rutabaga = NULL;
>> >+
>> >+    if (!vr->capset_names) {
>> >+        error_setg(errp, "a capset name from the virtio-gpu spec is
>> required");
>> >+        return false;
>> >+    }
>> >+
>> >+    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>> >+    /*
>> >+     * Currently, if WSI is specified, the only valid strings are
>> "surfaceless"
>> >+     * or "headless".  Surfaceless doesn't create a native window
>> surface, but
>> >+     * does copy from the render target to the Pixman buffer if a
>> virtio-gpu
>> >+     * 2D hypercall is issued.  Surfacless is the default.
>> >+     *
>> >+     * Headless is like surfaceless, but doesn't copy to the Pixman
>> buffer. The
>> >+     * use case is automated testing environments where there is no need
>> to view
>> >+     * results.
>> >+     *
>> >+     * In the future, more performant virtio-gpu 2D UI integration may
>> be added.
>> >+     */
>> >+    if (vr->wsi) {
>> >+        if (g_str_equal(vr->wsi, "surfaceless")) {
>> >+            vr->headless = false;
>> >+        } else if (g_str_equal(vr->wsi, "headless")) {
>> >+            vr->headless = true;
>> >+        } else {
>> >+            error_setg(errp, "invalid wsi option selected");
>> >+            return false;
>> >+        }
>> >+    }
>> >+
>> >+    result = rutabaga_calculate_capset_mask(vr->capset_names,
>> &capset_mask);
>>
>> First, sorry for responding after such a long time. I've been busy with
>> work and I'm doing QEMU in my free time.
>>
>> In iteration 1 I've raised the topic on capset_names [1] and I haven't
>> seen it answered properly. Perhaps I need to rephrase a bit so here we go:
>> capset_names seems to be colon-separated list of bit options managed by
>> rutabaga. This introduces yet another way of options handling. There have
>> been talks about harmonizing options handling in QEMU since apparently it
>> is considered too complex [2,3].
>
>
>> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
>> create "capset" from QOM properties?
>
>IIUC these flags could come from virtio_gpu.h which is already present in
>> the QEMU tree. This would not inly shortcut the dependency on rutabaga here
>> but would also be more idiomatic QEMU (since it makes the options more
>> introspectable by internal machinery).
>
>
>> Of course the bitfield approach would require modifications in QEMU
>> whenever rutabaga gains new features. However, I figure that in the long
>> term rutabaga will be quite feature complete such that the benefits of
>> idiomatic QEMU handling will outweigh the decoupling of the projects.
>>
>> What do you think?
>>
>
>I think what you're suggesting is something like -device
>virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>gfxstream_vulkan + cross_domain]?

I was thinking more along the lines of `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where gfxstream_vulkan and cross_domain are boolean QOM properties. This would make for a human-readable format which follows QEMU style.

>
>We actually did consider something like that when adding the
>--context-types flag [with crosvm], but there was a desire for a
>human-readable format rather than numbers [even if they are in the
>virtio-gpu spec].
>
>Additionally, there are quite a few context types that people are playing
>around with [gfxstream-gles, gfxstream-composer] that are launchable and
>aren't in the spec just yet.

Right, QEMU had to be modified for this kind of experimentation. I considered this in my last paragraph and figured that in the long run QEMU *may* prefer more idiomatic option handling since it tries hard to not break its command line interface. I'm just pointing this out -- the decision is ultimately up to the community.

Why not have dedicated QEMU development branches for experimentation? Wouldn't upstreaming new features into QEMU be a good motivation to get the missing pieces into the spec, once they are mature?

>
>Also, a key feature we want to explicitly **not** turn on all available
>context-types and let the user decide.

How would you prevent that with the current colon-separated approach? Splitting capset_mask in multiple parameters is just a different syntactical representation of the same thing.

> That'll allow guest Mesa in
>particular to do its magic in its loader.  So one may run Zink + ANV with
>ioctl forwarding, or Iris + ioctl forwarding and compare performance with
>the same guest image.
>
>And another thing is one needs some knowledge of the host system to choose
>the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
>MacOS.  So I think the task of choosing the right context type will fall to
>projects that depend on QEMU (such as Android Emulator) which have some
>knowledge of the host environment.
>
>We actually have a graphics detector somewhere that calls VK/OpenGL before
>launching the VM and sets the right options.  Plan is to port into
>gfxstream, maybe we could use that.

You could bail out in QEMU if rutabaga_calculate_capset_mask() detects conflicting combinations, no?

>
>So given the desire for human readable formats, being portable across VMMs
>(crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
>conversion was put in the rutabaga API.  So I wouldn't change it for those
>reasons.

What do you mean by being portable across VMMs? Sure, QEMU had to be taught new flags before being able to use new rutabaga features. I agree that this comes with a certain inconvenience. But it may also be inconvenient for QEMU to deal with additional ad-hoc options parsing when there are efforts for harmonization.

Did my comments shed new light into the discussion?

Thanks,
Bernhard

>
>
>>
>> Best regards,
>> Bernhard
>>
>> [1]
>> https://lore.kernel.org/qemu-devel/D15471EC-D1D1-4DAA-A6E7-19827C36AEC8@gmail.com/
>> [2] https://m.youtube.com/watch?v=gtpOLQgnwug
>> [3] https://m.youtube.com/watch?v=FMQtog6KUlo
>>
>> >+    if (result) {
>> >+        error_setg_errno(errp, -result, "invalid capset names: %s",
>> >+                         vr->capset_names);
>> >+        return false;
>> >+    }
>> >+
>> >+    builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
>> >+    builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
>> >+    builder.capset_mask = capset_mask;
>> >+    builder.user_data = (uint64_t)g;
>> >+
>> >+    /*
>> >+     * If the user doesn't specify the wayland socket path, we try to
>> infer
>> >+     * the socket via a process similar to the one used by libwayland.
>> >+     * libwayland does the following:
>> >+     *
>> >+     * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
>> >+     *    $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
>> >+     * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
>> >+     * 3) Otherwise, don't pass a wayland socket to rutabaga. If a guest
>> >+     *    wayland proxy is launched, it will fail to work.
>> >+     */
>> >+    channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
>> >+    if (!vr->wayland_socket_path) {
>> >+        const char *runtime_dir = getenv("XDG_RUNTIME_DIR");
>> >+        const char *display = getenv("WAYLAND_DISPLAY");
>> >+        if (!display) {
>> >+            display = "wayland-0";
>> >+        }
>> >+
>> >+        if (runtime_dir) {
>> >+            result = snprintf(wayland_socket_path, UNIX_PATH_MAX,
>> >+                              "%s/%s", runtime_dir, display);
>> >+            if (result > 0 && result < UNIX_PATH_MAX) {
>> >+                channel.channel_name = wayland_socket_path;
>> >+            }
>> >+        }
>> >+    } else {
>> >+        channel.channel_name = vr->wayland_socket_path;
>> >+    }
>> >+
>> >+    if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
>> >+        if (channel.channel_name) {
>> >+            channels.channels = &channel;
>> >+            channels.num_channels = 1;
>> >+            builder.channels = &channels;
>> >+        }
>> >+    }
>> >+
>> >+    result = rutabaga_init(&builder, &vr->rutabaga);
>> >+    if (result) {
>> >+        error_setg_errno(errp, -result, "Failed to init rutabaga");
>> >+        return result;
>> >+    }
>> >+
>> >+    return true;
>> >+}
>> >+
>> >+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
>> >+{
>> >+    int result;
>> >+    uint32_t num_capsets;
>> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >+
>> >+    result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
>> >+    if (result) {
>> >+        error_report("Failed to get capsets");
>> >+        return 0;
>> >+    }
>> >+    vr->num_capsets = num_capsets;
>> >+    return num_capsets;
>> >+}
>> >+
>> >+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev,
>> VirtQueue *vq)
>> >+{
>> >+    VirtIOGPU *g = VIRTIO_GPU(vdev);
>> >+    struct virtio_gpu_ctrl_command *cmd;
>> >+
>> >+    if (!virtio_queue_ready(vq)) {
>> >+        return;
>> >+    }
>> >+
>> >+    cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>> >+    while (cmd) {
>> >+        cmd->vq = vq;
>> >+        cmd->error = 0;
>> >+        cmd->finished = false;
>> >+        QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
>> >+        cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>> >+    }
>> >+
>> >+    virtio_gpu_process_cmdq(g);
>> >+}
>> >+
>> >+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
>> >+{
>> >+    int num_capsets;
>> >+    VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
>> >+    VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
>> >+
>> >+#if HOST_BIG_ENDIAN
>> >+    error_setg(errp, "rutabaga is not supported on bigendian platforms");
>> >+    return;
>> >+#endif
>> >+
>> >+    if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
>> >+        return;
>> >+    }
>> >+
>> >+    num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
>> >+    if (!num_capsets) {
>> >+        return;
>> >+    }
>> >+
>> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
>> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
>> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
>> >+
>> >+    bdev->virtio_config.num_capsets = num_capsets;
>> >+    virtio_gpu_device_realize(qdev, errp);
>> >+}
>> >+
>> >+static Property virtio_gpu_rutabaga_properties[] = {
>> >+    DEFINE_PROP_STRING("capset_names", VirtIOGPURutabaga, capset_names),
>> >+    DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
>> >+                       wayland_socket_path),
>> >+    DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
>> >+    DEFINE_PROP_END_OF_LIST(),
>> >+};
>> >+
>> >+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void
>> *data)
>> >+{
>> >+    DeviceClass *dc = DEVICE_CLASS(klass);
>> >+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
>> >+    VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
>> >+    VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
>> >+
>> >+    vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
>> >+    vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
>> >+    vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
>> >+    vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
>> >+
>> >+    vdc->realize = virtio_gpu_rutabaga_realize;
>> >+    device_class_set_props(dc, virtio_gpu_rutabaga_properties);
>> >+}
>> >+
>> >+static const TypeInfo virtio_gpu_rutabaga_info = {
>> >+    .name = TYPE_VIRTIO_GPU_RUTABAGA,
>> >+    .parent = TYPE_VIRTIO_GPU,
>> >+    .instance_size = sizeof(VirtIOGPURutabaga),
>> >+    .class_init = virtio_gpu_rutabaga_class_init,
>> >+};
>> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
>> >+module_kconfig(VIRTIO_GPU);
>> >+
>> >+static void virtio_register_types(void)
>> >+{
>> >+    type_register_static(&virtio_gpu_rutabaga_info);
>> >+}
>> >+
>> >+type_init(virtio_register_types)
>> >+
>> >+module_dep("hw-display-virtio-gpu");
>> >diff --git a/hw/display/virtio-vga-rutabaga.c
>> b/hw/display/virtio-vga-rutabaga.c
>> >new file mode 100644
>> >index 0000000000..b5b43e3b90
>> >--- /dev/null
>> >+++ b/hw/display/virtio-vga-rutabaga.c
>> >@@ -0,0 +1,53 @@
>> >+/*
>> >+ * SPDX-License-Identifier: GPL-2.0-or-later
>> >+ */
>> >+
>> >+#include "qemu/osdep.h"
>> >+#include "hw/pci/pci.h"
>> >+#include "hw/qdev-properties.h"
>> >+#include "hw/virtio/virtio-gpu.h"
>> >+#include "hw/display/vga.h"
>> >+#include "qapi/error.h"
>> >+#include "qemu/module.h"
>> >+#include "virtio-vga.h"
>> >+#include "qom/object.h"
>> >+
>> >+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
>> >+
>> >+typedef struct VirtIOVGARutabaga VirtIOVGARutabaga;
>> >+DECLARE_INSTANCE_CHECKER(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA,
>> >+                         TYPE_VIRTIO_VGA_RUTABAGA)
>> >+
>> >+struct VirtIOVGARutabaga {
>> >+    VirtIOVGABase parent_obj;
>> >+    VirtIOGPURutabaga vdev;
>> >+};
>> >+
>> >+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
>> >+{
>> >+    VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
>> >+
>> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
>> >+    VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> >+}
>> >+
>> >+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
>> >+    .generic_name  = TYPE_VIRTIO_VGA_RUTABAGA,
>> >+    .parent        = TYPE_VIRTIO_VGA_BASE,
>> >+    .instance_size = sizeof(VirtIOVGARutabaga),
>> >+    .instance_init = virtio_vga_rutabaga_inst_initfn,
>> >+};
>> >+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
>> >+module_kconfig(VIRTIO_VGA);
>> >+
>> >+static void virtio_vga_register_types(void)
>> >+{
>> >+    if (have_vga) {
>> >+        virtio_pci_types_register(&virtio_vga_rutabaga_info);
>> >+    }
>> >+}
>> >+
>> >+type_init(virtio_vga_register_types)
>> >+
>> >+module_dep("hw-display-virtio-vga");
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-14  7:23       ` Bernhard Beschow
@ 2023-09-15  2:38         ` Gurchetan Singh
  2023-09-19 18:36           ` Bernhard Beschow
  0 siblings, 1 reply; 34+ messages in thread
From: Gurchetan Singh @ 2023-09-15  2:38 UTC (permalink / raw)
  To: Bernhard Beschow
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd

[-- Attachment #1: Type: text/plain, Size: 63292 bytes --]

On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com> wrote:

>
>
> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
> gurchetansingh@chromium.org>:
> >On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com>
> wrote:
> >
> >>
> >>
> >> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
> >> gurchetansingh@chromium.org>:
> >> >This adds initial support for gfxstream and cross-domain.  Both
> >> >features rely on virtio-gpu blob resources and context types, which
> >> >are also implemented in this patch.
> >> >
> >> >gfxstream has a long and illustrious history in Android graphics
> >> >paravirtualization.  It has been powering graphics in the Android
> >> >Studio Emulator for more than a decade, which is the main developer
> >> >platform.
> >> >
> >> >Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
> >> >The key design characteristic was a 1:1 threading model and
> >> >auto-generation, which fit nicely with the OpenGLES spec.  It also
> >> >allowed easy layering with ANGLE on the host, which provides the GLES
> >> >implementations on Windows or MacOS enviroments.
> >> >
> >> >gfxstream has traditionally been maintained by a single engineer, and
> >> >between 2015 to 2021, the goldfish throne passed to Frank Yang.
> >> >Historians often remark this glorious reign ("pax gfxstreama" is the
> >> >academic term) was comparable to that of Augustus and both Queen
> >> >Elizabeths.  Just to name a few accomplishments in a resplendent
> >> >panoply: higher versions of GLES, address space graphics, snapshot
> >> >support and CTS compliant Vulkan [b].
> >> >
> >> >One major drawback was the use of out-of-tree goldfish drivers.
> >> >Android engineers didn't know much about DRM/KMS and especially TTM so
> >> >a simple guest to host pipe was conceived.
> >> >
> >> >Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
> >> >the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
> >> >port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
> >> >It was a symbol compatible replacement of virglrenderer [c] and named
> >> >"AVDVirglrenderer".  This implementation forms the basis of the
> >> >current gfxstream host implementation still in use today.
> >> >
> >> >cross-domain support follows a similar arc.  Originally conceived by
> >> >Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
> >> >2018, it initially relied on the downstream "virtio-wl" device.
> >> >
> >> >In 2020 and 2021, virtio-gpu was extended to include blob resources
> >> >and multiple timelines by yours truly, features gfxstream/cross-domain
> >> >both require to function correctly.
> >> >
> >> >Right now, we stand at the precipice of a truly fantastic possibility:
> >> >the Android Emulator powered by upstream QEMU and upstream Linux
> >> >kernel.  gfxstream will then be packaged properfully, and app
> >> >developers can even fix gfxstream bugs on their own if they encounter
> >> >them.
> >> >
> >> >It's been quite the ride, my friends.  Where will gfxstream head next,
> >> >nobody really knows.  I wouldn't be surprised if it's around for
> >> >another decade, maintained by a new generation of Android graphics
> >> >enthusiasts.
> >> >
> >> >Technical details:
> >> >  - Very simple initial display integration: just used Pixman
> >> >  - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
> >> >    calls
> >> >
> >> >Next steps for Android VMs:
> >> >  - The next step would be improving display integration and UI
> interfaces
> >> >    with the goal of the QEMU upstream graphics being in an emulator
> >> >    release [d].
> >> >
> >> >Next steps for Linux VMs for display virtualization:
> >> >  - For widespread distribution, someone needs to package Sommelier or
> the
> >> >    wayland-proxy-virtwl [e] ideally into Debian main. In addition,
> newer
> >> >    versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
> >> >    which allows disabling KMS hypercalls.  If anyone cares enough,
> it'll
> >> >    probably be possible to build a custom VM variant that uses this
> >> display
> >> >    virtualization strategy.
> >> >
> >> >[a]
> >> https://android-review.googlesource.com/c/platform/development/+/34470
> >> >[b]
> >>
> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
> >> >[c]
> >>
> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
> >> >[d] https://developer.android.com/studio/releases/emulator
> >> >[e] https://github.com/talex5/wayland-proxy-virtwl
> >> >
> >> >Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> >> >Tested-by: Alyssa Ross <hi@alyssa.is>
> >> >Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> >> >Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> >> >---
> >> >v1: Incorported various suggestions by Akihiko Odaki and Bernard
> Berschow
> >> >    - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
> >> >    - Used error_report(..)
> >> >    - Used g_autofree to fix leaks on error paths
> >> >    - Removed unnecessary casts
> >> >    - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
> >> >
> >> >v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau
> >> and
> >> >    Bernard Berschow:
> >> >    - Parenthesis in CHECK macro
> >> >    - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
> >> >    - delay until g->parent_obj.enable = 1
> >> >    - Additional cast fixes
> >> >    - initialize directly in virtio_gpu_rutabaga_realize(..)
> >> >    - add debug callback to hook into QEMU error's APIs
> >> >
> >> >v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
> >> >    - Autodetect Wayland socket when not explicitly specified
> >> >    - Fix map_blob error paths
> >> >    - Add comment why we need both `res` and `resource` in create blob
> >> >    - Cast and whitespace fixes
> >> >    - Big endian check comes before virtio_gpu_rutabaga_init().
> >> >    - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
> >> >
> >> >v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
> >> >    - Double checked all casts
> >> >    - Remove unnecessary parenthesis
> >> >    - Removed `resource` in create_blob
> >> >    - Added comment about failure case
> >> >    - Pass user-provided socket as-is
> >> >    - Use stack variable rather than heap allocation
> >> >    - Future-proofed map info API to give access flags as well
> >> >
> >> >v5: Incorporated feedback from Akihiko Odaki:
> >> >    - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
> >> >    - Simplify num_capsets check
> >> >    - Call cleanup mapping on error paths
> >> >    - uint64_t --> void* for rutabaga_map(..)
> >> >    - Removed unnecessary parenthesis
> >> >    - Removed unnecessary cast
> >> >    - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
> >> >    - Reuse result variable
> >> >
> >> >v6: Incorporated feedback from Akihiko Odaki:
> >> >    - Remove unnecessary #ifndef
> >> >    - Disable scanout when appropriate
> >> >    - CHECK capset index within range outside loop
> >> >    - Add capset_version
> >> >
> >> >v7: Incorporated feedback from Akihiko Odaki:
> >> >    - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
> >> >
> >> >v9: Incorportated feedback from Akihiko Odaki:
> >> >    - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
> >> >    - Add error_setg(..) after rutabaga_init(..)
> >> >
> >> >v10: Incorportated feedback from Akihiko Odaki:
> >> >    - error_setg(..) --> error_setg_errno(..) when appropriate
> >> >    - virtio_gpu_rutabaga_init returns a bool instead of an int
> >> >
> >> >v11: Incorportated feedback from Philippe Mathieu-Daudé:
> >> >    - C-style /* */ comments and avoid // comments.
> >> >    - GPL-2.0 --> GPL-2.0-or-later
> >> >
> >> > hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
> >> > hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
> >> > hw/display/virtio-vga-rutabaga.c     |   53 ++
> >> > 3 files changed, 1224 insertions(+)
> >> > create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> >> > create mode 100644 hw/display/virtio-gpu-rutabaga.c
> >> > create mode 100644 hw/display/virtio-vga-rutabaga.c
> >> >
> >> >diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
> >> b/hw/display/virtio-gpu-pci-rutabaga.c
> >> >new file mode 100644
> >> >index 0000000000..311eff308a
> >> >--- /dev/null
> >> >+++ b/hw/display/virtio-gpu-pci-rutabaga.c
> >> >@@ -0,0 +1,50 @@
> >> >+/*
> >> >+ * SPDX-License-Identifier: GPL-2.0-or-later
> >> >+ */
> >> >+
> >> >+#include "qemu/osdep.h"
> >> >+#include "qapi/error.h"
> >> >+#include "qemu/module.h"
> >> >+#include "hw/pci/pci.h"
> >> >+#include "hw/qdev-properties.h"
> >> >+#include "hw/virtio/virtio.h"
> >> >+#include "hw/virtio/virtio-bus.h"
> >> >+#include "hw/virtio/virtio-gpu-pci.h"
> >> >+#include "qom/object.h"
> >> >+
> >> >+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
> >> >+typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
> >> >+DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
> VIRTIO_GPU_RUTABAGA_PCI,
> >> >+                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
> >> >+
> >> >+struct VirtIOGPURutabagaPCI {
> >> >+    VirtIOGPUPCIBase parent_obj;
> >> >+    VirtIOGPURutabaga vdev;
> >> >+};
> >> >+
> >> >+static void virtio_gpu_rutabaga_initfn(Object *obj)
> >> >+{
> >> >+    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
> >> >+
> >> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> >> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
> >> >+    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> >> >+}
> >> >+
> >> >+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
> >> >+    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
> >> >+    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
> >> >+    .instance_size = sizeof(VirtIOGPURutabagaPCI),
> >> >+    .instance_init = virtio_gpu_rutabaga_initfn,
> >> >+};
> >> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
> >> >+module_kconfig(VIRTIO_PCI);
> >> >+
> >> >+static void virtio_gpu_rutabaga_pci_register_types(void)
> >> >+{
> >> >+    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
> >> >+}
> >> >+
> >> >+type_init(virtio_gpu_rutabaga_pci_register_types)
> >> >+
> >> >+module_dep("hw-display-virtio-gpu-pci");
> >> >diff --git a/hw/display/virtio-gpu-rutabaga.c
> >> b/hw/display/virtio-gpu-rutabaga.c
> >> >new file mode 100644
> >> >index 0000000000..9018e5a702
> >> >--- /dev/null
> >> >+++ b/hw/display/virtio-gpu-rutabaga.c
> >> >@@ -0,0 +1,1121 @@
> >> >+/*
> >> >+ * SPDX-License-Identifier: GPL-2.0-or-later
> >> >+ */
> >> >+
> >> >+#include "qemu/osdep.h"
> >> >+#include "qapi/error.h"
> >> >+#include "qemu/error-report.h"
> >> >+#include "qemu/iov.h"
> >> >+#include "trace.h"
> >> >+#include "hw/virtio/virtio.h"
> >> >+#include "hw/virtio/virtio-gpu.h"
> >> >+#include "hw/virtio/virtio-gpu-pixman.h"
> >> >+#include "hw/virtio/virtio-iommu.h"
> >> >+
> >> >+#include <glib/gmem.h>
> >> >+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
> >> >+
> >> >+#define CHECK(condition, cmd)
> >>      \
> >> >+    do {
> >>       \
> >> >+        if (!(condition)) {
> >>      \
> >> >+            error_report("CHECK failed in %s() %s:" "%d", __func__,
> >>      \
> >> >+                         __FILE__, __LINE__);
> >>      \
> >> >+            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >>       \
> >> >+            return;
> >>      \
> >> >+       }
> >>       \
> >> >+    } while (0)
> >> >+
> >> >+/*
> >> >+ * This is the size of the char array in struct sock_addr_un. No
> Wayland
> >> socket
> >> >+ * can be created with a path longer than this, including the null
> >> terminator.
> >> >+ */
> >> >+#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
> >> >+
> >> >+struct rutabaga_aio_data {
> >> >+    struct VirtIOGPURutabaga *vr;
> >> >+    struct rutabaga_fence fence;
> >> >+};
> >> >+
> >> >+static void
> >> >+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
> >> virtio_gpu_scanout *s,
> >> >+                                  uint32_t resource_id)
> >> >+{
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct rutabaga_transfer transfer = { 0 };
> >> >+    struct iovec transfer_iovec;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, resource_id);
> >> >+    if (!res) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    if (res->width != s->current_cursor->width ||
> >> >+        res->height != s->current_cursor->height) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    transfer.x = 0;
> >> >+    transfer.y = 0;
> >> >+    transfer.z = 0;
> >> >+    transfer.w = res->width;
> >> >+    transfer.h = res->height;
> >> >+    transfer.d = 1;
> >> >+
> >> >+    transfer_iovec.iov_base = s->current_cursor->data;
> >> >+    transfer_iovec.iov_len = res->width * res->height * 4;
> >> >+
> >> >+    rutabaga_resource_transfer_read(vr->rutabaga, 0,
> >> >+                                    resource_id, &transfer,
> >> >+                                    &transfer_iovec);
> >> >+}
> >> >+
> >> >+static void
> >> >+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
> >> >+{
> >> >+    VirtIOGPU *g = VIRTIO_GPU(b);
> >> >+    virtio_gpu_process_cmdq(g);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
> >> >+                                struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct rutabaga_create_3d rc_3d = { 0 };
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_resource_create_2d c2d;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(c2d);
> >> >+    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> >> >+                                       c2d.width, c2d.height);
> >> >+
> >> >+    rc_3d.target = 2;
> >> >+    rc_3d.format = c2d.format;
> >> >+    rc_3d.bind = (1 << 1);
> >> >+    rc_3d.width = c2d.width;
> >> >+    rc_3d.height = c2d.height;
> >> >+    rc_3d.depth = 1;
> >> >+    rc_3d.array_size = 1;
> >> >+    rc_3d.last_level = 0;
> >> >+    rc_3d.nr_samples = 0;
> >> >+    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
> >> >+
> >> >+    result = rutabaga_resource_create_3d(vr->rutabaga,
> c2d.resource_id,
> >> &rc_3d);
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >> >+    res->width = c2d.width;
> >> >+    res->height = c2d.height;
> >> >+    res->format = c2d.format;
> >> >+    res->resource_id = c2d.resource_id;
> >> >+
> >> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
> >> >+                                struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct rutabaga_create_3d rc_3d = { 0 };
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_resource_create_3d c3d;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(c3d);
> >> >+
> >> >+    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> >> >+                                       c3d.width, c3d.height,
> c3d.depth);
> >> >+
> >> >+    rc_3d.target = c3d.target;
> >> >+    rc_3d.format = c3d.format;
> >> >+    rc_3d.bind = c3d.bind;
> >> >+    rc_3d.width = c3d.width;
> >> >+    rc_3d.height = c3d.height;
> >> >+    rc_3d.depth = c3d.depth;
> >> >+    rc_3d.array_size = c3d.array_size;
> >> >+    rc_3d.last_level = c3d.last_level;
> >> >+    rc_3d.nr_samples = c3d.nr_samples;
> >> >+    rc_3d.flags = c3d.flags;
> >> >+
> >> >+    result = rutabaga_resource_create_3d(vr->rutabaga,
> c3d.resource_id,
> >> &rc_3d);
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >> >+    res->width = c3d.width;
> >> >+    res->height = c3d.height;
> >> >+    res->format = c3d.format;
> >> >+    res->resource_id = c3d.resource_id;
> >> >+
> >> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_resource_unref(VirtIOGPU *g,
> >> >+                            struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_resource_unref unref;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(unref);
> >> >+
> >> >+    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, unref.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+
> >> >+    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    if (res->image) {
> >> >+        pixman_image_unref(res->image);
> >> >+    }
> >> >+
> >> >+    QTAILQ_REMOVE(&g->reslist, res, next);
> >> >+    g_free(res);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_context_create(VirtIOGPU *g,
> >> >+                            struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_ctx_create cc;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(cc);
> >> >+    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
> >> >+                                    cc.debug_name);
> >> >+
> >> >+    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
> >> >+                                     cc.context_init, cc.debug_name,
> >> cc.nlen);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_context_destroy(VirtIOGPU *g,
> >> >+                             struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_ctx_destroy cd;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(cd);
> >> >+    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
> >> >+
> >> >+    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
> virtio_gpu_ctrl_command
> >> *cmd)
> >> >+{
> >> >+    int32_t result, i;
> >> >+    struct virtio_gpu_scanout *scanout = NULL;
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct rutabaga_transfer transfer = { 0 };
> >> >+    struct iovec transfer_iovec;
> >> >+    struct virtio_gpu_resource_flush rf;
> >> >+    bool found = false;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+    if (vr->headless) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(rf);
> >> >+    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
> >> >+                                   rf.r.width, rf.r.height, rf.r.x,
> >> rf.r.y);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, rf.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+
> >> >+    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
> >> >+        scanout = &g->parent_obj.scanout[i];
> >> >+        if (i == res->scanout_bitmask) {
> >> >+            found = true;
> >> >+            break;
> >> >+        }
> >> >+    }
> >> >+
> >> >+    if (!found) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    transfer.x = 0;
> >> >+    transfer.y = 0;
> >> >+    transfer.z = 0;
> >> >+    transfer.w = res->width;
> >> >+    transfer.h = res->height;
> >> >+    transfer.d = 1;
> >> >+
> >> >+    transfer_iovec.iov_base = pixman_image_get_data(res->image);
> >> >+    transfer_iovec.iov_len = res->width * res->height * 4;
> >> >+
> >> >+    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
> >> >+                                             rf.resource_id,
> &transfer,
> >> >+                                             &transfer_iovec);
> >> >+    CHECK(!result, cmd);
> >> >+    dpy_gfx_update_full(scanout->con);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> >> *cmd)
> >> >+{
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_scanout *scanout = NULL;
> >> >+    struct virtio_gpu_set_scanout ss;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+    if (vr->headless) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(ss);
> >> >+    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
> >> >+                                     ss.r.width, ss.r.height, ss.r.x,
> >> ss.r.y);
> >> >+
> >> >+    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
> >> >+    scanout = &g->parent_obj.scanout[ss.scanout_id];
> >> >+
> >> >+    if (ss.resource_id == 0) {
> >> >+        dpy_gfx_replace_surface(scanout->con, NULL);
> >> >+        dpy_gl_scanout_disable(scanout->con);
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, ss.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+
> >> >+    if (!res->image) {
> >> >+        pixman_format_code_t pformat;
> >> >+        pformat = virtio_gpu_get_pixman_format(res->format);
> >> >+        CHECK(pformat, cmd);
> >> >+
> >> >+        res->image = pixman_image_create_bits(pformat,
> >> >+                                              res->width,
> >> >+                                              res->height,
> >> >+                                              NULL, 0);
> >> >+        CHECK(res->image, cmd);
> >> >+        pixman_image_ref(res->image);
> >> >+    }
> >> >+
> >> >+    g->parent_obj.enable = 1;
> >> >+
> >> >+    /* realloc the surface ptr */
> >> >+    scanout->ds = qemu_create_displaysurface_pixman(res->image);
> >> >+    dpy_gfx_replace_surface(scanout->con, NULL);
> >> >+    dpy_gfx_replace_surface(scanout->con, scanout->ds);
> >> >+    res->scanout_bitmask = ss.scanout_id;
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_submit_3d(VirtIOGPU *g,
> >> >+                       struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_cmd_submit cs;
> >> >+    struct rutabaga_command rutabaga_cmd = { 0 };
> >> >+    g_autofree uint8_t *buf = NULL;
> >> >+    size_t s;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(cs);
> >> >+    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
> >> >+
> >> >+    buf = g_new0(uint8_t, cs.size);
> >> >+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
> >> >+                   sizeof(cs), buf, cs.size);
> >> >+    CHECK(s == cs.size, cmd);
> >> >+
> >> >+    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
> >> >+    rutabaga_cmd.cmd = buf;
> >> >+    rutabaga_cmd.cmd_size = cs.size;
> >> >+
> >> >+    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct rutabaga_transfer transfer = { 0 };
> >> >+    struct virtio_gpu_transfer_to_host_2d t2d;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(t2d);
> >> >+    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
> >> >+
> >> >+    transfer.x = t2d.r.x;
> >> >+    transfer.y = t2d.r.y;
> >> >+    transfer.z = 0;
> >> >+    transfer.w = t2d.r.width;
> >> >+    transfer.h = t2d.r.height;
> >> >+    transfer.d = 1;
> >> >+
> >> >+    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
> >> t2d.resource_id,
> >> >+                                              &transfer);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct rutabaga_transfer transfer = { 0 };
> >> >+    struct virtio_gpu_transfer_host_3d t3d;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(t3d);
> >> >+    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
> >> >+
> >> >+    transfer.x = t3d.box.x;
> >> >+    transfer.y = t3d.box.y;
> >> >+    transfer.z = t3d.box.z;
> >> >+    transfer.w = t3d.box.w;
> >> >+    transfer.h = t3d.box.h;
> >> >+    transfer.d = t3d.box.d;
> >> >+    transfer.level = t3d.level;
> >> >+    transfer.stride = t3d.stride;
> >> >+    transfer.layer_stride = t3d.layer_stride;
> >> >+    transfer.offset = t3d.offset;
> >> >+
> >> >+    result = rutabaga_resource_transfer_write(vr->rutabaga,
> >> t3d.hdr.ctx_id,
> >> >+                                              t3d.resource_id,
> >> &transfer);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
> >> >+                                   struct virtio_gpu_ctrl_command
> *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct rutabaga_transfer transfer = { 0 };
> >> >+    struct virtio_gpu_transfer_host_3d t3d;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(t3d);
> >> >+    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
> >> >+
> >> >+    transfer.x = t3d.box.x;
> >> >+    transfer.y = t3d.box.y;
> >> >+    transfer.z = t3d.box.z;
> >> >+    transfer.w = t3d.box.w;
> >> >+    transfer.h = t3d.box.h;
> >> >+    transfer.d = t3d.box.d;
> >> >+    transfer.level = t3d.level;
> >> >+    transfer.stride = t3d.stride;
> >> >+    transfer.layer_stride = t3d.layer_stride;
> >> >+    transfer.offset = t3d.offset;
> >> >+
> >> >+    result = rutabaga_resource_transfer_read(vr->rutabaga,
> >> t3d.hdr.ctx_id,
> >> >+                                             t3d.resource_id,
> &transfer,
> >> NULL);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
> virtio_gpu_ctrl_command
> >> *cmd)
> >> >+{
> >> >+    struct rutabaga_iovecs vecs = { 0 };
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_resource_attach_backing att_rb;
> >> >+    int ret;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(att_rb);
> >> >+    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, att_rb.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+    CHECK(!res->iov, cmd);
> >> >+
> >> >+    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
> >> sizeof(att_rb),
> >> >+                                        cmd, NULL, &res->iov,
> >> &res->iov_cnt);
> >> >+    CHECK(!ret, cmd);
> >> >+
> >> >+    vecs.iovecs = res->iov;
> >> >+    vecs.num_iovecs = res->iov_cnt;
> >> >+
> >> >+    ret = rutabaga_resource_attach_backing(vr->rutabaga,
> >> att_rb.resource_id,
> >> >+                                           &vecs);
> >> >+    if (ret != 0) {
> >> >+        virtio_gpu_cleanup_mapping(g, res);
> >> >+    }
> >> >+
> >> >+    CHECK(!ret, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
> virtio_gpu_ctrl_command
> >> *cmd)
> >> >+{
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_resource_detach_backing detach_rb;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(detach_rb);
> >> >+    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+
> >> >+    rutabaga_resource_detach_backing(vr->rutabaga,
> >> >+                                     detach_rb.resource_id);
> >> >+
> >> >+    virtio_gpu_cleanup_mapping(g, res);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_ctx_resource att_res;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(att_res);
> >> >+    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
> >> >+                                        att_res.resource_id);
> >> >+
> >> >+    result = rutabaga_context_attach_resource(vr->rutabaga,
> >> att_res.hdr.ctx_id,
> >> >+                                              att_res.resource_id);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_ctx_resource det_res;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(det_res);
> >> >+    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
> >> >+                                        det_res.resource_id);
> >> >+
> >> >+    result = rutabaga_context_detach_resource(vr->rutabaga,
> >> det_res.hdr.ctx_id,
> >> >+                                              det_res.resource_id);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
> >> virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_get_capset_info info;
> >> >+    struct virtio_gpu_resp_capset_info resp;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(info);
> >> >+
> >> >+    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
> >> >+                                      &resp.capset_id,
> >> &resp.capset_max_version,
> >> >+                                      &resp.capset_max_size);
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
> >> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> >> *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    struct virtio_gpu_get_capset gc;
> >> >+    struct virtio_gpu_resp_capset *resp;
> >> >+    uint32_t capset_size, capset_version;
> >> >+    uint32_t current_id, i;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(gc);
> >> >+    for (i = 0; i < vr->num_capsets; i++) {
> >> >+        result = rutabaga_get_capset_info(vr->rutabaga, i,
> >> >+                                          &current_id,
> &capset_version,
> >> >+                                          &capset_size);
> >> >+        CHECK(!result, cmd);
> >> >+
> >> >+        if (current_id == gc.capset_id) {
> >> >+            break;
> >> >+        }
> >> >+    }
> >> >+
> >> >+    CHECK(i < vr->num_capsets, cmd);
> >> >+
> >> >+    resp = g_malloc0(sizeof(*resp) + capset_size);
> >> >+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
> >> >+    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
> >> >+                        resp->capset_data, capset_size);
> >> >+
> >> >+    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
> >> capset_size);
> >> >+    g_free(resp);
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
> >> >+                                  struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int result;
> >> >+    struct rutabaga_iovecs vecs = { 0 };
> >> >+    g_autofree struct virtio_gpu_simple_resource *res = NULL;
> >> >+    struct virtio_gpu_resource_create_blob cblob;
> >> >+    struct rutabaga_create_blob rc_blob = { 0 };
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(cblob);
> >> >+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
> cblob.size);
> >> >+
> >> >+    CHECK(cblob.resource_id != 0, cmd);
> >> >+
> >> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >> >+
> >> >+    res->resource_id = cblob.resource_id;
> >> >+    res->blob_size = cblob.size;
> >> >+
> >> >+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >> >+        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
> >> >+                                               sizeof(cblob), cmd,
> >> &res->addrs,
> >> >+                                               &res->iov,
> &res->iov_cnt);
> >> >+        CHECK(!result, cmd);
> >> >+    }
> >> >+
> >> >+    rc_blob.blob_id = cblob.blob_id;
> >> >+    rc_blob.blob_mem = cblob.blob_mem;
> >> >+    rc_blob.blob_flags = cblob.blob_flags;
> >> >+    rc_blob.size = cblob.size;
> >> >+
> >> >+    vecs.iovecs = res->iov;
> >> >+    vecs.num_iovecs = res->iov_cnt;
> >> >+
> >> >+    result = rutabaga_resource_create_blob(vr->rutabaga,
> >> cblob.hdr.ctx_id,
> >> >+                                           cblob.resource_id,
> &rc_blob,
> >> &vecs,
> >> >+                                           NULL);
> >> >+
> >> >+    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >> >+        virtio_gpu_cleanup_mapping(g, res);
> >> >+    }
> >> >+
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >> >+    res = NULL;
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
> >> >+                               struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    uint32_t map_info = 0;
> >> >+    uint32_t slot = 0;
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct rutabaga_mapping mapping = { 0 };
> >> >+    struct virtio_gpu_resource_map_blob mblob;
> >> >+    struct virtio_gpu_resp_map_info resp = { 0 };
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(mblob);
> >> >+
> >> >+    CHECK(mblob.resource_id != 0, cmd);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, mblob.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+
> >> >+    result = rutabaga_resource_map_info(vr->rutabaga,
> mblob.resource_id,
> >> >+                                        &map_info);
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    /*
> >> >+     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu
> spec,
> >> but do
> >> >+     * exist to potentially allow the hypervisor to restrict write
> >> access to
> >> >+     * memory. QEMU does not need to use this functionality at the
> >> moment.
> >> >+     */
> >> >+    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
> >> >+
> >> >+    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
> >> &mapping);
> >> >+    CHECK(!result, cmd);
> >> >+
> >> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
> >> >+        if (vr->memory_regions[slot].used) {
> >> >+            continue;
> >> >+        }
> >> >+
> >> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
> >> >+        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
> >> >+                                   mapping.ptr);
> >> >+        memory_region_add_subregion(&g->parent_obj.hostmem,
> >> >+                                    mblob.offset, mr);
> >> >+        vr->memory_regions[slot].resource_id = mblob.resource_id;
> >> >+        vr->memory_regions[slot].used = 1;
> >> >+        break;
> >> >+    }
> >> >+
> >> >+    if (slot >= MAX_SLOTS) {
> >> >+        result = rutabaga_resource_unmap(vr->rutabaga,
> >> mblob.resource_id);
> >> >+        CHECK(!result, cmd);
> >> >+    }
> >> >+
> >> >+    CHECK(slot < MAX_SLOTS, cmd);
> >> >+
> >> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
> >> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> >> >+}
> >> >+
> >> >+static void
> >> >+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    int32_t result;
> >> >+    uint32_t slot = 0;
> >> >+    struct virtio_gpu_simple_resource *res;
> >> >+    struct virtio_gpu_resource_unmap_blob ublob;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(ublob);
> >> >+
> >> >+    CHECK(ublob.resource_id != 0, cmd);
> >> >+
> >> >+    res = virtio_gpu_find_resource(g, ublob.resource_id);
> >> >+    CHECK(res, cmd);
> >> >+
> >> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
> >> >+        if (vr->memory_regions[slot].resource_id !=
> ublob.resource_id) {
> >> >+            continue;
> >> >+        }
> >> >+
> >> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
> >> >+        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
> >> >+
> >> >+        vr->memory_regions[slot].resource_id = 0;
> >> >+        vr->memory_regions[slot].used = 0;
> >> >+        break;
> >> >+    }
> >> >+
> >> >+    CHECK(slot < MAX_SLOTS, cmd);
> >> >+    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
> >> >+                                struct virtio_gpu_ctrl_command *cmd)
> >> >+{
> >> >+    struct rutabaga_fence fence = { 0 };
> >> >+    int32_t result;
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
> >> >+
> >> >+    switch (cmd->cmd_hdr.type) {
> >> >+    case VIRTIO_GPU_CMD_CTX_CREATE:
> >> >+        rutabaga_cmd_context_create(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_CTX_DESTROY:
> >> >+        rutabaga_cmd_context_destroy(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
> >> >+        rutabaga_cmd_create_resource_2d(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
> >> >+        rutabaga_cmd_create_resource_3d(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_SUBMIT_3D:
> >> >+        rutabaga_cmd_submit_3d(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
> >> >+        rutabaga_cmd_transfer_to_host_2d(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
> >> >+        rutabaga_cmd_transfer_to_host_3d(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
> >> >+        rutabaga_cmd_transfer_from_host_3d(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
> >> >+        rutabaga_cmd_attach_backing(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
> >> >+        rutabaga_cmd_detach_backing(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_SET_SCANOUT:
> >> >+        rutabaga_cmd_set_scanout(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
> >> >+        rutabaga_cmd_resource_flush(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
> >> >+        rutabaga_cmd_resource_unref(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
> >> >+        rutabaga_cmd_ctx_attach_resource(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
> >> >+        rutabaga_cmd_ctx_detach_resource(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
> >> >+        rutabaga_cmd_get_capset_info(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_GET_CAPSET:
> >> >+        rutabaga_cmd_get_capset(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
> >> >+        virtio_gpu_get_display_info(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_GET_EDID:
> >> >+        virtio_gpu_get_edid(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
> >> >+        rutabaga_cmd_resource_create_blob(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
> >> >+        rutabaga_cmd_resource_map_blob(g, cmd);
> >> >+        break;
> >> >+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
> >> >+        rutabaga_cmd_resource_unmap_blob(g, cmd);
> >> >+        break;
> >> >+    default:
> >> >+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >> >+        break;
> >> >+    }
> >> >+
> >> >+    if (cmd->finished) {
> >> >+        return;
> >> >+    }
> >> >+    if (cmd->error) {
> >> >+        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
> >> >+                     cmd->cmd_hdr.type, cmd->error);
> >> >+        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
> >> >+        return;
> >> >+    }
> >> >+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
> >> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
> >> VIRTIO_GPU_RESP_OK_NODATA);
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    fence.flags = cmd->cmd_hdr.flags;
> >> >+    fence.ctx_id = cmd->cmd_hdr.ctx_id;
> >> >+    fence.fence_id = cmd->cmd_hdr.fence_id;
> >> >+    fence.ring_idx = cmd->cmd_hdr.ring_idx;
> >> >+
> >> >+    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
> >> cmd->cmd_hdr.type);
> >> >+
> >> >+    result = rutabaga_create_fence(vr->rutabaga, &fence);
> >> >+    CHECK(!result, cmd);
> >> >+}
> >> >+
> >> >+static void
> >> >+virtio_gpu_rutabaga_aio_cb(void *opaque)
> >> >+{
> >> >+    struct rutabaga_aio_data *data = opaque;
> >> >+    VirtIOGPU *g = VIRTIO_GPU(data->vr);
> >> >+    struct rutabaga_fence fence_data = data->fence;
> >> >+    struct virtio_gpu_ctrl_command *cmd, *tmp;
> >> >+
> >> >+    uint32_t signaled_ctx_specific = fence_data.flags &
> >> >+                                     RUTABAGA_FLAG_INFO_RING_IDX;
> >> >+
> >> >+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
> >> >+        /*
> >> >+         * Due to context specific timelines.
> >> >+         */
> >> >+        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
> >> >+                                       RUTABAGA_FLAG_INFO_RING_IDX;
> >> >+
> >> >+        if (signaled_ctx_specific != target_ctx_specific) {
> >> >+            continue;
> >> >+        }
> >> >+
> >> >+        if (signaled_ctx_specific &&
> >> >+           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
> >> >+            continue;
> >> >+        }
> >> >+
> >> >+        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
> >> >+            continue;
> >> >+        }
> >> >+
> >> >+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
> >> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
> >> VIRTIO_GPU_RESP_OK_NODATA);
> >> >+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
> >> >+        g_free(cmd);
> >> >+    }
> >> >+
> >> >+    g_free(data);
> >> >+}
> >> >+
> >> >+static void
> >> >+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> >> >+                             const struct rutabaga_fence *fence) {
> >> >+    struct rutabaga_aio_data *data;
> >> >+    VirtIOGPU *g = (VirtIOGPU *)user_data;
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    /*
> >> >+     * gfxstream and both cross-domain (and even newer versions
> >> virglrenderer:
> >> >+     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
> >> completion on
> >> >+     * threads ("callback threads") that are different from the thread
> >> that
> >> >+     * processes the command queue ("main thread").
> >> >+     *
> >> >+     * crosvm and other virtio-gpu 1.1 implementations enable callback
> >> threads
> >> >+     * via locking.  However, on QEMU a deadlock is observed if
> >> >+     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
> callback]
> >> is used
> >> >+     * from a thread that is not the main thread.
> >> >+     *
> >> >+     * The reason is QEMU's internal locking is designed to work with
> >> QEMU
> >> >+     * threads (see rcu_register_thread()) and not generic C/C++/Rust
> >> threads.
> >> >+     * For now, we can workaround this by scheduling the return of the
> >> >+     * fence descriptors on the main thread.
> >> >+     */
> >> >+
> >> >+    data = g_new0(struct rutabaga_aio_data, 1);
> >> >+    data->vr = vr;
> >> >+    data->fence = *fence;
> >> >+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> >> >+                            virtio_gpu_rutabaga_aio_cb,
> >> >+                            data);
> >> >+}
> >> >+
> >> >+static void
> >> >+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
> >> >+                             const struct rutabaga_debug *debug) {
> >> >+
> >> >+    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
> >> >+        error_report("%s", debug->message);
> >> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
> >> >+        warn_report("%s", debug->message);
> >> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
> >> >+        info_report("%s", debug->message);
> >> >+    }
> >> >+}
> >> >+
> >> >+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
> >> >+{
> >> >+    int result;
> >> >+    uint64_t capset_mask;
> >> >+    struct rutabaga_builder builder = { 0 };
> >> >+    char wayland_socket_path[UNIX_PATH_MAX];
> >> >+    struct rutabaga_channel channel = { 0 };
> >> >+    struct rutabaga_channels channels = { 0 };
> >> >+
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+    vr->rutabaga = NULL;
> >> >+
> >> >+    if (!vr->capset_names) {
> >> >+        error_setg(errp, "a capset name from the virtio-gpu spec is
> >> required");
> >> >+        return false;
> >> >+    }
> >> >+
> >> >+    builder.wsi = RUTABAGA_WSI_SURFACELESS;
> >> >+    /*
> >> >+     * Currently, if WSI is specified, the only valid strings are
> >> "surfaceless"
> >> >+     * or "headless".  Surfaceless doesn't create a native window
> >> surface, but
> >> >+     * does copy from the render target to the Pixman buffer if a
> >> virtio-gpu
> >> >+     * 2D hypercall is issued.  Surfacless is the default.
> >> >+     *
> >> >+     * Headless is like surfaceless, but doesn't copy to the Pixman
> >> buffer. The
> >> >+     * use case is automated testing environments where there is no
> need
> >> to view
> >> >+     * results.
> >> >+     *
> >> >+     * In the future, more performant virtio-gpu 2D UI integration may
> >> be added.
> >> >+     */
> >> >+    if (vr->wsi) {
> >> >+        if (g_str_equal(vr->wsi, "surfaceless")) {
> >> >+            vr->headless = false;
> >> >+        } else if (g_str_equal(vr->wsi, "headless")) {
> >> >+            vr->headless = true;
> >> >+        } else {
> >> >+            error_setg(errp, "invalid wsi option selected");
> >> >+            return false;
> >> >+        }
> >> >+    }
> >> >+
> >> >+    result = rutabaga_calculate_capset_mask(vr->capset_names,
> >> &capset_mask);
> >>
> >> First, sorry for responding after such a long time. I've been busy with
> >> work and I'm doing QEMU in my free time.
> >>
> >> In iteration 1 I've raised the topic on capset_names [1] and I haven't
> >> seen it answered properly. Perhaps I need to rephrase a bit so here we
> go:
> >> capset_names seems to be colon-separated list of bit options managed by
> >> rutabaga. This introduces yet another way of options handling. There
> have
> >> been talks about harmonizing options handling in QEMU since apparently
> it
> >> is considered too complex [2,3].
> >
> >
> >> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
> >> create "capset" from QOM properties?
> >
> >IIUC these flags could come from virtio_gpu.h which is already present in
> >> the QEMU tree. This would not inly shortcut the dependency on rutabaga
> here
> >> but would also be more idiomatic QEMU (since it makes the options more
> >> introspectable by internal machinery).
> >
> >
> >> Of course the bitfield approach would require modifications in QEMU
> >> whenever rutabaga gains new features. However, I figure that in the long
> >> term rutabaga will be quite feature complete such that the benefits of
> >> idiomatic QEMU handling will outweigh the decoupling of the projects.
> >>
> >> What do you think?
> >>
> >
> >I think what you're suggesting is something like -device
> >virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
> >gfxstream_vulkan + cross_domain]?
>
> I was thinking more along the lines of
> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
> gfxstream_vulkan and cross_domain are boolean QOM properties. This would
> make for a human-readable format which follows QEMU style.
>
> >
> >We actually did consider something like that when adding the
> >--context-types flag [with crosvm], but there was a desire for a
> >human-readable format rather than numbers [even if they are in the
> >virtio-gpu spec].
> >
> >Additionally, there are quite a few context types that people are playing
> >around with [gfxstream-gles, gfxstream-composer] that are launchable and
> >aren't in the spec just yet.
>
> Right, QEMU had to be modified for this kind of experimentation. I
> considered this in my last paragraph and figured that in the long run QEMU
> *may* prefer more idiomatic option handling since it tries hard to not
> break its command line interface. I'm just pointing this out -- the
> decision is ultimately up to the community.
>
> Why not have dedicated QEMU development branches for experimentation?
> Wouldn't upstreaming new features into QEMU be a good motivation to get the
> missing pieces into the spec, once they are mature?


> >
> >Also, a key feature we want to explicitly **not** turn on all available
> >context-types and let the user decide.
>
> How would you prevent that with the current colon-separated approach?
> Splitting capset_mask in multiple parameters is just a different
> syntactical representation of the same thing.
>
> > That'll allow guest Mesa in
> >particular to do its magic in its loader.  So one may run Zink + ANV with
> >ioctl forwarding, or Iris + ioctl forwarding and compare performance with
> >the same guest image.
> >
> >And another thing is one needs some knowledge of the host system to choose
> >the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
> >MacOS.  So I think the task of choosing the right context type will fall
> to
> >projects that depend on QEMU (such as Android Emulator) which have some
> >knowledge of the host environment.
> >
> >We actually have a graphics detector somewhere that calls VK/OpenGL before
> >launching the VM and sets the right options.  Plan is to port into
> >gfxstream, maybe we could use that.
>
> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
> conflicting combinations, no?
>
> >
> >So given the desire for human readable formats, being portable across VMMs
> >(crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
> >conversion was put in the rutabaga API.  So I wouldn't change it for those
> >reasons.
>
> What do you mean by being portable across VMMs?


Having the API inside rutabaga is (mildly) useful when multiple VMMs have
the need to translate from a human-readable format to flags digestible by
rutabaga.

https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452

https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353

https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505

For these crosvm/qemu launchers, I imagine capset names will be plumbed all
the way through eventually (launch_cvd
--gpu_context=gfxstream-vulkan:cross-domain if you've played around with
Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
around with Termina VMs).

I think rust-vmm could also use the same API ("--capset_names") too.


> Sure, QEMU had to be taught new flags before being able to use new
> rutabaga features. I agree that this comes with a certain inconvenience.
> But it may also be inconvenient for QEMU to deal with additional ad-hoc
> options parsing when there are efforts for harmonization.
>
> Did my comments shed new light into the discussion?


Yes, they do.  I agree with you that both crosvm/qemu have too many flags,
and having a stable command line interface is important.  We are aiming for
stability with the `--capset_names={colon string}` command line, and at
least for crosvm looking to deprecate older options [since we've never had
an official release of crosvm yet].

I do think:

1) "capset_names=gfxstream-vulkan:cross-domain"
2) "cross-domain=on,gfxstream-vulkan=on"

are similar enough.  I would choose (1) for since I think not duplicating
the [name] -> flag logic and having a similar interface across VMMs + VMM
launchers is ever-so slightly useful.


> Thanks,
> Bernhard
>
> >
> >
> >>
> >> Best regards,
> >> Bernhard
> >>
> >> [1]
> >>
> https://lore.kernel.org/qemu-devel/D15471EC-D1D1-4DAA-A6E7-19827C36AEC8@gmail.com/
> >> [2] https://m.youtube.com/watch?v=gtpOLQgnwug
> >> [3] https://m.youtube.com/watch?v=FMQtog6KUlo
> >>
> >> >+    if (result) {
> >> >+        error_setg_errno(errp, -result, "invalid capset names: %s",
> >> >+                         vr->capset_names);
> >> >+        return false;
> >> >+    }
> >> >+
> >> >+    builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
> >> >+    builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
> >> >+    builder.capset_mask = capset_mask;
> >> >+    builder.user_data = (uint64_t)g;
> >> >+
> >> >+    /*
> >> >+     * If the user doesn't specify the wayland socket path, we try to
> >> infer
> >> >+     * the socket via a process similar to the one used by libwayland.
> >> >+     * libwayland does the following:
> >> >+     *
> >> >+     * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
> >> >+     *    $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
> >> >+     * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
> >> >+     * 3) Otherwise, don't pass a wayland socket to rutabaga. If a
> guest
> >> >+     *    wayland proxy is launched, it will fail to work.
> >> >+     */
> >> >+    channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
> >> >+    if (!vr->wayland_socket_path) {
> >> >+        const char *runtime_dir = getenv("XDG_RUNTIME_DIR");
> >> >+        const char *display = getenv("WAYLAND_DISPLAY");
> >> >+        if (!display) {
> >> >+            display = "wayland-0";
> >> >+        }
> >> >+
> >> >+        if (runtime_dir) {
> >> >+            result = snprintf(wayland_socket_path, UNIX_PATH_MAX,
> >> >+                              "%s/%s", runtime_dir, display);
> >> >+            if (result > 0 && result < UNIX_PATH_MAX) {
> >> >+                channel.channel_name = wayland_socket_path;
> >> >+            }
> >> >+        }
> >> >+    } else {
> >> >+        channel.channel_name = vr->wayland_socket_path;
> >> >+    }
> >> >+
> >> >+    if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
> >> >+        if (channel.channel_name) {
> >> >+            channels.channels = &channel;
> >> >+            channels.num_channels = 1;
> >> >+            builder.channels = &channels;
> >> >+        }
> >> >+    }
> >> >+
> >> >+    result = rutabaga_init(&builder, &vr->rutabaga);
> >> >+    if (result) {
> >> >+        error_setg_errno(errp, -result, "Failed to init rutabaga");
> >> >+        return result;
> >> >+    }
> >> >+
> >> >+    return true;
> >> >+}
> >> >+
> >> >+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
> >> >+{
> >> >+    int result;
> >> >+    uint32_t num_capsets;
> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >> >+
> >> >+    result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
> >> >+    if (result) {
> >> >+        error_report("Failed to get capsets");
> >> >+        return 0;
> >> >+    }
> >> >+    vr->num_capsets = num_capsets;
> >> >+    return num_capsets;
> >> >+}
> >> >+
> >> >+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev,
> >> VirtQueue *vq)
> >> >+{
> >> >+    VirtIOGPU *g = VIRTIO_GPU(vdev);
> >> >+    struct virtio_gpu_ctrl_command *cmd;
> >> >+
> >> >+    if (!virtio_queue_ready(vq)) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> >> >+    while (cmd) {
> >> >+        cmd->vq = vq;
> >> >+        cmd->error = 0;
> >> >+        cmd->finished = false;
> >> >+        QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
> >> >+        cmd = virtqueue_pop(vq, sizeof(struct
> virtio_gpu_ctrl_command));
> >> >+    }
> >> >+
> >> >+    virtio_gpu_process_cmdq(g);
> >> >+}
> >> >+
> >> >+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error
> **errp)
> >> >+{
> >> >+    int num_capsets;
> >> >+    VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
> >> >+    VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
> >> >+
> >> >+#if HOST_BIG_ENDIAN
> >> >+    error_setg(errp, "rutabaga is not supported on bigendian
> platforms");
> >> >+    return;
> >> >+#endif
> >> >+
> >> >+    if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
> >> >+    if (!num_capsets) {
> >> >+        return;
> >> >+    }
> >> >+
> >> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
> >> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
> >> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
> >> >+
> >> >+    bdev->virtio_config.num_capsets = num_capsets;
> >> >+    virtio_gpu_device_realize(qdev, errp);
> >> >+}
> >> >+
> >> >+static Property virtio_gpu_rutabaga_properties[] = {
> >> >+    DEFINE_PROP_STRING("capset_names", VirtIOGPURutabaga,
> capset_names),
> >> >+    DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
> >> >+                       wayland_socket_path),
> >> >+    DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
> >> >+    DEFINE_PROP_END_OF_LIST(),
> >> >+};
> >> >+
> >> >+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void
> >> *data)
> >> >+{
> >> >+    DeviceClass *dc = DEVICE_CLASS(klass);
> >> >+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> >> >+    VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
> >> >+    VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
> >> >+
> >> >+    vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
> >> >+    vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
> >> >+    vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
> >> >+    vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
> >> >+
> >> >+    vdc->realize = virtio_gpu_rutabaga_realize;
> >> >+    device_class_set_props(dc, virtio_gpu_rutabaga_properties);
> >> >+}
> >> >+
> >> >+static const TypeInfo virtio_gpu_rutabaga_info = {
> >> >+    .name = TYPE_VIRTIO_GPU_RUTABAGA,
> >> >+    .parent = TYPE_VIRTIO_GPU,
> >> >+    .instance_size = sizeof(VirtIOGPURutabaga),
> >> >+    .class_init = virtio_gpu_rutabaga_class_init,
> >> >+};
> >> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
> >> >+module_kconfig(VIRTIO_GPU);
> >> >+
> >> >+static void virtio_register_types(void)
> >> >+{
> >> >+    type_register_static(&virtio_gpu_rutabaga_info);
> >> >+}
> >> >+
> >> >+type_init(virtio_register_types)
> >> >+
> >> >+module_dep("hw-display-virtio-gpu");
> >> >diff --git a/hw/display/virtio-vga-rutabaga.c
> >> b/hw/display/virtio-vga-rutabaga.c
> >> >new file mode 100644
> >> >index 0000000000..b5b43e3b90
> >> >--- /dev/null
> >> >+++ b/hw/display/virtio-vga-rutabaga.c
> >> >@@ -0,0 +1,53 @@
> >> >+/*
> >> >+ * SPDX-License-Identifier: GPL-2.0-or-later
> >> >+ */
> >> >+
> >> >+#include "qemu/osdep.h"
> >> >+#include "hw/pci/pci.h"
> >> >+#include "hw/qdev-properties.h"
> >> >+#include "hw/virtio/virtio-gpu.h"
> >> >+#include "hw/display/vga.h"
> >> >+#include "qapi/error.h"
> >> >+#include "qemu/module.h"
> >> >+#include "virtio-vga.h"
> >> >+#include "qom/object.h"
> >> >+
> >> >+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
> >> >+
> >> >+typedef struct VirtIOVGARutabaga VirtIOVGARutabaga;
> >> >+DECLARE_INSTANCE_CHECKER(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA,
> >> >+                         TYPE_VIRTIO_VGA_RUTABAGA)
> >> >+
> >> >+struct VirtIOVGARutabaga {
> >> >+    VirtIOVGABase parent_obj;
> >> >+    VirtIOGPURutabaga vdev;
> >> >+};
> >> >+
> >> >+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
> >> >+{
> >> >+    VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
> >> >+
> >> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> >> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
> >> >+    VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> >> >+}
> >> >+
> >> >+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
> >> >+    .generic_name  = TYPE_VIRTIO_VGA_RUTABAGA,
> >> >+    .parent        = TYPE_VIRTIO_VGA_BASE,
> >> >+    .instance_size = sizeof(VirtIOVGARutabaga),
> >> >+    .instance_init = virtio_vga_rutabaga_inst_initfn,
> >> >+};
> >> >+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
> >> >+module_kconfig(VIRTIO_VGA);
> >> >+
> >> >+static void virtio_vga_register_types(void)
> >> >+{
> >> >+    if (have_vga) {
> >> >+        virtio_pci_types_register(&virtio_vga_rutabaga_info);
> >> >+    }
> >> >+}
> >> >+
> >> >+type_init(virtio_vga_register_types)
> >> >+
> >> >+module_dep("hw-display-virtio-vga");
> >>
>

[-- Attachment #2: Type: text/html, Size: 87314 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-15  2:38         ` Gurchetan Singh
@ 2023-09-19 18:36           ` Bernhard Beschow
  2023-09-19 22:07             ` Akihiko Odaki
  2023-09-27 11:34             ` Thomas Huth
  0 siblings, 2 replies; 34+ messages in thread
From: Bernhard Beschow @ 2023-09-19 18:36 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd,
	Markus Armbruster, Thomas Huth



Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com> wrote:
>
>>
>>
>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
>> gurchetansingh@chromium.org>:
>> >On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com>
>> wrote:
>> >
>> >>
>> >>
>> >> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
>> >> gurchetansingh@chromium.org>:
>> >> >This adds initial support for gfxstream and cross-domain.  Both
>> >> >features rely on virtio-gpu blob resources and context types, which
>> >> >are also implemented in this patch.
>> >> >
>> >> >gfxstream has a long and illustrious history in Android graphics
>> >> >paravirtualization.  It has been powering graphics in the Android
>> >> >Studio Emulator for more than a decade, which is the main developer
>> >> >platform.
>> >> >
>> >> >Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>> >> >The key design characteristic was a 1:1 threading model and
>> >> >auto-generation, which fit nicely with the OpenGLES spec.  It also
>> >> >allowed easy layering with ANGLE on the host, which provides the GLES
>> >> >implementations on Windows or MacOS enviroments.
>> >> >
>> >> >gfxstream has traditionally been maintained by a single engineer, and
>> >> >between 2015 to 2021, the goldfish throne passed to Frank Yang.
>> >> >Historians often remark this glorious reign ("pax gfxstreama" is the
>> >> >academic term) was comparable to that of Augustus and both Queen
>> >> >Elizabeths.  Just to name a few accomplishments in a resplendent
>> >> >panoply: higher versions of GLES, address space graphics, snapshot
>> >> >support and CTS compliant Vulkan [b].
>> >> >
>> >> >One major drawback was the use of out-of-tree goldfish drivers.
>> >> >Android engineers didn't know much about DRM/KMS and especially TTM so
>> >> >a simple guest to host pipe was conceived.
>> >> >
>> >> >Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>> >> >the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
>> >> >port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>> >> >It was a symbol compatible replacement of virglrenderer [c] and named
>> >> >"AVDVirglrenderer".  This implementation forms the basis of the
>> >> >current gfxstream host implementation still in use today.
>> >> >
>> >> >cross-domain support follows a similar arc.  Originally conceived by
>> >> >Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>> >> >2018, it initially relied on the downstream "virtio-wl" device.
>> >> >
>> >> >In 2020 and 2021, virtio-gpu was extended to include blob resources
>> >> >and multiple timelines by yours truly, features gfxstream/cross-domain
>> >> >both require to function correctly.
>> >> >
>> >> >Right now, we stand at the precipice of a truly fantastic possibility:
>> >> >the Android Emulator powered by upstream QEMU and upstream Linux
>> >> >kernel.  gfxstream will then be packaged properfully, and app
>> >> >developers can even fix gfxstream bugs on their own if they encounter
>> >> >them.
>> >> >
>> >> >It's been quite the ride, my friends.  Where will gfxstream head next,
>> >> >nobody really knows.  I wouldn't be surprised if it's around for
>> >> >another decade, maintained by a new generation of Android graphics
>> >> >enthusiasts.
>> >> >
>> >> >Technical details:
>> >> >  - Very simple initial display integration: just used Pixman
>> >> >  - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>> >> >    calls
>> >> >
>> >> >Next steps for Android VMs:
>> >> >  - The next step would be improving display integration and UI
>> interfaces
>> >> >    with the goal of the QEMU upstream graphics being in an emulator
>> >> >    release [d].
>> >> >
>> >> >Next steps for Linux VMs for display virtualization:
>> >> >  - For widespread distribution, someone needs to package Sommelier or
>> the
>> >> >    wayland-proxy-virtwl [e] ideally into Debian main. In addition,
>> newer
>> >> >    versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
>> >> >    which allows disabling KMS hypercalls.  If anyone cares enough,
>> it'll
>> >> >    probably be possible to build a custom VM variant that uses this
>> >> display
>> >> >    virtualization strategy.
>> >> >
>> >> >[a]
>> >> https://android-review.googlesource.com/c/platform/development/+/34470
>> >> >[b]
>> >>
>> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>> >> >[c]
>> >>
>> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>> >> >[d] https://developer.android.com/studio/releases/emulator
>> >> >[e] https://github.com/talex5/wayland-proxy-virtwl
>> >> >
>> >> >Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>> >> >Tested-by: Alyssa Ross <hi@alyssa.is>
>> >> >Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>> >> >Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>> >> >---
>> >> >v1: Incorported various suggestions by Akihiko Odaki and Bernard
>> Berschow
>> >> >    - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>> >> >    - Used error_report(..)
>> >> >    - Used g_autofree to fix leaks on error paths
>> >> >    - Removed unnecessary casts
>> >> >    - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>> >> >
>> >> >v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau
>> >> and
>> >> >    Bernard Berschow:
>> >> >    - Parenthesis in CHECK macro
>> >> >    - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>> >> >    - delay until g->parent_obj.enable = 1
>> >> >    - Additional cast fixes
>> >> >    - initialize directly in virtio_gpu_rutabaga_realize(..)
>> >> >    - add debug callback to hook into QEMU error's APIs
>> >> >
>> >> >v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>> >> >    - Autodetect Wayland socket when not explicitly specified
>> >> >    - Fix map_blob error paths
>> >> >    - Add comment why we need both `res` and `resource` in create blob
>> >> >    - Cast and whitespace fixes
>> >> >    - Big endian check comes before virtio_gpu_rutabaga_init().
>> >> >    - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>> >> >
>> >> >v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>> >> >    - Double checked all casts
>> >> >    - Remove unnecessary parenthesis
>> >> >    - Removed `resource` in create_blob
>> >> >    - Added comment about failure case
>> >> >    - Pass user-provided socket as-is
>> >> >    - Use stack variable rather than heap allocation
>> >> >    - Future-proofed map info API to give access flags as well
>> >> >
>> >> >v5: Incorporated feedback from Akihiko Odaki:
>> >> >    - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>> >> >    - Simplify num_capsets check
>> >> >    - Call cleanup mapping on error paths
>> >> >    - uint64_t --> void* for rutabaga_map(..)
>> >> >    - Removed unnecessary parenthesis
>> >> >    - Removed unnecessary cast
>> >> >    - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>> >> >    - Reuse result variable
>> >> >
>> >> >v6: Incorporated feedback from Akihiko Odaki:
>> >> >    - Remove unnecessary #ifndef
>> >> >    - Disable scanout when appropriate
>> >> >    - CHECK capset index within range outside loop
>> >> >    - Add capset_version
>> >> >
>> >> >v7: Incorporated feedback from Akihiko Odaki:
>> >> >    - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>> >> >
>> >> >v9: Incorportated feedback from Akihiko Odaki:
>> >> >    - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>> >> >    - Add error_setg(..) after rutabaga_init(..)
>> >> >
>> >> >v10: Incorportated feedback from Akihiko Odaki:
>> >> >    - error_setg(..) --> error_setg_errno(..) when appropriate
>> >> >    - virtio_gpu_rutabaga_init returns a bool instead of an int
>> >> >
>> >> >v11: Incorportated feedback from Philippe Mathieu-Daudé:
>> >> >    - C-style /* */ comments and avoid // comments.
>> >> >    - GPL-2.0 --> GPL-2.0-or-later
>> >> >
>> >> > hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>> >> > hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
>> >> > hw/display/virtio-vga-rutabaga.c     |   53 ++
>> >> > 3 files changed, 1224 insertions(+)
>> >> > create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>> >> > create mode 100644 hw/display/virtio-gpu-rutabaga.c
>> >> > create mode 100644 hw/display/virtio-vga-rutabaga.c
>> >> >
>> >> >diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
>> >> b/hw/display/virtio-gpu-pci-rutabaga.c
>> >> >new file mode 100644
>> >> >index 0000000000..311eff308a
>> >> >--- /dev/null
>> >> >+++ b/hw/display/virtio-gpu-pci-rutabaga.c
>> >> >@@ -0,0 +1,50 @@
>> >> >+/*
>> >> >+ * SPDX-License-Identifier: GPL-2.0-or-later
>> >> >+ */
>> >> >+
>> >> >+#include "qemu/osdep.h"
>> >> >+#include "qapi/error.h"
>> >> >+#include "qemu/module.h"
>> >> >+#include "hw/pci/pci.h"
>> >> >+#include "hw/qdev-properties.h"
>> >> >+#include "hw/virtio/virtio.h"
>> >> >+#include "hw/virtio/virtio-bus.h"
>> >> >+#include "hw/virtio/virtio-gpu-pci.h"
>> >> >+#include "qom/object.h"
>> >> >+
>> >> >+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>> >> >+typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>> >> >+DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
>> VIRTIO_GPU_RUTABAGA_PCI,
>> >> >+                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>> >> >+
>> >> >+struct VirtIOGPURutabagaPCI {
>> >> >+    VirtIOGPUPCIBase parent_obj;
>> >> >+    VirtIOGPURutabaga vdev;
>> >> >+};
>> >> >+
>> >> >+static void virtio_gpu_rutabaga_initfn(Object *obj)
>> >> >+{
>> >> >+    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>> >> >+
>> >> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> >> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
>> >> >+    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> >> >+}
>> >> >+
>> >> >+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>> >> >+    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>> >> >+    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>> >> >+    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>> >> >+    .instance_init = virtio_gpu_rutabaga_initfn,
>> >> >+};
>> >> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>> >> >+module_kconfig(VIRTIO_PCI);
>> >> >+
>> >> >+static void virtio_gpu_rutabaga_pci_register_types(void)
>> >> >+{
>> >> >+    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>> >> >+}
>> >> >+
>> >> >+type_init(virtio_gpu_rutabaga_pci_register_types)
>> >> >+
>> >> >+module_dep("hw-display-virtio-gpu-pci");
>> >> >diff --git a/hw/display/virtio-gpu-rutabaga.c
>> >> b/hw/display/virtio-gpu-rutabaga.c
>> >> >new file mode 100644
>> >> >index 0000000000..9018e5a702
>> >> >--- /dev/null
>> >> >+++ b/hw/display/virtio-gpu-rutabaga.c
>> >> >@@ -0,0 +1,1121 @@
>> >> >+/*
>> >> >+ * SPDX-License-Identifier: GPL-2.0-or-later
>> >> >+ */
>> >> >+
>> >> >+#include "qemu/osdep.h"
>> >> >+#include "qapi/error.h"
>> >> >+#include "qemu/error-report.h"
>> >> >+#include "qemu/iov.h"
>> >> >+#include "trace.h"
>> >> >+#include "hw/virtio/virtio.h"
>> >> >+#include "hw/virtio/virtio-gpu.h"
>> >> >+#include "hw/virtio/virtio-gpu-pixman.h"
>> >> >+#include "hw/virtio/virtio-iommu.h"
>> >> >+
>> >> >+#include <glib/gmem.h>
>> >> >+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>> >> >+
>> >> >+#define CHECK(condition, cmd)
>> >>      \
>> >> >+    do {
>> >>       \
>> >> >+        if (!(condition)) {
>> >>      \
>> >> >+            error_report("CHECK failed in %s() %s:" "%d", __func__,
>> >>      \
>> >> >+                         __FILE__, __LINE__);
>> >>      \
>> >> >+            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>> >>       \
>> >> >+            return;
>> >>      \
>> >> >+       }
>> >>       \
>> >> >+    } while (0)
>> >> >+
>> >> >+/*
>> >> >+ * This is the size of the char array in struct sock_addr_un. No
>> Wayland
>> >> socket
>> >> >+ * can be created with a path longer than this, including the null
>> >> terminator.
>> >> >+ */
>> >> >+#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>> >> >+
>> >> >+struct rutabaga_aio_data {
>> >> >+    struct VirtIOGPURutabaga *vr;
>> >> >+    struct rutabaga_fence fence;
>> >> >+};
>> >> >+
>> >> >+static void
>> >> >+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
>> >> virtio_gpu_scanout *s,
>> >> >+                                  uint32_t resource_id)
>> >> >+{
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct rutabaga_transfer transfer = { 0 };
>> >> >+    struct iovec transfer_iovec;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, resource_id);
>> >> >+    if (!res) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    if (res->width != s->current_cursor->width ||
>> >> >+        res->height != s->current_cursor->height) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    transfer.x = 0;
>> >> >+    transfer.y = 0;
>> >> >+    transfer.z = 0;
>> >> >+    transfer.w = res->width;
>> >> >+    transfer.h = res->height;
>> >> >+    transfer.d = 1;
>> >> >+
>> >> >+    transfer_iovec.iov_base = s->current_cursor->data;
>> >> >+    transfer_iovec.iov_len = res->width * res->height * 4;
>> >> >+
>> >> >+    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> >> >+                                    resource_id, &transfer,
>> >> >+                                    &transfer_iovec);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>> >> >+{
>> >> >+    VirtIOGPU *g = VIRTIO_GPU(b);
>> >> >+    virtio_gpu_process_cmdq(g);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>> >> >+                                struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct rutabaga_create_3d rc_3d = { 0 };
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_resource_create_2d c2d;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(c2d);
>> >> >+    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>> >> >+                                       c2d.width, c2d.height);
>> >> >+
>> >> >+    rc_3d.target = 2;
>> >> >+    rc_3d.format = c2d.format;
>> >> >+    rc_3d.bind = (1 << 1);
>> >> >+    rc_3d.width = c2d.width;
>> >> >+    rc_3d.height = c2d.height;
>> >> >+    rc_3d.depth = 1;
>> >> >+    rc_3d.array_size = 1;
>> >> >+    rc_3d.last_level = 0;
>> >> >+    rc_3d.nr_samples = 0;
>> >> >+    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>> >> >+
>> >> >+    result = rutabaga_resource_create_3d(vr->rutabaga,
>> c2d.resource_id,
>> >> &rc_3d);
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >> >+    res->width = c2d.width;
>> >> >+    res->height = c2d.height;
>> >> >+    res->format = c2d.format;
>> >> >+    res->resource_id = c2d.resource_id;
>> >> >+
>> >> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>> >> >+                                struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct rutabaga_create_3d rc_3d = { 0 };
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_resource_create_3d c3d;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(c3d);
>> >> >+
>> >> >+    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>> >> >+                                       c3d.width, c3d.height,
>> c3d.depth);
>> >> >+
>> >> >+    rc_3d.target = c3d.target;
>> >> >+    rc_3d.format = c3d.format;
>> >> >+    rc_3d.bind = c3d.bind;
>> >> >+    rc_3d.width = c3d.width;
>> >> >+    rc_3d.height = c3d.height;
>> >> >+    rc_3d.depth = c3d.depth;
>> >> >+    rc_3d.array_size = c3d.array_size;
>> >> >+    rc_3d.last_level = c3d.last_level;
>> >> >+    rc_3d.nr_samples = c3d.nr_samples;
>> >> >+    rc_3d.flags = c3d.flags;
>> >> >+
>> >> >+    result = rutabaga_resource_create_3d(vr->rutabaga,
>> c3d.resource_id,
>> >> &rc_3d);
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >> >+    res->width = c3d.width;
>> >> >+    res->height = c3d.height;
>> >> >+    res->format = c3d.format;
>> >> >+    res->resource_id = c3d.resource_id;
>> >> >+
>> >> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_resource_unref(VirtIOGPU *g,
>> >> >+                            struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_resource_unref unref;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(unref);
>> >> >+
>> >> >+    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, unref.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+
>> >> >+    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    if (res->image) {
>> >> >+        pixman_image_unref(res->image);
>> >> >+    }
>> >> >+
>> >> >+    QTAILQ_REMOVE(&g->reslist, res, next);
>> >> >+    g_free(res);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_context_create(VirtIOGPU *g,
>> >> >+                            struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_ctx_create cc;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(cc);
>> >> >+    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>> >> >+                                    cc.debug_name);
>> >> >+
>> >> >+    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>> >> >+                                     cc.context_init, cc.debug_name,
>> >> cc.nlen);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_context_destroy(VirtIOGPU *g,
>> >> >+                             struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_ctx_destroy cd;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(cd);
>> >> >+    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>> >> >+
>> >> >+    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
>> virtio_gpu_ctrl_command
>> >> *cmd)
>> >> >+{
>> >> >+    int32_t result, i;
>> >> >+    struct virtio_gpu_scanout *scanout = NULL;
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct rutabaga_transfer transfer = { 0 };
>> >> >+    struct iovec transfer_iovec;
>> >> >+    struct virtio_gpu_resource_flush rf;
>> >> >+    bool found = false;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+    if (vr->headless) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(rf);
>> >> >+    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>> >> >+                                   rf.r.width, rf.r.height, rf.r.x,
>> >> rf.r.y);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, rf.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+
>> >> >+    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>> >> >+        scanout = &g->parent_obj.scanout[i];
>> >> >+        if (i == res->scanout_bitmask) {
>> >> >+            found = true;
>> >> >+            break;
>> >> >+        }
>> >> >+    }
>> >> >+
>> >> >+    if (!found) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    transfer.x = 0;
>> >> >+    transfer.y = 0;
>> >> >+    transfer.z = 0;
>> >> >+    transfer.w = res->width;
>> >> >+    transfer.h = res->height;
>> >> >+    transfer.d = 1;
>> >> >+
>> >> >+    transfer_iovec.iov_base = pixman_image_get_data(res->image);
>> >> >+    transfer_iovec.iov_len = res->width * res->height * 4;
>> >> >+
>> >> >+    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> >> >+                                             rf.resource_id,
>> &transfer,
>> >> >+                                             &transfer_iovec);
>> >> >+    CHECK(!result, cmd);
>> >> >+    dpy_gfx_update_full(scanout->con);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> >> *cmd)
>> >> >+{
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_scanout *scanout = NULL;
>> >> >+    struct virtio_gpu_set_scanout ss;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+    if (vr->headless) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(ss);
>> >> >+    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>> >> >+                                     ss.r.width, ss.r.height, ss.r.x,
>> >> ss.r.y);
>> >> >+
>> >> >+    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>> >> >+    scanout = &g->parent_obj.scanout[ss.scanout_id];
>> >> >+
>> >> >+    if (ss.resource_id == 0) {
>> >> >+        dpy_gfx_replace_surface(scanout->con, NULL);
>> >> >+        dpy_gl_scanout_disable(scanout->con);
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, ss.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+
>> >> >+    if (!res->image) {
>> >> >+        pixman_format_code_t pformat;
>> >> >+        pformat = virtio_gpu_get_pixman_format(res->format);
>> >> >+        CHECK(pformat, cmd);
>> >> >+
>> >> >+        res->image = pixman_image_create_bits(pformat,
>> >> >+                                              res->width,
>> >> >+                                              res->height,
>> >> >+                                              NULL, 0);
>> >> >+        CHECK(res->image, cmd);
>> >> >+        pixman_image_ref(res->image);
>> >> >+    }
>> >> >+
>> >> >+    g->parent_obj.enable = 1;
>> >> >+
>> >> >+    /* realloc the surface ptr */
>> >> >+    scanout->ds = qemu_create_displaysurface_pixman(res->image);
>> >> >+    dpy_gfx_replace_surface(scanout->con, NULL);
>> >> >+    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>> >> >+    res->scanout_bitmask = ss.scanout_id;
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_submit_3d(VirtIOGPU *g,
>> >> >+                       struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_cmd_submit cs;
>> >> >+    struct rutabaga_command rutabaga_cmd = { 0 };
>> >> >+    g_autofree uint8_t *buf = NULL;
>> >> >+    size_t s;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(cs);
>> >> >+    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>> >> >+
>> >> >+    buf = g_new0(uint8_t, cs.size);
>> >> >+    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>> >> >+                   sizeof(cs), buf, cs.size);
>> >> >+    CHECK(s == cs.size, cmd);
>> >> >+
>> >> >+    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>> >> >+    rutabaga_cmd.cmd = buf;
>> >> >+    rutabaga_cmd.cmd_size = cs.size;
>> >> >+
>> >> >+    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct rutabaga_transfer transfer = { 0 };
>> >> >+    struct virtio_gpu_transfer_to_host_2d t2d;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(t2d);
>> >> >+    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>> >> >+
>> >> >+    transfer.x = t2d.r.x;
>> >> >+    transfer.y = t2d.r.y;
>> >> >+    transfer.z = 0;
>> >> >+    transfer.w = t2d.r.width;
>> >> >+    transfer.h = t2d.r.height;
>> >> >+    transfer.d = 1;
>> >> >+
>> >> >+    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
>> >> t2d.resource_id,
>> >> >+                                              &transfer);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct rutabaga_transfer transfer = { 0 };
>> >> >+    struct virtio_gpu_transfer_host_3d t3d;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(t3d);
>> >> >+    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>> >> >+
>> >> >+    transfer.x = t3d.box.x;
>> >> >+    transfer.y = t3d.box.y;
>> >> >+    transfer.z = t3d.box.z;
>> >> >+    transfer.w = t3d.box.w;
>> >> >+    transfer.h = t3d.box.h;
>> >> >+    transfer.d = t3d.box.d;
>> >> >+    transfer.level = t3d.level;
>> >> >+    transfer.stride = t3d.stride;
>> >> >+    transfer.layer_stride = t3d.layer_stride;
>> >> >+    transfer.offset = t3d.offset;
>> >> >+
>> >> >+    result = rutabaga_resource_transfer_write(vr->rutabaga,
>> >> t3d.hdr.ctx_id,
>> >> >+                                              t3d.resource_id,
>> >> &transfer);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>> >> >+                                   struct virtio_gpu_ctrl_command
>> *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct rutabaga_transfer transfer = { 0 };
>> >> >+    struct virtio_gpu_transfer_host_3d t3d;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(t3d);
>> >> >+    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>> >> >+
>> >> >+    transfer.x = t3d.box.x;
>> >> >+    transfer.y = t3d.box.y;
>> >> >+    transfer.z = t3d.box.z;
>> >> >+    transfer.w = t3d.box.w;
>> >> >+    transfer.h = t3d.box.h;
>> >> >+    transfer.d = t3d.box.d;
>> >> >+    transfer.level = t3d.level;
>> >> >+    transfer.stride = t3d.stride;
>> >> >+    transfer.layer_stride = t3d.layer_stride;
>> >> >+    transfer.offset = t3d.offset;
>> >> >+
>> >> >+    result = rutabaga_resource_transfer_read(vr->rutabaga,
>> >> t3d.hdr.ctx_id,
>> >> >+                                             t3d.resource_id,
>> &transfer,
>> >> NULL);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
>> virtio_gpu_ctrl_command
>> >> *cmd)
>> >> >+{
>> >> >+    struct rutabaga_iovecs vecs = { 0 };
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_resource_attach_backing att_rb;
>> >> >+    int ret;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(att_rb);
>> >> >+    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+    CHECK(!res->iov, cmd);
>> >> >+
>> >> >+    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
>> >> sizeof(att_rb),
>> >> >+                                        cmd, NULL, &res->iov,
>> >> &res->iov_cnt);
>> >> >+    CHECK(!ret, cmd);
>> >> >+
>> >> >+    vecs.iovecs = res->iov;
>> >> >+    vecs.num_iovecs = res->iov_cnt;
>> >> >+
>> >> >+    ret = rutabaga_resource_attach_backing(vr->rutabaga,
>> >> att_rb.resource_id,
>> >> >+                                           &vecs);
>> >> >+    if (ret != 0) {
>> >> >+        virtio_gpu_cleanup_mapping(g, res);
>> >> >+    }
>> >> >+
>> >> >+    CHECK(!ret, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
>> virtio_gpu_ctrl_command
>> >> *cmd)
>> >> >+{
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_resource_detach_backing detach_rb;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(detach_rb);
>> >> >+    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+
>> >> >+    rutabaga_resource_detach_backing(vr->rutabaga,
>> >> >+                                     detach_rb.resource_id);
>> >> >+
>> >> >+    virtio_gpu_cleanup_mapping(g, res);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_ctx_resource att_res;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(att_res);
>> >> >+    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>> >> >+                                        att_res.resource_id);
>> >> >+
>> >> >+    result = rutabaga_context_attach_resource(vr->rutabaga,
>> >> att_res.hdr.ctx_id,
>> >> >+                                              att_res.resource_id);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_ctx_resource det_res;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(det_res);
>> >> >+    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>> >> >+                                        det_res.resource_id);
>> >> >+
>> >> >+    result = rutabaga_context_detach_resource(vr->rutabaga,
>> >> det_res.hdr.ctx_id,
>> >> >+                                              det_res.resource_id);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
>> >> virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_get_capset_info info;
>> >> >+    struct virtio_gpu_resp_capset_info resp;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(info);
>> >> >+
>> >> >+    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>> >> >+                                      &resp.capset_id,
>> >> &resp.capset_max_version,
>> >> >+                                      &resp.capset_max_size);
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>> >> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>> >> *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    struct virtio_gpu_get_capset gc;
>> >> >+    struct virtio_gpu_resp_capset *resp;
>> >> >+    uint32_t capset_size, capset_version;
>> >> >+    uint32_t current_id, i;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(gc);
>> >> >+    for (i = 0; i < vr->num_capsets; i++) {
>> >> >+        result = rutabaga_get_capset_info(vr->rutabaga, i,
>> >> >+                                          &current_id,
>> &capset_version,
>> >> >+                                          &capset_size);
>> >> >+        CHECK(!result, cmd);
>> >> >+
>> >> >+        if (current_id == gc.capset_id) {
>> >> >+            break;
>> >> >+        }
>> >> >+    }
>> >> >+
>> >> >+    CHECK(i < vr->num_capsets, cmd);
>> >> >+
>> >> >+    resp = g_malloc0(sizeof(*resp) + capset_size);
>> >> >+    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>> >> >+    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>> >> >+                        resp->capset_data, capset_size);
>> >> >+
>> >> >+    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
>> >> capset_size);
>> >> >+    g_free(resp);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>> >> >+                                  struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int result;
>> >> >+    struct rutabaga_iovecs vecs = { 0 };
>> >> >+    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>> >> >+    struct virtio_gpu_resource_create_blob cblob;
>> >> >+    struct rutabaga_create_blob rc_blob = { 0 };
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(cblob);
>> >> >+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
>> cblob.size);
>> >> >+
>> >> >+    CHECK(cblob.resource_id != 0, cmd);
>> >> >+
>> >> >+    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >> >+
>> >> >+    res->resource_id = cblob.resource_id;
>> >> >+    res->blob_size = cblob.size;
>> >> >+
>> >> >+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> >> >+        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>> >> >+                                               sizeof(cblob), cmd,
>> >> &res->addrs,
>> >> >+                                               &res->iov,
>> &res->iov_cnt);
>> >> >+        CHECK(!result, cmd);
>> >> >+    }
>> >> >+
>> >> >+    rc_blob.blob_id = cblob.blob_id;
>> >> >+    rc_blob.blob_mem = cblob.blob_mem;
>> >> >+    rc_blob.blob_flags = cblob.blob_flags;
>> >> >+    rc_blob.size = cblob.size;
>> >> >+
>> >> >+    vecs.iovecs = res->iov;
>> >> >+    vecs.num_iovecs = res->iov_cnt;
>> >> >+
>> >> >+    result = rutabaga_resource_create_blob(vr->rutabaga,
>> >> cblob.hdr.ctx_id,
>> >> >+                                           cblob.resource_id,
>> &rc_blob,
>> >> &vecs,
>> >> >+                                           NULL);
>> >> >+
>> >> >+    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> >> >+        virtio_gpu_cleanup_mapping(g, res);
>> >> >+    }
>> >> >+
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >> >+    res = NULL;
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>> >> >+                               struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    uint32_t map_info = 0;
>> >> >+    uint32_t slot = 0;
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct rutabaga_mapping mapping = { 0 };
>> >> >+    struct virtio_gpu_resource_map_blob mblob;
>> >> >+    struct virtio_gpu_resp_map_info resp = { 0 };
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(mblob);
>> >> >+
>> >> >+    CHECK(mblob.resource_id != 0, cmd);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, mblob.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+
>> >> >+    result = rutabaga_resource_map_info(vr->rutabaga,
>> mblob.resource_id,
>> >> >+                                        &map_info);
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    /*
>> >> >+     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu
>> spec,
>> >> but do
>> >> >+     * exist to potentially allow the hypervisor to restrict write
>> >> access to
>> >> >+     * memory. QEMU does not need to use this functionality at the
>> >> moment.
>> >> >+     */
>> >> >+    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>> >> >+
>> >> >+    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
>> >> &mapping);
>> >> >+    CHECK(!result, cmd);
>> >> >+
>> >> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
>> >> >+        if (vr->memory_regions[slot].used) {
>> >> >+            continue;
>> >> >+        }
>> >> >+
>> >> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>> >> >+        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>> >> >+                                   mapping.ptr);
>> >> >+        memory_region_add_subregion(&g->parent_obj.hostmem,
>> >> >+                                    mblob.offset, mr);
>> >> >+        vr->memory_regions[slot].resource_id = mblob.resource_id;
>> >> >+        vr->memory_regions[slot].used = 1;
>> >> >+        break;
>> >> >+    }
>> >> >+
>> >> >+    if (slot >= MAX_SLOTS) {
>> >> >+        result = rutabaga_resource_unmap(vr->rutabaga,
>> >> mblob.resource_id);
>> >> >+        CHECK(!result, cmd);
>> >> >+    }
>> >> >+
>> >> >+    CHECK(slot < MAX_SLOTS, cmd);
>> >> >+
>> >> >+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>> >> >+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>> >> >+                                 struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    int32_t result;
>> >> >+    uint32_t slot = 0;
>> >> >+    struct virtio_gpu_simple_resource *res;
>> >> >+    struct virtio_gpu_resource_unmap_blob ublob;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(ublob);
>> >> >+
>> >> >+    CHECK(ublob.resource_id != 0, cmd);
>> >> >+
>> >> >+    res = virtio_gpu_find_resource(g, ublob.resource_id);
>> >> >+    CHECK(res, cmd);
>> >> >+
>> >> >+    for (slot = 0; slot < MAX_SLOTS; slot++) {
>> >> >+        if (vr->memory_regions[slot].resource_id !=
>> ublob.resource_id) {
>> >> >+            continue;
>> >> >+        }
>> >> >+
>> >> >+        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>> >> >+        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>> >> >+
>> >> >+        vr->memory_regions[slot].resource_id = 0;
>> >> >+        vr->memory_regions[slot].used = 0;
>> >> >+        break;
>> >> >+    }
>> >> >+
>> >> >+    CHECK(slot < MAX_SLOTS, cmd);
>> >> >+    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>> >> >+                                struct virtio_gpu_ctrl_command *cmd)
>> >> >+{
>> >> >+    struct rutabaga_fence fence = { 0 };
>> >> >+    int32_t result;
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>> >> >+
>> >> >+    switch (cmd->cmd_hdr.type) {
>> >> >+    case VIRTIO_GPU_CMD_CTX_CREATE:
>> >> >+        rutabaga_cmd_context_create(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_CTX_DESTROY:
>> >> >+        rutabaga_cmd_context_destroy(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>> >> >+        rutabaga_cmd_create_resource_2d(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>> >> >+        rutabaga_cmd_create_resource_3d(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_SUBMIT_3D:
>> >> >+        rutabaga_cmd_submit_3d(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>> >> >+        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>> >> >+        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>> >> >+        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>> >> >+        rutabaga_cmd_attach_backing(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>> >> >+        rutabaga_cmd_detach_backing(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_SET_SCANOUT:
>> >> >+        rutabaga_cmd_set_scanout(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>> >> >+        rutabaga_cmd_resource_flush(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>> >> >+        rutabaga_cmd_resource_unref(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>> >> >+        rutabaga_cmd_ctx_attach_resource(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>> >> >+        rutabaga_cmd_ctx_detach_resource(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>> >> >+        rutabaga_cmd_get_capset_info(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_GET_CAPSET:
>> >> >+        rutabaga_cmd_get_capset(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>> >> >+        virtio_gpu_get_display_info(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_GET_EDID:
>> >> >+        virtio_gpu_get_edid(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>> >> >+        rutabaga_cmd_resource_create_blob(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>> >> >+        rutabaga_cmd_resource_map_blob(g, cmd);
>> >> >+        break;
>> >> >+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>> >> >+        rutabaga_cmd_resource_unmap_blob(g, cmd);
>> >> >+        break;
>> >> >+    default:
>> >> >+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>> >> >+        break;
>> >> >+    }
>> >> >+
>> >> >+    if (cmd->finished) {
>> >> >+        return;
>> >> >+    }
>> >> >+    if (cmd->error) {
>> >> >+        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>> >> >+                     cmd->cmd_hdr.type, cmd->error);
>> >> >+        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>> >> >+        return;
>> >> >+    }
>> >> >+    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>> >> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
>> >> VIRTIO_GPU_RESP_OK_NODATA);
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    fence.flags = cmd->cmd_hdr.flags;
>> >> >+    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>> >> >+    fence.fence_id = cmd->cmd_hdr.fence_id;
>> >> >+    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>> >> >+
>> >> >+    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
>> >> cmd->cmd_hdr.type);
>> >> >+
>> >> >+    result = rutabaga_create_fence(vr->rutabaga, &fence);
>> >> >+    CHECK(!result, cmd);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+virtio_gpu_rutabaga_aio_cb(void *opaque)
>> >> >+{
>> >> >+    struct rutabaga_aio_data *data = opaque;
>> >> >+    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>> >> >+    struct rutabaga_fence fence_data = data->fence;
>> >> >+    struct virtio_gpu_ctrl_command *cmd, *tmp;
>> >> >+
>> >> >+    uint32_t signaled_ctx_specific = fence_data.flags &
>> >> >+                                     RUTABAGA_FLAG_INFO_RING_IDX;
>> >> >+
>> >> >+    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>> >> >+        /*
>> >> >+         * Due to context specific timelines.
>> >> >+         */
>> >> >+        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>> >> >+                                       RUTABAGA_FLAG_INFO_RING_IDX;
>> >> >+
>> >> >+        if (signaled_ctx_specific != target_ctx_specific) {
>> >> >+            continue;
>> >> >+        }
>> >> >+
>> >> >+        if (signaled_ctx_specific &&
>> >> >+           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>> >> >+            continue;
>> >> >+        }
>> >> >+
>> >> >+        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>> >> >+            continue;
>> >> >+        }
>> >> >+
>> >> >+        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>> >> >+        virtio_gpu_ctrl_response_nodata(g, cmd,
>> >> VIRTIO_GPU_RESP_OK_NODATA);
>> >> >+        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>> >> >+        g_free(cmd);
>> >> >+    }
>> >> >+
>> >> >+    g_free(data);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>> >> >+                             const struct rutabaga_fence *fence) {
>> >> >+    struct rutabaga_aio_data *data;
>> >> >+    VirtIOGPU *g = (VirtIOGPU *)user_data;
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    /*
>> >> >+     * gfxstream and both cross-domain (and even newer versions
>> >> virglrenderer:
>> >> >+     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
>> >> completion on
>> >> >+     * threads ("callback threads") that are different from the thread
>> >> that
>> >> >+     * processes the command queue ("main thread").
>> >> >+     *
>> >> >+     * crosvm and other virtio-gpu 1.1 implementations enable callback
>> >> threads
>> >> >+     * via locking.  However, on QEMU a deadlock is observed if
>> >> >+     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
>> callback]
>> >> is used
>> >> >+     * from a thread that is not the main thread.
>> >> >+     *
>> >> >+     * The reason is QEMU's internal locking is designed to work with
>> >> QEMU
>> >> >+     * threads (see rcu_register_thread()) and not generic C/C++/Rust
>> >> threads.
>> >> >+     * For now, we can workaround this by scheduling the return of the
>> >> >+     * fence descriptors on the main thread.
>> >> >+     */
>> >> >+
>> >> >+    data = g_new0(struct rutabaga_aio_data, 1);
>> >> >+    data->vr = vr;
>> >> >+    data->fence = *fence;
>> >> >+    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>> >> >+                            virtio_gpu_rutabaga_aio_cb,
>> >> >+                            data);
>> >> >+}
>> >> >+
>> >> >+static void
>> >> >+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>> >> >+                             const struct rutabaga_debug *debug) {
>> >> >+
>> >> >+    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>> >> >+        error_report("%s", debug->message);
>> >> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>> >> >+        warn_report("%s", debug->message);
>> >> >+    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>> >> >+        info_report("%s", debug->message);
>> >> >+    }
>> >> >+}
>> >> >+
>> >> >+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
>> >> >+{
>> >> >+    int result;
>> >> >+    uint64_t capset_mask;
>> >> >+    struct rutabaga_builder builder = { 0 };
>> >> >+    char wayland_socket_path[UNIX_PATH_MAX];
>> >> >+    struct rutabaga_channel channel = { 0 };
>> >> >+    struct rutabaga_channels channels = { 0 };
>> >> >+
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+    vr->rutabaga = NULL;
>> >> >+
>> >> >+    if (!vr->capset_names) {
>> >> >+        error_setg(errp, "a capset name from the virtio-gpu spec is
>> >> required");
>> >> >+        return false;
>> >> >+    }
>> >> >+
>> >> >+    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>> >> >+    /*
>> >> >+     * Currently, if WSI is specified, the only valid strings are
>> >> "surfaceless"
>> >> >+     * or "headless".  Surfaceless doesn't create a native window
>> >> surface, but
>> >> >+     * does copy from the render target to the Pixman buffer if a
>> >> virtio-gpu
>> >> >+     * 2D hypercall is issued.  Surfacless is the default.
>> >> >+     *
>> >> >+     * Headless is like surfaceless, but doesn't copy to the Pixman
>> >> buffer. The
>> >> >+     * use case is automated testing environments where there is no
>> need
>> >> to view
>> >> >+     * results.
>> >> >+     *
>> >> >+     * In the future, more performant virtio-gpu 2D UI integration may
>> >> be added.
>> >> >+     */
>> >> >+    if (vr->wsi) {
>> >> >+        if (g_str_equal(vr->wsi, "surfaceless")) {
>> >> >+            vr->headless = false;
>> >> >+        } else if (g_str_equal(vr->wsi, "headless")) {
>> >> >+            vr->headless = true;
>> >> >+        } else {
>> >> >+            error_setg(errp, "invalid wsi option selected");
>> >> >+            return false;
>> >> >+        }
>> >> >+    }
>> >> >+
>> >> >+    result = rutabaga_calculate_capset_mask(vr->capset_names,
>> >> &capset_mask);
>> >>
>> >> First, sorry for responding after such a long time. I've been busy with
>> >> work and I'm doing QEMU in my free time.
>> >>
>> >> In iteration 1 I've raised the topic on capset_names [1] and I haven't
>> >> seen it answered properly. Perhaps I need to rephrase a bit so here we
>> go:
>> >> capset_names seems to be colon-separated list of bit options managed by
>> >> rutabaga. This introduces yet another way of options handling. There
>> have
>> >> been talks about harmonizing options handling in QEMU since apparently
>> it
>> >> is considered too complex [2,3].
>> >
>> >
>> >> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
>> >> create "capset" from QOM properties?
>> >
>> >IIUC these flags could come from virtio_gpu.h which is already present in
>> >> the QEMU tree. This would not inly shortcut the dependency on rutabaga
>> here
>> >> but would also be more idiomatic QEMU (since it makes the options more
>> >> introspectable by internal machinery).
>> >
>> >
>> >> Of course the bitfield approach would require modifications in QEMU
>> >> whenever rutabaga gains new features. However, I figure that in the long
>> >> term rutabaga will be quite feature complete such that the benefits of
>> >> idiomatic QEMU handling will outweigh the decoupling of the projects.
>> >>
>> >> What do you think?
>> >>
>> >
>> >I think what you're suggesting is something like -device
>> >virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>> >gfxstream_vulkan + cross_domain]?
>>
>> I was thinking more along the lines of
>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>> gfxstream_vulkan and cross_domain are boolean QOM properties. This would
>> make for a human-readable format which follows QEMU style.
>>
>> >
>> >We actually did consider something like that when adding the
>> >--context-types flag [with crosvm], but there was a desire for a
>> >human-readable format rather than numbers [even if they are in the
>> >virtio-gpu spec].
>> >
>> >Additionally, there are quite a few context types that people are playing
>> >around with [gfxstream-gles, gfxstream-composer] that are launchable and
>> >aren't in the spec just yet.
>>
>> Right, QEMU had to be modified for this kind of experimentation. I
>> considered this in my last paragraph and figured that in the long run QEMU
>> *may* prefer more idiomatic option handling since it tries hard to not
>> break its command line interface. I'm just pointing this out -- the
>> decision is ultimately up to the community.
>>
>> Why not have dedicated QEMU development branches for experimentation?
>> Wouldn't upstreaming new features into QEMU be a good motivation to get the
>> missing pieces into the spec, once they are mature?
>
>
>> >
>> >Also, a key feature we want to explicitly **not** turn on all available
>> >context-types and let the user decide.
>>
>> How would you prevent that with the current colon-separated approach?
>> Splitting capset_mask in multiple parameters is just a different
>> syntactical representation of the same thing.
>>
>> > That'll allow guest Mesa in
>> >particular to do its magic in its loader.  So one may run Zink + ANV with
>> >ioctl forwarding, or Iris + ioctl forwarding and compare performance with
>> >the same guest image.
>> >
>> >And another thing is one needs some knowledge of the host system to choose
>> >the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
>> >MacOS.  So I think the task of choosing the right context type will fall
>> to
>> >projects that depend on QEMU (such as Android Emulator) which have some
>> >knowledge of the host environment.
>> >
>> >We actually have a graphics detector somewhere that calls VK/OpenGL before
>> >launching the VM and sets the right options.  Plan is to port into
>> >gfxstream, maybe we could use that.
>>
>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
>> conflicting combinations, no?
>>
>> >
>> >So given the desire for human readable formats, being portable across VMMs
>> >(crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
>> >conversion was put in the rutabaga API.  So I wouldn't change it for those
>> >reasons.
>>
>> What do you mean by being portable across VMMs?
>
>
>Having the API inside rutabaga is (mildly) useful when multiple VMMs have
>the need to translate from a human-readable format to flags digestible by
>rutabaga.
>
>https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
>
>https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
>
>https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
>
>For these crosvm/qemu launchers, I imagine capset names will be plumbed all
>the way through eventually (launch_cvd
>--gpu_context=gfxstream-vulkan:cross-domain if you've played around with
>Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
>around with Termina VMs).
>
>I think rust-vmm could also use the same API ("--capset_names") too.
>
>
>> Sure, QEMU had to be taught new flags before being able to use new
>> rutabaga features. I agree that this comes with a certain inconvenience.
>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
>> options parsing when there are efforts for harmonization.
>>
>> Did my comments shed new light into the discussion?
>
>
>Yes, they do.  I agree with you that both crosvm/qemu have too many flags,
>and having a stable command line interface is important.  We are aiming for
>stability with the `--capset_names={colon string}` command line, and at
>least for crosvm looking to deprecate older options [since we've never had
>an official release of crosvm yet].
>
>I do think:
>
>1) "capset_names=gfxstream-vulkan:cross-domain"
>2) "cross-domain=on,gfxstream-vulkan=on"
>
>are similar enough.  I would choose (1) for since I think not duplicating
>the [name] -> flag logic and having a similar interface across VMMs + VMM
>launchers is ever-so slightly useful.

I think we've now reached a good understanding of the issue. It's now up to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as the experts of the topic.

Best regards,
Bernhard

>
>
>> Thanks,
>> Bernhard
>>
>> >
>> >
>> >>
>> >> Best regards,
>> >> Bernhard
>> >>
>> >> [1]
>> >>
>> https://lore.kernel.org/qemu-devel/D15471EC-D1D1-4DAA-A6E7-19827C36AEC8@gmail.com/
>> >> [2] https://m.youtube.com/watch?v=gtpOLQgnwug
>> >> [3] https://m.youtube.com/watch?v=FMQtog6KUlo
>> >>
>> >> >+    if (result) {
>> >> >+        error_setg_errno(errp, -result, "invalid capset names: %s",
>> >> >+                         vr->capset_names);
>> >> >+        return false;
>> >> >+    }
>> >> >+
>> >> >+    builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
>> >> >+    builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
>> >> >+    builder.capset_mask = capset_mask;
>> >> >+    builder.user_data = (uint64_t)g;
>> >> >+
>> >> >+    /*
>> >> >+     * If the user doesn't specify the wayland socket path, we try to
>> >> infer
>> >> >+     * the socket via a process similar to the one used by libwayland.
>> >> >+     * libwayland does the following:
>> >> >+     *
>> >> >+     * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
>> >> >+     *    $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
>> >> >+     * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
>> >> >+     * 3) Otherwise, don't pass a wayland socket to rutabaga. If a
>> guest
>> >> >+     *    wayland proxy is launched, it will fail to work.
>> >> >+     */
>> >> >+    channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
>> >> >+    if (!vr->wayland_socket_path) {
>> >> >+        const char *runtime_dir = getenv("XDG_RUNTIME_DIR");
>> >> >+        const char *display = getenv("WAYLAND_DISPLAY");
>> >> >+        if (!display) {
>> >> >+            display = "wayland-0";
>> >> >+        }
>> >> >+
>> >> >+        if (runtime_dir) {
>> >> >+            result = snprintf(wayland_socket_path, UNIX_PATH_MAX,
>> >> >+                              "%s/%s", runtime_dir, display);
>> >> >+            if (result > 0 && result < UNIX_PATH_MAX) {
>> >> >+                channel.channel_name = wayland_socket_path;
>> >> >+            }
>> >> >+        }
>> >> >+    } else {
>> >> >+        channel.channel_name = vr->wayland_socket_path;
>> >> >+    }
>> >> >+
>> >> >+    if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
>> >> >+        if (channel.channel_name) {
>> >> >+            channels.channels = &channel;
>> >> >+            channels.num_channels = 1;
>> >> >+            builder.channels = &channels;
>> >> >+        }
>> >> >+    }
>> >> >+
>> >> >+    result = rutabaga_init(&builder, &vr->rutabaga);
>> >> >+    if (result) {
>> >> >+        error_setg_errno(errp, -result, "Failed to init rutabaga");
>> >> >+        return result;
>> >> >+    }
>> >> >+
>> >> >+    return true;
>> >> >+}
>> >> >+
>> >> >+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
>> >> >+{
>> >> >+    int result;
>> >> >+    uint32_t num_capsets;
>> >> >+    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >> >+
>> >> >+    result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
>> >> >+    if (result) {
>> >> >+        error_report("Failed to get capsets");
>> >> >+        return 0;
>> >> >+    }
>> >> >+    vr->num_capsets = num_capsets;
>> >> >+    return num_capsets;
>> >> >+}
>> >> >+
>> >> >+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev,
>> >> VirtQueue *vq)
>> >> >+{
>> >> >+    VirtIOGPU *g = VIRTIO_GPU(vdev);
>> >> >+    struct virtio_gpu_ctrl_command *cmd;
>> >> >+
>> >> >+    if (!virtio_queue_ready(vq)) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>> >> >+    while (cmd) {
>> >> >+        cmd->vq = vq;
>> >> >+        cmd->error = 0;
>> >> >+        cmd->finished = false;
>> >> >+        QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
>> >> >+        cmd = virtqueue_pop(vq, sizeof(struct
>> virtio_gpu_ctrl_command));
>> >> >+    }
>> >> >+
>> >> >+    virtio_gpu_process_cmdq(g);
>> >> >+}
>> >> >+
>> >> >+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error
>> **errp)
>> >> >+{
>> >> >+    int num_capsets;
>> >> >+    VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
>> >> >+    VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
>> >> >+
>> >> >+#if HOST_BIG_ENDIAN
>> >> >+    error_setg(errp, "rutabaga is not supported on bigendian
>> platforms");
>> >> >+    return;
>> >> >+#endif
>> >> >+
>> >> >+    if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
>> >> >+    if (!num_capsets) {
>> >> >+        return;
>> >> >+    }
>> >> >+
>> >> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
>> >> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
>> >> >+    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
>> >> >+
>> >> >+    bdev->virtio_config.num_capsets = num_capsets;
>> >> >+    virtio_gpu_device_realize(qdev, errp);
>> >> >+}
>> >> >+
>> >> >+static Property virtio_gpu_rutabaga_properties[] = {
>> >> >+    DEFINE_PROP_STRING("capset_names", VirtIOGPURutabaga,
>> capset_names),
>> >> >+    DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
>> >> >+                       wayland_socket_path),
>> >> >+    DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
>> >> >+    DEFINE_PROP_END_OF_LIST(),
>> >> >+};
>> >> >+
>> >> >+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void
>> >> *data)
>> >> >+{
>> >> >+    DeviceClass *dc = DEVICE_CLASS(klass);
>> >> >+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
>> >> >+    VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
>> >> >+    VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
>> >> >+
>> >> >+    vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
>> >> >+    vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
>> >> >+    vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
>> >> >+    vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
>> >> >+
>> >> >+    vdc->realize = virtio_gpu_rutabaga_realize;
>> >> >+    device_class_set_props(dc, virtio_gpu_rutabaga_properties);
>> >> >+}
>> >> >+
>> >> >+static const TypeInfo virtio_gpu_rutabaga_info = {
>> >> >+    .name = TYPE_VIRTIO_GPU_RUTABAGA,
>> >> >+    .parent = TYPE_VIRTIO_GPU,
>> >> >+    .instance_size = sizeof(VirtIOGPURutabaga),
>> >> >+    .class_init = virtio_gpu_rutabaga_class_init,
>> >> >+};
>> >> >+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
>> >> >+module_kconfig(VIRTIO_GPU);
>> >> >+
>> >> >+static void virtio_register_types(void)
>> >> >+{
>> >> >+    type_register_static(&virtio_gpu_rutabaga_info);
>> >> >+}
>> >> >+
>> >> >+type_init(virtio_register_types)
>> >> >+
>> >> >+module_dep("hw-display-virtio-gpu");
>> >> >diff --git a/hw/display/virtio-vga-rutabaga.c
>> >> b/hw/display/virtio-vga-rutabaga.c
>> >> >new file mode 100644
>> >> >index 0000000000..b5b43e3b90
>> >> >--- /dev/null
>> >> >+++ b/hw/display/virtio-vga-rutabaga.c
>> >> >@@ -0,0 +1,53 @@
>> >> >+/*
>> >> >+ * SPDX-License-Identifier: GPL-2.0-or-later
>> >> >+ */
>> >> >+
>> >> >+#include "qemu/osdep.h"
>> >> >+#include "hw/pci/pci.h"
>> >> >+#include "hw/qdev-properties.h"
>> >> >+#include "hw/virtio/virtio-gpu.h"
>> >> >+#include "hw/display/vga.h"
>> >> >+#include "qapi/error.h"
>> >> >+#include "qemu/module.h"
>> >> >+#include "virtio-vga.h"
>> >> >+#include "qom/object.h"
>> >> >+
>> >> >+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
>> >> >+
>> >> >+typedef struct VirtIOVGARutabaga VirtIOVGARutabaga;
>> >> >+DECLARE_INSTANCE_CHECKER(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA,
>> >> >+                         TYPE_VIRTIO_VGA_RUTABAGA)
>> >> >+
>> >> >+struct VirtIOVGARutabaga {
>> >> >+    VirtIOVGABase parent_obj;
>> >> >+    VirtIOGPURutabaga vdev;
>> >> >+};
>> >> >+
>> >> >+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
>> >> >+{
>> >> >+    VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
>> >> >+
>> >> >+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> >> >+                                TYPE_VIRTIO_GPU_RUTABAGA);
>> >> >+    VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> >> >+}
>> >> >+
>> >> >+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
>> >> >+    .generic_name  = TYPE_VIRTIO_VGA_RUTABAGA,
>> >> >+    .parent        = TYPE_VIRTIO_VGA_BASE,
>> >> >+    .instance_size = sizeof(VirtIOVGARutabaga),
>> >> >+    .instance_init = virtio_vga_rutabaga_inst_initfn,
>> >> >+};
>> >> >+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
>> >> >+module_kconfig(VIRTIO_VGA);
>> >> >+
>> >> >+static void virtio_vga_register_types(void)
>> >> >+{
>> >> >+    if (have_vga) {
>> >> >+        virtio_pci_types_register(&virtio_vga_rutabaga_info);
>> >> >+    }
>> >> >+}
>> >> >+
>> >> >+type_init(virtio_vga_register_types)
>> >> >+
>> >> >+module_dep("hw-display-virtio-vga");
>> >>
>>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-19 18:36           ` Bernhard Beschow
@ 2023-09-19 22:07             ` Akihiko Odaki
  2023-09-21 23:44               ` Gurchetan Singh
  2023-09-27 11:34             ` Thomas Huth
  1 sibling, 1 reply; 34+ messages in thread
From: Akihiko Odaki @ 2023-09-19 22:07 UTC (permalink / raw)
  To: Bernhard Beschow, Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, ray.huang, alex.bennee, hi,
	ernunes, manos.pitsidianakis, philmd, Markus Armbruster,
	Thomas Huth

On 2023/09/20 3:36, Bernhard Beschow wrote:
> 
> 
> Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com> wrote:
>>
>>>
>>>
>>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
>>> gurchetansingh@chromium.org>:
>>>> On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com>
>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
>>>>> gurchetansingh@chromium.org>:
>>>>>> This adds initial support for gfxstream and cross-domain.  Both
>>>>>> features rely on virtio-gpu blob resources and context types, which
>>>>>> are also implemented in this patch.
>>>>>>
>>>>>> gfxstream has a long and illustrious history in Android graphics
>>>>>> paravirtualization.  It has been powering graphics in the Android
>>>>>> Studio Emulator for more than a decade, which is the main developer
>>>>>> platform.
>>>>>>
>>>>>> Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>>>>>> The key design characteristic was a 1:1 threading model and
>>>>>> auto-generation, which fit nicely with the OpenGLES spec.  It also
>>>>>> allowed easy layering with ANGLE on the host, which provides the GLES
>>>>>> implementations on Windows or MacOS enviroments.
>>>>>>
>>>>>> gfxstream has traditionally been maintained by a single engineer, and
>>>>>> between 2015 to 2021, the goldfish throne passed to Frank Yang.
>>>>>> Historians often remark this glorious reign ("pax gfxstreama" is the
>>>>>> academic term) was comparable to that of Augustus and both Queen
>>>>>> Elizabeths.  Just to name a few accomplishments in a resplendent
>>>>>> panoply: higher versions of GLES, address space graphics, snapshot
>>>>>> support and CTS compliant Vulkan [b].
>>>>>>
>>>>>> One major drawback was the use of out-of-tree goldfish drivers.
>>>>>> Android engineers didn't know much about DRM/KMS and especially TTM so
>>>>>> a simple guest to host pipe was conceived.
>>>>>>
>>>>>> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>>>>>> the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
>>>>>> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>>>>>> It was a symbol compatible replacement of virglrenderer [c] and named
>>>>>> "AVDVirglrenderer".  This implementation forms the basis of the
>>>>>> current gfxstream host implementation still in use today.
>>>>>>
>>>>>> cross-domain support follows a similar arc.  Originally conceived by
>>>>>> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>>>>>> 2018, it initially relied on the downstream "virtio-wl" device.
>>>>>>
>>>>>> In 2020 and 2021, virtio-gpu was extended to include blob resources
>>>>>> and multiple timelines by yours truly, features gfxstream/cross-domain
>>>>>> both require to function correctly.
>>>>>>
>>>>>> Right now, we stand at the precipice of a truly fantastic possibility:
>>>>>> the Android Emulator powered by upstream QEMU and upstream Linux
>>>>>> kernel.  gfxstream will then be packaged properfully, and app
>>>>>> developers can even fix gfxstream bugs on their own if they encounter
>>>>>> them.
>>>>>>
>>>>>> It's been quite the ride, my friends.  Where will gfxstream head next,
>>>>>> nobody really knows.  I wouldn't be surprised if it's around for
>>>>>> another decade, maintained by a new generation of Android graphics
>>>>>> enthusiasts.
>>>>>>
>>>>>> Technical details:
>>>>>>   - Very simple initial display integration: just used Pixman
>>>>>>   - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>>>>>>     calls
>>>>>>
>>>>>> Next steps for Android VMs:
>>>>>>   - The next step would be improving display integration and UI
>>> interfaces
>>>>>>     with the goal of the QEMU upstream graphics being in an emulator
>>>>>>     release [d].
>>>>>>
>>>>>> Next steps for Linux VMs for display virtualization:
>>>>>>   - For widespread distribution, someone needs to package Sommelier or
>>> the
>>>>>>     wayland-proxy-virtwl [e] ideally into Debian main. In addition,
>>> newer
>>>>>>     versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
>>>>>>     which allows disabling KMS hypercalls.  If anyone cares enough,
>>> it'll
>>>>>>     probably be possible to build a custom VM variant that uses this
>>>>> display
>>>>>>     virtualization strategy.
>>>>>>
>>>>>> [a]
>>>>> https://android-review.googlesource.com/c/platform/development/+/34470
>>>>>> [b]
>>>>>
>>> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>>>>>> [c]
>>>>>
>>> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>>>>>> [d] https://developer.android.com/studio/releases/emulator
>>>>>> [e] https://github.com/talex5/wayland-proxy-virtwl
>>>>>>
>>>>>> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>>>>>> Tested-by: Alyssa Ross <hi@alyssa.is>
>>>>>> Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>>>>>> Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>>>>>> ---
>>>>>> v1: Incorported various suggestions by Akihiko Odaki and Bernard
>>> Berschow
>>>>>>     - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>>>>>>     - Used error_report(..)
>>>>>>     - Used g_autofree to fix leaks on error paths
>>>>>>     - Removed unnecessary casts
>>>>>>     - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>>>>>>
>>>>>> v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau
>>>>> and
>>>>>>     Bernard Berschow:
>>>>>>     - Parenthesis in CHECK macro
>>>>>>     - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>>>>>>     - delay until g->parent_obj.enable = 1
>>>>>>     - Additional cast fixes
>>>>>>     - initialize directly in virtio_gpu_rutabaga_realize(..)
>>>>>>     - add debug callback to hook into QEMU error's APIs
>>>>>>
>>>>>> v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>>>>>>     - Autodetect Wayland socket when not explicitly specified
>>>>>>     - Fix map_blob error paths
>>>>>>     - Add comment why we need both `res` and `resource` in create blob
>>>>>>     - Cast and whitespace fixes
>>>>>>     - Big endian check comes before virtio_gpu_rutabaga_init().
>>>>>>     - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>>>>>>
>>>>>> v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>>>>>>     - Double checked all casts
>>>>>>     - Remove unnecessary parenthesis
>>>>>>     - Removed `resource` in create_blob
>>>>>>     - Added comment about failure case
>>>>>>     - Pass user-provided socket as-is
>>>>>>     - Use stack variable rather than heap allocation
>>>>>>     - Future-proofed map info API to give access flags as well
>>>>>>
>>>>>> v5: Incorporated feedback from Akihiko Odaki:
>>>>>>     - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>>>>>>     - Simplify num_capsets check
>>>>>>     - Call cleanup mapping on error paths
>>>>>>     - uint64_t --> void* for rutabaga_map(..)
>>>>>>     - Removed unnecessary parenthesis
>>>>>>     - Removed unnecessary cast
>>>>>>     - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>>>>>>     - Reuse result variable
>>>>>>
>>>>>> v6: Incorporated feedback from Akihiko Odaki:
>>>>>>     - Remove unnecessary #ifndef
>>>>>>     - Disable scanout when appropriate
>>>>>>     - CHECK capset index within range outside loop
>>>>>>     - Add capset_version
>>>>>>
>>>>>> v7: Incorporated feedback from Akihiko Odaki:
>>>>>>     - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>>>>>>
>>>>>> v9: Incorportated feedback from Akihiko Odaki:
>>>>>>     - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>>>>>>     - Add error_setg(..) after rutabaga_init(..)
>>>>>>
>>>>>> v10: Incorportated feedback from Akihiko Odaki:
>>>>>>     - error_setg(..) --> error_setg_errno(..) when appropriate
>>>>>>     - virtio_gpu_rutabaga_init returns a bool instead of an int
>>>>>>
>>>>>> v11: Incorportated feedback from Philippe Mathieu-Daudé:
>>>>>>     - C-style /* */ comments and avoid // comments.
>>>>>>     - GPL-2.0 --> GPL-2.0-or-later
>>>>>>
>>>>>> hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>>>>>> hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
>>>>>> hw/display/virtio-vga-rutabaga.c     |   53 ++
>>>>>> 3 files changed, 1224 insertions(+)
>>>>>> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>>>>>> create mode 100644 hw/display/virtio-gpu-rutabaga.c
>>>>>> create mode 100644 hw/display/virtio-vga-rutabaga.c
>>>>>>
>>>>>> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
>>>>> b/hw/display/virtio-gpu-pci-rutabaga.c
>>>>>> new file mode 100644
>>>>>> index 0000000000..311eff308a
>>>>>> --- /dev/null
>>>>>> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
>>>>>> @@ -0,0 +1,50 @@
>>>>>> +/*
>>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>>>>>> + */
>>>>>> +
>>>>>> +#include "qemu/osdep.h"
>>>>>> +#include "qapi/error.h"
>>>>>> +#include "qemu/module.h"
>>>>>> +#include "hw/pci/pci.h"
>>>>>> +#include "hw/qdev-properties.h"
>>>>>> +#include "hw/virtio/virtio.h"
>>>>>> +#include "hw/virtio/virtio-bus.h"
>>>>>> +#include "hw/virtio/virtio-gpu-pci.h"
>>>>>> +#include "qom/object.h"
>>>>>> +
>>>>>> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>>>>>> +typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>>>>>> +DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
>>> VIRTIO_GPU_RUTABAGA_PCI,
>>>>>> +                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>>>>>> +
>>>>>> +struct VirtIOGPURutabagaPCI {
>>>>>> +    VirtIOGPUPCIBase parent_obj;
>>>>>> +    VirtIOGPURutabaga vdev;
>>>>>> +};
>>>>>> +
>>>>>> +static void virtio_gpu_rutabaga_initfn(Object *obj)
>>>>>> +{
>>>>>> +    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>>>>>> +
>>>>>> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>>>>>> +                                TYPE_VIRTIO_GPU_RUTABAGA);
>>>>>> +    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>>>>>> +}
>>>>>> +
>>>>>> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>>>>>> +    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>>>>>> +    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>>>>>> +    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>>>>>> +    .instance_init = virtio_gpu_rutabaga_initfn,
>>>>>> +};
>>>>>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>>>>>> +module_kconfig(VIRTIO_PCI);
>>>>>> +
>>>>>> +static void virtio_gpu_rutabaga_pci_register_types(void)
>>>>>> +{
>>>>>> +    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>>>>>> +}
>>>>>> +
>>>>>> +type_init(virtio_gpu_rutabaga_pci_register_types)
>>>>>> +
>>>>>> +module_dep("hw-display-virtio-gpu-pci");
>>>>>> diff --git a/hw/display/virtio-gpu-rutabaga.c
>>>>> b/hw/display/virtio-gpu-rutabaga.c
>>>>>> new file mode 100644
>>>>>> index 0000000000..9018e5a702
>>>>>> --- /dev/null
>>>>>> +++ b/hw/display/virtio-gpu-rutabaga.c
>>>>>> @@ -0,0 +1,1121 @@
>>>>>> +/*
>>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>>>>>> + */
>>>>>> +
>>>>>> +#include "qemu/osdep.h"
>>>>>> +#include "qapi/error.h"
>>>>>> +#include "qemu/error-report.h"
>>>>>> +#include "qemu/iov.h"
>>>>>> +#include "trace.h"
>>>>>> +#include "hw/virtio/virtio.h"
>>>>>> +#include "hw/virtio/virtio-gpu.h"
>>>>>> +#include "hw/virtio/virtio-gpu-pixman.h"
>>>>>> +#include "hw/virtio/virtio-iommu.h"
>>>>>> +
>>>>>> +#include <glib/gmem.h>
>>>>>> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>>>>>> +
>>>>>> +#define CHECK(condition, cmd)
>>>>>       \
>>>>>> +    do {
>>>>>        \
>>>>>> +        if (!(condition)) {
>>>>>       \
>>>>>> +            error_report("CHECK failed in %s() %s:" "%d", __func__,
>>>>>       \
>>>>>> +                         __FILE__, __LINE__);
>>>>>       \
>>>>>> +            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>>>>        \
>>>>>> +            return;
>>>>>       \
>>>>>> +       }
>>>>>        \
>>>>>> +    } while (0)
>>>>>> +
>>>>>> +/*
>>>>>> + * This is the size of the char array in struct sock_addr_un. No
>>> Wayland
>>>>> socket
>>>>>> + * can be created with a path longer than this, including the null
>>>>> terminator.
>>>>>> + */
>>>>>> +#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>>>>>> +
>>>>>> +struct rutabaga_aio_data {
>>>>>> +    struct VirtIOGPURutabaga *vr;
>>>>>> +    struct rutabaga_fence fence;
>>>>>> +};
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
>>>>> virtio_gpu_scanout *s,
>>>>>> +                                  uint32_t resource_id)
>>>>>> +{
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct iovec transfer_iovec;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, resource_id);
>>>>>> +    if (!res) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (res->width != s->current_cursor->width ||
>>>>>> +        res->height != s->current_cursor->height) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    transfer.x = 0;
>>>>>> +    transfer.y = 0;
>>>>>> +    transfer.z = 0;
>>>>>> +    transfer.w = res->width;
>>>>>> +    transfer.h = res->height;
>>>>>> +    transfer.d = 1;
>>>>>> +
>>>>>> +    transfer_iovec.iov_base = s->current_cursor->data;
>>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>>>>>> +
>>>>>> +    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>>>>>> +                                    resource_id, &transfer,
>>>>>> +                                    &transfer_iovec);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>>>>>> +{
>>>>>> +    VirtIOGPU *g = VIRTIO_GPU(b);
>>>>>> +    virtio_gpu_process_cmdq(g);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>>>>>> +                                struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_create_2d c2d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(c2d);
>>>>>> +    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>>>>>> +                                       c2d.width, c2d.height);
>>>>>> +
>>>>>> +    rc_3d.target = 2;
>>>>>> +    rc_3d.format = c2d.format;
>>>>>> +    rc_3d.bind = (1 << 1);
>>>>>> +    rc_3d.width = c2d.width;
>>>>>> +    rc_3d.height = c2d.height;
>>>>>> +    rc_3d.depth = 1;
>>>>>> +    rc_3d.array_size = 1;
>>>>>> +    rc_3d.last_level = 0;
>>>>>> +    rc_3d.nr_samples = 0;
>>>>>> +    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>>>>>> +
>>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>>> c2d.resource_id,
>>>>> &rc_3d);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>> +    res->width = c2d.width;
>>>>>> +    res->height = c2d.height;
>>>>>> +    res->format = c2d.format;
>>>>>> +    res->resource_id = c2d.resource_id;
>>>>>> +
>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>>>>>> +                                struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_create_3d c3d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(c3d);
>>>>>> +
>>>>>> +    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>>>>>> +                                       c3d.width, c3d.height,
>>> c3d.depth);
>>>>>> +
>>>>>> +    rc_3d.target = c3d.target;
>>>>>> +    rc_3d.format = c3d.format;
>>>>>> +    rc_3d.bind = c3d.bind;
>>>>>> +    rc_3d.width = c3d.width;
>>>>>> +    rc_3d.height = c3d.height;
>>>>>> +    rc_3d.depth = c3d.depth;
>>>>>> +    rc_3d.array_size = c3d.array_size;
>>>>>> +    rc_3d.last_level = c3d.last_level;
>>>>>> +    rc_3d.nr_samples = c3d.nr_samples;
>>>>>> +    rc_3d.flags = c3d.flags;
>>>>>> +
>>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>>> c3d.resource_id,
>>>>> &rc_3d);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>> +    res->width = c3d.width;
>>>>>> +    res->height = c3d.height;
>>>>>> +    res->format = c3d.format;
>>>>>> +    res->resource_id = c3d.resource_id;
>>>>>> +
>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
>>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_unref unref;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(unref);
>>>>>> +
>>>>>> +    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    if (res->image) {
>>>>>> +        pixman_image_unref(res->image);
>>>>>> +    }
>>>>>> +
>>>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
>>>>>> +    g_free(res);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_context_create(VirtIOGPU *g,
>>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_create cc;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cc);
>>>>>> +    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>>>>>> +                                    cc.debug_name);
>>>>>> +
>>>>>> +    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>>>>>> +                                     cc.context_init, cc.debug_name,
>>>>> cc.nlen);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
>>>>>> +                             struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_destroy cd;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cd);
>>>>>> +    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>>>>>> +
>>>>>> +    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
>>> virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    int32_t result, i;
>>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct iovec transfer_iovec;
>>>>>> +    struct virtio_gpu_resource_flush rf;
>>>>>> +    bool found = false;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +    if (vr->headless) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(rf);
>>>>>> +    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>>>>>> +                                   rf.r.width, rf.r.height, rf.r.x,
>>>>> rf.r.y);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, rf.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>>>>>> +        scanout = &g->parent_obj.scanout[i];
>>>>>> +        if (i == res->scanout_bitmask) {
>>>>>> +            found = true;
>>>>>> +            break;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    if (!found) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    transfer.x = 0;
>>>>>> +    transfer.y = 0;
>>>>>> +    transfer.z = 0;
>>>>>> +    transfer.w = res->width;
>>>>>> +    transfer.h = res->height;
>>>>>> +    transfer.d = 1;
>>>>>> +
>>>>>> +    transfer_iovec.iov_base = pixman_image_get_data(res->image);
>>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>>>>>> +                                             rf.resource_id,
>>> &transfer,
>>>>>> +                                             &transfer_iovec);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +    dpy_gfx_update_full(scanout->con);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>>>>>> +    struct virtio_gpu_set_scanout ss;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +    if (vr->headless) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(ss);
>>>>>> +    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>>>>>> +                                     ss.r.width, ss.r.height, ss.r.x,
>>>>> ss.r.y);
>>>>>> +
>>>>>> +    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>>>>>> +    scanout = &g->parent_obj.scanout[ss.scanout_id];
>>>>>> +
>>>>>> +    if (ss.resource_id == 0) {
>>>>>> +        dpy_gfx_replace_surface(scanout->con, NULL);
>>>>>> +        dpy_gl_scanout_disable(scanout->con);
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, ss.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    if (!res->image) {
>>>>>> +        pixman_format_code_t pformat;
>>>>>> +        pformat = virtio_gpu_get_pixman_format(res->format);
>>>>>> +        CHECK(pformat, cmd);
>>>>>> +
>>>>>> +        res->image = pixman_image_create_bits(pformat,
>>>>>> +                                              res->width,
>>>>>> +                                              res->height,
>>>>>> +                                              NULL, 0);
>>>>>> +        CHECK(res->image, cmd);
>>>>>> +        pixman_image_ref(res->image);
>>>>>> +    }
>>>>>> +
>>>>>> +    g->parent_obj.enable = 1;
>>>>>> +
>>>>>> +    /* realloc the surface ptr */
>>>>>> +    scanout->ds = qemu_create_displaysurface_pixman(res->image);
>>>>>> +    dpy_gfx_replace_surface(scanout->con, NULL);
>>>>>> +    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>>>>>> +    res->scanout_bitmask = ss.scanout_id;
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
>>>>>> +                       struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_cmd_submit cs;
>>>>>> +    struct rutabaga_command rutabaga_cmd = { 0 };
>>>>>> +    g_autofree uint8_t *buf = NULL;
>>>>>> +    size_t s;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cs);
>>>>>> +    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>>>>>> +
>>>>>> +    buf = g_new0(uint8_t, cs.size);
>>>>>> +    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>>>>>> +                   sizeof(cs), buf, cs.size);
>>>>>> +    CHECK(s == cs.size, cmd);
>>>>>> +
>>>>>> +    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>>>>>> +    rutabaga_cmd.cmd = buf;
>>>>>> +    rutabaga_cmd.cmd_size = cs.size;
>>>>>> +
>>>>>> +    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct virtio_gpu_transfer_to_host_2d t2d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(t2d);
>>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>>>>>> +
>>>>>> +    transfer.x = t2d.r.x;
>>>>>> +    transfer.y = t2d.r.y;
>>>>>> +    transfer.z = 0;
>>>>>> +    transfer.w = t2d.r.width;
>>>>>> +    transfer.h = t2d.r.height;
>>>>>> +    transfer.d = 1;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
>>>>> t2d.resource_id,
>>>>>> +                                              &transfer);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>>>>>> +
>>>>>> +    transfer.x = t3d.box.x;
>>>>>> +    transfer.y = t3d.box.y;
>>>>>> +    transfer.z = t3d.box.z;
>>>>>> +    transfer.w = t3d.box.w;
>>>>>> +    transfer.h = t3d.box.h;
>>>>>> +    transfer.d = t3d.box.d;
>>>>>> +    transfer.level = t3d.level;
>>>>>> +    transfer.stride = t3d.stride;
>>>>>> +    transfer.layer_stride = t3d.layer_stride;
>>>>>> +    transfer.offset = t3d.offset;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga,
>>>>> t3d.hdr.ctx_id,
>>>>>> +                                              t3d.resource_id,
>>>>> &transfer);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>>>>>> +                                   struct virtio_gpu_ctrl_command
>>> *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>>>>>> +    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>>>>>> +
>>>>>> +    transfer.x = t3d.box.x;
>>>>>> +    transfer.y = t3d.box.y;
>>>>>> +    transfer.z = t3d.box.z;
>>>>>> +    transfer.w = t3d.box.w;
>>>>>> +    transfer.h = t3d.box.h;
>>>>>> +    transfer.d = t3d.box.d;
>>>>>> +    transfer.level = t3d.level;
>>>>>> +    transfer.stride = t3d.stride;
>>>>>> +    transfer.layer_stride = t3d.layer_stride;
>>>>>> +    transfer.offset = t3d.offset;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga,
>>>>> t3d.hdr.ctx_id,
>>>>>> +                                             t3d.resource_id,
>>> &transfer,
>>>>> NULL);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
>>> virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_attach_backing att_rb;
>>>>>> +    int ret;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(att_rb);
>>>>>> +    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +    CHECK(!res->iov, cmd);
>>>>>> +
>>>>>> +    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
>>>>> sizeof(att_rb),
>>>>>> +                                        cmd, NULL, &res->iov,
>>>>> &res->iov_cnt);
>>>>>> +    CHECK(!ret, cmd);
>>>>>> +
>>>>>> +    vecs.iovecs = res->iov;
>>>>>> +    vecs.num_iovecs = res->iov_cnt;
>>>>>> +
>>>>>> +    ret = rutabaga_resource_attach_backing(vr->rutabaga,
>>>>> att_rb.resource_id,
>>>>>> +                                           &vecs);
>>>>>> +    if (ret != 0) {
>>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(!ret, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
>>> virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_detach_backing detach_rb;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(detach_rb);
>>>>>> +    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    rutabaga_resource_detach_backing(vr->rutabaga,
>>>>>> +                                     detach_rb.resource_id);
>>>>>> +
>>>>>> +    virtio_gpu_cleanup_mapping(g, res);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_resource att_res;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(att_res);
>>>>>> +    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>>>>>> +                                        att_res.resource_id);
>>>>>> +
>>>>>> +    result = rutabaga_context_attach_resource(vr->rutabaga,
>>>>> att_res.hdr.ctx_id,
>>>>>> +                                              att_res.resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_resource det_res;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(det_res);
>>>>>> +    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>>>>>> +                                        det_res.resource_id);
>>>>>> +
>>>>>> +    result = rutabaga_context_detach_resource(vr->rutabaga,
>>>>> det_res.hdr.ctx_id,
>>>>>> +                                              det_res.resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
>>>>> virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_get_capset_info info;
>>>>>> +    struct virtio_gpu_resp_capset_info resp;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(info);
>>>>>> +
>>>>>> +    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>>>>>> +                                      &resp.capset_id,
>>>>> &resp.capset_max_version,
>>>>>> +                                      &resp.capset_max_size);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_get_capset gc;
>>>>>> +    struct virtio_gpu_resp_capset *resp;
>>>>>> +    uint32_t capset_size, capset_version;
>>>>>> +    uint32_t current_id, i;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(gc);
>>>>>> +    for (i = 0; i < vr->num_capsets; i++) {
>>>>>> +        result = rutabaga_get_capset_info(vr->rutabaga, i,
>>>>>> +                                          &current_id,
>>> &capset_version,
>>>>>> +                                          &capset_size);
>>>>>> +        CHECK(!result, cmd);
>>>>>> +
>>>>>> +        if (current_id == gc.capset_id) {
>>>>>> +            break;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(i < vr->num_capsets, cmd);
>>>>>> +
>>>>>> +    resp = g_malloc0(sizeof(*resp) + capset_size);
>>>>>> +    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>>>>>> +    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>>>>>> +                        resp->capset_data, capset_size);
>>>>>> +
>>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
>>>>> capset_size);
>>>>>> +    g_free(resp);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>>>>>> +                                  struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int result;
>>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>>>>>> +    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>>>>>> +    struct virtio_gpu_resource_create_blob cblob;
>>>>>> +    struct rutabaga_create_blob rc_blob = { 0 };
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
>>>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
>>> cblob.size);
>>>>>> +
>>>>>> +    CHECK(cblob.resource_id != 0, cmd);
>>>>>> +
>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>> +
>>>>>> +    res->resource_id = cblob.resource_id;
>>>>>> +    res->blob_size = cblob.size;
>>>>>> +
>>>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>>>>>> +        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>>>>>> +                                               sizeof(cblob), cmd,
>>>>> &res->addrs,
>>>>>> +                                               &res->iov,
>>> &res->iov_cnt);
>>>>>> +        CHECK(!result, cmd);
>>>>>> +    }
>>>>>> +
>>>>>> +    rc_blob.blob_id = cblob.blob_id;
>>>>>> +    rc_blob.blob_mem = cblob.blob_mem;
>>>>>> +    rc_blob.blob_flags = cblob.blob_flags;
>>>>>> +    rc_blob.size = cblob.size;
>>>>>> +
>>>>>> +    vecs.iovecs = res->iov;
>>>>>> +    vecs.num_iovecs = res->iov_cnt;
>>>>>> +
>>>>>> +    result = rutabaga_resource_create_blob(vr->rutabaga,
>>>>> cblob.hdr.ctx_id,
>>>>>> +                                           cblob.resource_id,
>>> &rc_blob,
>>>>> &vecs,
>>>>>> +                                           NULL);
>>>>>> +
>>>>>> +    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>> +    res = NULL;
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>>>>>> +                               struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    uint32_t map_info = 0;
>>>>>> +    uint32_t slot = 0;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct rutabaga_mapping mapping = { 0 };
>>>>>> +    struct virtio_gpu_resource_map_blob mblob;
>>>>>> +    struct virtio_gpu_resp_map_info resp = { 0 };
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>>>>>> +
>>>>>> +    CHECK(mblob.resource_id != 0, cmd);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    result = rutabaga_resource_map_info(vr->rutabaga,
>>> mblob.resource_id,
>>>>>> +                                        &map_info);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    /*
>>>>>> +     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu
>>> spec,
>>>>> but do
>>>>>> +     * exist to potentially allow the hypervisor to restrict write
>>>>> access to
>>>>>> +     * memory. QEMU does not need to use this functionality at the
>>>>> moment.
>>>>>> +     */
>>>>>> +    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>>>>>> +
>>>>>> +    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
>>>>> &mapping);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>>>>>> +        if (vr->memory_regions[slot].used) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>>>>>> +        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>>>>>> +                                   mapping.ptr);
>>>>>> +        memory_region_add_subregion(&g->parent_obj.hostmem,
>>>>>> +                                    mblob.offset, mr);
>>>>>> +        vr->memory_regions[slot].resource_id = mblob.resource_id;
>>>>>> +        vr->memory_regions[slot].used = 1;
>>>>>> +        break;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (slot >= MAX_SLOTS) {
>>>>>> +        result = rutabaga_resource_unmap(vr->rutabaga,
>>>>> mblob.resource_id);
>>>>>> +        CHECK(!result, cmd);
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>>>>>> +
>>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    uint32_t slot = 0;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_unmap_blob ublob;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(ublob);
>>>>>> +
>>>>>> +    CHECK(ublob.resource_id != 0, cmd);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, ublob.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>>>>>> +        if (vr->memory_regions[slot].resource_id !=
>>> ublob.resource_id) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>>>>>> +        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>>>>>> +
>>>>>> +        vr->memory_regions[slot].resource_id = 0;
>>>>>> +        vr->memory_regions[slot].used = 0;
>>>>>> +        break;
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>>>>>> +    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>>>>>> +                                struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    struct rutabaga_fence fence = { 0 };
>>>>>> +    int32_t result;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>>>>>> +
>>>>>> +    switch (cmd->cmd_hdr.type) {
>>>>>> +    case VIRTIO_GPU_CMD_CTX_CREATE:
>>>>>> +        rutabaga_cmd_context_create(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_CTX_DESTROY:
>>>>>> +        rutabaga_cmd_context_destroy(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>>>>>> +        rutabaga_cmd_create_resource_2d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>>>>>> +        rutabaga_cmd_create_resource_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_SUBMIT_3D:
>>>>>> +        rutabaga_cmd_submit_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>>>>>> +        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>>>>>> +        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>>>>>> +        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>>>>>> +        rutabaga_cmd_attach_backing(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>>>>>> +        rutabaga_cmd_detach_backing(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_SET_SCANOUT:
>>>>>> +        rutabaga_cmd_set_scanout(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>>>>>> +        rutabaga_cmd_resource_flush(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>>>>>> +        rutabaga_cmd_resource_unref(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>>>>>> +        rutabaga_cmd_ctx_attach_resource(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>>>>>> +        rutabaga_cmd_ctx_detach_resource(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>>>>>> +        rutabaga_cmd_get_capset_info(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET:
>>>>>> +        rutabaga_cmd_get_capset(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>>>>>> +        virtio_gpu_get_display_info(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_EDID:
>>>>>> +        virtio_gpu_get_edid(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>>>>>> +        rutabaga_cmd_resource_create_blob(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>>>>>> +        rutabaga_cmd_resource_map_blob(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>>>>>> +        rutabaga_cmd_resource_unmap_blob(g, cmd);
>>>>>> +        break;
>>>>>> +    default:
>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>>>>> +        break;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (cmd->finished) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +    if (cmd->error) {
>>>>>> +        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>>>>>> +                     cmd->cmd_hdr.type, cmd->error);
>>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>>>>>> +        return;
>>>>>> +    }
>>>>>> +    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>>>>> VIRTIO_GPU_RESP_OK_NODATA);
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    fence.flags = cmd->cmd_hdr.flags;
>>>>>> +    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>>>>>> +    fence.fence_id = cmd->cmd_hdr.fence_id;
>>>>>> +    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>>>>>> +
>>>>>> +    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
>>>>> cmd->cmd_hdr.type);
>>>>>> +
>>>>>> +    result = rutabaga_create_fence(vr->rutabaga, &fence);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_aio_cb(void *opaque)
>>>>>> +{
>>>>>> +    struct rutabaga_aio_data *data = opaque;
>>>>>> +    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>>>>>> +    struct rutabaga_fence fence_data = data->fence;
>>>>>> +    struct virtio_gpu_ctrl_command *cmd, *tmp;
>>>>>> +
>>>>>> +    uint32_t signaled_ctx_specific = fence_data.flags &
>>>>>> +                                     RUTABAGA_FLAG_INFO_RING_IDX;
>>>>>> +
>>>>>> +    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>>>>>> +        /*
>>>>>> +         * Due to context specific timelines.
>>>>>> +         */
>>>>>> +        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>>>>>> +                                       RUTABAGA_FLAG_INFO_RING_IDX;
>>>>>> +
>>>>>> +        if (signaled_ctx_specific != target_ctx_specific) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        if (signaled_ctx_specific &&
>>>>>> +           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>>>>> VIRTIO_GPU_RESP_OK_NODATA);
>>>>>> +        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>>>>>> +        g_free(cmd);
>>>>>> +    }
>>>>>> +
>>>>>> +    g_free(data);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>>>>>> +                             const struct rutabaga_fence *fence) {
>>>>>> +    struct rutabaga_aio_data *data;
>>>>>> +    VirtIOGPU *g = (VirtIOGPU *)user_data;
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    /*
>>>>>> +     * gfxstream and both cross-domain (and even newer versions
>>>>> virglrenderer:
>>>>>> +     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
>>>>> completion on
>>>>>> +     * threads ("callback threads") that are different from the thread
>>>>> that
>>>>>> +     * processes the command queue ("main thread").
>>>>>> +     *
>>>>>> +     * crosvm and other virtio-gpu 1.1 implementations enable callback
>>>>> threads
>>>>>> +     * via locking.  However, on QEMU a deadlock is observed if
>>>>>> +     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
>>> callback]
>>>>> is used
>>>>>> +     * from a thread that is not the main thread.
>>>>>> +     *
>>>>>> +     * The reason is QEMU's internal locking is designed to work with
>>>>> QEMU
>>>>>> +     * threads (see rcu_register_thread()) and not generic C/C++/Rust
>>>>> threads.
>>>>>> +     * For now, we can workaround this by scheduling the return of the
>>>>>> +     * fence descriptors on the main thread.
>>>>>> +     */
>>>>>> +
>>>>>> +    data = g_new0(struct rutabaga_aio_data, 1);
>>>>>> +    data->vr = vr;
>>>>>> +    data->fence = *fence;
>>>>>> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>>>>>> +                            virtio_gpu_rutabaga_aio_cb,
>>>>>> +                            data);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>>>>>> +                             const struct rutabaga_debug *debug) {
>>>>>> +
>>>>>> +    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>>>>>> +        error_report("%s", debug->message);
>>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>>>>>> +        warn_report("%s", debug->message);
>>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>>>>>> +        info_report("%s", debug->message);
>>>>>> +    }
>>>>>> +}
>>>>>> +
>>>>>> +static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
>>>>>> +{
>>>>>> +    int result;
>>>>>> +    uint64_t capset_mask;
>>>>>> +    struct rutabaga_builder builder = { 0 };
>>>>>> +    char wayland_socket_path[UNIX_PATH_MAX];
>>>>>> +    struct rutabaga_channel channel = { 0 };
>>>>>> +    struct rutabaga_channels channels = { 0 };
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +    vr->rutabaga = NULL;
>>>>>> +
>>>>>> +    if (!vr->capset_names) {
>>>>>> +        error_setg(errp, "a capset name from the virtio-gpu spec is
>>>>> required");
>>>>>> +        return false;
>>>>>> +    }
>>>>>> +
>>>>>> +    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>>>>>> +    /*
>>>>>> +     * Currently, if WSI is specified, the only valid strings are
>>>>> "surfaceless"
>>>>>> +     * or "headless".  Surfaceless doesn't create a native window
>>>>> surface, but
>>>>>> +     * does copy from the render target to the Pixman buffer if a
>>>>> virtio-gpu
>>>>>> +     * 2D hypercall is issued.  Surfacless is the default.
>>>>>> +     *
>>>>>> +     * Headless is like surfaceless, but doesn't copy to the Pixman
>>>>> buffer. The
>>>>>> +     * use case is automated testing environments where there is no
>>> need
>>>>> to view
>>>>>> +     * results.
>>>>>> +     *
>>>>>> +     * In the future, more performant virtio-gpu 2D UI integration may
>>>>> be added.
>>>>>> +     */
>>>>>> +    if (vr->wsi) {
>>>>>> +        if (g_str_equal(vr->wsi, "surfaceless")) {
>>>>>> +            vr->headless = false;
>>>>>> +        } else if (g_str_equal(vr->wsi, "headless")) {
>>>>>> +            vr->headless = true;
>>>>>> +        } else {
>>>>>> +            error_setg(errp, "invalid wsi option selected");
>>>>>> +            return false;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    result = rutabaga_calculate_capset_mask(vr->capset_names,
>>>>> &capset_mask);
>>>>>
>>>>> First, sorry for responding after such a long time. I've been busy with
>>>>> work and I'm doing QEMU in my free time.
>>>>>
>>>>> In iteration 1 I've raised the topic on capset_names [1] and I haven't
>>>>> seen it answered properly. Perhaps I need to rephrase a bit so here we
>>> go:
>>>>> capset_names seems to be colon-separated list of bit options managed by
>>>>> rutabaga. This introduces yet another way of options handling. There
>>> have
>>>>> been talks about harmonizing options handling in QEMU since apparently
>>> it
>>>>> is considered too complex [2,3].
>>>>
>>>>
>>>>> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
>>>>> create "capset" from QOM properties?
>>>>
>>>> IIUC these flags could come from virtio_gpu.h which is already present in
>>>>> the QEMU tree. This would not inly shortcut the dependency on rutabaga
>>> here
>>>>> but would also be more idiomatic QEMU (since it makes the options more
>>>>> introspectable by internal machinery).
>>>>
>>>>
>>>>> Of course the bitfield approach would require modifications in QEMU
>>>>> whenever rutabaga gains new features. However, I figure that in the long
>>>>> term rutabaga will be quite feature complete such that the benefits of
>>>>> idiomatic QEMU handling will outweigh the decoupling of the projects.
>>>>>
>>>>> What do you think?
>>>>>
>>>>
>>>> I think what you're suggesting is something like -device
>>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>>>> gfxstream_vulkan + cross_domain]?
>>>
>>> I was thinking more along the lines of
>>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>>> gfxstream_vulkan and cross_domain are boolean QOM properties. This would
>>> make for a human-readable format which follows QEMU style.
>>>
>>>>
>>>> We actually did consider something like that when adding the
>>>> --context-types flag [with crosvm], but there was a desire for a
>>>> human-readable format rather than numbers [even if they are in the
>>>> virtio-gpu spec].
>>>>
>>>> Additionally, there are quite a few context types that people are playing
>>>> around with [gfxstream-gles, gfxstream-composer] that are launchable and
>>>> aren't in the spec just yet.
>>>
>>> Right, QEMU had to be modified for this kind of experimentation. I
>>> considered this in my last paragraph and figured that in the long run QEMU
>>> *may* prefer more idiomatic option handling since it tries hard to not
>>> break its command line interface. I'm just pointing this out -- the
>>> decision is ultimately up to the community.
>>>
>>> Why not have dedicated QEMU development branches for experimentation?
>>> Wouldn't upstreaming new features into QEMU be a good motivation to get the
>>> missing pieces into the spec, once they are mature?
>>
>>
>>>>
>>>> Also, a key feature we want to explicitly **not** turn on all available
>>>> context-types and let the user decide.
>>>
>>> How would you prevent that with the current colon-separated approach?
>>> Splitting capset_mask in multiple parameters is just a different
>>> syntactical representation of the same thing.
>>>
>>>> That'll allow guest Mesa in
>>>> particular to do its magic in its loader.  So one may run Zink + ANV with
>>>> ioctl forwarding, or Iris + ioctl forwarding and compare performance with
>>>> the same guest image.
>>>>
>>>> And another thing is one needs some knowledge of the host system to choose
>>>> the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
>>>> MacOS.  So I think the task of choosing the right context type will fall
>>> to
>>>> projects that depend on QEMU (such as Android Emulator) which have some
>>>> knowledge of the host environment.
>>>>
>>>> We actually have a graphics detector somewhere that calls VK/OpenGL before
>>>> launching the VM and sets the right options.  Plan is to port into
>>>> gfxstream, maybe we could use that.
>>>
>>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
>>> conflicting combinations, no?
>>>
>>>>
>>>> So given the desire for human readable formats, being portable across VMMs
>>>> (crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
>>>> conversion was put in the rutabaga API.  So I wouldn't change it for those
>>>> reasons.
>>>
>>> What do you mean by being portable across VMMs?
>>
>>
>> Having the API inside rutabaga is (mildly) useful when multiple VMMs have
>> the need to translate from a human-readable format to flags digestible by
>> rutabaga.
>>
>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
>>
>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
>>
>> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
>>
>> For these crosvm/qemu launchers, I imagine capset names will be plumbed all
>> the way through eventually (launch_cvd
>> --gpu_context=gfxstream-vulkan:cross-domain if you've played around with
>> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
>> around with Termina VMs).
>>
>> I think rust-vmm could also use the same API ("--capset_names") too.
>>
>>
>>> Sure, QEMU had to be taught new flags before being able to use new
>>> rutabaga features. I agree that this comes with a certain inconvenience.
>>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
>>> options parsing when there are efforts for harmonization.
>>>
>>> Did my comments shed new light into the discussion?
>>
>>
>> Yes, they do.  I agree with you that both crosvm/qemu have too many flags,
>> and having a stable command line interface is important.  We are aiming for
>> stability with the `--capset_names={colon string}` command line, and at
>> least for crosvm looking to deprecate older options [since we've never had
>> an official release of crosvm yet].
>>
>> I do think:
>>
>> 1) "capset_names=gfxstream-vulkan:cross-domain"
>> 2) "cross-domain=on,gfxstream-vulkan=on"
>>
>> are similar enough.  I would choose (1) for since I think not duplicating
>> the [name] -> flag logic and having a similar interface across VMMs + VMM
>> launchers is ever-so slightly useful.
> 
> I think we've now reached a good understanding of the issue. It's now up to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as the experts of the topic.

As a virtio-gpu user, I'm slightly inclined to (2) since it would be 
easier to implement the same option for virtio-gpu-virgl when a need arises.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-19 22:07             ` Akihiko Odaki
@ 2023-09-21 23:44               ` Gurchetan Singh
  2023-09-22  2:41                 ` Akihiko Odaki
  2023-09-29 15:06                 ` Bernhard Beschow
  0 siblings, 2 replies; 34+ messages in thread
From: Gurchetan Singh @ 2023-09-21 23:44 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Bernhard Beschow, qemu-devel, marcandre.lureau, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd,
	Markus Armbruster, Thomas Huth

[-- Attachment #1: Type: text/plain, Size: 62070 bytes --]

On Tue, Sep 19, 2023 at 3:07 PM Akihiko Odaki <akihiko.odaki@gmail.com>
wrote:

> On 2023/09/20 3:36, Bernhard Beschow wrote:
> >
> >
> > Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <
> gurchetansingh@chromium.org>:
> >> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com>
> wrote:
> >>
> >>>
> >>>
> >>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
> >>> gurchetansingh@chromium.org>:
> >>>> On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com>
> >>> wrote:
> >>>>
> >>>>>
> >>>>>
> >>>>> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
> >>>>> gurchetansingh@chromium.org>:
> >>>>>> This adds initial support for gfxstream and cross-domain.  Both
> >>>>>> features rely on virtio-gpu blob resources and context types, which
> >>>>>> are also implemented in this patch.
> >>>>>>
> >>>>>> gfxstream has a long and illustrious history in Android graphics
> >>>>>> paravirtualization.  It has been powering graphics in the Android
> >>>>>> Studio Emulator for more than a decade, which is the main developer
> >>>>>> platform.
> >>>>>>
> >>>>>> Originally conceived by Jesse Hall, it was first known as "EmuGL"
> [a].
> >>>>>> The key design characteristic was a 1:1 threading model and
> >>>>>> auto-generation, which fit nicely with the OpenGLES spec.  It also
> >>>>>> allowed easy layering with ANGLE on the host, which provides the
> GLES
> >>>>>> implementations on Windows or MacOS enviroments.
> >>>>>>
> >>>>>> gfxstream has traditionally been maintained by a single engineer,
> and
> >>>>>> between 2015 to 2021, the goldfish throne passed to Frank Yang.
> >>>>>> Historians often remark this glorious reign ("pax gfxstreama" is the
> >>>>>> academic term) was comparable to that of Augustus and both Queen
> >>>>>> Elizabeths.  Just to name a few accomplishments in a resplendent
> >>>>>> panoply: higher versions of GLES, address space graphics, snapshot
> >>>>>> support and CTS compliant Vulkan [b].
> >>>>>>
> >>>>>> One major drawback was the use of out-of-tree goldfish drivers.
> >>>>>> Android engineers didn't know much about DRM/KMS and especially TTM
> so
> >>>>>> a simple guest to host pipe was conceived.
> >>>>>>
> >>>>>> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
> >>>>>> the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
> >>>>>> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
> >>>>>> It was a symbol compatible replacement of virglrenderer [c] and
> named
> >>>>>> "AVDVirglrenderer".  This implementation forms the basis of the
> >>>>>> current gfxstream host implementation still in use today.
> >>>>>>
> >>>>>> cross-domain support follows a similar arc.  Originally conceived by
> >>>>>> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
> >>>>>> 2018, it initially relied on the downstream "virtio-wl" device.
> >>>>>>
> >>>>>> In 2020 and 2021, virtio-gpu was extended to include blob resources
> >>>>>> and multiple timelines by yours truly, features
> gfxstream/cross-domain
> >>>>>> both require to function correctly.
> >>>>>>
> >>>>>> Right now, we stand at the precipice of a truly fantastic
> possibility:
> >>>>>> the Android Emulator powered by upstream QEMU and upstream Linux
> >>>>>> kernel.  gfxstream will then be packaged properfully, and app
> >>>>>> developers can even fix gfxstream bugs on their own if they
> encounter
> >>>>>> them.
> >>>>>>
> >>>>>> It's been quite the ride, my friends.  Where will gfxstream head
> next,
> >>>>>> nobody really knows.  I wouldn't be surprised if it's around for
> >>>>>> another decade, maintained by a new generation of Android graphics
> >>>>>> enthusiasts.
> >>>>>>
> >>>>>> Technical details:
> >>>>>>   - Very simple initial display integration: just used Pixman
> >>>>>>   - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga
> function
> >>>>>>     calls
> >>>>>>
> >>>>>> Next steps for Android VMs:
> >>>>>>   - The next step would be improving display integration and UI
> >>> interfaces
> >>>>>>     with the goal of the QEMU upstream graphics being in an emulator
> >>>>>>     release [d].
> >>>>>>
> >>>>>> Next steps for Linux VMs for display virtualization:
> >>>>>>   - For widespread distribution, someone needs to package Sommelier
> or
> >>> the
> >>>>>>     wayland-proxy-virtwl [e] ideally into Debian main. In addition,
> >>> newer
> >>>>>>     versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS
> option,
> >>>>>>     which allows disabling KMS hypercalls.  If anyone cares enough,
> >>> it'll
> >>>>>>     probably be possible to build a custom VM variant that uses this
> >>>>> display
> >>>>>>     virtualization strategy.
> >>>>>>
> >>>>>> [a]
> >>>>>
> https://android-review.googlesource.com/c/platform/development/+/34470
> >>>>>> [b]
> >>>>>
> >>>
> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
> >>>>>> [c]
> >>>>>
> >>>
> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
> >>>>>> [d] https://developer.android.com/studio/releases/emulator
> >>>>>> [e] https://github.com/talex5/wayland-proxy-virtwl
> >>>>>>
> >>>>>> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> >>>>>> Tested-by: Alyssa Ross <hi@alyssa.is>
> >>>>>> Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
> >>>>>> Reviewed-by: Emmanouil Pitsidianakis <
> manos.pitsidianakis@linaro.org>
> >>>>>> ---
> >>>>>> v1: Incorported various suggestions by Akihiko Odaki and Bernard
> >>> Berschow
> >>>>>>     - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
> >>>>>>     - Used error_report(..)
> >>>>>>     - Used g_autofree to fix leaks on error paths
> >>>>>>     - Removed unnecessary casts
> >>>>>>     - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
> >>>>>>
> >>>>>> v2: Incorported various suggestions by Akihiko Odaki, Marc-André
> Lureau
> >>>>> and
> >>>>>>     Bernard Berschow:
> >>>>>>     - Parenthesis in CHECK macro
> >>>>>>     - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
> >>>>>>     - delay until g->parent_obj.enable = 1
> >>>>>>     - Additional cast fixes
> >>>>>>     - initialize directly in virtio_gpu_rutabaga_realize(..)
> >>>>>>     - add debug callback to hook into QEMU error's APIs
> >>>>>>
> >>>>>> v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
> >>>>>>     - Autodetect Wayland socket when not explicitly specified
> >>>>>>     - Fix map_blob error paths
> >>>>>>     - Add comment why we need both `res` and `resource` in create
> blob
> >>>>>>     - Cast and whitespace fixes
> >>>>>>     - Big endian check comes before virtio_gpu_rutabaga_init().
> >>>>>>     - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
> >>>>>>
> >>>>>> v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
> >>>>>>     - Double checked all casts
> >>>>>>     - Remove unnecessary parenthesis
> >>>>>>     - Removed `resource` in create_blob
> >>>>>>     - Added comment about failure case
> >>>>>>     - Pass user-provided socket as-is
> >>>>>>     - Use stack variable rather than heap allocation
> >>>>>>     - Future-proofed map info API to give access flags as well
> >>>>>>
> >>>>>> v5: Incorporated feedback from Akihiko Odaki:
> >>>>>>     - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
> >>>>>>     - Simplify num_capsets check
> >>>>>>     - Call cleanup mapping on error paths
> >>>>>>     - uint64_t --> void* for rutabaga_map(..)
> >>>>>>     - Removed unnecessary parenthesis
> >>>>>>     - Removed unnecessary cast
> >>>>>>     - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
> >>>>>>     - Reuse result variable
> >>>>>>
> >>>>>> v6: Incorporated feedback from Akihiko Odaki:
> >>>>>>     - Remove unnecessary #ifndef
> >>>>>>     - Disable scanout when appropriate
> >>>>>>     - CHECK capset index within range outside loop
> >>>>>>     - Add capset_version
> >>>>>>
> >>>>>> v7: Incorporated feedback from Akihiko Odaki:
> >>>>>>     - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
> >>>>>>
> >>>>>> v9: Incorportated feedback from Akihiko Odaki:
> >>>>>>     - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
> >>>>>>     - Add error_setg(..) after rutabaga_init(..)
> >>>>>>
> >>>>>> v10: Incorportated feedback from Akihiko Odaki:
> >>>>>>     - error_setg(..) --> error_setg_errno(..) when appropriate
> >>>>>>     - virtio_gpu_rutabaga_init returns a bool instead of an int
> >>>>>>
> >>>>>> v11: Incorportated feedback from Philippe Mathieu-Daudé:
> >>>>>>     - C-style /* */ comments and avoid // comments.
> >>>>>>     - GPL-2.0 --> GPL-2.0-or-later
> >>>>>>
> >>>>>> hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
> >>>>>> hw/display/virtio-gpu-rutabaga.c     | 1121
> ++++++++++++++++++++++++++
> >>>>>> hw/display/virtio-vga-rutabaga.c     |   53 ++
> >>>>>> 3 files changed, 1224 insertions(+)
> >>>>>> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> >>>>>> create mode 100644 hw/display/virtio-gpu-rutabaga.c
> >>>>>> create mode 100644 hw/display/virtio-vga-rutabaga.c
> >>>>>>
> >>>>>> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
> >>>>> b/hw/display/virtio-gpu-pci-rutabaga.c
> >>>>>> new file mode 100644
> >>>>>> index 0000000000..311eff308a
> >>>>>> --- /dev/null
> >>>>>> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
> >>>>>> @@ -0,0 +1,50 @@
> >>>>>> +/*
> >>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
> >>>>>> + */
> >>>>>> +
> >>>>>> +#include "qemu/osdep.h"
> >>>>>> +#include "qapi/error.h"
> >>>>>> +#include "qemu/module.h"
> >>>>>> +#include "hw/pci/pci.h"
> >>>>>> +#include "hw/qdev-properties.h"
> >>>>>> +#include "hw/virtio/virtio.h"
> >>>>>> +#include "hw/virtio/virtio-bus.h"
> >>>>>> +#include "hw/virtio/virtio-gpu-pci.h"
> >>>>>> +#include "qom/object.h"
> >>>>>> +
> >>>>>> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
> >>>>>> +typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
> >>>>>> +DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
> >>> VIRTIO_GPU_RUTABAGA_PCI,
> >>>>>> +                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
> >>>>>> +
> >>>>>> +struct VirtIOGPURutabagaPCI {
> >>>>>> +    VirtIOGPUPCIBase parent_obj;
> >>>>>> +    VirtIOGPURutabaga vdev;
> >>>>>> +};
> >>>>>> +
> >>>>>> +static void virtio_gpu_rutabaga_initfn(Object *obj)
> >>>>>> +{
> >>>>>> +    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
> >>>>>> +
> >>>>>> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> >>>>>> +                                TYPE_VIRTIO_GPU_RUTABAGA);
> >>>>>> +    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info
> = {
> >>>>>> +    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
> >>>>>> +    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
> >>>>>> +    .instance_size = sizeof(VirtIOGPURutabagaPCI),
> >>>>>> +    .instance_init = virtio_gpu_rutabaga_initfn,
> >>>>>> +};
> >>>>>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
> >>>>>> +module_kconfig(VIRTIO_PCI);
> >>>>>> +
> >>>>>> +static void virtio_gpu_rutabaga_pci_register_types(void)
> >>>>>> +{
> >>>>>> +    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
> >>>>>> +}
> >>>>>> +
> >>>>>> +type_init(virtio_gpu_rutabaga_pci_register_types)
> >>>>>> +
> >>>>>> +module_dep("hw-display-virtio-gpu-pci");
> >>>>>> diff --git a/hw/display/virtio-gpu-rutabaga.c
> >>>>> b/hw/display/virtio-gpu-rutabaga.c
> >>>>>> new file mode 100644
> >>>>>> index 0000000000..9018e5a702
> >>>>>> --- /dev/null
> >>>>>> +++ b/hw/display/virtio-gpu-rutabaga.c
> >>>>>> @@ -0,0 +1,1121 @@
> >>>>>> +/*
> >>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
> >>>>>> + */
> >>>>>> +
> >>>>>> +#include "qemu/osdep.h"
> >>>>>> +#include "qapi/error.h"
> >>>>>> +#include "qemu/error-report.h"
> >>>>>> +#include "qemu/iov.h"
> >>>>>> +#include "trace.h"
> >>>>>> +#include "hw/virtio/virtio.h"
> >>>>>> +#include "hw/virtio/virtio-gpu.h"
> >>>>>> +#include "hw/virtio/virtio-gpu-pixman.h"
> >>>>>> +#include "hw/virtio/virtio-iommu.h"
> >>>>>> +
> >>>>>> +#include <glib/gmem.h>
> >>>>>> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
> >>>>>> +
> >>>>>> +#define CHECK(condition, cmd)
> >>>>>       \
> >>>>>> +    do {
> >>>>>        \
> >>>>>> +        if (!(condition)) {
> >>>>>       \
> >>>>>> +            error_report("CHECK failed in %s() %s:" "%d", __func__,
> >>>>>       \
> >>>>>> +                         __FILE__, __LINE__);
> >>>>>       \
> >>>>>> +            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >>>>>        \
> >>>>>> +            return;
> >>>>>       \
> >>>>>> +       }
> >>>>>        \
> >>>>>> +    } while (0)
> >>>>>> +
> >>>>>> +/*
> >>>>>> + * This is the size of the char array in struct sock_addr_un. No
> >>> Wayland
> >>>>> socket
> >>>>>> + * can be created with a path longer than this, including the null
> >>>>> terminator.
> >>>>>> + */
> >>>>>> +#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
> >>>>>> +
> >>>>>> +struct rutabaga_aio_data {
> >>>>>> +    struct VirtIOGPURutabaga *vr;
> >>>>>> +    struct rutabaga_fence fence;
> >>>>>> +};
> >>>>>> +
> >>>>>> +static void
> >>>>>> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
> >>>>> virtio_gpu_scanout *s,
> >>>>>> +                                  uint32_t resource_id)
> >>>>>> +{
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
> >>>>>> +    struct iovec transfer_iovec;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, resource_id);
> >>>>>> +    if (!res) {
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    if (res->width != s->current_cursor->width ||
> >>>>>> +        res->height != s->current_cursor->height) {
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    transfer.x = 0;
> >>>>>> +    transfer.y = 0;
> >>>>>> +    transfer.z = 0;
> >>>>>> +    transfer.w = res->width;
> >>>>>> +    transfer.h = res->height;
> >>>>>> +    transfer.d = 1;
> >>>>>> +
> >>>>>> +    transfer_iovec.iov_base = s->current_cursor->data;
> >>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
> >>>>>> +
> >>>>>> +    rutabaga_resource_transfer_read(vr->rutabaga, 0,
> >>>>>> +                                    resource_id, &transfer,
> >>>>>> +                                    &transfer_iovec);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
> >>>>>> +{
> >>>>>> +    VirtIOGPU *g = VIRTIO_GPU(b);
> >>>>>> +    virtio_gpu_process_cmdq(g);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
> >>>>>> +                                struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_resource_create_2d c2d;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(c2d);
> >>>>>> +    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> >>>>>> +                                       c2d.width, c2d.height);
> >>>>>> +
> >>>>>> +    rc_3d.target = 2;
> >>>>>> +    rc_3d.format = c2d.format;
> >>>>>> +    rc_3d.bind = (1 << 1);
> >>>>>> +    rc_3d.width = c2d.width;
> >>>>>> +    rc_3d.height = c2d.height;
> >>>>>> +    rc_3d.depth = 1;
> >>>>>> +    rc_3d.array_size = 1;
> >>>>>> +    rc_3d.last_level = 0;
> >>>>>> +    rc_3d.nr_samples = 0;
> >>>>>> +    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
> >>> c2d.resource_id,
> >>>>> &rc_3d);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >>>>>> +    res->width = c2d.width;
> >>>>>> +    res->height = c2d.height;
> >>>>>> +    res->format = c2d.format;
> >>>>>> +    res->resource_id = c2d.resource_id;
> >>>>>> +
> >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
> >>>>>> +                                struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_resource_create_3d c3d;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(c3d);
> >>>>>> +
> >>>>>> +    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> >>>>>> +                                       c3d.width, c3d.height,
> >>> c3d.depth);
> >>>>>> +
> >>>>>> +    rc_3d.target = c3d.target;
> >>>>>> +    rc_3d.format = c3d.format;
> >>>>>> +    rc_3d.bind = c3d.bind;
> >>>>>> +    rc_3d.width = c3d.width;
> >>>>>> +    rc_3d.height = c3d.height;
> >>>>>> +    rc_3d.depth = c3d.depth;
> >>>>>> +    rc_3d.array_size = c3d.array_size;
> >>>>>> +    rc_3d.last_level = c3d.last_level;
> >>>>>> +    rc_3d.nr_samples = c3d.nr_samples;
> >>>>>> +    rc_3d.flags = c3d.flags;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
> >>> c3d.resource_id,
> >>>>> &rc_3d);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >>>>>> +    res->width = c3d.width;
> >>>>>> +    res->height = c3d.height;
> >>>>>> +    res->format = c3d.format;
> >>>>>> +    res->resource_id = c3d.resource_id;
> >>>>>> +
> >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
> >>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_resource_unref unref;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(unref);
> >>>>>> +
> >>>>>> +    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_unref(vr->rutabaga,
> unref.resource_id);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    if (res->image) {
> >>>>>> +        pixman_image_unref(res->image);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
> >>>>>> +    g_free(res);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_context_create(VirtIOGPU *g,
> >>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_ctx_create cc;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(cc);
> >>>>>> +    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
> >>>>>> +                                    cc.debug_name);
> >>>>>> +
> >>>>>> +    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
> >>>>>> +                                     cc.context_init,
> cc.debug_name,
> >>>>> cc.nlen);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
> >>>>>> +                             struct virtio_gpu_ctrl_command *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_ctx_destroy cd;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(cd);
> >>>>>> +    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
> >>>>>> +
> >>>>>> +    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
> >>> virtio_gpu_ctrl_command
> >>>>> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result, i;
> >>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
> >>>>>> +    struct iovec transfer_iovec;
> >>>>>> +    struct virtio_gpu_resource_flush rf;
> >>>>>> +    bool found = false;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +    if (vr->headless) {
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(rf);
> >>>>>> +    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
> >>>>>> +                                   rf.r.width, rf.r.height, rf.r.x,
> >>>>> rf.r.y);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, rf.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +
> >>>>>> +    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
> >>>>>> +        scanout = &g->parent_obj.scanout[i];
> >>>>>> +        if (i == res->scanout_bitmask) {
> >>>>>> +            found = true;
> >>>>>> +            break;
> >>>>>> +        }
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    if (!found) {
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    transfer.x = 0;
> >>>>>> +    transfer.y = 0;
> >>>>>> +    transfer.z = 0;
> >>>>>> +    transfer.w = res->width;
> >>>>>> +    transfer.h = res->height;
> >>>>>> +    transfer.d = 1;
> >>>>>> +
> >>>>>> +    transfer_iovec.iov_base = pixman_image_get_data(res->image);
> >>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
> >>>>>> +                                             rf.resource_id,
> >>> &transfer,
> >>>>>> +                                             &transfer_iovec);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +    dpy_gfx_update_full(scanout->con);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct
> virtio_gpu_ctrl_command
> >>>>> *cmd)
> >>>>>> +{
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
> >>>>>> +    struct virtio_gpu_set_scanout ss;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +    if (vr->headless) {
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(ss);
> >>>>>> +    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
> >>>>>> +                                     ss.r.width, ss.r.height,
> ss.r.x,
> >>>>> ss.r.y);
> >>>>>> +
> >>>>>> +    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
> >>>>>> +    scanout = &g->parent_obj.scanout[ss.scanout_id];
> >>>>>> +
> >>>>>> +    if (ss.resource_id == 0) {
> >>>>>> +        dpy_gfx_replace_surface(scanout->con, NULL);
> >>>>>> +        dpy_gl_scanout_disable(scanout->con);
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, ss.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +
> >>>>>> +    if (!res->image) {
> >>>>>> +        pixman_format_code_t pformat;
> >>>>>> +        pformat = virtio_gpu_get_pixman_format(res->format);
> >>>>>> +        CHECK(pformat, cmd);
> >>>>>> +
> >>>>>> +        res->image = pixman_image_create_bits(pformat,
> >>>>>> +                                              res->width,
> >>>>>> +                                              res->height,
> >>>>>> +                                              NULL, 0);
> >>>>>> +        CHECK(res->image, cmd);
> >>>>>> +        pixman_image_ref(res->image);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    g->parent_obj.enable = 1;
> >>>>>> +
> >>>>>> +    /* realloc the surface ptr */
> >>>>>> +    scanout->ds = qemu_create_displaysurface_pixman(res->image);
> >>>>>> +    dpy_gfx_replace_surface(scanout->con, NULL);
> >>>>>> +    dpy_gfx_replace_surface(scanout->con, scanout->ds);
> >>>>>> +    res->scanout_bitmask = ss.scanout_id;
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
> >>>>>> +                       struct virtio_gpu_ctrl_command *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_cmd_submit cs;
> >>>>>> +    struct rutabaga_command rutabaga_cmd = { 0 };
> >>>>>> +    g_autofree uint8_t *buf = NULL;
> >>>>>> +    size_t s;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(cs);
> >>>>>> +    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
> >>>>>> +
> >>>>>> +    buf = g_new0(uint8_t, cs.size);
> >>>>>> +    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
> >>>>>> +                   sizeof(cs), buf, cs.size);
> >>>>>> +    CHECK(s == cs.size, cmd);
> >>>>>> +
> >>>>>> +    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
> >>>>>> +    rutabaga_cmd.cmd = buf;
> >>>>>> +    rutabaga_cmd.cmd_size = cs.size;
> >>>>>> +
> >>>>>> +    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
> >>>>>> +                                 struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
> >>>>>> +    struct virtio_gpu_transfer_to_host_2d t2d;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(t2d);
> >>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
> >>>>>> +
> >>>>>> +    transfer.x = t2d.r.x;
> >>>>>> +    transfer.y = t2d.r.y;
> >>>>>> +    transfer.z = 0;
> >>>>>> +    transfer.w = t2d.r.width;
> >>>>>> +    transfer.h = t2d.r.height;
> >>>>>> +    transfer.d = 1;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
> >>>>> t2d.resource_id,
> >>>>>> +                                              &transfer);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
> >>>>>> +                                 struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
> >>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
> >>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
> >>>>>> +
> >>>>>> +    transfer.x = t3d.box.x;
> >>>>>> +    transfer.y = t3d.box.y;
> >>>>>> +    transfer.z = t3d.box.z;
> >>>>>> +    transfer.w = t3d.box.w;
> >>>>>> +    transfer.h = t3d.box.h;
> >>>>>> +    transfer.d = t3d.box.d;
> >>>>>> +    transfer.level = t3d.level;
> >>>>>> +    transfer.stride = t3d.stride;
> >>>>>> +    transfer.layer_stride = t3d.layer_stride;
> >>>>>> +    transfer.offset = t3d.offset;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga,
> >>>>> t3d.hdr.ctx_id,
> >>>>>> +                                              t3d.resource_id,
> >>>>> &transfer);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
> >>>>>> +                                   struct virtio_gpu_ctrl_command
> >>> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
> >>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
> >>>>>> +    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
> >>>>>> +
> >>>>>> +    transfer.x = t3d.box.x;
> >>>>>> +    transfer.y = t3d.box.y;
> >>>>>> +    transfer.z = t3d.box.z;
> >>>>>> +    transfer.w = t3d.box.w;
> >>>>>> +    transfer.h = t3d.box.h;
> >>>>>> +    transfer.d = t3d.box.d;
> >>>>>> +    transfer.level = t3d.level;
> >>>>>> +    transfer.stride = t3d.stride;
> >>>>>> +    transfer.layer_stride = t3d.layer_stride;
> >>>>>> +    transfer.offset = t3d.offset;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga,
> >>>>> t3d.hdr.ctx_id,
> >>>>>> +                                             t3d.resource_id,
> >>> &transfer,
> >>>>> NULL);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
> >>> virtio_gpu_ctrl_command
> >>>>> *cmd)
> >>>>>> +{
> >>>>>> +    struct rutabaga_iovecs vecs = { 0 };
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_resource_attach_backing att_rb;
> >>>>>> +    int ret;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(att_rb);
> >>>>>> +    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, att_rb.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +    CHECK(!res->iov, cmd);
> >>>>>> +
> >>>>>> +    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
> >>>>> sizeof(att_rb),
> >>>>>> +                                        cmd, NULL, &res->iov,
> >>>>> &res->iov_cnt);
> >>>>>> +    CHECK(!ret, cmd);
> >>>>>> +
> >>>>>> +    vecs.iovecs = res->iov;
> >>>>>> +    vecs.num_iovecs = res->iov_cnt;
> >>>>>> +
> >>>>>> +    ret = rutabaga_resource_attach_backing(vr->rutabaga,
> >>>>> att_rb.resource_id,
> >>>>>> +                                           &vecs);
> >>>>>> +    if (ret != 0) {
> >>>>>> +        virtio_gpu_cleanup_mapping(g, res);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    CHECK(!ret, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
> >>> virtio_gpu_ctrl_command
> >>>>> *cmd)
> >>>>>> +{
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_resource_detach_backing detach_rb;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(detach_rb);
> >>>>>> +    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +
> >>>>>> +    rutabaga_resource_detach_backing(vr->rutabaga,
> >>>>>> +                                     detach_rb.resource_id);
> >>>>>> +
> >>>>>> +    virtio_gpu_cleanup_mapping(g, res);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
> >>>>>> +                                 struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_ctx_resource att_res;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(att_res);
> >>>>>> +    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
> >>>>>> +                                        att_res.resource_id);
> >>>>>> +
> >>>>>> +    result = rutabaga_context_attach_resource(vr->rutabaga,
> >>>>> att_res.hdr.ctx_id,
> >>>>>> +                                              att_res.resource_id);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
> >>>>>> +                                 struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_ctx_resource det_res;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(det_res);
> >>>>>> +    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
> >>>>>> +                                        det_res.resource_id);
> >>>>>> +
> >>>>>> +    result = rutabaga_context_detach_resource(vr->rutabaga,
> >>>>> det_res.hdr.ctx_id,
> >>>>>> +                                              det_res.resource_id);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
> >>>>> virtio_gpu_ctrl_command *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_get_capset_info info;
> >>>>>> +    struct virtio_gpu_resp_capset_info resp;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(info);
> >>>>>> +
> >>>>>> +    result = rutabaga_get_capset_info(vr->rutabaga,
> info.capset_index,
> >>>>>> +                                      &resp.capset_id,
> >>>>> &resp.capset_max_version,
> >>>>>> +                                      &resp.capset_max_size);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
> >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct
> virtio_gpu_ctrl_command
> >>>>> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    struct virtio_gpu_get_capset gc;
> >>>>>> +    struct virtio_gpu_resp_capset *resp;
> >>>>>> +    uint32_t capset_size, capset_version;
> >>>>>> +    uint32_t current_id, i;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(gc);
> >>>>>> +    for (i = 0; i < vr->num_capsets; i++) {
> >>>>>> +        result = rutabaga_get_capset_info(vr->rutabaga, i,
> >>>>>> +                                          &current_id,
> >>> &capset_version,
> >>>>>> +                                          &capset_size);
> >>>>>> +        CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +        if (current_id == gc.capset_id) {
> >>>>>> +            break;
> >>>>>> +        }
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    CHECK(i < vr->num_capsets, cmd);
> >>>>>> +
> >>>>>> +    resp = g_malloc0(sizeof(*resp) + capset_size);
> >>>>>> +    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
> >>>>>> +    rutabaga_get_capset(vr->rutabaga, gc.capset_id,
> gc.capset_version,
> >>>>>> +                        resp->capset_data, capset_size);
> >>>>>> +
> >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
> >>>>> capset_size);
> >>>>>> +    g_free(resp);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
> >>>>>> +                                  struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int result;
> >>>>>> +    struct rutabaga_iovecs vecs = { 0 };
> >>>>>> +    g_autofree struct virtio_gpu_simple_resource *res = NULL;
> >>>>>> +    struct virtio_gpu_resource_create_blob cblob;
> >>>>>> +    struct rutabaga_create_blob rc_blob = { 0 };
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
> >>>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
> >>> cblob.size);
> >>>>>> +
> >>>>>> +    CHECK(cblob.resource_id != 0, cmd);
> >>>>>> +
> >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >>>>>> +
> >>>>>> +    res->resource_id = cblob.resource_id;
> >>>>>> +    res->blob_size = cblob.size;
> >>>>>> +
> >>>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >>>>>> +        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
> >>>>>> +                                               sizeof(cblob), cmd,
> >>>>> &res->addrs,
> >>>>>> +                                               &res->iov,
> >>> &res->iov_cnt);
> >>>>>> +        CHECK(!result, cmd);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    rc_blob.blob_id = cblob.blob_id;
> >>>>>> +    rc_blob.blob_mem = cblob.blob_mem;
> >>>>>> +    rc_blob.blob_flags = cblob.blob_flags;
> >>>>>> +    rc_blob.size = cblob.size;
> >>>>>> +
> >>>>>> +    vecs.iovecs = res->iov;
> >>>>>> +    vecs.num_iovecs = res->iov_cnt;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_create_blob(vr->rutabaga,
> >>>>> cblob.hdr.ctx_id,
> >>>>>> +                                           cblob.resource_id,
> >>> &rc_blob,
> >>>>> &vecs,
> >>>>>> +                                           NULL);
> >>>>>> +
> >>>>>> +    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >>>>>> +        virtio_gpu_cleanup_mapping(g, res);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >>>>>> +    res = NULL;
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
> >>>>>> +                               struct virtio_gpu_ctrl_command *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    uint32_t map_info = 0;
> >>>>>> +    uint32_t slot = 0;
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct rutabaga_mapping mapping = { 0 };
> >>>>>> +    struct virtio_gpu_resource_map_blob mblob;
> >>>>>> +    struct virtio_gpu_resp_map_info resp = { 0 };
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
> >>>>>> +
> >>>>>> +    CHECK(mblob.resource_id != 0, cmd);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_map_info(vr->rutabaga,
> >>> mblob.resource_id,
> >>>>>> +                                        &map_info);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    /*
> >>>>>> +     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu
> >>> spec,
> >>>>> but do
> >>>>>> +     * exist to potentially allow the hypervisor to restrict write
> >>>>> access to
> >>>>>> +     * memory. QEMU does not need to use this functionality at the
> >>>>> moment.
> >>>>>> +     */
> >>>>>> +    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
> >>>>>> +
> >>>>>> +    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
> >>>>> &mapping);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +
> >>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
> >>>>>> +        if (vr->memory_regions[slot].used) {
> >>>>>> +            continue;
> >>>>>> +        }
> >>>>>> +
> >>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
> >>>>>> +        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
> >>>>>> +                                   mapping.ptr);
> >>>>>> +        memory_region_add_subregion(&g->parent_obj.hostmem,
> >>>>>> +                                    mblob.offset, mr);
> >>>>>> +        vr->memory_regions[slot].resource_id = mblob.resource_id;
> >>>>>> +        vr->memory_regions[slot].used = 1;
> >>>>>> +        break;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    if (slot >= MAX_SLOTS) {
> >>>>>> +        result = rutabaga_resource_unmap(vr->rutabaga,
> >>>>> mblob.resource_id);
> >>>>>> +        CHECK(!result, cmd);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
> >>>>>> +
> >>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
> >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
> >>>>>> +                                 struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    int32_t result;
> >>>>>> +    uint32_t slot = 0;
> >>>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>> +    struct virtio_gpu_resource_unmap_blob ublob;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(ublob);
> >>>>>> +
> >>>>>> +    CHECK(ublob.resource_id != 0, cmd);
> >>>>>> +
> >>>>>> +    res = virtio_gpu_find_resource(g, ublob.resource_id);
> >>>>>> +    CHECK(res, cmd);
> >>>>>> +
> >>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
> >>>>>> +        if (vr->memory_regions[slot].resource_id !=
> >>> ublob.resource_id) {
> >>>>>> +            continue;
> >>>>>> +        }
> >>>>>> +
> >>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
> >>>>>> +        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
> >>>>>> +
> >>>>>> +        vr->memory_regions[slot].resource_id = 0;
> >>>>>> +        vr->memory_regions[slot].used = 0;
> >>>>>> +        break;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
> >>>>>> +    result = rutabaga_resource_unmap(vr->rutabaga,
> res->resource_id);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
> >>>>>> +                                struct virtio_gpu_ctrl_command
> *cmd)
> >>>>>> +{
> >>>>>> +    struct rutabaga_fence fence = { 0 };
> >>>>>> +    int32_t result;
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
> >>>>>> +
> >>>>>> +    switch (cmd->cmd_hdr.type) {
> >>>>>> +    case VIRTIO_GPU_CMD_CTX_CREATE:
> >>>>>> +        rutabaga_cmd_context_create(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_CTX_DESTROY:
> >>>>>> +        rutabaga_cmd_context_destroy(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
> >>>>>> +        rutabaga_cmd_create_resource_2d(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
> >>>>>> +        rutabaga_cmd_create_resource_3d(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_SUBMIT_3D:
> >>>>>> +        rutabaga_cmd_submit_3d(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
> >>>>>> +        rutabaga_cmd_transfer_to_host_2d(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
> >>>>>> +        rutabaga_cmd_transfer_to_host_3d(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
> >>>>>> +        rutabaga_cmd_transfer_from_host_3d(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
> >>>>>> +        rutabaga_cmd_attach_backing(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
> >>>>>> +        rutabaga_cmd_detach_backing(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_SET_SCANOUT:
> >>>>>> +        rutabaga_cmd_set_scanout(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
> >>>>>> +        rutabaga_cmd_resource_flush(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
> >>>>>> +        rutabaga_cmd_resource_unref(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
> >>>>>> +        rutabaga_cmd_ctx_attach_resource(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
> >>>>>> +        rutabaga_cmd_ctx_detach_resource(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
> >>>>>> +        rutabaga_cmd_get_capset_info(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET:
> >>>>>> +        rutabaga_cmd_get_capset(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
> >>>>>> +        virtio_gpu_get_display_info(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_GET_EDID:
> >>>>>> +        virtio_gpu_get_edid(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
> >>>>>> +        rutabaga_cmd_resource_create_blob(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
> >>>>>> +        rutabaga_cmd_resource_map_blob(g, cmd);
> >>>>>> +        break;
> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
> >>>>>> +        rutabaga_cmd_resource_unmap_blob(g, cmd);
> >>>>>> +        break;
> >>>>>> +    default:
> >>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >>>>>> +        break;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    if (cmd->finished) {
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +    if (cmd->error) {
> >>>>>> +        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
> >>>>>> +                     cmd->cmd_hdr.type, cmd->error);
> >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
> >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
> >>>>> VIRTIO_GPU_RESP_OK_NODATA);
> >>>>>> +        return;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    fence.flags = cmd->cmd_hdr.flags;
> >>>>>> +    fence.ctx_id = cmd->cmd_hdr.ctx_id;
> >>>>>> +    fence.fence_id = cmd->cmd_hdr.fence_id;
> >>>>>> +    fence.ring_idx = cmd->cmd_hdr.ring_idx;
> >>>>>> +
> >>>>>> +    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
> >>>>> cmd->cmd_hdr.type);
> >>>>>> +
> >>>>>> +    result = rutabaga_create_fence(vr->rutabaga, &fence);
> >>>>>> +    CHECK(!result, cmd);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +virtio_gpu_rutabaga_aio_cb(void *opaque)
> >>>>>> +{
> >>>>>> +    struct rutabaga_aio_data *data = opaque;
> >>>>>> +    VirtIOGPU *g = VIRTIO_GPU(data->vr);
> >>>>>> +    struct rutabaga_fence fence_data = data->fence;
> >>>>>> +    struct virtio_gpu_ctrl_command *cmd, *tmp;
> >>>>>> +
> >>>>>> +    uint32_t signaled_ctx_specific = fence_data.flags &
> >>>>>> +                                     RUTABAGA_FLAG_INFO_RING_IDX;
> >>>>>> +
> >>>>>> +    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
> >>>>>> +        /*
> >>>>>> +         * Due to context specific timelines.
> >>>>>> +         */
> >>>>>> +        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
> >>>>>> +                                       RUTABAGA_FLAG_INFO_RING_IDX;
> >>>>>> +
> >>>>>> +        if (signaled_ctx_specific != target_ctx_specific) {
> >>>>>> +            continue;
> >>>>>> +        }
> >>>>>> +
> >>>>>> +        if (signaled_ctx_specific &&
> >>>>>> +           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
> >>>>>> +            continue;
> >>>>>> +        }
> >>>>>> +
> >>>>>> +        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
> >>>>>> +            continue;
> >>>>>> +        }
> >>>>>> +
> >>>>>> +        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
> >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
> >>>>> VIRTIO_GPU_RESP_OK_NODATA);
> >>>>>> +        QTAILQ_REMOVE(&g->fenceq, cmd, next);
> >>>>>> +        g_free(cmd);
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    g_free(data);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> >>>>>> +                             const struct rutabaga_fence *fence) {
> >>>>>> +    struct rutabaga_aio_data *data;
> >>>>>> +    VirtIOGPU *g = (VirtIOGPU *)user_data;
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +
> >>>>>> +    /*
> >>>>>> +     * gfxstream and both cross-domain (and even newer versions
> >>>>> virglrenderer:
> >>>>>> +     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
> >>>>> completion on
> >>>>>> +     * threads ("callback threads") that are different from the
> thread
> >>>>> that
> >>>>>> +     * processes the command queue ("main thread").
> >>>>>> +     *
> >>>>>> +     * crosvm and other virtio-gpu 1.1 implementations enable
> callback
> >>>>> threads
> >>>>>> +     * via locking.  However, on QEMU a deadlock is observed if
> >>>>>> +     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
> >>> callback]
> >>>>> is used
> >>>>>> +     * from a thread that is not the main thread.
> >>>>>> +     *
> >>>>>> +     * The reason is QEMU's internal locking is designed to work
> with
> >>>>> QEMU
> >>>>>> +     * threads (see rcu_register_thread()) and not generic
> C/C++/Rust
> >>>>> threads.
> >>>>>> +     * For now, we can workaround this by scheduling the return of
> the
> >>>>>> +     * fence descriptors on the main thread.
> >>>>>> +     */
> >>>>>> +
> >>>>>> +    data = g_new0(struct rutabaga_aio_data, 1);
> >>>>>> +    data->vr = vr;
> >>>>>> +    data->fence = *fence;
> >>>>>> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
> >>>>>> +                            virtio_gpu_rutabaga_aio_cb,
> >>>>>> +                            data);
> >>>>>> +}
> >>>>>> +
> >>>>>> +static void
> >>>>>> +virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
> >>>>>> +                             const struct rutabaga_debug *debug) {
> >>>>>> +
> >>>>>> +    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
> >>>>>> +        error_report("%s", debug->message);
> >>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
> >>>>>> +        warn_report("%s", debug->message);
> >>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
> >>>>>> +        info_report("%s", debug->message);
> >>>>>> +    }
> >>>>>> +}
> >>>>>> +
> >>>>>> +static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
> >>>>>> +{
> >>>>>> +    int result;
> >>>>>> +    uint64_t capset_mask;
> >>>>>> +    struct rutabaga_builder builder = { 0 };
> >>>>>> +    char wayland_socket_path[UNIX_PATH_MAX];
> >>>>>> +    struct rutabaga_channel channel = { 0 };
> >>>>>> +    struct rutabaga_channels channels = { 0 };
> >>>>>> +
> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> >>>>>> +    vr->rutabaga = NULL;
> >>>>>> +
> >>>>>> +    if (!vr->capset_names) {
> >>>>>> +        error_setg(errp, "a capset name from the virtio-gpu spec is
> >>>>> required");
> >>>>>> +        return false;
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    builder.wsi = RUTABAGA_WSI_SURFACELESS;
> >>>>>> +    /*
> >>>>>> +     * Currently, if WSI is specified, the only valid strings are
> >>>>> "surfaceless"
> >>>>>> +     * or "headless".  Surfaceless doesn't create a native window
> >>>>> surface, but
> >>>>>> +     * does copy from the render target to the Pixman buffer if a
> >>>>> virtio-gpu
> >>>>>> +     * 2D hypercall is issued.  Surfacless is the default.
> >>>>>> +     *
> >>>>>> +     * Headless is like surfaceless, but doesn't copy to the Pixman
> >>>>> buffer. The
> >>>>>> +     * use case is automated testing environments where there is no
> >>> need
> >>>>> to view
> >>>>>> +     * results.
> >>>>>> +     *
> >>>>>> +     * In the future, more performant virtio-gpu 2D UI integration
> may
> >>>>> be added.
> >>>>>> +     */
> >>>>>> +    if (vr->wsi) {
> >>>>>> +        if (g_str_equal(vr->wsi, "surfaceless")) {
> >>>>>> +            vr->headless = false;
> >>>>>> +        } else if (g_str_equal(vr->wsi, "headless")) {
> >>>>>> +            vr->headless = true;
> >>>>>> +        } else {
> >>>>>> +            error_setg(errp, "invalid wsi option selected");
> >>>>>> +            return false;
> >>>>>> +        }
> >>>>>> +    }
> >>>>>> +
> >>>>>> +    result = rutabaga_calculate_capset_mask(vr->capset_names,
> >>>>> &capset_mask);
> >>>>>
> >>>>> First, sorry for responding after such a long time. I've been busy
> with
> >>>>> work and I'm doing QEMU in my free time.
> >>>>>
> >>>>> In iteration 1 I've raised the topic on capset_names [1] and I
> haven't
> >>>>> seen it answered properly. Perhaps I need to rephrase a bit so here
> we
> >>> go:
> >>>>> capset_names seems to be colon-separated list of bit options managed
> by
> >>>>> rutabaga. This introduces yet another way of options handling. There
> >>> have
> >>>>> been talks about harmonizing options handling in QEMU since
> apparently
> >>> it
> >>>>> is considered too complex [2,3].
> >>>>
> >>>>
> >>>>> Why not pass the "capset" as a bitfield like capset_mask and have
> QEMU
> >>>>> create "capset" from QOM properties?
> >>>>
> >>>> IIUC these flags could come from virtio_gpu.h which is already
> present in
> >>>>> the QEMU tree. This would not inly shortcut the dependency on
> rutabaga
> >>> here
> >>>>> but would also be more idiomatic QEMU (since it makes the options
> more
> >>>>> introspectable by internal machinery).
> >>>>
> >>>>
> >>>>> Of course the bitfield approach would require modifications in QEMU
> >>>>> whenever rutabaga gains new features. However, I figure that in the
> long
> >>>>> term rutabaga will be quite feature complete such that the benefits
> of
> >>>>> idiomatic QEMU handling will outweigh the decoupling of the projects.
> >>>>>
> >>>>> What do you think?
> >>>>>
> >>>>
> >>>> I think what you're suggesting is something like -device
> >>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
> >>>> gfxstream_vulkan + cross_domain]?
> >>>
> >>> I was thinking more along the lines of
> >>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
> >>> gfxstream_vulkan and cross_domain are boolean QOM properties. This
> would
> >>> make for a human-readable format which follows QEMU style.
> >>>
> >>>>
> >>>> We actually did consider something like that when adding the
> >>>> --context-types flag [with crosvm], but there was a desire for a
> >>>> human-readable format rather than numbers [even if they are in the
> >>>> virtio-gpu spec].
> >>>>
> >>>> Additionally, there are quite a few context types that people are
> playing
> >>>> around with [gfxstream-gles, gfxstream-composer] that are launchable
> and
> >>>> aren't in the spec just yet.
> >>>
> >>> Right, QEMU had to be modified for this kind of experimentation. I
> >>> considered this in my last paragraph and figured that in the long run
> QEMU
> >>> *may* prefer more idiomatic option handling since it tries hard to not
> >>> break its command line interface. I'm just pointing this out -- the
> >>> decision is ultimately up to the community.
> >>>
> >>> Why not have dedicated QEMU development branches for experimentation?
> >>> Wouldn't upstreaming new features into QEMU be a good motivation to
> get the
> >>> missing pieces into the spec, once they are mature?
> >>
> >>
> >>>>
> >>>> Also, a key feature we want to explicitly **not** turn on all
> available
> >>>> context-types and let the user decide.
> >>>
> >>> How would you prevent that with the current colon-separated approach?
> >>> Splitting capset_mask in multiple parameters is just a different
> >>> syntactical representation of the same thing.
> >>>
> >>>> That'll allow guest Mesa in
> >>>> particular to do its magic in its loader.  So one may run Zink + ANV
> with
> >>>> ioctl forwarding, or Iris + ioctl forwarding and compare performance
> with
> >>>> the same guest image.
> >>>>
> >>>> And another thing is one needs some knowledge of the host system to
> choose
> >>>> the right context type.  You wouldn't do Zink + ANV ioctl forwarding
> on
> >>>> MacOS.  So I think the task of choosing the right context type will
> fall
> >>> to
> >>>> projects that depend on QEMU (such as Android Emulator) which have
> some
> >>>> knowledge of the host environment.
> >>>>
> >>>> We actually have a graphics detector somewhere that calls VK/OpenGL
> before
> >>>> launching the VM and sets the right options.  Plan is to port into
> >>>> gfxstream, maybe we could use that.
> >>>
> >>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
> >>> conflicting combinations, no?
> >>>
> >>>>
> >>>> So given the desire for human readable formats, being portable across
> VMMs
> >>>> (crosvm, qemu, rust-vmm??) and experimentation, the string -> capset
> mask
> >>>> conversion was put in the rutabaga API.  So I wouldn't change it for
> those
> >>>> reasons.
> >>>
> >>> What do you mean by being portable across VMMs?
> >>
> >>
> >> Having the API inside rutabaga is (mildly) useful when multiple VMMs
> have
> >> the need to translate from a human-readable format to flags digestible
> by
> >> rutabaga.
> >>
> >>
> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
> >>
> >>
> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
> >>
> >>
> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
> >>
> >> For these crosvm/qemu launchers, I imagine capset names will be plumbed
> all
> >> the way through eventually (launch_cvd
> >> --gpu_context=gfxstream-vulkan:cross-domain if you've played around with
> >> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
> >> around with Termina VMs).
> >>
> >> I think rust-vmm could also use the same API ("--capset_names") too.
> >>
> >>
> >>> Sure, QEMU had to be taught new flags before being able to use new
> >>> rutabaga features. I agree that this comes with a certain
> inconvenience.
> >>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
> >>> options parsing when there are efforts for harmonization.
> >>>
> >>> Did my comments shed new light into the discussion?
> >>
> >>
> >> Yes, they do.  I agree with you that both crosvm/qemu have too many
> flags,
> >> and having a stable command line interface is important.  We are aiming
> for
> >> stability with the `--capset_names={colon string}` command line, and at
> >> least for crosvm looking to deprecate older options [since we've never
> had
> >> an official release of crosvm yet].
> >>
> >> I do think:
> >>
> >> 1) "capset_names=gfxstream-vulkan:cross-domain"
> >> 2) "cross-domain=on,gfxstream-vulkan=on"
> >>
> >> are similar enough.  I would choose (1) for since I think not
> duplicating
> >> the [name] -> flag logic and having a similar interface across VMMs +
> VMM
> >> launchers is ever-so slightly useful.
> >
> > I think we've now reached a good understanding of the issue. It's now up
> to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as
> the experts of the topic.
>
> As a virtio-gpu user, I'm slightly inclined to (2) since it would be
> easier to implement the same option for virtio-gpu-virgl when a need
> arises.
>

The rutabaga/virgl implementations will likely be done via DEFINE_PROP_BIT,
no?  For virgl, it'll set the virgl flags, and for rutabaga, it'll set the
capset mask.  So it would be different.

That said, the change isn't too bad to make.  Here's the key part:

+++ b/hw/display/virtio-gpu-rutabaga.c
@@ -1084,6 +1084,14 @@ static Property virtio_gpu_rutabaga_properties[] = {
     DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
                        wayland_socket_path),
     DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
+    DEFINE_PROP_BIT("gfxstream-vulkan-experimental", VirtIOGPURutabaga,
+                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_VULKAN, false),
+    DEFINE_PROP_BIT("cross-domain-experimental", VirtIOGPURutabaga,
+                    capset_mask, RUTABAGA_CAPSET_CROSS_DOMAIN, false),
+    DEFINE_PROP_BIT("gfxstream-gles-experimental", VirtIOGPURutabaga,
+                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_GLES, false),
+    DEFINE_PROP_BIT("gfxstream-composer-experimental", VirtIOGPURutabaga,
+                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_COMPOSER,
false),
     DEFINE_PROP_END_OF_LIST(),
 };

One thing though: I borrowed a page of the Mesa-3d playbook (since they
land non-working/experimental drivers to speed development) and named all
gfxstream/rutabaga_gfx context types as "experimental".  That'll allow us
to experiment in-tree.

If you closely follow:

https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg03319.html

you may notice packaging/distributing rutabaga/gfxstream is low-priority,
since I do have the Android emulator use case in mind and I'm not sure
anybody else will find production targets for it in QEMU.  I think this is
somewhat closely related to crosvm/qemu has "too many flags" situation.
Many times a flag/config is landed, and production target changes, and the
history is lost.  By prefixing everything as "experimental", we explicitly
make it clear that QEMU makes no guarantees at this time regarding
rutabaga.  That'll allow hobbyists who build QEMU from sources anyways (the
main non-Android users of rutabaga/gfxstream) to play around with it
upstream, and also allow downstream to follow upstream but not make any
guarantees until everything is ready.

I'm curious what everyone thinks of the plan.

[-- Attachment #2: Type: text/html, Size: 95295 bytes --]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-21 23:44               ` Gurchetan Singh
@ 2023-09-22  2:41                 ` Akihiko Odaki
  2023-09-29 15:06                 ` Bernhard Beschow
  1 sibling, 0 replies; 34+ messages in thread
From: Akihiko Odaki @ 2023-09-22  2:41 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: Bernhard Beschow, qemu-devel, marcandre.lureau, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd,
	Markus Armbruster, Thomas Huth

On 2023/09/22 8:44, Gurchetan Singh wrote:
> 
> 
> On Tue, Sep 19, 2023 at 3:07 PM Akihiko Odaki <akihiko.odaki@gmail.com 
> <mailto:akihiko.odaki@gmail.com>> wrote:
> 
>     On 2023/09/20 3:36, Bernhard Beschow wrote:
>      >
>      >
>      > Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh
>     <gurchetansingh@chromium.org <mailto:gurchetansingh@chromium.org>>:
>      >> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow
>     <shentey@gmail.com <mailto:shentey@gmail.com>> wrote:
>      >>
>      >>>
>      >>>
>      >>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
>      >>> gurchetansingh@chromium.org <mailto:gurchetansingh@chromium.org>>:
>      >>>> On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow
>     <shentey@gmail.com <mailto:shentey@gmail.com>>
>      >>> wrote:
>      >>>>
>      >>>>>
>      >>>>>
>      >>>>> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
>      >>>>> gurchetansingh@chromium.org
>     <mailto:gurchetansingh@chromium.org>>:
>      >>>>>> This adds initial support for gfxstream and cross-domain.  Both
>      >>>>>> features rely on virtio-gpu blob resources and context
>     types, which
>      >>>>>> are also implemented in this patch.
>      >>>>>>
>      >>>>>> gfxstream has a long and illustrious history in Android graphics
>      >>>>>> paravirtualization.  It has been powering graphics in the
>     Android
>      >>>>>> Studio Emulator for more than a decade, which is the main
>     developer
>      >>>>>> platform.
>      >>>>>>
>      >>>>>> Originally conceived by Jesse Hall, it was first known as
>     "EmuGL" [a].
>      >>>>>> The key design characteristic was a 1:1 threading model and
>      >>>>>> auto-generation, which fit nicely with the OpenGLES spec. 
>     It also
>      >>>>>> allowed easy layering with ANGLE on the host, which provides
>     the GLES
>      >>>>>> implementations on Windows or MacOS enviroments.
>      >>>>>>
>      >>>>>> gfxstream has traditionally been maintained by a single
>     engineer, and
>      >>>>>> between 2015 to 2021, the goldfish throne passed to Frank Yang.
>      >>>>>> Historians often remark this glorious reign ("pax
>     gfxstreama" is the
>      >>>>>> academic term) was comparable to that of Augustus and both Queen
>      >>>>>> Elizabeths.  Just to name a few accomplishments in a resplendent
>      >>>>>> panoply: higher versions of GLES, address space graphics,
>     snapshot
>      >>>>>> support and CTS compliant Vulkan [b].
>      >>>>>>
>      >>>>>> One major drawback was the use of out-of-tree goldfish drivers.
>      >>>>>> Android engineers didn't know much about DRM/KMS and
>     especially TTM so
>      >>>>>> a simple guest to host pipe was conceived.
>      >>>>>>
>      >>>>>> Luckily, virtio-gpu 3D started to emerge in 2016 due to the
>     work of
>      >>>>>> the Mesa/virglrenderer communities.  In 2018, the initial
>     virtio-gpu
>      >>>>>> port of gfxstream was done by Cuttlefish enthusiast Alistair
>     Delva.
>      >>>>>> It was a symbol compatible replacement of virglrenderer [c]
>     and named
>      >>>>>> "AVDVirglrenderer".  This implementation forms the basis of the
>      >>>>>> current gfxstream host implementation still in use today.
>      >>>>>>
>      >>>>>> cross-domain support follows a similar arc.  Originally
>     conceived by
>      >>>>>> Wayland aficionado David Reveman and crosvm enjoyer Zach
>     Reizner in
>      >>>>>> 2018, it initially relied on the downstream "virtio-wl" device.
>      >>>>>>
>      >>>>>> In 2020 and 2021, virtio-gpu was extended to include blob
>     resources
>      >>>>>> and multiple timelines by yours truly, features
>     gfxstream/cross-domain
>      >>>>>> both require to function correctly.
>      >>>>>>
>      >>>>>> Right now, we stand at the precipice of a truly fantastic
>     possibility:
>      >>>>>> the Android Emulator powered by upstream QEMU and upstream Linux
>      >>>>>> kernel.  gfxstream will then be packaged properfully, and app
>      >>>>>> developers can even fix gfxstream bugs on their own if they
>     encounter
>      >>>>>> them.
>      >>>>>>
>      >>>>>> It's been quite the ride, my friends.  Where will gfxstream
>     head next,
>      >>>>>> nobody really knows.  I wouldn't be surprised if it's around for
>      >>>>>> another decade, maintained by a new generation of Android
>     graphics
>      >>>>>> enthusiasts.
>      >>>>>>
>      >>>>>> Technical details:
>      >>>>>>   - Very simple initial display integration: just used Pixman
>      >>>>>>   - Largely, 1:1 mapping of virtio-gpu hypercalls to
>     rutabaga function
>      >>>>>>     calls
>      >>>>>>
>      >>>>>> Next steps for Android VMs:
>      >>>>>>   - The next step would be improving display integration and UI
>      >>> interfaces
>      >>>>>>     with the goal of the QEMU upstream graphics being in an
>     emulator
>      >>>>>>     release [d].
>      >>>>>>
>      >>>>>> Next steps for Linux VMs for display virtualization:
>      >>>>>>   - For widespread distribution, someone needs to package
>     Sommelier or
>      >>> the
>      >>>>>>     wayland-proxy-virtwl [e] ideally into Debian main. In
>     addition,
>      >>> newer
>      >>>>>>     versions of the Linux kernel come with
>     DRM_VIRTIO_GPU_KMS option,
>      >>>>>>     which allows disabling KMS hypercalls.  If anyone cares
>     enough,
>      >>> it'll
>      >>>>>>     probably be possible to build a custom VM variant that
>     uses this
>      >>>>> display
>      >>>>>>     virtualization strategy.
>      >>>>>>
>      >>>>>> [a]
>      >>>>>
>     https://android-review.googlesource.com/c/platform/development/+/34470 <https://android-review.googlesource.com/c/platform/development/+/34470>
>      >>>>>> [b]
>      >>>>>
>      >>>
>     https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22 <https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22>
>      >>>>>> [c]
>      >>>>>
>      >>>
>     https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927 <https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927>
>      >>>>>> [d] https://developer.android.com/studio/releases/emulator
>     <https://developer.android.com/studio/releases/emulator>
>      >>>>>> [e] https://github.com/talex5/wayland-proxy-virtwl
>     <https://github.com/talex5/wayland-proxy-virtwl>
>      >>>>>>
>      >>>>>> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org
>     <mailto:gurchetansingh@chromium.org>>
>      >>>>>> Tested-by: Alyssa Ross <hi@alyssa.is <mailto:hi@alyssa.is>>
>      >>>>>> Tested-by: Emmanouil Pitsidianakis
>     <manos.pitsidianakis@linaro.org <mailto:manos.pitsidianakis@linaro.org>>
>      >>>>>> Reviewed-by: Emmanouil Pitsidianakis
>     <manos.pitsidianakis@linaro.org <mailto:manos.pitsidianakis@linaro.org>>
>      >>>>>> ---
>      >>>>>> v1: Incorported various suggestions by Akihiko Odaki and Bernard
>      >>> Berschow
>      >>>>>>     - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>      >>>>>>     - Used error_report(..)
>      >>>>>>     - Used g_autofree to fix leaks on error paths
>      >>>>>>     - Removed unnecessary casts
>      >>>>>>     - added virtio-gpu-pci-rutabaga.c +
>     virtio-vga-rutabaga.c files
>      >>>>>>
>      >>>>>> v2: Incorported various suggestions by Akihiko Odaki,
>     Marc-André Lureau
>      >>>>> and
>      >>>>>>     Bernard Berschow:
>      >>>>>>     - Parenthesis in CHECK macro
>      >>>>>>     - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>      >>>>>>     - delay until g->parent_obj.enable = 1
>      >>>>>>     - Additional cast fixes
>      >>>>>>     - initialize directly in virtio_gpu_rutabaga_realize(..)
>      >>>>>>     - add debug callback to hook into QEMU error's APIs
>      >>>>>>
>      >>>>>> v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>      >>>>>>     - Autodetect Wayland socket when not explicitly specified
>      >>>>>>     - Fix map_blob error paths
>      >>>>>>     - Add comment why we need both `res` and `resource` in
>     create blob
>      >>>>>>     - Cast and whitespace fixes
>      >>>>>>     - Big endian check comes before virtio_gpu_rutabaga_init().
>      >>>>>>     - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>      >>>>>>
>      >>>>>> v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>      >>>>>>     - Double checked all casts
>      >>>>>>     - Remove unnecessary parenthesis
>      >>>>>>     - Removed `resource` in create_blob
>      >>>>>>     - Added comment about failure case
>      >>>>>>     - Pass user-provided socket as-is
>      >>>>>>     - Use stack variable rather than heap allocation
>      >>>>>>     - Future-proofed map info API to give access flags as well
>      >>>>>>
>      >>>>>> v5: Incorporated feedback from Akihiko Odaki:
>      >>>>>>     - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>      >>>>>>     - Simplify num_capsets check
>      >>>>>>     - Call cleanup mapping on error paths
>      >>>>>>     - uint64_t --> void* for rutabaga_map(..)
>      >>>>>>     - Removed unnecessary parenthesis
>      >>>>>>     - Removed unnecessary cast
>      >>>>>>     - #define UNIX_PATH_MAX sizeof((struct sockaddr_un)
>     {}.sun_path)
>      >>>>>>     - Reuse result variable
>      >>>>>>
>      >>>>>> v6: Incorporated feedback from Akihiko Odaki:
>      >>>>>>     - Remove unnecessary #ifndef
>      >>>>>>     - Disable scanout when appropriate
>      >>>>>>     - CHECK capset index within range outside loop
>      >>>>>>     - Add capset_version
>      >>>>>>
>      >>>>>> v7: Incorporated feedback from Akihiko Odaki:
>      >>>>>>     - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>      >>>>>>
>      >>>>>> v9: Incorportated feedback from Akihiko Odaki:
>      >>>>>>     - Remove extra error_setg(..) after
>     virtio_gpu_rutabaga_init(..)
>      >>>>>>     - Add error_setg(..) after rutabaga_init(..)
>      >>>>>>
>      >>>>>> v10: Incorportated feedback from Akihiko Odaki:
>      >>>>>>     - error_setg(..) --> error_setg_errno(..) when appropriate
>      >>>>>>     - virtio_gpu_rutabaga_init returns a bool instead of an int
>      >>>>>>
>      >>>>>> v11: Incorportated feedback from Philippe Mathieu-Daudé:
>      >>>>>>     - C-style /* */ comments and avoid // comments.
>      >>>>>>     - GPL-2.0 --> GPL-2.0-or-later
>      >>>>>>
>      >>>>>> hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>      >>>>>> hw/display/virtio-gpu-rutabaga.c     | 1121
>     ++++++++++++++++++++++++++
>      >>>>>> hw/display/virtio-vga-rutabaga.c     |   53 ++
>      >>>>>> 3 files changed, 1224 insertions(+)
>      >>>>>> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>      >>>>>> create mode 100644 hw/display/virtio-gpu-rutabaga.c
>      >>>>>> create mode 100644 hw/display/virtio-vga-rutabaga.c
>      >>>>>>
>      >>>>>> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
>      >>>>> b/hw/display/virtio-gpu-pci-rutabaga.c
>      >>>>>> new file mode 100644
>      >>>>>> index 0000000000..311eff308a
>      >>>>>> --- /dev/null
>      >>>>>> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
>      >>>>>> @@ -0,0 +1,50 @@
>      >>>>>> +/*
>      >>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>      >>>>>> + */
>      >>>>>> +
>      >>>>>> +#include "qemu/osdep.h"
>      >>>>>> +#include "qapi/error.h"
>      >>>>>> +#include "qemu/module.h"
>      >>>>>> +#include "hw/pci/pci.h"
>      >>>>>> +#include "hw/qdev-properties.h"
>      >>>>>> +#include "hw/virtio/virtio.h"
>      >>>>>> +#include "hw/virtio/virtio-bus.h"
>      >>>>>> +#include "hw/virtio/virtio-gpu-pci.h"
>      >>>>>> +#include "qom/object.h"
>      >>>>>> +
>      >>>>>> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>      >>>>>> +typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>      >>>>>> +DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
>      >>> VIRTIO_GPU_RUTABAGA_PCI,
>      >>>>>> +                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>      >>>>>> +
>      >>>>>> +struct VirtIOGPURutabagaPCI {
>      >>>>>> +    VirtIOGPUPCIBase parent_obj;
>      >>>>>> +    VirtIOGPURutabaga vdev;
>      >>>>>> +};
>      >>>>>> +
>      >>>>>> +static void virtio_gpu_rutabaga_initfn(Object *obj)
>      >>>>>> +{
>      >>>>>> +    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>      >>>>>> +
>      >>>>>> +    virtio_instance_init_common(obj, &dev->vdev,
>     sizeof(dev->vdev),
>      >>>>>> +                                TYPE_VIRTIO_GPU_RUTABAGA);
>      >>>>>> +    VIRTIO_GPU_PCI_BASE(obj)->vgpu =
>     VIRTIO_GPU_BASE(&dev->vdev);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static const VirtioPCIDeviceTypeInfo
>     virtio_gpu_rutabaga_pci_info = {
>      >>>>>> +    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>      >>>>>> +    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>      >>>>>> +    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>      >>>>>> +    .instance_init = virtio_gpu_rutabaga_initfn,
>      >>>>>> +};
>      >>>>>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>      >>>>>> +module_kconfig(VIRTIO_PCI);
>      >>>>>> +
>      >>>>>> +static void virtio_gpu_rutabaga_pci_register_types(void)
>      >>>>>> +{
>      >>>>>> +    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +type_init(virtio_gpu_rutabaga_pci_register_types)
>      >>>>>> +
>      >>>>>> +module_dep("hw-display-virtio-gpu-pci");
>      >>>>>> diff --git a/hw/display/virtio-gpu-rutabaga.c
>      >>>>> b/hw/display/virtio-gpu-rutabaga.c
>      >>>>>> new file mode 100644
>      >>>>>> index 0000000000..9018e5a702
>      >>>>>> --- /dev/null
>      >>>>>> +++ b/hw/display/virtio-gpu-rutabaga.c
>      >>>>>> @@ -0,0 +1,1121 @@
>      >>>>>> +/*
>      >>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>      >>>>>> + */
>      >>>>>> +
>      >>>>>> +#include "qemu/osdep.h"
>      >>>>>> +#include "qapi/error.h"
>      >>>>>> +#include "qemu/error-report.h"
>      >>>>>> +#include "qemu/iov.h"
>      >>>>>> +#include "trace.h"
>      >>>>>> +#include "hw/virtio/virtio.h"
>      >>>>>> +#include "hw/virtio/virtio-gpu.h"
>      >>>>>> +#include "hw/virtio/virtio-gpu-pixman.h"
>      >>>>>> +#include "hw/virtio/virtio-iommu.h"
>      >>>>>> +
>      >>>>>> +#include <glib/gmem.h>
>      >>>>>> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>      >>>>>> +
>      >>>>>> +#define CHECK(condition, cmd)
>      >>>>>       \
>      >>>>>> +    do {
>      >>>>>        \
>      >>>>>> +        if (!(condition)) {
>      >>>>>       \
>      >>>>>> +            error_report("CHECK failed in %s() %s:" "%d",
>     __func__,
>      >>>>>       \
>      >>>>>> +                         __FILE__, __LINE__);
>      >>>>>       \
>      >>>>>> +            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>      >>>>>        \
>      >>>>>> +            return;
>      >>>>>       \
>      >>>>>> +       }
>      >>>>>        \
>      >>>>>> +    } while (0)
>      >>>>>> +
>      >>>>>> +/*
>      >>>>>> + * This is the size of the char array in struct
>     sock_addr_un. No
>      >>> Wayland
>      >>>>> socket
>      >>>>>> + * can be created with a path longer than this, including
>     the null
>      >>>>> terminator.
>      >>>>>> + */
>      >>>>>> +#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>      >>>>>> +
>      >>>>>> +struct rutabaga_aio_data {
>      >>>>>> +    struct VirtIOGPURutabaga *vr;
>      >>>>>> +    struct rutabaga_fence fence;
>      >>>>>> +};
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
>      >>>>> virtio_gpu_scanout *s,
>      >>>>>> +                                  uint32_t resource_id)
>      >>>>>> +{
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>      >>>>>> +    struct iovec transfer_iovec;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, resource_id);
>      >>>>>> +    if (!res) {
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    if (res->width != s->current_cursor->width ||
>      >>>>>> +        res->height != s->current_cursor->height) {
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    transfer.x = 0;
>      >>>>>> +    transfer.y = 0;
>      >>>>>> +    transfer.z = 0;
>      >>>>>> +    transfer.w = res->width;
>      >>>>>> +    transfer.h = res->height;
>      >>>>>> +    transfer.d = 1;
>      >>>>>> +
>      >>>>>> +    transfer_iovec.iov_base = s->current_cursor->data;
>      >>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>      >>>>>> +
>      >>>>>> +    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>      >>>>>> +                                    resource_id, &transfer,
>      >>>>>> +                                    &transfer_iovec);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>      >>>>>> +{
>      >>>>>> +    VirtIOGPU *g = VIRTIO_GPU(b);
>      >>>>>> +    virtio_gpu_process_cmdq(g);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>      >>>>>> +                                struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_resource_create_2d c2d;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(c2d);
>      >>>>>> +    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id,
>     c2d.format,
>      >>>>>> +                                       c2d.width, c2d.height);
>      >>>>>> +
>      >>>>>> +    rc_3d.target = 2;
>      >>>>>> +    rc_3d.format = c2d.format;
>      >>>>>> +    rc_3d.bind = (1 << 1);
>      >>>>>> +    rc_3d.width = c2d.width;
>      >>>>>> +    rc_3d.height = c2d.height;
>      >>>>>> +    rc_3d.depth = 1;
>      >>>>>> +    rc_3d.array_size = 1;
>      >>>>>> +    rc_3d.last_level = 0;
>      >>>>>> +    rc_3d.nr_samples = 0;
>      >>>>>> +    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>      >>> c2d.resource_id,
>      >>>>> &rc_3d);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>      >>>>>> +    res->width = c2d.width;
>      >>>>>> +    res->height = c2d.height;
>      >>>>>> +    res->format = c2d.format;
>      >>>>>> +    res->resource_id = c2d.resource_id;
>      >>>>>> +
>      >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>      >>>>>> +                                struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_resource_create_3d c3d;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(c3d);
>      >>>>>> +
>      >>>>>> +    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id,
>     c3d.format,
>      >>>>>> +                                       c3d.width, c3d.height,
>      >>> c3d.depth);
>      >>>>>> +
>      >>>>>> +    rc_3d.target = c3d.target;
>      >>>>>> +    rc_3d.format = c3d.format;
>      >>>>>> +    rc_3d.bind = c3d.bind;
>      >>>>>> +    rc_3d.width = c3d.width;
>      >>>>>> +    rc_3d.height = c3d.height;
>      >>>>>> +    rc_3d.depth = c3d.depth;
>      >>>>>> +    rc_3d.array_size = c3d.array_size;
>      >>>>>> +    rc_3d.last_level = c3d.last_level;
>      >>>>>> +    rc_3d.nr_samples = c3d.nr_samples;
>      >>>>>> +    rc_3d.flags = c3d.flags;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>      >>> c3d.resource_id,
>      >>>>> &rc_3d);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>      >>>>>> +    res->width = c3d.width;
>      >>>>>> +    res->height = c3d.height;
>      >>>>>> +    res->format = c3d.format;
>      >>>>>> +    res->resource_id = c3d.resource_id;
>      >>>>>> +
>      >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
>      >>>>>> +                            struct virtio_gpu_ctrl_command
>     *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_resource_unref unref;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(unref);
>      >>>>>> +
>      >>>>>> +    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_unref(vr->rutabaga,
>     unref.resource_id);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    if (res->image) {
>      >>>>>> +        pixman_image_unref(res->image);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
>      >>>>>> +    g_free(res);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_context_create(VirtIOGPU *g,
>      >>>>>> +                            struct virtio_gpu_ctrl_command
>     *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_ctx_create cc;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(cc);
>      >>>>>> +    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>      >>>>>> +                                    cc.debug_name);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_context_create(vr->rutabaga,
>     cc.hdr.ctx_id,
>      >>>>>> +                                     cc.context_init,
>     cc.debug_name,
>      >>>>> cc.nlen);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
>      >>>>>> +                             struct virtio_gpu_ctrl_command
>     *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_ctx_destroy cd;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(cd);
>      >>>>>> +    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_context_destroy(vr->rutabaga,
>     cd.hdr.ctx_id);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
>      >>> virtio_gpu_ctrl_command
>      >>>>> *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result, i;
>      >>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>      >>>>>> +    struct iovec transfer_iovec;
>      >>>>>> +    struct virtio_gpu_resource_flush rf;
>      >>>>>> +    bool found = false;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +    if (vr->headless) {
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(rf);
>      >>>>>> +    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>      >>>>>> +                                   rf.r.width, rf.r.height,
>     rf.r.x,
>      >>>>> rf.r.y);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, rf.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +
>      >>>>>> +    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>      >>>>>> +        scanout = &g->parent_obj.scanout[i];
>      >>>>>> +        if (i == res->scanout_bitmask) {
>      >>>>>> +            found = true;
>      >>>>>> +            break;
>      >>>>>> +        }
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    if (!found) {
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    transfer.x = 0;
>      >>>>>> +    transfer.y = 0;
>      >>>>>> +    transfer.z = 0;
>      >>>>>> +    transfer.w = res->width;
>      >>>>>> +    transfer.h = res->height;
>      >>>>>> +    transfer.d = 1;
>      >>>>>> +
>      >>>>>> +    transfer_iovec.iov_base =
>     pixman_image_get_data(res->image);
>      >>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>      >>>>>> +                                             rf.resource_id,
>      >>> &transfer,
>      >>>>>> +                                             &transfer_iovec);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +    dpy_gfx_update_full(scanout->con);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct
>     virtio_gpu_ctrl_command
>      >>>>> *cmd)
>      >>>>>> +{
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>      >>>>>> +    struct virtio_gpu_set_scanout ss;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +    if (vr->headless) {
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(ss);
>      >>>>>> +    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id,
>     ss.resource_id,
>      >>>>>> +                                     ss.r.width,
>     ss.r.height, ss.r.x,
>      >>>>> ss.r.y);
>      >>>>>> +
>      >>>>>> +    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>      >>>>>> +    scanout = &g->parent_obj.scanout[ss.scanout_id];
>      >>>>>> +
>      >>>>>> +    if (ss.resource_id == 0) {
>      >>>>>> +        dpy_gfx_replace_surface(scanout->con, NULL);
>      >>>>>> +        dpy_gl_scanout_disable(scanout->con);
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, ss.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +
>      >>>>>> +    if (!res->image) {
>      >>>>>> +        pixman_format_code_t pformat;
>      >>>>>> +        pformat = virtio_gpu_get_pixman_format(res->format);
>      >>>>>> +        CHECK(pformat, cmd);
>      >>>>>> +
>      >>>>>> +        res->image = pixman_image_create_bits(pformat,
>      >>>>>> +                                              res->width,
>      >>>>>> +                                              res->height,
>      >>>>>> +                                              NULL, 0);
>      >>>>>> +        CHECK(res->image, cmd);
>      >>>>>> +        pixman_image_ref(res->image);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    g->parent_obj.enable = 1;
>      >>>>>> +
>      >>>>>> +    /* realloc the surface ptr */
>      >>>>>> +    scanout->ds =
>     qemu_create_displaysurface_pixman(res->image);
>      >>>>>> +    dpy_gfx_replace_surface(scanout->con, NULL);
>      >>>>>> +    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>      >>>>>> +    res->scanout_bitmask = ss.scanout_id;
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
>      >>>>>> +                       struct virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_cmd_submit cs;
>      >>>>>> +    struct rutabaga_command rutabaga_cmd = { 0 };
>      >>>>>> +    g_autofree uint8_t *buf = NULL;
>      >>>>>> +    size_t s;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(cs);
>      >>>>>> +    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>      >>>>>> +
>      >>>>>> +    buf = g_new0(uint8_t, cs.size);
>      >>>>>> +    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>      >>>>>> +                   sizeof(cs), buf, cs.size);
>      >>>>>> +    CHECK(s == cs.size, cmd);
>      >>>>>> +
>      >>>>>> +    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>      >>>>>> +    rutabaga_cmd.cmd = buf;
>      >>>>>> +    rutabaga_cmd.cmd_size = cs.size;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_submit_command(vr->rutabaga,
>     &rutabaga_cmd);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>      >>>>>> +                                 struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>      >>>>>> +    struct virtio_gpu_transfer_to_host_2d t2d;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(t2d);
>      >>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>      >>>>>> +
>      >>>>>> +    transfer.x = t2d.r.x;
>      >>>>>> +    transfer.y = t2d.r.y;
>      >>>>>> +    transfer.z = 0;
>      >>>>>> +    transfer.w = t2d.r.width;
>      >>>>>> +    transfer.h = t2d.r.height;
>      >>>>>> +    transfer.d = 1;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
>      >>>>> t2d.resource_id,
>      >>>>>> +                                              &transfer);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>      >>>>>> +                                 struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>      >>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>      >>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>      >>>>>> +
>      >>>>>> +    transfer.x = t3d.box.x;
>      >>>>>> +    transfer.y = t3d.box.y;
>      >>>>>> +    transfer.z = t3d.box.z;
>      >>>>>> +    transfer.w = t3d.box.w;
>      >>>>>> +    transfer.h = t3d.box.h;
>      >>>>>> +    transfer.d = t3d.box.d;
>      >>>>>> +    transfer.level = t3d.level;
>      >>>>>> +    transfer.stride = t3d.stride;
>      >>>>>> +    transfer.layer_stride = t3d.layer_stride;
>      >>>>>> +    transfer.offset = t3d.offset;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga,
>      >>>>> t3d.hdr.ctx_id,
>      >>>>>> +                                              t3d.resource_id,
>      >>>>> &transfer);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>      >>>>>> +                                   struct
>     virtio_gpu_ctrl_command
>      >>> *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>      >>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>      >>>>>> +    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>      >>>>>> +
>      >>>>>> +    transfer.x = t3d.box.x;
>      >>>>>> +    transfer.y = t3d.box.y;
>      >>>>>> +    transfer.z = t3d.box.z;
>      >>>>>> +    transfer.w = t3d.box.w;
>      >>>>>> +    transfer.h = t3d.box.h;
>      >>>>>> +    transfer.d = t3d.box.d;
>      >>>>>> +    transfer.level = t3d.level;
>      >>>>>> +    transfer.stride = t3d.stride;
>      >>>>>> +    transfer.layer_stride = t3d.layer_stride;
>      >>>>>> +    transfer.offset = t3d.offset;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga,
>      >>>>> t3d.hdr.ctx_id,
>      >>>>>> +                                             t3d.resource_id,
>      >>> &transfer,
>      >>>>> NULL);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
>      >>> virtio_gpu_ctrl_command
>      >>>>> *cmd)
>      >>>>>> +{
>      >>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_resource_attach_backing att_rb;
>      >>>>>> +    int ret;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(att_rb);
>      >>>>>> +    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +    CHECK(!res->iov, cmd);
>      >>>>>> +
>      >>>>>> +    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
>      >>>>> sizeof(att_rb),
>      >>>>>> +                                        cmd, NULL, &res->iov,
>      >>>>> &res->iov_cnt);
>      >>>>>> +    CHECK(!ret, cmd);
>      >>>>>> +
>      >>>>>> +    vecs.iovecs = res->iov;
>      >>>>>> +    vecs.num_iovecs = res->iov_cnt;
>      >>>>>> +
>      >>>>>> +    ret = rutabaga_resource_attach_backing(vr->rutabaga,
>      >>>>> att_rb.resource_id,
>      >>>>>> +                                           &vecs);
>      >>>>>> +    if (ret != 0) {
>      >>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    CHECK(!ret, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
>      >>> virtio_gpu_ctrl_command
>      >>>>> *cmd)
>      >>>>>> +{
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_resource_detach_backing detach_rb;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(detach_rb);
>      >>>>>> +   
>     trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +
>      >>>>>> +    rutabaga_resource_detach_backing(vr->rutabaga,
>      >>>>>> +                                     detach_rb.resource_id);
>      >>>>>> +
>      >>>>>> +    virtio_gpu_cleanup_mapping(g, res);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>      >>>>>> +                                 struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_ctx_resource att_res;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(att_res);
>      >>>>>> +    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>      >>>>>> +                                        att_res.resource_id);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_context_attach_resource(vr->rutabaga,
>      >>>>> att_res.hdr.ctx_id,
>      >>>>>> +                                             
>     att_res.resource_id);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>      >>>>>> +                                 struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_ctx_resource det_res;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(det_res);
>      >>>>>> +    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>      >>>>>> +                                        det_res.resource_id);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_context_detach_resource(vr->rutabaga,
>      >>>>> det_res.hdr.ctx_id,
>      >>>>>> +                                             
>     det_res.resource_id);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
>      >>>>> virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_get_capset_info info;
>      >>>>>> +    struct virtio_gpu_resp_capset_info resp;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(info);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_get_capset_info(vr->rutabaga,
>     info.capset_index,
>      >>>>>> +                                      &resp.capset_id,
>      >>>>> &resp.capset_max_version,
>      >>>>>> +                                      &resp.capset_max_size);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>      >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct
>     virtio_gpu_ctrl_command
>      >>>>> *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    struct virtio_gpu_get_capset gc;
>      >>>>>> +    struct virtio_gpu_resp_capset *resp;
>      >>>>>> +    uint32_t capset_size, capset_version;
>      >>>>>> +    uint32_t current_id, i;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(gc);
>      >>>>>> +    for (i = 0; i < vr->num_capsets; i++) {
>      >>>>>> +        result = rutabaga_get_capset_info(vr->rutabaga, i,
>      >>>>>> +                                          &current_id,
>      >>> &capset_version,
>      >>>>>> +                                          &capset_size);
>      >>>>>> +        CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +        if (current_id == gc.capset_id) {
>      >>>>>> +            break;
>      >>>>>> +        }
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    CHECK(i < vr->num_capsets, cmd);
>      >>>>>> +
>      >>>>>> +    resp = g_malloc0(sizeof(*resp) + capset_size);
>      >>>>>> +    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>      >>>>>> +    rutabaga_get_capset(vr->rutabaga, gc.capset_id,
>     gc.capset_version,
>      >>>>>> +                        resp->capset_data, capset_size);
>      >>>>>> +
>      >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp->hdr,
>     sizeof(*resp) +
>      >>>>> capset_size);
>      >>>>>> +    g_free(resp);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>      >>>>>> +                                  struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int result;
>      >>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>      >>>>>> +    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>      >>>>>> +    struct virtio_gpu_resource_create_blob cblob;
>      >>>>>> +    struct rutabaga_create_blob rc_blob = { 0 };
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
>      >>>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
>      >>> cblob.size);
>      >>>>>> +
>      >>>>>> +    CHECK(cblob.resource_id != 0, cmd);
>      >>>>>> +
>      >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>      >>>>>> +
>      >>>>>> +    res->resource_id = cblob.resource_id;
>      >>>>>> +    res->blob_size = cblob.size;
>      >>>>>> +
>      >>>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>      >>>>>> +        result = virtio_gpu_create_mapping_iov(g,
>     cblob.nr_entries,
>      >>>>>> +                                             
>       sizeof(cblob), cmd,
>      >>>>> &res->addrs,
>      >>>>>> +                                               &res->iov,
>      >>> &res->iov_cnt);
>      >>>>>> +        CHECK(!result, cmd);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    rc_blob.blob_id = cblob.blob_id;
>      >>>>>> +    rc_blob.blob_mem = cblob.blob_mem;
>      >>>>>> +    rc_blob.blob_flags = cblob.blob_flags;
>      >>>>>> +    rc_blob.size = cblob.size;
>      >>>>>> +
>      >>>>>> +    vecs.iovecs = res->iov;
>      >>>>>> +    vecs.num_iovecs = res->iov_cnt;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_create_blob(vr->rutabaga,
>      >>>>> cblob.hdr.ctx_id,
>      >>>>>> +                                           cblob.resource_id,
>      >>> &rc_blob,
>      >>>>> &vecs,
>      >>>>>> +                                           NULL);
>      >>>>>> +
>      >>>>>> +    if (result && cblob.blob_mem !=
>     VIRTIO_GPU_BLOB_MEM_HOST3D) {
>      >>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>      >>>>>> +    res = NULL;
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>      >>>>>> +                               struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    uint32_t map_info = 0;
>      >>>>>> +    uint32_t slot = 0;
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct rutabaga_mapping mapping = { 0 };
>      >>>>>> +    struct virtio_gpu_resource_map_blob mblob;
>      >>>>>> +    struct virtio_gpu_resp_map_info resp = { 0 };
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>      >>>>>> +
>      >>>>>> +    CHECK(mblob.resource_id != 0, cmd);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_map_info(vr->rutabaga,
>      >>> mblob.resource_id,
>      >>>>>> +                                        &map_info);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    /*
>      >>>>>> +     * RUTABAGA_MAP_ACCESS_* flags are not part of the
>     virtio-gpu
>      >>> spec,
>      >>>>> but do
>      >>>>>> +     * exist to potentially allow the hypervisor to
>     restrict write
>      >>>>> access to
>      >>>>>> +     * memory. QEMU does not need to use this functionality
>     at the
>      >>>>> moment.
>      >>>>>> +     */
>      >>>>>> +    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>      >>>>>> +
>      >>>>>> +    result = rutabaga_resource_map(vr->rutabaga,
>     mblob.resource_id,
>      >>>>> &mapping);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +
>      >>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>      >>>>>> +        if (vr->memory_regions[slot].used) {
>      >>>>>> +            continue;
>      >>>>>> +        }
>      >>>>>> +
>      >>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>      >>>>>> +        memory_region_init_ram_ptr(mr, NULL, "blob",
>     mapping.size,
>      >>>>>> +                                   mapping.ptr);
>      >>>>>> +        memory_region_add_subregion(&g->parent_obj.hostmem,
>      >>>>>> +                                    mblob.offset, mr);
>      >>>>>> +        vr->memory_regions[slot].resource_id =
>     mblob.resource_id;
>      >>>>>> +        vr->memory_regions[slot].used = 1;
>      >>>>>> +        break;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    if (slot >= MAX_SLOTS) {
>      >>>>>> +        result = rutabaga_resource_unmap(vr->rutabaga,
>      >>>>> mblob.resource_id);
>      >>>>>> +        CHECK(!result, cmd);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>      >>>>>> +
>      >>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>      >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>      >>>>>> +                                 struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    int32_t result;
>      >>>>>> +    uint32_t slot = 0;
>      >>>>>> +    struct virtio_gpu_simple_resource *res;
>      >>>>>> +    struct virtio_gpu_resource_unmap_blob ublob;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(ublob);
>      >>>>>> +
>      >>>>>> +    CHECK(ublob.resource_id != 0, cmd);
>      >>>>>> +
>      >>>>>> +    res = virtio_gpu_find_resource(g, ublob.resource_id);
>      >>>>>> +    CHECK(res, cmd);
>      >>>>>> +
>      >>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>      >>>>>> +        if (vr->memory_regions[slot].resource_id !=
>      >>> ublob.resource_id) {
>      >>>>>> +            continue;
>      >>>>>> +        }
>      >>>>>> +
>      >>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>      >>>>>> +        memory_region_del_subregion(&g->parent_obj.hostmem,
>     mr);
>      >>>>>> +
>      >>>>>> +        vr->memory_regions[slot].resource_id = 0;
>      >>>>>> +        vr->memory_regions[slot].used = 0;
>      >>>>>> +        break;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>      >>>>>> +    result = rutabaga_resource_unmap(vr->rutabaga,
>     res->resource_id);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>      >>>>>> +                                struct
>     virtio_gpu_ctrl_command *cmd)
>      >>>>>> +{
>      >>>>>> +    struct rutabaga_fence fence = { 0 };
>      >>>>>> +    int32_t result;
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>      >>>>>> +
>      >>>>>> +    switch (cmd->cmd_hdr.type) {
>      >>>>>> +    case VIRTIO_GPU_CMD_CTX_CREATE:
>      >>>>>> +        rutabaga_cmd_context_create(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_CTX_DESTROY:
>      >>>>>> +        rutabaga_cmd_context_destroy(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>      >>>>>> +        rutabaga_cmd_create_resource_2d(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>      >>>>>> +        rutabaga_cmd_create_resource_3d(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_SUBMIT_3D:
>      >>>>>> +        rutabaga_cmd_submit_3d(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>      >>>>>> +        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>      >>>>>> +        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>      >>>>>> +        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>      >>>>>> +        rutabaga_cmd_attach_backing(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>      >>>>>> +        rutabaga_cmd_detach_backing(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_SET_SCANOUT:
>      >>>>>> +        rutabaga_cmd_set_scanout(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>      >>>>>> +        rutabaga_cmd_resource_flush(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>      >>>>>> +        rutabaga_cmd_resource_unref(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>      >>>>>> +        rutabaga_cmd_ctx_attach_resource(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>      >>>>>> +        rutabaga_cmd_ctx_detach_resource(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>      >>>>>> +        rutabaga_cmd_get_capset_info(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET:
>      >>>>>> +        rutabaga_cmd_get_capset(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>      >>>>>> +        virtio_gpu_get_display_info(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_GET_EDID:
>      >>>>>> +        virtio_gpu_get_edid(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>      >>>>>> +        rutabaga_cmd_resource_create_blob(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>      >>>>>> +        rutabaga_cmd_resource_map_blob(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>      >>>>>> +        rutabaga_cmd_resource_unmap_blob(g, cmd);
>      >>>>>> +        break;
>      >>>>>> +    default:
>      >>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>      >>>>>> +        break;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    if (cmd->finished) {
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +    if (cmd->error) {
>      >>>>>> +        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>      >>>>>> +                     cmd->cmd_hdr.type, cmd->error);
>      >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>      >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>      >>>>> VIRTIO_GPU_RESP_OK_NODATA);
>      >>>>>> +        return;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    fence.flags = cmd->cmd_hdr.flags;
>      >>>>>> +    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>      >>>>>> +    fence.fence_id = cmd->cmd_hdr.fence_id;
>      >>>>>> +    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>      >>>>>> +
>      >>>>>> +    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
>      >>>>> cmd->cmd_hdr.type);
>      >>>>>> +
>      >>>>>> +    result = rutabaga_create_fence(vr->rutabaga, &fence);
>      >>>>>> +    CHECK(!result, cmd);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +virtio_gpu_rutabaga_aio_cb(void *opaque)
>      >>>>>> +{
>      >>>>>> +    struct rutabaga_aio_data *data = opaque;
>      >>>>>> +    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>      >>>>>> +    struct rutabaga_fence fence_data = data->fence;
>      >>>>>> +    struct virtio_gpu_ctrl_command *cmd, *tmp;
>      >>>>>> +
>      >>>>>> +    uint32_t signaled_ctx_specific = fence_data.flags &
>      >>>>>> +                                   
>       RUTABAGA_FLAG_INFO_RING_IDX;
>      >>>>>> +
>      >>>>>> +    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>      >>>>>> +        /*
>      >>>>>> +         * Due to context specific timelines.
>      >>>>>> +         */
>      >>>>>> +        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>      >>>>>> +                                     
>       RUTABAGA_FLAG_INFO_RING_IDX;
>      >>>>>> +
>      >>>>>> +        if (signaled_ctx_specific != target_ctx_specific) {
>      >>>>>> +            continue;
>      >>>>>> +        }
>      >>>>>> +
>      >>>>>> +        if (signaled_ctx_specific &&
>      >>>>>> +           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>      >>>>>> +            continue;
>      >>>>>> +        }
>      >>>>>> +
>      >>>>>> +        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>      >>>>>> +            continue;
>      >>>>>> +        }
>      >>>>>> +
>      >>>>>> +        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>      >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>      >>>>> VIRTIO_GPU_RESP_OK_NODATA);
>      >>>>>> +        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>      >>>>>> +        g_free(cmd);
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    g_free(data);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>      >>>>>> +                             const struct rutabaga_fence
>     *fence) {
>      >>>>>> +    struct rutabaga_aio_data *data;
>      >>>>>> +    VirtIOGPU *g = (VirtIOGPU *)user_data;
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +
>      >>>>>> +    /*
>      >>>>>> +     * gfxstream and both cross-domain (and even newer versions
>      >>>>> virglrenderer:
>      >>>>>> +     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
>      >>>>> completion on
>      >>>>>> +     * threads ("callback threads") that are different from
>     the thread
>      >>>>> that
>      >>>>>> +     * processes the command queue ("main thread").
>      >>>>>> +     *
>      >>>>>> +     * crosvm and other virtio-gpu 1.1 implementations
>     enable callback
>      >>>>> threads
>      >>>>>> +     * via locking.  However, on QEMU a deadlock is observed if
>      >>>>>> +     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
>      >>> callback]
>      >>>>> is used
>      >>>>>> +     * from a thread that is not the main thread.
>      >>>>>> +     *
>      >>>>>> +     * The reason is QEMU's internal locking is designed to
>     work with
>      >>>>> QEMU
>      >>>>>> +     * threads (see rcu_register_thread()) and not generic
>     C/C++/Rust
>      >>>>> threads.
>      >>>>>> +     * For now, we can workaround this by scheduling the
>     return of the
>      >>>>>> +     * fence descriptors on the main thread.
>      >>>>>> +     */
>      >>>>>> +
>      >>>>>> +    data = g_new0(struct rutabaga_aio_data, 1);
>      >>>>>> +    data->vr = vr;
>      >>>>>> +    data->fence = *fence;
>      >>>>>> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>      >>>>>> +                            virtio_gpu_rutabaga_aio_cb,
>      >>>>>> +                            data);
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static void
>      >>>>>> +virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>      >>>>>> +                             const struct rutabaga_debug
>     *debug) {
>      >>>>>> +
>      >>>>>> +    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>      >>>>>> +        error_report("%s", debug->message);
>      >>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>      >>>>>> +        warn_report("%s", debug->message);
>      >>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>      >>>>>> +        info_report("%s", debug->message);
>      >>>>>> +    }
>      >>>>>> +}
>      >>>>>> +
>      >>>>>> +static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error
>     **errp)
>      >>>>>> +{
>      >>>>>> +    int result;
>      >>>>>> +    uint64_t capset_mask;
>      >>>>>> +    struct rutabaga_builder builder = { 0 };
>      >>>>>> +    char wayland_socket_path[UNIX_PATH_MAX];
>      >>>>>> +    struct rutabaga_channel channel = { 0 };
>      >>>>>> +    struct rutabaga_channels channels = { 0 };
>      >>>>>> +
>      >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>      >>>>>> +    vr->rutabaga = NULL;
>      >>>>>> +
>      >>>>>> +    if (!vr->capset_names) {
>      >>>>>> +        error_setg(errp, "a capset name from the virtio-gpu
>     spec is
>      >>>>> required");
>      >>>>>> +        return false;
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>      >>>>>> +    /*
>      >>>>>> +     * Currently, if WSI is specified, the only valid
>     strings are
>      >>>>> "surfaceless"
>      >>>>>> +     * or "headless".  Surfaceless doesn't create a native
>     window
>      >>>>> surface, but
>      >>>>>> +     * does copy from the render target to the Pixman
>     buffer if a
>      >>>>> virtio-gpu
>      >>>>>> +     * 2D hypercall is issued.  Surfacless is the default.
>      >>>>>> +     *
>      >>>>>> +     * Headless is like surfaceless, but doesn't copy to
>     the Pixman
>      >>>>> buffer. The
>      >>>>>> +     * use case is automated testing environments where
>     there is no
>      >>> need
>      >>>>> to view
>      >>>>>> +     * results.
>      >>>>>> +     *
>      >>>>>> +     * In the future, more performant virtio-gpu 2D UI
>     integration may
>      >>>>> be added.
>      >>>>>> +     */
>      >>>>>> +    if (vr->wsi) {
>      >>>>>> +        if (g_str_equal(vr->wsi, "surfaceless")) {
>      >>>>>> +            vr->headless = false;
>      >>>>>> +        } else if (g_str_equal(vr->wsi, "headless")) {
>      >>>>>> +            vr->headless = true;
>      >>>>>> +        } else {
>      >>>>>> +            error_setg(errp, "invalid wsi option selected");
>      >>>>>> +            return false;
>      >>>>>> +        }
>      >>>>>> +    }
>      >>>>>> +
>      >>>>>> +    result = rutabaga_calculate_capset_mask(vr->capset_names,
>      >>>>> &capset_mask);
>      >>>>>
>      >>>>> First, sorry for responding after such a long time. I've been
>     busy with
>      >>>>> work and I'm doing QEMU in my free time.
>      >>>>>
>      >>>>> In iteration 1 I've raised the topic on capset_names [1] and
>     I haven't
>      >>>>> seen it answered properly. Perhaps I need to rephrase a bit
>     so here we
>      >>> go:
>      >>>>> capset_names seems to be colon-separated list of bit options
>     managed by
>      >>>>> rutabaga. This introduces yet another way of options
>     handling. There
>      >>> have
>      >>>>> been talks about harmonizing options handling in QEMU since
>     apparently
>      >>> it
>      >>>>> is considered too complex [2,3].
>      >>>>
>      >>>>
>      >>>>> Why not pass the "capset" as a bitfield like capset_mask and
>     have QEMU
>      >>>>> create "capset" from QOM properties?
>      >>>>
>      >>>> IIUC these flags could come from virtio_gpu.h which is already
>     present in
>      >>>>> the QEMU tree. This would not inly shortcut the dependency on
>     rutabaga
>      >>> here
>      >>>>> but would also be more idiomatic QEMU (since it makes the
>     options more
>      >>>>> introspectable by internal machinery).
>      >>>>
>      >>>>
>      >>>>> Of course the bitfield approach would require modifications
>     in QEMU
>      >>>>> whenever rutabaga gains new features. However, I figure that
>     in the long
>      >>>>> term rutabaga will be quite feature complete such that the
>     benefits of
>      >>>>> idiomatic QEMU handling will outweigh the decoupling of the
>     projects.
>      >>>>>
>      >>>>> What do you think?
>      >>>>>
>      >>>>
>      >>>> I think what you're suggesting is something like -device
>      >>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>      >>>> gfxstream_vulkan + cross_domain]?
>      >>>
>      >>> I was thinking more along the lines of
>      >>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>      >>> gfxstream_vulkan and cross_domain are boolean QOM properties.
>     This would
>      >>> make for a human-readable format which follows QEMU style.
>      >>>
>      >>>>
>      >>>> We actually did consider something like that when adding the
>      >>>> --context-types flag [with crosvm], but there was a desire for a
>      >>>> human-readable format rather than numbers [even if they are in the
>      >>>> virtio-gpu spec].
>      >>>>
>      >>>> Additionally, there are quite a few context types that people
>     are playing
>      >>>> around with [gfxstream-gles, gfxstream-composer] that are
>     launchable and
>      >>>> aren't in the spec just yet.
>      >>>
>      >>> Right, QEMU had to be modified for this kind of experimentation. I
>      >>> considered this in my last paragraph and figured that in the
>     long run QEMU
>      >>> *may* prefer more idiomatic option handling since it tries hard
>     to not
>      >>> break its command line interface. I'm just pointing this out -- the
>      >>> decision is ultimately up to the community.
>      >>>
>      >>> Why not have dedicated QEMU development branches for
>     experimentation?
>      >>> Wouldn't upstreaming new features into QEMU be a good
>     motivation to get the
>      >>> missing pieces into the spec, once they are mature?
>      >>
>      >>
>      >>>>
>      >>>> Also, a key feature we want to explicitly **not** turn on all
>     available
>      >>>> context-types and let the user decide.
>      >>>
>      >>> How would you prevent that with the current colon-separated
>     approach?
>      >>> Splitting capset_mask in multiple parameters is just a different
>      >>> syntactical representation of the same thing.
>      >>>
>      >>>> That'll allow guest Mesa in
>      >>>> particular to do its magic in its loader.  So one may run Zink
>     + ANV with
>      >>>> ioctl forwarding, or Iris + ioctl forwarding and compare
>     performance with
>      >>>> the same guest image.
>      >>>>
>      >>>> And another thing is one needs some knowledge of the host
>     system to choose
>      >>>> the right context type.  You wouldn't do Zink + ANV ioctl
>     forwarding on
>      >>>> MacOS.  So I think the task of choosing the right context type
>     will fall
>      >>> to
>      >>>> projects that depend on QEMU (such as Android Emulator) which
>     have some
>      >>>> knowledge of the host environment.
>      >>>>
>      >>>> We actually have a graphics detector somewhere that calls
>     VK/OpenGL before
>      >>>> launching the VM and sets the right options.  Plan is to port into
>      >>>> gfxstream, maybe we could use that.
>      >>>
>      >>> You could bail out in QEMU if rutabaga_calculate_capset_mask()
>     detects
>      >>> conflicting combinations, no?
>      >>>
>      >>>>
>      >>>> So given the desire for human readable formats, being portable
>     across VMMs
>      >>>> (crosvm, qemu, rust-vmm??) and experimentation, the string ->
>     capset mask
>      >>>> conversion was put in the rutabaga API.  So I wouldn't change
>     it for those
>      >>>> reasons.
>      >>>
>      >>> What do you mean by being portable across VMMs?
>      >>
>      >>
>      >> Having the API inside rutabaga is (mildly) useful when multiple
>     VMMs have
>      >> the need to translate from a human-readable format to flags
>     digestible by
>      >> rutabaga.
>      >>
>      >>
>     https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452 <https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452>
>      >>
>      >>
>     https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353 <https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353>
>      >>
>      >>
>     https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505 <https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505>
>      >>
>      >> For these crosvm/qemu launchers, I imagine capset names will be
>     plumbed all
>      >> the way through eventually (launch_cvd
>      >> --gpu_context=gfxstream-vulkan:cross-domain if you've played
>     around with
>      >> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you
>     played
>      >> around with Termina VMs).
>      >>
>      >> I think rust-vmm could also use the same API ("--capset_names") too.
>      >>
>      >>
>      >>> Sure, QEMU had to be taught new flags before being able to use new
>      >>> rutabaga features. I agree that this comes with a certain
>     inconvenience.
>      >>> But it may also be inconvenient for QEMU to deal with
>     additional ad-hoc
>      >>> options parsing when there are efforts for harmonization.
>      >>>
>      >>> Did my comments shed new light into the discussion?
>      >>
>      >>
>      >> Yes, they do.  I agree with you that both crosvm/qemu have too
>     many flags,
>      >> and having a stable command line interface is important.  We are
>     aiming for
>      >> stability with the `--capset_names={colon string}` command line,
>     and at
>      >> least for crosvm looking to deprecate older options [since we've
>     never had
>      >> an official release of crosvm yet].
>      >>
>      >> I do think:
>      >>
>      >> 1) "capset_names=gfxstream-vulkan:cross-domain"
>      >> 2) "cross-domain=on,gfxstream-vulkan=on"
>      >>
>      >> are similar enough.  I would choose (1) for since I think not
>     duplicating
>      >> the [name] -> flag logic and having a similar interface across
>     VMMs + VMM
>      >> launchers is ever-so slightly useful.
>      >
>      > I think we've now reached a good understanding of the issue. It's
>     now up to the QEMU community to make a choice. So I'm cc'ing Markus
>     and Thomas as the experts of the topic.
> 
>     As a virtio-gpu user, I'm slightly inclined to (2) since it would be
>     easier to implement the same option for virtio-gpu-virgl when a need
>     arises.
> 
> 
> The rutabaga/virgl implementations will likely be done via 
> DEFINE_PROP_BIT, no?  For virgl, it'll set the virgl flags, and for 
> rutabaga, it'll set the capset mask.  So it would be different.

Currently virtio-gpu-gl does not have properties for capsets so it 
exposes all capsets it implements (VIRTIO_GPU_CAPSET_VIRGL and 
VIRTIO_GPU_CAPSET_VIRGL2. VIRTIO_GPU_CAPSET_VENUS follows soon.), but it 
is possible to add them.

Regards,
Akihiko Odaki


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-19 18:36           ` Bernhard Beschow
  2023-09-19 22:07             ` Akihiko Odaki
@ 2023-09-27 11:34             ` Thomas Huth
  2023-09-27 12:33               ` Markus Armbruster
  1 sibling, 1 reply; 34+ messages in thread
From: Thomas Huth @ 2023-09-27 11:34 UTC (permalink / raw)
  To: Bernhard Beschow, Gurchetan Singh
  Cc: qemu-devel, marcandre.lureau, akihiko.odaki, ray.huang,
	alex.bennee, hi, ernunes, manos.pitsidianakis, philmd,
	Markus Armbruster

On 19/09/2023 20.36, Bernhard Beschow wrote:
> 
> 
> Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com> wrote:
>>
>>>
>>>
>>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
>>> gurchetansingh@chromium.org>:
>>>> On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com>
>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
>>>>> gurchetansingh@chromium.org>:
>>>>>> This adds initial support for gfxstream and cross-domain.  Both
>>>>>> features rely on virtio-gpu blob resources and context types, which
>>>>>> are also implemented in this patch.
>>>>>>
>>>>>> gfxstream has a long and illustrious history in Android graphics
>>>>>> paravirtualization.  It has been powering graphics in the Android
>>>>>> Studio Emulator for more than a decade, which is the main developer
>>>>>> platform.
>>>>>>
>>>>>> Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>>>>>> The key design characteristic was a 1:1 threading model and
>>>>>> auto-generation, which fit nicely with the OpenGLES spec.  It also
>>>>>> allowed easy layering with ANGLE on the host, which provides the GLES
>>>>>> implementations on Windows or MacOS enviroments.
>>>>>>
>>>>>> gfxstream has traditionally been maintained by a single engineer, and
>>>>>> between 2015 to 2021, the goldfish throne passed to Frank Yang.
>>>>>> Historians often remark this glorious reign ("pax gfxstreama" is the
>>>>>> academic term) was comparable to that of Augustus and both Queen
>>>>>> Elizabeths.  Just to name a few accomplishments in a resplendent
>>>>>> panoply: higher versions of GLES, address space graphics, snapshot
>>>>>> support and CTS compliant Vulkan [b].
>>>>>>
>>>>>> One major drawback was the use of out-of-tree goldfish drivers.
>>>>>> Android engineers didn't know much about DRM/KMS and especially TTM so
>>>>>> a simple guest to host pipe was conceived.
>>>>>>
>>>>>> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>>>>>> the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
>>>>>> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>>>>>> It was a symbol compatible replacement of virglrenderer [c] and named
>>>>>> "AVDVirglrenderer".  This implementation forms the basis of the
>>>>>> current gfxstream host implementation still in use today.
>>>>>>
>>>>>> cross-domain support follows a similar arc.  Originally conceived by
>>>>>> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>>>>>> 2018, it initially relied on the downstream "virtio-wl" device.
>>>>>>
>>>>>> In 2020 and 2021, virtio-gpu was extended to include blob resources
>>>>>> and multiple timelines by yours truly, features gfxstream/cross-domain
>>>>>> both require to function correctly.
>>>>>>
>>>>>> Right now, we stand at the precipice of a truly fantastic possibility:
>>>>>> the Android Emulator powered by upstream QEMU and upstream Linux
>>>>>> kernel.  gfxstream will then be packaged properfully, and app
>>>>>> developers can even fix gfxstream bugs on their own if they encounter
>>>>>> them.
>>>>>>
>>>>>> It's been quite the ride, my friends.  Where will gfxstream head next,
>>>>>> nobody really knows.  I wouldn't be surprised if it's around for
>>>>>> another decade, maintained by a new generation of Android graphics
>>>>>> enthusiasts.
>>>>>>
>>>>>> Technical details:
>>>>>>   - Very simple initial display integration: just used Pixman
>>>>>>   - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>>>>>>     calls
>>>>>>
>>>>>> Next steps for Android VMs:
>>>>>>   - The next step would be improving display integration and UI
>>> interfaces
>>>>>>     with the goal of the QEMU upstream graphics being in an emulator
>>>>>>     release [d].
>>>>>>
>>>>>> Next steps for Linux VMs for display virtualization:
>>>>>>   - For widespread distribution, someone needs to package Sommelier or
>>> the
>>>>>>     wayland-proxy-virtwl [e] ideally into Debian main. In addition,
>>> newer
>>>>>>     versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS option,
>>>>>>     which allows disabling KMS hypercalls.  If anyone cares enough,
>>> it'll
>>>>>>     probably be possible to build a custom VM variant that uses this
>>>>> display
>>>>>>     virtualization strategy.
>>>>>>
>>>>>> [a]
>>>>> https://android-review.googlesource.com/c/platform/development/+/34470
>>>>>> [b]
>>>>>
>>> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>>>>>> [c]
>>>>>
>>> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>>>>>> [d] https://developer.android.com/studio/releases/emulator
>>>>>> [e] https://github.com/talex5/wayland-proxy-virtwl
>>>>>>
>>>>>> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>>>>>> Tested-by: Alyssa Ross <hi@alyssa.is>
>>>>>> Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>>>>>> Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>>>>>> ---
>>>>>> v1: Incorported various suggestions by Akihiko Odaki and Bernard
>>> Berschow
>>>>>>     - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>>>>>>     - Used error_report(..)
>>>>>>     - Used g_autofree to fix leaks on error paths
>>>>>>     - Removed unnecessary casts
>>>>>>     - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>>>>>>
>>>>>> v2: Incorported various suggestions by Akihiko Odaki, Marc-André Lureau
>>>>> and
>>>>>>     Bernard Berschow:
>>>>>>     - Parenthesis in CHECK macro
>>>>>>     - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>>>>>>     - delay until g->parent_obj.enable = 1
>>>>>>     - Additional cast fixes
>>>>>>     - initialize directly in virtio_gpu_rutabaga_realize(..)
>>>>>>     - add debug callback to hook into QEMU error's APIs
>>>>>>
>>>>>> v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>>>>>>     - Autodetect Wayland socket when not explicitly specified
>>>>>>     - Fix map_blob error paths
>>>>>>     - Add comment why we need both `res` and `resource` in create blob
>>>>>>     - Cast and whitespace fixes
>>>>>>     - Big endian check comes before virtio_gpu_rutabaga_init().
>>>>>>     - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>>>>>>
>>>>>> v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>>>>>>     - Double checked all casts
>>>>>>     - Remove unnecessary parenthesis
>>>>>>     - Removed `resource` in create_blob
>>>>>>     - Added comment about failure case
>>>>>>     - Pass user-provided socket as-is
>>>>>>     - Use stack variable rather than heap allocation
>>>>>>     - Future-proofed map info API to give access flags as well
>>>>>>
>>>>>> v5: Incorporated feedback from Akihiko Odaki:
>>>>>>     - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>>>>>>     - Simplify num_capsets check
>>>>>>     - Call cleanup mapping on error paths
>>>>>>     - uint64_t --> void* for rutabaga_map(..)
>>>>>>     - Removed unnecessary parenthesis
>>>>>>     - Removed unnecessary cast
>>>>>>     - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>>>>>>     - Reuse result variable
>>>>>>
>>>>>> v6: Incorporated feedback from Akihiko Odaki:
>>>>>>     - Remove unnecessary #ifndef
>>>>>>     - Disable scanout when appropriate
>>>>>>     - CHECK capset index within range outside loop
>>>>>>     - Add capset_version
>>>>>>
>>>>>> v7: Incorporated feedback from Akihiko Odaki:
>>>>>>     - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>>>>>>
>>>>>> v9: Incorportated feedback from Akihiko Odaki:
>>>>>>     - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>>>>>>     - Add error_setg(..) after rutabaga_init(..)
>>>>>>
>>>>>> v10: Incorportated feedback from Akihiko Odaki:
>>>>>>     - error_setg(..) --> error_setg_errno(..) when appropriate
>>>>>>     - virtio_gpu_rutabaga_init returns a bool instead of an int
>>>>>>
>>>>>> v11: Incorportated feedback from Philippe Mathieu-Daudé:
>>>>>>     - C-style /* */ comments and avoid // comments.
>>>>>>     - GPL-2.0 --> GPL-2.0-or-later
>>>>>>
>>>>>> hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>>>>>> hw/display/virtio-gpu-rutabaga.c     | 1121 ++++++++++++++++++++++++++
>>>>>> hw/display/virtio-vga-rutabaga.c     |   53 ++
>>>>>> 3 files changed, 1224 insertions(+)
>>>>>> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>>>>>> create mode 100644 hw/display/virtio-gpu-rutabaga.c
>>>>>> create mode 100644 hw/display/virtio-vga-rutabaga.c
>>>>>>
>>>>>> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
>>>>> b/hw/display/virtio-gpu-pci-rutabaga.c
>>>>>> new file mode 100644
>>>>>> index 0000000000..311eff308a
>>>>>> --- /dev/null
>>>>>> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
>>>>>> @@ -0,0 +1,50 @@
>>>>>> +/*
>>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>>>>>> + */
>>>>>> +
>>>>>> +#include "qemu/osdep.h"
>>>>>> +#include "qapi/error.h"
>>>>>> +#include "qemu/module.h"
>>>>>> +#include "hw/pci/pci.h"
>>>>>> +#include "hw/qdev-properties.h"
>>>>>> +#include "hw/virtio/virtio.h"
>>>>>> +#include "hw/virtio/virtio-bus.h"
>>>>>> +#include "hw/virtio/virtio-gpu-pci.h"
>>>>>> +#include "qom/object.h"
>>>>>> +
>>>>>> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>>>>>> +typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>>>>>> +DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
>>> VIRTIO_GPU_RUTABAGA_PCI,
>>>>>> +                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>>>>>> +
>>>>>> +struct VirtIOGPURutabagaPCI {
>>>>>> +    VirtIOGPUPCIBase parent_obj;
>>>>>> +    VirtIOGPURutabaga vdev;
>>>>>> +};
>>>>>> +
>>>>>> +static void virtio_gpu_rutabaga_initfn(Object *obj)
>>>>>> +{
>>>>>> +    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>>>>>> +
>>>>>> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>>>>>> +                                TYPE_VIRTIO_GPU_RUTABAGA);
>>>>>> +    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>>>>>> +}
>>>>>> +
>>>>>> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>>>>>> +    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>>>>>> +    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>>>>>> +    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>>>>>> +    .instance_init = virtio_gpu_rutabaga_initfn,
>>>>>> +};
>>>>>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>>>>>> +module_kconfig(VIRTIO_PCI);
>>>>>> +
>>>>>> +static void virtio_gpu_rutabaga_pci_register_types(void)
>>>>>> +{
>>>>>> +    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>>>>>> +}
>>>>>> +
>>>>>> +type_init(virtio_gpu_rutabaga_pci_register_types)
>>>>>> +
>>>>>> +module_dep("hw-display-virtio-gpu-pci");
>>>>>> diff --git a/hw/display/virtio-gpu-rutabaga.c
>>>>> b/hw/display/virtio-gpu-rutabaga.c
>>>>>> new file mode 100644
>>>>>> index 0000000000..9018e5a702
>>>>>> --- /dev/null
>>>>>> +++ b/hw/display/virtio-gpu-rutabaga.c
>>>>>> @@ -0,0 +1,1121 @@
>>>>>> +/*
>>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>>>>>> + */
>>>>>> +
>>>>>> +#include "qemu/osdep.h"
>>>>>> +#include "qapi/error.h"
>>>>>> +#include "qemu/error-report.h"
>>>>>> +#include "qemu/iov.h"
>>>>>> +#include "trace.h"
>>>>>> +#include "hw/virtio/virtio.h"
>>>>>> +#include "hw/virtio/virtio-gpu.h"
>>>>>> +#include "hw/virtio/virtio-gpu-pixman.h"
>>>>>> +#include "hw/virtio/virtio-iommu.h"
>>>>>> +
>>>>>> +#include <glib/gmem.h>
>>>>>> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>>>>>> +
>>>>>> +#define CHECK(condition, cmd)
>>>>>       \
>>>>>> +    do {
>>>>>        \
>>>>>> +        if (!(condition)) {
>>>>>       \
>>>>>> +            error_report("CHECK failed in %s() %s:" "%d", __func__,
>>>>>       \
>>>>>> +                         __FILE__, __LINE__);
>>>>>       \
>>>>>> +            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>>>>        \
>>>>>> +            return;
>>>>>       \
>>>>>> +       }
>>>>>        \
>>>>>> +    } while (0)
>>>>>> +
>>>>>> +/*
>>>>>> + * This is the size of the char array in struct sock_addr_un. No
>>> Wayland
>>>>> socket
>>>>>> + * can be created with a path longer than this, including the null
>>>>> terminator.
>>>>>> + */
>>>>>> +#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>>>>>> +
>>>>>> +struct rutabaga_aio_data {
>>>>>> +    struct VirtIOGPURutabaga *vr;
>>>>>> +    struct rutabaga_fence fence;
>>>>>> +};
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
>>>>> virtio_gpu_scanout *s,
>>>>>> +                                  uint32_t resource_id)
>>>>>> +{
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct iovec transfer_iovec;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, resource_id);
>>>>>> +    if (!res) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (res->width != s->current_cursor->width ||
>>>>>> +        res->height != s->current_cursor->height) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    transfer.x = 0;
>>>>>> +    transfer.y = 0;
>>>>>> +    transfer.z = 0;
>>>>>> +    transfer.w = res->width;
>>>>>> +    transfer.h = res->height;
>>>>>> +    transfer.d = 1;
>>>>>> +
>>>>>> +    transfer_iovec.iov_base = s->current_cursor->data;
>>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>>>>>> +
>>>>>> +    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>>>>>> +                                    resource_id, &transfer,
>>>>>> +                                    &transfer_iovec);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>>>>>> +{
>>>>>> +    VirtIOGPU *g = VIRTIO_GPU(b);
>>>>>> +    virtio_gpu_process_cmdq(g);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>>>>>> +                                struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_create_2d c2d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(c2d);
>>>>>> +    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>>>>>> +                                       c2d.width, c2d.height);
>>>>>> +
>>>>>> +    rc_3d.target = 2;
>>>>>> +    rc_3d.format = c2d.format;
>>>>>> +    rc_3d.bind = (1 << 1);
>>>>>> +    rc_3d.width = c2d.width;
>>>>>> +    rc_3d.height = c2d.height;
>>>>>> +    rc_3d.depth = 1;
>>>>>> +    rc_3d.array_size = 1;
>>>>>> +    rc_3d.last_level = 0;
>>>>>> +    rc_3d.nr_samples = 0;
>>>>>> +    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>>>>>> +
>>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>>> c2d.resource_id,
>>>>> &rc_3d);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>> +    res->width = c2d.width;
>>>>>> +    res->height = c2d.height;
>>>>>> +    res->format = c2d.format;
>>>>>> +    res->resource_id = c2d.resource_id;
>>>>>> +
>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>>>>>> +                                struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_create_3d c3d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(c3d);
>>>>>> +
>>>>>> +    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>>>>>> +                                       c3d.width, c3d.height,
>>> c3d.depth);
>>>>>> +
>>>>>> +    rc_3d.target = c3d.target;
>>>>>> +    rc_3d.format = c3d.format;
>>>>>> +    rc_3d.bind = c3d.bind;
>>>>>> +    rc_3d.width = c3d.width;
>>>>>> +    rc_3d.height = c3d.height;
>>>>>> +    rc_3d.depth = c3d.depth;
>>>>>> +    rc_3d.array_size = c3d.array_size;
>>>>>> +    rc_3d.last_level = c3d.last_level;
>>>>>> +    rc_3d.nr_samples = c3d.nr_samples;
>>>>>> +    rc_3d.flags = c3d.flags;
>>>>>> +
>>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>>> c3d.resource_id,
>>>>> &rc_3d);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>> +    res->width = c3d.width;
>>>>>> +    res->height = c3d.height;
>>>>>> +    res->format = c3d.format;
>>>>>> +    res->resource_id = c3d.resource_id;
>>>>>> +
>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
>>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_unref unref;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(unref);
>>>>>> +
>>>>>> +    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    if (res->image) {
>>>>>> +        pixman_image_unref(res->image);
>>>>>> +    }
>>>>>> +
>>>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
>>>>>> +    g_free(res);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_context_create(VirtIOGPU *g,
>>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_create cc;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cc);
>>>>>> +    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>>>>>> +                                    cc.debug_name);
>>>>>> +
>>>>>> +    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>>>>>> +                                     cc.context_init, cc.debug_name,
>>>>> cc.nlen);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
>>>>>> +                             struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_destroy cd;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cd);
>>>>>> +    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>>>>>> +
>>>>>> +    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
>>> virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    int32_t result, i;
>>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct iovec transfer_iovec;
>>>>>> +    struct virtio_gpu_resource_flush rf;
>>>>>> +    bool found = false;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +    if (vr->headless) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(rf);
>>>>>> +    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>>>>>> +                                   rf.r.width, rf.r.height, rf.r.x,
>>>>> rf.r.y);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, rf.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>>>>>> +        scanout = &g->parent_obj.scanout[i];
>>>>>> +        if (i == res->scanout_bitmask) {
>>>>>> +            found = true;
>>>>>> +            break;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    if (!found) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    transfer.x = 0;
>>>>>> +    transfer.y = 0;
>>>>>> +    transfer.z = 0;
>>>>>> +    transfer.w = res->width;
>>>>>> +    transfer.h = res->height;
>>>>>> +    transfer.d = 1;
>>>>>> +
>>>>>> +    transfer_iovec.iov_base = pixman_image_get_data(res->image);
>>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>>>>>> +                                             rf.resource_id,
>>> &transfer,
>>>>>> +                                             &transfer_iovec);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +    dpy_gfx_update_full(scanout->con);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>>>>>> +    struct virtio_gpu_set_scanout ss;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +    if (vr->headless) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(ss);
>>>>>> +    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>>>>>> +                                     ss.r.width, ss.r.height, ss.r.x,
>>>>> ss.r.y);
>>>>>> +
>>>>>> +    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>>>>>> +    scanout = &g->parent_obj.scanout[ss.scanout_id];
>>>>>> +
>>>>>> +    if (ss.resource_id == 0) {
>>>>>> +        dpy_gfx_replace_surface(scanout->con, NULL);
>>>>>> +        dpy_gl_scanout_disable(scanout->con);
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, ss.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    if (!res->image) {
>>>>>> +        pixman_format_code_t pformat;
>>>>>> +        pformat = virtio_gpu_get_pixman_format(res->format);
>>>>>> +        CHECK(pformat, cmd);
>>>>>> +
>>>>>> +        res->image = pixman_image_create_bits(pformat,
>>>>>> +                                              res->width,
>>>>>> +                                              res->height,
>>>>>> +                                              NULL, 0);
>>>>>> +        CHECK(res->image, cmd);
>>>>>> +        pixman_image_ref(res->image);
>>>>>> +    }
>>>>>> +
>>>>>> +    g->parent_obj.enable = 1;
>>>>>> +
>>>>>> +    /* realloc the surface ptr */
>>>>>> +    scanout->ds = qemu_create_displaysurface_pixman(res->image);
>>>>>> +    dpy_gfx_replace_surface(scanout->con, NULL);
>>>>>> +    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>>>>>> +    res->scanout_bitmask = ss.scanout_id;
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
>>>>>> +                       struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_cmd_submit cs;
>>>>>> +    struct rutabaga_command rutabaga_cmd = { 0 };
>>>>>> +    g_autofree uint8_t *buf = NULL;
>>>>>> +    size_t s;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cs);
>>>>>> +    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>>>>>> +
>>>>>> +    buf = g_new0(uint8_t, cs.size);
>>>>>> +    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>>>>>> +                   sizeof(cs), buf, cs.size);
>>>>>> +    CHECK(s == cs.size, cmd);
>>>>>> +
>>>>>> +    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>>>>>> +    rutabaga_cmd.cmd = buf;
>>>>>> +    rutabaga_cmd.cmd_size = cs.size;
>>>>>> +
>>>>>> +    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct virtio_gpu_transfer_to_host_2d t2d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(t2d);
>>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>>>>>> +
>>>>>> +    transfer.x = t2d.r.x;
>>>>>> +    transfer.y = t2d.r.y;
>>>>>> +    transfer.z = 0;
>>>>>> +    transfer.w = t2d.r.width;
>>>>>> +    transfer.h = t2d.r.height;
>>>>>> +    transfer.d = 1;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
>>>>> t2d.resource_id,
>>>>>> +                                              &transfer);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>>>>>> +
>>>>>> +    transfer.x = t3d.box.x;
>>>>>> +    transfer.y = t3d.box.y;
>>>>>> +    transfer.z = t3d.box.z;
>>>>>> +    transfer.w = t3d.box.w;
>>>>>> +    transfer.h = t3d.box.h;
>>>>>> +    transfer.d = t3d.box.d;
>>>>>> +    transfer.level = t3d.level;
>>>>>> +    transfer.stride = t3d.stride;
>>>>>> +    transfer.layer_stride = t3d.layer_stride;
>>>>>> +    transfer.offset = t3d.offset;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga,
>>>>> t3d.hdr.ctx_id,
>>>>>> +                                              t3d.resource_id,
>>>>> &transfer);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>>>>>> +                                   struct virtio_gpu_ctrl_command
>>> *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct rutabaga_transfer transfer = { 0 };
>>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>>>>>> +    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>>>>>> +
>>>>>> +    transfer.x = t3d.box.x;
>>>>>> +    transfer.y = t3d.box.y;
>>>>>> +    transfer.z = t3d.box.z;
>>>>>> +    transfer.w = t3d.box.w;
>>>>>> +    transfer.h = t3d.box.h;
>>>>>> +    transfer.d = t3d.box.d;
>>>>>> +    transfer.level = t3d.level;
>>>>>> +    transfer.stride = t3d.stride;
>>>>>> +    transfer.layer_stride = t3d.layer_stride;
>>>>>> +    transfer.offset = t3d.offset;
>>>>>> +
>>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga,
>>>>> t3d.hdr.ctx_id,
>>>>>> +                                             t3d.resource_id,
>>> &transfer,
>>>>> NULL);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
>>> virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_attach_backing att_rb;
>>>>>> +    int ret;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(att_rb);
>>>>>> +    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +    CHECK(!res->iov, cmd);
>>>>>> +
>>>>>> +    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
>>>>> sizeof(att_rb),
>>>>>> +                                        cmd, NULL, &res->iov,
>>>>> &res->iov_cnt);
>>>>>> +    CHECK(!ret, cmd);
>>>>>> +
>>>>>> +    vecs.iovecs = res->iov;
>>>>>> +    vecs.num_iovecs = res->iov_cnt;
>>>>>> +
>>>>>> +    ret = rutabaga_resource_attach_backing(vr->rutabaga,
>>>>> att_rb.resource_id,
>>>>>> +                                           &vecs);
>>>>>> +    if (ret != 0) {
>>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(!ret, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
>>> virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_detach_backing detach_rb;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(detach_rb);
>>>>>> +    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    rutabaga_resource_detach_backing(vr->rutabaga,
>>>>>> +                                     detach_rb.resource_id);
>>>>>> +
>>>>>> +    virtio_gpu_cleanup_mapping(g, res);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_resource att_res;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(att_res);
>>>>>> +    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>>>>>> +                                        att_res.resource_id);
>>>>>> +
>>>>>> +    result = rutabaga_context_attach_resource(vr->rutabaga,
>>>>> att_res.hdr.ctx_id,
>>>>>> +                                              att_res.resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_ctx_resource det_res;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(det_res);
>>>>>> +    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>>>>>> +                                        det_res.resource_id);
>>>>>> +
>>>>>> +    result = rutabaga_context_detach_resource(vr->rutabaga,
>>>>> det_res.hdr.ctx_id,
>>>>>> +                                              det_res.resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
>>>>> virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_get_capset_info info;
>>>>>> +    struct virtio_gpu_resp_capset_info resp;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(info);
>>>>>> +
>>>>>> +    result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>>>>>> +                                      &resp.capset_id,
>>>>> &resp.capset_max_version,
>>>>>> +                                      &resp.capset_max_size);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command
>>>>> *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    struct virtio_gpu_get_capset gc;
>>>>>> +    struct virtio_gpu_resp_capset *resp;
>>>>>> +    uint32_t capset_size, capset_version;
>>>>>> +    uint32_t current_id, i;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(gc);
>>>>>> +    for (i = 0; i < vr->num_capsets; i++) {
>>>>>> +        result = rutabaga_get_capset_info(vr->rutabaga, i,
>>>>>> +                                          &current_id,
>>> &capset_version,
>>>>>> +                                          &capset_size);
>>>>>> +        CHECK(!result, cmd);
>>>>>> +
>>>>>> +        if (current_id == gc.capset_id) {
>>>>>> +            break;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(i < vr->num_capsets, cmd);
>>>>>> +
>>>>>> +    resp = g_malloc0(sizeof(*resp) + capset_size);
>>>>>> +    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>>>>>> +    rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>>>>>> +                        resp->capset_data, capset_size);
>>>>>> +
>>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
>>>>> capset_size);
>>>>>> +    g_free(resp);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>>>>>> +                                  struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int result;
>>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>>>>>> +    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>>>>>> +    struct virtio_gpu_resource_create_blob cblob;
>>>>>> +    struct rutabaga_create_blob rc_blob = { 0 };
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
>>>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
>>> cblob.size);
>>>>>> +
>>>>>> +    CHECK(cblob.resource_id != 0, cmd);
>>>>>> +
>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>> +
>>>>>> +    res->resource_id = cblob.resource_id;
>>>>>> +    res->blob_size = cblob.size;
>>>>>> +
>>>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>>>>>> +        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>>>>>> +                                               sizeof(cblob), cmd,
>>>>> &res->addrs,
>>>>>> +                                               &res->iov,
>>> &res->iov_cnt);
>>>>>> +        CHECK(!result, cmd);
>>>>>> +    }
>>>>>> +
>>>>>> +    rc_blob.blob_id = cblob.blob_id;
>>>>>> +    rc_blob.blob_mem = cblob.blob_mem;
>>>>>> +    rc_blob.blob_flags = cblob.blob_flags;
>>>>>> +    rc_blob.size = cblob.size;
>>>>>> +
>>>>>> +    vecs.iovecs = res->iov;
>>>>>> +    vecs.num_iovecs = res->iov_cnt;
>>>>>> +
>>>>>> +    result = rutabaga_resource_create_blob(vr->rutabaga,
>>>>> cblob.hdr.ctx_id,
>>>>>> +                                           cblob.resource_id,
>>> &rc_blob,
>>>>> &vecs,
>>>>>> +                                           NULL);
>>>>>> +
>>>>>> +    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>> +    res = NULL;
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>>>>>> +                               struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    uint32_t map_info = 0;
>>>>>> +    uint32_t slot = 0;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct rutabaga_mapping mapping = { 0 };
>>>>>> +    struct virtio_gpu_resource_map_blob mblob;
>>>>>> +    struct virtio_gpu_resp_map_info resp = { 0 };
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>>>>>> +
>>>>>> +    CHECK(mblob.resource_id != 0, cmd);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    result = rutabaga_resource_map_info(vr->rutabaga,
>>> mblob.resource_id,
>>>>>> +                                        &map_info);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    /*
>>>>>> +     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu
>>> spec,
>>>>> but do
>>>>>> +     * exist to potentially allow the hypervisor to restrict write
>>>>> access to
>>>>>> +     * memory. QEMU does not need to use this functionality at the
>>>>> moment.
>>>>>> +     */
>>>>>> +    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>>>>>> +
>>>>>> +    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
>>>>> &mapping);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +
>>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>>>>>> +        if (vr->memory_regions[slot].used) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>>>>>> +        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>>>>>> +                                   mapping.ptr);
>>>>>> +        memory_region_add_subregion(&g->parent_obj.hostmem,
>>>>>> +                                    mblob.offset, mr);
>>>>>> +        vr->memory_regions[slot].resource_id = mblob.resource_id;
>>>>>> +        vr->memory_regions[slot].used = 1;
>>>>>> +        break;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (slot >= MAX_SLOTS) {
>>>>>> +        result = rutabaga_resource_unmap(vr->rutabaga,
>>>>> mblob.resource_id);
>>>>>> +        CHECK(!result, cmd);
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>>>>>> +
>>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>>>>>> +                                 struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    int32_t result;
>>>>>> +    uint32_t slot = 0;
>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>> +    struct virtio_gpu_resource_unmap_blob ublob;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(ublob);
>>>>>> +
>>>>>> +    CHECK(ublob.resource_id != 0, cmd);
>>>>>> +
>>>>>> +    res = virtio_gpu_find_resource(g, ublob.resource_id);
>>>>>> +    CHECK(res, cmd);
>>>>>> +
>>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>>>>>> +        if (vr->memory_regions[slot].resource_id !=
>>> ublob.resource_id) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>>>>>> +        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>>>>>> +
>>>>>> +        vr->memory_regions[slot].resource_id = 0;
>>>>>> +        vr->memory_regions[slot].used = 0;
>>>>>> +        break;
>>>>>> +    }
>>>>>> +
>>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>>>>>> +    result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>>>>>> +                                struct virtio_gpu_ctrl_command *cmd)
>>>>>> +{
>>>>>> +    struct rutabaga_fence fence = { 0 };
>>>>>> +    int32_t result;
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>>>>>> +
>>>>>> +    switch (cmd->cmd_hdr.type) {
>>>>>> +    case VIRTIO_GPU_CMD_CTX_CREATE:
>>>>>> +        rutabaga_cmd_context_create(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_CTX_DESTROY:
>>>>>> +        rutabaga_cmd_context_destroy(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>>>>>> +        rutabaga_cmd_create_resource_2d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>>>>>> +        rutabaga_cmd_create_resource_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_SUBMIT_3D:
>>>>>> +        rutabaga_cmd_submit_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>>>>>> +        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>>>>>> +        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>>>>>> +        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>>>>>> +        rutabaga_cmd_attach_backing(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>>>>>> +        rutabaga_cmd_detach_backing(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_SET_SCANOUT:
>>>>>> +        rutabaga_cmd_set_scanout(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>>>>>> +        rutabaga_cmd_resource_flush(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>>>>>> +        rutabaga_cmd_resource_unref(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>>>>>> +        rutabaga_cmd_ctx_attach_resource(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>>>>>> +        rutabaga_cmd_ctx_detach_resource(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>>>>>> +        rutabaga_cmd_get_capset_info(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET:
>>>>>> +        rutabaga_cmd_get_capset(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>>>>>> +        virtio_gpu_get_display_info(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_GET_EDID:
>>>>>> +        virtio_gpu_get_edid(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>>>>>> +        rutabaga_cmd_resource_create_blob(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>>>>>> +        rutabaga_cmd_resource_map_blob(g, cmd);
>>>>>> +        break;
>>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>>>>>> +        rutabaga_cmd_resource_unmap_blob(g, cmd);
>>>>>> +        break;
>>>>>> +    default:
>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>>>>> +        break;
>>>>>> +    }
>>>>>> +
>>>>>> +    if (cmd->finished) {
>>>>>> +        return;
>>>>>> +    }
>>>>>> +    if (cmd->error) {
>>>>>> +        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>>>>>> +                     cmd->cmd_hdr.type, cmd->error);
>>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>>>>>> +        return;
>>>>>> +    }
>>>>>> +    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>>>>> VIRTIO_GPU_RESP_OK_NODATA);
>>>>>> +        return;
>>>>>> +    }
>>>>>> +
>>>>>> +    fence.flags = cmd->cmd_hdr.flags;
>>>>>> +    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>>>>>> +    fence.fence_id = cmd->cmd_hdr.fence_id;
>>>>>> +    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>>>>>> +
>>>>>> +    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
>>>>> cmd->cmd_hdr.type);
>>>>>> +
>>>>>> +    result = rutabaga_create_fence(vr->rutabaga, &fence);
>>>>>> +    CHECK(!result, cmd);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_aio_cb(void *opaque)
>>>>>> +{
>>>>>> +    struct rutabaga_aio_data *data = opaque;
>>>>>> +    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>>>>>> +    struct rutabaga_fence fence_data = data->fence;
>>>>>> +    struct virtio_gpu_ctrl_command *cmd, *tmp;
>>>>>> +
>>>>>> +    uint32_t signaled_ctx_specific = fence_data.flags &
>>>>>> +                                     RUTABAGA_FLAG_INFO_RING_IDX;
>>>>>> +
>>>>>> +    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>>>>>> +        /*
>>>>>> +         * Due to context specific timelines.
>>>>>> +         */
>>>>>> +        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>>>>>> +                                       RUTABAGA_FLAG_INFO_RING_IDX;
>>>>>> +
>>>>>> +        if (signaled_ctx_specific != target_ctx_specific) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        if (signaled_ctx_specific &&
>>>>>> +           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>>>>>> +            continue;
>>>>>> +        }
>>>>>> +
>>>>>> +        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>>>>> VIRTIO_GPU_RESP_OK_NODATA);
>>>>>> +        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>>>>>> +        g_free(cmd);
>>>>>> +    }
>>>>>> +
>>>>>> +    g_free(data);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>>>>>> +                             const struct rutabaga_fence *fence) {
>>>>>> +    struct rutabaga_aio_data *data;
>>>>>> +    VirtIOGPU *g = (VirtIOGPU *)user_data;
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +
>>>>>> +    /*
>>>>>> +     * gfxstream and both cross-domain (and even newer versions
>>>>> virglrenderer:
>>>>>> +     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
>>>>> completion on
>>>>>> +     * threads ("callback threads") that are different from the thread
>>>>> that
>>>>>> +     * processes the command queue ("main thread").
>>>>>> +     *
>>>>>> +     * crosvm and other virtio-gpu 1.1 implementations enable callback
>>>>> threads
>>>>>> +     * via locking.  However, on QEMU a deadlock is observed if
>>>>>> +     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
>>> callback]
>>>>> is used
>>>>>> +     * from a thread that is not the main thread.
>>>>>> +     *
>>>>>> +     * The reason is QEMU's internal locking is designed to work with
>>>>> QEMU
>>>>>> +     * threads (see rcu_register_thread()) and not generic C/C++/Rust
>>>>> threads.
>>>>>> +     * For now, we can workaround this by scheduling the return of the
>>>>>> +     * fence descriptors on the main thread.
>>>>>> +     */
>>>>>> +
>>>>>> +    data = g_new0(struct rutabaga_aio_data, 1);
>>>>>> +    data->vr = vr;
>>>>>> +    data->fence = *fence;
>>>>>> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>>>>>> +                            virtio_gpu_rutabaga_aio_cb,
>>>>>> +                            data);
>>>>>> +}
>>>>>> +
>>>>>> +static void
>>>>>> +virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>>>>>> +                             const struct rutabaga_debug *debug) {
>>>>>> +
>>>>>> +    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>>>>>> +        error_report("%s", debug->message);
>>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>>>>>> +        warn_report("%s", debug->message);
>>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>>>>>> +        info_report("%s", debug->message);
>>>>>> +    }
>>>>>> +}
>>>>>> +
>>>>>> +static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
>>>>>> +{
>>>>>> +    int result;
>>>>>> +    uint64_t capset_mask;
>>>>>> +    struct rutabaga_builder builder = { 0 };
>>>>>> +    char wayland_socket_path[UNIX_PATH_MAX];
>>>>>> +    struct rutabaga_channel channel = { 0 };
>>>>>> +    struct rutabaga_channels channels = { 0 };
>>>>>> +
>>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>>>>>> +    vr->rutabaga = NULL;
>>>>>> +
>>>>>> +    if (!vr->capset_names) {
>>>>>> +        error_setg(errp, "a capset name from the virtio-gpu spec is
>>>>> required");
>>>>>> +        return false;
>>>>>> +    }
>>>>>> +
>>>>>> +    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>>>>>> +    /*
>>>>>> +     * Currently, if WSI is specified, the only valid strings are
>>>>> "surfaceless"
>>>>>> +     * or "headless".  Surfaceless doesn't create a native window
>>>>> surface, but
>>>>>> +     * does copy from the render target to the Pixman buffer if a
>>>>> virtio-gpu
>>>>>> +     * 2D hypercall is issued.  Surfacless is the default.
>>>>>> +     *
>>>>>> +     * Headless is like surfaceless, but doesn't copy to the Pixman
>>>>> buffer. The
>>>>>> +     * use case is automated testing environments where there is no
>>> need
>>>>> to view
>>>>>> +     * results.
>>>>>> +     *
>>>>>> +     * In the future, more performant virtio-gpu 2D UI integration may
>>>>> be added.
>>>>>> +     */
>>>>>> +    if (vr->wsi) {
>>>>>> +        if (g_str_equal(vr->wsi, "surfaceless")) {
>>>>>> +            vr->headless = false;
>>>>>> +        } else if (g_str_equal(vr->wsi, "headless")) {
>>>>>> +            vr->headless = true;
>>>>>> +        } else {
>>>>>> +            error_setg(errp, "invalid wsi option selected");
>>>>>> +            return false;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    result = rutabaga_calculate_capset_mask(vr->capset_names,
>>>>> &capset_mask);
>>>>>
>>>>> First, sorry for responding after such a long time. I've been busy with
>>>>> work and I'm doing QEMU in my free time.
>>>>>
>>>>> In iteration 1 I've raised the topic on capset_names [1] and I haven't
>>>>> seen it answered properly. Perhaps I need to rephrase a bit so here we
>>> go:
>>>>> capset_names seems to be colon-separated list of bit options managed by
>>>>> rutabaga. This introduces yet another way of options handling. There
>>> have
>>>>> been talks about harmonizing options handling in QEMU since apparently
>>> it
>>>>> is considered too complex [2,3].
>>>>
>>>>
>>>>> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
>>>>> create "capset" from QOM properties?
>>>>
>>>> IIUC these flags could come from virtio_gpu.h which is already present in
>>>>> the QEMU tree. This would not inly shortcut the dependency on rutabaga
>>> here
>>>>> but would also be more idiomatic QEMU (since it makes the options more
>>>>> introspectable by internal machinery).
>>>>
>>>>
>>>>> Of course the bitfield approach would require modifications in QEMU
>>>>> whenever rutabaga gains new features. However, I figure that in the long
>>>>> term rutabaga will be quite feature complete such that the benefits of
>>>>> idiomatic QEMU handling will outweigh the decoupling of the projects.
>>>>>
>>>>> What do you think?
>>>>>
>>>>
>>>> I think what you're suggesting is something like -device
>>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>>>> gfxstream_vulkan + cross_domain]?
>>>
>>> I was thinking more along the lines of
>>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>>> gfxstream_vulkan and cross_domain are boolean QOM properties. This would
>>> make for a human-readable format which follows QEMU style.
>>>
>>>>
>>>> We actually did consider something like that when adding the
>>>> --context-types flag [with crosvm], but there was a desire for a
>>>> human-readable format rather than numbers [even if they are in the
>>>> virtio-gpu spec].
>>>>
>>>> Additionally, there are quite a few context types that people are playing
>>>> around with [gfxstream-gles, gfxstream-composer] that are launchable and
>>>> aren't in the spec just yet.
>>>
>>> Right, QEMU had to be modified for this kind of experimentation. I
>>> considered this in my last paragraph and figured that in the long run QEMU
>>> *may* prefer more idiomatic option handling since it tries hard to not
>>> break its command line interface. I'm just pointing this out -- the
>>> decision is ultimately up to the community.
>>>
>>> Why not have dedicated QEMU development branches for experimentation?
>>> Wouldn't upstreaming new features into QEMU be a good motivation to get the
>>> missing pieces into the spec, once they are mature?
>>
>>
>>>>
>>>> Also, a key feature we want to explicitly **not** turn on all available
>>>> context-types and let the user decide.
>>>
>>> How would you prevent that with the current colon-separated approach?
>>> Splitting capset_mask in multiple parameters is just a different
>>> syntactical representation of the same thing.
>>>
>>>> That'll allow guest Mesa in
>>>> particular to do its magic in its loader.  So one may run Zink + ANV with
>>>> ioctl forwarding, or Iris + ioctl forwarding and compare performance with
>>>> the same guest image.
>>>>
>>>> And another thing is one needs some knowledge of the host system to choose
>>>> the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
>>>> MacOS.  So I think the task of choosing the right context type will fall
>>> to
>>>> projects that depend on QEMU (such as Android Emulator) which have some
>>>> knowledge of the host environment.
>>>>
>>>> We actually have a graphics detector somewhere that calls VK/OpenGL before
>>>> launching the VM and sets the right options.  Plan is to port into
>>>> gfxstream, maybe we could use that.
>>>
>>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
>>> conflicting combinations, no?
>>>
>>>>
>>>> So given the desire for human readable formats, being portable across VMMs
>>>> (crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
>>>> conversion was put in the rutabaga API.  So I wouldn't change it for those
>>>> reasons.
>>>
>>> What do you mean by being portable across VMMs?
>>
>>
>> Having the API inside rutabaga is (mildly) useful when multiple VMMs have
>> the need to translate from a human-readable format to flags digestible by
>> rutabaga.
>>
>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
>>
>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
>>
>> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
>>
>> For these crosvm/qemu launchers, I imagine capset names will be plumbed all
>> the way through eventually (launch_cvd
>> --gpu_context=gfxstream-vulkan:cross-domain if you've played around with
>> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
>> around with Termina VMs).
>>
>> I think rust-vmm could also use the same API ("--capset_names") too.
>>
>>
>>> Sure, QEMU had to be taught new flags before being able to use new
>>> rutabaga features. I agree that this comes with a certain inconvenience.
>>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
>>> options parsing when there are efforts for harmonization.
>>>
>>> Did my comments shed new light into the discussion?
>>
>>
>> Yes, they do.  I agree with you that both crosvm/qemu have too many flags,
>> and having a stable command line interface is important.  We are aiming for
>> stability with the `--capset_names={colon string}` command line, and at
>> least for crosvm looking to deprecate older options [since we've never had
>> an official release of crosvm yet].
>>
>> I do think:
>>
>> 1) "capset_names=gfxstream-vulkan:cross-domain"
>> 2) "cross-domain=on,gfxstream-vulkan=on"
>>
>> are similar enough.  I would choose (1) for since I think not duplicating
>> the [name] -> flag logic and having a similar interface across VMMs + VMM
>> launchers is ever-so slightly useful.
> 
> I think we've now reached a good understanding of the issue. It's now up to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as the experts of the topic.

Sorry for the late reply ... but I'm also not really an expert here. But I 
think the colon string (1) looks like a less common way for handling 
settings in QEMU. The option (2) looks preferable to me.

  Thomas



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-27 11:34             ` Thomas Huth
@ 2023-09-27 12:33               ` Markus Armbruster
  0 siblings, 0 replies; 34+ messages in thread
From: Markus Armbruster @ 2023-09-27 12:33 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Bernhard Beschow, Gurchetan Singh, qemu-devel, marcandre.lureau,
	akihiko.odaki, ray.huang, alex.bennee, hi, ernunes,
	manos.pitsidianakis, philmd

Thomas Huth <thuth@redhat.com> writes:

> On 19/09/2023 20.36, Bernhard Beschow wrote:
>> Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>>> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>>>>> On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com> wrote:

[...]

>>>>>> First, sorry for responding after such a long time. I've been busy with
>>>>>> work and I'm doing QEMU in my free time.
>>>>>>
>>>>>> In iteration 1 I've raised the topic on capset_names [1] and I haven't
>>>>>> seen it answered properly. Perhaps I need to rephrase a bit so here we go:
>>>>>> capset_names seems to be colon-separated list of bit options managed by
>>>>>> rutabaga. This introduces yet another way of options handling. There have
>>>>>> been talks about harmonizing options handling in QEMU since apparently it
>>>>>> is considered too complex [2,3].
>>>>>>
>>>>>> Why not pass the "capset" as a bitfield like capset_mask and have QEMU
>>>>>> create "capset" from QOM properties?
>>>>>> IIUC these flags could come from virtio_gpu.h which is already present in
>>>>>> the QEMU tree. This would not inly shortcut the dependency on rutabaga here
>>>>>> but would also be more idiomatic QEMU (since it makes the options more
>>>>>> introspectable by internal machinery).
>>>>>>
>>>>>> Of course the bitfield approach would require modifications in QEMU
>>>>>> whenever rutabaga gains new features. However, I figure that in the long
>>>>>> term rutabaga will be quite feature complete such that the benefits of
>>>>>> idiomatic QEMU handling will outweigh the decoupling of the projects.
>>>>>>
>>>>>> What do you think?
>>>>>>
>>>>>
>>>>> I think what you're suggesting is something like -device
>>>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>>>>> gfxstream_vulkan + cross_domain]?
>>>>
>>>> I was thinking more along the lines of
>>>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>>>> gfxstream_vulkan and cross_domain are boolean QOM properties. This would
>>>> make for a human-readable format which follows QEMU style.
>>>>
>>>>>
>>>>> We actually did consider something like that when adding the
>>>>> --context-types flag [with crosvm], but there was a desire for a
>>>>> human-readable format rather than numbers [even if they are in the
>>>>> virtio-gpu spec].
>>>>>
>>>>> Additionally, there are quite a few context types that people are playing
>>>>> around with [gfxstream-gles, gfxstream-composer] that are launchable and
>>>>> aren't in the spec just yet.
>>>>
>>>> Right, QEMU had to be modified for this kind of experimentation. I
>>>> considered this in my last paragraph and figured that in the long run QEMU
>>>> *may* prefer more idiomatic option handling since it tries hard to not
>>>> break its command line interface. I'm just pointing this out -- the
>>>> decision is ultimately up to the community.
>>>>
>>>> Why not have dedicated QEMU development branches for experimentation?
>>>> Wouldn't upstreaming new features into QEMU be a good motivation to get the
>>>> missing pieces into the spec, once they are mature?
>>>
>>>
>>>>>
>>>>> Also, a key feature we want to explicitly **not** turn on all available
>>>>> context-types and let the user decide.
>>>>
>>>> How would you prevent that with the current colon-separated approach?
>>>> Splitting capset_mask in multiple parameters is just a different
>>>> syntactical representation of the same thing.
>>>>
>>>>> That'll allow guest Mesa in
>>>>> particular to do its magic in its loader.  So one may run Zink + ANV with
>>>>> ioctl forwarding, or Iris + ioctl forwarding and compare performance with
>>>>> the same guest image.
>>>>>
>>>>> And another thing is one needs some knowledge of the host system to choose
>>>>> the right context type.  You wouldn't do Zink + ANV ioctl forwarding on
>>>>> MacOS.  So I think the task of choosing the right context type will fall to
>>>>> projects that depend on QEMU (such as Android Emulator) which have some
>>>>> knowledge of the host environment.
>>>>>
>>>>> We actually have a graphics detector somewhere that calls VK/OpenGL before
>>>>> launching the VM and sets the right options.  Plan is to port into
>>>>> gfxstream, maybe we could use that.
>>>>
>>>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
>>>> conflicting combinations, no?
>>>>
>>>>>
>>>>> So given the desire for human readable formats, being portable across VMMs
>>>>> (crosvm, qemu, rust-vmm??) and experimentation, the string -> capset mask
>>>>> conversion was put in the rutabaga API.  So I wouldn't change it for those
>>>>> reasons.
>>>>
>>>> What do you mean by being portable across VMMs?
>>>
>>>
>>> Having the API inside rutabaga is (mildly) useful when multiple VMMs have
>>> the need to translate from a human-readable format to flags digestible by
>>> rutabaga.
>>>
>>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
>>>
>>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
>>>
>>> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
>>>
>>> For these crosvm/qemu launchers, I imagine capset names will be plumbed all
>>> the way through eventually (launch_cvd
>>> --gpu_context=gfxstream-vulkan:cross-domain if you've played around with
>>> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
>>> around with Termina VMs).
>>>
>>> I think rust-vmm could also use the same API ("--capset_names") too.
>>>
>>>
>>>> Sure, QEMU had to be taught new flags before being able to use new
>>>> rutabaga features. I agree that this comes with a certain inconvenience.
>>>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
>>>> options parsing when there are efforts for harmonization.
>>>>
>>>> Did my comments shed new light into the discussion?
>>>
>>>
>>> Yes, they do.  I agree with you that both crosvm/qemu have too many flags,
>>> and having a stable command line interface is important.  We are aiming for
>>> stability with the `--capset_names={colon string}` command line, and at
>>> least for crosvm looking to deprecate older options [since we've never had
>>> an official release of crosvm yet].
>>>
>>> I do think:
>>>
>>> 1) "capset_names=gfxstream-vulkan:cross-domain"
>>> 2) "cross-domain=on,gfxstream-vulkan=on"
>>>
>>> are similar enough.  I would choose (1) for since I think not duplicating
>>> the [name] -> flag logic and having a similar interface across VMMs + VMM
>>> launchers is ever-so slightly useful.
>>
>> I think we've now reached a good understanding of the issue. It's now up to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as the experts of the topic.
>
> Sorry for the late reply ... but I'm also not really an expert here. But I think the colon string (1) looks like a less common way for handling settings in QEMU. The option (2) looks preferable to me.

For CLI option arguments, we definitely want (2) and not (1), and we
definitely want to use an existing parser.

For complex configuration, I'd recommend a QAPI-based approach.

To provide more specific advice, I'd have to understand the structure of
your intended configuration first.



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-21 23:44               ` Gurchetan Singh
  2023-09-22  2:41                 ` Akihiko Odaki
@ 2023-09-29 15:06                 ` Bernhard Beschow
  2023-09-30 10:28                   ` Thomas Huth
  1 sibling, 1 reply; 34+ messages in thread
From: Bernhard Beschow @ 2023-09-29 15:06 UTC (permalink / raw)
  To: Gurchetan Singh, Akihiko Odaki
  Cc: qemu-devel, marcandre.lureau, ray.huang, alex.bennee, hi,
	ernunes, manos.pitsidianakis, philmd, Markus Armbruster,
	Thomas Huth



Am 21. September 2023 23:44:42 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>On Tue, Sep 19, 2023 at 3:07 PM Akihiko Odaki <akihiko.odaki@gmail.com>
>wrote:
>
>> On 2023/09/20 3:36, Bernhard Beschow wrote:
>> >
>> >
>> > Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <
>> gurchetansingh@chromium.org>:
>> >> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com>
>> wrote:
>> >>
>> >>>
>> >>>
>> >>> Am 14. September 2023 04:38:51 UTC schrieb Gurchetan Singh <
>> >>> gurchetansingh@chromium.org>:
>> >>>> On Wed, Sep 13, 2023 at 4:58 AM Bernhard Beschow <shentey@gmail.com>
>> >>> wrote:
>> >>>>
>> >>>>>
>> >>>>>
>> >>>>> Am 23. August 2023 01:25:38 UTC schrieb Gurchetan Singh <
>> >>>>> gurchetansingh@chromium.org>:
>> >>>>>> This adds initial support for gfxstream and cross-domain.  Both
>> >>>>>> features rely on virtio-gpu blob resources and context types, which
>> >>>>>> are also implemented in this patch.
>> >>>>>>
>> >>>>>> gfxstream has a long and illustrious history in Android graphics
>> >>>>>> paravirtualization.  It has been powering graphics in the Android
>> >>>>>> Studio Emulator for more than a decade, which is the main developer
>> >>>>>> platform.
>> >>>>>>
>> >>>>>> Originally conceived by Jesse Hall, it was first known as "EmuGL"
>> [a].
>> >>>>>> The key design characteristic was a 1:1 threading model and
>> >>>>>> auto-generation, which fit nicely with the OpenGLES spec.  It also
>> >>>>>> allowed easy layering with ANGLE on the host, which provides the
>> GLES
>> >>>>>> implementations on Windows or MacOS enviroments.
>> >>>>>>
>> >>>>>> gfxstream has traditionally been maintained by a single engineer,
>> and
>> >>>>>> between 2015 to 2021, the goldfish throne passed to Frank Yang.
>> >>>>>> Historians often remark this glorious reign ("pax gfxstreama" is the
>> >>>>>> academic term) was comparable to that of Augustus and both Queen
>> >>>>>> Elizabeths.  Just to name a few accomplishments in a resplendent
>> >>>>>> panoply: higher versions of GLES, address space graphics, snapshot
>> >>>>>> support and CTS compliant Vulkan [b].
>> >>>>>>
>> >>>>>> One major drawback was the use of out-of-tree goldfish drivers.
>> >>>>>> Android engineers didn't know much about DRM/KMS and especially TTM
>> so
>> >>>>>> a simple guest to host pipe was conceived.
>> >>>>>>
>> >>>>>> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>> >>>>>> the Mesa/virglrenderer communities.  In 2018, the initial virtio-gpu
>> >>>>>> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>> >>>>>> It was a symbol compatible replacement of virglrenderer [c] and
>> named
>> >>>>>> "AVDVirglrenderer".  This implementation forms the basis of the
>> >>>>>> current gfxstream host implementation still in use today.
>> >>>>>>
>> >>>>>> cross-domain support follows a similar arc.  Originally conceived by
>> >>>>>> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>> >>>>>> 2018, it initially relied on the downstream "virtio-wl" device.
>> >>>>>>
>> >>>>>> In 2020 and 2021, virtio-gpu was extended to include blob resources
>> >>>>>> and multiple timelines by yours truly, features
>> gfxstream/cross-domain
>> >>>>>> both require to function correctly.
>> >>>>>>
>> >>>>>> Right now, we stand at the precipice of a truly fantastic
>> possibility:
>> >>>>>> the Android Emulator powered by upstream QEMU and upstream Linux
>> >>>>>> kernel.  gfxstream will then be packaged properfully, and app
>> >>>>>> developers can even fix gfxstream bugs on their own if they
>> encounter
>> >>>>>> them.
>> >>>>>>
>> >>>>>> It's been quite the ride, my friends.  Where will gfxstream head
>> next,
>> >>>>>> nobody really knows.  I wouldn't be surprised if it's around for
>> >>>>>> another decade, maintained by a new generation of Android graphics
>> >>>>>> enthusiasts.
>> >>>>>>
>> >>>>>> Technical details:
>> >>>>>>   - Very simple initial display integration: just used Pixman
>> >>>>>>   - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga
>> function
>> >>>>>>     calls
>> >>>>>>
>> >>>>>> Next steps for Android VMs:
>> >>>>>>   - The next step would be improving display integration and UI
>> >>> interfaces
>> >>>>>>     with the goal of the QEMU upstream graphics being in an emulator
>> >>>>>>     release [d].
>> >>>>>>
>> >>>>>> Next steps for Linux VMs for display virtualization:
>> >>>>>>   - For widespread distribution, someone needs to package Sommelier
>> or
>> >>> the
>> >>>>>>     wayland-proxy-virtwl [e] ideally into Debian main. In addition,
>> >>> newer
>> >>>>>>     versions of the Linux kernel come with DRM_VIRTIO_GPU_KMS
>> option,
>> >>>>>>     which allows disabling KMS hypercalls.  If anyone cares enough,
>> >>> it'll
>> >>>>>>     probably be possible to build a custom VM variant that uses this
>> >>>>> display
>> >>>>>>     virtualization strategy.
>> >>>>>>
>> >>>>>> [a]
>> >>>>>
>> https://android-review.googlesource.com/c/platform/development/+/34470
>> >>>>>> [b]
>> >>>>>
>> >>>
>> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>> >>>>>> [c]
>> >>>>>
>> >>>
>> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>> >>>>>> [d] https://developer.android.com/studio/releases/emulator
>> >>>>>> [e] https://github.com/talex5/wayland-proxy-virtwl
>> >>>>>>
>> >>>>>> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>> >>>>>> Tested-by: Alyssa Ross <hi@alyssa.is>
>> >>>>>> Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
>> >>>>>> Reviewed-by: Emmanouil Pitsidianakis <
>> manos.pitsidianakis@linaro.org>
>> >>>>>> ---
>> >>>>>> v1: Incorported various suggestions by Akihiko Odaki and Bernard
>> >>> Berschow
>> >>>>>>     - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>> >>>>>>     - Used error_report(..)
>> >>>>>>     - Used g_autofree to fix leaks on error paths
>> >>>>>>     - Removed unnecessary casts
>> >>>>>>     - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>> >>>>>>
>> >>>>>> v2: Incorported various suggestions by Akihiko Odaki, Marc-André
>> Lureau
>> >>>>> and
>> >>>>>>     Bernard Berschow:
>> >>>>>>     - Parenthesis in CHECK macro
>> >>>>>>     - CHECK_RESULT(result, ..) --> CHECK(!result, ..)
>> >>>>>>     - delay until g->parent_obj.enable = 1
>> >>>>>>     - Additional cast fixes
>> >>>>>>     - initialize directly in virtio_gpu_rutabaga_realize(..)
>> >>>>>>     - add debug callback to hook into QEMU error's APIs
>> >>>>>>
>> >>>>>> v3: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>> >>>>>>     - Autodetect Wayland socket when not explicitly specified
>> >>>>>>     - Fix map_blob error paths
>> >>>>>>     - Add comment why we need both `res` and `resource` in create
>> blob
>> >>>>>>     - Cast and whitespace fixes
>> >>>>>>     - Big endian check comes before virtio_gpu_rutabaga_init().
>> >>>>>>     - VirtIOVGARUTABAGA --> VirtIOVGARutabaga
>> >>>>>>
>> >>>>>> v4: Incorporated feedback from Akihiko Odaki and Alyssa Ross:
>> >>>>>>     - Double checked all casts
>> >>>>>>     - Remove unnecessary parenthesis
>> >>>>>>     - Removed `resource` in create_blob
>> >>>>>>     - Added comment about failure case
>> >>>>>>     - Pass user-provided socket as-is
>> >>>>>>     - Use stack variable rather than heap allocation
>> >>>>>>     - Future-proofed map info API to give access flags as well
>> >>>>>>
>> >>>>>> v5: Incorporated feedback from Akihiko Odaki:
>> >>>>>>     - Check (ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS)
>> >>>>>>     - Simplify num_capsets check
>> >>>>>>     - Call cleanup mapping on error paths
>> >>>>>>     - uint64_t --> void* for rutabaga_map(..)
>> >>>>>>     - Removed unnecessary parenthesis
>> >>>>>>     - Removed unnecessary cast
>> >>>>>>     - #define UNIX_PATH_MAX sizeof((struct sockaddr_un) {}.sun_path)
>> >>>>>>     - Reuse result variable
>> >>>>>>
>> >>>>>> v6: Incorporated feedback from Akihiko Odaki:
>> >>>>>>     - Remove unnecessary #ifndef
>> >>>>>>     - Disable scanout when appropriate
>> >>>>>>     - CHECK capset index within range outside loop
>> >>>>>>     - Add capset_version
>> >>>>>>
>> >>>>>> v7: Incorporated feedback from Akihiko Odaki:
>> >>>>>>     - aio_bh_schedule_oneshot_full --> aio_bh_schedule_oneshot
>> >>>>>>
>> >>>>>> v9: Incorportated feedback from Akihiko Odaki:
>> >>>>>>     - Remove extra error_setg(..) after virtio_gpu_rutabaga_init(..)
>> >>>>>>     - Add error_setg(..) after rutabaga_init(..)
>> >>>>>>
>> >>>>>> v10: Incorportated feedback from Akihiko Odaki:
>> >>>>>>     - error_setg(..) --> error_setg_errno(..) when appropriate
>> >>>>>>     - virtio_gpu_rutabaga_init returns a bool instead of an int
>> >>>>>>
>> >>>>>> v11: Incorportated feedback from Philippe Mathieu-Daudé:
>> >>>>>>     - C-style /* */ comments and avoid // comments.
>> >>>>>>     - GPL-2.0 --> GPL-2.0-or-later
>> >>>>>>
>> >>>>>> hw/display/virtio-gpu-pci-rutabaga.c |   50 ++
>> >>>>>> hw/display/virtio-gpu-rutabaga.c     | 1121
>> ++++++++++++++++++++++++++
>> >>>>>> hw/display/virtio-vga-rutabaga.c     |   53 ++
>> >>>>>> 3 files changed, 1224 insertions(+)
>> >>>>>> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>> >>>>>> create mode 100644 hw/display/virtio-gpu-rutabaga.c
>> >>>>>> create mode 100644 hw/display/virtio-vga-rutabaga.c
>> >>>>>>
>> >>>>>> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
>> >>>>> b/hw/display/virtio-gpu-pci-rutabaga.c
>> >>>>>> new file mode 100644
>> >>>>>> index 0000000000..311eff308a
>> >>>>>> --- /dev/null
>> >>>>>> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
>> >>>>>> @@ -0,0 +1,50 @@
>> >>>>>> +/*
>> >>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>> >>>>>> + */
>> >>>>>> +
>> >>>>>> +#include "qemu/osdep.h"
>> >>>>>> +#include "qapi/error.h"
>> >>>>>> +#include "qemu/module.h"
>> >>>>>> +#include "hw/pci/pci.h"
>> >>>>>> +#include "hw/qdev-properties.h"
>> >>>>>> +#include "hw/virtio/virtio.h"
>> >>>>>> +#include "hw/virtio/virtio-bus.h"
>> >>>>>> +#include "hw/virtio/virtio-gpu-pci.h"
>> >>>>>> +#include "qom/object.h"
>> >>>>>> +
>> >>>>>> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>> >>>>>> +typedef struct VirtIOGPURutabagaPCI VirtIOGPURutabagaPCI;
>> >>>>>> +DECLARE_INSTANCE_CHECKER(VirtIOGPURutabagaPCI,
>> >>> VIRTIO_GPU_RUTABAGA_PCI,
>> >>>>>> +                         TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>> >>>>>> +
>> >>>>>> +struct VirtIOGPURutabagaPCI {
>> >>>>>> +    VirtIOGPUPCIBase parent_obj;
>> >>>>>> +    VirtIOGPURutabaga vdev;
>> >>>>>> +};
>> >>>>>> +
>> >>>>>> +static void virtio_gpu_rutabaga_initfn(Object *obj)
>> >>>>>> +{
>> >>>>>> +    VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>> >>>>>> +
>> >>>>>> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> >>>>>> +                                TYPE_VIRTIO_GPU_RUTABAGA);
>> >>>>>> +    VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info
>> = {
>> >>>>>> +    .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>> >>>>>> +    .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>> >>>>>> +    .instance_size = sizeof(VirtIOGPURutabagaPCI),
>> >>>>>> +    .instance_init = virtio_gpu_rutabaga_initfn,
>> >>>>>> +};
>> >>>>>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>> >>>>>> +module_kconfig(VIRTIO_PCI);
>> >>>>>> +
>> >>>>>> +static void virtio_gpu_rutabaga_pci_register_types(void)
>> >>>>>> +{
>> >>>>>> +    virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +type_init(virtio_gpu_rutabaga_pci_register_types)
>> >>>>>> +
>> >>>>>> +module_dep("hw-display-virtio-gpu-pci");
>> >>>>>> diff --git a/hw/display/virtio-gpu-rutabaga.c
>> >>>>> b/hw/display/virtio-gpu-rutabaga.c
>> >>>>>> new file mode 100644
>> >>>>>> index 0000000000..9018e5a702
>> >>>>>> --- /dev/null
>> >>>>>> +++ b/hw/display/virtio-gpu-rutabaga.c
>> >>>>>> @@ -0,0 +1,1121 @@
>> >>>>>> +/*
>> >>>>>> + * SPDX-License-Identifier: GPL-2.0-or-later
>> >>>>>> + */
>> >>>>>> +
>> >>>>>> +#include "qemu/osdep.h"
>> >>>>>> +#include "qapi/error.h"
>> >>>>>> +#include "qemu/error-report.h"
>> >>>>>> +#include "qemu/iov.h"
>> >>>>>> +#include "trace.h"
>> >>>>>> +#include "hw/virtio/virtio.h"
>> >>>>>> +#include "hw/virtio/virtio-gpu.h"
>> >>>>>> +#include "hw/virtio/virtio-gpu-pixman.h"
>> >>>>>> +#include "hw/virtio/virtio-iommu.h"
>> >>>>>> +
>> >>>>>> +#include <glib/gmem.h>
>> >>>>>> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>> >>>>>> +
>> >>>>>> +#define CHECK(condition, cmd)
>> >>>>>       \
>> >>>>>> +    do {
>> >>>>>        \
>> >>>>>> +        if (!(condition)) {
>> >>>>>       \
>> >>>>>> +            error_report("CHECK failed in %s() %s:" "%d", __func__,
>> >>>>>       \
>> >>>>>> +                         __FILE__, __LINE__);
>> >>>>>       \
>> >>>>>> +            (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>> >>>>>        \
>> >>>>>> +            return;
>> >>>>>       \
>> >>>>>> +       }
>> >>>>>        \
>> >>>>>> +    } while (0)
>> >>>>>> +
>> >>>>>> +/*
>> >>>>>> + * This is the size of the char array in struct sock_addr_un. No
>> >>> Wayland
>> >>>>> socket
>> >>>>>> + * can be created with a path longer than this, including the null
>> >>>>> terminator.
>> >>>>>> + */
>> >>>>>> +#define UNIX_PATH_MAX sizeof((struct sockaddr_un) {} .sun_path)
>> >>>>>> +
>> >>>>>> +struct rutabaga_aio_data {
>> >>>>>> +    struct VirtIOGPURutabaga *vr;
>> >>>>>> +    struct rutabaga_fence fence;
>> >>>>>> +};
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct
>> >>>>> virtio_gpu_scanout *s,
>> >>>>>> +                                  uint32_t resource_id)
>> >>>>>> +{
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>> >>>>>> +    struct iovec transfer_iovec;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, resource_id);
>> >>>>>> +    if (!res) {
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    if (res->width != s->current_cursor->width ||
>> >>>>>> +        res->height != s->current_cursor->height) {
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    transfer.x = 0;
>> >>>>>> +    transfer.y = 0;
>> >>>>>> +    transfer.z = 0;
>> >>>>>> +    transfer.w = res->width;
>> >>>>>> +    transfer.h = res->height;
>> >>>>>> +    transfer.d = 1;
>> >>>>>> +
>> >>>>>> +    transfer_iovec.iov_base = s->current_cursor->data;
>> >>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>> >>>>>> +
>> >>>>>> +    rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> >>>>>> +                                    resource_id, &transfer,
>> >>>>>> +                                    &transfer_iovec);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>> >>>>>> +{
>> >>>>>> +    VirtIOGPU *g = VIRTIO_GPU(b);
>> >>>>>> +    virtio_gpu_process_cmdq(g);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>> >>>>>> +                                struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_resource_create_2d c2d;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(c2d);
>> >>>>>> +    trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>> >>>>>> +                                       c2d.width, c2d.height);
>> >>>>>> +
>> >>>>>> +    rc_3d.target = 2;
>> >>>>>> +    rc_3d.format = c2d.format;
>> >>>>>> +    rc_3d.bind = (1 << 1);
>> >>>>>> +    rc_3d.width = c2d.width;
>> >>>>>> +    rc_3d.height = c2d.height;
>> >>>>>> +    rc_3d.depth = 1;
>> >>>>>> +    rc_3d.array_size = 1;
>> >>>>>> +    rc_3d.last_level = 0;
>> >>>>>> +    rc_3d.nr_samples = 0;
>> >>>>>> +    rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>> >>> c2d.resource_id,
>> >>>>> &rc_3d);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >>>>>> +    res->width = c2d.width;
>> >>>>>> +    res->height = c2d.height;
>> >>>>>> +    res->format = c2d.format;
>> >>>>>> +    res->resource_id = c2d.resource_id;
>> >>>>>> +
>> >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>> >>>>>> +                                struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct rutabaga_create_3d rc_3d = { 0 };
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_resource_create_3d c3d;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(c3d);
>> >>>>>> +
>> >>>>>> +    trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>> >>>>>> +                                       c3d.width, c3d.height,
>> >>> c3d.depth);
>> >>>>>> +
>> >>>>>> +    rc_3d.target = c3d.target;
>> >>>>>> +    rc_3d.format = c3d.format;
>> >>>>>> +    rc_3d.bind = c3d.bind;
>> >>>>>> +    rc_3d.width = c3d.width;
>> >>>>>> +    rc_3d.height = c3d.height;
>> >>>>>> +    rc_3d.depth = c3d.depth;
>> >>>>>> +    rc_3d.array_size = c3d.array_size;
>> >>>>>> +    rc_3d.last_level = c3d.last_level;
>> >>>>>> +    rc_3d.nr_samples = c3d.nr_samples;
>> >>>>>> +    rc_3d.flags = c3d.flags;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_create_3d(vr->rutabaga,
>> >>> c3d.resource_id,
>> >>>>> &rc_3d);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >>>>>> +    res->width = c3d.width;
>> >>>>>> +    res->height = c3d.height;
>> >>>>>> +    res->format = c3d.format;
>> >>>>>> +    res->resource_id = c3d.resource_id;
>> >>>>>> +
>> >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
>> >>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_resource_unref unref;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(unref);
>> >>>>>> +
>> >>>>>> +    trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_unref(vr->rutabaga,
>> unref.resource_id);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    if (res->image) {
>> >>>>>> +        pixman_image_unref(res->image);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
>> >>>>>> +    g_free(res);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_context_create(VirtIOGPU *g,
>> >>>>>> +                            struct virtio_gpu_ctrl_command *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_ctx_create cc;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(cc);
>> >>>>>> +    trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>> >>>>>> +                                    cc.debug_name);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>> >>>>>> +                                     cc.context_init,
>> cc.debug_name,
>> >>>>> cc.nlen);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
>> >>>>>> +                             struct virtio_gpu_ctrl_command *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_ctx_destroy cd;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(cd);
>> >>>>>> +    trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct
>> >>> virtio_gpu_ctrl_command
>> >>>>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result, i;
>> >>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>> >>>>>> +    struct iovec transfer_iovec;
>> >>>>>> +    struct virtio_gpu_resource_flush rf;
>> >>>>>> +    bool found = false;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +    if (vr->headless) {
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(rf);
>> >>>>>> +    trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>> >>>>>> +                                   rf.r.width, rf.r.height, rf.r.x,
>> >>>>> rf.r.y);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, rf.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +
>> >>>>>> +    for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>> >>>>>> +        scanout = &g->parent_obj.scanout[i];
>> >>>>>> +        if (i == res->scanout_bitmask) {
>> >>>>>> +            found = true;
>> >>>>>> +            break;
>> >>>>>> +        }
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    if (!found) {
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    transfer.x = 0;
>> >>>>>> +    transfer.y = 0;
>> >>>>>> +    transfer.z = 0;
>> >>>>>> +    transfer.w = res->width;
>> >>>>>> +    transfer.h = res->height;
>> >>>>>> +    transfer.d = 1;
>> >>>>>> +
>> >>>>>> +    transfer_iovec.iov_base = pixman_image_get_data(res->image);
>> >>>>>> +    transfer_iovec.iov_len = res->width * res->height * 4;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> >>>>>> +                                             rf.resource_id,
>> >>> &transfer,
>> >>>>>> +                                             &transfer_iovec);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +    dpy_gfx_update_full(scanout->con);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct
>> virtio_gpu_ctrl_command
>> >>>>> *cmd)
>> >>>>>> +{
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_scanout *scanout = NULL;
>> >>>>>> +    struct virtio_gpu_set_scanout ss;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +    if (vr->headless) {
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(ss);
>> >>>>>> +    trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>> >>>>>> +                                     ss.r.width, ss.r.height,
>> ss.r.x,
>> >>>>> ss.r.y);
>> >>>>>> +
>> >>>>>> +    CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
>> >>>>>> +    scanout = &g->parent_obj.scanout[ss.scanout_id];
>> >>>>>> +
>> >>>>>> +    if (ss.resource_id == 0) {
>> >>>>>> +        dpy_gfx_replace_surface(scanout->con, NULL);
>> >>>>>> +        dpy_gl_scanout_disable(scanout->con);
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, ss.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +
>> >>>>>> +    if (!res->image) {
>> >>>>>> +        pixman_format_code_t pformat;
>> >>>>>> +        pformat = virtio_gpu_get_pixman_format(res->format);
>> >>>>>> +        CHECK(pformat, cmd);
>> >>>>>> +
>> >>>>>> +        res->image = pixman_image_create_bits(pformat,
>> >>>>>> +                                              res->width,
>> >>>>>> +                                              res->height,
>> >>>>>> +                                              NULL, 0);
>> >>>>>> +        CHECK(res->image, cmd);
>> >>>>>> +        pixman_image_ref(res->image);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    g->parent_obj.enable = 1;
>> >>>>>> +
>> >>>>>> +    /* realloc the surface ptr */
>> >>>>>> +    scanout->ds = qemu_create_displaysurface_pixman(res->image);
>> >>>>>> +    dpy_gfx_replace_surface(scanout->con, NULL);
>> >>>>>> +    dpy_gfx_replace_surface(scanout->con, scanout->ds);
>> >>>>>> +    res->scanout_bitmask = ss.scanout_id;
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
>> >>>>>> +                       struct virtio_gpu_ctrl_command *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_cmd_submit cs;
>> >>>>>> +    struct rutabaga_command rutabaga_cmd = { 0 };
>> >>>>>> +    g_autofree uint8_t *buf = NULL;
>> >>>>>> +    size_t s;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(cs);
>> >>>>>> +    trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>> >>>>>> +
>> >>>>>> +    buf = g_new0(uint8_t, cs.size);
>> >>>>>> +    s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>> >>>>>> +                   sizeof(cs), buf, cs.size);
>> >>>>>> +    CHECK(s == cs.size, cmd);
>> >>>>>> +
>> >>>>>> +    rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
>> >>>>>> +    rutabaga_cmd.cmd = buf;
>> >>>>>> +    rutabaga_cmd.cmd_size = cs.size;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>> >>>>>> +                                 struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>> >>>>>> +    struct virtio_gpu_transfer_to_host_2d t2d;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(t2d);
>> >>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>> >>>>>> +
>> >>>>>> +    transfer.x = t2d.r.x;
>> >>>>>> +    transfer.y = t2d.r.y;
>> >>>>>> +    transfer.z = 0;
>> >>>>>> +    transfer.w = t2d.r.width;
>> >>>>>> +    transfer.h = t2d.r.height;
>> >>>>>> +    transfer.d = 1;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
>> >>>>> t2d.resource_id,
>> >>>>>> +                                              &transfer);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>> >>>>>> +                                 struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>> >>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>> >>>>>> +    trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>> >>>>>> +
>> >>>>>> +    transfer.x = t3d.box.x;
>> >>>>>> +    transfer.y = t3d.box.y;
>> >>>>>> +    transfer.z = t3d.box.z;
>> >>>>>> +    transfer.w = t3d.box.w;
>> >>>>>> +    transfer.h = t3d.box.h;
>> >>>>>> +    transfer.d = t3d.box.d;
>> >>>>>> +    transfer.level = t3d.level;
>> >>>>>> +    transfer.stride = t3d.stride;
>> >>>>>> +    transfer.layer_stride = t3d.layer_stride;
>> >>>>>> +    transfer.offset = t3d.offset;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_transfer_write(vr->rutabaga,
>> >>>>> t3d.hdr.ctx_id,
>> >>>>>> +                                              t3d.resource_id,
>> >>>>> &transfer);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>> >>>>>> +                                   struct virtio_gpu_ctrl_command
>> >>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct rutabaga_transfer transfer = { 0 };
>> >>>>>> +    struct virtio_gpu_transfer_host_3d t3d;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(t3d);
>> >>>>>> +    trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>> >>>>>> +
>> >>>>>> +    transfer.x = t3d.box.x;
>> >>>>>> +    transfer.y = t3d.box.y;
>> >>>>>> +    transfer.z = t3d.box.z;
>> >>>>>> +    transfer.w = t3d.box.w;
>> >>>>>> +    transfer.h = t3d.box.h;
>> >>>>>> +    transfer.d = t3d.box.d;
>> >>>>>> +    transfer.level = t3d.level;
>> >>>>>> +    transfer.stride = t3d.stride;
>> >>>>>> +    transfer.layer_stride = t3d.layer_stride;
>> >>>>>> +    transfer.offset = t3d.offset;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_transfer_read(vr->rutabaga,
>> >>>>> t3d.hdr.ctx_id,
>> >>>>>> +                                             t3d.resource_id,
>> >>> &transfer,
>> >>>>> NULL);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct
>> >>> virtio_gpu_ctrl_command
>> >>>>> *cmd)
>> >>>>>> +{
>> >>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_resource_attach_backing att_rb;
>> >>>>>> +    int ret;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(att_rb);
>> >>>>>> +    trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, att_rb.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +    CHECK(!res->iov, cmd);
>> >>>>>> +
>> >>>>>> +    ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
>> >>>>> sizeof(att_rb),
>> >>>>>> +                                        cmd, NULL, &res->iov,
>> >>>>> &res->iov_cnt);
>> >>>>>> +    CHECK(!ret, cmd);
>> >>>>>> +
>> >>>>>> +    vecs.iovecs = res->iov;
>> >>>>>> +    vecs.num_iovecs = res->iov_cnt;
>> >>>>>> +
>> >>>>>> +    ret = rutabaga_resource_attach_backing(vr->rutabaga,
>> >>>>> att_rb.resource_id,
>> >>>>>> +                                           &vecs);
>> >>>>>> +    if (ret != 0) {
>> >>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    CHECK(!ret, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct
>> >>> virtio_gpu_ctrl_command
>> >>>>> *cmd)
>> >>>>>> +{
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_resource_detach_backing detach_rb;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(detach_rb);
>> >>>>>> +    trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +
>> >>>>>> +    rutabaga_resource_detach_backing(vr->rutabaga,
>> >>>>>> +                                     detach_rb.resource_id);
>> >>>>>> +
>> >>>>>> +    virtio_gpu_cleanup_mapping(g, res);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>> >>>>>> +                                 struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_ctx_resource att_res;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(att_res);
>> >>>>>> +    trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>> >>>>>> +                                        att_res.resource_id);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_context_attach_resource(vr->rutabaga,
>> >>>>> att_res.hdr.ctx_id,
>> >>>>>> +                                              att_res.resource_id);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>> >>>>>> +                                 struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_ctx_resource det_res;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(det_res);
>> >>>>>> +    trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>> >>>>>> +                                        det_res.resource_id);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_context_detach_resource(vr->rutabaga,
>> >>>>> det_res.hdr.ctx_id,
>> >>>>>> +                                              det_res.resource_id);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct
>> >>>>> virtio_gpu_ctrl_command *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_get_capset_info info;
>> >>>>>> +    struct virtio_gpu_resp_capset_info resp;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(info);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_get_capset_info(vr->rutabaga,
>> info.capset_index,
>> >>>>>> +                                      &resp.capset_id,
>> >>>>> &resp.capset_max_version,
>> >>>>>> +                                      &resp.capset_max_size);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>> >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct
>> virtio_gpu_ctrl_command
>> >>>>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    struct virtio_gpu_get_capset gc;
>> >>>>>> +    struct virtio_gpu_resp_capset *resp;
>> >>>>>> +    uint32_t capset_size, capset_version;
>> >>>>>> +    uint32_t current_id, i;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(gc);
>> >>>>>> +    for (i = 0; i < vr->num_capsets; i++) {
>> >>>>>> +        result = rutabaga_get_capset_info(vr->rutabaga, i,
>> >>>>>> +                                          &current_id,
>> >>> &capset_version,
>> >>>>>> +                                          &capset_size);
>> >>>>>> +        CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +        if (current_id == gc.capset_id) {
>> >>>>>> +            break;
>> >>>>>> +        }
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    CHECK(i < vr->num_capsets, cmd);
>> >>>>>> +
>> >>>>>> +    resp = g_malloc0(sizeof(*resp) + capset_size);
>> >>>>>> +    resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>> >>>>>> +    rutabaga_get_capset(vr->rutabaga, gc.capset_id,
>> gc.capset_version,
>> >>>>>> +                        resp->capset_data, capset_size);
>> >>>>>> +
>> >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
>> >>>>> capset_size);
>> >>>>>> +    g_free(resp);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>> >>>>>> +                                  struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int result;
>> >>>>>> +    struct rutabaga_iovecs vecs = { 0 };
>> >>>>>> +    g_autofree struct virtio_gpu_simple_resource *res = NULL;
>> >>>>>> +    struct virtio_gpu_resource_create_blob cblob;
>> >>>>>> +    struct rutabaga_create_blob rc_blob = { 0 };
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
>> >>>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id,
>> >>> cblob.size);
>> >>>>>> +
>> >>>>>> +    CHECK(cblob.resource_id != 0, cmd);
>> >>>>>> +
>> >>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> >>>>>> +
>> >>>>>> +    res->resource_id = cblob.resource_id;
>> >>>>>> +    res->blob_size = cblob.size;
>> >>>>>> +
>> >>>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> >>>>>> +        result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>> >>>>>> +                                               sizeof(cblob), cmd,
>> >>>>> &res->addrs,
>> >>>>>> +                                               &res->iov,
>> >>> &res->iov_cnt);
>> >>>>>> +        CHECK(!result, cmd);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    rc_blob.blob_id = cblob.blob_id;
>> >>>>>> +    rc_blob.blob_mem = cblob.blob_mem;
>> >>>>>> +    rc_blob.blob_flags = cblob.blob_flags;
>> >>>>>> +    rc_blob.size = cblob.size;
>> >>>>>> +
>> >>>>>> +    vecs.iovecs = res->iov;
>> >>>>>> +    vecs.num_iovecs = res->iov_cnt;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_create_blob(vr->rutabaga,
>> >>>>> cblob.hdr.ctx_id,
>> >>>>>> +                                           cblob.resource_id,
>> >>> &rc_blob,
>> >>>>> &vecs,
>> >>>>>> +                                           NULL);
>> >>>>>> +
>> >>>>>> +    if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> >>>>>> +        virtio_gpu_cleanup_mapping(g, res);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> >>>>>> +    res = NULL;
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>> >>>>>> +                               struct virtio_gpu_ctrl_command *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    uint32_t map_info = 0;
>> >>>>>> +    uint32_t slot = 0;
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct rutabaga_mapping mapping = { 0 };
>> >>>>>> +    struct virtio_gpu_resource_map_blob mblob;
>> >>>>>> +    struct virtio_gpu_resp_map_info resp = { 0 };
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>> >>>>>> +
>> >>>>>> +    CHECK(mblob.resource_id != 0, cmd);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_map_info(vr->rutabaga,
>> >>> mblob.resource_id,
>> >>>>>> +                                        &map_info);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    /*
>> >>>>>> +     * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu
>> >>> spec,
>> >>>>> but do
>> >>>>>> +     * exist to potentially allow the hypervisor to restrict write
>> >>>>> access to
>> >>>>>> +     * memory. QEMU does not need to use this functionality at the
>> >>>>> moment.
>> >>>>>> +     */
>> >>>>>> +    resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
>> >>>>>> +
>> >>>>>> +    result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
>> >>>>> &mapping);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +
>> >>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>> >>>>>> +        if (vr->memory_regions[slot].used) {
>> >>>>>> +            continue;
>> >>>>>> +        }
>> >>>>>> +
>> >>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>> >>>>>> +        memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>> >>>>>> +                                   mapping.ptr);
>> >>>>>> +        memory_region_add_subregion(&g->parent_obj.hostmem,
>> >>>>>> +                                    mblob.offset, mr);
>> >>>>>> +        vr->memory_regions[slot].resource_id = mblob.resource_id;
>> >>>>>> +        vr->memory_regions[slot].used = 1;
>> >>>>>> +        break;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    if (slot >= MAX_SLOTS) {
>> >>>>>> +        result = rutabaga_resource_unmap(vr->rutabaga,
>> >>>>> mblob.resource_id);
>> >>>>>> +        CHECK(!result, cmd);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>> >>>>>> +
>> >>>>>> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>> >>>>>> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>> >>>>>> +                                 struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    int32_t result;
>> >>>>>> +    uint32_t slot = 0;
>> >>>>>> +    struct virtio_gpu_simple_resource *res;
>> >>>>>> +    struct virtio_gpu_resource_unmap_blob ublob;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(ublob);
>> >>>>>> +
>> >>>>>> +    CHECK(ublob.resource_id != 0, cmd);
>> >>>>>> +
>> >>>>>> +    res = virtio_gpu_find_resource(g, ublob.resource_id);
>> >>>>>> +    CHECK(res, cmd);
>> >>>>>> +
>> >>>>>> +    for (slot = 0; slot < MAX_SLOTS; slot++) {
>> >>>>>> +        if (vr->memory_regions[slot].resource_id !=
>> >>> ublob.resource_id) {
>> >>>>>> +            continue;
>> >>>>>> +        }
>> >>>>>> +
>> >>>>>> +        MemoryRegion *mr = &(vr->memory_regions[slot].mr);
>> >>>>>> +        memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>> >>>>>> +
>> >>>>>> +        vr->memory_regions[slot].resource_id = 0;
>> >>>>>> +        vr->memory_regions[slot].used = 0;
>> >>>>>> +        break;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    CHECK(slot < MAX_SLOTS, cmd);
>> >>>>>> +    result = rutabaga_resource_unmap(vr->rutabaga,
>> res->resource_id);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>> >>>>>> +                                struct virtio_gpu_ctrl_command
>> *cmd)
>> >>>>>> +{
>> >>>>>> +    struct rutabaga_fence fence = { 0 };
>> >>>>>> +    int32_t result;
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>> >>>>>> +
>> >>>>>> +    switch (cmd->cmd_hdr.type) {
>> >>>>>> +    case VIRTIO_GPU_CMD_CTX_CREATE:
>> >>>>>> +        rutabaga_cmd_context_create(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_CTX_DESTROY:
>> >>>>>> +        rutabaga_cmd_context_destroy(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>> >>>>>> +        rutabaga_cmd_create_resource_2d(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>> >>>>>> +        rutabaga_cmd_create_resource_3d(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_SUBMIT_3D:
>> >>>>>> +        rutabaga_cmd_submit_3d(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>> >>>>>> +        rutabaga_cmd_transfer_to_host_2d(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>> >>>>>> +        rutabaga_cmd_transfer_to_host_3d(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>> >>>>>> +        rutabaga_cmd_transfer_from_host_3d(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>> >>>>>> +        rutabaga_cmd_attach_backing(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>> >>>>>> +        rutabaga_cmd_detach_backing(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_SET_SCANOUT:
>> >>>>>> +        rutabaga_cmd_set_scanout(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>> >>>>>> +        rutabaga_cmd_resource_flush(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>> >>>>>> +        rutabaga_cmd_resource_unref(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>> >>>>>> +        rutabaga_cmd_ctx_attach_resource(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>> >>>>>> +        rutabaga_cmd_ctx_detach_resource(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>> >>>>>> +        rutabaga_cmd_get_capset_info(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_GET_CAPSET:
>> >>>>>> +        rutabaga_cmd_get_capset(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>> >>>>>> +        virtio_gpu_get_display_info(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_GET_EDID:
>> >>>>>> +        virtio_gpu_get_edid(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>> >>>>>> +        rutabaga_cmd_resource_create_blob(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>> >>>>>> +        rutabaga_cmd_resource_map_blob(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>> >>>>>> +        rutabaga_cmd_resource_unmap_blob(g, cmd);
>> >>>>>> +        break;
>> >>>>>> +    default:
>> >>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>> >>>>>> +        break;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    if (cmd->finished) {
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +    if (cmd->error) {
>> >>>>>> +        error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>> >>>>>> +                     cmd->cmd_hdr.type, cmd->error);
>> >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +    if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>> >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>> >>>>> VIRTIO_GPU_RESP_OK_NODATA);
>> >>>>>> +        return;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    fence.flags = cmd->cmd_hdr.flags;
>> >>>>>> +    fence.ctx_id = cmd->cmd_hdr.ctx_id;
>> >>>>>> +    fence.fence_id = cmd->cmd_hdr.fence_id;
>> >>>>>> +    fence.ring_idx = cmd->cmd_hdr.ring_idx;
>> >>>>>> +
>> >>>>>> +    trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id,
>> >>>>> cmd->cmd_hdr.type);
>> >>>>>> +
>> >>>>>> +    result = rutabaga_create_fence(vr->rutabaga, &fence);
>> >>>>>> +    CHECK(!result, cmd);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +virtio_gpu_rutabaga_aio_cb(void *opaque)
>> >>>>>> +{
>> >>>>>> +    struct rutabaga_aio_data *data = opaque;
>> >>>>>> +    VirtIOGPU *g = VIRTIO_GPU(data->vr);
>> >>>>>> +    struct rutabaga_fence fence_data = data->fence;
>> >>>>>> +    struct virtio_gpu_ctrl_command *cmd, *tmp;
>> >>>>>> +
>> >>>>>> +    uint32_t signaled_ctx_specific = fence_data.flags &
>> >>>>>> +                                     RUTABAGA_FLAG_INFO_RING_IDX;
>> >>>>>> +
>> >>>>>> +    QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>> >>>>>> +        /*
>> >>>>>> +         * Due to context specific timelines.
>> >>>>>> +         */
>> >>>>>> +        uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>> >>>>>> +                                       RUTABAGA_FLAG_INFO_RING_IDX;
>> >>>>>> +
>> >>>>>> +        if (signaled_ctx_specific != target_ctx_specific) {
>> >>>>>> +            continue;
>> >>>>>> +        }
>> >>>>>> +
>> >>>>>> +        if (signaled_ctx_specific &&
>> >>>>>> +           (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>> >>>>>> +            continue;
>> >>>>>> +        }
>> >>>>>> +
>> >>>>>> +        if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>> >>>>>> +            continue;
>> >>>>>> +        }
>> >>>>>> +
>> >>>>>> +        trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>> >>>>>> +        virtio_gpu_ctrl_response_nodata(g, cmd,
>> >>>>> VIRTIO_GPU_RESP_OK_NODATA);
>> >>>>>> +        QTAILQ_REMOVE(&g->fenceq, cmd, next);
>> >>>>>> +        g_free(cmd);
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    g_free(data);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>> >>>>>> +                             const struct rutabaga_fence *fence) {
>> >>>>>> +    struct rutabaga_aio_data *data;
>> >>>>>> +    VirtIOGPU *g = (VirtIOGPU *)user_data;
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +
>> >>>>>> +    /*
>> >>>>>> +     * gfxstream and both cross-domain (and even newer versions
>> >>>>> virglrenderer:
>> >>>>>> +     * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
>> >>>>> completion on
>> >>>>>> +     * threads ("callback threads") that are different from the
>> thread
>> >>>>> that
>> >>>>>> +     * processes the command queue ("main thread").
>> >>>>>> +     *
>> >>>>>> +     * crosvm and other virtio-gpu 1.1 implementations enable
>> callback
>> >>>>> threads
>> >>>>>> +     * via locking.  However, on QEMU a deadlock is observed if
>> >>>>>> +     * virtio_gpu_ctrl_response_nodata(..) [used in the fence
>> >>> callback]
>> >>>>> is used
>> >>>>>> +     * from a thread that is not the main thread.
>> >>>>>> +     *
>> >>>>>> +     * The reason is QEMU's internal locking is designed to work
>> with
>> >>>>> QEMU
>> >>>>>> +     * threads (see rcu_register_thread()) and not generic
>> C/C++/Rust
>> >>>>> threads.
>> >>>>>> +     * For now, we can workaround this by scheduling the return of
>> the
>> >>>>>> +     * fence descriptors on the main thread.
>> >>>>>> +     */
>> >>>>>> +
>> >>>>>> +    data = g_new0(struct rutabaga_aio_data, 1);
>> >>>>>> +    data->vr = vr;
>> >>>>>> +    data->fence = *fence;
>> >>>>>> +    aio_bh_schedule_oneshot(qemu_get_aio_context(),
>> >>>>>> +                            virtio_gpu_rutabaga_aio_cb,
>> >>>>>> +                            data);
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static void
>> >>>>>> +virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
>> >>>>>> +                             const struct rutabaga_debug *debug) {
>> >>>>>> +
>> >>>>>> +    if (debug->debug_type == RUTABAGA_DEBUG_ERROR) {
>> >>>>>> +        error_report("%s", debug->message);
>> >>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_WARN) {
>> >>>>>> +        warn_report("%s", debug->message);
>> >>>>>> +    } else if (debug->debug_type == RUTABAGA_DEBUG_INFO) {
>> >>>>>> +        info_report("%s", debug->message);
>> >>>>>> +    }
>> >>>>>> +}
>> >>>>>> +
>> >>>>>> +static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
>> >>>>>> +{
>> >>>>>> +    int result;
>> >>>>>> +    uint64_t capset_mask;
>> >>>>>> +    struct rutabaga_builder builder = { 0 };
>> >>>>>> +    char wayland_socket_path[UNIX_PATH_MAX];
>> >>>>>> +    struct rutabaga_channel channel = { 0 };
>> >>>>>> +    struct rutabaga_channels channels = { 0 };
>> >>>>>> +
>> >>>>>> +    VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> >>>>>> +    vr->rutabaga = NULL;
>> >>>>>> +
>> >>>>>> +    if (!vr->capset_names) {
>> >>>>>> +        error_setg(errp, "a capset name from the virtio-gpu spec is
>> >>>>> required");
>> >>>>>> +        return false;
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    builder.wsi = RUTABAGA_WSI_SURFACELESS;
>> >>>>>> +    /*
>> >>>>>> +     * Currently, if WSI is specified, the only valid strings are
>> >>>>> "surfaceless"
>> >>>>>> +     * or "headless".  Surfaceless doesn't create a native window
>> >>>>> surface, but
>> >>>>>> +     * does copy from the render target to the Pixman buffer if a
>> >>>>> virtio-gpu
>> >>>>>> +     * 2D hypercall is issued.  Surfacless is the default.
>> >>>>>> +     *
>> >>>>>> +     * Headless is like surfaceless, but doesn't copy to the Pixman
>> >>>>> buffer. The
>> >>>>>> +     * use case is automated testing environments where there is no
>> >>> need
>> >>>>> to view
>> >>>>>> +     * results.
>> >>>>>> +     *
>> >>>>>> +     * In the future, more performant virtio-gpu 2D UI integration
>> may
>> >>>>> be added.
>> >>>>>> +     */
>> >>>>>> +    if (vr->wsi) {
>> >>>>>> +        if (g_str_equal(vr->wsi, "surfaceless")) {
>> >>>>>> +            vr->headless = false;
>> >>>>>> +        } else if (g_str_equal(vr->wsi, "headless")) {
>> >>>>>> +            vr->headless = true;
>> >>>>>> +        } else {
>> >>>>>> +            error_setg(errp, "invalid wsi option selected");
>> >>>>>> +            return false;
>> >>>>>> +        }
>> >>>>>> +    }
>> >>>>>> +
>> >>>>>> +    result = rutabaga_calculate_capset_mask(vr->capset_names,
>> >>>>> &capset_mask);
>> >>>>>
>> >>>>> First, sorry for responding after such a long time. I've been busy
>> with
>> >>>>> work and I'm doing QEMU in my free time.
>> >>>>>
>> >>>>> In iteration 1 I've raised the topic on capset_names [1] and I
>> haven't
>> >>>>> seen it answered properly. Perhaps I need to rephrase a bit so here
>> we
>> >>> go:
>> >>>>> capset_names seems to be colon-separated list of bit options managed
>> by
>> >>>>> rutabaga. This introduces yet another way of options handling. There
>> >>> have
>> >>>>> been talks about harmonizing options handling in QEMU since
>> apparently
>> >>> it
>> >>>>> is considered too complex [2,3].
>> >>>>
>> >>>>
>> >>>>> Why not pass the "capset" as a bitfield like capset_mask and have
>> QEMU
>> >>>>> create "capset" from QOM properties?
>> >>>>
>> >>>> IIUC these flags could come from virtio_gpu.h which is already
>> present in
>> >>>>> the QEMU tree. This would not inly shortcut the dependency on
>> rutabaga
>> >>> here
>> >>>>> but would also be more idiomatic QEMU (since it makes the options
>> more
>> >>>>> introspectable by internal machinery).
>> >>>>
>> >>>>
>> >>>>> Of course the bitfield approach would require modifications in QEMU
>> >>>>> whenever rutabaga gains new features. However, I figure that in the
>> long
>> >>>>> term rutabaga will be quite feature complete such that the benefits
>> of
>> >>>>> idiomatic QEMU handling will outweigh the decoupling of the projects.
>> >>>>>
>> >>>>> What do you think?
>> >>>>>
>> >>>>
>> >>>> I think what you're suggesting is something like -device
>> >>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>> >>>> gfxstream_vulkan + cross_domain]?
>> >>>
>> >>> I was thinking more along the lines of
>> >>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>> >>> gfxstream_vulkan and cross_domain are boolean QOM properties. This
>> would
>> >>> make for a human-readable format which follows QEMU style.
>> >>>
>> >>>>
>> >>>> We actually did consider something like that when adding the
>> >>>> --context-types flag [with crosvm], but there was a desire for a
>> >>>> human-readable format rather than numbers [even if they are in the
>> >>>> virtio-gpu spec].
>> >>>>
>> >>>> Additionally, there are quite a few context types that people are
>> playing
>> >>>> around with [gfxstream-gles, gfxstream-composer] that are launchable
>> and
>> >>>> aren't in the spec just yet.
>> >>>
>> >>> Right, QEMU had to be modified for this kind of experimentation. I
>> >>> considered this in my last paragraph and figured that in the long run
>> QEMU
>> >>> *may* prefer more idiomatic option handling since it tries hard to not
>> >>> break its command line interface. I'm just pointing this out -- the
>> >>> decision is ultimately up to the community.
>> >>>
>> >>> Why not have dedicated QEMU development branches for experimentation?
>> >>> Wouldn't upstreaming new features into QEMU be a good motivation to
>> get the
>> >>> missing pieces into the spec, once they are mature?
>> >>
>> >>
>> >>>>
>> >>>> Also, a key feature we want to explicitly **not** turn on all
>> available
>> >>>> context-types and let the user decide.
>> >>>
>> >>> How would you prevent that with the current colon-separated approach?
>> >>> Splitting capset_mask in multiple parameters is just a different
>> >>> syntactical representation of the same thing.
>> >>>
>> >>>> That'll allow guest Mesa in
>> >>>> particular to do its magic in its loader.  So one may run Zink + ANV
>> with
>> >>>> ioctl forwarding, or Iris + ioctl forwarding and compare performance
>> with
>> >>>> the same guest image.
>> >>>>
>> >>>> And another thing is one needs some knowledge of the host system to
>> choose
>> >>>> the right context type.  You wouldn't do Zink + ANV ioctl forwarding
>> on
>> >>>> MacOS.  So I think the task of choosing the right context type will
>> fall
>> >>> to
>> >>>> projects that depend on QEMU (such as Android Emulator) which have
>> some
>> >>>> knowledge of the host environment.
>> >>>>
>> >>>> We actually have a graphics detector somewhere that calls VK/OpenGL
>> before
>> >>>> launching the VM and sets the right options.  Plan is to port into
>> >>>> gfxstream, maybe we could use that.
>> >>>
>> >>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
>> >>> conflicting combinations, no?
>> >>>
>> >>>>
>> >>>> So given the desire for human readable formats, being portable across
>> VMMs
>> >>>> (crosvm, qemu, rust-vmm??) and experimentation, the string -> capset
>> mask
>> >>>> conversion was put in the rutabaga API.  So I wouldn't change it for
>> those
>> >>>> reasons.
>> >>>
>> >>> What do you mean by being portable across VMMs?
>> >>
>> >>
>> >> Having the API inside rutabaga is (mildly) useful when multiple VMMs
>> have
>> >> the need to translate from a human-readable format to flags digestible
>> by
>> >> rutabaga.
>> >>
>> >>
>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
>> >>
>> >>
>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
>> >>
>> >>
>> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
>> >>
>> >> For these crosvm/qemu launchers, I imagine capset names will be plumbed
>> all
>> >> the way through eventually (launch_cvd
>> >> --gpu_context=gfxstream-vulkan:cross-domain if you've played around with
>> >> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
>> >> around with Termina VMs).
>> >>
>> >> I think rust-vmm could also use the same API ("--capset_names") too.
>> >>
>> >>
>> >>> Sure, QEMU had to be taught new flags before being able to use new
>> >>> rutabaga features. I agree that this comes with a certain
>> inconvenience.
>> >>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
>> >>> options parsing when there are efforts for harmonization.
>> >>>
>> >>> Did my comments shed new light into the discussion?
>> >>
>> >>
>> >> Yes, they do.  I agree with you that both crosvm/qemu have too many
>> flags,
>> >> and having a stable command line interface is important.  We are aiming
>> for
>> >> stability with the `--capset_names={colon string}` command line, and at
>> >> least for crosvm looking to deprecate older options [since we've never
>> had
>> >> an official release of crosvm yet].
>> >>
>> >> I do think:
>> >>
>> >> 1) "capset_names=gfxstream-vulkan:cross-domain"
>> >> 2) "cross-domain=on,gfxstream-vulkan=on"
>> >>
>> >> are similar enough.  I would choose (1) for since I think not
>> duplicating
>> >> the [name] -> flag logic and having a similar interface across VMMs +
>> VMM
>> >> launchers is ever-so slightly useful.
>> >
>> > I think we've now reached a good understanding of the issue. It's now up
>> to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as
>> the experts of the topic.
>>
>> As a virtio-gpu user, I'm slightly inclined to (2) since it would be
>> easier to implement the same option for virtio-gpu-virgl when a need
>> arises.
>>
>
>The rutabaga/virgl implementations will likely be done via DEFINE_PROP_BIT,
>no?  For virgl, it'll set the virgl flags, and for rutabaga, it'll set the
>capset mask.  So it would be different.
>
>That said, the change isn't too bad to make.  Here's the key part:
>
>+++ b/hw/display/virtio-gpu-rutabaga.c
>@@ -1084,6 +1084,14 @@ static Property virtio_gpu_rutabaga_properties[] = {
>     DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
>                        wayland_socket_path),
>     DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
>+    DEFINE_PROP_BIT("gfxstream-vulkan-experimental", VirtIOGPURutabaga,
>+                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_VULKAN, false),
>+    DEFINE_PROP_BIT("cross-domain-experimental", VirtIOGPURutabaga,
>+                    capset_mask, RUTABAGA_CAPSET_CROSS_DOMAIN, false),
>+    DEFINE_PROP_BIT("gfxstream-gles-experimental", VirtIOGPURutabaga,
>+                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_GLES, false),
>+    DEFINE_PROP_BIT("gfxstream-composer-experimental", VirtIOGPURutabaga,
>+                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_COMPOSER,
>false),
>     DEFINE_PROP_END_OF_LIST(),
> };

Nice!

I think the current approach for experimental and deprecated properties is to not pre- or postfix them but issue a warning at runtime when used, see e.g. here: https://lore.kernel.org/qemu-devel/20230710121543.197250-18-thuth@redhat.com/ That way, the command line interface won't change once the properties become stable. So if you omit the "-experimental" postfixes Android Studio wouldn't need to adapt.

Best regards,
Bernhard

>
>One thing though: I borrowed a page of the Mesa-3d playbook (since they
>land non-working/experimental drivers to speed development) and named all
>gfxstream/rutabaga_gfx context types as "experimental".  That'll allow us
>to experiment in-tree.
>
>If you closely follow:
>
>https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg03319.html
>
>you may notice packaging/distributing rutabaga/gfxstream is low-priority,
>since I do have the Android emulator use case in mind and I'm not sure
>anybody else will find production targets for it in QEMU.  I think this is
>somewhat closely related to crosvm/qemu has "too many flags" situation.
>Many times a flag/config is landed, and production target changes, and the
>history is lost.  By prefixing everything as "experimental", we explicitly
>make it clear that QEMU makes no guarantees at this time regarding
>rutabaga.  That'll allow hobbyists who build QEMU from sources anyways (the
>main non-Android users of rutabaga/gfxstream) to play around with it
>upstream, and also allow downstream to follow upstream but not make any
>guarantees until everything is ready.
>
>I'm curious what everyone thinks of the plan.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-29 15:06                 ` Bernhard Beschow
@ 2023-09-30 10:28                   ` Thomas Huth
  2023-10-09 11:18                     ` Markus Armbruster
  0 siblings, 1 reply; 34+ messages in thread
From: Thomas Huth @ 2023-09-30 10:28 UTC (permalink / raw)
  To: Bernhard Beschow, Gurchetan Singh, Akihiko Odaki
  Cc: qemu-devel, marcandre.lureau, ray.huang, alex.bennee, hi,
	ernunes, manos.pitsidianakis, philmd, Markus Armbruster

On 29/09/2023 17.06, Bernhard Beschow wrote:
> 
> 
> Am 21. September 2023 23:44:42 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>> On Tue, Sep 19, 2023 at 3:07 PM Akihiko Odaki <akihiko.odaki@gmail.com>
>> wrote:
>>
>>> On 2023/09/20 3:36, Bernhard Beschow wrote:
>>>>
>>>>
>>>> Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <
>>> gurchetansingh@chromium.org>:
>>>>> On Thu, Sep 14, 2023 at 12:23 AM Bernhard Beschow <shentey@gmail.com>
>>> wrote:
...
>>>>>>>> First, sorry for responding after such a long time. I've been busy
>>> with
>>>>>>>> work and I'm doing QEMU in my free time.
>>>>>>>>
>>>>>>>> In iteration 1 I've raised the topic on capset_names [1] and I
>>> haven't
>>>>>>>> seen it answered properly. Perhaps I need to rephrase a bit so here
>>> we
>>>>>> go:
>>>>>>>> capset_names seems to be colon-separated list of bit options managed
>>> by
>>>>>>>> rutabaga. This introduces yet another way of options handling. There
>>>>>> have
>>>>>>>> been talks about harmonizing options handling in QEMU since
>>> apparently
>>>>>> it
>>>>>>>> is considered too complex [2,3].
>>>>>>>
>>>>>>>
>>>>>>>> Why not pass the "capset" as a bitfield like capset_mask and have
>>> QEMU
>>>>>>>> create "capset" from QOM properties?
>>>>>>>
>>>>>>> IIUC these flags could come from virtio_gpu.h which is already
>>> present in
>>>>>>>> the QEMU tree. This would not inly shortcut the dependency on
>>> rutabaga
>>>>>> here
>>>>>>>> but would also be more idiomatic QEMU (since it makes the options
>>> more
>>>>>>>> introspectable by internal machinery).
>>>>>>>
>>>>>>>
>>>>>>>> Of course the bitfield approach would require modifications in QEMU
>>>>>>>> whenever rutabaga gains new features. However, I figure that in the
>>> long
>>>>>>>> term rutabaga will be quite feature complete such that the benefits
>>> of
>>>>>>>> idiomatic QEMU handling will outweigh the decoupling of the projects.
>>>>>>>>
>>>>>>>> What do you think?
>>>>>>>>
>>>>>>>
>>>>>>> I think what you're suggesting is something like -device
>>>>>>> virtio-gpu-rutabaga,capset_mask=0x10100 [40, which would be
>>>>>>> gfxstream_vulkan + cross_domain]?
>>>>>>
>>>>>> I was thinking more along the lines of
>>>>>> `virtio-gpu-rutabaga,gfxstream_vulkan=on,cross_domain=on` where
>>>>>> gfxstream_vulkan and cross_domain are boolean QOM properties. This
>>> would
>>>>>> make for a human-readable format which follows QEMU style.
>>>>>>
>>>>>>>
>>>>>>> We actually did consider something like that when adding the
>>>>>>> --context-types flag [with crosvm], but there was a desire for a
>>>>>>> human-readable format rather than numbers [even if they are in the
>>>>>>> virtio-gpu spec].
>>>>>>>
>>>>>>> Additionally, there are quite a few context types that people are
>>> playing
>>>>>>> around with [gfxstream-gles, gfxstream-composer] that are launchable
>>> and
>>>>>>> aren't in the spec just yet.
>>>>>>
>>>>>> Right, QEMU had to be modified for this kind of experimentation. I
>>>>>> considered this in my last paragraph and figured that in the long run
>>> QEMU
>>>>>> *may* prefer more idiomatic option handling since it tries hard to not
>>>>>> break its command line interface. I'm just pointing this out -- the
>>>>>> decision is ultimately up to the community.
>>>>>>
>>>>>> Why not have dedicated QEMU development branches for experimentation?
>>>>>> Wouldn't upstreaming new features into QEMU be a good motivation to
>>> get the
>>>>>> missing pieces into the spec, once they are mature?
>>>>>
>>>>>
>>>>>>>
>>>>>>> Also, a key feature we want to explicitly **not** turn on all
>>> available
>>>>>>> context-types and let the user decide.
>>>>>>
>>>>>> How would you prevent that with the current colon-separated approach?
>>>>>> Splitting capset_mask in multiple parameters is just a different
>>>>>> syntactical representation of the same thing.
>>>>>>
>>>>>>> That'll allow guest Mesa in
>>>>>>> particular to do its magic in its loader.  So one may run Zink + ANV
>>> with
>>>>>>> ioctl forwarding, or Iris + ioctl forwarding and compare performance
>>> with
>>>>>>> the same guest image.
>>>>>>>
>>>>>>> And another thing is one needs some knowledge of the host system to
>>> choose
>>>>>>> the right context type.  You wouldn't do Zink + ANV ioctl forwarding
>>> on
>>>>>>> MacOS.  So I think the task of choosing the right context type will
>>> fall
>>>>>> to
>>>>>>> projects that depend on QEMU (such as Android Emulator) which have
>>> some
>>>>>>> knowledge of the host environment.
>>>>>>>
>>>>>>> We actually have a graphics detector somewhere that calls VK/OpenGL
>>> before
>>>>>>> launching the VM and sets the right options.  Plan is to port into
>>>>>>> gfxstream, maybe we could use that.
>>>>>>
>>>>>> You could bail out in QEMU if rutabaga_calculate_capset_mask() detects
>>>>>> conflicting combinations, no?
>>>>>>
>>>>>>>
>>>>>>> So given the desire for human readable formats, being portable across
>>> VMMs
>>>>>>> (crosvm, qemu, rust-vmm??) and experimentation, the string -> capset
>>> mask
>>>>>>> conversion was put in the rutabaga API.  So I wouldn't change it for
>>> those
>>>>>>> reasons.
>>>>>>
>>>>>> What do you mean by being portable across VMMs?
>>>>>
>>>>>
>>>>> Having the API inside rutabaga is (mildly) useful when multiple VMMs
>>> have
>>>>> the need to translate from a human-readable format to flags digestible
>>> by
>>>>> rutabaga.
>>>>>
>>>>>
>>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/qemu_manager.cpp#452
>>>>>
>>>>>
>>> https://android.googlesource.com/device/google/cuttlefish/+/refs/heads/main/host/libs/vm_manager/crosvm_manager.cpp#353
>>>>>
>>>>>
>>> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/main/vm_tools/concierge/vm_builder.cc#505
>>>>>
>>>>> For these crosvm/qemu launchers, I imagine capset names will be plumbed
>>> all
>>>>> the way through eventually (launch_cvd
>>>>> --gpu_context=gfxstream-vulkan:cross-domain if you've played around with
>>>>> Cuttlefish, or vmc start --gpu_contexts=gfxstream-vulkan if you played
>>>>> around with Termina VMs).
>>>>>
>>>>> I think rust-vmm could also use the same API ("--capset_names") too.
>>>>>
>>>>>
>>>>>> Sure, QEMU had to be taught new flags before being able to use new
>>>>>> rutabaga features. I agree that this comes with a certain
>>> inconvenience.
>>>>>> But it may also be inconvenient for QEMU to deal with additional ad-hoc
>>>>>> options parsing when there are efforts for harmonization.
>>>>>>
>>>>>> Did my comments shed new light into the discussion?
>>>>>
>>>>>
>>>>> Yes, they do.  I agree with you that both crosvm/qemu have too many
>>> flags,
>>>>> and having a stable command line interface is important.  We are aiming
>>> for
>>>>> stability with the `--capset_names={colon string}` command line, and at
>>>>> least for crosvm looking to deprecate older options [since we've never
>>> had
>>>>> an official release of crosvm yet].
>>>>>
>>>>> I do think:
>>>>>
>>>>> 1) "capset_names=gfxstream-vulkan:cross-domain"
>>>>> 2) "cross-domain=on,gfxstream-vulkan=on"
>>>>>
>>>>> are similar enough.  I would choose (1) for since I think not
>>> duplicating
>>>>> the [name] -> flag logic and having a similar interface across VMMs +
>>> VMM
>>>>> launchers is ever-so slightly useful.
>>>>
>>>> I think we've now reached a good understanding of the issue. It's now up
>>> to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as
>>> the experts of the topic.
>>>
>>> As a virtio-gpu user, I'm slightly inclined to (2) since it would be
>>> easier to implement the same option for virtio-gpu-virgl when a need
>>> arises.
>>>
>>
>> The rutabaga/virgl implementations will likely be done via DEFINE_PROP_BIT,
>> no?  For virgl, it'll set the virgl flags, and for rutabaga, it'll set the
>> capset mask.  So it would be different.
>>
>> That said, the change isn't too bad to make.  Here's the key part:
>>
>> +++ b/hw/display/virtio-gpu-rutabaga.c
>> @@ -1084,6 +1084,14 @@ static Property virtio_gpu_rutabaga_properties[] = {
>>      DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
>>                         wayland_socket_path),
>>      DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
>> +    DEFINE_PROP_BIT("gfxstream-vulkan-experimental", VirtIOGPURutabaga,
>> +                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_VULKAN, false),
>> +    DEFINE_PROP_BIT("cross-domain-experimental", VirtIOGPURutabaga,
>> +                    capset_mask, RUTABAGA_CAPSET_CROSS_DOMAIN, false),
>> +    DEFINE_PROP_BIT("gfxstream-gles-experimental", VirtIOGPURutabaga,
>> +                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_GLES, false),
>> +    DEFINE_PROP_BIT("gfxstream-composer-experimental", VirtIOGPURutabaga,
>> +                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_COMPOSER,
>> false),
>>      DEFINE_PROP_END_OF_LIST(),
>> };
> 
> Nice!
> 
> I think the current approach for experimental and deprecated properties is to not pre- or postfix them but issue a warning at runtime when used, see e.g. here: https://lore.kernel.org/qemu-devel/20230710121543.197250-18-thuth@redhat.com/ That way, the command line interface won't change once the properties become stable. So if you omit the "-experimental" postfixes Android Studio wouldn't need to adapt.

That's for deprecated options only. For experimental new properties, please 
use the "x-" prefix instead of the "-experimental" suffix.

  Thanks,
   Thomas



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream
  2023-09-30 10:28                   ` Thomas Huth
@ 2023-10-09 11:18                     ` Markus Armbruster
  0 siblings, 0 replies; 34+ messages in thread
From: Markus Armbruster @ 2023-10-09 11:18 UTC (permalink / raw)
  To: Thomas Huth
  Cc: Bernhard Beschow, Gurchetan Singh, Akihiko Odaki, qemu-devel,
	marcandre.lureau, ray.huang, alex.bennee, hi, ernunes,
	manos.pitsidianakis, philmd

Thomas Huth <thuth@redhat.com> writes:

> On 29/09/2023 17.06, Bernhard Beschow wrote:
>> Am 21. September 2023 23:44:42 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:
>>> On Tue, Sep 19, 2023 at 3:07 PM Akihiko Odaki <akihiko.odaki@gmail.com>
>>> wrote:
>>>
>>>> On 2023/09/20 3:36, Bernhard Beschow wrote:
>>>>>
>>>>>
>>>>> Am 15. September 2023 02:38:02 UTC schrieb Gurchetan Singh <gurchetansingh@chromium.org>:

[...]

>>>>>> I do think:
>>>>>>
>>>>>> 1) "capset_names=gfxstream-vulkan:cross-domain"
>>>>>> 2) "cross-domain=on,gfxstream-vulkan=on"
>>>>>>
>>>>>> are similar enough.  I would choose (1) for since I think not duplicating
>>>>>> the [name] -> flag logic and having a similar interface across VMMs + VMM
>>>>>> launchers is ever-so slightly useful.
>>>>>
>>>>> I think we've now reached a good understanding of the issue. It's now up
>>>>> to the QEMU community to make a choice. So I'm cc'ing Markus and Thomas as
>>>>> the experts of the topic.
>>>>
>>>> As a virtio-gpu user, I'm slightly inclined to (2) since it would be
>>>> easier to implement the same option for virtio-gpu-virgl when a need
>>>> arises.
>>>
>>> The rutabaga/virgl implementations will likely be done via DEFINE_PROP_BIT,
>>> no?  For virgl, it'll set the virgl flags, and for rutabaga, it'll set the
>>> capset mask.  So it would be different.
>>>
>>> That said, the change isn't too bad to make.  Here's the key part:
>>>
>>> +++ b/hw/display/virtio-gpu-rutabaga.c
>>> @@ -1084,6 +1084,14 @@ static Property virtio_gpu_rutabaga_properties[] = {
>>>      DEFINE_PROP_STRING("wayland_socket_path", VirtIOGPURutabaga,
>>>                         wayland_socket_path),
>>>      DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
>>> +    DEFINE_PROP_BIT("gfxstream-vulkan-experimental", VirtIOGPURutabaga,
>>> +                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_VULKAN, false),
>>> +    DEFINE_PROP_BIT("cross-domain-experimental", VirtIOGPURutabaga,
>>> +                    capset_mask, RUTABAGA_CAPSET_CROSS_DOMAIN, false),
>>> +    DEFINE_PROP_BIT("gfxstream-gles-experimental", VirtIOGPURutabaga,
>>> +                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_GLES, false),
>>> +    DEFINE_PROP_BIT("gfxstream-composer-experimental", VirtIOGPURutabaga,
>>> +                    capset_mask, RUTABAGA_CAPSET_GFXSTREAM_COMPOSER,
>>> false),
>>>      DEFINE_PROP_END_OF_LIST(),
>>> };
>>
>> Nice!
>>
>> I think the current approach for experimental and deprecated properties is to not pre- or postfix them but issue a warning at runtime when used, see e.g. here: https://lore.kernel.org/qemu-devel/20230710121543.197250-18-thuth@redhat.com/ That way, the command line interface won't change once the properties become stable. So if you omit the "-experimental" postfixes Android Studio wouldn't need to adapt.
>
> That's for deprecated options only. For experimental new properties, please use the "x-" prefix instead of the "-experimental" suffix.

Thomas is right.

In the QAPI schema, we have the means to avoid renaming on transition
from unstable to stable.  However, device properties remain outside
QAPI.

In case you're curious, docs/devel/qapi-code-gen.rst:

    Special features
    ~~~~~~~~~~~~~~~~

    Feature "deprecated" marks a command, event, enum value, or struct
    member as deprecated.  It is not supported elsewhere so far.
    Interfaces so marked may be withdrawn in future releases in accordance
    with QEMU's deprecation policy.

    Feature "unstable" marks a command, event, enum value, or struct
    member as unstable.  It is not supported elsewhere so far.  Interfaces
    so marked may be withdrawn or changed incompatibly in future releases.

You can even configure policy for handling use of deprecated or unstable
stuff:

    -compat [deprecated-input=accept|reject|crash][,deprecated-output=accept|hide]
                    Policy for handling deprecated management interfaces
    -compat [unstable-input=accept|reject|crash][,unstable-output=accept|hide]
                    Policy for handling unstable management interfaces

For more, see qapi/compat.json and commit 6dd75472d5^..dbb675c19a.



^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2023-10-09 11:19 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-23  1:25 [PATCH v11 0/9] rutabaga_gfx + gfxstream Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 1/9] virtio: Add shared memory capability Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 2/9] virtio-gpu: CONTEXT_INIT feature Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 3/9] virtio-gpu: hostmem Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 4/9] virtio-gpu: blob prep Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 5/9] gfxstream + rutabaga prep: added need defintions, fields, and options Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
2023-08-23  9:59   ` Akihiko Odaki
2023-09-13 11:57   ` Bernhard Beschow
2023-09-14  4:38     ` Gurchetan Singh
2023-09-14  7:23       ` Bernhard Beschow
2023-09-15  2:38         ` Gurchetan Singh
2023-09-19 18:36           ` Bernhard Beschow
2023-09-19 22:07             ` Akihiko Odaki
2023-09-21 23:44               ` Gurchetan Singh
2023-09-22  2:41                 ` Akihiko Odaki
2023-09-29 15:06                 ` Bernhard Beschow
2023-09-30 10:28                   ` Thomas Huth
2023-10-09 11:18                     ` Markus Armbruster
2023-09-27 11:34             ` Thomas Huth
2023-09-27 12:33               ` Markus Armbruster
2023-08-23  1:25 ` [PATCH v11 7/9] gfxstream + rutabaga: meson support Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 8/9] gfxstream + rutabaga: enable rutabaga Gurchetan Singh
2023-08-23  1:25 ` [PATCH v11 9/9] docs/system: add basic virtio-gpu documentation Gurchetan Singh
2023-08-23 11:07 ` [PATCH v11 0/9] rutabaga_gfx + gfxstream Alyssa Ross
2023-08-24 23:56   ` Gurchetan Singh
2023-08-25  7:11     ` Alyssa Ross
2023-08-25 19:05       ` Gurchetan Singh
2023-08-25 19:29         ` Alyssa Ross
2023-08-25 19:37           ` Alyssa Ross
2023-08-29  0:43             ` Gurchetan Singh
2023-09-12  8:53               ` Alyssa Ross
2023-09-13  1:14                 ` Gurchetan Singh
2023-09-13 10:10                   ` Alyssa Ross

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).