qemu-devel.nongnu.org archive mirror
* [QEMU PATCH v4 00/13] Support blob memory and venus on qemu
@ 2023-08-31  9:32 Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 01/13] virtio: Add shared memory capability Huang Rui
                   ` (12 more replies)
  0 siblings, 13 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

Hi all,

Antonio Caggiano enabled Venus with QEMU on the KVM platform last
September[1]. This series inherits his original work and adds support
for context init, hostmem, resource UUID, and blob resources for Venus.
In March of this year, we sent out the V1 version[2] for review, but
that series mixed both Xen and virtio-gpu patches. We would now like to
split it into two parts: one continues Antonio's work to upstream
virtio-gpu support for blob memory and Venus, and the other upstreams
the Xen specific patches. This series focuses on virtio-gpu, so we mark
it as V4 here to continue Antonio's patches[1]. We will send the Xen
specific patches separately, because they are hypervisor specific.
Besides QEMU, this work also involves virglrenderer[3][4] and
mesa[5][6]. The virglrenderer and mesa parts have all been accepted
upstream. In this QEMU version, we try to address the concerns around
improper cleanup during blob resource unmap and unref. We would
appreciate any comments.

[1] https://lore.kernel.org/qemu-devel/20220926142422.22325-1-antonio.caggiano@collabora.com/
[2] V1: https://lore.kernel.org/qemu-devel/20230312092244.451465-1-ray.huang@amd.com
[3] https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/1068
[4] https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/1180
[5] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22108
[6] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23680

Please note that the first four patches (1-4) are included in this
series because the series depends on them, not because we want them to
be reviewed here, since they are already in the process of review
through the "rutabaga_gfx + gfxstream" series.
- https://lore.kernel.org/qemu-devel/20230829003629.410-1-gurchetansingh@chromium.org/

Changes from V1 to V2 (virtio-gpu V4):

- Remove unused #include "hw/virtio/virtio-iommu.h"

- Add a local function, called virgl_resource_destroy(), that is used
  to release a vgpu resource on error paths and in resource_unref.

- Remove virtio_gpu_virgl_resource_unmap from
  virtio_gpu_cleanup_mapping(),
  since this function won't be called on blob resources and also because
  blob resources are unmapped via virgl_cmd_resource_unmap_blob().

- In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
  and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
  has been fully initialized.

- A memory region has a different life-cycle from virtio gpu resources,
  i.e. it cannot be released synchronously along with the vgpu resource.
  So the field "region" was changed to a pointer and is allocated
  dynamically when the blob is mapped.
  Also, since the pointer can be used to indicate whether the blob
  is mapped, the explicit field "mapped" was removed.

- In virgl_cmd_resource_map_blob(), add a check on the value of
  res->region, to prevent it from being called twice on the same
  resource.

- Add a patch to enable automatic deallocation of memory regions to resolve
  use-after-free memory corruption with a reference.

References

Demo with Venus:
- https://static.sched.com/hosted_files/xen2023/3f/xen_summit_2023_virtgpu_demo.mp4
QEMU repository:
- https://gitlab.freedesktop.org/rui/qemu-xen/-/commits/upstream-for-virtio-gpu

Thanks,
Ray

Antonio Caggiano (6):
  virtio-gpu: CONTEXT_INIT feature
  virtio-gpu: blob prep
  virtio-gpu: Handle resource blob commands
  virtio-gpu: Resource UUID
  virtio-gpu: Support Venus capset
  virtio-gpu: Initialize Venus

Dmitry Osipenko (1):
  virtio-gpu: Don't require udmabuf when blobs and virgl are enabled

Dr. David Alan Gilbert (1):
  virtio: Add shared memory capability

Gerd Hoffmann (1):
  virtio-gpu: hostmem

Huang Rui (3):
  virtio-gpu: Support context init feature with virglrenderer
  virtio-gpu: Configure context init for virglrenderer
  virtio-gpu: Enable virglrenderer render server flag for venus

Xenia Ragiadakou (1):
  softmmu/memory: enable automatic deallocation of memory regions

 hw/display/trace-events                     |   1 +
 hw/display/virtio-gpu-base.c                |   5 +
 hw/display/virtio-gpu-pci.c                 |  14 +
 hw/display/virtio-gpu-virgl.c               | 270 +++++++++++++++++++-
 hw/display/virtio-gpu.c                     |  61 ++++-
 hw/display/virtio-vga.c                     |  33 ++-
 hw/virtio/virtio-pci.c                      |  18 ++
 include/hw/virtio/virtio-gpu-bswap.h        |  15 ++
 include/hw/virtio/virtio-gpu.h              |  22 ++
 include/hw/virtio/virtio-pci.h              |   4 +
 include/standard-headers/linux/virtio_gpu.h |   2 +
 meson.build                                 |   8 +
 softmmu/memory.c                            |  19 +-
 13 files changed, 446 insertions(+), 26 deletions(-)

-- 
2.34.1




* [QEMU PATCH v4 01/13] virtio: Add shared memory capability
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 02/13] virtio-gpu: CONTEXT_INIT feature Huang Rui
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Define a new capability type 'VIRTIO_PCI_CAP_SHARED_MEMORY_CFG' to allow
defining shared memory regions with sizes and offsets of 2^32 and more.
Multiple instances of the capability are allowed and distinguished
by a device-specific 'id'.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Huang Rui <ray.huang@amd.com>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

This patch is already under review as part of the "rutabaga_gfx + gfxstream"
series (already in v13) and has been included here because of dependency.

 hw/virtio/virtio-pci.c         | 18 ++++++++++++++++++
 include/hw/virtio/virtio-pci.h |  4 ++++
 2 files changed, 22 insertions(+)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index edbc0daa18..da8c9ea12d 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1435,6 +1435,24 @@ static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
     return offset;
 }
 
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
+                           uint8_t bar, uint64_t offset, uint64_t length,
+                           uint8_t id)
+{
+    struct virtio_pci_cap64 cap = {
+        .cap.cap_len = sizeof cap,
+        .cap.cfg_type = VIRTIO_PCI_CAP_SHARED_MEMORY_CFG,
+    };
+
+    cap.cap.bar = bar;
+    cap.cap.length = cpu_to_le32(length);
+    cap.length_hi = cpu_to_le32(length >> 32);
+    cap.cap.offset = cpu_to_le32(offset);
+    cap.offset_hi = cpu_to_le32(offset >> 32);
+    cap.cap.id = id;
+    return virtio_pci_add_mem_cap(proxy, &cap.cap);
+}
+
 static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
                                        unsigned size)
 {
diff --git a/include/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
index ab2051b64b..5a3f182f99 100644
--- a/include/hw/virtio/virtio-pci.h
+++ b/include/hw/virtio/virtio-pci.h
@@ -264,4 +264,8 @@ unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
 void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
                                               int n, bool assign,
                                               bool with_irqfd);
+
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy, uint8_t bar, uint64_t offset,
+                           uint64_t length, uint8_t id);
+
 #endif
-- 
2.34.1




* [QEMU PATCH v4 02/13] virtio-gpu: CONTEXT_INIT feature
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 01/13] virtio: Add shared memory capability Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 03/13] virtio-gpu: hostmem Huang Rui
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Marc-André Lureau, Huang Rui

From: Antonio Caggiano <antonio.caggiano@collabora.com>

The feature can be enabled when a backend wants it.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

This patch is already under review as part of the "rutabaga_gfx + gfxstream"
series (already in v13) and has been included here because of dependency.

 hw/display/virtio-gpu-base.c   | 3 +++
 include/hw/virtio/virtio-gpu.h | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index ca1fb7b16f..4f2b0ba1f3 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -232,6 +232,9 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
     if (virtio_gpu_blob_enabled(g->conf)) {
         features |= (1 << VIRTIO_GPU_F_RESOURCE_BLOB);
     }
+    if (virtio_gpu_context_init_enabled(g->conf)) {
+        features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
+    }
 
     return features;
 }
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 390c4642b8..8377c365ef 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -93,6 +93,7 @@ enum virtio_gpu_base_conf_flags {
     VIRTIO_GPU_FLAG_EDID_ENABLED,
     VIRTIO_GPU_FLAG_DMABUF_ENABLED,
     VIRTIO_GPU_FLAG_BLOB_ENABLED,
+    VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
 };
 
 #define virtio_gpu_virgl_enabled(_cfg) \
@@ -105,6 +106,8 @@ enum virtio_gpu_base_conf_flags {
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_DMABUF_ENABLED))
 #define virtio_gpu_blob_enabled(_cfg) \
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
+#define virtio_gpu_context_init_enabled(_cfg) \
+    (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
 
 struct virtio_gpu_base_conf {
     uint32_t max_outputs;
-- 
2.34.1




* [QEMU PATCH v4 03/13] virtio-gpu: hostmem
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 01/13] virtio: Add shared memory capability Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 02/13] virtio-gpu: CONTEXT_INIT feature Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 04/13] virtio-gpu: blob prep Huang Rui
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Gerd Hoffmann <kraxel@redhat.com>

Use VIRTIO_GPU_SHM_ID_HOST_VISIBLE as id for virtio-gpu.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

This patch is already under review as part of the "rutabaga_gfx + gfxstream"
series (already in v13) and has been included here because of dependency.

 hw/display/virtio-gpu-pci.c    | 14 ++++++++++++++
 hw/display/virtio-gpu.c        |  1 +
 hw/display/virtio-vga.c        | 33 ++++++++++++++++++++++++---------
 include/hw/virtio/virtio-gpu.h |  5 +++++
 4 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index 93f214ff58..da6a99f038 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -33,6 +33,20 @@ static void virtio_gpu_pci_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
     DeviceState *vdev = DEVICE(g);
     int i;
 
+    if (virtio_gpu_hostmem_enabled(g->conf)) {
+        vpci_dev->msix_bar_idx = 1;
+        vpci_dev->modern_mem_bar_idx = 2;
+        memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+                           g->conf.hostmem);
+        pci_register_bar(&vpci_dev->pci_dev, 4,
+                         PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_PREFETCH |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                         &g->hostmem);
+        virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+                               VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+    }
+
     virtio_pci_force_virtio_1(vpci_dev);
     if (!qdev_realize(vdev, BUS(&vpci_dev->bus), errp)) {
         return;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index bbd5c6561a..48ef0d9fad 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1509,6 +1509,7 @@ static Property virtio_gpu_properties[] = {
                      256 * MiB),
     DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
                     VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
+    DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index e6fb0aa876..c8552ff760 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -115,17 +115,32 @@ static void virtio_vga_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
     pci_register_bar(&vpci_dev->pci_dev, 0,
                      PCI_BASE_ADDRESS_MEM_PREFETCH, &vga->vram);
 
-    /*
-     * Configure virtio bar and regions
-     *
-     * We use bar #2 for the mmio regions, to be compatible with stdvga.
-     * virtio regions are moved to the end of bar #2, to make room for
-     * the stdvga mmio registers at the start of bar #2.
-     */
-    vpci_dev->modern_mem_bar_idx = 2;
-    vpci_dev->msix_bar_idx = 4;
     vpci_dev->modern_io_bar_idx = 5;
 
+    if (!virtio_gpu_hostmem_enabled(g->conf)) {
+        /*
+         * Configure virtio bar and regions
+         *
+         * We use bar #2 for the mmio regions, to be compatible with stdvga.
+         * virtio regions are moved to the end of bar #2, to make room for
+         * the stdvga mmio registers at the start of bar #2.
+         */
+        vpci_dev->modern_mem_bar_idx = 2;
+        vpci_dev->msix_bar_idx = 4;
+    } else {
+        vpci_dev->msix_bar_idx = 1;
+        vpci_dev->modern_mem_bar_idx = 2;
+        memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+                           g->conf.hostmem);
+        pci_register_bar(&vpci_dev->pci_dev, 4,
+                         PCI_BASE_ADDRESS_SPACE_MEMORY |
+                         PCI_BASE_ADDRESS_MEM_PREFETCH |
+                         PCI_BASE_ADDRESS_MEM_TYPE_64,
+                         &g->hostmem);
+        virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+                               VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+    }
+
     if (!(vpci_dev->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ)) {
         /*
          * with page-per-vq=off there is no padding space we can use
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 8377c365ef..de4f624e94 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -108,12 +108,15 @@ enum virtio_gpu_base_conf_flags {
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
 #define virtio_gpu_context_init_enabled(_cfg) \
     (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
+#define virtio_gpu_hostmem_enabled(_cfg) \
+    (_cfg.hostmem > 0)
 
 struct virtio_gpu_base_conf {
     uint32_t max_outputs;
     uint32_t flags;
     uint32_t xres;
     uint32_t yres;
+    uint64_t hostmem;
 };
 
 struct virtio_gpu_ctrl_command {
@@ -137,6 +140,8 @@ struct VirtIOGPUBase {
     int renderer_blocked;
     int enable;
 
+    MemoryRegion hostmem;
+
     struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
 
     int enabled_output_bitmask;
-- 
2.34.1




* [QEMU PATCH v4 04/13] virtio-gpu: blob prep
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (2 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 03/13] virtio-gpu: hostmem Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 05/13] virtio-gpu: Support context init feature with virglrenderer Huang Rui
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Emmanouil Pitsidianakis, Huang Rui

From: Antonio Caggiano <antonio.caggiano@collabora.com>

This adds preparatory functions needed to:

     - decode blob commands
     - track iovecs

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Tested-by: Alyssa Ross <hi@alyssa.is>
Tested-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Tested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Emmanouil Pitsidianakis <manos.pitsidianakis@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

This patch is already under review as part of the "rutabaga_gfx + gfxstream"
series (already in v13) and has been included here because of dependency.

 hw/display/virtio-gpu.c              | 10 +++-------
 include/hw/virtio/virtio-gpu-bswap.h | 15 +++++++++++++++
 include/hw/virtio/virtio-gpu.h       |  5 +++++
 3 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 48ef0d9fad..3e658f1fef 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -33,15 +33,11 @@
 
 #define VIRTIO_GPU_VM_VERSION 1
 
-static struct virtio_gpu_simple_resource*
-virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
 static struct virtio_gpu_simple_resource *
 virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
                                bool require_backing,
                                const char *caller, uint32_t *error);
 
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
-                                       struct virtio_gpu_simple_resource *res);
 static void virtio_gpu_reset_bh(void *opaque);
 
 void virtio_gpu_update_cursor_data(VirtIOGPU *g,
@@ -116,7 +112,7 @@ static void update_cursor(VirtIOGPU *g, struct virtio_gpu_update_cursor *cursor)
                   cursor->resource_id ? 1 : 0);
 }
 
-static struct virtio_gpu_simple_resource *
+struct virtio_gpu_simple_resource *
 virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id)
 {
     struct virtio_gpu_simple_resource *res;
@@ -904,8 +900,8 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
     g_free(iov);
 }
 
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
-                                       struct virtio_gpu_simple_resource *res)
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+                                struct virtio_gpu_simple_resource *res)
 {
     virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
     res->iov = NULL;
diff --git a/include/hw/virtio/virtio-gpu-bswap.h b/include/hw/virtio/virtio-gpu-bswap.h
index 637a0585d0..dd1975e2d4 100644
--- a/include/hw/virtio/virtio-gpu-bswap.h
+++ b/include/hw/virtio/virtio-gpu-bswap.h
@@ -70,6 +70,21 @@ virtio_gpu_create_blob_bswap(struct virtio_gpu_resource_create_blob *cblob)
     le64_to_cpus(&cblob->size);
 }
 
+static inline void
+virtio_gpu_map_blob_bswap(struct virtio_gpu_resource_map_blob *mblob)
+{
+    virtio_gpu_ctrl_hdr_bswap(&mblob->hdr);
+    le32_to_cpus(&mblob->resource_id);
+    le64_to_cpus(&mblob->offset);
+}
+
+static inline void
+virtio_gpu_unmap_blob_bswap(struct virtio_gpu_resource_unmap_blob *ublob)
+{
+    virtio_gpu_ctrl_hdr_bswap(&ublob->hdr);
+    le32_to_cpus(&ublob->resource_id);
+}
+
 static inline void
 virtio_gpu_scanout_blob_bswap(struct virtio_gpu_set_scanout_blob *ssb)
 {
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index de4f624e94..55973e112f 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -257,6 +257,9 @@ void virtio_gpu_base_fill_display_info(VirtIOGPUBase *g,
 void virtio_gpu_base_generate_edid(VirtIOGPUBase *g, int scanout,
                                    struct virtio_gpu_resp_edid *edid);
 /* virtio-gpu.c */
+struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
+
 void virtio_gpu_ctrl_response(VirtIOGPU *g,
                               struct virtio_gpu_ctrl_command *cmd,
                               struct virtio_gpu_ctrl_hdr *resp,
@@ -275,6 +278,8 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
                                   uint32_t *niov);
 void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
                                     struct iovec *iov, uint32_t count);
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+                                struct virtio_gpu_simple_resource *res);
 void virtio_gpu_process_cmdq(VirtIOGPU *g);
 void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
 void virtio_gpu_reset(VirtIODevice *vdev);
-- 
2.34.1




* [QEMU PATCH v4 05/13] virtio-gpu: Support context init feature with virglrenderer
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (3 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 04/13] virtio-gpu: blob prep Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:41   ` Philippe Mathieu-Daudé
  2023-08-31  9:32 ` [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer Huang Rui
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

The patch "virtio-gpu: CONTEXT_INIT feature" added the context_init
feature flag.
We would like to enable the feature with virglrenderer, so create the
virgl renderer context with flags, using the context_init value when it
is valid.

Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

New patch, result of splitting
[RFC QEMU PATCH 04/18] virtio-gpu: CONTEXT_INIT feature

 hw/display/virtio-gpu-virgl.c | 13 +++++++++++--
 hw/display/virtio-gpu.c       |  2 ++
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 8bb7a2c21f..312953ec16 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -106,8 +106,17 @@ static void virgl_cmd_context_create(VirtIOGPU *g,
     trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
                                     cc.debug_name);
 
-    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen,
-                                  cc.debug_name);
+    if (cc.context_init) {
+#ifdef HAVE_VIRGL_CONTEXT_INIT
+        virgl_renderer_context_create_with_flags(cc.hdr.ctx_id,
+                                                 cc.context_init,
+                                                 cc.nlen,
+                                                 cc.debug_name);
+        return;
+#endif
+    }
+
+    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen, cc.debug_name);
 }
 
 static void virgl_cmd_context_destroy(VirtIOGPU *g,
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 3e658f1fef..a66cbd9930 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1506,6 +1506,8 @@ static Property virtio_gpu_properties[] = {
     DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
                     VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
     DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
+    DEFINE_PROP_BIT("context_init", VirtIOGPU, parent_obj.conf.flags,
+                    VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED, false),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-- 
2.34.1




* [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (4 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 05/13] virtio-gpu: Support context init feature with virglrenderer Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:39   ` Philippe Mathieu-Daudé
  2023-08-31  9:32 ` [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions Huang Rui
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

Configure context init feature flag for virglrenderer.

Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

New patch, result of splitting
[RFC QEMU PATCH 04/18] virtio-gpu: CONTEXT_INIT feature

 meson.build | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/meson.build b/meson.build
index 98e68ef0b1..ff20d3c249 100644
--- a/meson.build
+++ b/meson.build
@@ -1068,6 +1068,10 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
                                        prefix: '#include <virglrenderer.h>',
                                        dependencies: virgl))
   endif
+  config_host_data.set('HAVE_VIRGL_CONTEXT_INIT',
+                       cc.has_function('virgl_renderer_context_create_with_flags',
+                                       prefix: '#include <virglrenderer.h>',
+                                       dependencies: virgl))
 endif
 blkio = not_found
 if not get_option('blkio').auto() or have_block
-- 
2.34.1




* [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (5 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31 10:10   ` Akihiko Odaki
  2023-08-31  9:32 ` [QEMU PATCH v4 08/13] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Huang Rui
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>

When a memory region has a different life-cycle from that of its
parent, it can be released automatically, once it has been unparented
and all of its references have gone away, via the object's free
callback.

However, currently, references to the memory region are held by its
owner without first incrementing the memory region object's reference
count. As a result, the automatic deallocation of the object does not
take those references into account and results in use-after-free
memory corruption.

This patch increases the reference count of the memory region object on
each memory_region_ref() and decreases it on each memory_region_unref().

Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

New patch

 softmmu/memory.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/softmmu/memory.c b/softmmu/memory.c
index 7d9494ce70..0fdd5eebf9 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1797,6 +1797,15 @@ Object *memory_region_owner(MemoryRegion *mr)
 
 void memory_region_ref(MemoryRegion *mr)
 {
+    if (!mr) {
+        return;
+    }
+
+    /* Obtain a reference to prevent the memory region object
+     * from being released under our feet.
+     */
+    object_ref(OBJECT(mr));
+
     /* MMIO callbacks most likely will access data that belongs
      * to the owner, hence the need to ref/unref the owner whenever
      * the memory region is in use.
@@ -1807,16 +1816,22 @@ void memory_region_ref(MemoryRegion *mr)
      * Memory regions without an owner are supposed to never go away;
      * we do not ref/unref them because it slows down DMA sensibly.
      */
-    if (mr && mr->owner) {
+    if (mr->owner) {
         object_ref(mr->owner);
     }
 }
 
 void memory_region_unref(MemoryRegion *mr)
 {
-    if (mr && mr->owner) {
+    if (!mr) {
+        return;
+    }
+
+    if (mr->owner) {
         object_unref(mr->owner);
     }
+
+    object_unref(OBJECT(mr));
 }
 
 uint64_t memory_region_size(MemoryRegion *mr)
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [QEMU PATCH v4 08/13] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (6 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands Huang Rui
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Use of udmabuf is mandatory when virgl is disabled and the blobs feature is
enabled in the QEMU machine configuration. If both virgl and blobs are
enabled, the udmabuf requirement becomes optional. Since udmabuf isn't
widely supported by popular Linux distros today, let's relax the udmabuf
requirement for blobs=on,virgl=on. Now, full-featured virtio-gpu
acceleration is available to QEMU users without udmabuf being present in
the system.
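The relaxed check can be summarized as a small predicate. This is a
hypothetical helper for illustration only; in the QEMU source the check is
inlined in virtio_gpu_device_realize().

```c
#include <stdbool.h>

/* udmabuf is only required when blobs are enabled without virgl;
 * returns false exactly when realize should fail with
 * "cannot enable blob resources without udmabuf". */
static bool blob_config_is_valid(bool blob_enabled, bool virgl_enabled,
                                 bool have_udmabuf)
{
    if (blob_enabled && !virgl_enabled && !have_udmabuf) {
        return false;
    }
    return true;
}
```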

Reviewed-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

New patch

 hw/display/virtio-gpu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index a66cbd9930..5b7a7eab4f 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1361,7 +1361,8 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
     VirtIOGPU *g = VIRTIO_GPU(qdev);
 
     if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
-        if (!virtio_gpu_have_udmabuf()) {
+        if (!virtio_gpu_virgl_enabled(g->parent_obj.conf) &&
+            !virtio_gpu_have_udmabuf()) {
             error_setg(errp, "cannot enable blob resources without udmabuf");
             return;
         }
-- 
2.34.1




* [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (7 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 08/13] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31 10:24   ` Akihiko Odaki
  2023-08-31  9:32 ` [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID Huang Rui
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Support blob resource creation, mapping and unmapping by calling the new
stable virglrenderer 0.10 interfaces. These commands are only enabled when
the interfaces are available and the blob config is set, e.g.
-device virtio-vga-gl,blob=true
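The mapping life-cycle this patch implements can be sketched as follows.
This is a simplified model, not the QEMU code: the region pointer doubles
as the "mapped" indicator (as the changelog notes), rejecting double map
and double unmap, and malloc stands in for the real host mapping.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct BlobResource {
    void *region;   /* non-NULL iff the blob is currently mapped */
} BlobResource;

/* Map the blob; fails if it is already mapped. */
static int blob_map(BlobResource *res)
{
    if (res->region) {
        return -1;           /* already mapped: reject a second map */
    }
    res->region = malloc(1); /* stand-in for the real mapping + region */
    return res->region ? 0 : -1;
}

/* Unmap the blob; fails if it is not mapped. */
static int blob_unmap(BlobResource *res)
{
    if (!res->region) {
        return -1;           /* not mapped: reject a second unmap */
    }
    free(res->region);
    res->region = NULL;
    return 0;
}
```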

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

v1->v2:
    - Remove unused #include "hw/virtio/virtio-iommu.h"

    - Add a local function, called virgl_resource_destroy(), that is used
      to release a vgpu resource on error paths and in resource_unref.

    - Remove virtio_gpu_virgl_resource_unmap from virtio_gpu_cleanup_mapping(),
      since this function won't be called on blob resources and also because
      blob resources are unmapped via virgl_cmd_resource_unmap_blob().

    - In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
      and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
      has been fully initialized.

    - A memory region has a different life-cycle from virtio gpu resources,
      i.e. it cannot be released synchronously along with the vgpu resource.
      So, the field "region" was changed to a pointer that will be released
      automatically once the memory region is unparented and all of its
      references have been released.
      Also, since the pointer can be used to indicate whether the blob
      is mapped, the explicit field "mapped" was removed.

    - In virgl_cmd_resource_map_blob(), check the value of res->region
      to prevent it from being called twice on the same resource.

    - Remove direct references to parent_obj.

    - Separate declarations from code.

 hw/display/virtio-gpu-virgl.c  | 213 +++++++++++++++++++++++++++++++++
 hw/display/virtio-gpu.c        |   4 +-
 include/hw/virtio/virtio-gpu.h |   5 +
 meson.build                    |   4 +
 4 files changed, 225 insertions(+), 1 deletion(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 312953ec16..17b634d4ee 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -17,6 +17,7 @@
 #include "trace.h"
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-bswap.h"
 
 #include "ui/egl-helpers.h"
 
@@ -78,9 +79,24 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
     virgl_renderer_resource_create(&args, NULL, 0);
 }
 
+static void virgl_resource_destroy(VirtIOGPU *g,
+                                   struct virtio_gpu_simple_resource *res)
+{
+    if (!res)
+        return;
+
+    QTAILQ_REMOVE(&g->reslist, res, next);
+
+    virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
+    g_free(res->addrs);
+
+    g_free(res);
+}
+
 static void virgl_cmd_resource_unref(VirtIOGPU *g,
                                      struct virtio_gpu_ctrl_command *cmd)
 {
+    struct virtio_gpu_simple_resource *res;
     struct virtio_gpu_resource_unref unref;
     struct iovec *res_iovs = NULL;
     int num_iovs = 0;
@@ -88,13 +104,22 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
     VIRTIO_GPU_FILL_CMD(unref);
     trace_virtio_gpu_cmd_res_unref(unref.resource_id);
 
+    res = virtio_gpu_find_resource(g, unref.resource_id);
+
     virgl_renderer_resource_detach_iov(unref.resource_id,
                                        &res_iovs,
                                        &num_iovs);
     if (res_iovs != NULL && num_iovs != 0) {
         virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
+        if (res) {
+            res->iov = NULL;
+            res->iov_cnt = 0;
+        }
     }
+
     virgl_renderer_resource_unref(unref.resource_id);
+
+    virgl_resource_destroy(g, res);
 }
 
 static void virgl_cmd_context_create(VirtIOGPU *g,
@@ -426,6 +451,183 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
     g_free(resp);
 }
 
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+
+static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
+                                           struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_create_blob cblob;
+    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
+    int ret;
+
+    VIRTIO_GPU_FILL_CMD(cblob);
+    virtio_gpu_create_blob_bswap(&cblob);
+    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+    if (cblob.resource_id == 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, cblob.resource_id);
+    if (res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+                      __func__, cblob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    if (!res) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+        return;
+    }
+
+    res->resource_id = cblob.resource_id;
+    res->blob_size = cblob.size;
+
+    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
+                                            cmd, &res->addrs, &res->iov,
+                                            &res->iov_cnt);
+        if (!ret) {
+            g_free(res);
+            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+            return;
+        }
+    }
+
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+
+    virgl_args.res_handle = cblob.resource_id;
+    virgl_args.ctx_id = cblob.hdr.ctx_id;
+    virgl_args.blob_mem = cblob.blob_mem;
+    virgl_args.blob_id = cblob.blob_id;
+    virgl_args.blob_flags = cblob.blob_flags;
+    virgl_args.size = cblob.size;
+    virgl_args.iovecs = res->iov;
+    virgl_args.num_iovs = res->iov_cnt;
+
+    ret = virgl_renderer_resource_create_blob(&virgl_args);
+    if (ret) {
+        virgl_resource_destroy(g, res);
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
+                      __func__, strerror(-ret));
+        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+    }
+}
+
+static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
+                                        struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_map_blob mblob;
+    int ret;
+    void *data;
+    uint64_t size;
+    struct virtio_gpu_resp_map_info resp;
+    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
+
+    VIRTIO_GPU_FILL_CMD(mblob);
+    virtio_gpu_map_blob_bswap(&mblob);
+
+    if (mblob.resource_id == 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, mblob.resource_id);
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+                      __func__, mblob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+    if (res->region) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
+                      __func__, mblob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
+    if (ret) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
+                      __func__, strerror(-ret));
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res->region = g_new0(MemoryRegion, 1);
+    if (!res->region) {
+        virgl_renderer_resource_unmap(res->resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+        return;
+    }
+    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
+    OBJECT(res->region)->free = g_free;
+    memory_region_add_subregion(&b->hostmem, mblob.offset, res->region);
+    memory_region_set_enabled(res->region, true);
+
+    memset(&resp, 0, sizeof(resp));
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+    virgl_renderer_resource_get_map_info(mblob.resource_id, &resp.map_info);
+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static int virtio_gpu_virgl_resource_unmap(VirtIOGPU *g,
+                                           struct virtio_gpu_simple_resource
+                                           *res)
+{
+    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
+
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already unmapped\n",
+                      __func__);
+        return VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+    }
+
+    memory_region_set_enabled(res->region, false);
+    memory_region_del_subregion(&b->hostmem, res->region);
+    object_unparent(OBJECT(res->region));
+    res->region = NULL;
+
+    return virgl_renderer_resource_unmap(res->resource_id);
+}
+
+static void virgl_cmd_resource_unmap_blob(VirtIOGPU *g,
+                                          struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_unmap_blob ublob;
+    VIRTIO_GPU_FILL_CMD(ublob);
+    virtio_gpu_unmap_blob_bswap(&ublob);
+
+    if (ublob.resource_id == 0) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+                      __func__);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    res = virtio_gpu_find_resource(g, ublob.resource_id);
+    if (!res) {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+                      __func__, ublob.resource_id);
+        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+        return;
+    }
+
+    virtio_gpu_virgl_resource_unmap(g, res);
+}
+
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+
 void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
                                       struct virtio_gpu_ctrl_command *cmd)
 {
@@ -492,6 +694,17 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
     case VIRTIO_GPU_CMD_GET_EDID:
         virtio_gpu_get_edid(g, cmd);
         break;
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+    case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+        virgl_cmd_resource_create_blob(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+        virgl_cmd_resource_map_blob(g, cmd);
+        break;
+    case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+        virgl_cmd_resource_unmap_blob(g, cmd);
+        break;
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
     default:
         cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
         break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 5b7a7eab4f..cc4c1f81bb 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1367,10 +1367,12 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
             return;
         }
 
+#ifndef HAVE_VIRGL_RESOURCE_BLOB
         if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
-            error_setg(errp, "blobs and virgl are not compatible (yet)");
+            error_setg(errp, "Linked virglrenderer does not support blob resources");
             return;
         }
+#endif
     }
 
     if (!virtio_gpu_base_device_realize(qdev,
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 55973e112f..b9adc28071 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -58,6 +58,11 @@ struct virtio_gpu_simple_resource {
     int dmabuf_fd;
     uint8_t *remapped;
 
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+    /* Only blob resources need this region to be mapped as guest MMIO */
+    MemoryRegion *region;
+#endif
+
     QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
 };
 
diff --git a/meson.build b/meson.build
index ff20d3c249..f7b744ab82 100644
--- a/meson.build
+++ b/meson.build
@@ -1072,6 +1072,10 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
                        cc.has_function('virgl_renderer_context_create_with_flags',
                                        prefix: '#include <virglrenderer.h>',
                                        dependencies: virgl))
+  config_host_data.set('HAVE_VIRGL_RESOURCE_BLOB',
+                       cc.has_function('virgl_renderer_resource_create_blob',
+                                       prefix: '#include <virglrenderer.h>',
+                                       dependencies: virgl))
 endif
 blkio = not_found
 if not get_option('blkio').auto() or have_block
-- 
2.34.1




* [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (8 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31 10:36   ` Akihiko Odaki
  2023-08-31  9:32 ` [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset Huang Rui
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Enable the resource UUID feature and implement the resource assign-UUID
command. This is done by introducing a hash table that maps resource IDs
to their UUIDs.
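The generate-once, return-same-UUID semantics can be sketched without GLib.
This is a simplified model: QEMU uses a GHashTable keyed by resource ID and
qemu_uuid_generate(), which are replaced here by a fixed-size table and a
counter-based placeholder "UUID" for illustration only.

```c
#include <stdint.h>
#include <string.h>

#define MAX_RES 64

typedef struct { uint8_t data[16]; } Uuid;

static uint32_t map_ids[MAX_RES];
static Uuid map_uuids[MAX_RES];
static int map_len;
static uint32_t uuid_seq;   /* placeholder generator state */

/* Return the UUID for a resource, generating one on first assignment;
 * a repeated ASSIGN_UUID for the same resource returns the same UUID. */
static const Uuid *resource_assign_uuid(uint32_t resource_id)
{
    int i;

    for (i = 0; i < map_len; i++) {
        if (map_ids[i] == resource_id) {
            return &map_uuids[i];   /* already assigned */
        }
    }
    if (map_len == MAX_RES) {
        return NULL;                /* table full */
    }
    map_ids[map_len] = resource_id;
    memset(&map_uuids[map_len], 0, sizeof(Uuid));
    /* placeholder for qemu_uuid_generate(): embed a running counter */
    memcpy(map_uuids[map_len].data, &uuid_seq, sizeof(uuid_seq));
    uuid_seq++;
    return &map_uuids[map_len++];
}
```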

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

v1->v2:
   - Separate declarations from code.

 hw/display/trace-events        |  1 +
 hw/display/virtio-gpu-base.c   |  2 ++
 hw/display/virtio-gpu-virgl.c  | 21 +++++++++++++++++
 hw/display/virtio-gpu.c        | 41 ++++++++++++++++++++++++++++++++++
 include/hw/virtio/virtio-gpu.h |  4 ++++
 5 files changed, 69 insertions(+)

diff --git a/hw/display/trace-events b/hw/display/trace-events
index 2336a0ca15..54d6894c59 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -41,6 +41,7 @@ virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" P
 virtio_gpu_cmd_res_unref(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_back_attach(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_back_detach(uint32_t res) "res 0x%x"
+virtio_gpu_cmd_res_assign_uuid(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_xfer_toh_2d(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_xfer_toh_3d(uint32_t res) "res 0x%x"
 virtio_gpu_cmd_res_xfer_fromh_3d(uint32_t res) "res 0x%x"
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 4f2b0ba1f3..f44388715c 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -236,6 +236,8 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
         features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
     }
 
+    features |= (1 << VIRTIO_GPU_F_RESOURCE_UUID);
+
     return features;
 }
 
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 17b634d4ee..1a996a08fc 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -36,6 +36,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
 {
     struct virtio_gpu_resource_create_2d c2d;
     struct virgl_renderer_resource_create_args args;
+    struct virtio_gpu_simple_resource *res;
 
     VIRTIO_GPU_FILL_CMD(c2d);
     trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
@@ -53,6 +54,14 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
     args.nr_samples = 0;
     args.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
     virgl_renderer_resource_create(&args, NULL, 0);
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    if (!res) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+        return;
+    }
+    res->resource_id = c2d.resource_id;
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 }
 
 static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
@@ -60,6 +69,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
 {
     struct virtio_gpu_resource_create_3d c3d;
     struct virgl_renderer_resource_create_args args;
+    struct virtio_gpu_simple_resource *res;
 
     VIRTIO_GPU_FILL_CMD(c3d);
     trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
@@ -77,6 +87,14 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
     args.nr_samples = c3d.nr_samples;
     args.flags = c3d.flags;
     virgl_renderer_resource_create(&args, NULL, 0);
+
+    res = g_new0(struct virtio_gpu_simple_resource, 1);
+    if (!res) {
+        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+        return;
+    }
+    res->resource_id = c3d.resource_id;
+    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
 }
 
 static void virgl_resource_destroy(VirtIOGPU *g,
@@ -682,6 +700,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
         /* TODO add security */
         virgl_cmd_ctx_detach_resource(g, cmd);
         break;
+    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
+        virtio_gpu_resource_assign_uuid(g, cmd);
+        break;
     case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
         virgl_cmd_get_capset_info(g, cmd);
         break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index cc4c1f81bb..770e4747e3 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -966,6 +966,37 @@ virtio_gpu_resource_detach_backing(VirtIOGPU *g,
     virtio_gpu_cleanup_mapping(g, res);
 }
 
+void virtio_gpu_resource_assign_uuid(VirtIOGPU *g,
+                                     struct virtio_gpu_ctrl_command *cmd)
+{
+    struct virtio_gpu_simple_resource *res;
+    struct virtio_gpu_resource_assign_uuid assign;
+    struct virtio_gpu_resp_resource_uuid resp;
+    QemuUUID *uuid = NULL;
+
+    VIRTIO_GPU_FILL_CMD(assign);
+    virtio_gpu_bswap_32(&assign, sizeof(assign));
+    trace_virtio_gpu_cmd_res_assign_uuid(assign.resource_id);
+
+    res = virtio_gpu_find_check_resource(g, assign.resource_id, false, __func__, &cmd->error);
+    if (!res) {
+        return;
+    }
+
+    memset(&resp, 0, sizeof(resp));
+    resp.hdr.type = VIRTIO_GPU_RESP_OK_RESOURCE_UUID;
+
+    uuid = g_hash_table_lookup(g->resource_uuids, GUINT_TO_POINTER(assign.resource_id));
+    if (!uuid) {
+        uuid = g_new(QemuUUID, 1);
+        qemu_uuid_generate(uuid);
+        g_hash_table_insert(g->resource_uuids, GUINT_TO_POINTER(assign.resource_id), uuid);
+    }
+
+    memcpy(resp.uuid, uuid, sizeof(QemuUUID));
+    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
 void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
                                    struct virtio_gpu_ctrl_command *cmd)
 {
@@ -1014,6 +1045,9 @@ void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
     case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
         virtio_gpu_resource_detach_backing(g, cmd);
         break;
+    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
+        virtio_gpu_resource_assign_uuid(g, cmd);
+        break;
     default:
         cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
         break;
@@ -1393,12 +1427,15 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
     QTAILQ_INIT(&g->reslist);
     QTAILQ_INIT(&g->cmdq);
     QTAILQ_INIT(&g->fenceq);
+
+    g->resource_uuids = g_hash_table_new_full(NULL, NULL, NULL, g_free);
 }
 
 static void virtio_gpu_device_unrealize(DeviceState *qdev)
 {
     VirtIOGPU *g = VIRTIO_GPU(qdev);
 
+    g_hash_table_destroy(g->resource_uuids);
     g_clear_pointer(&g->ctrl_bh, qemu_bh_delete);
     g_clear_pointer(&g->cursor_bh, qemu_bh_delete);
     g_clear_pointer(&g->reset_bh, qemu_bh_delete);
@@ -1452,6 +1489,10 @@ void virtio_gpu_reset(VirtIODevice *vdev)
         g_free(cmd);
     }
 
+    if (g->resource_uuids) {
+        g_hash_table_remove_all(g->resource_uuids);
+    }
+
     virtio_gpu_base_reset(VIRTIO_GPU_BASE(vdev));
 }
 
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index b9adc28071..aa94b1b697 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -208,6 +208,8 @@ struct VirtIOGPU {
         QTAILQ_HEAD(, VGPUDMABuf) bufs;
         VGPUDMABuf *primary[VIRTIO_GPU_MAX_SCANOUTS];
     } dmabuf;
+
+    GHashTable *resource_uuids;
 };
 
 struct VirtIOGPUClass {
@@ -285,6 +287,8 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
                                     struct iovec *iov, uint32_t count);
 void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
                                 struct virtio_gpu_simple_resource *res);
+void virtio_gpu_resource_assign_uuid(VirtIOGPU *g,
+                                     struct virtio_gpu_ctrl_command *cmd);
 void virtio_gpu_process_cmdq(VirtIOGPU *g);
 void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
 void virtio_gpu_reset(VirtIODevice *vdev);
-- 
2.34.1




* [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (9 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31 10:43   ` Akihiko Odaki
  2023-08-31  9:32 ` [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus Huang Rui
  2023-08-31  9:32 ` [QEMU PATCH v4 13/13] virtio-gpu: Enable virglrenderer render server flag for venus Huang Rui
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Add support for the Venus capset, which enables Vulkan support through
the Venus Vulkan driver for virtio-gpu.
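The resulting capset enumeration can be modelled with a small helper. This
is an illustrative stand-in for virtio_gpu_virgl_get_num_capsets(), not the
QEMU code; note that, as in the patch, VIRGL2 is probed via its max version
while Venus is probed via its max size.

```c
#include <stdint.h>

/* VIRGL is always advertised; VIRGL2 and VENUS are added only when the
 * renderer reports a nonzero max version / max size for them. */
static int num_capsets(uint32_t virgl2_max_ver, uint32_t venus_max_size)
{
    int n = 1;                      /* VIRTIO_GPU_CAPSET_VIRGL */

    n += virgl2_max_ver ? 1 : 0;    /* VIRTIO_GPU_CAPSET_VIRGL2 */
    n += venus_max_size ? 1 : 0;    /* VIRTIO_GPU_CAPSET_VENUS */
    return n;
}
```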

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 hw/display/virtio-gpu-virgl.c               | 21 +++++++++++++++++----
 include/standard-headers/linux/virtio_gpu.h |  2 ++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 1a996a08fc..83cd8c8fd0 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -437,6 +437,11 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
         virgl_renderer_get_cap_set(resp.capset_id,
                                    &resp.capset_max_version,
                                    &resp.capset_max_size);
+    } else if (info.capset_index == 2) {
+        resp.capset_id = VIRTIO_GPU_CAPSET_VENUS;
+        virgl_renderer_get_cap_set(resp.capset_id,
+                                   &resp.capset_max_version,
+                                   &resp.capset_max_size);
     } else {
         resp.capset_max_version = 0;
         resp.capset_max_size = 0;
@@ -901,10 +906,18 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
 
 int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
 {
-    uint32_t capset2_max_ver, capset2_max_size;
+    uint32_t capset2_max_ver, capset2_max_size, num_capsets;
+    num_capsets = 1;
+
     virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
-                              &capset2_max_ver,
-                              &capset2_max_size);
+                               &capset2_max_ver,
+                               &capset2_max_size);
+    num_capsets += capset2_max_ver ? 1 : 0;
+
+    virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
+                               &capset2_max_ver,
+                               &capset2_max_size);
+    num_capsets += capset2_max_size ? 1 : 0;
 
-    return capset2_max_ver ? 2 : 1;
+    return num_capsets;
 }
diff --git a/include/standard-headers/linux/virtio_gpu.h b/include/standard-headers/linux/virtio_gpu.h
index 2da48d3d4c..2db643ed8f 100644
--- a/include/standard-headers/linux/virtio_gpu.h
+++ b/include/standard-headers/linux/virtio_gpu.h
@@ -309,6 +309,8 @@ struct virtio_gpu_cmd_submit {
 
 #define VIRTIO_GPU_CAPSET_VIRGL 1
 #define VIRTIO_GPU_CAPSET_VIRGL2 2
+/* 3 is reserved for gfxstream */
+#define VIRTIO_GPU_CAPSET_VENUS 4
 
 /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
 struct virtio_gpu_get_capset_info {
-- 
2.34.1




* [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (10 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  2023-08-31 10:40   ` Antonio Caggiano
  2023-08-31  9:32 ` [QEMU PATCH v4 13/13] virtio-gpu: Enable virglrenderer render server flag for venus Huang Rui
  12 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

From: Antonio Caggiano <antonio.caggiano@collabora.com>

Request Venus when initializing VirGL.

Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---

v1->v2:
    - Rebase to latest version

 hw/display/virtio-gpu-virgl.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 83cd8c8fd0..c5a62665bd 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -887,6 +887,8 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
     }
 #endif
 
+    flags |= VIRGL_RENDERER_VENUS;
+
     ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
     if (ret != 0) {
         error_report("virgl could not be initialized: %d", ret);
-- 
2.34.1




* [QEMU PATCH v4 13/13] virtio-gpu: Enable virglrenderer render server flag for venus
  2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
                   ` (11 preceding siblings ...)
  2023-08-31  9:32 ` [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus Huang Rui
@ 2023-08-31  9:32 ` Huang Rui
  12 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-08-31  9:32 UTC (permalink / raw)
  To: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian, Huang Rui

Venus in virglrenderer requires render server support.

Signed-off-by: Huang Rui <ray.huang@amd.com>
---

New patch

 hw/display/virtio-gpu-virgl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index c5a62665bd..1ae3e458e2 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -887,7 +887,7 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
     }
 #endif
 
-    flags |= VIRGL_RENDERER_VENUS;
+    flags |= VIRGL_RENDERER_VENUS | VIRGL_RENDERER_RENDER_SERVER;
 
     ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
     if (ret != 0) {
-- 
2.34.1




* Re: [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer
  2023-08-31  9:32 ` [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer Huang Rui
@ 2023-08-31  9:39   ` Philippe Mathieu-Daudé
  2023-09-04  6:45     ` Huang Rui via
  0 siblings, 1 reply; 51+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-08-31  9:39 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Akihiko Odaki, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 31/8/23 11:32, Huang Rui wrote:
> Configure context init feature flag for virglrenderer.
> 
> Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> 
> New patch, result of splitting
> [RFC QEMU PATCH 04/18] virtio-gpu: CONTEXT_INIT feature
> 
>   meson.build | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/meson.build b/meson.build
> index 98e68ef0b1..ff20d3c249 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -1068,6 +1068,10 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
>                                          prefix: '#include <virglrenderer.h>',
>                                          dependencies: virgl))
>     endif
> +  config_host_data.set('HAVE_VIRGL_CONTEXT_INIT',
> +                       cc.has_function('virgl_renderer_context_create_with_flags',
> +                                       prefix: '#include <virglrenderer.h>',
> +                                       dependencies: virgl))

Shouldn't this be inverted with previous patch?




* Re: [QEMU PATCH v4 05/13] virtio-gpu: Support context init feature with virglrenderer
  2023-08-31  9:32 ` [QEMU PATCH v4 05/13] virtio-gpu: Support context init feature with virglrenderer Huang Rui
@ 2023-08-31  9:41   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 51+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-08-31  9:41 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Akihiko Odaki, Alyssa Ross,
	Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 31/8/23 11:32, Huang Rui wrote:
> Patch "virtio-gpu: CONTEXT_INIT feature" has added the context_init
> feature flag.
> We would like to enable the feature with virglrenderer, so create the
> virgl renderer context with flags, using context_id when valid.
> 
> Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> 
> New patch, result of splitting
> [RFC QEMU PATCH 04/18] virtio-gpu: CONTEXT_INIT feature
> 
>   hw/display/virtio-gpu-virgl.c | 13 +++++++++++--
>   hw/display/virtio-gpu.c       |  2 ++
>   2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index 8bb7a2c21f..312953ec16 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -106,8 +106,17 @@ static void virgl_cmd_context_create(VirtIOGPU *g,
>       trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>                                       cc.debug_name);
>   
> -    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen,
> -                                  cc.debug_name);
> +    if (cc.context_init) {
> +#ifdef HAVE_VIRGL_CONTEXT_INIT
> +        virgl_renderer_context_create_with_flags(cc.hdr.ctx_id,
> +                                                 cc.context_init,
> +                                                 cc.nlen,
> +                                                 cc.debug_name);
> +        return;
> +#endif

What happens if someone sets the 'context_init' property but virgl
doesn't have virgl_renderer_context_create_with_flags()? Should we
report an error?

> +    }
> +
> +    virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen, cc.debug_name);
>   }
>   
>   static void virgl_cmd_context_destroy(VirtIOGPU *g,
> diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
> index 3e658f1fef..a66cbd9930 100644
> --- a/hw/display/virtio-gpu.c
> +++ b/hw/display/virtio-gpu.c
> @@ -1506,6 +1506,8 @@ static Property virtio_gpu_properties[] = {
>       DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
>                       VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
>       DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
> +    DEFINE_PROP_BIT("context_init", VirtIOGPU, parent_obj.conf.flags,
> +                    VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED, false),
>       DEFINE_PROP_END_OF_LIST(),
>   };
>   




* Re: [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions
  2023-08-31  9:32 ` [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions Huang Rui
@ 2023-08-31 10:10   ` Akihiko Odaki
  2023-09-05  9:07     ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-08-31 10:10 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 2023/08/31 18:32, Huang Rui wrote:
> From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> 
> When a memory region has a different life-cycle from that of its parent,
> it can be released automatically via the object's free callback, once it
> has been unparented and all of its references have gone away.
> 
> However, currently, references to the memory region are held by its owner
> without first incrementing the memory region object's reference count.
> As a result, the automatic deallocation of the object, not taking into
> account those references, results in use-after-free memory corruption.
> 
> This patch increases the reference count of the memory region object on
> each memory_region_ref() and decreases it on each memory_region_unref().
> 
> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> 
> New patch
> 
>   softmmu/memory.c | 19 +++++++++++++++++--
>   1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/softmmu/memory.c b/softmmu/memory.c
> index 7d9494ce70..0fdd5eebf9 100644
> --- a/softmmu/memory.c
> +++ b/softmmu/memory.c
> @@ -1797,6 +1797,15 @@ Object *memory_region_owner(MemoryRegion *mr)
>   
>   void memory_region_ref(MemoryRegion *mr)
>   {
> +    if (!mr) {
> +        return;
> +    }
> +
> +    /* Obtain a reference to prevent the memory region object
> +     * from being released under our feet.
> +     */
> +    object_ref(OBJECT(mr));
> +
>       /* MMIO callbacks most likely will access data that belongs
>        * to the owner, hence the need to ref/unref the owner whenever
>        * the memory region is in use.
> @@ -1807,16 +1816,22 @@ void memory_region_ref(MemoryRegion *mr)
>        * Memory regions without an owner are supposed to never go away;
>        * we do not ref/unref them because it slows down DMA sensibly.
>        */

The collapsed comment says:
 > The memory region is a child of its owner.  As long as the
 > owner doesn't call unparent itself on the memory region,
 > ref-ing the owner will also keep the memory region alive.
 > Memory regions without an owner are supposed to never go away;
 > we do not ref/unref them because it slows down DMA sensibly.

It contradicts this patch.

> -    if (mr && mr->owner) {
> +    if (mr->owner) {
>           object_ref(mr->owner);
>       }
>   }
>   
>   void memory_region_unref(MemoryRegion *mr)
>   {
> -    if (mr && mr->owner) {
> +    if (!mr) {
> +        return;
> +    }
> +
> +    if (mr->owner) {
>           object_unref(mr->owner);
>       }
> +
> +    object_unref(OBJECT(mr));
>   }
>   
>   uint64_t memory_region_size(MemoryRegion *mr)



* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-08-31  9:32 ` [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands Huang Rui
@ 2023-08-31 10:24   ` Akihiko Odaki
  2023-09-05  9:08     ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-08-31 10:24 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 2023/08/31 18:32, Huang Rui wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Support BLOB resource creation, mapping and unmapping by calling the
> new stable virglrenderer 0.10 interface. Only enabled when available and
> via the blob config, e.g. -device virtio-vga-gl,blob=true
> 
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> 
> v1->v2:
>      - Remove unused #include "hw/virtio/virtio-iommu.h"
> 
>      - Add a local function, called virgl_resource_destroy(), that is used
>        to release a vgpu resource on error paths and in resource_unref.
> 
>      - Remove virtio_gpu_virgl_resource_unmap from virtio_gpu_cleanup_mapping(),
>        since this function won't be called on blob resources and also because
>        blob resources are unmapped via virgl_cmd_resource_unmap_blob().
> 
>      - In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
>        and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
>        has been fully initialized.
> 
>      - Memory region has a different life-cycle from virtio gpu resources
>        i.e. cannot be released synchronously along with the vgpu resource.
>        So, here the field "region" was changed to a pointer that will be
>        released automatically once the memory region is unparented and all
>        of its references have been released.
>        Also, since the pointer can be used to indicate whether the blob
>        is mapped, the explicit field "mapped" was removed.
> 
>      - In virgl_cmd_resource_map_blob(), add check on the value of
>        res->region, to prevent being called twice on the same resource.
> 
>      - Remove direct references to parent_obj.
> 
>      - Separate declarations from code.
> 
>   hw/display/virtio-gpu-virgl.c  | 213 +++++++++++++++++++++++++++++++++
>   hw/display/virtio-gpu.c        |   4 +-
>   include/hw/virtio/virtio-gpu.h |   5 +
>   meson.build                    |   4 +
>   4 files changed, 225 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index 312953ec16..17b634d4ee 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -17,6 +17,7 @@
>   #include "trace.h"
>   #include "hw/virtio/virtio.h"
>   #include "hw/virtio/virtio-gpu.h"
> +#include "hw/virtio/virtio-gpu-bswap.h"
>   
>   #include "ui/egl-helpers.h"
>   
> @@ -78,9 +79,24 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
>       virgl_renderer_resource_create(&args, NULL, 0);
>   }
>   
> +static void virgl_resource_destroy(VirtIOGPU *g,
> +                                   struct virtio_gpu_simple_resource *res)
> +{
> +    if (!res)
> +        return;
> +
> +    QTAILQ_REMOVE(&g->reslist, res, next);
> +
> +    virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
> +    g_free(res->addrs);
> +
> +    g_free(res);
> +}
> +
>   static void virgl_cmd_resource_unref(VirtIOGPU *g,
>                                        struct virtio_gpu_ctrl_command *cmd)
>   {
> +    struct virtio_gpu_simple_resource *res;
>       struct virtio_gpu_resource_unref unref;
>       struct iovec *res_iovs = NULL;
>       int num_iovs = 0;
> @@ -88,13 +104,22 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
>       VIRTIO_GPU_FILL_CMD(unref);
>       trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>   
> +    res = virtio_gpu_find_resource(g, unref.resource_id);
> +
>       virgl_renderer_resource_detach_iov(unref.resource_id,
>                                          &res_iovs,
>                                          &num_iovs);
>       if (res_iovs != NULL && num_iovs != 0) {
>           virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
> +        if (res) {
> +            res->iov = NULL;
> +            res->iov_cnt = 0;
> +        }
>       }
> +
>       virgl_renderer_resource_unref(unref.resource_id);
> +
> +    virgl_resource_destroy(g, res);
>   }
>   
>   static void virgl_cmd_context_create(VirtIOGPU *g,
> @@ -426,6 +451,183 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
>       g_free(resp);
>   }
>   
> +#ifdef HAVE_VIRGL_RESOURCE_BLOB
> +
> +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
> +                                           struct virtio_gpu_ctrl_command *cmd)
> +{
> +    struct virtio_gpu_simple_resource *res;
> +    struct virtio_gpu_resource_create_blob cblob;
> +    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
> +    int ret;
> +
> +    VIRTIO_GPU_FILL_CMD(cblob);
> +    virtio_gpu_create_blob_bswap(&cblob);
> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> +
> +    if (cblob.resource_id == 0) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> +                      __func__);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    res = virtio_gpu_find_resource(g, cblob.resource_id);
> +    if (res) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
> +                      __func__, cblob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> +    if (!res) {
> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> +        return;
> +    }
> +
> +    res->resource_id = cblob.resource_id;
> +    res->blob_size = cblob.size;
> +
> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> +        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
> +                                            cmd, &res->addrs, &res->iov,
> +                                            &res->iov_cnt);
> +        if (!ret) {
> +            g_free(res);
> +            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> +            return;
> +        }
> +    }
> +
> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> +
> +    virgl_args.res_handle = cblob.resource_id;
> +    virgl_args.ctx_id = cblob.hdr.ctx_id;
> +    virgl_args.blob_mem = cblob.blob_mem;
> +    virgl_args.blob_id = cblob.blob_id;
> +    virgl_args.blob_flags = cblob.blob_flags;
> +    virgl_args.size = cblob.size;
> +    virgl_args.iovecs = res->iov;
> +    virgl_args.num_iovs = res->iov_cnt;
> +
> +    ret = virgl_renderer_resource_create_blob(&virgl_args);
> +    if (ret) {
> +        virgl_resource_destroy(g, res);
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
> +                      __func__, strerror(-ret));
> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> +    }
> +}
> +
> +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
> +                                        struct virtio_gpu_ctrl_command *cmd)
> +{
> +    struct virtio_gpu_simple_resource *res;
> +    struct virtio_gpu_resource_map_blob mblob;
> +    int ret;
> +    void *data;
> +    uint64_t size;
> +    struct virtio_gpu_resp_map_info resp;
> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> +
> +    VIRTIO_GPU_FILL_CMD(mblob);
> +    virtio_gpu_map_blob_bswap(&mblob);
> +
> +    if (mblob.resource_id == 0) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> +                      __func__);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
> +    if (!res) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
> +                      __func__, mblob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +    if (res->region) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
> +		      __func__, mblob.resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
> +    if (ret) {
> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
> +                      __func__, strerror(-ret));
> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> +        return;
> +    }
> +
> +    res->region = g_new0(MemoryRegion, 1);
> +    if (!res->region) {
> +        virgl_renderer_resource_unmap(res->resource_id);
> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> +        return;
> +    }
> +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);

I think memory_region_init_ram_ptr() should be used instead.



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-08-31  9:32 ` [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID Huang Rui
@ 2023-08-31 10:36   ` Akihiko Odaki
  2023-09-09  9:09     ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-08-31 10:36 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 2023/08/31 18:32, Huang Rui wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Enable resource UUID feature and implement command resource assign UUID.
> This is done by introducing a hash table to map resource IDs to their
> UUIDs.

The hash table does not seem to be stored during migration.

> 
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> 
> v1->v2:
>     - Separate declarations from code.
> 
>   hw/display/trace-events        |  1 +
>   hw/display/virtio-gpu-base.c   |  2 ++
>   hw/display/virtio-gpu-virgl.c  | 21 +++++++++++++++++
>   hw/display/virtio-gpu.c        | 41 ++++++++++++++++++++++++++++++++++
>   include/hw/virtio/virtio-gpu.h |  4 ++++
>   5 files changed, 69 insertions(+)
> 
> diff --git a/hw/display/trace-events b/hw/display/trace-events
> index 2336a0ca15..54d6894c59 100644
> --- a/hw/display/trace-events
> +++ b/hw/display/trace-events
> @@ -41,6 +41,7 @@ virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" P
>   virtio_gpu_cmd_res_unref(uint32_t res) "res 0x%x"
>   virtio_gpu_cmd_res_back_attach(uint32_t res) "res 0x%x"
>   virtio_gpu_cmd_res_back_detach(uint32_t res) "res 0x%x"
> +virtio_gpu_cmd_res_assign_uuid(uint32_t res) "res 0x%x"
>   virtio_gpu_cmd_res_xfer_toh_2d(uint32_t res) "res 0x%x"
>   virtio_gpu_cmd_res_xfer_toh_3d(uint32_t res) "res 0x%x"
>   virtio_gpu_cmd_res_xfer_fromh_3d(uint32_t res) "res 0x%x"
> diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
> index 4f2b0ba1f3..f44388715c 100644
> --- a/hw/display/virtio-gpu-base.c
> +++ b/hw/display/virtio-gpu-base.c
> @@ -236,6 +236,8 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
>           features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
>       }
>   
> +    features |= (1 << VIRTIO_GPU_F_RESOURCE_UUID);
> +
>       return features;
>   }
>   
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index 17b634d4ee..1a996a08fc 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -36,6 +36,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
>   {
>       struct virtio_gpu_resource_create_2d c2d;
>       struct virgl_renderer_resource_create_args args;
> +    struct virtio_gpu_simple_resource *res;
>   
>       VIRTIO_GPU_FILL_CMD(c2d);
>       trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> @@ -53,6 +54,14 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
>       args.nr_samples = 0;
>       args.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>       virgl_renderer_resource_create(&args, NULL, 0);
> +
> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> +    if (!res) {
> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> +        return;

virglrenderer thinks the resource is alive in such a situation.

> +    }
> +    res->resource_id = c2d.resource_id;
> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>   }
>   
>   static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
> @@ -60,6 +69,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
>   {
>       struct virtio_gpu_resource_create_3d c3d;
>       struct virgl_renderer_resource_create_args args;
> +    struct virtio_gpu_simple_resource *res;
>   
>       VIRTIO_GPU_FILL_CMD(c3d);
>       trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> @@ -77,6 +87,14 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
>       args.nr_samples = c3d.nr_samples;
>       args.flags = c3d.flags;
>       virgl_renderer_resource_create(&args, NULL, 0);
> +
> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> +    if (!res) {
> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> +        return;
> +    }
> +    res->resource_id = c3d.resource_id;
> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>   }
>   
>   static void virgl_resource_destroy(VirtIOGPU *g,
> @@ -682,6 +700,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
>           /* TODO add security */
>           virgl_cmd_ctx_detach_resource(g, cmd);
>           break;
> +    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
> +        virtio_gpu_resource_assign_uuid(g, cmd);
> +        break;
>       case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>           virgl_cmd_get_capset_info(g, cmd);
>           break;
> diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
> index cc4c1f81bb..770e4747e3 100644
> --- a/hw/display/virtio-gpu.c
> +++ b/hw/display/virtio-gpu.c
> @@ -966,6 +966,37 @@ virtio_gpu_resource_detach_backing(VirtIOGPU *g,
>       virtio_gpu_cleanup_mapping(g, res);
>   }
>   
> +void virtio_gpu_resource_assign_uuid(VirtIOGPU *g,
> +                                     struct virtio_gpu_ctrl_command *cmd)
> +{
> +    struct virtio_gpu_simple_resource *res;
> +    struct virtio_gpu_resource_assign_uuid assign;
> +    struct virtio_gpu_resp_resource_uuid resp;
> +    QemuUUID *uuid = NULL;

This initialization is unnecessary.

> +
> +    VIRTIO_GPU_FILL_CMD(assign);
> +    virtio_gpu_bswap_32(&assign, sizeof(assign));
> +    trace_virtio_gpu_cmd_res_assign_uuid(assign.resource_id);
> +
> +    res = virtio_gpu_find_check_resource(g, assign.resource_id, false, __func__, &cmd->error);
> +    if (!res) {
> +        return;
> +    }
> +
> +    memset(&resp, 0, sizeof(resp));
> +    resp.hdr.type = VIRTIO_GPU_RESP_OK_RESOURCE_UUID;
> +
> +    uuid = g_hash_table_lookup(g->resource_uuids, GUINT_TO_POINTER(assign.resource_id));
> +    if (!uuid) {
> +        uuid = g_new(QemuUUID, 1);
> +        qemu_uuid_generate(uuid);
> +        g_hash_table_insert(g->resource_uuids, GUINT_TO_POINTER(assign.resource_id), uuid);
> +    }
> +
> +    memcpy(resp.uuid, uuid, sizeof(QemuUUID));
> +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> +}
> +
>   void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
>                                      struct virtio_gpu_ctrl_command *cmd)
>   {
> @@ -1014,6 +1045,9 @@ void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
>       case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>           virtio_gpu_resource_detach_backing(g, cmd);
>           break;
> +    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
> +        virtio_gpu_resource_assign_uuid(g, cmd);
> +        break;
>       default:
>           cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>           break;
> @@ -1393,12 +1427,15 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
>       QTAILQ_INIT(&g->reslist);
>       QTAILQ_INIT(&g->cmdq);
>       QTAILQ_INIT(&g->fenceq);
> +
> +    g->resource_uuids = g_hash_table_new_full(NULL, NULL, NULL, g_free);
>   }
>   
>   static void virtio_gpu_device_unrealize(DeviceState *qdev)
>   {
>       VirtIOGPU *g = VIRTIO_GPU(qdev);
>   
> +    g_hash_table_destroy(g->resource_uuids);
>       g_clear_pointer(&g->ctrl_bh, qemu_bh_delete);
>       g_clear_pointer(&g->cursor_bh, qemu_bh_delete);
>       g_clear_pointer(&g->reset_bh, qemu_bh_delete);
> @@ -1452,6 +1489,10 @@ void virtio_gpu_reset(VirtIODevice *vdev)
>           g_free(cmd);
>       }
>   
> +    if (g->resource_uuids) {

Isn't g->resource_uuids always non-NULL?

> +        g_hash_table_remove_all(g->resource_uuids);
> +    }
> +
>       virtio_gpu_base_reset(VIRTIO_GPU_BASE(vdev));
>   }
>   
> diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> index b9adc28071..aa94b1b697 100644
> --- a/include/hw/virtio/virtio-gpu.h
> +++ b/include/hw/virtio/virtio-gpu.h
> @@ -208,6 +208,8 @@ struct VirtIOGPU {
>           QTAILQ_HEAD(, VGPUDMABuf) bufs;
>           VGPUDMABuf *primary[VIRTIO_GPU_MAX_SCANOUTS];
>       } dmabuf;
> +
> +    GHashTable *resource_uuids;
>   };
>   
>   struct VirtIOGPUClass {
> @@ -285,6 +287,8 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
>                                       struct iovec *iov, uint32_t count);
>   void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
>                                   struct virtio_gpu_simple_resource *res);
> +void virtio_gpu_resource_assign_uuid(VirtIOGPU *g,
> +                                     struct virtio_gpu_ctrl_command *cmd);
>   void virtio_gpu_process_cmdq(VirtIOGPU *g);
>   void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
>   void virtio_gpu_reset(VirtIODevice *vdev);



* Re: [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus
  2023-08-31  9:32 ` [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus Huang Rui
@ 2023-08-31 10:40   ` Antonio Caggiano
  2023-08-31 15:51     ` Dmitry Osipenko
  2023-09-09 10:52     ` Huang Rui
  0 siblings, 2 replies; 51+ messages in thread
From: Antonio Caggiano @ 2023-08-31 10:40 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

Hi Huang,

Thank you for pushing this forward!

On 31/08/2023 11:32, Huang Rui wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Request Venus when initializing VirGL.
> 
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> 
> v1->v2:
>      - Rebase to latest version
> 
>   hw/display/virtio-gpu-virgl.c | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index 83cd8c8fd0..c5a62665bd 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -887,6 +887,8 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>       }
>   #endif
>   
> +    flags |= VIRGL_RENDERER_VENUS;
> +

VIRGL_RENDERER_VENUS is a symbol available only since virglrenderer 0.9.1 
[0], and only if VIRGL_RENDERER_UNSTABLE_APIS is defined.

Luckily for us, VIRGL_RENDERER_UNSTABLE_APIS is defined unconditionally 
since virglrenderer 0.9.0 [1], so we could check for that in qemu/meson.build

e.g.


   if virgl.version().version_compare('>= 0.9.0')
     message('Enabling virglrenderer unstable APIs')
     virgl = declare_dependency(compile_args: '-DVIRGL_RENDERER_UNSTABLE_APIS',
                                dependencies: virgl)
   endif


Also, while testing this with various versions of virglrenderer, I 
realized there is no guarantee that the Venus backend is available in 
the linked library. Virglrenderer must be built with 
-Dvenus_experimental=true; if it is not, the following 
virgl_renderer_init would fail, either on previous versions of 
virglrenderer or when it has not been built with Venus support.

I would suggest another approach: try initializing Venus only if 
VIRGL_RENDERER_VENUS is actually defined. Then, if that fails because 
virglrenderer has not been built with Venus support, try again, falling 
back to virgl only.

e.g.

#ifdef VIRGL_RENDERER_VENUS
    ret = virgl_renderer_init(g, VIRGL_RENDERER_VENUS, &virtio_gpu_3d_cbs);
    if (ret != 0) {
        warn_report("Failed to initialize virglrenderer with venus: %d", ret);
        warn_report("Falling back to virgl only");
        ret = virgl_renderer_init(g, 0, &virtio_gpu_3d_cbs);
    }
#else
    ret = virgl_renderer_init(g, 0, &virtio_gpu_3d_cbs);
#endif


>       ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
>       if (ret != 0) {
>           error_report("virgl could not be initialized: %d", ret);

[0] 
https://gitlab.freedesktop.org/virgl/virglrenderer/-/commit/6c31f85330bb4c5aba8b82eba606971e598c6e25
[1] 
https://gitlab.freedesktop.org/virgl/virglrenderer/-/commit/491afdc42a49ec6a1b8d7cbc5c97360229002d41

Best regards,
Antonio Caggiano


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset
  2023-08-31  9:32 ` [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset Huang Rui
@ 2023-08-31 10:43   ` Akihiko Odaki
  2023-09-09  9:29     ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-08-31 10:43 UTC (permalink / raw)
  To: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 2023/08/31 18:32, Huang Rui wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> 
> Add support for the Venus capset, which enables Vulkan support through
> the Venus Vulkan driver for virtio-gpu.
> 
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
>   hw/display/virtio-gpu-virgl.c               | 21 +++++++++++++++++----
>   include/standard-headers/linux/virtio_gpu.h |  2 ++
>   2 files changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index 1a996a08fc..83cd8c8fd0 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -437,6 +437,11 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
>           virgl_renderer_get_cap_set(resp.capset_id,
>                                      &resp.capset_max_version,
>                                      &resp.capset_max_size);
> +    } else if (info.capset_index == 2) {
> +        resp.capset_id = VIRTIO_GPU_CAPSET_VENUS;
> +        virgl_renderer_get_cap_set(resp.capset_id,
> +                                   &resp.capset_max_version,
> +                                   &resp.capset_max_size);
>       } else {
>           resp.capset_max_version = 0;
>           resp.capset_max_size = 0;
> @@ -901,10 +906,18 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>   
>   int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>   {
> -    uint32_t capset2_max_ver, capset2_max_size;
> +    uint32_t capset2_max_ver, capset2_max_size, num_capsets;
> +    num_capsets = 1;
> +
>       virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
> -                              &capset2_max_ver,
> -                              &capset2_max_size);
> +                               &capset2_max_ver,
> +                               &capset2_max_size);
> +    num_capsets += capset2_max_ver ? 1 : 0;
> +
> +    virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
> +                               &capset2_max_ver,
> +                               &capset2_max_size);
> +    num_capsets += capset2_max_size ? 1 : 0;
>   
> -    return capset2_max_ver ? 2 : 1;
> +    return num_capsets;
>   }
> diff --git a/include/standard-headers/linux/virtio_gpu.h b/include/standard-headers/linux/virtio_gpu.h
> index 2da48d3d4c..2db643ed8f 100644
> --- a/include/standard-headers/linux/virtio_gpu.h
> +++ b/include/standard-headers/linux/virtio_gpu.h
> @@ -309,6 +309,8 @@ struct virtio_gpu_cmd_submit {
>   
>   #define VIRTIO_GPU_CAPSET_VIRGL 1
>   #define VIRTIO_GPU_CAPSET_VIRGL2 2
> +/* 3 is reserved for gfxstream */
> +#define VIRTIO_GPU_CAPSET_VENUS 4

This file is synced with scripts/update-linux-headers.sh and should not 
be modified.

>   
>   /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
>   struct virtio_gpu_get_capset_info {


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus
  2023-08-31 10:40   ` Antonio Caggiano
@ 2023-08-31 15:51     ` Dmitry Osipenko
  2023-09-09 10:53       ` Huang Rui via
  2023-09-09 10:52     ` Huang Rui
  1 sibling, 1 reply; 51+ messages in thread
From: Dmitry Osipenko @ 2023-08-31 15:51 UTC (permalink / raw)
  To: Antonio Caggiano, Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Dr . David Alan Gilbert,
	Robert Beckett, Alex Bennée, qemu-devel
  Cc: xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Alex Deucher, Christian König, Xenia Ragiadakou,
	Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
	Chen Jiqian

On 8/31/23 13:40, Antonio Caggiano wrote:
> Hi Huang,
> 
> Thank you for pushing this forward!
> 
> On 31/08/2023 11:32, Huang Rui wrote:
>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>
>> Request Venus when initializing VirGL.
>>
>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> ---
>>
>> v1->v2:
>>      - Rebase to latest version
>>
>>   hw/display/virtio-gpu-virgl.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/hw/display/virtio-gpu-virgl.c
>> b/hw/display/virtio-gpu-virgl.c
>> index 83cd8c8fd0..c5a62665bd 100644
>> --- a/hw/display/virtio-gpu-virgl.c
>> +++ b/hw/display/virtio-gpu-virgl.c
>> @@ -887,6 +887,8 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>>       }
>>   #endif
>>   +    flags |= VIRGL_RENDERER_VENUS;
>> +
> 
> VIRGL_RENDERER_VENUS is a symbol only available from virglrenderer 0.9.1
> [0] and only if VIRGL_RENDERER_UNSTABLE_APIS is defined.
> 
> Luckily for us, VIRGL_RENDERER_UNSTABLE_APIS is defined unconditionally
> from virglrenderer 0.9.0 [1], so we could check for that in
> qemu/meson.build
> 
> e.g.
> 
> 
>   if virgl.version().version_compare('>= 0.9.0')
>     message('Enabling virglrenderer unstable APIs')
>     virgl = declare_dependency(compile_args: '-DVIRGL_RENDERER_UNSTABLE_APIS',
>                                dependencies: virgl)
>   endif
> 
> 
> Also, while testing this with various versions of virglrenderer, I
> realized there is no guarantee that the Venus backend is available in
> the linked library. Virglrenderer has to be built with
> -Dvenus_experimental=true; if that is not the case, the following
> virgl_renderer_init would fail, either with previous versions of
> virglrenderer or when it has not been built with venus support.
> 
> I would suggest another approach: try initializing Venus only if
> VIRGL_RENDERER_VENUS is actually defined. Then, if that fails because
> virglrenderer has not been built with venus support, try again,
> falling back to virgl only.

All the APIs will be stabilized with the upcoming virglrenderer 1.0
release that will happen soon. There is also a venus protocol bump; qemu
will have to bump its virglrenderer version dependency to 1.0 for venus
and other new features.
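
For reference, a build guard for that future dependency might look like
the sketch below in qemu's meson.build; the '>= 1.0.0' version string
and the message text are assumptions until the 1.0 release is actually
out:

```meson
# Hypothetical check for a future virglrenderer 1.0 dependency; the
# version constraint is an assumption based on the release plan above.
if virgl.version().version_compare('>= 1.0.0')
  message('virglrenderer 1.0+ found, venus APIs considered stable')
endif
```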

-- 
Best regards,
Dmitry



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer
  2023-08-31  9:39   ` Philippe Mathieu-Daudé
@ 2023-09-04  6:45     ` Huang Rui via
  0 siblings, 0 replies; 51+ messages in thread
From: Huang Rui via @ 2023-09-04  6:45 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Akihiko Odaki, Alyssa Ross,
	Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 05:39:38PM +0800, Philippe Mathieu-Daudé wrote:
> On 31/8/23 11:32, Huang Rui wrote:
> > Configure context init feature flag for virglrenderer.
> > 
> > Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > ---
> > 
> > New patch, result of splitting
> > [RFC QEMU PATCH 04/18] virtio-gpu: CONTEXT_INIT feature
> > 
> >   meson.build | 4 ++++
> >   1 file changed, 4 insertions(+)
> > 
> > diff --git a/meson.build b/meson.build
> > index 98e68ef0b1..ff20d3c249 100644
> > --- a/meson.build
> > +++ b/meson.build
> > @@ -1068,6 +1068,10 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
> >                                          prefix: '#include <virglrenderer.h>',
> >                                          dependencies: virgl))
> >     endif
> > +  config_host_data.set('HAVE_VIRGL_CONTEXT_INIT',
> > +                       cc.has_function('virgl_renderer_context_create_with_flags',
> > +                                       prefix: '#include <virglrenderer.h>',
> > +                                       dependencies: virgl))
> 
> Shouldn't this be inverted with previous patch?
> 

Yes, this should be patch 3, because we should configure
HAVE_VIRGL_CONTEXT_INIT first. I will update it in the next version.

Thanks
Ray


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions
  2023-08-31 10:10   ` Akihiko Odaki
@ 2023-09-05  9:07     ` Huang Rui
  2023-09-05  9:17       ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-09-05  9:07 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 06:10:08PM +0800, Akihiko Odaki wrote:
> On 2023/08/31 18:32, Huang Rui wrote:
> > From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> > 
> > When a memory region has a different life-cycle from that of its parent,
> > it can be released automatically, via the object's free callback, once it
> > has been unparented and all of its references have gone away.
> > 
> > However, currently, references to the memory region are held by its owner
> > without first incrementing the memory region object's reference count.
> > As a result, the automatic deallocation of the object, not taking into
> > account those references, results in use-after-free memory corruption.
> > 
> > This patch increases the reference count of the memory region object on
> > each memory_region_ref() and decreases it on each memory_region_unref().
> > 
> > Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > ---
> > 
> > New patch
> > 
> >   softmmu/memory.c | 19 +++++++++++++++++--
> >   1 file changed, 17 insertions(+), 2 deletions(-)
> > 
> > diff --git a/softmmu/memory.c b/softmmu/memory.c
> > index 7d9494ce70..0fdd5eebf9 100644
> > --- a/softmmu/memory.c
> > +++ b/softmmu/memory.c
> > @@ -1797,6 +1797,15 @@ Object *memory_region_owner(MemoryRegion *mr)
> >   
> >   void memory_region_ref(MemoryRegion *mr)
> >   {
> > +    if (!mr) {
> > +        return;
> > +    }
> > +
> > +    /* Obtain a reference to prevent the memory region object
> > +     * from being released under our feet.
> > +     */
> > +    object_ref(OBJECT(mr));
> > +
> >       /* MMIO callbacks most likely will access data that belongs
> >        * to the owner, hence the need to ref/unref the owner whenever
> >        * the memory region is in use.
> > @@ -1807,16 +1816,22 @@ void memory_region_ref(MemoryRegion *mr)
> >        * Memory regions without an owner are supposed to never go away;
> >        * we do not ref/unref them because it slows down DMA sensibly.
> >        */
> 
> The collapsed comment says:
>  > The memory region is a child of its owner.  As long as the
>  > owner doesn't call unparent itself on the memory region,
>  > ref-ing the owner will also keep the memory region alive.
>  > Memory regions without an owner are supposed to never go away;
>  > we do not ref/unref them because it slows down DMA sensibly.
> 
> It contradicts with this patch.

The reason we modified it is to address the use-after-free issue in the
original code. Please see below: the memory region gets corrupted once we
free (unref) the simple resource, because the region is freed in
object_finalize() after being unparented, when its ref count drops to 0.
The VM then crashes because of this.

In virgl_cmd_resource_map_blob():
    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
    OBJECT(res->region)->free = g_free;
    memory_region_add_subregion(&b->hostmem, mblob.offset, res->region);
    memory_region_set_enabled(res->region, true);

In virtio_gpu_virgl_resource_unmap():
    memory_region_set_enabled(res->region, false);
    memory_region_del_subregion(&b->hostmem, res->region);
    object_unparent(OBJECT(res->region));
    res->region = NULL;

I spent a bit more time trying to understand your point. Do you want me to
update the corresponding comments, or do you have some concern about this
change?

Thanks,
Ray

> 
> > -    if (mr && mr->owner) {
> > +    if (mr->owner) {
> >           object_ref(mr->owner);
> >       }
> >   }
> >   
> >   void memory_region_unref(MemoryRegion *mr)
> >   {
> > -    if (mr && mr->owner) {
> > +    if (!mr) {
> > +        return;
> > +    }
> > +
> > +    if (mr->owner) {
> >           object_unref(mr->owner);
> >       }
> > +
> > +    object_unref(OBJECT(mr));
> >   }
> >   
> >   uint64_t memory_region_size(MemoryRegion *mr)


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-08-31 10:24   ` Akihiko Odaki
@ 2023-09-05  9:08     ` Huang Rui
  2023-09-05  9:20       ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-09-05  9:08 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 06:24:32PM +0800, Akihiko Odaki wrote:
> On 2023/08/31 18:32, Huang Rui wrote:
> > From: Antonio Caggiano <antonio.caggiano@collabora.com>
> > 
> > Support BLOB resources creation, mapping and unmapping by calling the
> > new stable virglrenderer 0.10 interface. Only enabled when available and
> > via the blob config. E.g. -device virtio-vga-gl,blob=true
> > 
> > Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> > Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > ---
> > 
> > v1->v2:
> >      - Remove unused #include "hw/virtio/virtio-iommu.h"
> > 
> >      - Add a local function, called virgl_resource_destroy(), that is used
> >        to release a vgpu resource on error paths and in resource_unref.
> > 
> >      - Remove virtio_gpu_virgl_resource_unmap from virtio_gpu_cleanup_mapping(),
> >        since this function won't be called on blob resources and also because
> >        blob resources are unmapped via virgl_cmd_resource_unmap_blob().
> > 
> >      - In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
> >        and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
> >        has been fully initialized.
> > 
> >      - Memory region has a different life-cycle from virtio gpu resources
> >        i.e. cannot be released synchronously along with the vgpu resource.
> >        So, here the field "region" was changed to a pointer that will be
> >        released automatically once the memory region is unparented and all
> >        of its references have been released.
> >        Also, since the pointer can be used to indicate whether the blob
> >        is mapped, the explicit field "mapped" was removed.
> > 
> >      - In virgl_cmd_resource_map_blob(), add a check on the value of
> >        res->region, to prevent it being called twice on the same resource.
> > 
> >      - Remove direct references to parent_obj.
> > 
> >      - Separate declarations from code.
> > 
> >   hw/display/virtio-gpu-virgl.c  | 213 +++++++++++++++++++++++++++++++++
> >   hw/display/virtio-gpu.c        |   4 +-
> >   include/hw/virtio/virtio-gpu.h |   5 +
> >   meson.build                    |   4 +
> >   4 files changed, 225 insertions(+), 1 deletion(-)
> > 
> > diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> > index 312953ec16..17b634d4ee 100644
> > --- a/hw/display/virtio-gpu-virgl.c
> > +++ b/hw/display/virtio-gpu-virgl.c
> > @@ -17,6 +17,7 @@
> >   #include "trace.h"
> >   #include "hw/virtio/virtio.h"
> >   #include "hw/virtio/virtio-gpu.h"
> > +#include "hw/virtio/virtio-gpu-bswap.h"
> >   
> >   #include "ui/egl-helpers.h"
> >   
> > @@ -78,9 +79,24 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
> >       virgl_renderer_resource_create(&args, NULL, 0);
> >   }
> >   
> > +static void virgl_resource_destroy(VirtIOGPU *g,
> > +                                   struct virtio_gpu_simple_resource *res)
> > +{
> > +    if (!res)
> > +        return;
> > +
> > +    QTAILQ_REMOVE(&g->reslist, res, next);
> > +
> > +    virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
> > +    g_free(res->addrs);
> > +
> > +    g_free(res);
> > +}
> > +
> >   static void virgl_cmd_resource_unref(VirtIOGPU *g,
> >                                        struct virtio_gpu_ctrl_command *cmd)
> >   {
> > +    struct virtio_gpu_simple_resource *res;
> >       struct virtio_gpu_resource_unref unref;
> >       struct iovec *res_iovs = NULL;
> >       int num_iovs = 0;
> > @@ -88,13 +104,22 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
> >       VIRTIO_GPU_FILL_CMD(unref);
> >       trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> >   
> > +    res = virtio_gpu_find_resource(g, unref.resource_id);
> > +
> >       virgl_renderer_resource_detach_iov(unref.resource_id,
> >                                          &res_iovs,
> >                                          &num_iovs);
> >       if (res_iovs != NULL && num_iovs != 0) {
> >           virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
> > +        if (res) {
> > +            res->iov = NULL;
> > +            res->iov_cnt = 0;
> > +        }
> >       }
> > +
> >       virgl_renderer_resource_unref(unref.resource_id);
> > +
> > +    virgl_resource_destroy(g, res);
> >   }
> >   
> >   static void virgl_cmd_context_create(VirtIOGPU *g,
> > @@ -426,6 +451,183 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
> >       g_free(resp);
> >   }
> >   
> > +#ifdef HAVE_VIRGL_RESOURCE_BLOB
> > +
> > +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
> > +                                           struct virtio_gpu_ctrl_command *cmd)
> > +{
> > +    struct virtio_gpu_simple_resource *res;
> > +    struct virtio_gpu_resource_create_blob cblob;
> > +    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
> > +    int ret;
> > +
> > +    VIRTIO_GPU_FILL_CMD(cblob);
> > +    virtio_gpu_create_blob_bswap(&cblob);
> > +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> > +
> > +    if (cblob.resource_id == 0) {
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> > +                      __func__);
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> > +        return;
> > +    }
> > +
> > +    res = virtio_gpu_find_resource(g, cblob.resource_id);
> > +    if (res) {
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
> > +                      __func__, cblob.resource_id);
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> > +        return;
> > +    }
> > +
> > +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> > +    if (!res) {
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> > +        return;
> > +    }
> > +
> > +    res->resource_id = cblob.resource_id;
> > +    res->blob_size = cblob.size;
> > +
> > +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> > +        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
> > +                                            cmd, &res->addrs, &res->iov,
> > +                                            &res->iov_cnt);
> > +        if (!ret) {
> > +            g_free(res);
> > +            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> > +            return;
> > +        }
> > +    }
> > +
> > +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> > +
> > +    virgl_args.res_handle = cblob.resource_id;
> > +    virgl_args.ctx_id = cblob.hdr.ctx_id;
> > +    virgl_args.blob_mem = cblob.blob_mem;
> > +    virgl_args.blob_id = cblob.blob_id;
> > +    virgl_args.blob_flags = cblob.blob_flags;
> > +    virgl_args.size = cblob.size;
> > +    virgl_args.iovecs = res->iov;
> > +    virgl_args.num_iovs = res->iov_cnt;
> > +
> > +    ret = virgl_renderer_resource_create_blob(&virgl_args);
> > +    if (ret) {
> > +        virgl_resource_destroy(g, res);
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
> > +                      __func__, strerror(-ret));
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> > +    }
> > +}
> > +
> > +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
> > +                                        struct virtio_gpu_ctrl_command *cmd)
> > +{
> > +    struct virtio_gpu_simple_resource *res;
> > +    struct virtio_gpu_resource_map_blob mblob;
> > +    int ret;
> > +    void *data;
> > +    uint64_t size;
> > +    struct virtio_gpu_resp_map_info resp;
> > +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> > +
> > +    VIRTIO_GPU_FILL_CMD(mblob);
> > +    virtio_gpu_map_blob_bswap(&mblob);
> > +
> > +    if (mblob.resource_id == 0) {
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> > +                      __func__);
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> > +        return;
> > +    }
> > +
> > +    res = virtio_gpu_find_resource(g, mblob.resource_id);
> > +    if (!res) {
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
> > +                      __func__, mblob.resource_id);
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> > +        return;
> > +    }
> > +    if (res->region) {
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
> > +		      __func__, mblob.resource_id);
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> > +        return;
> > +    }
> > +
> > +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
> > +    if (ret) {
> > +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
> > +                      __func__, strerror(-ret));
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> > +        return;
> > +    }
> > +
> > +    res->region = g_new0(MemoryRegion, 1);
> > +    if (!res->region) {
> > +        virgl_renderer_resource_unmap(res->resource_id);
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> > +        return;
> > +    }
> > +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
> 
> I think memory_region_init_ram_ptr() should be used instead.

Would you mind explaining the reason?

Thanks,
Ray


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions
  2023-09-05  9:07     ` Huang Rui
@ 2023-09-05  9:17       ` Akihiko Odaki
  2023-09-05 13:29         ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-05  9:17 UTC (permalink / raw)
  To: Huang Rui
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/05 18:07, Huang Rui wrote:
> On Thu, Aug 31, 2023 at 06:10:08PM +0800, Akihiko Odaki wrote:
>> On 2023/08/31 18:32, Huang Rui wrote:
>>> From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
>>>
>>> When a memory region has a different life-cycle from that of its parent,
>>> it can be released automatically, via the object's free callback, once it
>>> has been unparented and all of its references have gone away.
>>>
>>> However, currently, references to the memory region are held by its owner
>>> without first incrementing the memory region object's reference count.
>>> As a result, the automatic deallocation of the object, not taking into
>>> account those references, results in use-after-free memory corruption.
>>>
>>> This patch increases the reference count of the memory region object on
>>> each memory_region_ref() and decreases it on each memory_region_unref().
>>>
>>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> ---
>>>
>>> New patch
>>>
>>>    softmmu/memory.c | 19 +++++++++++++++++--
>>>    1 file changed, 17 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/softmmu/memory.c b/softmmu/memory.c
>>> index 7d9494ce70..0fdd5eebf9 100644
>>> --- a/softmmu/memory.c
>>> +++ b/softmmu/memory.c
>>> @@ -1797,6 +1797,15 @@ Object *memory_region_owner(MemoryRegion *mr)
>>>    
>>>    void memory_region_ref(MemoryRegion *mr)
>>>    {
>>> +    if (!mr) {
>>> +        return;
>>> +    }
>>> +
>>> +    /* Obtain a reference to prevent the memory region object
>>> +     * from being released under our feet.
>>> +     */
>>> +    object_ref(OBJECT(mr));
>>> +
>>>        /* MMIO callbacks most likely will access data that belongs
>>>         * to the owner, hence the need to ref/unref the owner whenever
>>>         * the memory region is in use.
>>> @@ -1807,16 +1816,22 @@ void memory_region_ref(MemoryRegion *mr)
>>>         * Memory regions without an owner are supposed to never go away;
>>>         * we do not ref/unref them because it slows down DMA sensibly.
>>>         */
>>
>> The collapsed comment says:
>>   > The memory region is a child of its owner.  As long as the
>>   > owner doesn't call unparent itself on the memory region,
>>   > ref-ing the owner will also keep the memory region alive.
>>   > Memory regions without an owner are supposed to never go away;
>>   > we do not ref/unref them because it slows down DMA sensibly.
>>
>> It contradicts with this patch.
> 
> The reason we modified it is to address the use-after-free issue in the
> original code. Please see below: the memory region gets corrupted once we
> free (unref) the simple resource, because the region is freed in
> object_finalize() after being unparented, when its ref count drops to 0.
> The VM then crashes because of this.
> 
> In virgl_cmd_resource_map_blob():
>      memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
>      OBJECT(res->region)->free = g_free;
>      memory_region_add_subregion(&b->hostmem, mblob.offset, res->region);
>      memory_region_set_enabled(res->region, true);
> 
> In virtio_gpu_virgl_resource_unmap():
>      memory_region_set_enabled(res->region, false);
>      memory_region_del_subregion(&b->hostmem, res->region);
>      object_unparent(OBJECT(res->region));
>      res->region = NULL;
> 
> I spent a bit more time trying to understand your point. Do you want me to
> update the corresponding comments, or do you have some concern about this
> change?

As the comment says, ref-ing memory regions without an owner will slow 
down DMA, so you should avoid that. More concretely, you should check 
mr->owner before doing object_ref(OBJECT(mr)).

Regards,
Akihiko Odaki


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-09-05  9:08     ` Huang Rui
@ 2023-09-05  9:20       ` Akihiko Odaki
  2023-09-06  3:09         ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-05  9:20 UTC (permalink / raw)
  To: Huang Rui
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/05 18:08, Huang Rui wrote:
> On Thu, Aug 31, 2023 at 06:24:32PM +0800, Akihiko Odaki wrote:
>> On 2023/08/31 18:32, Huang Rui wrote:
>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>
>>> Support BLOB resources creation, mapping and unmapping by calling the
>>> new stable virglrenderer 0.10 interface. Only enabled when available and
>>> via the blob config. E.g. -device virtio-vga-gl,blob=true
>>>
>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> ---
>>>
>>> v1->v2:
>>>       - Remove unused #include "hw/virtio/virtio-iommu.h"
>>>
>>>       - Add a local function, called virgl_resource_destroy(), that is used
>>>         to release a vgpu resource on error paths and in resource_unref.
>>>
>>>       - Remove virtio_gpu_virgl_resource_unmap from virtio_gpu_cleanup_mapping(),
>>>         since this function won't be called on blob resources and also because
>>>         blob resources are unmapped via virgl_cmd_resource_unmap_blob().
>>>
>>>       - In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
>>>         and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
>>>         has been fully initialized.
>>>
>>>       - Memory region has a different life-cycle from virtio gpu resources
>>>         i.e. cannot be released synchronously along with the vgpu resource.
>>>         So, here the field "region" was changed to a pointer that will be
>>>         released automatically once the memory region is unparented and all
>>>         of its references have been released.
>>>         Also, since the pointer can be used to indicate whether the blob
>>>         is mapped, the explicit field "mapped" was removed.
>>>
>>>       - In virgl_cmd_resource_map_blob(), add a check on the value of
>>>         res->region, to prevent it being called twice on the same resource.
>>>
>>>       - Remove direct references to parent_obj.
>>>
>>>       - Separate declarations from code.
>>>
>>>    hw/display/virtio-gpu-virgl.c  | 213 +++++++++++++++++++++++++++++++++
>>>    hw/display/virtio-gpu.c        |   4 +-
>>>    include/hw/virtio/virtio-gpu.h |   5 +
>>>    meson.build                    |   4 +
>>>    4 files changed, 225 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
>>> index 312953ec16..17b634d4ee 100644
>>> --- a/hw/display/virtio-gpu-virgl.c
>>> +++ b/hw/display/virtio-gpu-virgl.c
>>> @@ -17,6 +17,7 @@
>>>    #include "trace.h"
>>>    #include "hw/virtio/virtio.h"
>>>    #include "hw/virtio/virtio-gpu.h"
>>> +#include "hw/virtio/virtio-gpu-bswap.h"
>>>    
>>>    #include "ui/egl-helpers.h"
>>>    
>>> @@ -78,9 +79,24 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
>>>        virgl_renderer_resource_create(&args, NULL, 0);
>>>    }
>>>    
>>> +static void virgl_resource_destroy(VirtIOGPU *g,
>>> +                                   struct virtio_gpu_simple_resource *res)
>>> +{
>>> +    if (!res)
>>> +        return;
>>> +
>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
>>> +
>>> +    virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
>>> +    g_free(res->addrs);
>>> +
>>> +    g_free(res);
>>> +}
>>> +
>>>    static void virgl_cmd_resource_unref(VirtIOGPU *g,
>>>                                         struct virtio_gpu_ctrl_command *cmd)
>>>    {
>>> +    struct virtio_gpu_simple_resource *res;
>>>        struct virtio_gpu_resource_unref unref;
>>>        struct iovec *res_iovs = NULL;
>>>        int num_iovs = 0;
>>> @@ -88,13 +104,22 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
>>>        VIRTIO_GPU_FILL_CMD(unref);
>>>        trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>>>    
>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
>>> +
>>>        virgl_renderer_resource_detach_iov(unref.resource_id,
>>>                                           &res_iovs,
>>>                                           &num_iovs);
>>>        if (res_iovs != NULL && num_iovs != 0) {
>>>            virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
>>> +        if (res) {
>>> +            res->iov = NULL;
>>> +            res->iov_cnt = 0;
>>> +        }
>>>        }
>>> +
>>>        virgl_renderer_resource_unref(unref.resource_id);
>>> +
>>> +    virgl_resource_destroy(g, res);
>>>    }
>>>    
>>>    static void virgl_cmd_context_create(VirtIOGPU *g,
>>> @@ -426,6 +451,183 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
>>>        g_free(resp);
>>>    }
>>>    
>>> +#ifdef HAVE_VIRGL_RESOURCE_BLOB
>>> +
>>> +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
>>> +                                           struct virtio_gpu_ctrl_command *cmd)
>>> +{
>>> +    struct virtio_gpu_simple_resource *res;
>>> +    struct virtio_gpu_resource_create_blob cblob;
>>> +    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
>>> +    int ret;
>>> +
>>> +    VIRTIO_GPU_FILL_CMD(cblob);
>>> +    virtio_gpu_create_blob_bswap(&cblob);
>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
>>> +
>>> +    if (cblob.resource_id == 0) {
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
>>> +                      __func__);
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>> +        return;
>>> +    }
>>> +
>>> +    res = virtio_gpu_find_resource(g, cblob.resource_id);
>>> +    if (res) {
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
>>> +                      __func__, cblob.resource_id);
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>> +        return;
>>> +    }
>>> +
>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>> +    if (!res) {
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
>>> +        return;
>>> +    }
>>> +
>>> +    res->resource_id = cblob.resource_id;
>>> +    res->blob_size = cblob.size;
>>> +
>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>>> +        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
>>> +                                            cmd, &res->addrs, &res->iov,
>>> +                                            &res->iov_cnt);
>>> +        if (!ret) {
>>> +            g_free(res);
>>> +            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>> +            return;
>>> +        }
>>> +    }
>>> +
>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>> +
>>> +    virgl_args.res_handle = cblob.resource_id;
>>> +    virgl_args.ctx_id = cblob.hdr.ctx_id;
>>> +    virgl_args.blob_mem = cblob.blob_mem;
>>> +    virgl_args.blob_id = cblob.blob_id;
>>> +    virgl_args.blob_flags = cblob.blob_flags;
>>> +    virgl_args.size = cblob.size;
>>> +    virgl_args.iovecs = res->iov;
>>> +    virgl_args.num_iovs = res->iov_cnt;
>>> +
>>> +    ret = virgl_renderer_resource_create_blob(&virgl_args);
>>> +    if (ret) {
>>> +        virgl_resource_destroy(g, res);
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
>>> +                      __func__, strerror(-ret));
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>> +    }
>>> +}
>>> +
>>> +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
>>> +                                        struct virtio_gpu_ctrl_command *cmd)
>>> +{
>>> +    struct virtio_gpu_simple_resource *res;
>>> +    struct virtio_gpu_resource_map_blob mblob;
>>> +    int ret;
>>> +    void *data;
>>> +    uint64_t size;
>>> +    struct virtio_gpu_resp_map_info resp;
>>> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
>>> +
>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>>> +    virtio_gpu_map_blob_bswap(&mblob);
>>> +
>>> +    if (mblob.resource_id == 0) {
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
>>> +                      __func__);
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>> +        return;
>>> +    }
>>> +
>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>>> +    if (!res) {
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
>>> +                      __func__, mblob.resource_id);
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>> +        return;
>>> +    }
>>> +    if (res->region) {
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
>>> +		      __func__, mblob.resource_id);
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>> +        return;
>>> +    }
>>> +
>>> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
>>> +    if (ret) {
>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
>>> +                      __func__, strerror(-ret));
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>> +        return;
>>> +    }
>>> +
>>> +    res->region = g_new0(MemoryRegion, 1);
>>> +    if (!res->region) {
>>> +        virgl_renderer_resource_unmap(res->resource_id);
>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
>>> +        return;
>>> +    }
>>> +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
>>
>> I think memory_region_init_ram_ptr() should be used instead.
> 
> Would you mind to explain the reason?

The documentation comment of memory_region_init_ram_device_ptr() says:
 > A RAM device represents a mapping to a physical device, such as to a
 > PCI MMIO BAR of an vfio-pci assigned device.  The memory region may be
 > mapped into the VM address space and access to the region will modify
 > memory directly.  However, the memory region should not be included in
 > a memory dump (device may not be enabled/mapped at the time of the
 > dump), and operations incompatible with manipulating MMIO should be
 > avoided.  Replaces skip_dump flag.

In my understanding it's not MMIO, so memory_region_init_ram_ptr() should 
be used instead.
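
If that suggestion is taken, the change would presumably be the one-line
substitution below (an untested sketch; as far as I can tell the two
initializers take the same mr/owner/name/size/ptr argument list):

```diff
-    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
+    memory_region_init_ram_ptr(res->region, OBJECT(g), NULL, size, data);
```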

Regards,
Akihiko Odaki


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions
  2023-09-05  9:17       ` Akihiko Odaki
@ 2023-09-05 13:29         ` Huang Rui
  0 siblings, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-09-05 13:29 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Tue, Sep 05, 2023 at 05:17:32PM +0800, Akihiko Odaki wrote:
> On 2023/09/05 18:07, Huang Rui wrote:
> > On Thu, Aug 31, 2023 at 06:10:08PM +0800, Akihiko Odaki wrote:
> >> On 2023/08/31 18:32, Huang Rui wrote:
> >>> From: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> >>>
> >>> When the memory region has a different life-cycle from that of her parent,
> >>> could be automatically released, once has been unparent and once all of her
> >>> references have gone away, via the object's free callback.
> >>>
> >>> However, currently, references to the memory region are held by its owner
> >>> without first incrementing the memory region object's reference count.
> >>> As a result, the automatic deallocation of the object, not taking into
> >>> account those references, results in use-after-free memory corruption.
> >>>
> >>> This patch increases the reference count of the memory region object on
> >>> each memory_region_ref() and decreases it on each memory_region_unref().
> >>>
> >>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> >>> Signed-off-by: Huang Rui <ray.huang@amd.com>
> >>> ---
> >>>
> >>> New patch
> >>>
> >>>    softmmu/memory.c | 19 +++++++++++++++++--
> >>>    1 file changed, 17 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/softmmu/memory.c b/softmmu/memory.c
> >>> index 7d9494ce70..0fdd5eebf9 100644
> >>> --- a/softmmu/memory.c
> >>> +++ b/softmmu/memory.c
> >>> @@ -1797,6 +1797,15 @@ Object *memory_region_owner(MemoryRegion *mr)
> >>>    
> >>>    void memory_region_ref(MemoryRegion *mr)
> >>>    {
> >>> +    if (!mr) {
> >>> +        return;
> >>> +    }
> >>> +
> >>> +    /* Obtain a reference to prevent the memory region object
> >>> +     * from being released under our feet.
> >>> +     */
> >>> +    object_ref(OBJECT(mr));
> >>> +
> >>>        /* MMIO callbacks most likely will access data that belongs
> >>>         * to the owner, hence the need to ref/unref the owner whenever
> >>>         * the memory region is in use.
> >>> @@ -1807,16 +1816,22 @@ void memory_region_ref(MemoryRegion *mr)
> >>>         * Memory regions without an owner are supposed to never go away;
> >>>         * we do not ref/unref them because it slows down DMA sensibly.
> >>>         */
> >>
> >> The collapsed comment says:
> >>   > The memory region is a child of its owner.  As long as the
> >>   > owner doesn't call unparent itself on the memory region,
> >>   > ref-ing the owner will also keep the memory region alive.
> >>   > Memory regions without an owner are supposed to never go away;
> >>   > we do not ref/unref them because it slows down DMA sensibly.
> >>
> >> It contradicts with this patch.
> > 
> > The reason that we modify it is because we would like to address the memory
> > leak issue in the original codes. Please see below, we find the memory
> > region will be crashed once we free(unref) the simple resource, because the
> > region will be freed in object_finalize() after unparent and the ref count
> > is to 0. Then the VM will be crashed with this.
> > 
> > In virgl_cmd_resource_map_blob():
> >      memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
> >      OBJECT(res->region)->free = g_free;
> >      memory_region_add_subregion(&b->hostmem, mblob.offset, res->region);
> >      memory_region_set_enabled(res->region, true);
> > 
> > In virtio_gpu_virgl_resource_unmap():
> >      memory_region_set_enabled(res->region, false);
> >      memory_region_del_subregion(&b->hostmem, res->region);
> >      object_unparent(OBJECT(res->region));
> >      res->region = NULL;
> > 
> > I spent a bit more time to understand your point, do you want me to update
> > corresponding comments or you have some concern about this change?
> 
> As the comment says ref-ing memory regions without an owner will slow 
> down DMA, you should avoid that. More concretely, you should check 
> mr->owner before doing object_ref(OBJECT(mr)).
> 

I get it, thanks for pointing this out precisely. Will update it in V5.

Thanks,
Ray


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-09-05  9:20       ` Akihiko Odaki
@ 2023-09-06  3:09         ` Huang Rui
  2023-09-06  3:39           ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-09-06  3:09 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Tue, Sep 05, 2023 at 05:20:43PM +0800, Akihiko Odaki wrote:
> On 2023/09/05 18:08, Huang Rui wrote:
> > On Thu, Aug 31, 2023 at 06:24:32PM +0800, Akihiko Odaki wrote:
> >> On 2023/08/31 18:32, Huang Rui wrote:
> >>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> >>>
> >>> Support BLOB resources creation, mapping and unmapping by calling the
> >>> new stable virglrenderer 0.10 interface. Only enabled when available and
> >>> via the blob config. E.g. -device virtio-vga-gl,blob=true
> >>>
> >>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> >>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> >>> Signed-off-by: Huang Rui <ray.huang@amd.com>
> >>> ---
> >>> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
> >>> +    if (ret) {
> >>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
> >>> +                      __func__, strerror(-ret));
> >>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>> +        return;
> >>> +    }
> >>> +
> >>> +    res->region = g_new0(MemoryRegion, 1);
> >>> +    if (!res->region) {
> >>> +        virgl_renderer_resource_unmap(res->resource_id);
> >>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> >>> +        return;
> >>> +    }
> >>> +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
> >>
> >> I think memory_region_init_ram_ptr() should be used instead.
> > 
> > Would you mind to explain the reason?
> 
> The documentation comment of memory_region_init_ram_device_ptr() says:
>  > A RAM device represents a mapping to a physical device, such as to a
>  > PCI MMIO BAR of an vfio-pci assigned device.  The memory region may be
>  > mapped into the VM address space and access to the region will modify
>  > memory directly.  However, the memory region should not be included in
>  > a memory dump (device may not be enabled/mapped at the time of the
>  > dump), and operations incompatible with manipulating MMIO should be
>  > avoided.  Replaces skip_dump flag.
> 
> In my understanding it's not MMIO so memory_region_init_ram_ptr() should 
> be used instead.
> 

It may actually be either video memory (MMIO) or system memory here. :-)

We get the host memory for the blob from the host with
virgl_renderer_resource_map() in virglrenderer. There, the two resource
fd types VIRGL_RESOURCE_FD_DMABUF and VIRGL_RESOURCE_FD_SHM indicate the
kind of memory. Shmem is system memory that does not need to be GPU
accessible, while dmabuf is memory that must be GPU accessible. The host
kernel amdgpu driver registers a dma-buf to export the resource buffer
for sharing, and those dma-buf buffers may contain video memory exposed
through the amdgpu PCIe BAR0. There is also system memory (GTT) that can
be mapped via GPU page tables to become GPU accessible.

07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev c1) (prog-if 00 [VGA controller])
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Renoir
        Flags: bus master, fast devsel, latency 0, IRQ 56
        Memory at fcc0000000 (64-bit, prefetchable) [size=256M]
        Memory at fcd0000000 (64-bit, prefetchable) [size=2M]
        I/O ports at 1000 [size=256]
        Memory at d0400000 (32-bit, non-prefetchable) [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: amdgpu
        Kernel modules: amdgpu

Thanks,
Ray


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-09-06  3:09         ` Huang Rui
@ 2023-09-06  3:39           ` Akihiko Odaki
  2023-09-06  7:56             ` Huang Rui
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-06  3:39 UTC (permalink / raw)
  To: Huang Rui
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/06 12:09, Huang Rui wrote:
> On Tue, Sep 05, 2023 at 05:20:43PM +0800, Akihiko Odaki wrote:
>> On 2023/09/05 18:08, Huang Rui wrote:
>>> On Thu, Aug 31, 2023 at 06:24:32PM +0800, Akihiko Odaki wrote:
>>>> On 2023/08/31 18:32, Huang Rui wrote:
>>>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>>
>>>>> Support BLOB resources creation, mapping and unmapping by calling the
>>>>> new stable virglrenderer 0.10 interface. Only enabled when available and
>>>>> via the blob config. E.g. -device virtio-vga-gl,blob=true
>>>>>
>>>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>> ---
>>>>>
>>>>> +    int ret;
>>>>> +    void *data;
>>>>> +    uint64_t size;
>>>>> +    struct virtio_gpu_resp_map_info resp;
>>>>> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
>>>>> +
>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>>>>> +    virtio_gpu_map_blob_bswap(&mblob);
>>>>> +
>>>>> +    if (mblob.resource_id == 0) {
>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
>>>>> +                      __func__);
>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>> +        return;
>>>>> +    }
>>>>> +
>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>>>>> +    if (!res) {
>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
>>>>> +                      __func__, mblob.resource_id);
>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>> +        return;
>>>>> +    }
>>>>> +    if (res->region) {
>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
>>>>> +		      __func__, mblob.resource_id);
>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>> +        return;
>>>>> +    }
>>>>> +
>>>>> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
>>>>> +    if (ret) {
>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
>>>>> +                      __func__, strerror(-ret));
>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>> +        return;
>>>>> +    }
>>>>> +
>>>>> +    res->region = g_new0(MemoryRegion, 1);
>>>>> +    if (!res->region) {
>>>>> +        virgl_renderer_resource_unmap(res->resource_id);
>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
>>>>> +        return;
>>>>> +    }
>>>>> +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
>>>>
>>>> I think memory_region_init_ram_ptr() should be used instead.
>>>
>>> Would you mind explaining the reason?
>>
>> The documentation comment of memory_region_init_ram_device_ptr() says:
>>   > A RAM device represents a mapping to a physical device, such as to a
>>   > PCI MMIO BAR of an vfio-pci assigned device.  The memory region may be
>>   > mapped into the VM address space and access to the region will modify
>>   > memory directly.  However, the memory region should not be included in
>>   > a memory dump (device may not be enabled/mapped at the time of the
>>   > dump), and operations incompatible with manipulating MMIO should be
>>   > avoided.  Replaces skip_dump flag.
>>
>> In my understanding it's not MMIO so memory_region_init_ram_ptr() should
>> be used instead.
>>
> 
> It may actually be video memory (MMIO) or system memory here. :-)
> 
> We get the host memory for the blob from virgl_renderer_resource_map()
> in virglrenderer. virglrenderer distinguishes two memory types,
> VIRGL_RESOURCE_FD_DMABUF and VIRGL_RESOURCE_FD_SHM. The shmem type is
> system memory that does not need to be GPU accessible, while dmabuf is
> memory that must be GPU accessible. The host kernel amdgpu driver
> registers a dma-buf to export the resource buffer for sharing, and the
> dma-buf may back video memory exposed through the amdgpu PCIe BAR0. We
> also have system memory (GTT) that can be mapped into GPU page tables
> to make it GPU accessible.
> 
> 07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev c1) (prog-if 00 [VGA controller])
>          Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Renoir
>          Flags: bus master, fast devsel, latency 0, IRQ 56
>          Memory at fcc0000000 (64-bit, prefetchable) [size=256M]
>          Memory at fcd0000000 (64-bit, prefetchable) [size=2M]
>          I/O ports at 1000 [size=256]
>          Memory at d0400000 (32-bit, non-prefetchable) [size=512K]
>          Capabilities: <access denied>
>          Kernel driver in use: amdgpu
>          Kernel modules: amdgpu

In my understanding it is not relevant whether the memory is backed by a
device or not. Here MMIO means memory-mapped I/O registers that have
side effects on access. Reading such a register may acknowledge an
interrupt, for example, and the unit of writes may also matter.
memory_region_init_ram_device_ptr() ensures that no spurious memory read
happens and that word-sized accesses are preserved.

Neither concern applies to video memory, even if it lies in separate
device memory. In this sense the name "memory_region_init_ram_device_ptr"
is somewhat of a misnomer.

Regards,
Akihiko Odaki


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-09-06  3:39           ` Akihiko Odaki
@ 2023-09-06  7:56             ` Huang Rui
  2023-09-06 14:16               ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-09-06  7:56 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Wed, Sep 06, 2023 at 11:39:09AM +0800, Akihiko Odaki wrote:
> On 2023/09/06 12:09, Huang Rui wrote:
> > On Tue, Sep 05, 2023 at 05:20:43PM +0800, Akihiko Odaki wrote:
> >> On 2023/09/05 18:08, Huang Rui wrote:
> >>> On Thu, Aug 31, 2023 at 06:24:32PM +0800, Akihiko Odaki wrote:
> >>>> On 2023/08/31 18:32, Huang Rui wrote:
> >>>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> >>>>>
> >>>>> Support BLOB resources creation, mapping and unmapping by calling the
> >>>>> new stable virglrenderer 0.10 interface. Only enabled when available and
> >>>>> via the blob config. E.g. -device virtio-vga-gl,blob=true
> >>>>>
> >>>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> >>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >>>>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> >>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
> >>>>> ---
> >>>>>
> >>>>> v1->v2:
> >>>>>        - Remove unused #include "hw/virtio/virtio-iommu.h"
> >>>>>
> >>>>>        - Add a local function, called virgl_resource_destroy(), that is used
> >>>>>          to release a vgpu resource on error paths and in resource_unref.
> >>>>>
> >>>>>        - Remove virtio_gpu_virgl_resource_unmap from virtio_gpu_cleanup_mapping(),
> >>>>>          since this function won't be called on blob resources and also because
> >>>>>          blob resources are unmapped via virgl_cmd_resource_unmap_blob().
> >>>>>
> >>>>>        - In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
> >>>>>          and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
> >>>>>          has been fully initialized.
> >>>>>
> >>>>>        - Memory region has a different life-cycle from virtio gpu resources
> >>>>>          i.e. cannot be released synchronously along with the vgpu resource.
> >>>>>          So, here the field "region" was changed to a pointer that will be
> >>>>>          released automatically once the memory region is unparented and all
> >>>>>          of its references have been released.
> >>>>>          Also, since the pointer can be used to indicate whether the blob
> >>>>>          is mapped, the explicit field "mapped" was removed.
> >>>>>
> >>>>>        - In virgl_cmd_resource_map_blob(), add check on the value of
> >>>>>          res->region, to prevent being called twice on the same resource.
> >>>>>
> >>>>>        - Remove direct references to parent_obj.
> >>>>>
> >>>>>        - Separate declarations from code.
> >>>>>
> >>>>>     hw/display/virtio-gpu-virgl.c  | 213 +++++++++++++++++++++++++++++++++
> >>>>>     hw/display/virtio-gpu.c        |   4 +-
> >>>>>     include/hw/virtio/virtio-gpu.h |   5 +
> >>>>>     meson.build                    |   4 +
> >>>>>     4 files changed, 225 insertions(+), 1 deletion(-)
> >>>>>
> >>>>> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> >>>>> index 312953ec16..17b634d4ee 100644
> >>>>> --- a/hw/display/virtio-gpu-virgl.c
> >>>>> +++ b/hw/display/virtio-gpu-virgl.c
> >>>>> @@ -17,6 +17,7 @@
> >>>>>     #include "trace.h"
> >>>>>     #include "hw/virtio/virtio.h"
> >>>>>     #include "hw/virtio/virtio-gpu.h"
> >>>>> +#include "hw/virtio/virtio-gpu-bswap.h"
> >>>>>     
> >>>>>     #include "ui/egl-helpers.h"
> >>>>>     
> >>>>> @@ -78,9 +79,24 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
> >>>>>         virgl_renderer_resource_create(&args, NULL, 0);
> >>>>>     }
> >>>>>     
> >>>>> +static void virgl_resource_destroy(VirtIOGPU *g,
> >>>>> +                                   struct virtio_gpu_simple_resource *res)
> >>>>> +{
> >>>>> +    if (!res)
> >>>>> +        return;
> >>>>> +
> >>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
> >>>>> +
> >>>>> +    virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
> >>>>> +    g_free(res->addrs);
> >>>>> +
> >>>>> +    g_free(res);
> >>>>> +}
> >>>>> +
> >>>>>     static void virgl_cmd_resource_unref(VirtIOGPU *g,
> >>>>>                                          struct virtio_gpu_ctrl_command *cmd)
> >>>>>     {
> >>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>>         struct virtio_gpu_resource_unref unref;
> >>>>>         struct iovec *res_iovs = NULL;
> >>>>>         int num_iovs = 0;
> >>>>> @@ -88,13 +104,22 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
> >>>>>         VIRTIO_GPU_FILL_CMD(unref);
> >>>>>         trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> >>>>>     
> >>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
> >>>>> +
> >>>>>         virgl_renderer_resource_detach_iov(unref.resource_id,
> >>>>>                                            &res_iovs,
> >>>>>                                            &num_iovs);
> >>>>>         if (res_iovs != NULL && num_iovs != 0) {
> >>>>>             virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
> >>>>> +        if (res) {
> >>>>> +            res->iov = NULL;
> >>>>> +            res->iov_cnt = 0;
> >>>>> +        }
> >>>>>         }
> >>>>> +
> >>>>>         virgl_renderer_resource_unref(unref.resource_id);
> >>>>> +
> >>>>> +    virgl_resource_destroy(g, res);
> >>>>>     }
> >>>>>     
> >>>>>     static void virgl_cmd_context_create(VirtIOGPU *g,
> >>>>> @@ -426,6 +451,183 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
> >>>>>         g_free(resp);
> >>>>>     }
> >>>>>     
> >>>>> +#ifdef HAVE_VIRGL_RESOURCE_BLOB
> >>>>> +
> >>>>> +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
> >>>>> +                                           struct virtio_gpu_ctrl_command *cmd)
> >>>>> +{
> >>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>> +    struct virtio_gpu_resource_create_blob cblob;
> >>>>> +    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
> >>>>> +    int ret;
> >>>>> +
> >>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
> >>>>> +    virtio_gpu_create_blob_bswap(&cblob);
> >>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> >>>>> +
> >>>>> +    if (cblob.resource_id == 0) {
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> >>>>> +                      __func__);
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    res = virtio_gpu_find_resource(g, cblob.resource_id);
> >>>>> +    if (res) {
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
> >>>>> +                      __func__, cblob.resource_id);
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> >>>>> +    if (!res) {
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    res->resource_id = cblob.resource_id;
> >>>>> +    res->blob_size = cblob.size;
> >>>>> +
> >>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> >>>>> +        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
> >>>>> +                                            cmd, &res->addrs, &res->iov,
> >>>>> +                                            &res->iov_cnt);
> >>>>> +        if (!ret) {
> >>>>> +            g_free(res);
> >>>>> +            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >>>>> +            return;
> >>>>> +        }
> >>>>> +    }
> >>>>> +
> >>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >>>>> +
> >>>>> +    virgl_args.res_handle = cblob.resource_id;
> >>>>> +    virgl_args.ctx_id = cblob.hdr.ctx_id;
> >>>>> +    virgl_args.blob_mem = cblob.blob_mem;
> >>>>> +    virgl_args.blob_id = cblob.blob_id;
> >>>>> +    virgl_args.blob_flags = cblob.blob_flags;
> >>>>> +    virgl_args.size = cblob.size;
> >>>>> +    virgl_args.iovecs = res->iov;
> >>>>> +    virgl_args.num_iovs = res->iov_cnt;
> >>>>> +
> >>>>> +    ret = virgl_renderer_resource_create_blob(&virgl_args);
> >>>>> +    if (ret) {
> >>>>> +        virgl_resource_destroy(g, res);
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
> >>>>> +                      __func__, strerror(-ret));
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >>>>> +    }
> >>>>> +}
> >>>>> +
> >>>>> +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
> >>>>> +                                        struct virtio_gpu_ctrl_command *cmd)
> >>>>> +{
> >>>>> +    struct virtio_gpu_simple_resource *res;
> >>>>> +    struct virtio_gpu_resource_map_blob mblob;
> >>>>> +    int ret;
> >>>>> +    void *data;
> >>>>> +    uint64_t size;
> >>>>> +    struct virtio_gpu_resp_map_info resp;
> >>>>> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> >>>>> +
> >>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
> >>>>> +    virtio_gpu_map_blob_bswap(&mblob);
> >>>>> +
> >>>>> +    if (mblob.resource_id == 0) {
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> >>>>> +                      __func__);
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
> >>>>> +    if (!res) {
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
> >>>>> +                      __func__, mblob.resource_id);
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +    if (res->region) {
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
> >>>>> +		      __func__, mblob.resource_id);
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
> >>>>> +    if (ret) {
> >>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
> >>>>> +                      __func__, strerror(-ret));
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +
> >>>>> +    res->region = g_new0(MemoryRegion, 1);
> >>>>> +    if (!res->region) {
> >>>>> +        virgl_renderer_resource_unmap(res->resource_id);
> >>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> >>>>> +        return;
> >>>>> +    }
> >>>>> +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
> >>>>
> >>>> I think memory_region_init_ram_ptr() should be used instead.
> >>>
> >>> Would you mind explaining the reason?
> >>
> >> The documentation comment of memory_region_init_ram_device_ptr() says:
> >>   > A RAM device represents a mapping to a physical device, such as to a
> >>   > PCI MMIO BAR of an vfio-pci assigned device.  The memory region may be
> >>   > mapped into the VM address space and access to the region will modify
> >>   > memory directly.  However, the memory region should not be included in
> >>   > a memory dump (device may not be enabled/mapped at the time of the
> >>   > dump), and operations incompatible with manipulating MMIO should be
> >>   > avoided.  Replaces skip_dump flag.
> >>
> >> In my understanding it's not MMIO so memory_region_init_ram_ptr() should
> >> be used instead.
> >>
> > 
> > It may actually be video memory (MMIO) or system memory here. :-)
> > 
> > We get the host memory for the blob from virgl_renderer_resource_map()
> > in virglrenderer. virglrenderer distinguishes two memory types,
> > VIRGL_RESOURCE_FD_DMABUF and VIRGL_RESOURCE_FD_SHM. The shmem type is
> > system memory that does not need to be GPU accessible, while dmabuf is
> > memory that must be GPU accessible. The host kernel amdgpu driver
> > registers a dma-buf to export the resource buffer for sharing, and the
> > dma-buf may back video memory exposed through the amdgpu PCIe BAR0. We
> > also have system memory (GTT) that can be mapped into GPU page tables
> > to make it GPU accessible.
> > 
> > 07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev c1) (prog-if 00 [VGA controller])
> >          Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Renoir
> >          Flags: bus master, fast devsel, latency 0, IRQ 56
> >          Memory at fcc0000000 (64-bit, prefetchable) [size=256M]
> >          Memory at fcd0000000 (64-bit, prefetchable) [size=2M]
> >          I/O ports at 1000 [size=256]
> >          Memory at d0400000 (32-bit, non-prefetchable) [size=512K]
> >          Capabilities: <access denied>
> >          Kernel driver in use: amdgpu
> >          Kernel modules: amdgpu
> 
> In my understanding it is not relevant whether the memory is backed by a
> device or not. Here MMIO means memory-mapped I/O registers that have
> side effects on access. Reading such a register may acknowledge an
> interrupt, for example, and the unit of writes may also matter.
> memory_region_init_ram_device_ptr() ensures that no spurious memory read
> happens and that word-sized accesses are preserved.
> 
> Neither concern applies to video memory, even if it lies in separate
> device memory. In this sense the name "memory_region_init_ram_device_ptr"
> is somewhat of a misnomer.
> 

OK. Thanks for the clarification.

After tracing the code: with memory_region_init_ram_device_ptr(), writes
go through memory_region_dispatch_write() into
memory_region_ram_device_write(), which is registered via
ram_device_mem_ops and writes ram_block->host at the given offset dword
by dword. With memory_region_init_ram_ptr(), flatview_write_continue()
instead uses memmove() to copy the buffer into the RAMBlock when we
write the blob memory. May I ask whether you mean that the memmove()
path could cause an interrupt or a spurious memory access?

        } else if (!memory_access_is_direct(mr, true)) {
            release_lock |= prepare_mmio_access(mr);
            l = memory_access_size(mr, l, addr1);
            /* XXX: could force current_cpu to NULL to avoid
               potential bugs */
            val = ldn_he_p(buf, l);
            result |= memory_region_dispatch_write(mr, addr1, val,
                                                   size_memop(l), attrs);
        } else {
            /* RAM case */
            ram_ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
            memmove(ram_ptr, buf, l);
            invalidate_and_set_dirty(mr, addr1, l);
        }

Thanks,
Ray


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands
  2023-09-06  7:56             ` Huang Rui
@ 2023-09-06 14:16               ` Akihiko Odaki
  0 siblings, 0 replies; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-06 14:16 UTC (permalink / raw)
  To: Huang Rui
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/06 16:56, Huang Rui wrote:
> On Wed, Sep 06, 2023 at 11:39:09AM +0800, Akihiko Odaki wrote:
>> On 2023/09/06 12:09, Huang Rui wrote:
>>> On Tue, Sep 05, 2023 at 05:20:43PM +0800, Akihiko Odaki wrote:
>>>> On 2023/09/05 18:08, Huang Rui wrote:
>>>>> On Thu, Aug 31, 2023 at 06:24:32PM +0800, Akihiko Odaki wrote:
>>>>>> On 2023/08/31 18:32, Huang Rui wrote:
>>>>>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>>>>
>>>>>>> Support BLOB resources creation, mapping and unmapping by calling the
>>>>>>> new stable virglrenderer 0.10 interface. Only enabled when available and
>>>>>>> via the blob config. E.g. -device virtio-vga-gl,blob=true
>>>>>>>
>>>>>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>>>>> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
>>>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>>>> ---
>>>>>>>
>>>>>>> v1->v2:
>>>>>>>         - Remove unused #include "hw/virtio/virtio-iommu.h"
>>>>>>>
>>>>>>>         - Add a local function, called virgl_resource_destroy(), that is used
>>>>>>>           to release a vgpu resource on error paths and in resource_unref.
>>>>>>>
>>>>>>>         - Remove virtio_gpu_virgl_resource_unmap from virtio_gpu_cleanup_mapping(),
>>>>>>>           since this function won't be called on blob resources and also because
>>>>>>>           blob resources are unmapped via virgl_cmd_resource_unmap_blob().
>>>>>>>
>>>>>>>         - In virgl_cmd_resource_create_blob(), do proper cleanup in error paths
>>>>>>>           and move QTAILQ_INSERT_HEAD(&g->reslist, res, next) after the resource
>>>>>>>           has been fully initialized.
>>>>>>>
>>>>>>>         - Memory region has a different life-cycle from virtio gpu resources
>>>>>>>           i.e. cannot be released synchronously along with the vgpu resource.
>>>>>>>           So, here the field "region" was changed to a pointer that will be
>>>>>>>           released automatically once the memory region is unparented and all
>>>>>>>           of its references have been released.
>>>>>>>           Also, since the pointer can be used to indicate whether the blob
>>>>>>>           is mapped, the explicit field "mapped" was removed.
>>>>>>>
>>>>>>>         - In virgl_cmd_resource_map_blob(), add check on the value of
>>>>>>>           res->region, to prevent beeing called twice on the same resource.
>>>>>>>
>>>>>>>         - Remove direct references to parent_obj.
>>>>>>>
>>>>>>>         - Separate declarations from code.
>>>>>>>
>>>>>>>      hw/display/virtio-gpu-virgl.c  | 213 +++++++++++++++++++++++++++++++++
>>>>>>>      hw/display/virtio-gpu.c        |   4 +-
>>>>>>>      include/hw/virtio/virtio-gpu.h |   5 +
>>>>>>>      meson.build                    |   4 +
>>>>>>>      4 files changed, 225 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
>>>>>>> index 312953ec16..17b634d4ee 100644
>>>>>>> --- a/hw/display/virtio-gpu-virgl.c
>>>>>>> +++ b/hw/display/virtio-gpu-virgl.c
>>>>>>> @@ -17,6 +17,7 @@
>>>>>>>      #include "trace.h"
>>>>>>>      #include "hw/virtio/virtio.h"
>>>>>>>      #include "hw/virtio/virtio-gpu.h"
>>>>>>> +#include "hw/virtio/virtio-gpu-bswap.h"
>>>>>>>      
>>>>>>>      #include "ui/egl-helpers.h"
>>>>>>>      
>>>>>>> @@ -78,9 +79,24 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
>>>>>>>          virgl_renderer_resource_create(&args, NULL, 0);
>>>>>>>      }
>>>>>>>      
>>>>>>> +static void virgl_resource_destroy(VirtIOGPU *g,
>>>>>>> +                                   struct virtio_gpu_simple_resource *res)
>>>>>>> +{
>>>>>>> +    if (!res)
>>>>>>> +        return;
>>>>>>> +
>>>>>>> +    QTAILQ_REMOVE(&g->reslist, res, next);
>>>>>>> +
>>>>>>> +    virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
>>>>>>> +    g_free(res->addrs);
>>>>>>> +
>>>>>>> +    g_free(res);
>>>>>>> +}
>>>>>>> +
>>>>>>>      static void virgl_cmd_resource_unref(VirtIOGPU *g,
>>>>>>>                                           struct virtio_gpu_ctrl_command *cmd)
>>>>>>>      {
>>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>>>          struct virtio_gpu_resource_unref unref;
>>>>>>>          struct iovec *res_iovs = NULL;
>>>>>>>          int num_iovs = 0;
>>>>>>> @@ -88,13 +104,22 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
>>>>>>>          VIRTIO_GPU_FILL_CMD(unref);
>>>>>>>          trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>>>>>>>      
>>>>>>> +    res = virtio_gpu_find_resource(g, unref.resource_id);
>>>>>>> +
>>>>>>>          virgl_renderer_resource_detach_iov(unref.resource_id,
>>>>>>>                                             &res_iovs,
>>>>>>>                                             &num_iovs);
>>>>>>>          if (res_iovs != NULL && num_iovs != 0) {
>>>>>>>              virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
>>>>>>> +        if (res) {
>>>>>>> +            res->iov = NULL;
>>>>>>> +            res->iov_cnt = 0;
>>>>>>> +        }
>>>>>>>          }
>>>>>>> +
>>>>>>>          virgl_renderer_resource_unref(unref.resource_id);
>>>>>>> +
>>>>>>> +    virgl_resource_destroy(g, res);
>>>>>>>      }
>>>>>>>      
>>>>>>>      static void virgl_cmd_context_create(VirtIOGPU *g,
>>>>>>> @@ -426,6 +451,183 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
>>>>>>>          g_free(resp);
>>>>>>>      }
>>>>>>>      
>>>>>>> +#ifdef HAVE_VIRGL_RESOURCE_BLOB
>>>>>>> +
>>>>>>> +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
>>>>>>> +                                           struct virtio_gpu_ctrl_command *cmd)
>>>>>>> +{
>>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>>> +    struct virtio_gpu_resource_create_blob cblob;
>>>>>>> +    struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
>>>>>>> +    int ret;
>>>>>>> +
>>>>>>> +    VIRTIO_GPU_FILL_CMD(cblob);
>>>>>>> +    virtio_gpu_create_blob_bswap(&cblob);
>>>>>>> +    trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
>>>>>>> +
>>>>>>> +    if (cblob.resource_id == 0) {
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
>>>>>>> +                      __func__);
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    res = virtio_gpu_find_resource(g, cblob.resource_id);
>>>>>>> +    if (res) {
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
>>>>>>> +                      __func__, cblob.resource_id);
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>>>>>>> +    if (!res) {
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    res->resource_id = cblob.resource_id;
>>>>>>> +    res->blob_size = cblob.size;
>>>>>>> +
>>>>>>> +    if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>>>>>>> +        ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
>>>>>>> +                                            cmd, &res->addrs, &res->iov,
>>>>>>> +                                            &res->iov_cnt);
>>>>>>> +        if (!ret) {
>>>>>>> +            g_free(res);
>>>>>>> +            cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>>>>>> +            return;
>>>>>>> +        }
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>>>>>>> +
>>>>>>> +    virgl_args.res_handle = cblob.resource_id;
>>>>>>> +    virgl_args.ctx_id = cblob.hdr.ctx_id;
>>>>>>> +    virgl_args.blob_mem = cblob.blob_mem;
>>>>>>> +    virgl_args.blob_id = cblob.blob_id;
>>>>>>> +    virgl_args.blob_flags = cblob.blob_flags;
>>>>>>> +    virgl_args.size = cblob.size;
>>>>>>> +    virgl_args.iovecs = res->iov;
>>>>>>> +    virgl_args.num_iovs = res->iov_cnt;
>>>>>>> +
>>>>>>> +    ret = virgl_renderer_resource_create_blob(&virgl_args);
>>>>>>> +    if (ret) {
>>>>>>> +        virgl_resource_destroy(g, res);
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
>>>>>>> +                      __func__, strerror(-ret));
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>>>>>>> +    }
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
>>>>>>> +                                        struct virtio_gpu_ctrl_command *cmd)
>>>>>>> +{
>>>>>>> +    struct virtio_gpu_simple_resource *res;
>>>>>>> +    struct virtio_gpu_resource_map_blob mblob;
>>>>>>> +    int ret;
>>>>>>> +    void *data;
>>>>>>> +    uint64_t size;
>>>>>>> +    struct virtio_gpu_resp_map_info resp;
>>>>>>> +    VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
>>>>>>> +
>>>>>>> +    VIRTIO_GPU_FILL_CMD(mblob);
>>>>>>> +    virtio_gpu_map_blob_bswap(&mblob);
>>>>>>> +
>>>>>>> +    if (mblob.resource_id == 0) {
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
>>>>>>> +                      __func__);
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    res = virtio_gpu_find_resource(g, mblob.resource_id);
>>>>>>> +    if (!res) {
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
>>>>>>> +                      __func__, mblob.resource_id);
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +    if (res->region) {
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
>>>>>>> +		      __func__, mblob.resource_id);
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
>>>>>>> +    if (ret) {
>>>>>>> +        qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
>>>>>>> +                      __func__, strerror(-ret));
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    res->region = g_new0(MemoryRegion, 1);
>>>>>>> +    if (!res->region) {
>>>>>>> +        virgl_renderer_resource_unmap(res->resource_id);
>>>>>>> +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +    memory_region_init_ram_device_ptr(res->region, OBJECT(g), NULL, size, data);
>>>>>>
>>>>>> I think memory_region_init_ram_ptr() should be used instead.
>>>>>
>>>>> Would you mind to explain the reason?
>>>>
>>>> The documentation comment of memory_region_init_ram_device_ptr() says:
>>>>    > A RAM device represents a mapping to a physical device, such as to a
>>>>    > PCI MMIO BAR of an vfio-pci assigned device.  The memory region may be
>>>>    > mapped into the VM address space and access to the region will modify
>>>>    > memory directly.  However, the memory region should not be included in
>>>>    > a memory dump (device may not be enabled/mapped at the time of the
>>>>    > dump), and operations incompatible with manipulating MMIO should be
>>>>    > avoided.  Replaces skip_dump flag.
>>>>
>>>> In my understanding it's not MMIO so memory_region_init_ram_ptr() should
>>>> be used instead.
>>>>
>>>
>>> It may actually be video memory (MMIO) or system memory here. :-)
>>>
>>> We get the host memory for the blob via virgl_renderer_resource_map()
>>> in virglrenderer. There, the two types VIRGL_RESOURCE_FD_DMABUF and
>>> VIRGL_RESOURCE_FD_SHM indicate the memory type. Shmem is system memory
>>> that does not need to be GPU accessible, while dmabuf is memory that
>>> must be GPU accessible. The host kernel amdgpu driver registers a
>>> dma-buf to export the resource buffer for sharing, and those dma-buf
>>> buffers may contain video memory exposed through the amdgpu PCIe BAR0.
>>> We also have system memory (GTT) that can be mapped into GPU page
>>> tables to be GPU accessible.
>>>
>>> 07:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev c1) (prog-if 00 [VGA controller])
>>>           Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Renoir
>>>           Flags: bus master, fast devsel, latency 0, IRQ 56
>>>           Memory at fcc0000000 (64-bit, prefetchable) [size=256M]
>>>           Memory at fcd0000000 (64-bit, prefetchable) [size=2M]
>>>           I/O ports at 1000 [size=256]
>>>           Memory at d0400000 (32-bit, non-prefetchable) [size=512K]
>>>           Capabilities: <access denied>
>>>           Kernel driver in use: amdgpu
>>>           Kernel modules: amdgpu
>>
>> In my understanding it is not relevant whether the memory is backed by
>> a device or not. Here MMIO means memory-mapped I/O registers that have
>> side effects on access. Reading such a register may acknowledge an
>> interrupt, for example, and the unit of writes may also matter.
>> memory_region_init_ram_device_ptr() ensures no spurious memory reads
>> happen and that word accesses are preserved.
>>
>> These concerns do not apply to video memory even if it lies in separate
>> device memory. In this sense the name "memory_region_init_ram_device_ptr"
>> is somewhat a misnomer.
>>
> 
> OK. Thanks for the clarification.
> 
> After tracing the code: with memory_region_init_ram_device_ptr(), writes
> go through memory_region_dispatch_write() into
> memory_region_ram_device_write(), registered via ram_device_mem_ops,
> which writes ram_block->host at the given offset dword by dword. With
> memory_region_init_ram_ptr(), flatview_write_continue() uses memmove()
> to copy the buffer into the RAMBlock when we write the blob memory. May
> I know whether you mean the memmove() may cause an interrupt or a
> spurious memory access?

memmove() can split one word write into smaller writes, and that is 
implementation dependent.

git blame is your friend if you want to know more. In particular, commits 
21e00fa55f ("memory: Replace skip_dump flag with "ram_device"") and 
4a2e242bbb ("memory: Don't use memcpy for ram_device regions"; the 
memmove call used to be memcpy) may interest you.
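
To make the access-size point concrete, here is a small self-contained
sketch (plain C, not QEMU code; the function names are made up for
illustration) of the two write paths being discussed:

```c
#include <stdint.h>
#include <string.h>

/* ram_device-style path: the guest's aligned 32-bit store reaches the
 * backing host pointer as one 32-bit store, so access size is preserved. */
static void ram_device_write32(void *host, uint64_t offset, uint32_t val)
{
    *(volatile uint32_t *)((char *)host + offset) = val;
}

/* ram_ptr-style path: flatview_write_continue() ends in memmove(), which
 * is free to split those 4 bytes into smaller stores. The final contents
 * are the same, but the access pattern on the backing memory is not
 * guaranteed, which is what matters for real MMIO registers. */
static void ram_ptr_write(void *host, uint64_t offset,
                          const void *buf, size_t len)
{
    memmove((char *)host + offset, buf, len);
}
```

For ordinary RAM (including mapped video memory) both paths end with the
same bytes in place, which is why the access-size guarantee buys nothing
here.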

Regards,
Akihiko Odaki


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-08-31 10:36   ` Akihiko Odaki
@ 2023-09-09  9:09     ` Huang Rui
  2023-09-10  4:21       ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-09-09  9:09 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 06:36:57PM +0800, Akihiko Odaki wrote:
> On 2023/08/31 18:32, Huang Rui wrote:
> > From: Antonio Caggiano <antonio.caggiano@collabora.com>
> > 
> > Enable resource UUID feature and implement command resource assign UUID.
> > This is done by introducing a hash table to map resource IDs to their
> > UUIDs.
> 
> The hash table does not seem to be stored during migration.

May I know whether you mean the g->resource_uuids table data in the
VirtIOGPU device should be stored in virtio_gpu_save() and restored in
virtio_gpu_load() for virtio migration?

> 
> > 
> > Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > ---
> > 
> > v1->v2:
> >     - Separate declarations from code.
> > 
> >   hw/display/trace-events        |  1 +
> >   hw/display/virtio-gpu-base.c   |  2 ++
> >   hw/display/virtio-gpu-virgl.c  | 21 +++++++++++++++++
> >   hw/display/virtio-gpu.c        | 41 ++++++++++++++++++++++++++++++++++
> >   include/hw/virtio/virtio-gpu.h |  4 ++++
> >   5 files changed, 69 insertions(+)
> > 
> > diff --git a/hw/display/trace-events b/hw/display/trace-events
> > index 2336a0ca15..54d6894c59 100644
> > --- a/hw/display/trace-events
> > +++ b/hw/display/trace-events
> > @@ -41,6 +41,7 @@ virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" P
> >   virtio_gpu_cmd_res_unref(uint32_t res) "res 0x%x"
> >   virtio_gpu_cmd_res_back_attach(uint32_t res) "res 0x%x"
> >   virtio_gpu_cmd_res_back_detach(uint32_t res) "res 0x%x"
> > +virtio_gpu_cmd_res_assign_uuid(uint32_t res) "res 0x%x"
> >   virtio_gpu_cmd_res_xfer_toh_2d(uint32_t res) "res 0x%x"
> >   virtio_gpu_cmd_res_xfer_toh_3d(uint32_t res) "res 0x%x"
> >   virtio_gpu_cmd_res_xfer_fromh_3d(uint32_t res) "res 0x%x"
> > diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
> > index 4f2b0ba1f3..f44388715c 100644
> > --- a/hw/display/virtio-gpu-base.c
> > +++ b/hw/display/virtio-gpu-base.c
> > @@ -236,6 +236,8 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
> >           features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
> >       }
> >   
> > +    features |= (1 << VIRTIO_GPU_F_RESOURCE_UUID);
> > +
> >       return features;
> >   }
> >   
> > diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> > index 17b634d4ee..1a996a08fc 100644
> > --- a/hw/display/virtio-gpu-virgl.c
> > +++ b/hw/display/virtio-gpu-virgl.c
> > @@ -36,6 +36,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
> >   {
> >       struct virtio_gpu_resource_create_2d c2d;
> >       struct virgl_renderer_resource_create_args args;
> > +    struct virtio_gpu_simple_resource *res;
> >   
> >       VIRTIO_GPU_FILL_CMD(c2d);
> >       trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> > @@ -53,6 +54,14 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
> >       args.nr_samples = 0;
> >       args.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
> >       virgl_renderer_resource_create(&args, NULL, 0);
> > +
> > +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> > +    if (!res) {
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> > +        return;
> 
> virglrenderer thinks the resource is alive in such a situation.

Yes, so we can move the resource allocation in front of the virglrenderer
resource creation below:

virgl_renderer_resource_create(&args, NULL, 0);

> 
> > +    }
> > +    res->resource_id = c2d.resource_id;
> > +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >   }
> >   
> >   static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
> > @@ -60,6 +69,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
> >   {
> >       struct virtio_gpu_resource_create_3d c3d;
> >       struct virgl_renderer_resource_create_args args;
> > +    struct virtio_gpu_simple_resource *res;
> >   
> >       VIRTIO_GPU_FILL_CMD(c3d);
> >       trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> > @@ -77,6 +87,14 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
> >       args.nr_samples = c3d.nr_samples;
> >       args.flags = c3d.flags;
> >       virgl_renderer_resource_create(&args, NULL, 0);
> > +
> > +    res = g_new0(struct virtio_gpu_simple_resource, 1);
> > +    if (!res) {
> > +        cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
> > +        return;
> > +    }
> > +    res->resource_id = c3d.resource_id;
> > +    QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> >   }
> >   
> >   static void virgl_resource_destroy(VirtIOGPU *g,
> > @@ -682,6 +700,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
> >           /* TODO add security */
> >           virgl_cmd_ctx_detach_resource(g, cmd);
> >           break;
> > +    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
> > +        virtio_gpu_resource_assign_uuid(g, cmd);
> > +        break;
> >       case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
> >           virgl_cmd_get_capset_info(g, cmd);
> >           break;
> > diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
> > index cc4c1f81bb..770e4747e3 100644
> > --- a/hw/display/virtio-gpu.c
> > +++ b/hw/display/virtio-gpu.c
> > @@ -966,6 +966,37 @@ virtio_gpu_resource_detach_backing(VirtIOGPU *g,
> >       virtio_gpu_cleanup_mapping(g, res);
> >   }
> >   
> > +void virtio_gpu_resource_assign_uuid(VirtIOGPU *g,
> > +                                     struct virtio_gpu_ctrl_command *cmd)
> > +{
> > +    struct virtio_gpu_simple_resource *res;
> > +    struct virtio_gpu_resource_assign_uuid assign;
> > +    struct virtio_gpu_resp_resource_uuid resp;
> > +    QemuUUID *uuid = NULL;
> 
> This initialization is unnecessary.

Yes, will remove it.

> 
> > +
> > +    VIRTIO_GPU_FILL_CMD(assign);
> > +    virtio_gpu_bswap_32(&assign, sizeof(assign));
> > +    trace_virtio_gpu_cmd_res_assign_uuid(assign.resource_id);
> > +
> > +    res = virtio_gpu_find_check_resource(g, assign.resource_id, false, __func__, &cmd->error);
> > +    if (!res) {
> > +        return;
> > +    }
> > +
> > +    memset(&resp, 0, sizeof(resp));
> > +    resp.hdr.type = VIRTIO_GPU_RESP_OK_RESOURCE_UUID;
> > +
> > +    uuid = g_hash_table_lookup(g->resource_uuids, GUINT_TO_POINTER(assign.resource_id));
> > +    if (!uuid) {
> > +        uuid = g_new(QemuUUID, 1);
> > +        qemu_uuid_generate(uuid);
> > +        g_hash_table_insert(g->resource_uuids, GUINT_TO_POINTER(assign.resource_id), uuid);
> > +    }
> > +
> > +    memcpy(resp.uuid, uuid, sizeof(QemuUUID));
> > +    virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> > +}
> > +
> >   void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
> >                                      struct virtio_gpu_ctrl_command *cmd)
> >   {
> > @@ -1014,6 +1045,9 @@ void virtio_gpu_simple_process_cmd(VirtIOGPU *g,
> >       case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
> >           virtio_gpu_resource_detach_backing(g, cmd);
> >           break;
> > +    case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
> > +        virtio_gpu_resource_assign_uuid(g, cmd);
> > +        break;
> >       default:
> >           cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> >           break;
> > @@ -1393,12 +1427,15 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
> >       QTAILQ_INIT(&g->reslist);
> >       QTAILQ_INIT(&g->cmdq);
> >       QTAILQ_INIT(&g->fenceq);
> > +
> > +    g->resource_uuids = g_hash_table_new_full(NULL, NULL, NULL, g_free);
> >   }
> >   
> >   static void virtio_gpu_device_unrealize(DeviceState *qdev)
> >   {
> >       VirtIOGPU *g = VIRTIO_GPU(qdev);
> >   
> > +    g_hash_table_destroy(g->resource_uuids);
> >       g_clear_pointer(&g->ctrl_bh, qemu_bh_delete);
> >       g_clear_pointer(&g->cursor_bh, qemu_bh_delete);
> >       g_clear_pointer(&g->reset_bh, qemu_bh_delete);
> > @@ -1452,6 +1489,10 @@ void virtio_gpu_reset(VirtIODevice *vdev)
> >           g_free(cmd);
> >       }
> >   
> > +    if (g->resource_uuids) {
> 
> Isn't g->resource_uuids always non-NULL?

Yes, right. The if check is indeed superfluous. Will remove it.

Thanks,
Ray

> 
> > +        g_hash_table_remove_all(g->resource_uuids);
> > +    }
> > +
> >       virtio_gpu_base_reset(VIRTIO_GPU_BASE(vdev));
> >   }
> >   
> > diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> > index b9adc28071..aa94b1b697 100644
> > --- a/include/hw/virtio/virtio-gpu.h
> > +++ b/include/hw/virtio/virtio-gpu.h
> > @@ -208,6 +208,8 @@ struct VirtIOGPU {
> >           QTAILQ_HEAD(, VGPUDMABuf) bufs;
> >           VGPUDMABuf *primary[VIRTIO_GPU_MAX_SCANOUTS];
> >       } dmabuf;
> > +
> > +    GHashTable *resource_uuids;
> >   };
> >   
> >   struct VirtIOGPUClass {
> > @@ -285,6 +287,8 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
> >                                       struct iovec *iov, uint32_t count);
> >   void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
> >                                   struct virtio_gpu_simple_resource *res);
> > +void virtio_gpu_resource_assign_uuid(VirtIOGPU *g,
> > +                                     struct virtio_gpu_ctrl_command *cmd);
> >   void virtio_gpu_process_cmdq(VirtIOGPU *g);
> >   void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
> >   void virtio_gpu_reset(VirtIODevice *vdev);



* Re: [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset
  2023-08-31 10:43   ` Akihiko Odaki
@ 2023-09-09  9:29     ` Huang Rui
  2023-09-10  4:32       ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui @ 2023-09-09  9:29 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 06:43:17PM +0800, Akihiko Odaki wrote:
> On 2023/08/31 18:32, Huang Rui wrote:
> > From: Antonio Caggiano <antonio.caggiano@collabora.com>
> > 
> > Add support for the Venus capset, which enables Vulkan support through
> > the Venus Vulkan driver for virtio-gpu.
> > 
> > Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > ---
> >   hw/display/virtio-gpu-virgl.c               | 21 +++++++++++++++++----
> >   include/standard-headers/linux/virtio_gpu.h |  2 ++
> >   2 files changed, 19 insertions(+), 4 deletions(-)
> > 
> > diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> > index 1a996a08fc..83cd8c8fd0 100644
> > --- a/hw/display/virtio-gpu-virgl.c
> > +++ b/hw/display/virtio-gpu-virgl.c
> > @@ -437,6 +437,11 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
> >           virgl_renderer_get_cap_set(resp.capset_id,
> >                                      &resp.capset_max_version,
> >                                      &resp.capset_max_size);
> > +    } else if (info.capset_index == 2) {
> > +        resp.capset_id = VIRTIO_GPU_CAPSET_VENUS;
> > +        virgl_renderer_get_cap_set(resp.capset_id,
> > +                                   &resp.capset_max_version,
> > +                                   &resp.capset_max_size);
> >       } else {
> >           resp.capset_max_version = 0;
> >           resp.capset_max_size = 0;
> > @@ -901,10 +906,18 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
> >   
> >   int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
> >   {
> > -    uint32_t capset2_max_ver, capset2_max_size;
> > +    uint32_t capset2_max_ver, capset2_max_size, num_capsets;
> > +    num_capsets = 1;
> > +
> >       virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
> > -                              &capset2_max_ver,
> > -                              &capset2_max_size);
> > +                               &capset2_max_ver,
> > +                               &capset2_max_size);
> > +    num_capsets += capset2_max_ver ? 1 : 0;
> > +
> > +    virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
> > +                               &capset2_max_ver,
> > +                               &capset2_max_size);
> > +    num_capsets += capset2_max_size ? 1 : 0;
> >   
> > -    return capset2_max_ver ? 2 : 1;
> > +    return num_capsets;
> >   }
> > diff --git a/include/standard-headers/linux/virtio_gpu.h b/include/standard-headers/linux/virtio_gpu.h
> > index 2da48d3d4c..2db643ed8f 100644
> > --- a/include/standard-headers/linux/virtio_gpu.h
> > +++ b/include/standard-headers/linux/virtio_gpu.h
> > @@ -309,6 +309,8 @@ struct virtio_gpu_cmd_submit {
> >   
> >   #define VIRTIO_GPU_CAPSET_VIRGL 1
> >   #define VIRTIO_GPU_CAPSET_VIRGL2 2
> > +/* 3 is reserved for gfxstream */
> > +#define VIRTIO_GPU_CAPSET_VENUS 4
> 
> This file is synced with scripts/update-linux-headers.sh and should not 
> be modified.

Should I add the macro to the kernel's include/uapi/linux/virtio_gpu.h?

They are used under VIRGL_RENDERER_UNSTABLE_APIS in virglrenderer:

enum virgl_renderer_capset {
   VIRGL_RENDERER_CAPSET_VIRGL                   = 1,
   VIRGL_RENDERER_CAPSET_VIRGL2                  = 2,
   /* 3 is reserved for gfxstream */
   VIRGL_RENDERER_CAPSET_VENUS                   = 4,
   /* 5 is reserved for cross-domain */
   VIRGL_RENDERER_CAPSET_DRM                     = 6,
};
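
As a side note, the capset_index to capset-id mapping in
virgl_cmd_get_capset_info() could also be expressed as a table rather than
an if/else chain; a minimal sketch (illustrative only, not the actual
patch; the values follow the enum above):

```c
#include <stdint.h>

enum {
    CAPSET_VIRGL  = 1,
    CAPSET_VIRGL2 = 2,
    /* 3 is reserved for gfxstream */
    CAPSET_VENUS  = 4,
};

/* Guest-visible capset_index -> capset id, in enumeration order. */
static const uint32_t capset_ids[] = {
    CAPSET_VIRGL,   /* capset_index == 0 */
    CAPSET_VIRGL2,  /* capset_index == 1 */
    CAPSET_VENUS,   /* capset_index == 2 */
};

/* Returns 0 for an out-of-range index, mirroring the patch's fallback
 * of reporting capset_max_version/size as 0. */
static uint32_t capset_id_for_index(uint32_t index)
{
    if (index >= sizeof(capset_ids) / sizeof(capset_ids[0])) {
        return 0;
    }
    return capset_ids[index];
}
```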

Thanks,
Ray

> 
> >   
> >   /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
> >   struct virtio_gpu_get_capset_info {



* Re: [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus
  2023-08-31 10:40   ` Antonio Caggiano
  2023-08-31 15:51     ` Dmitry Osipenko
@ 2023-09-09 10:52     ` Huang Rui
  1 sibling, 0 replies; 51+ messages in thread
From: Huang Rui @ 2023-09-09 10:52 UTC (permalink / raw)
  To: Antonio Caggiano
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 06:40:11PM +0800, Antonio Caggiano wrote:
> Hi Huang,
> 
> Thank you for pushing this forward!
> 

My pleasure! :-)

> On 31/08/2023 11:32, Huang Rui wrote:
> > From: Antonio Caggiano <antonio.caggiano@collabora.com>
> > 
> > Request Venus when initializing VirGL.
> > 
> > Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> > Signed-off-by: Huang Rui <ray.huang@amd.com>
> > ---
> > 
> > v1->v2:
> >      - Rebase to latest version
> > 
> >   hw/display/virtio-gpu-virgl.c | 2 ++
> >   1 file changed, 2 insertions(+)
> > 
> > diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> > index 83cd8c8fd0..c5a62665bd 100644
> > --- a/hw/display/virtio-gpu-virgl.c
> > +++ b/hw/display/virtio-gpu-virgl.c
> > @@ -887,6 +887,8 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
> >       }
> >   #endif
> >   
> > +    flags |= VIRGL_RENDERER_VENUS;
> > +
> 
> VIRGL_RENDERER_VENUS is a symbol only available from virglrenderer 0.9.1 
> [0] and only if VIRGL_RENDERER_UNSTABLE_APIS is defined.
> 
> Luckily for us, VIRGL_RENDERER_UNSTABLE_APIS is defined unconditionally 
> from virglrenderer 0.9.0 [1], so we could check for that in qemu/meson.build
> 
> e.g.
> 
> 
>    if virgl.version().version_compare('>= 0.9.0')
>      message('Enabling virglrenderer unstable APIs')
>      virgl = declare_dependency(compile_args: 
> '-DVIRGL_RENDERER_UNSTABLE_APIS',
>                                 dependencies: virgl)
>    endif
> 
> 
> Also, while testing this with various versions of virglrenderer, I 
> realized there are no guarantees for Venus backend to be available in 
> the linked library. Virglrenderer should be built with 
> -Dvenus_experimental=true, and if that is not the case, the following 
> virgl_renderer_init would fail for previous versions of virglrenderer or 
> in case it has not been built with venus support.
> 
> I would suggest another approach for that which tries initializing Venus 
> only if VIRGL_RENDERER_VENUS is actually defined. Then, if it fails 
> cause virglrenderer has not been built with venus support, try again 
> falling back to virgl only.
> 
> e.g.
> 
> #ifdef VIRGL_RENDERER_VENUS
>      ret = virgl_renderer_init(g, VIRGL_RENDERER_VENUS, &virtio_gpu_3d_cbs);
>      if (ret != 0) {
>          warn_report("Failed to initialize virglrenderer with venus: 
> %d", ret);
>          warn_report("Falling back to virgl only");
>          ret = virgl_renderer_init(g, 0, &virtio_gpu_3d_cbs);
>      }
> #else
>      ret = virgl_renderer_init(g, 0, &virtio_gpu_3d_cbs);
> #endif
> 

Thanks a lot for explaining this so clearly. Yes, that sounds reasonable to
me. I will modify it in the next version. And agreed, we should take care
of the different virglrenderer versions.

Thanks,
Ray

> 
> >       ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
> >       if (ret != 0) {
> >           error_report("virgl could not be initialized: %d", ret);
> 
> [0] 
> https://gitlab.freedesktop.org/virgl/virglrenderer/-/commit/6c31f85330bb4c5aba8b82eba606971e598c6e25
> [1] 
> https://gitlab.freedesktop.org/virgl/virglrenderer/-/commit/491afdc42a49ec6a1b8d7cbc5c97360229002d41
> 
> Best regards,
> Antonio Caggiano



* Re: [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus
  2023-08-31 15:51     ` Dmitry Osipenko
@ 2023-09-09 10:53       ` Huang Rui via
  2023-09-12 13:44         ` Dmitry Osipenko
  0 siblings, 1 reply; 51+ messages in thread
From: Huang Rui via @ 2023-09-09 10:53 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Antonio Caggiano, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Dr . David Alan Gilbert,
	Robert Beckett, Alex Bennée, qemu-devel, xen-devel,
	Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Aug 31, 2023 at 11:51:50PM +0800, Dmitry Osipenko wrote:
> On 8/31/23 13:40, Antonio Caggiano wrote:
> > Hi Huang,
> > 
> > Thank you for pushing this forward!
> > 
> > On 31/08/2023 11:32, Huang Rui wrote:
> >> From: Antonio Caggiano <antonio.caggiano@collabora.com>
> >>
> >> Request Venus when initializing VirGL.
> >>
> >> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> >> Signed-off-by: Huang Rui <ray.huang@amd.com>
> >> ---
> >>
> >> v1->v2:
> >>      - Rebase to latest version
> >>
> >>   hw/display/virtio-gpu-virgl.c | 2 ++
> >>   1 file changed, 2 insertions(+)
> >>
> >> diff --git a/hw/display/virtio-gpu-virgl.c
> >> b/hw/display/virtio-gpu-virgl.c
> >> index 83cd8c8fd0..c5a62665bd 100644
> >> --- a/hw/display/virtio-gpu-virgl.c
> >> +++ b/hw/display/virtio-gpu-virgl.c
> >> @@ -887,6 +887,8 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
> >>       }
> >>   #endif
> >>   +    flags |= VIRGL_RENDERER_VENUS;
> >> +
> > 
> > VIRGL_RENDERER_VENUS is a symbol only available from virglrenderer 0.9.1
> > [0] and only if VIRGL_RENDERER_UNSTABLE_APIS is defined.
> > 
> > Luckily for us, VIRGL_RENDERER_UNSTABLE_APIS is defined unconditionally
> > from virglrenderer 0.9.0 [1], so we could check for that in
> > qemu/meson.build
> > 
> > e.g.
> > 
> > 
> >   if virgl.version().version_compare('>= 0.9.0')
> >     message('Enabling virglrenderer unstable APIs')
> >     virgl = declare_dependency(compile_args:
> > '-DVIRGL_RENDERER_UNSTABLE_APIS',
> >                                dependencies: virgl)
> >   endif
> > 
> > 
> > Also, while testing this with various versions of virglrenderer, I
> > realized there are no guarantees for Venus backend to be available in
> > the linked library. Virglrenderer should be built with
> > -Dvenus_experimental=true, and if that is not the case, the following
> > virgl_renderer_init would fail for previous versions of virglrenderer or
> > in case it has not been built with venus support.
> > 
> > I would suggest another approach for that which tries initializing Venus
> > only if VIRGL_RENDERER_VENUS is actually defined. Then, if it fails
> > cause virglrenderer has not been built with venus support, try again
> > falling back to virgl only.
> 
> All the APIs will be stabilized with the upcoming virglrender 1.0
> release that will happen soon. There is also a venus protocol bump, qemu
> will have to bump virglrenderer's version dependency to 1.0 for venus
> and other new features.
> 

Dmitry, do you know the timeline for the virglrenderer 1.0 release?

Thanks,
Ray



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-09  9:09     ` Huang Rui
@ 2023-09-10  4:21       ` Akihiko Odaki
  2023-09-13  7:55         ` Albert Esteve
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-10  4:21 UTC (permalink / raw)
  To: Huang Rui
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/09 18:09, Huang Rui wrote:
> On Thu, Aug 31, 2023 at 06:36:57PM +0800, Akihiko Odaki wrote:
>> On 2023/08/31 18:32, Huang Rui wrote:
>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>
>>> Enable resource UUID feature and implement command resource assign UUID.
>>> This is done by introducing a hash table to map resource IDs to their
>>> UUIDs.
>>
>> The hash table does not seem to be stored during migration.
> 
> May I know whether you mean the g->resource_uuids table data in the
> VirtIOGPU device should be stored in virtio_gpu_save() and restored in
> virtio_gpu_load() for virtio migration?

Yes, that's what I meant.
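
For illustration, here is a self-contained round-trip sketch in plain C
(this is not QEMU's QEMUFile/VMState API; the types and function names
are invented for the example) of how the resource_id -> UUID pairs could
be counted, written out on save, and read back on load:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for one resource_id -> UUID mapping entry. */
typedef struct {
    uint32_t resource_id;
    uint8_t  uuid[16];
} UuidEntry;

/* Save: a count prefix followed by fixed-size entries; returns the
 * number of bytes written into buf. */
static size_t save_uuids(const UuidEntry *entries, uint32_t count,
                         uint8_t *buf)
{
    size_t off = 0;
    memcpy(buf + off, &count, sizeof(count));
    off += sizeof(count);
    memcpy(buf + off, entries, (size_t)count * sizeof(*entries));
    off += (size_t)count * sizeof(*entries);
    return off;
}

/* Load: read the count, then the entries; returns the number of
 * entries restored into out. */
static uint32_t load_uuids(const uint8_t *buf, UuidEntry *out)
{
    uint32_t count;
    memcpy(&count, buf, sizeof(count));
    memcpy(out, buf + sizeof(count), (size_t)count * sizeof(*out));
    return count;
}
```

In the real device the entries would come from iterating the
g->resource_uuids hash table, and the load side would re-insert each pair
into a freshly created table.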

Regards,
Akihiko Odaki



* Re: [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset
  2023-09-09  9:29     ` Huang Rui
@ 2023-09-10  4:32       ` Akihiko Odaki
  0 siblings, 0 replies; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-10  4:32 UTC (permalink / raw)
  To: Huang Rui
  Cc: Gerd Hoffmann, Michael S . Tsirkin, Stefano Stabellini,
	Anthony PERARD, Antonio Caggiano, Dr . David Alan Gilbert,
	Robert Beckett, Dmitry Osipenko, Alex Bennée, qemu-devel,
	xen-devel, Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/09 18:29, Huang Rui wrote:
> On Thu, Aug 31, 2023 at 06:43:17PM +0800, Akihiko Odaki wrote:
>> On 2023/08/31 18:32, Huang Rui wrote:
>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>
>>> Add support for the Venus capset, which enables Vulkan support through
>>> the Venus Vulkan driver for virtio-gpu.
>>>
>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> ---
>>>    hw/display/virtio-gpu-virgl.c               | 21 +++++++++++++++++----
>>>    include/standard-headers/linux/virtio_gpu.h |  2 ++
>>>    2 files changed, 19 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
>>> index 1a996a08fc..83cd8c8fd0 100644
>>> --- a/hw/display/virtio-gpu-virgl.c
>>> +++ b/hw/display/virtio-gpu-virgl.c
>>> @@ -437,6 +437,11 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
>>>            virgl_renderer_get_cap_set(resp.capset_id,
>>>                                       &resp.capset_max_version,
>>>                                       &resp.capset_max_size);
>>> +    } else if (info.capset_index == 2) {
>>> +        resp.capset_id = VIRTIO_GPU_CAPSET_VENUS;
>>> +        virgl_renderer_get_cap_set(resp.capset_id,
>>> +                                   &resp.capset_max_version,
>>> +                                   &resp.capset_max_size);
>>>        } else {
>>>            resp.capset_max_version = 0;
>>>            resp.capset_max_size = 0;
>>> @@ -901,10 +906,18 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>>>    
>>>    int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>>>    {
>>> -    uint32_t capset2_max_ver, capset2_max_size;
>>> +    uint32_t capset2_max_ver, capset2_max_size, num_capsets;
>>> +    num_capsets = 1;
>>> +
>>>        virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
>>> -                              &capset2_max_ver,
>>> -                              &capset2_max_size);
>>> +                               &capset2_max_ver,
>>> +                               &capset2_max_size);
>>> +    num_capsets += capset2_max_ver ? 1 : 0;
>>> +
>>> +    virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
>>> +                               &capset2_max_ver,
>>> +                               &capset2_max_size);
>>> +    num_capsets += capset2_max_size ? 1 : 0;
>>>    
>>> -    return capset2_max_ver ? 2 : 1;
>>> +    return num_capsets;
>>>    }
>>> diff --git a/include/standard-headers/linux/virtio_gpu.h b/include/standard-headers/linux/virtio_gpu.h
>>> index 2da48d3d4c..2db643ed8f 100644
>>> --- a/include/standard-headers/linux/virtio_gpu.h
>>> +++ b/include/standard-headers/linux/virtio_gpu.h
>>> @@ -309,6 +309,8 @@ struct virtio_gpu_cmd_submit {
>>>    
>>>    #define VIRTIO_GPU_CAPSET_VIRGL 1
>>>    #define VIRTIO_GPU_CAPSET_VIRGL2 2
>>> +/* 3 is reserved for gfxstream */
>>> +#define VIRTIO_GPU_CAPSET_VENUS 4
>>
>> This file is synced with scripts/update-linux-headers.sh and should not
>> be modified.
> 
> Should I add the macro to the kernel's include/uapi/linux/virtio_gpu.h?

Yes, I think so.

Regards,
Akihiko Odaki



* Re: [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus
  2023-09-09 10:53       ` Huang Rui via
@ 2023-09-12 13:44         ` Dmitry Osipenko
  0 siblings, 0 replies; 51+ messages in thread
From: Dmitry Osipenko @ 2023-09-12 13:44 UTC (permalink / raw)
  To: Huang Rui
  Cc: Antonio Caggiano, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Dr . David Alan Gilbert,
	Robert Beckett, Alex Bennée, qemu-devel, xen-devel,
	Gurchetan Singh, ernunes, Philippe Mathieu-Daudé,
	Akihiko Odaki, Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 9/9/23 13:53, Huang Rui wrote:
> On Thu, Aug 31, 2023 at 11:51:50PM +0800, Dmitry Osipenko wrote:
>> On 8/31/23 13:40, Antonio Caggiano wrote:
>>> Hi Huang,
>>>
>>> Thank you for pushing this forward!
>>>
>>> On 31/08/2023 11:32, Huang Rui wrote:
>>>> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>>
>>>> Request Venus when initializing VirGL.
>>>>
>>>> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>> ---
>>>>
>>>> v1->v2:
>>>>      - Rebase to latest version
>>>>
>>>>   hw/display/virtio-gpu-virgl.c | 2 ++
>>>>   1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/hw/display/virtio-gpu-virgl.c
>>>> b/hw/display/virtio-gpu-virgl.c
>>>> index 83cd8c8fd0..c5a62665bd 100644
>>>> --- a/hw/display/virtio-gpu-virgl.c
>>>> +++ b/hw/display/virtio-gpu-virgl.c
>>>> @@ -887,6 +887,8 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
>>>>       }
>>>>   #endif
>>>>   +    flags |= VIRGL_RENDERER_VENUS;
>>>> +
>>>
>>> VIRGL_RENDERER_VENUS is a symbol only available from virglrenderer 0.9.1
>>> [0] and only if VIRGL_RENDERER_UNSTABLE_APIS is defined.
>>>
>>> Luckily for us, VIRGL_RENDERER_UNSTABLE_APIS is defined unconditionally
>>> from virglrenderer 0.9.0 [1], so we could check for that in
>>> qemu/meson.build
>>>
>>> e.g.
>>>
>>>
>>>   if virgl.version().version_compare('>= 0.9.0')
>>>     message('Enabling virglrenderer unstable APIs')
>>>     virgl = declare_dependency(compile_args:
>>> '-DVIRGL_RENDERER_UNSTABLE_APIS',
>>>                                dependencies: virgl)
>>>   endif
>>>
>>>
>>> Also, while testing this with various versions of virglrenderer, I
>>> realized there are no guarantees for Venus backend to be available in
>>> the linked library. Virglrenderer should be built with
>>> -Dvenus_experimental=true, and if that is not the case, the following
>>> virgl_renderer_init would fail for previous versions of virglrenderer or
>>> in case it has not been built with venus support.
>>>
>>> I would suggest another approach for that which tries initializing Venus
>>> only if VIRGL_RENDERER_VENUS is actually defined. Then, if it fails
>>> cause virglrenderer has not been built with venus support, try again
>>> falling back to virgl only.
>>
>> All the APIs will be stabilized with the upcoming virglrender 1.0
>> release that will happen soon. There is also a venus protocol bump, qemu
>> will have to bump virglrenderer's version dependency to 1.0 for venus
>> and other new features.
>>
> 
> Dmitry, do you know the timeline of virglrender 1.0?

Should be end of this week or next week

-- 
Best regards,
Dmitry




* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-10  4:21       ` Akihiko Odaki
@ 2023-09-13  7:55         ` Albert Esteve
  2023-09-13 10:34           ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Albert Esteve @ 2023-09-13  7:55 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian


Hi Antonio,

If I'm not mistaken, this patch is related to:
https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
IMHO, ideally, both virtio-gpu and vhost-user-gpu would use the
infrastructure from the patch I linked to store the virtio objects,
so that they can later be shared with other devices.

Which, in terms of code, would mean changing:
g_hash_table_insert(g->resource_uuids,
GUINT_TO_POINTER(assign.resource_id), uuid);
by:
virtio_add_dmabuf(uuid, assign.resource_id);

...and simplify part of the infrastructure.

Let me know what you think.

Regards,
Albert



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13  7:55         ` Albert Esteve
@ 2023-09-13 10:34           ` Akihiko Odaki
  2023-09-13 11:34             ` Albert Esteve
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-13 10:34 UTC (permalink / raw)
  To: Albert Esteve
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/13 16:55, Albert Esteve wrote:
> Hi Antonio,
> 
> If I'm not mistaken, this patch is related with: 
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html 
> <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>
> IMHO, ideally, virtio-gpu and vhost-user-gpu both, would use the 
> infrastructure from the patch I linked to store the
> virtio objects, so that they can be later shared with other devices.

I don't think such sharing is possible because the resources are 
identified by IDs that are local to the device. That also complicates 
migration.

Regards,
Akihiko Odaki



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 10:34           ` Akihiko Odaki
@ 2023-09-13 11:34             ` Albert Esteve
  2023-09-13 12:22               ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Albert Esteve @ 2023-09-13 11:34 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian


On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki <akihiko.odaki@daynix.com>
wrote:

> On 2023/09/13 16:55, Albert Esteve wrote:
> > Hi Antonio,
> >
> > If I'm not mistaken, this patch is related with:
> > https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> > <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>
> > IMHO, ideally, virtio-gpu and vhost-user-gpu both, would use the
> > infrastructure from the patch I linked to store the
> > virtio objects, so that they can be later shared with other devices.
>
> I don't think such sharing is possible because the resources are
> identified by IDs that are local to the device. That also complicates
> migration.
>
> Regards,
> Akihiko Odaki
>
> Hi Akihiko,

As far as I understand, the feature to export dma-bufs from the
virtgpu was introduced as part of the virtio cross-device sharing
proposal [1]. Thus, it shall be possible. When the virtgpu handles
ASSIGN_UUID, it exports and identifies the dmabuf resource, so that when
the dmabuf gets shared inside the guest (e.g., with virtio-video), we can
use the assigned UUID to find the dmabuf in the host (using the patch
that I linked above), and import it.

[1] - https://lwn.net/Articles/828988/



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 11:34             ` Albert Esteve
@ 2023-09-13 12:22               ` Akihiko Odaki
  2023-09-13 12:58                 ` Albert Esteve
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-13 12:22 UTC (permalink / raw)
  To: Albert Esteve
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/13 20:34, Albert Esteve wrote:
> 
> 
> On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki <akihiko.odaki@daynix.com 
> <mailto:akihiko.odaki@daynix.com>> wrote:
> 
>     On 2023/09/13 16:55, Albert Esteve wrote:
>      > Hi Antonio,
>      >
>      > If I'm not mistaken, this patch is related with:
>      >
>     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
>     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>
>      >
>     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
>     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>>
>      > IMHO, ideally, virtio-gpu and vhost-user-gpu both, would use the
>      > infrastructure from the patch I linked to store the
>      > virtio objects, so that they can be later shared with other devices.
> 
>     I don't think such sharing is possible because the resources are
>     identified by IDs that are local to the device. That also complicates
>     migration.
> 
>     Regards,
>     Akihiko Odaki
> 
> Hi Akihiko,
> 
> As far as I understand, the feature to export dma-bufs from the
> virtgpu was introduced as part of the virtio cross-device sharing
> proposal [1]. Thus, it shall be posible. When virtgpu ASSING_UUID,
> it exports and identifies the dmabuf resource, so that when the dmabuf gets
> shared inside the guest (e.g., with virtio-video), we can use the assigned
> UUID to find the dmabuf in the host (using the patch that I linked above),
> and import it.
> 
> [1] - https://lwn.net/Articles/828988/ <https://lwn.net/Articles/828988/>

The problem is that virtio-gpu can have other kinds of resources, like
pixman buffers and OpenGL textures, and manages them and DMA-BUFs with a
unified resource ID.

So you cannot change:
g_hash_table_insert(g->resource_uuids, 
GUINT_TO_POINTER(assign.resource_id), uuid);
by:
virtio_add_dmabuf(uuid, assign.resource_id);

assign.resource_id is not a DMA-BUF file descriptor, and the underlying
resource may not be a DMA-BUF in the first place.

Also, since this lives in the common code that is used not only by
virtio-gpu-gl but also by virtio-gpu, which supports migration, we also
need to take care of that. It is not a problem for DMA-BUF as DMA-BUF is
not migratable anyway, but the situation is different in this case.

Implementing cross-device sharing is certainly a possibility, but that 
requires more than dealing with DMA-BUFs.



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 12:22               ` Akihiko Odaki
@ 2023-09-13 12:58                 ` Albert Esteve
  2023-09-13 13:43                   ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Albert Esteve @ 2023-09-13 12:58 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian


On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki <akihiko.odaki@daynix.com>
wrote:

> On 2023/09/13 20:34, Albert Esteve wrote:
> >
> >
> > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki <akihiko.odaki@daynix.com
> > <mailto:akihiko.odaki@daynix.com>> wrote:
> >
> >     On 2023/09/13 16:55, Albert Esteve wrote:
> >      > Hi Antonio,
> >      >
> >      > If I'm not mistaken, this patch is related with:
> >      >
> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >
> >      >
> >     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >>
> >      > IMHO, ideally, virtio-gpu and vhost-user-gpu both, would use the
> >      > infrastructure from the patch I linked to store the
> >      > virtio objects, so that they can be later shared with other
> devices.
> >
> >     I don't think such sharing is possible because the resources are
> >     identified by IDs that are local to the device. That also complicates
> >     migration.
> >
> >     Regards,
> >     Akihiko Odaki
> >
> > Hi Akihiko,
> >
> > As far as I understand, the feature to export dma-bufs from the
> > virtgpu was introduced as part of the virtio cross-device sharing
> > proposal [1]. Thus, it shall be posible. When virtgpu ASSING_UUID,
> > it exports and identifies the dmabuf resource, so that when the dmabuf
> gets
> > shared inside the guest (e.g., with virtio-video), we can use the
> assigned
> > UUID to find the dmabuf in the host (using the patch that I linked
> above),
> > and import it.
> >
> > [1] - https://lwn.net/Articles/828988/ <https://lwn.net/Articles/828988/
> >
>
> The problem is that virtio-gpu can have other kind of resources like
> pixman and OpenGL textures and manage them and DMA-BUFs with unified
> resource ID.
>

I see.


>
> So you cannot change:
> g_hash_table_insert(g->resource_uuids,
> GUINT_TO_POINTER(assign.resource_id), uuid);
> by:
> virtio_add_dmabuf(uuid, assign.resource_id);
>
> assign.resource_id is not DMA-BUF file descriptor, and the underlying
> resource my not be DMA-BUF at first place.
>

I didn't really look into the patch in depth, so the code was intended
to give an idea of how the implementation would look with the
cross-device patch API. Indeed, it is not the resource_id (I just took
a brief look at the virtio specification 1.2) but the underlying
resource that we want to use here.


>
> Also, since this lives in the common code that is not used only by
> virtio-gpu-gl but also virtio-gpu, which supports migration, we also
> need to take care of that. It is not a problem for DMA-BUF as DMA-BUF is
> not migratable anyway, but the situation is different in this case.
>
> Implementing cross-device sharing is certainly a possibility, but that
> requires more than dealing with DMA-BUFs.
>
>
So, if I understood correctly, dmabufs are just a subset of the resources
that the gpu manages, or can assign UUIDs to. I am not sure why
the virtgpu driver would want to send an ASSIGN_UUID for anything
that is not a dmabuf (are we sure it does?), but I am not super familiar
with virtgpu to begin with.
But I see that internally, the GPU specs relate a UUID with a resource_id,
so we still need both tables:
- one to relate UUID with resource_id to be able to locate the underlying
resource
- the table that holds the dmabuf with the UUID for cross-device sharing

With that in mind, it sounds to me that the support for cross-device
sharing could be added on top of this patch, once
https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
lands.

I hope that makes sense.
Regards,
Albert



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 12:58                 ` Albert Esteve
@ 2023-09-13 13:43                   ` Akihiko Odaki
  2023-09-13 14:18                     ` Albert Esteve
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-13 13:43 UTC (permalink / raw)
  To: Albert Esteve
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/13 21:58, Albert Esteve wrote:
> 
> 
> On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki <akihiko.odaki@daynix.com 
> <mailto:akihiko.odaki@daynix.com>> wrote:
> 
>     On 2023/09/13 20:34, Albert Esteve wrote:
>      >
>      >
>      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
>     <akihiko.odaki@daynix.com <mailto:akihiko.odaki@daynix.com>
>      > <mailto:akihiko.odaki@daynix.com
>     <mailto:akihiko.odaki@daynix.com>>> wrote:
>      >
>      >     On 2023/09/13 16:55, Albert Esteve wrote:
>      >      > Hi Antonio,
>      >      >
>      >      > If I'm not mistaken, this patch is related with:
>      >      >
>      >
>     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
>     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>
>      >   
>       <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>>
>      >      >
>      >   
>       <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>
>      >   
>       <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>>>
>      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu both, would
>     use the
>      >      > infrastructure from the patch I linked to store the
>      >      > virtio objects, so that they can be later shared with
>     other devices.
>      >
>      >     I don't think such sharing is possible because the resources are
>      >     identified by IDs that are local to the device. That also
>     complicates
>      >     migration.
>      >
>      >     Regards,
>      >     Akihiko Odaki
>      >
>      > Hi Akihiko,
>      >
>      > As far as I understand, the feature to export dma-bufs from the
>      > virtgpu was introduced as part of the virtio cross-device sharing
>      > proposal [1]. Thus, it shall be posible. When virtgpu ASSING_UUID,
>      > it exports and identifies the dmabuf resource, so that when the
>     dmabuf gets
>      > shared inside the guest (e.g., with virtio-video), we can use the
>     assigned
>      > UUID to find the dmabuf in the host (using the patch that I
>     linked above),
>      > and import it.
>      >
>      > [1] - https://lwn.net/Articles/828988/
>     <https://lwn.net/Articles/828988/> <https://lwn.net/Articles/828988/
>     <https://lwn.net/Articles/828988/>>
> 
>     The problem is that virtio-gpu can have other kind of resources like
>     pixman and OpenGL textures and manage them and DMA-BUFs with unified
>     resource ID.
> 
> 
> I see.
> 
> 
>     So you cannot change:
>     g_hash_table_insert(g->resource_uuids,
>     GUINT_TO_POINTER(assign.resource_id), uuid);
>     by:
>     virtio_add_dmabuf(uuid, assign.resource_id);
> 
>     assign.resource_id is not DMA-BUF file descriptor, and the underlying
>     resource my not be DMA-BUF at first place.
> 
> 
> I didn't really look into the patch in-depth, so the code was intended
> to give an idea of how the implementation would look like with
> the cross-device patch API. Indeed, it is not the resource_id,
> (I just took a brief look at the virtio specificacion 1.2), but the 
> underlying
> resource what we want to use here.
> 
> 
>     Also, since this lives in the common code that is not used only by
>     virtio-gpu-gl but also virtio-gpu, which supports migration, we also
>     need to take care of that. It is not a problem for DMA-BUF as
>     DMA-BUF is
>     not migratable anyway, but the situation is different in this case.
> 
>     Implementing cross-device sharing is certainly a possibility, but that
>     requires more than dealing with DMA-BUFs.
> 
> 
> So, if I understood correctly, dmabufs are just a subset of the resources
> that the gpu manages, or can assign UUIDs to. I am not sure why
> the virt gpu driver would want to send a ASSIGN_UUID for anything
> that is not a dmabuf (are we sure it does?), but I am not super familiarized
> with virtgpu to begin with.

In my understanding, a resource will first be created as an OpenGL or
Vulkan texture and then exported as a DMA-BUF file descriptor. For
these resource types, exporting/importing code is mandatory.

For pixman buffers (i.e., the non-virgl device), I don't see a compelling
reason to have cross-device sharing. It is possible to omit the resource
UUID feature from the non-virgl device to avoid implementing complicated
migration.

> But I see that internally, the GPU specs relate a UUID with a resource_id,
> so we still need both tables:
> - one to relate UUID with resource_id to be able to locate the 
> underlying resource
> - the table that holds the dmabuf with the UUID for cross-device sharing
> 
> With that in mind, sounds to me that the support for cross-device 
> sharing could
> be added on top of this patch, once 
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html 
> <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html>
> lands.

That is possible, but I think it's better to implement cross-device 
sharing at the same time introducing virtio-dmabuf.

The current design of virtio-dmabuf looks somewhat inconsistent; it's 
named "dmabuf", but internally the UUIDs are stored in something named 
"resource_uuids", and it has SharedObjectType, so it's more like a generic 
resource sharing mechanism. If you actually have an implementation of 
cross-device sharing using virtio-dmabuf, it will be clear what kind of 
feature is truly necessary.

Regards,
Akihiko Odaki



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 13:43                   ` Akihiko Odaki
@ 2023-09-13 14:18                     ` Albert Esteve
  2023-09-13 14:27                       ` Albert Esteve
  2023-09-14  7:17                       ` Akihiko Odaki
  0 siblings, 2 replies; 51+ messages in thread
From: Albert Esteve @ 2023-09-13 14:18 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian


On Wed, Sep 13, 2023 at 3:43 PM Akihiko Odaki <akihiko.odaki@daynix.com>
wrote:

> On 2023/09/13 21:58, Albert Esteve wrote:
> >
> >
> > On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki <akihiko.odaki@daynix.com
> > <mailto:akihiko.odaki@daynix.com>> wrote:
> >
> >     On 2023/09/13 20:34, Albert Esteve wrote:
> >      >
> >      >
> >      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
> >     <akihiko.odaki@daynix.com <mailto:akihiko.odaki@daynix.com>
> >      > <mailto:akihiko.odaki@daynix.com
> >     <mailto:akihiko.odaki@daynix.com>>> wrote:
> >      >
> >      >     On 2023/09/13 16:55, Albert Esteve wrote:
> >      >      > Hi Antonio,
> >      >      >
> >      >      > If I'm not mistaken, this patch is related with:
> >      >      >
> >      >
> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >     <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >
> >      >
> >       <
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html <
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>>
> >      >      >
> >      >
> >       <
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html <
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>
> >      >
> >       <
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html <
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html>>>
> >      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu both, would
> >     use the
> >      >      > infrastructure from the patch I linked to store the
> >      >      > virtio objects, so that they can be later shared with
> >     other devices.
> >      >
> >      >     I don't think such sharing is possible because the resources
> are
> >      >     identified by IDs that are local to the device. That also
> >     complicates
> >      >     migration.
> >      >
> >      >     Regards,
> >      >     Akihiko Odaki
> >      >
> >      > Hi Akihiko,
> >      >
> >      > As far as I understand, the feature to export dma-bufs from the
> >      > virtgpu was introduced as part of the virtio cross-device sharing
> >      > proposal [1]. Thus, it shall be posible. When virtgpu ASSING_UUID,
> >      > it exports and identifies the dmabuf resource, so that when the
> >     dmabuf gets
> >      > shared inside the guest (e.g., with virtio-video), we can use the
> >     assigned
> >      > UUID to find the dmabuf in the host (using the patch that I
> >     linked above),
> >      > and import it.
> >      >
> >      > [1] - https://lwn.net/Articles/828988/
> >     <https://lwn.net/Articles/828988/> <https://lwn.net/Articles/828988/
> >     <https://lwn.net/Articles/828988/>>
> >
> >     The problem is that virtio-gpu can have other kind of resources like
> >     pixman and OpenGL textures and manage them and DMA-BUFs with unified
> >     resource ID.
> >
> >
> > I see.
> >
> >
> >     So you cannot change:
> >     g_hash_table_insert(g->resource_uuids,
> >     GUINT_TO_POINTER(assign.resource_id), uuid);
> >     by:
> >     virtio_add_dmabuf(uuid, assign.resource_id);
> >
> >     assign.resource_id is not DMA-BUF file descriptor, and the underlying
> >     resource my not be DMA-BUF at first place.
> >
> >
> > I didn't really look into the patch in-depth, so the code was intended
> > to give an idea of how the implementation would look like with
> > the cross-device patch API. Indeed, it is not the resource_id,
> > (I just took a brief look at the virtio specificacion 1.2), but the
> > underlying
> > resource what we want to use here.
> >
> >
> >     Also, since this lives in the common code that is not used only by
> >     virtio-gpu-gl but also virtio-gpu, which supports migration, we also
> >     need to take care of that. It is not a problem for DMA-BUF as
> >     DMA-BUF is
> >     not migratable anyway, but the situation is different in this case.
> >
> >     Implementing cross-device sharing is certainly a possibility, but
> that
> >     requires more than dealing with DMA-BUFs.
> >
> >
> > So, if I understood correctly, dmabufs are just a subset of the resources
> > that the gpu manages, or can assign UUIDs to. I am not sure why
> > the virt gpu driver would want to send a ASSIGN_UUID for anything
> > that is not a dmabuf (are we sure it does?), but I am not super
> familiarized
> > with virtgpu to begin with.
>
> In my understanding, an resource will be first created as OpenGL or
> Vulkan textures and then exported as a DMA-BUF file descriptor. For
> these resource types exporting/importing code is mandatory.
>
> For pixman buffers (i.e., non-virgl device), I don't see a compelling
> reason to have cross-device sharing. It is possible to omit resource
> UUID feature from non-virgl device to avoid implementing complicated
> migration.
>

I see, thanks for the clarification.
I would assume you could avoid the UUID feature for those resources, but
I will need to check the driver implementation. It is worth checking,
though, if that would simplify the implementation.


>
> > But I see that internally, the GPU specs relate a UUID with a
> resource_id,
> > so we still need both tables:
> > - one to relate UUID with resource_id to be able to locate the
> > underlying resource
> > - the table that holds the dmabuf with the UUID for cross-device sharing
> >
> > With that in mind, sounds to me that the support for cross-device
> > sharing could
> > be added on top of this patch, once
> > https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
> > <https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html>
> > lands.
>
> That is possible, but I think it's better to implement cross-device
> sharing at the same time introducing virtio-dmabuf.
>
> The current design of virtio-dmabuf looks somewhat inconsistent; it's
> named "dmabuf", but internally the UUIDs are stored into something named
> "resource_uuids" and it has SharedObjectType so it's more like a generic
> resource sharing mechanism. If you actually have an implementation of
> cross-device sharing using virtio-dmabuf, it will be clear what kind of
> feature is truly necessary.
>
>
Yeah, the file was named virtio-dmabuf following the kernel
implementation, and also because for the moment it only aims to share
dmabufs. However, the virtio specs leave the virtio object definition
vague [1] (I guess purposely). It is up to the specific devices to define
what an object means for them. So the implementation tries to follow that
and leaves the contents of the table generic. The table can hold any kind
of object, and the API exposes type-specific functions (for dmabufs, or
others).

[1] -
https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-10500011


> Regards,
> Akihiko Odaki
>
>



* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 14:18                     ` Albert Esteve
@ 2023-09-13 14:27                       ` Albert Esteve
  2023-09-14  7:17                       ` Akihiko Odaki
  1 sibling, 0 replies; 51+ messages in thread
From: Albert Esteve @ 2023-09-13 14:27 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian


On Wed, Sep 13, 2023 at 4:18 PM Albert Esteve <aesteve@redhat.com> wrote:

>
>
> On Wed, Sep 13, 2023 at 3:43 PM Akihiko Odaki <akihiko.odaki@daynix.com>
> wrote:
>
>> On 2023/09/13 21:58, Albert Esteve wrote:
>> >
>> >
>> > On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki <akihiko.odaki@daynix.com
>> > <mailto:akihiko.odaki@daynix.com>> wrote:
>> >
>> >     On 2023/09/13 20:34, Albert Esteve wrote:
>> >      >
>> >      >
>> >      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
>> >     <akihiko.odaki@daynix.com <mailto:akihiko.odaki@daynix.com>
>> >      > <mailto:akihiko.odaki@daynix.com
>> >     <mailto:akihiko.odaki@daynix.com>>> wrote:
>> >      >
>> >      >     On 2023/09/13 16:55, Albert Esteve wrote:
>> >      >      > Hi Antonio,
>> >      >      >
>> >      >      > If I'm not mistaken, this patch is related with:
>> >      >      >
>> >      >
>> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
>> >      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu both, would
>> >     use the
>> >      >      > infrastructure from the patch I linked to store the
>> >      >      > virtio objects, so that they can be later shared with
>> >     other devices.
>> >      >
>> >      >     I don't think such sharing is possible because the resources
>> are
>> >      >     identified by IDs that are local to the device. That also
>> >     complicates
>> >      >     migration.
>> >      >
>> >      >     Regards,
>> >      >     Akihiko Odaki
>> >      >
>> >      > Hi Akihiko,
>> >      >
>> >      > As far as I understand, the feature to export dma-bufs from the
>> >      > virtgpu was introduced as part of the virtio cross-device sharing
>> >      > proposal [1]. Thus, it shall be possible. When virtgpu ASSIGN_UUID,
>> >      > it exports and identifies the dmabuf resource, so that when the
>> >     dmabuf gets
>> >      > shared inside the guest (e.g., with virtio-video), we can use the
>> >     assigned
>> >      > UUID to find the dmabuf in the host (using the patch that I
>> >     linked above),
>> >      > and import it.
>> >      >
>> >      > [1] - https://lwn.net/Articles/828988/
>> >
>> >     The problem is that virtio-gpu can have other kind of resources like
>> >     pixman and OpenGL textures and manage them and DMA-BUFs with unified
>> >     resource ID.
>> >
>> >
>> > I see.
>> >
>> >
>> >     So you cannot change:
>> >     g_hash_table_insert(g->resource_uuids,
>> >     GUINT_TO_POINTER(assign.resource_id), uuid);
>> >     by:
>> >     virtio_add_dmabuf(uuid, assign.resource_id);
>> >
>> >     assign.resource_id is not a DMA-BUF file descriptor, and the
>> >     underlying resource may not be a DMA-BUF in the first place.
>> >
>> >
>> > I didn't really look into the patch in depth, so the code was intended
>> > to give an idea of what the implementation would look like with
>> > the cross-device patch API. Indeed, it is not the resource_id
>> > (I just took a brief look at the virtio specification 1.2), but the
>> > underlying resource that we want to use here.
>> >
>> >
>> >     Also, since this lives in the common code that is not used only by
>> >     virtio-gpu-gl but also virtio-gpu, which supports migration, we also
>> >     need to take care of that. It is not a problem for DMA-BUF as
>> >     DMA-BUF is
>> >     not migratable anyway, but the situation is different in this case.
>> >
>> >     Implementing cross-device sharing is certainly a possibility, but
>> that
>> >     requires more than dealing with DMA-BUFs.
>> >
>> >
>> > So, if I understood correctly, dmabufs are just a subset of the
>> resources
>> > that the gpu manages, or can assign UUIDs to. I am not sure why
>> > the virt gpu driver would want to send an ASSIGN_UUID for anything
>> > that is not a dmabuf (are we sure it does?), but I am not very
>> > familiar with virtgpu to begin with.
>>
>> In my understanding, a resource will first be created as an OpenGL or
>> Vulkan texture and then exported as a DMA-BUF file descriptor. For
>> these resource types exporting/importing code is mandatory.
>>
>> For pixman buffers (i.e., non-virgl device), I don't see a compelling
>> reason to have cross-device sharing. It is possible to omit resource
>> UUID feature from non-virgl device to avoid implementing complicated
>> migration.
>>
>
> I see, thanks for the clarification.
> I would assume you could avoid the UUID feature for those resources, but
> I will need to check the driver implementation. It is worth checking
> though, if
> that would simplify the implementation.
>
>
>>
>> > But I see that internally, the GPU specs relate a UUID with a
>> resource_id,
>> > so we still need both tables:
>> > - one to relate UUID with resource_id to be able to locate the
>> > underlying resource
>> > - the table that holds the dmabuf with the UUID for cross-device sharing
>> >
>> > With that in mind, sounds to me that the support for cross-device
>> > sharing could
>> > be added on top of this patch, once
>> > https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
>> > lands.
>>
>>
Also, to clarify what I stated here:
I am not trying to block your patch until the other one lands.
I think both could be integrated in parallel, and virtgpu could then
use the cross-device sharing API on top of your patch.

Regards,
Albert


> That is possible, but I think it's better to implement cross-device
>> sharing at the same time introducing virtio-dmabuf.
>>
>> The current design of virtio-dmabuf looks somewhat inconsistent; it's
>> named "dmabuf", but internally the UUIDs are stored into something named
>> "resource_uuids" and it has SharedObjectType so it's more like a generic
>> resource sharing mechanism. If you actually have an implementation of
>> cross-device sharing using virtio-dmabuf, it will be clear what kind of
>> feature is truly necessary.
>>
>>
> Yeah, the file was named as virtio-dmabuf following the kernel
> implementation. Also, because for the moment it only aims to share
> dmabufs. However, virtio specs leave the virtio object definition vague [1]
> (I guess purposely). It is up to the specific devices to define what an
> object
> means for them. So the implementation tries to follow that, and
> leave the contents of the table generic. The table can hold any kind of
> object,
> and the API exposes type-specific functions (for dmabufs, or others).
>
> [1] -
> https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html#x1-10500011
>
>
>> Regards,
>> Akihiko Odaki
>>
>>

[-- Attachment #2: Type: text/html, Size: 12082 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-13 14:18                     ` Albert Esteve
  2023-09-13 14:27                       ` Albert Esteve
@ 2023-09-14  7:17                       ` Akihiko Odaki
  2023-09-14  8:29                         ` Albert Esteve
  1 sibling, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-14  7:17 UTC (permalink / raw)
  To: Albert Esteve
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/13 23:18, Albert Esteve wrote:
> 
> 
> On Wed, Sep 13, 2023 at 3:43 PM Akihiko Odaki <akihiko.odaki@daynix.com 
> <mailto:akihiko.odaki@daynix.com>> wrote:
> 
>     On 2023/09/13 21:58, Albert Esteve wrote:
>      >
>      >
>      > On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki
>     <akihiko.odaki@daynix.com <mailto:akihiko.odaki@daynix.com>
>      > <mailto:akihiko.odaki@daynix.com
>     <mailto:akihiko.odaki@daynix.com>>> wrote:
>      >
>      >     On 2023/09/13 20:34, Albert Esteve wrote:
>      >      >
>      >      >
>      >      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
>      >     <akihiko.odaki@daynix.com> wrote:
>      >      >
>      >      >     On 2023/09/13 16:55, Albert Esteve wrote:
>      >      >      > Hi Antonio,
>      >      >      >
>      >      >      > If I'm not mistaken, this patch is related with:
>      >      >      >
>      >      >
>      >
>     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
>      >      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu both,
>     would
>      >     use the
>      >      >      > infrastructure from the patch I linked to store the
>      >      >      > virtio objects, so that they can be later shared with
>      >     other devices.
>      >      >
>      >      >     I don't think such sharing is possible because the
>     resources are
>      >      >     identified by IDs that are local to the device. That also
>      >     complicates
>      >      >     migration.
>      >      >
>      >      >     Regards,
>      >      >     Akihiko Odaki
>      >      >
>      >      > Hi Akihiko,
>      >      >
>      >      > As far as I understand, the feature to export
>     dma-bufs from the
>      >      > virtgpu was introduced as part of the virtio cross-device
>     sharing
>      >      > proposal [1]. Thus, it shall be possible. When virtgpu
>      >      > ASSIGN_UUID,
>      >      > it exports and identifies the dmabuf resource, so that
>     when the
>      >     dmabuf gets
>      >      > shared inside the guest (e.g., with virtio-video), we can
>     use the
>      >     assigned
>      >      > UUID to find the dmabuf in the host (using the patch that I
>      >     linked above),
>      >      > and import it.
>      >      >
>      >      > [1] - https://lwn.net/Articles/828988/
>      >
>      >     The problem is that virtio-gpu can have other kind of
>     resources like
>      >     pixman and OpenGL textures and manage them and DMA-BUFs with
>     unified
>      >     resource ID.
>      >
>      >
>      > I see.
>      >
>      >
>      >     So you cannot change:
>      >     g_hash_table_insert(g->resource_uuids,
>      >     GUINT_TO_POINTER(assign.resource_id), uuid);
>      >     by:
>      >     virtio_add_dmabuf(uuid, assign.resource_id);
>      >
>      >     assign.resource_id is not a DMA-BUF file descriptor, and the
>      >     underlying resource may not be a DMA-BUF in the first place.
>      >
>      >
>      > I didn't really look into the patch in depth, so the code was
>      > intended to give an idea of what the implementation would look like
>      > with the cross-device patch API. Indeed, it is not the resource_id
>      > (I just took a brief look at the virtio specification 1.2), but the
>      > underlying resource that we want to use here.
>      >
>      >
>      >     Also, since this lives in the common code that is not used
>     only by
>      >     virtio-gpu-gl but also virtio-gpu, which supports migration,
>     we also
>      >     need to take care of that. It is not a problem for DMA-BUF as
>      >     DMA-BUF is
>      >     not migratable anyway, but the situation is different in this
>     case.
>      >
>      >     Implementing cross-device sharing is certainly a possibility,
>     but that
>      >     requires more than dealing with DMA-BUFs.
>      >
>      >
>      > So, if I understood correctly, dmabufs are just a subset of the
>     resources
>      > that the gpu manages, or can assign UUIDs to. I am not sure why
>      > the virt gpu driver would want to send an ASSIGN_UUID for anything
>      > that is not a dmabuf (are we sure it does?), but I am not very
>      > familiar with virtgpu to begin with.
> 
>     In my understanding, a resource will first be created as an OpenGL or
>     Vulkan texture and then exported as a DMA-BUF file descriptor. For
>     these resource types exporting/importing code is mandatory.
> 
>     For pixman buffers (i.e., non-virgl device), I don't see a compelling
>     reason to have cross-device sharing. It is possible to omit resource
>     UUID feature from non-virgl device to avoid implementing complicated
>     migration.
> 
> 
> I see, thanks for the clarification.
> I would assume you could avoid the UUID feature for those resources, but
> I will need to check the driver implementation. It is worth checking 
> though, if
> that would simplify the implementation.
> 
> 
>      > But I see that internally, the GPU specs relate a UUID with a
>     resource_id,
>      > so we still need both tables:
>      > - one to relate UUID with resource_id to be able to locate the
>      > underlying resource
>      > - the table that holds the dmabuf with the UUID for cross-device
>     sharing
>      >
>      > With that in mind, sounds to me that the support for cross-device
>      > sharing could
>      > be added on top of this patch, once
>      >
>     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
>      > lands.
> 
>     That is possible, but I think it's better to implement cross-device
>     sharing at the same time introducing virtio-dmabuf.
> 
>     The current design of virtio-dmabuf looks somewhat inconsistent; it's
>     named "dmabuf", but internally the UUIDs are stored into something
>     named
>     "resource_uuids" and it has SharedObjectType so it's more like a
>     generic
>     resource sharing mechanism. If you actually have an implementation of
>     cross-device sharing using virtio-dmabuf, it will be clear what kind of
>     feature is truly necessary.
> 
> 
> Yeah, the file was named as virtio-dmabuf following the kernel
> implementation. Also, because for the moment it only aims to share
> dmabufs. However, virtio specs leave the virtio object definition vague [1]
> (I guess purposely). It is up to the specific devices to define what an 
> object
> means for them. So the implementation tries to follow that, and
> leave the contents of the table generic. The table can hold any kind of 
> object,
> and the API exposes type-specific functions (for dmabufs, or others).
In the guest kernel, the name "virtio_dma_buf" represents the interface
between the *guest* kernel and *guest* user-space. That makes sense, since
cross-device resource sharing is managed by the userspace and DMA-BUF is
the only interface between them for this purpose.

The situation is different for QEMU; QEMU interacts with backends through
backend-specific interfaces (OpenGL/pixman), and virgl is capable of
exporting textures as DMA-BUFs. DMA-BUF is not universal in this sense. As
such, we cannot simply borrow the kernel-side naming; we have to invent
our own.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-14  7:17                       ` Akihiko Odaki
@ 2023-09-14  8:29                         ` Albert Esteve
  2023-09-14 16:55                           ` Akihiko Odaki
  0 siblings, 1 reply; 51+ messages in thread
From: Albert Esteve @ 2023-09-14  8:29 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

[-- Attachment #1: Type: text/plain, Size: 10992 bytes --]

On Thu, Sep 14, 2023 at 9:17 AM Akihiko Odaki <akihiko.odaki@daynix.com>
wrote:

> On 2023/09/13 23:18, Albert Esteve wrote:
> >
> >
> > On Wed, Sep 13, 2023 at 3:43 PM Akihiko Odaki <akihiko.odaki@daynix.com
> > <mailto:akihiko.odaki@daynix.com>> wrote:
> >
> >     On 2023/09/13 21:58, Albert Esteve wrote:
> >      >
> >      >
> >      > On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki
> >     <akihiko.odaki@daynix.com <mailto:akihiko.odaki@daynix.com>
> >      > <mailto:akihiko.odaki@daynix.com
> >     <mailto:akihiko.odaki@daynix.com>>> wrote:
> >      >
> >      >     On 2023/09/13 20:34, Albert Esteve wrote:
> >      >      >
> >      >      >
> >      >      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
> >      >     <akihiko.odaki@daynix.com> wrote:
> >      >      >
> >      >      >     On 2023/09/13 16:55, Albert Esteve wrote:
> >      >      >      > Hi Antonio,
> >      >      >      >
> >      >      >      > If I'm not mistaken, this patch is related with:
> >      >      >      >
> >      >      >
> >      >
> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >      >      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu both,
> >     would
> >      >     use the
> >      >      >      > infrastructure from the patch I linked to store the
> >      >      >      > virtio objects, so that they can be later shared
> with
> >      >     other devices.
> >      >      >
> >      >      >     I don't think such sharing is possible because the
> >     resources are
> >      >      >     identified by IDs that are local to the device. That
> also
> >      >     complicates
> >      >      >     migration.
> >      >      >
> >      >      >     Regards,
> >      >      >     Akihiko Odaki
> >      >      >
> >      >      > Hi Akihiko,
> >      >      >
> >      >      > As far as I understand, the feature to export
> >     dma-bufs from the
> >      >      > virtgpu was introduced as part of the virtio cross-device
> >     sharing
> >      >      > proposal [1]. Thus, it shall be possible. When virtgpu
> >      >      > ASSIGN_UUID,
> >      >      > it exports and identifies the dmabuf resource, so that
> >     when the
> >      >     dmabuf gets
> >      >      > shared inside the guest (e.g., with virtio-video), we can
> >     use the
> >      >     assigned
> >      >      > UUID to find the dmabuf in the host (using the patch that I
> >      >     linked above),
> >      >      > and import it.
> >      >      >
> >      >      > [1] - https://lwn.net/Articles/828988/
> >      >
> >      >     The problem is that virtio-gpu can have other kind of
> >     resources like
> >      >     pixman and OpenGL textures and manage them and DMA-BUFs with
> >     unified
> >      >     resource ID.
> >      >
> >      >
> >      > I see.
> >      >
> >      >
> >      >     So you cannot change:
> >      >     g_hash_table_insert(g->resource_uuids,
> >      >     GUINT_TO_POINTER(assign.resource_id), uuid);
> >      >     by:
> >      >     virtio_add_dmabuf(uuid, assign.resource_id);
> >      >
> >      >     assign.resource_id is not a DMA-BUF file descriptor, and the
> >      >     underlying resource may not be a DMA-BUF in the first place.
> >      >
> >      >
> >      > I didn't really look into the patch in depth, so the code was
> >      > intended to give an idea of what the implementation would look
> >      > like with the cross-device patch API. Indeed, it is not the
> >      > resource_id (I just took a brief look at the virtio specification
> >      > 1.2), but the underlying resource that we want to use here.
> >      >
> >      >
> >      >     Also, since this lives in the common code that is not used
> >     only by
> >      >     virtio-gpu-gl but also virtio-gpu, which supports migration,
> >     we also
> >      >     need to take care of that. It is not a problem for DMA-BUF as
> >      >     DMA-BUF is
> >      >     not migratable anyway, but the situation is different in this
> >     case.
> >      >
> >      >     Implementing cross-device sharing is certainly a possibility,
> >     but that
> >      >     requires more than dealing with DMA-BUFs.
> >      >
> >      >
> >      > So, if I understood correctly, dmabufs are just a subset of the
> >     resources
> >      > that the gpu manages, or can assign UUIDs to. I am not sure why
> >      > the virt gpu driver would want to send an ASSIGN_UUID for anything
> >      > that is not a dmabuf (are we sure it does?), but I am not very
> >      > familiar with virtgpu to begin with.
> >
> >     In my understanding, a resource will first be created as an OpenGL or
> >     Vulkan texture and then exported as a DMA-BUF file descriptor. For
> >     these resource types exporting/importing code is mandatory.
> >
> >     For pixman buffers (i.e., non-virgl device), I don't see a compelling
> >     reason to have cross-device sharing. It is possible to omit resource
> >     UUID feature from non-virgl device to avoid implementing complicated
> >     migration.
> >
> >
> > I see, thanks for the clarification.
> > I would assume you could avoid the UUID feature for those resources, but
> > I will need to check the driver implementation. It is worth checking
> > though, if
> > that would simplify the implementation.
> >
> >
> >      > But I see that internally, the GPU specs relate a UUID with a
> >     resource_id,
> >      > so we still need both tables:
> >      > - one to relate UUID with resource_id to be able to locate the
> >      > underlying resource
> >      > - the table that holds the dmabuf with the UUID for cross-device
> >     sharing
> >      >
> >      > With that in mind, sounds to me that the support for cross-device
> >      > sharing could
> >      > be added on top of this patch, once
> >      >
> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
> >      > lands.
> >
> >     That is possible, but I think it's better to implement cross-device
> >     sharing at the same time introducing virtio-dmabuf.
> >
> >     The current design of virtio-dmabuf looks somewhat inconsistent; it's
> >     named "dmabuf", but internally the UUIDs are stored into something
> >     named
> >     "resource_uuids" and it has SharedObjectType so it's more like a
> >     generic
> >     resource sharing mechanism. If you actually have an implementation of
> >     cross-device sharing using virtio-dmabuf, it will be clear what kind
> of
> >     feature is truly necessary.
> >
> >
> > Yeah, the file was named as virtio-dmabuf following the kernel
> > implementation. Also, because for the moment it only aims to share
> > dmabufs. However, virtio specs leave the virtio object definition vague
> [1]
> > (I guess purposely). It is up to the specific devices to define what an
> > object
> > means for them. So the implementation tries to follow that, and
> > leave the contents of the table generic. The table can hold any kind of
> > object,
> > and the API exposes type-specific functions (for dmabufs, or others).
> In the guest kernel, the name "virtio_dma_buf" represents the interface
> between the *guest* kernel and *guest* user-space. It makes sense since
> the cross-device resource sharing is managed by the userspace and
> DMA-BUF is the only interface between them for this purpose.
>
> The situation is different for QEMU; QEMU interacts with backends using
> backend-specific interfaces (OpenGL/pixman) and virgl is capable to
> export textures as DMA-BUF. DMA-BUF is not universal in this sense. As
> such, we cannot just borrow the kernel-side naming but invent one.
>
It is not a GPU-specific feature. It is a generic cross-device sharing
mechanism for virtio objects; in this first iteration, those objects
happen to be dmabufs. Hence the name.

virtio-gpu (and vhost-user-gpu) will use this feature only with virgl;
that is fine, and orthogonal to the object-sharing mechanism. It allows
dmabufs to be shared in the host following how they are shared in the
guest. The virtgpu driver may call ASSIGN_UUID for other types of
resources (I am not sure, but it could), but those will never be shared
with other virtio devices, so they are not too relevant. Also, the
shared-objects table could potentially be accessed from any virtio device,
not only virtio-gpu or virtio-video.

What I am trying to say is that the name focuses solely on its current
usage, i.e., sharing dmabufs between virtio-gpu (as exporter) and
virtio-video (as importer). If it grows into something more, IMO it can be
renamed later.

Regards,
Albert

[-- Attachment #2: Type: text/html, Size: 17849 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-14  8:29                         ` Albert Esteve
@ 2023-09-14 16:55                           ` Akihiko Odaki
  2023-09-15  9:56                             ` Albert Esteve
  0 siblings, 1 reply; 51+ messages in thread
From: Akihiko Odaki @ 2023-09-14 16:55 UTC (permalink / raw)
  To: Albert Esteve
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On 2023/09/14 17:29, Albert Esteve wrote:
> 
> 
> On Thu, Sep 14, 2023 at 9:17 AM Akihiko Odaki <akihiko.odaki@daynix.com 
> <mailto:akihiko.odaki@daynix.com>> wrote:
> 
>     On 2023/09/13 23:18, Albert Esteve wrote:
>      >
>      >
>      > On Wed, Sep 13, 2023 at 3:43 PM Akihiko Odaki
>     <akihiko.odaki@daynix.com <mailto:akihiko.odaki@daynix.com>
>      > <mailto:akihiko.odaki@daynix.com
>     <mailto:akihiko.odaki@daynix.com>>> wrote:
>      >
>      >     On 2023/09/13 21:58, Albert Esteve wrote:
>      >      >
>      >      >
>      >      > On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki
>      >     <akihiko.odaki@daynix.com> wrote:
>      >      >
>      >      >     On 2023/09/13 20:34, Albert Esteve wrote:
>      >      >      >
>      >      >      >
>      >      >      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
>      >      >     <akihiko.odaki@daynix.com> wrote:
>      >      >      >
>      >      >      >     On 2023/09/13 16:55, Albert Esteve wrote:
>      >      >      >      > Hi Antonio,
>      >      >      >      >
>      >      >      >      > If I'm not mistaken, this patch is related with:
>      >      >      >      >
>      >      >      >
>      >      >
>      >
>     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
>      >      >      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu
>     both,
>      >     would
>      >      >     use the
>      >      >      >      > infrastructure from the patch I linked to
>     store the
>      >      >      >      > virtio objects, so that they can be later
>     shared with
>      >      >     other devices.
>      >      >      >
>      >      >      >     I don't think such sharing is possible because the
>      >     resources are
>      >      >      >     identified by IDs that are local to the device.
>     That also
>      >      >     complicates
>      >      >      >     migration.
>      >      >      >
>      >      >      >     Regards,
>      >      >      >     Akihiko Odaki
>      >      >      >
>      >      >      > Hi Akihiko,
>      >      >      >
>      >      >      > As far as I understand, the feature to export
>      >     dma-bufs from the
>      >      >      > virtgpu was introduced as part of the virtio
>     cross-device
>      >     sharing
>      >      >      > proposal [1]. Thus, it shall be possible. When
>      >      >      > virtgpu calls ASSIGN_UUID, it exports and identifies
>      >      >      > the dmabuf resource, so that
>      >     when the
>      >      >     dmabuf gets
>      >      >      > shared inside the guest (e.g., with virtio-video),
>     we can
>      >     use the
>      >      >     assigned
>      >      >      > UUID to find the dmabuf in the host (using the
>     patch that I
>      >      >     linked above),
>      >      >      > and import it.
>      >      >      >
>      >      >      > [1] - https://lwn.net/Articles/828988/
>      >      >
>      >      >     The problem is that virtio-gpu can have other kind of
>      >     resources like
>      >      >     pixman and OpenGL textures and manage them and
>     DMA-BUFs with
>      >     unified
>      >      >     resource ID.
>      >      >
>      >      >
>      >      > I see.
>      >      >
>      >      >
>      >      >     So you cannot change:
>      >      >     g_hash_table_insert(g->resource_uuids,
>      >      >     GUINT_TO_POINTER(assign.resource_id), uuid);
>      >      >     by:
>      >      >     virtio_add_dmabuf(uuid, assign.resource_id);
>      >      >
>      >      >     assign.resource_id is not a DMA-BUF file descriptor, and
>      >      >     the underlying resource may not be a DMA-BUF in the first
>      >      >     place.
>      >      >
>      >      >
>      >      > I didn't really look into the patch in depth, so the code was
>      >      > intended to give an idea of what the implementation would look
>      >      > like with the cross-device patch API. Indeed, it is not the
>      >      > resource_id (I just took a brief look at the virtio
>      >      > specification 1.2), but the underlying resource that we want
>      >      > to use here.
>      >      >
>      >      >
>      >      >     Also, since this lives in the common code that is not used
>      >     only by
>      >      >     virtio-gpu-gl but also virtio-gpu, which supports
>     migration,
>      >     we also
>      >      >     need to take care of that. It is not a problem for
>     DMA-BUF as
>      >      >     DMA-BUF is
>      >      >     not migratable anyway, but the situation is different
>     in this
>      >     case.
>      >      >
>      >      >     Implementing cross-device sharing is certainly a
>     possibility,
>      >     but that
>      >      >     requires more than dealing with DMA-BUFs.
>      >      >
>      >      >
>      >      > So, if I understood correctly, dmabufs are just a subset
>     of the
>      >     resources
>      >      > that the gpu manages, or can assign UUIDs to. I am not
>     sure why
>      >      > the virt gpu driver would want to send a ASSIGN_UUID for
>     anything
>      >      > that is not a dmabuf (are we sure it does?), but I am not
>     super
>      >     familiarized
>      >      > with virtgpu to begin with.
>      >
>      >     In my understanding, a resource will first be created as an
>      >     OpenGL or Vulkan texture and then exported as a DMA-BUF file
>      >     descriptor. For
>      >     these resource types exporting/importing code is mandatory.
>      >
>      >     For pixman buffers (i.e., non-virgl device), I don't see a
>     compelling
>      >     reason to have cross-device sharing. It is possible to omit
>     resource
>      >     UUID feature from non-virgl device to avoid implementing
>     complicated
>      >     migration.
>      >
>      >
>      > I see, thanks for the clarification.
>      > I would assume you could avoid the UUID feature for those
>     resources, but
>      > I will need to check the driver implementation. It is worth checking
>      > though, if
>      > that would simplify the implementation.
>      >
>      >
>      >      > But I see that internally, the GPU specs relate a UUID with a
>      >     resource_id,
>      >      > so we still need both tables:
>      >      > - one to relate UUID with resource_id to be able to locate the
>      >      > underlying resource
>      >      > - the table that holds the dmabuf with the UUID for
>     cross-device
>      >     sharing
>      >      >
>      >      > With that in mind, sounds to me that the support for
>     cross-device
>      >      > sharing could
>      >      > be added on top of this patch, once
>      >      >
>      >
>     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
>      >      > lands.
>      >
>      >     That is possible, but I think it's better to implement
>     cross-device
>      >     sharing at the same time introducing virtio-dmabuf.
>      >
>      >     The current design of virtio-dmabuf looks somewhat
>     inconsistent; it's
>      >     named "dmabuf", but internally the UUIDs are stored into
>     something
>      >     named
>      >     "resource_uuids" and it has SharedObjectType so it's more like a
>      >     generic
>      >     resource sharing mechanism. If you actually have an
>     implementation of
>      >     cross-device sharing using virtio-dmabuf, it will be clear
>     what kind of
>      >     feature is truly necessary.
>      >
>      >
>      > Yeah, the file was named as virtio-dmabuf following the kernel
>      > implementation. Also, because for the moment it only aims to share
>      > dmabufs. However, virtio specs leave the virtio object
>      > definition vague [1]
>      > (I guess purposely). It is up to the specific devices to define
>     what an
>      > object
>      > means for them. So the implementation tries to follow that, and
>      > leave the contents of the table generic. The table can hold any
>     kind of
>      > object,
>      > and the API exposes type-specific functions (for dmabufs, or others).
>     In the guest kernel, the name "virtio_dma_buf" represents the interface
>     between the *guest* kernel and *guest* user-space. It makes sense since
>     the cross-device resource sharing is managed by the userspace and
>     DMA-BUF is the only interface between them for this purpose.
> 
>     The situation is different for QEMU; QEMU interacts with backends
>     using backend-specific interfaces (OpenGL/pixman) and virgl is
>     capable of exporting textures as DMA-BUFs. DMA-BUF is not universal
>     in this sense. As such, we cannot just borrow the kernel-side naming
>     but must invent our own.
> 
> It is not a gpu-specific feature. It is a generic cross-device sharing
> mechanism for virtio objects. In this case, virtio objects happen to be
> dmabufs in this first iteration. Hence, the name.
> 
> virtio-gpu (and vhost-user-gpu) will use this feature only with virgl;
> that is fine, and orthogonal to the object-sharing mechanism. It allows
> dmabufs to be shared in the host following how they are shared in the guest.
> The virtgpu driver may call ASSIGN_UUID for other types of resources 
> (not sure,
> but could be), but they will never be shared with other virtio devices.
> So they are not too relevant. Also, the shared objects table could 
> potentially
> be accessed from any virtio device, not only virtio-gpu or virtio-video.

The virtgpu driver will call ASSIGN_UUID for resources that are backed
with a pixman buffer. What is used as the backing store for resources is
an implementation detail of the VMM and the guest cannot know it. For the
guest, they are the same kind of resources (2D images).


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID
  2023-09-14 16:55                           ` Akihiko Odaki
@ 2023-09-15  9:56                             ` Albert Esteve
  0 siblings, 0 replies; 51+ messages in thread
From: Albert Esteve @ 2023-09-15  9:56 UTC (permalink / raw)
  To: Akihiko Odaki
  Cc: Huang Rui, Gerd Hoffmann, Michael S . Tsirkin,
	Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
	Dr . David Alan Gilbert, Robert Beckett, Dmitry Osipenko,
	Alex Bennée, qemu-devel, xen-devel, Gurchetan Singh,
	ernunes, Philippe Mathieu-Daudé,
	Alyssa Ross, Roger Pau Monné,
	Deucher, Alexander, Koenig, Christian, Ragiadakou, Xenia,
	Pelloux-Prayer, Pierre-Eric, Huang, Honglei1, Zhang, Julia, Chen,
	Jiqian

On Thu, Sep 14, 2023 at 6:56 PM Akihiko Odaki <akihiko.odaki@daynix.com>
wrote:

> On 2023/09/14 17:29, Albert Esteve wrote:
> >
> >
> > On Thu, Sep 14, 2023 at 9:17 AM Akihiko Odaki
> > <akihiko.odaki@daynix.com> wrote:
> >
> >     On 2023/09/13 23:18, Albert Esteve wrote:
> >      >
> >      >
> >      > On Wed, Sep 13, 2023 at 3:43 PM Akihiko Odaki
> >     <akihiko.odaki@daynix.com> wrote:
> >      >
> >      >     On 2023/09/13 21:58, Albert Esteve wrote:
> >      >      >
> >      >      >
> >      >      > On Wed, Sep 13, 2023 at 2:22 PM Akihiko Odaki
> >      >     <akihiko.odaki@daynix.com> wrote:
> >      >      >
> >      >      >     On 2023/09/13 20:34, Albert Esteve wrote:
> >      >      >      >
> >      >      >      >
> >      >      >      > On Wed, Sep 13, 2023 at 12:34 PM Akihiko Odaki
> >      >      >     <akihiko.odaki@daynix.com> wrote:
> >      >      >      >
> >      >      >      >     On 2023/09/13 16:55, Albert Esteve wrote:
> >      >      >      >      > Hi Antonio,
> >      >      >      >      >
> >      >      >      >      > If I'm not mistaken, this patch is related
> with:
> >      >      >      >      >
> >      >      >      >
> >      >      >
> >      >
> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01853.html
> >      >      >      >      > IMHO, ideally, virtio-gpu and vhost-user-gpu
> >     both,
> >      >     would
> >      >      >     use the
> >      >      >      >      > infrastructure from the patch I linked to
> >     store the
> >      >      >      >      > virtio objects, so that they can be later
> >     shared with
> >      >      >     other devices.
> >      >      >      >
> >      >      >      >     I don't think such sharing is possible because
> the
> >      >     resources are
> >      >      >      >     identified by IDs that are local to the device.
> >     That also
> >      >      >     complicates
> >      >      >      >     migration.
> >      >      >      >
> >      >      >      >     Regards,
> >      >      >      >     Akihiko Odaki
> >      >      >      >
> >      >      >      > Hi Akihiko,
> >      >      >      >
> >      >      >      > As far as I understand, the feature to export
> >      >     dma-bufs from the
> >      >      >      > virtgpu was introduced as part of the virtio
> >     cross-device
> >      >     sharing
> >      >      >      > proposal [1]. Thus, it shall be possible. When
> >      >      >      > virtgpu calls ASSIGN_UUID, it exports and identifies
> >      >      >      > the dmabuf resource, so that
> >      >     when the
> >      >      >     dmabuf gets
> >      >      >      > shared inside the guest (e.g., with virtio-video),
> >     we can
> >      >     use the
> >      >      >     assigned
> >      >      >      > UUID to find the dmabuf in the host (using the
> >     patch that I
> >      >      >     linked above),
> >      >      >      > and import it.
> >      >      >      >
> >      >      >      > [1] - https://lwn.net/Articles/828988/
> >      >      >
> >      >      >     The problem is that virtio-gpu can have other kind of
> >      >     resources like
> >      >      >     pixman and OpenGL textures and manage them and
> >     DMA-BUFs with
> >      >     unified
> >      >      >     resource ID.
> >      >      >
> >      >      >
> >      >      > I see.
> >      >      >
> >      >      >
> >      >      >     So you cannot change:
> >      >      >     g_hash_table_insert(g->resource_uuids,
> >      >      >     GUINT_TO_POINTER(assign.resource_id), uuid);
> >      >      >     by:
> >      >      >     virtio_add_dmabuf(uuid, assign.resource_id);
> >      >      >
> >      >      >     assign.resource_id is not a DMA-BUF file descriptor,
> >      >      >     and the underlying resource may not be a DMA-BUF in
> >      >      >     the first place.
> >      >      >
> >      >      >
> >      >      > I didn't really look into the patch in depth, so the code
> >      >      > was intended to give an idea of what the implementation
> >      >      > would look like with the cross-device patch API. Indeed, it
> >      >      > is not the resource_id (I just took a brief look at the
> >      >      > virtio specification 1.2), but the underlying resource that
> >      >      > we want to use here.
> >      >      >
> >      >      >
> >      >      >     Also, since this lives in the common code that is not
> used
> >      >     only by
> >      >      >     virtio-gpu-gl but also virtio-gpu, which supports
> >     migration,
> >      >     we also
> >      >      >     need to take care of that. It is not a problem for
> >     DMA-BUF as
> >      >      >     DMA-BUF is
> >      >      >     not migratable anyway, but the situation is different
> >     in this
> >      >     case.
> >      >      >
> >      >      >     Implementing cross-device sharing is certainly a
> >     possibility,
> >      >     but that
> >      >      >     requires more than dealing with DMA-BUFs.
> >      >      >
> >      >      >
> >      >      > So, if I understood correctly, dmabufs are just a subset
> >     of the
> >      >     resources
> >      >      > that the gpu manages, or can assign UUIDs to. I am not
> >     sure why
> >      >      > the virtgpu driver would want to send an ASSIGN_UUID for
> >      >      > anything that is not a dmabuf (are we sure it does?), but
> >      >      > I am not super familiar with virtgpu to begin with.
> >      >
> >      >     In my understanding, a resource will first be created as an
> >      >     OpenGL or Vulkan texture and then exported as a DMA-BUF file
> >      >     descriptor. For
> >      >     these resource types exporting/importing code is mandatory.
> >      >
> >      >     For pixman buffers (i.e., non-virgl device), I don't see a
> >     compelling
> >      >     reason to have cross-device sharing. It is possible to omit
> >     resource
> >      >     UUID feature from non-virgl device to avoid implementing
> >     complicated
> >      >     migration.
> >      >
> >      >
> >      > I see, thanks for the clarification.
> >      > I would assume you could avoid the UUID feature for those
> >     resources, but
> >      > I will need to check the driver implementation. It is worth
> checking
> >      > though, if
> >      > that would simplify the implementation.
> >      >
> >      >
> >      >      > But I see that internally, the GPU specs relate a UUID
> with a
> >      >     resource_id,
> >      >      > so we still need both tables:
> >      >      > - one to relate UUID with resource_id to be able to locate
> the
> >      >      > underlying resource
> >      >      > - the table that holds the dmabuf with the UUID for
> >     cross-device
> >      >     sharing
> >      >      >
> >      >      > With that in mind, sounds to me that the support for
> >     cross-device
> >      >      > sharing could
> >      >      > be added on top of this patch, once
> >      >      >
> >      >
> >     https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg01850.html
> >      >      > lands.
> >      >
> >      >     That is possible, but I think it's better to implement
> >     cross-device
> >      >     sharing at the same time introducing virtio-dmabuf.
> >      >
> >      >     The current design of virtio-dmabuf looks somewhat
> >     inconsistent; it's
> >      >     named "dmabuf", but internally the UUIDs are stored into
> >     something
> >      >     named
> >      >     "resource_uuids" and it has SharedObjectType so it's more
> like a
> >      >     generic
> >      >     resource sharing mechanism. If you actually have an
> >     implementation of
> >      >     cross-device sharing using virtio-dmabuf, it will be clear
> >     what kind of
> >      >     feature is truly necessary.
> >      >
> >      >
> >      > Yeah, the file was named as virtio-dmabuf following the kernel
> >      > implementation. Also, because for the moment it only aims to share
> >      > dmabufs. However, virtio specs leave the virtio object
> >      > definition vague [1]
> >      > (I guess purposely). It is up to the specific devices to define
> >     what an
> >      > object
> >      > means for them. So the implementation tries to follow that, and
> >      > leave the contents of the table generic. The table can hold any
> >     kind of
> >      > object,
> >      > and the API exposes type-specific functions (for dmabufs, or
> others).
> >     In the guest kernel, the name "virtio_dma_buf" represents the
> interface
> >     between the *guest* kernel and *guest* user-space. It makes sense
> since
> >     the cross-device resource sharing is managed by the userspace and
> >     DMA-BUF is the only interface between them for this purpose.
> >
> >     The situation is different for QEMU; QEMU interacts with backends
> >     using backend-specific interfaces (OpenGL/pixman) and virgl is
> >     capable of exporting textures as DMA-BUFs. DMA-BUF is not universal
> >     in this sense. As such, we cannot just borrow the kernel-side naming
> >     but must invent our own.
> >
> > It is not a gpu-specific feature. It is a generic cross-device sharing
> > mechanism for virtio objects. In this case, virtio objects happen to be
> > dmabufs in this first iteration. Hence, the name.
> >
> > virtio-gpu (and vhost-user-gpu) will use this feature only with virgl;
> > that is fine, and orthogonal to the object-sharing mechanism. It allows
> > dmabufs to be shared in the host following how they are shared in the
> > guest.
> > The virtgpu driver may call ASSIGN_UUID for other types of resources
> > (not sure,
> > but could be), but they will never be shared with other virtio devices.
> > So they are not too relevant. Also, the shared objects table could
> > potentially
> > be accessed from any virtio device, not only virtio-gpu or virtio-video.
>
> The virtgpu driver will call ASSIGN_UUID for resources that are backed
> with a pixman buffer. What is used as the backing store for resources is
> an implementation detail of the VMM and the guest cannot know it. For the
> guest, they are the same kind of resources (2D images).
>

Ok, got it. In any case, those resources that are actually backed by a
dmabuf ought to be shared using virtio_dmabuf. I think that is the key
point of this discussion.

That support can be added once both this patch and the virtio_dmabuf land.

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2023-09-15  9:57 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-31  9:32 [QEMU PATCH v4 00/13] Support blob memory and venus on qemu Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 01/13] virtio: Add shared memory capability Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 02/13] virtio-gpu: CONTEXT_INIT feature Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 03/13] virtio-gpu: hostmem Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 04/13] virtio-gpu: blob prep Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 05/13] virtio-gpu: Support context init feature with virglrenderer Huang Rui
2023-08-31  9:41   ` Philippe Mathieu-Daudé
2023-08-31  9:32 ` [QEMU PATCH v4 06/13] virtio-gpu: Configure context init for virglrenderer Huang Rui
2023-08-31  9:39   ` Philippe Mathieu-Daudé
2023-09-04  6:45     ` Huang Rui via
2023-08-31  9:32 ` [QEMU PATCH v4 07/13] softmmu/memory: enable automatic deallocation of memory regions Huang Rui
2023-08-31 10:10   ` Akihiko Odaki
2023-09-05  9:07     ` Huang Rui
2023-09-05  9:17       ` Akihiko Odaki
2023-09-05 13:29         ` Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 08/13] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 09/13] virtio-gpu: Handle resource blob commands Huang Rui
2023-08-31 10:24   ` Akihiko Odaki
2023-09-05  9:08     ` Huang Rui
2023-09-05  9:20       ` Akihiko Odaki
2023-09-06  3:09         ` Huang Rui
2023-09-06  3:39           ` Akihiko Odaki
2023-09-06  7:56             ` Huang Rui
2023-09-06 14:16               ` Akihiko Odaki
2023-08-31  9:32 ` [QEMU PATCH v4 10/13] virtio-gpu: Resource UUID Huang Rui
2023-08-31 10:36   ` Akihiko Odaki
2023-09-09  9:09     ` Huang Rui
2023-09-10  4:21       ` Akihiko Odaki
2023-09-13  7:55         ` Albert Esteve
2023-09-13 10:34           ` Akihiko Odaki
2023-09-13 11:34             ` Albert Esteve
2023-09-13 12:22               ` Akihiko Odaki
2023-09-13 12:58                 ` Albert Esteve
2023-09-13 13:43                   ` Akihiko Odaki
2023-09-13 14:18                     ` Albert Esteve
2023-09-13 14:27                       ` Albert Esteve
2023-09-14  7:17                       ` Akihiko Odaki
2023-09-14  8:29                         ` Albert Esteve
2023-09-14 16:55                           ` Akihiko Odaki
2023-09-15  9:56                             ` Albert Esteve
2023-08-31  9:32 ` [QEMU PATCH v4 11/13] virtio-gpu: Support Venus capset Huang Rui
2023-08-31 10:43   ` Akihiko Odaki
2023-09-09  9:29     ` Huang Rui
2023-09-10  4:32       ` Akihiko Odaki
2023-08-31  9:32 ` [QEMU PATCH v4 12/13] virtio-gpu: Initialize Venus Huang Rui
2023-08-31 10:40   ` Antonio Caggiano
2023-08-31 15:51     ` Dmitry Osipenko
2023-09-09 10:53       ` Huang Rui via
2023-09-12 13:44         ` Dmitry Osipenko
2023-09-09 10:52     ` Huang Rui
2023-08-31  9:32 ` [QEMU PATCH v4 13/13] virtio-gpu: Enable virglrenderer render server flag for venus Huang Rui

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).