* [PATCH v2 0/4] virtio-gpu: Blob resources
@ 2022-09-13 10:50 Antonio Caggiano
From: Antonio Caggiano @ 2022-09-13 10:50 UTC (permalink / raw)
To: qemu-devel; +Cc: gert.wollny, dmitry.osipenko
Add shared memory support and handle blob resource creation, mapping and
unmapping through virglrenderer's new stable APIs[0] when available.
[0] https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/891
Antonio Caggiano (1):
virtio-gpu: Handle resource blob commands
Dmitry Osipenko (1):
virtio-gpu: Don't require udmabuf when blob support is enabled
Dr. David Alan Gilbert (1):
virtio: Add shared memory capability
Gerd Hoffmann (1):
virtio-gpu: hostmem
hw/display/virtio-gpu-pci.c | 15 +++
hw/display/virtio-gpu-virgl.c | 171 +++++++++++++++++++++++++++
hw/display/virtio-gpu.c | 29 ++---
hw/display/virtio-vga.c | 33 ++++--
hw/virtio/virtio-pci.c | 18 +++
include/hw/virtio/virtio-gpu-bswap.h | 18 +++
include/hw/virtio/virtio-gpu.h | 13 ++
include/hw/virtio/virtio-pci.h | 4 +
meson.build | 5 +
9 files changed, 283 insertions(+), 23 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v2 1/4] virtio: Add shared memory capability
From: Antonio Caggiano @ 2022-09-13 10:50 UTC (permalink / raw)
To: qemu-devel
Cc: gert.wollny, dmitry.osipenko, Dr. David Alan Gilbert, Michael S. Tsirkin
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Define a new capability type 'VIRTIO_PCI_CAP_SHARED_MEMORY_CFG'
and the data structure 'virtio_pci_shm_cap' to go with it.
They allow defining shared memory regions with sizes and offsets
of 2^32 and more.
Multiple instances of the capability are allowed and distinguished
by a device-specific 'id'.
v2: Remove virtio_pci_shm_cap as virtio_pci_cap64 is used instead.
v3: No need for mask32 as cpu_to_le32 truncates the value.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
---
hw/virtio/virtio-pci.c | 18 ++++++++++++++++++
include/hw/virtio/virtio-pci.h | 4 ++++
2 files changed, 22 insertions(+)
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index a50c5a57d7..377bb06fec 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1169,6 +1169,24 @@ static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
return offset;
}
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
+ uint8_t bar, uint64_t offset, uint64_t length,
+ uint8_t id)
+{
+ struct virtio_pci_cap64 cap = {
+ .cap.cap_len = sizeof cap,
+ .cap.cfg_type = VIRTIO_PCI_CAP_SHARED_MEMORY_CFG,
+ };
+
+ cap.cap.bar = bar;
+ cap.cap.length = cpu_to_le32(length);
+ cap.length_hi = cpu_to_le32(length >> 32);
+ cap.cap.offset = cpu_to_le32(offset);
+ cap.offset_hi = cpu_to_le32(offset >> 32);
+ cap.cap.id = id;
+ return virtio_pci_add_mem_cap(proxy, &cap.cap);
+}
+
static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
unsigned size)
{
diff --git a/include/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
index 2446dcd9ae..5e5c4a4c6d 100644
--- a/include/hw/virtio/virtio-pci.h
+++ b/include/hw/virtio/virtio-pci.h
@@ -252,4 +252,8 @@ void virtio_pci_types_register(const VirtioPCIDeviceTypeInfo *t);
*/
unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
+ uint8_t bar, uint64_t offset, uint64_t length,
+ uint8_t id);
+
#endif
--
2.34.1
* [PATCH v2 2/4] virtio-gpu: hostmem
From: Antonio Caggiano @ 2022-09-13 10:50 UTC (permalink / raw)
To: qemu-devel
Cc: gert.wollny, dmitry.osipenko, Gerd Hoffmann, Michael S. Tsirkin
From: Gerd Hoffmann <kraxel@redhat.com>
Use VIRTIO_GPU_SHM_ID_HOST_VISIBLE as id for virtio-gpu.
v2: Formatting fixes
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
hw/display/virtio-gpu-pci.c | 15 +++++++++++++++
hw/display/virtio-gpu.c | 1 +
hw/display/virtio-vga.c | 33 ++++++++++++++++++++++++---------
include/hw/virtio/virtio-gpu.h | 5 +++++
4 files changed, 45 insertions(+), 9 deletions(-)
diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index 93f214ff58..2cbbacd7fe 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -33,6 +33,21 @@ static void virtio_gpu_pci_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
DeviceState *vdev = DEVICE(g);
int i;
+ if (virtio_gpu_hostmem_enabled(g->conf)) {
+ vpci_dev->msix_bar_idx = 1;
+ vpci_dev->modern_mem_bar_idx = 2;
+ memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+ g->conf.hostmem);
+ pci_register_bar(&vpci_dev->pci_dev, 4,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &g->hostmem);
+ virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+ VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+ }
+
+ qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus), errp);
virtio_pci_force_virtio_1(vpci_dev);
if (!qdev_realize(vdev, BUS(&vpci_dev->bus), errp)) {
return;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 20cc703dcc..506b3b8eef 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1424,6 +1424,7 @@ static Property virtio_gpu_properties[] = {
256 * MiB),
DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
+ DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index 4dcb34c4a7..aa8d1ab993 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -115,17 +115,32 @@ static void virtio_vga_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
pci_register_bar(&vpci_dev->pci_dev, 0,
PCI_BASE_ADDRESS_MEM_PREFETCH, &vga->vram);
- /*
- * Configure virtio bar and regions
- *
- * We use bar #2 for the mmio regions, to be compatible with stdvga.
- * virtio regions are moved to the end of bar #2, to make room for
- * the stdvga mmio registers at the start of bar #2.
- */
- vpci_dev->modern_mem_bar_idx = 2;
- vpci_dev->msix_bar_idx = 4;
vpci_dev->modern_io_bar_idx = 5;
+ if (!virtio_gpu_hostmem_enabled(g->conf)) {
+ /*
+ * Configure virtio bar and regions
+ *
+ * We use bar #2 for the mmio regions, to be compatible with stdvga.
+ * virtio regions are moved to the end of bar #2, to make room for
+ * the stdvga mmio registers at the start of bar #2.
+ */
+ vpci_dev->modern_mem_bar_idx = 2;
+ vpci_dev->msix_bar_idx = 4;
+ } else {
+ vpci_dev->msix_bar_idx = 1;
+ vpci_dev->modern_mem_bar_idx = 2;
+ memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+ g->conf.hostmem);
+ pci_register_bar(&vpci_dev->pci_dev, 4,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &g->hostmem);
+ virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+ VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+ }
+
if (!(vpci_dev->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ)) {
/*
* with page-per-vq=off there is no padding space we can use
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 2e28507efe..eafce75b04 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -102,12 +102,15 @@ enum virtio_gpu_base_conf_flags {
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_DMABUF_ENABLED))
#define virtio_gpu_blob_enabled(_cfg) \
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
+#define virtio_gpu_hostmem_enabled(_cfg) \
+ (_cfg.hostmem > 0)
struct virtio_gpu_base_conf {
uint32_t max_outputs;
uint32_t flags;
uint32_t xres;
uint32_t yres;
+ uint64_t hostmem;
};
struct virtio_gpu_ctrl_command {
@@ -131,6 +134,8 @@ struct VirtIOGPUBase {
int renderer_blocked;
int enable;
+ MemoryRegion hostmem;
+
struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
int enabled_output_bitmask;
--
2.34.1
* [PATCH v2 3/4] virtio-gpu: Handle resource blob commands
From: Antonio Caggiano @ 2022-09-13 10:50 UTC (permalink / raw)
To: qemu-devel
Cc: gert.wollny, dmitry.osipenko, Michael S. Tsirkin, Gerd Hoffmann
Support blob resource creation, mapping and unmapping by calling the
new stable virglrenderer 0.10 interface. This is only enabled when the
interface is available and the blob config is set, e.g.
-device virtio-vga-gl,blob=true
v2: Fix memory leaks and unmap resource on destroy.
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-virgl.c | 171 +++++++++++++++++++++++++++
hw/display/virtio-gpu.c | 12 +-
include/hw/virtio/virtio-gpu-bswap.h | 18 +++
include/hw/virtio/virtio-gpu.h | 8 ++
meson.build | 5 +
5 files changed, 210 insertions(+), 4 deletions(-)
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 73cb92c8d5..17f00b3fb0 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -16,6 +16,8 @@
#include "trace.h"
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-bswap.h"
+#include "hw/virtio/virtio-iommu.h"
#include <virglrenderer.h>
@@ -398,6 +400,164 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
g_free(resp);
}
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+
+static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_create_blob cblob;
+ int ret;
+
+ VIRTIO_GPU_FILL_CMD(cblob);
+ virtio_gpu_create_blob_bswap(&cblob);
+ trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+ if (cblob.resource_id == 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = virtio_gpu_find_resource(g, cblob.resource_id);
+ if (res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+ __func__, cblob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+
+ res->resource_id = cblob.resource_id;
+ res->blob_size = cblob.size;
+
+ if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+ ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
+ cmd, &res->addrs, &res->iov,
+ &res->iov_cnt);
+ if (ret != 0) {
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ return;
+ }
+ }
+
+ if (cblob.blob_mem == VIRTIO_GPU_BLOB_MEM_GUEST) {
+ virtio_gpu_init_udmabuf(res);
+ }
+ const struct virgl_renderer_resource_create_blob_args virgl_args = {
+ .res_handle = cblob.resource_id,
+ .ctx_id = cblob.hdr.ctx_id,
+ .blob_mem = cblob.blob_mem,
+ .blob_id = cblob.blob_id,
+ .blob_flags = cblob.blob_flags,
+ .size = cblob.size,
+ .iovecs = res->iov,
+ .num_iovs = res->iov_cnt,
+ };
+ ret = virgl_renderer_resource_create_blob(&virgl_args);
+ if (ret) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
+ __func__, strerror(-ret));
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ }
+}
+
+static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_map_blob mblob;
+ int ret;
+ void *data;
+ uint64_t size;
+ struct virtio_gpu_resp_map_info resp;
+
+ VIRTIO_GPU_FILL_CMD(mblob);
+ virtio_gpu_map_blob_bswap(&mblob);
+
+ if (mblob.resource_id == 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = virtio_gpu_find_resource(g, mblob.resource_id);
+ if (!res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+ __func__, mblob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ ret = virgl_renderer_resource_map(res->resource_id, &data, &size);
+ if (ret) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
+ __func__, strerror(-ret));
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ memory_region_init_ram_device_ptr(&res->region, OBJECT(g), NULL, size, data);
+ memory_region_add_subregion(&g->parent_obj.hostmem, mblob.offset, &res->region);
+ memory_region_set_enabled(&res->region, true);
+
+ memset(&resp, 0, sizeof(resp));
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+ virgl_renderer_resource_get_map_info(mblob.resource_id, &resp.map_info);
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+
+ res->mapped = true;
+}
+
+int virtio_gpu_virgl_resource_unmap(VirtIOGPU *g,
+ struct virtio_gpu_simple_resource *res)
+{
+ if (!res->mapped) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already unmapped %d\n",
+ __func__, res->resource_id);
+ return VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ }
+
+ memory_region_set_enabled(&res->region, false);
+ memory_region_del_subregion(&g->parent_obj.hostmem, &res->region);
+ object_unparent(OBJECT(&res->region));
+
+ res->mapped = false;
+ return virgl_renderer_resource_unmap(res->resource_id);
+}
+
+static void virgl_cmd_resource_unmap_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_unmap_blob ublob;
+ VIRTIO_GPU_FILL_CMD(ublob);
+ virtio_gpu_unmap_blob_bswap(&ublob);
+
+ if (ublob.resource_id == 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = virtio_gpu_find_resource(g, ublob.resource_id);
+ if (!res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+ __func__, ublob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ virtio_gpu_virgl_resource_unmap(g, res);
+}
+
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+
void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd)
{
@@ -464,6 +624,17 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
case VIRTIO_GPU_CMD_GET_EDID:
virtio_gpu_get_edid(g, cmd);
break;
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+ virgl_cmd_resource_create_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+ virgl_cmd_resource_map_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+ virgl_cmd_resource_unmap_blob(g, cmd);
+ break;
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
default:
cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 506b3b8eef..f79693d44d 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -33,8 +33,6 @@
#define VIRTIO_GPU_VM_VERSION 1
-static struct virtio_gpu_simple_resource*
-virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
static struct virtio_gpu_simple_resource *
virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
bool require_backing,
@@ -115,7 +113,7 @@ static void update_cursor(VirtIOGPU *g, struct virtio_gpu_update_cursor *cursor)
cursor->resource_id ? 1 : 0);
}
-static struct virtio_gpu_simple_resource *
+struct virtio_gpu_simple_resource *
virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id)
{
struct virtio_gpu_simple_resource *res;
@@ -874,6 +872,10 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
struct virtio_gpu_simple_resource *res)
{
+ if (res->mapped) {
+ virtio_gpu_virgl_resource_unmap(g, res);
+ }
+
virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
res->iov = NULL;
res->iov_cnt = 0;
@@ -1323,10 +1325,12 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
return;
}
+#ifndef HAVE_VIRGL_RESOURCE_BLOB
if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
- error_setg(errp, "blobs and virgl are not compatible (yet)");
+ error_setg(errp, "Linked virglrenderer does not support blob resources");
return;
}
+#endif
}
if (!virtio_gpu_base_device_realize(qdev,
diff --git a/include/hw/virtio/virtio-gpu-bswap.h b/include/hw/virtio/virtio-gpu-bswap.h
index 9124108485..dd1975e2d4 100644
--- a/include/hw/virtio/virtio-gpu-bswap.h
+++ b/include/hw/virtio/virtio-gpu-bswap.h
@@ -63,10 +63,28 @@ virtio_gpu_create_blob_bswap(struct virtio_gpu_resource_create_blob *cblob)
{
virtio_gpu_ctrl_hdr_bswap(&cblob->hdr);
le32_to_cpus(&cblob->resource_id);
+ le32_to_cpus(&cblob->blob_mem);
le32_to_cpus(&cblob->blob_flags);
+ le32_to_cpus(&cblob->nr_entries);
+ le64_to_cpus(&cblob->blob_id);
le64_to_cpus(&cblob->size);
}
+static inline void
+virtio_gpu_map_blob_bswap(struct virtio_gpu_resource_map_blob *mblob)
+{
+ virtio_gpu_ctrl_hdr_bswap(&mblob->hdr);
+ le32_to_cpus(&mblob->resource_id);
+ le64_to_cpus(&mblob->offset);
+}
+
+static inline void
+virtio_gpu_unmap_blob_bswap(struct virtio_gpu_resource_unmap_blob *ublob)
+{
+ virtio_gpu_ctrl_hdr_bswap(&ublob->hdr);
+ le32_to_cpus(&ublob->resource_id);
+}
+
static inline void
virtio_gpu_scanout_blob_bswap(struct virtio_gpu_set_scanout_blob *ssb)
{
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index eafce75b04..708cf1bb9c 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -55,6 +55,9 @@ struct virtio_gpu_simple_resource {
int dmabuf_fd;
uint8_t *remapped;
+ MemoryRegion region;
+ bool mapped;
+
QTAILQ_ENTRY(virtio_gpu_simple_resource) next;
};
@@ -245,6 +248,9 @@ void virtio_gpu_base_fill_display_info(VirtIOGPUBase *g,
struct virtio_gpu_resp_display_info *dpy_info);
/* virtio-gpu.c */
+struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
+
void virtio_gpu_ctrl_response(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd,
struct virtio_gpu_ctrl_hdr *resp,
@@ -289,5 +295,7 @@ void virtio_gpu_virgl_reset_scanout(VirtIOGPU *g);
void virtio_gpu_virgl_reset(VirtIOGPU *g);
int virtio_gpu_virgl_init(VirtIOGPU *g);
int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g);
+int virtio_gpu_virgl_resource_unmap(VirtIOGPU *g,
+ struct virtio_gpu_simple_resource *res);
#endif
diff --git a/meson.build b/meson.build
index c2adb7caf4..dd064afd7a 100644
--- a/meson.build
+++ b/meson.build
@@ -718,6 +718,11 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
method: 'pkg-config',
required: get_option('virglrenderer'),
kwargs: static_kwargs)
+
+ config_host_data.set('HAVE_VIRGL_RESOURCE_BLOB',
+ cc.has_function('virgl_renderer_resource_create_blob',
+ prefix: '#include <virglrenderer.h>',
+ dependencies: virgl))
endif
curl = not_found
if not get_option('curl').auto() or have_block
--
2.34.1
* [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled
From: Antonio Caggiano @ 2022-09-13 10:50 UTC (permalink / raw)
To: qemu-devel
Cc: gert.wollny, dmitry.osipenko, Michael S. Tsirkin, Gerd Hoffmann
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Host blobs don't need udmabuf, it's only needed by guest blobs. The host
blobs are utilized by the Mesa virgl driver when persistent memory mapping
is needed by a GL buffer, otherwise virgl driver doesn't use blobs.
Persistent mapping support bumps GL version from 4.3 to 4.5 in guest.
Relax the udmabuf requirement.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Reviewed-by: Antonio Caggiano <antonio.caggiano@collabora.com>
---
hw/display/virtio-gpu.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index f79693d44d..767142cf5d 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -367,7 +367,9 @@ static void virtio_gpu_resource_create_blob(VirtIOGPU *g,
return;
}
- virtio_gpu_init_udmabuf(res);
+ if (cblob.blob_mem == VIRTIO_GPU_BLOB_MEM_GUEST) {
+ virtio_gpu_init_udmabuf(res);
+ }
QTAILQ_INSERT_HEAD(&g->reslist, res, next);
}
@@ -1319,19 +1321,13 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
VirtIODevice *vdev = VIRTIO_DEVICE(qdev);
VirtIOGPU *g = VIRTIO_GPU(qdev);
- if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
- if (!virtio_gpu_have_udmabuf()) {
- error_setg(errp, "cannot enable blob resources without udmabuf");
- return;
- }
-
#ifndef HAVE_VIRGL_RESOURCE_BLOB
- if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
- error_setg(errp, "Linked virglrenderer does not support blob resources");
- return;
- }
-#endif
+ if (virtio_gpu_blob_enabled(g->parent_obj.conf) &&
+ virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
+ error_setg(errp, "Linked virglrenderer does not support blob resources");
+ return;
}
+#endif
if (!virtio_gpu_base_device_realize(qdev,
virtio_gpu_handle_ctrl_cb,
--
2.34.1
* Re: [PATCH v2 1/4] virtio: Add shared memory capability
From: Bin Meng @ 2022-09-13 13:48 UTC (permalink / raw)
To: Antonio Caggiano
Cc: qemu-devel@nongnu.org Developers, gert.wollny, dmitry.osipenko,
Dr. David Alan Gilbert, Michael S. Tsirkin
On Tue, Sep 13, 2022 at 6:57 PM Antonio Caggiano
<antonio.caggiano@collabora.com> wrote:
>
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
>
> Define a new capability type 'VIRTIO_PCI_CAP_SHARED_MEMORY_CFG'
> and the data structure 'virtio_pci_shm_cap' to go with it.
> They allow defining shared memory regions with sizes and offsets
> of 2^32 and more.
> Multiple instances of the capability are allowed and distinguished
> by a device-specific 'id'.
>
> v2: Remove virtio_pci_shm_cap as virtio_pci_cap64 is used instead.
> v3: No need for mask32 as cpu_to_le32 truncates the value.
The version changelog above should be put below the '---' line.
>
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> ---
> hw/virtio/virtio-pci.c | 18 ++++++++++++++++++
> include/hw/virtio/virtio-pci.h | 4 ++++
> 2 files changed, 22 insertions(+)
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index a50c5a57d7..377bb06fec 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -1169,6 +1169,24 @@ static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
> return offset;
> }
>
> +int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
> + uint8_t bar, uint64_t offset, uint64_t length,
> + uint8_t id)
> +{
> + struct virtio_pci_cap64 cap = {
> + .cap.cap_len = sizeof cap,
> + .cap.cfg_type = VIRTIO_PCI_CAP_SHARED_MEMORY_CFG,
> + };
> +
> + cap.cap.bar = bar;
> + cap.cap.length = cpu_to_le32(length);
> + cap.length_hi = cpu_to_le32(length >> 32);
> + cap.cap.offset = cpu_to_le32(offset);
> + cap.offset_hi = cpu_to_le32(offset >> 32);
> + cap.cap.id = id;
> + return virtio_pci_add_mem_cap(proxy, &cap.cap);
> +}
> +
> static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
> unsigned size)
> {
> diff --git a/include/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
> index 2446dcd9ae..5e5c4a4c6d 100644
> --- a/include/hw/virtio/virtio-pci.h
> +++ b/include/hw/virtio/virtio-pci.h
> @@ -252,4 +252,8 @@ void virtio_pci_types_register(const VirtioPCIDeviceTypeInfo *t);
> */
> unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
>
> +int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
> + uint8_t bar, uint64_t offset, uint64_t length,
> + uint8_t id);
> +
> #endif
> --
Regards,
Bin
* Re: [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled
From: Gerd Hoffmann @ 2022-09-23 12:32 UTC (permalink / raw)
To: Antonio Caggiano
Cc: qemu-devel, gert.wollny, dmitry.osipenko, Michael S. Tsirkin
On Tue, Sep 13, 2022 at 12:50:22PM +0200, Antonio Caggiano wrote:
> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>
> Host blobs don't need udmabuf, it's only needed by guest blobs. The host
> blobs are utilized by the Mesa virgl driver when persistent memory mapping
> is needed by a GL buffer, otherwise virgl driver doesn't use blobs.
> Persistent mapping support bumps GL version from 4.3 to 4.5 in guest.
> Relax the udmabuf requirement.
What about blob=on,virgl=off?
In that case qemu manages the resources and continues to require
udmabuf.
Patches 1-3 look good, queued them up.
thanks,
Gerd
* Re: [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled
From: Dmitry Osipenko @ 2022-09-26 18:32 UTC (permalink / raw)
To: Gerd Hoffmann, Antonio Caggiano
Cc: qemu-devel, gert.wollny, Michael S. Tsirkin
On 9/23/22 15:32, Gerd Hoffmann wrote:
> On Tue, Sep 13, 2022 at 12:50:22PM +0200, Antonio Caggiano wrote:
>> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>
>> Host blobs don't need udmabuf, it's only needed by guest blobs. The host
>> blobs are utilized by the Mesa virgl driver when persistent memory mapping
>> is needed by a GL buffer, otherwise virgl driver doesn't use blobs.
>> Persistent mapping support bumps GL version from 4.3 to 4.5 in guest.
>> Relax the udmabuf requirement.
>
> What about blob=on,virgl=off?
>
> In that case qemu manages the resources and continues to require
> udmabuf.
The udmabuf is used only by the blob resource-creation command in QEMU.
I couldn't find a case where we'd hit that udmabuf code path in QEMU,
because the BLOB_MEM_GUEST resource type is used only by crosvm+Venus
when crosvm uses a dedicated render-server for virglrenderer.
Summarizing:
- only BLOB_MEM_GUEST resources require udmabuf
- /dev/udmabuf isn't accessible by normal user
- udmabuf driver isn't shipped by all of the popular Linux distros,
for example Debian doesn't ship it
Because of all of the above, I don't think it makes sense to
hard-require udmabuf at the start of Qemu. It's much better to fail
resource creation dynamically.
--
Best regards,
Dmitry
* Re: [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled
From: Gerd Hoffmann @ 2022-09-27 8:32 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: Antonio Caggiano, qemu-devel, gert.wollny, Michael S. Tsirkin
On Mon, Sep 26, 2022 at 09:32:40PM +0300, Dmitry Osipenko wrote:
> On 9/23/22 15:32, Gerd Hoffmann wrote:
> > On Tue, Sep 13, 2022 at 12:50:22PM +0200, Antonio Caggiano wrote:
> >> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >>
> >> Host blobs don't need udmabuf, it's only needed by guest blobs. The host
> >> blobs are utilized by the Mesa virgl driver when persistent memory mapping
> >> is needed by a GL buffer, otherwise virgl driver doesn't use blobs.
> >> Persistent mapping support bumps GL version from 4.3 to 4.5 in guest.
> >> Relax the udmabuf requirement.
> >
> > What about blob=on,virgl=off?
> >
> > In that case qemu manages the resources and continues to require
> > udmabuf.
>
> The udmabuf is used only by the blob resource-creation command in QEMU.
> I couldn't find a case where we'd hit that udmabuf code path in QEMU,
> because the BLOB_MEM_GUEST resource type is used only by crosvm+Venus
> when crosvm uses a dedicated render-server for virglrenderer.
A recent enough Linux guest driver will use BLOB_MEM_GUEST resources
with blob=on + virgl=off.
> - /dev/udmabuf isn't accessible by normal user
> - udmabuf driver isn't shipped by all of the popular Linux distros,
> for example Debian doesn't ship it
That's why blob resources are off by default.
> Because of all of the above, I don't think it makes sense to
> hard-require udmabuf at the start of Qemu. It's much better to fail
> resource creation dynamically.
Disagree. When virgl/venus is enabled, then yes, qemu would let
virglrenderer manage resources and I'm ok with whatever requirements
virglrenderer has. When qemu manages resources by itself udmabuf is
a hard requirement for blob support though.
take care,
Gerd
* Re: [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled
From: Dmitry Osipenko @ 2022-09-27 10:44 UTC (permalink / raw)
To: Gerd Hoffmann
Cc: Antonio Caggiano, qemu-devel, gert.wollny, Michael S. Tsirkin
On 9/27/22 11:32, Gerd Hoffmann wrote:
> On Mon, Sep 26, 2022 at 09:32:40PM +0300, Dmitry Osipenko wrote:
>> On 9/23/22 15:32, Gerd Hoffmann wrote:
>>> On Tue, Sep 13, 2022 at 12:50:22PM +0200, Antonio Caggiano wrote:
>>>> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>>
>>>> Host blobs don't need udmabuf, it's only needed by guest blobs. The host
>>>> blobs are utilized by the Mesa virgl driver when persistent memory mapping
>>>> is needed by a GL buffer, otherwise virgl driver doesn't use blobs.
>>>> Persistent mapping support bumps GL version from 4.3 to 4.5 in guest.
>>>> Relax the udmabuf requirement.
>>>
>>> What about blob=on,virgl=off?
>>>
>>> In that case qemu manages the resources and continues to require
>>> udmabuf.
>>
>> The udmabuf is used only by the blob resource-creation command in QEMU.
>> I couldn't find a case where we'd hit that udmabuf code path in QEMU,
>> because the BLOB_MEM_GUEST resource type is used only by crosvm+Venus
>> when crosvm uses a dedicated render-server for virglrenderer.
>
> A recent enough Linux guest driver will use BLOB_MEM_GUEST resources
> with blob=on + virgl=off.
I reproduced this case today using "-device
virtio-gpu-device,blob=true". You're right, it doesn't work without udmabuf.
[ 8.369306] virtio_gpu_dequeue_ctrl_func: 90 callbacks suppressed
[ 8.369311] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1200 (command 0x105)
[ 8.371848] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1205 (command 0x104)
[ 8.373085] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1200 (command 0x105)
[ 8.376273] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1205 (command 0x104)
[ 8.416972] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1200 (command 0x105)
[ 8.418841] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1205 (command 0x104)
I see now why you want to keep the udmabuf requirement for
blob=on,virgl=off.
>> - /dev/udmabuf isn't accessible by normal user
>> - udmabuf driver isn't shipped by all of the popular Linux distros,
>> for example Debian doesn't ship it
>
> That's why blob resources are off by default.
>
>> Because of all of the above, I don't think it makes sense to
>> hard-require udmabuf at the start of Qemu. It's much better to fail
>> resource creation dynamically.
>
> Disagree. When virgl/venus is enabled, then yes, qemu would let
> virglrenderer manage resources and I'm ok with whatever requirements
> virglrenderer has. When qemu manages resources by itself udmabuf is
> a hard requirement for blob support though.
Let's try to relax the udmabuf requirement only for blob=on,virgl=on.
I'll update this patch.
Thank you very much for the review!
--
Best regards,
Dmitry
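Summarizing the outcome the thread converges on, the start-up check could follow roughly this policy. This is only a sketch of the decision logic being agreed above; the function name and shape are hypothetical, not QEMU code:

```python
def udmabuf_hard_required(blob: bool, virgl: bool) -> bool:
    """Should QEMU insist on a working /dev/udmabuf at start-up?

    Sketch of the policy discussed in this thread, not actual QEMU code.
    """
    if not blob:
        # No blob resources at all: udmabuf is never touched.
        return False
    if virgl:
        # blob=on,virgl=on: virglrenderer manages the resources, so the
        # requirement can be relaxed (the plan for the updated patch).
        return False
    # blob=on,virgl=off: QEMU itself backs BLOB_MEM_GUEST resources with
    # udmabuf, so it stays a hard requirement (Gerd's point).
    return True
```

Usage: `udmabuf_hard_required(blob=True, virgl=False)` is the one configuration that keeps the hard start-up check.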
* RE: [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled
2022-09-27 10:44 ` Dmitry Osipenko
@ 2022-09-27 21:57 ` Kasireddy, Vivek
0 siblings, 0 replies; 11+ messages in thread
From: Kasireddy, Vivek @ 2022-09-27 21:57 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: Gerd Hoffmann, Antonio Caggiano, qemu-devel, gert.wollny,
Michael S. Tsirkin
Hi Dmitry,
>
> On 9/27/22 11:32, Gerd Hoffmann wrote:
> > On Mon, Sep 26, 2022 at 09:32:40PM +0300, Dmitry Osipenko wrote:
> >> On 9/23/22 15:32, Gerd Hoffmann wrote:
> >>> On Tue, Sep 13, 2022 at 12:50:22PM +0200, Antonio Caggiano wrote:
> >>>> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >>>>
> >>>> Host blobs don't need udmabuf, it's only needed by guest blobs. The host
> >>>> blobs are utilized by the Mesa virgl driver when persistent memory mapping
> >>>> is needed by a GL buffer, otherwise virgl driver doesn't use blobs.
> >>>> Persistent mapping support bumps GL version from 4.3 to 4.5 in guest.
> >>>> Relax the udmabuf requirement.
> >>>
> >>> What about blob=on,virgl=off?
> >>>
> >>> In that case qemu manages the resources and continued to require
> >>> udmabuf.
> >>
> >> The udmabuf is used only by the blob resource-creation command in Qemu.
> >> I couldn't find when we could hit that udmabuf code path in Qemu because
> >> BLOB_MEM_GUEST resource type is used only by crosvm+Venus when crosvm
> >> uses a dedicated render-server for virglrenderer.
> >
> > Recent enough linux guest driver will use BLOB_MEM_GUEST resources
> > with blob=on + virgl=off
>
> I reproduced this case today using "-device
> virtio-gpu-device,blob=true". You're right, it doesn't work without udmabuf.
>
> [ 8.369306] virtio_gpu_dequeue_ctrl_func: 90 callbacks suppressed
> [ 8.369311] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1200 (command 0x105)
> [ 8.371848] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1205 (command 0x104)
> [ 8.373085] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1200 (command 0x105)
> [ 8.376273] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1205 (command 0x104)
> [ 8.416972] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1200 (command 0x105)
> [ 8.418841] [drm:virtio_gpu_dequeue_ctrl_func] *ERROR* response 0x1205 (command 0x104)
>
> I see now why you want to keep the udmabuf requirement for
> blob=on,virgl=off.
>
>
> >> - /dev/udmabuf isn't accessible by normal user
> >> - udmabuf driver isn't shipped by all of the popular Linux distros,
> >> for example Debian doesn't ship it
> >
> > That's why blob resources are off by default.
> >
> >> Because of all of the above, I don't think it makes sense to
> >> hard-require udmabuf at the start of Qemu. It's much better to fail
> >> resource creation dynamically.
> >
> > Disagree. When virgl/venus is enabled, then yes, qemu would let
> > virglrenderer manage resources and I'm ok with whatever requirements
> > virglrenderer has. When qemu manages resources by itself udmabuf is
> > a hard requirement for blob support though.
>
> Let's try to relax the udmabuf requirement only for blob=on,virgl=on.
> I'll update this patch.
[Kasireddy, Vivek] In addition to what Gerd mentioned and what you have
discovered, I wanted to briefly add that we use Udmabuf/Guest Blobs for
Headless GPU passthrough use-cases (blob=on and virgl=off). Here are some
more details about our use-case:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592#note_851314
Thanks,
Vivek
>
> Thank you very much for the review!
>
> --
> Best regards,
> Dmitry
>
end of thread, other threads:[~2022-09-27 21:59 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-13 10:50 [PATCH v2 0/4] virtio-gpu: Blob resources Antonio Caggiano
2022-09-13 10:50 ` [PATCH v2 1/4] virtio: Add shared memory capability Antonio Caggiano
2022-09-13 13:48 ` Bin Meng
2022-09-13 10:50 ` [PATCH v2 2/4] virtio-gpu: hostmem Antonio Caggiano
2022-09-13 10:50 ` [PATCH v2 3/4] virtio-gpu: Handle resource blob commands Antonio Caggiano
2022-09-13 10:50 ` [PATCH v2 4/4] virtio-gpu: Don't require udmabuf when blob support is enabled Antonio Caggiano
2022-09-23 12:32 ` Gerd Hoffmann
2022-09-26 18:32 ` Dmitry Osipenko
2022-09-27 8:32 ` Gerd Hoffmann
2022-09-27 10:44 ` Dmitry Osipenko
2022-09-27 21:57 ` Kasireddy, Vivek