* [RFC 0/7] drm/virtio: Import scanout buffers from other devices
@ 2024-03-28  8:32 Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 1/7] drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd Vivek Kasireddy
                   ` (6 more replies)
  0 siblings, 7 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel
  Cc: Vivek Kasireddy, Gerd Hoffmann, Dongwon Kim, Daniel Vetter,
	Christian Koenig, Dmitry Osipenko, Rob Clark,
	Thomas Hellström, Oded Gabbay, Michal Wajdeczko,
	Michael Tretter

Having virtio-gpu import scanout buffers (via prime) from other
devices means that we'd be adding a head to headless GPUs assigned
to a Guest VM, or additional heads to regular GPU devices that are
passed through to the Guest. In these cases, the Guest compositor
can render into the scanout buffer using the primary GPU and have
the secondary GPU (virtio-gpu) import it for display purposes.

The main advantage of this is that the imported scanout buffer can
either be displayed locally on the Host (e.g., using Qemu + GTK UI)
or encoded and streamed to a remote client (e.g., Qemu + Spice UI).
Note that since Qemu uses the udmabuf driver, no copies of the
scanout buffer are made as it is displayed. This should be possible
even when the buffer resides in device memory such as VRAM.

The specific use-case that can be supported with this series is
running Weston or other guest compositors with the
"additional-devices" feature (./weston --drm-device=card1
--additional-devices=card0).
More info about this feature can be found at:
https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/736

In the above scenario, card1 could be a dGPU or an iGPU and card0
would be virtio-gpu in KMS-only mode. However, the case where this
patch series could be particularly useful is when card1 is a GPU VF
that needs to share its scanout buffer (in a zero-copy way) with the
GPU PF on the Host. Or, it can also be useful when the scanout buffer
needs to be shared between any two GPU devices (assuming one of them
is assigned to a Guest VM) as long as they are P2P DMA compatible.

As part of the import, the virtio-gpu driver shares the DMA
addresses and lengths with Qemu, which then determines whether the
memory region they belong to is owned by a PCI device or is part
of the Guest's system RAM. If it is the former, it identifies the
devid (or bdf) and bar and provides this info (along with offsets
and sizes) to the udmabuf driver. In the latter case, it provides
the memfd instead of the devid and bar. The udmabuf driver then
creates a dmabuf using this info that Qemu shares with Spice for
encoding via GStreamer.

Note that the virtio-gpu driver registers a move_notify() callback
to track location changes associated with the scanout buffer and
sends attach/detach backing cmds to Qemu when appropriate.
Synchronization (that is, ensuring that the Guest and Host are not
using the scanout buffer at the same time) is achieved by pinning/
unpinning the dmabuf as part of plane update and by using a fence
in the resource_flush cmd.

This series is available at:
https://gitlab.freedesktop.org/Vivek/drm-tip/-/commits/virtgpu_import_rfc

along with additional patches for Qemu and Spice here:
https://gitlab.freedesktop.org/Vivek/qemu/-/commits/virtgpu_dmabuf_pcidev
https://gitlab.freedesktop.org/Vivek/spice/-/commits/encode_dmabuf_v4 

Patchset overview:

Patch 1:   Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd
Patch 2-3: Helpers to initialize, import, and free the imported object
Patch 4-5: Import and use buffers from other devices for scanout
Patch 6-7: Have udmabuf driver create dmabuf from PCI bars for P2P DMA

This series is tested using the following method:
- Run Qemu with the following relevant options:
  qemu-system-x86_64 -m 4096m ....
  -device vfio-pci,host=0000:03:00.0
  -device virtio-vga,max_outputs=1,blob=true,xres=1920,yres=1080
  -spice port=3001,gl=on,disable-ticketing=on,preferred-codec=gstreamer:h264
  -object memory-backend-memfd,id=mem1,size=4096M
  -machine memory-backend=mem1 ...
- Run upstream Weston with the following options in the Guest VM:
  ./weston --drm-device=card1 --additional-devices=card0

where card1 is a DG2 dGPU (passed through and using the xe driver in
the Guest VM), card0 is virtio-gpu, and the Host is using an RPL iGPU.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Dongwon Kim <dongwon.kim@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Rob Clark <robdclark@chromium.org>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Oded Gabbay <ogabbay@kernel.org>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michael Tretter <m.tretter@pengutronix.de>

Vivek Kasireddy (7):
  drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd
  drm/virtio: Add a helper to map and note the dma addrs and lengths
  drm/virtio: Add helpers to initialize and free the imported object
  drm/virtio: Import prime buffers from other devices as guest blobs
  drm/virtio: Ensure that bo's backing store is valid while updating
    plane
  udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions
  udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl

 drivers/dma-buf/udmabuf.c              | 122 ++++++++++++++++--
 drivers/gpu/drm/virtio/virtgpu_drv.h   |   8 ++
 drivers/gpu/drm/virtio/virtgpu_plane.c |  56 ++++++++-
 drivers/gpu/drm/virtio/virtgpu_prime.c | 167 ++++++++++++++++++++++++-
 drivers/gpu/drm/virtio/virtgpu_vq.c    |  15 +++
 include/uapi/linux/udmabuf.h           |  11 +-
 6 files changed, 368 insertions(+), 11 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [RFC 1/7] drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
@ 2024-03-28  8:32 ` Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 2/7] drm/virtio: Add a helper to map and note the dma addrs and lengths Vivek Kasireddy
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

This cmd is useful to let the VMM (i.e., Qemu) know that the backing
store associated with a resource is no longer valid, so that the VMM
can perform any cleanup or unmap operations.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h |  2 ++
 drivers/gpu/drm/virtio/virtgpu_vq.c  | 15 +++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index bb7d86a0c6a1..7347835e4fbe 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -349,6 +349,8 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 			      struct virtio_gpu_object *obj,
 			      struct virtio_gpu_mem_entry *ents,
 			      unsigned int nents);
+void virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev,
+			      uint32_t resource_id);
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 			    struct virtio_gpu_output *output);
 int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index b1a00c0c25a7..17e2e7ab231a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -645,6 +645,21 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev,
 	virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
 }
 
+void virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev,
+					    uint32_t resource_id)
+{
+	struct virtio_gpu_resource_detach_backing *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING);
+	cmd_p->resource_id = cpu_to_le32(resource_id);
+
+	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+}
+
 static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev,
 					       struct virtio_gpu_vbuffer *vbuf)
 {
-- 
2.43.0



* [RFC 2/7] drm/virtio: Add a helper to map and note the dma addrs and lengths
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 1/7] drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd Vivek Kasireddy
@ 2024-03-28  8:32 ` Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 3/7] drm/virtio: Add helpers to initialize and free the imported object Vivek Kasireddy
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

This helper would be used when first initializing the object as
part of import and also when updating the plane, where we need to
ensure that the imported object's backing is valid.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  6 ++++
 drivers/gpu/drm/virtio/virtgpu_prime.c | 44 ++++++++++++++++++++++++++
 2 files changed, 50 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 7347835e4fbe..ca4cb166b509 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -89,9 +89,11 @@ struct virtio_gpu_object_params {
 
 struct virtio_gpu_object {
 	struct drm_gem_shmem_object base;
+	struct sg_table *sgt;
 	uint32_t hw_res_handle;
 	bool dumb;
 	bool created;
+	bool has_backing;
 	bool host3d_blob, guest_blob;
 	uint32_t blob_mem, blob_flags;
 
@@ -470,6 +472,10 @@ struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
+long virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
+				unsigned int *nents,
+				struct virtio_gpu_object *bo,
+				struct dma_buf_attachment *attach);
 
 /* virtgpu_debugfs.c */
 void virtio_gpu_debugfs_init(struct drm_minor *minor);
diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 44425f20d91a..2a90df39c5de 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -27,6 +27,8 @@
 
 #include "virtgpu_drv.h"
 
+MODULE_IMPORT_NS(DMA_BUF);
+
 static int virtgpu_virtio_get_uuid(struct dma_buf *buf,
 				   uuid_t *uuid)
 {
@@ -142,6 +144,48 @@ struct dma_buf *virtgpu_gem_prime_export(struct drm_gem_object *obj,
 	return buf;
 }
 
+long virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
+				unsigned int *nents,
+				struct virtio_gpu_object *bo,
+				struct dma_buf_attachment *attach)
+{
+	struct scatterlist *sl;
+	struct sg_table *sgt;
+	long i, ret;
+
+	dma_resv_assert_held(attach->dmabuf->resv);
+
+	ret = dma_resv_wait_timeout(attach->dmabuf->resv,
+				    DMA_RESV_USAGE_KERNEL,
+				    false, MAX_SCHEDULE_TIMEOUT);
+	if (ret < 0)
+		return ret;
+
+	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sgt))
+		return PTR_ERR(sgt);
+
+	*ents = kvmalloc_array(sgt->nents,
+			       sizeof(struct virtio_gpu_mem_entry),
+			       GFP_KERNEL);
+	if (!(*ents)) {
+		dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+		return -ENOMEM;
+	}
+
+	*nents = sgt->nents;
+	for_each_sgtable_dma_sg(sgt, sl, i) {
+		(*ents)[i].addr = cpu_to_le64(sg_dma_address(sl));
+		(*ents)[i].length = cpu_to_le32(sg_dma_len(sl));
+		(*ents)[i].padding = 0;
+	}
+
+	bo->sgt = sgt;
+	bo->has_backing = true;
+
+	return 0;
+}
+
 struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
 						struct dma_buf *buf)
 {
-- 
2.43.0



* [RFC 3/7] drm/virtio: Add helpers to initialize and free the imported object
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 1/7] drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 2/7] drm/virtio: Add a helper to map and note the dma addrs and lengths Vivek Kasireddy
@ 2024-03-28  8:32 ` Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 4/7] drm/virtio: Import prime buffers from other devices as guest blobs Vivek Kasireddy
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

The imported object can be considered a guest blob resource;
therefore, we use the create_blob cmd while creating it. These
helpers are used in the next patch, which does the actual import.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_prime.c | 69 ++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 2a90df39c5de..1e87dbc9a897 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -186,6 +186,75 @@ long virtgpu_dma_buf_import_sgt(struct virtio_gpu_mem_entry **ents,
 	return 0;
 }
 
+static void virtgpu_dma_buf_free_obj(struct drm_gem_object *obj)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+	struct dma_buf_attachment *attach = obj->import_attach;
+
+	if (bo->created) {
+		virtio_gpu_cmd_unref_resource(vgdev, bo);
+		virtio_gpu_notify(vgdev);
+	}
+
+	if (attach) {
+		dma_buf_detach(attach->dmabuf, attach);
+		dma_buf_put(attach->dmabuf);
+	}
+
+	drm_gem_object_release(&bo->base.base);
+	kfree(bo);
+}
+
+static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
+				    struct virtio_gpu_object *bo,
+				    struct dma_buf_attachment *attach)
+{
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct virtio_gpu_object_params params = { 0 };
+	struct dma_resv *resv = attach->dmabuf->resv;
+	struct virtio_gpu_mem_entry *ents = NULL;
+	unsigned int nents;
+	int ret;
+
+	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
+	if (ret) {
+		virtgpu_dma_buf_free_obj(&bo->base.base);
+		return ret;
+	}
+
+	dma_resv_lock(resv, NULL);
+
+	ret = dma_buf_pin(attach);
+	if (ret)
+		goto err_pin;
+
+	ret = virtgpu_dma_buf_import_sgt(&ents, &nents, bo, attach);
+	if (ret)
+		goto err_import;
+
+	bo->guest_blob = true;
+	params.blob = true;
+	params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
+	params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
+	params.size = attach->dmabuf->size;
+
+	virtio_gpu_cmd_resource_create_blob(vgdev, bo, &params,
+					    ents, nents);
+	dma_buf_unpin(attach);
+	dma_resv_unlock(resv);
+
+	return 0;
+
+err_import:
+	dma_buf_unpin(attach);
+err_pin:
+	dma_resv_unlock(resv);
+	ida_free(&vgdev->resource_ida, bo->hw_res_handle - 1);
+	virtgpu_dma_buf_free_obj(&bo->base.base);
+	return ret;
+}
+
 struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
 						struct dma_buf *buf)
 {
-- 
2.43.0



* [RFC 4/7] drm/virtio: Import prime buffers from other devices as guest blobs
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
                   ` (2 preceding siblings ...)
  2024-03-28  8:32 ` [RFC 3/7] drm/virtio: Add helpers to initialize and free the imported object Vivek Kasireddy
@ 2024-03-28  8:32 ` Vivek Kasireddy
  2024-03-28  8:32 ` [RFC 5/7] drm/virtio: Ensure that bo's backing store is valid while updating plane Vivek Kasireddy
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

By importing scanout buffers from other devices, we should be able
to use the virtio-gpu driver in KMS-only mode. Note that we attach
dynamically and register a move_notify() callback so that we can
let the VMM know of any location changes associated with the backing
store of the imported object by sending the detach_backing cmd.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_prime.c | 54 +++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 1e87dbc9a897..c65dacc1b2b5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -255,10 +255,36 @@ static int virtgpu_dma_buf_init_obj(struct drm_device *dev,
 	return ret;
 }
 
+static const struct drm_gem_object_funcs virtgpu_gem_dma_buf_funcs = {
+	.free = virtgpu_dma_buf_free_obj,
+};
+
+static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
+{
+	struct drm_gem_object *obj = attach->importer_priv;
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+
+	if (bo->created) {
+		virtio_gpu_cmd_resource_detach_backing(vgdev,
+						       bo->hw_res_handle);
+		bo->has_backing = false;
+	}
+}
+
+static const struct dma_buf_attach_ops virtgpu_dma_buf_attach_ops = {
+	.allow_peer2peer = true,
+	.move_notify = virtgpu_dma_buf_move_notify
+};
+
 struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
 						struct dma_buf *buf)
 {
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct dma_buf_attachment *attach;
+	struct virtio_gpu_object *bo;
 	struct drm_gem_object *obj;
+	int ret;
 
 	if (buf->ops == &virtgpu_dmabuf_ops.ops) {
 		obj = buf->priv;
@@ -272,7 +298,32 @@ struct drm_gem_object *virtgpu_gem_prime_import(struct drm_device *dev,
 		}
 	}
 
-	return drm_gem_prime_import(dev, buf);
+	if (!vgdev->has_resource_blob || vgdev->has_virgl_3d)
+		return drm_gem_prime_import(dev, buf);
+
+	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+	if (!bo)
+		return ERR_PTR(-ENOMEM);
+
+	obj = &bo->base.base;
+	obj->funcs = &virtgpu_gem_dma_buf_funcs;
+	drm_gem_private_object_init(dev, obj, buf->size);
+
+	attach = dma_buf_dynamic_attach(buf, dev->dev,
+					&virtgpu_dma_buf_attach_ops, obj);
+	if (IS_ERR(attach)) {
+		kfree(bo);
+		return ERR_CAST(attach);
+	}
+
+	obj->import_attach = attach;
+	get_dma_buf(buf);
+
+	ret = virtgpu_dma_buf_init_obj(dev, bo, attach);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
+	return obj;
 }
 
 struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
@@ -281,3 +332,4 @@ struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
 {
 	return ERR_PTR(-ENODEV);
 }
+
-- 
2.43.0



* [RFC 5/7] drm/virtio: Ensure that bo's backing store is valid while updating plane
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
                   ` (3 preceding siblings ...)
  2024-03-28  8:32 ` [RFC 4/7] drm/virtio: Import prime buffers from other devices as guest blobs Vivek Kasireddy
@ 2024-03-28  8:32 ` Vivek Kasireddy
  2024-04-26  6:06   ` Weifeng Liu
  2024-03-28  8:32 ` [RFC 6/7] udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions Vivek Kasireddy
  2024-03-28  8:33 ` [RFC 7/7] udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl Vivek Kasireddy
  6 siblings, 1 reply; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

To make sure that the imported bo's backing store is valid, we first
pin the associated dmabuf, import the sgt if need be and then unpin
it after the update is complete. Note that we pin/unpin the dmabuf
even when the backing store is valid to ensure that it does not move
when the host update (resource_flush) is in progress.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 56 +++++++++++++++++++++++++-
 1 file changed, 55 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a72a2dbda031..3ccf88f9addc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -26,6 +26,7 @@
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fourcc.h>
+#include <linux/virtio_dma_buf.h>
 
 #include "virtgpu_drv.h"
 
@@ -131,6 +132,45 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev,
 					   objs, NULL);
 }
 
+static bool virtio_gpu_update_dmabuf_bo(struct virtio_gpu_device *vgdev,
+					struct drm_gem_object *obj)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct dma_buf_attachment *attach = obj->import_attach;
+	struct dma_resv *resv = attach->dmabuf->resv;
+	struct virtio_gpu_mem_entry *ents = NULL;
+	unsigned int nents;
+	int ret;
+
+	dma_resv_lock(resv, NULL);
+
+	ret = dma_buf_pin(attach);
+	if (ret) {
+		dma_resv_unlock(resv);
+		return false;
+	}
+
+	if (!bo->has_backing) {
+		if (bo->sgt)
+			dma_buf_unmap_attachment(attach,
+						 bo->sgt,
+						 DMA_BIDIRECTIONAL);
+
+		ret = virtgpu_dma_buf_import_sgt(&ents, &nents,
+						 bo, attach);
+		if (ret)
+			goto err_import;
+
+		virtio_gpu_object_attach(vgdev, bo, ents, nents);
+	}
+	return true;
+
+err_import:
+	dma_buf_unpin(attach);
+	dma_resv_unlock(resv);
+	return false;
+}
+
 static void virtio_gpu_resource_flush(struct drm_plane *plane,
 				      uint32_t x, uint32_t y,
 				      uint32_t width, uint32_t height)
@@ -174,7 +214,9 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_output *output = NULL;
 	struct virtio_gpu_object *bo;
+	struct drm_gem_object *obj;
 	struct drm_rect rect;
+	bool updated = false;
 
 	if (plane->state->crtc)
 		output = drm_crtc_to_virtio_gpu_output(plane->state->crtc);
@@ -196,10 +238,17 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 	if (!drm_atomic_helper_damage_merged(old_state, plane->state, &rect))
 		return;
 
-	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
+	obj = plane->state->fb->obj[0];
+	bo = gem_to_virtio_gpu_obj(obj);
 	if (bo->dumb)
 		virtio_gpu_update_dumb_bo(vgdev, plane->state, &rect);
 
+	if (obj->import_attach) {
+		updated = virtio_gpu_update_dmabuf_bo(vgdev, obj);
+		if (!updated)
+			return;
+	}
+
 	if (plane->state->fb != old_state->fb ||
 	    plane->state->src_w != old_state->src_w ||
 	    plane->state->src_h != old_state->src_h ||
@@ -239,6 +288,11 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 				  rect.y1,
 				  rect.x2 - rect.x1,
 				  rect.y2 - rect.y1);
+
+	if (obj->import_attach && updated) {
+		dma_buf_unpin(obj->import_attach);
+		dma_resv_unlock(obj->import_attach->dmabuf->resv);
+	}
 }
 
 static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
-- 
2.43.0



* [RFC 6/7] udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
                   ` (4 preceding siblings ...)
  2024-03-28  8:32 ` [RFC 5/7] drm/virtio: Ensure that bo's backing store is valid while updating plane Vivek Kasireddy
@ 2024-03-28  8:32 ` Vivek Kasireddy
  2024-03-28  8:33 ` [RFC 7/7] udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl Vivek Kasireddy
  6 siblings, 0 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:32 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

This new ioctl can be used by a VMM such as Qemu or other userspace
applications to create a dmabuf from a PCI device's memory regions.
The PCI device id that the userspace app is required to provide
needs to be encoded in the format specified by the following macro
(defined in include/linux/pci.h):
#define PCI_DEVID(bus, devfn)	((((u16)(bus)) << 8) | (devfn))

where devfn is defined (in include/uapi/linux/pci.h) as
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))

In addition to the devid, userspace needs to include the offsets,
sizes, and the bar number as part of this request.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 include/uapi/linux/udmabuf.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/udmabuf.h b/include/uapi/linux/udmabuf.h
index 46b6532ed855..16fe41fdc4b9 100644
--- a/include/uapi/linux/udmabuf.h
+++ b/include/uapi/linux/udmabuf.h
@@ -15,7 +15,15 @@ struct udmabuf_create {
 };
 
 struct udmabuf_create_item {
-	__u32 memfd;
+	union {
+		struct {
+			__u32 memfd;
+		};
+		struct {
+			__u16 devid;
+			__u16 bar;
+		};
+	};
 	__u32 __pad;
 	__u64 offset;
 	__u64 size;
@@ -29,5 +37,6 @@ struct udmabuf_create_list {
 
 #define UDMABUF_CREATE       _IOW('u', 0x42, struct udmabuf_create)
 #define UDMABUF_CREATE_LIST  _IOW('u', 0x43, struct udmabuf_create_list)
+#define UDMABUF_CREATE_LIST_FOR_PCIDEV  _IOW('u', 0x44, struct udmabuf_create_list)
 
 #endif /* _UAPI_LINUX_UDMABUF_H */
-- 
2.43.0



* [RFC 7/7] udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl
  2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
                   ` (5 preceding siblings ...)
  2024-03-28  8:32 ` [RFC 6/7] udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions Vivek Kasireddy
@ 2024-03-28  8:33 ` Vivek Kasireddy
  6 siblings, 0 replies; 9+ messages in thread
From: Vivek Kasireddy @ 2024-03-28  8:33 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

By implementing this request, the udmabuf driver would be able
to support creating a dmabuf from a PCI device's bar region. This
would facilitate P2P DMA operations between any two PCI devices
as long as they are compatible.

Based on the information (devid, bar) provided by the VMM, once
the PCI device known as the provider is identified, we create a
page pool associated with the requested bar region by calling
pci_p2pdma_add_resource(). We then populate the ubuf->pages[]
array with pages from the pool that would eventually be included
in an sgt shared with the importers.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/dma-buf/udmabuf.c | 122 +++++++++++++++++++++++++++++++++++---
 1 file changed, 114 insertions(+), 8 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 274defd3fa3e..7355451ed337 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -9,6 +9,7 @@
 #include <linux/memfd.h>
 #include <linux/miscdevice.h>
 #include <linux/module.h>
+#include <linux/pci-p2pdma.h>
 #include <linux/shmem_fs.h>
 #include <linux/hugetlb.h>
 #include <linux/slab.h>
@@ -27,6 +28,7 @@ MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is
 struct udmabuf {
 	pgoff_t pagecount;
 	struct page **pages;
+	struct pci_dev *pdev;
 	struct sg_table *sg;
 	struct miscdevice *device;
 	pgoff_t *offsets;
@@ -129,9 +131,28 @@ static void put_sg_table(struct device *dev, struct sg_table *sg,
 	kfree(sg);
 }
 
+static int check_p2p_support(struct dma_buf_attachment *attach)
+{
+	struct udmabuf *ubuf = attach->dmabuf->priv;
+	struct pci_dev *provider = ubuf->pdev;
+	struct device *client = attach->dev;
+	int ret = -1;
+
+	if (!provider)
+		return 0;
+
+	if (attach->peer2peer)
+		ret = pci_p2pdma_distance(provider, client, true);
+
+	return ret < 0 ? ret : 0;
+}
+
 static struct sg_table *map_udmabuf(struct dma_buf_attachment *at,
 				    enum dma_data_direction direction)
 {
+	if (check_p2p_support(at) < 0)
+		return ERR_PTR(-EOPNOTSUPP);
+
 	return get_sg_table(at->dev, at->dmabuf, direction);
 }
 
@@ -151,8 +172,15 @@ static void release_udmabuf(struct dma_buf *buf)
 	if (ubuf->sg)
 		put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
 
-	for (pg = 0; pg < ubuf->pagecount; pg++)
-		put_page(ubuf->pages[pg]);
+	for (pg = 0; pg < ubuf->pagecount; pg++) {
+		if (ubuf->pdev)
+			pci_free_p2pmem(ubuf->pdev,
+					page_to_virt(ubuf->pages[pg]),
+					PAGE_SIZE);
+		else
+			put_page(ubuf->pages[pg]);
+	}
+
 	kfree(ubuf->offsets);
 	kfree(ubuf->pages);
 	kfree(ubuf);
@@ -269,9 +297,74 @@ static int handle_shmem_pages(struct udmabuf *ubuf, struct file *memfd,
 	return 0;
 }
 
+static int handle_pcidev_pages(struct udmabuf *ubuf,
+			       struct udmabuf_create_list *head,
+			       struct udmabuf_create_item *list)
+{
+	struct pci_dev *pdev = NULL;
+	resource_size_t bar_size;
+	pgoff_t pgbuf = 0;
+	struct page *page;
+	int i, ret;
+	size_t size;
+	void *addr;
+
+	for (i = 0; i < head->count; i++) {
+		if (!ubuf->pdev) {
+			pdev = pci_get_domain_bus_and_slot(0,
+						PCI_BUS_NUM(list[i].devid),
+						list[i].devid & 0xff);
+			if (!pdev) {
+				ret = -ENODEV;
+				goto err;
+			}
+
+			ubuf->pdev = pdev;
+		}
+
+		bar_size = pci_resource_len(pdev, list[i].bar);
+		if (list[i].offset > bar_size ||
+		    list[i].offset + list[i].size > bar_size) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		ret = pci_p2pdma_add_resource(pdev,
+					      list[i].bar,
+					      list[i].size,
+					      list[i].offset);
+		if (ret)
+			goto err;
+
+		addr = pci_alloc_p2pmem(pdev, list[i].size);
+		if (!addr) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		size = 0;
+		while (size < list[i].size) {
+			page = virt_to_page((unsigned long)addr + size);
+			ubuf->pages[pgbuf++] = page;
+
+			size += PAGE_SIZE;
+		}
+	}
+
+err:
+	while (pgbuf > 0 && ubuf->pages[--pgbuf])
+		pci_free_p2pmem(pdev,
+				page_to_virt(ubuf->pages[pgbuf]),
+				PAGE_SIZE);
+	if (pdev)
+		pci_dev_put(pdev);
+	return ret;
+}
+
 static long udmabuf_create(struct miscdevice *device,
 			   struct udmabuf_create_list *head,
-			   struct udmabuf_create_item *list)
+			   struct udmabuf_create_item *list,
+			   bool for_pcidev)
 {
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct file *memfd = NULL;
@@ -312,6 +405,14 @@ static long udmabuf_create(struct miscdevice *device,
 		goto err;
 	}
 
+	if (for_pcidev) {
+		ret = handle_pcidev_pages(ubuf, head, list);
+		if (ret)
+			goto err;
+
+		goto create_dmabuf;
+	}
+
 	pgbuf = 0;
 	for (i = 0; i < head->count; i++) {
 		ret = -EBADFD;
@@ -344,6 +445,7 @@ static long udmabuf_create(struct miscdevice *device,
 		memfd = NULL;
 	}
 
+create_dmabuf:
 	exp_info.ops  = &udmabuf_ops;
 	exp_info.size = ubuf->pagecount << PAGE_SHIFT;
 	exp_info.priv = ubuf;
@@ -362,7 +464,7 @@ static long udmabuf_create(struct miscdevice *device,
 	return dma_buf_fd(buf, flags);
 
 err:
-	while (pgbuf > 0)
+	while (pgbuf > 0 && !ubuf->pdev)
 		put_page(ubuf->pages[--pgbuf]);
 	if (memfd)
 		fput(memfd);
@@ -388,10 +490,11 @@ static long udmabuf_ioctl_create(struct file *filp, unsigned long arg)
 	list.offset = create.offset;
 	list.size   = create.size;
 
-	return udmabuf_create(filp->private_data, &head, &list);
+	return udmabuf_create(filp->private_data, &head, &list, false);
 }
 
-static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg)
+static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg,
+				      bool for_pcidev)
 {
 	struct udmabuf_create_list head;
 	struct udmabuf_create_item *list;
@@ -407,7 +510,7 @@ static long udmabuf_ioctl_create_list(struct file *filp, unsigned long arg)
 	if (IS_ERR(list))
 		return PTR_ERR(list);
 
-	ret = udmabuf_create(filp->private_data, &head, list);
+	ret = udmabuf_create(filp->private_data, &head, list, for_pcidev);
 	kfree(list);
 	return ret;
 }
@@ -422,7 +525,10 @@ static long udmabuf_ioctl(struct file *filp, unsigned int ioctl,
 		ret = udmabuf_ioctl_create(filp, arg);
 		break;
 	case UDMABUF_CREATE_LIST:
-		ret = udmabuf_ioctl_create_list(filp, arg);
+		ret = udmabuf_ioctl_create_list(filp, arg, false);
+		break;
+	case UDMABUF_CREATE_LIST_FOR_PCIDEV:
+		ret = udmabuf_ioctl_create_list(filp, arg, true);
 		break;
 	default:
 		ret = -ENOTTY;
-- 
2.43.0



* Re: [RFC 5/7] drm/virtio: Ensure that bo's backing store is valid while updating plane
  2024-03-28  8:32 ` [RFC 5/7] drm/virtio: Ensure that bo's backing store is valid while updating plane Vivek Kasireddy
@ 2024-04-26  6:06   ` Weifeng Liu
  0 siblings, 0 replies; 9+ messages in thread
From: Weifeng Liu @ 2024-04-26  6:06 UTC (permalink / raw)
  To: Vivek Kasireddy, dri-devel; +Cc: Gerd Hoffmann

On Thu, 2024-03-28 at 01:32 -0700, Vivek Kasireddy wrote:
> To make sure that the imported bo's backing store is valid, we first
> pin the associated dmabuf, import the sgt if need be and then unpin
> it after the update is complete. Note that we pin/unpin the dmabuf
> even when the backing store is valid to ensure that it does not move
> when the host update (resource_flush) is in progress.
> 
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_plane.c | 56 +++++++++++++++++++++++++-
>  1 file changed, 55 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index a72a2dbda031..3ccf88f9addc 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -26,6 +26,7 @@
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_damage_helper.h>
>  #include <drm/drm_fourcc.h>
> +#include <linux/virtio_dma_buf.h>
>  
>  #include "virtgpu_drv.h"
>  
> @@ -131,6 +132,45 @@ static void virtio_gpu_update_dumb_bo(struct virtio_gpu_device *vgdev,
>  					   objs, NULL);
>  }
>  
> +static bool virtio_gpu_update_dmabuf_bo(struct virtio_gpu_device *vgdev,
> +					struct drm_gem_object *obj)
> +{
> +	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> +	struct dma_buf_attachment *attach = obj->import_attach;
> +	struct dma_resv *resv = attach->dmabuf->resv;
> +	struct virtio_gpu_mem_entry *ents = NULL;
> +	unsigned int nents;
> +	int ret;
> +
> +	dma_resv_lock(resv, NULL);
> +
> +	ret = dma_buf_pin(attach);
> +	if (ret) {
> +		dma_resv_unlock(resv);
> +		return false;
> +	}
> +
> +	if (!bo->has_backing) {
> +		if (bo->sgt)
> +			dma_buf_unmap_attachment(attach,
> +						 bo->sgt,
> +						 DMA_BIDIRECTIONAL);
> +
> +		ret = virtgpu_dma_buf_import_sgt(&ents, &nents,
> +						 bo, attach);
> +		if (ret)
> +			goto err_import;
> +
> +		virtio_gpu_object_attach(vgdev, bo, ents, nents);
> +	}
> +	return true;
> +
> +err_import:
> +	dma_buf_unpin(attach);
> +	dma_resv_unlock(resv);
> +	return false;
> +}
> +
>  static void virtio_gpu_resource_flush(struct drm_plane *plane,
>  				      uint32_t x, uint32_t y,
>  				      uint32_t width, uint32_t height)
> @@ -174,7 +214,9 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
>  	struct virtio_gpu_device *vgdev = dev->dev_private;
>  	struct virtio_gpu_output *output = NULL;
>  	struct virtio_gpu_object *bo;
> +	struct drm_gem_object *obj;
>  	struct drm_rect rect;
> +	bool updated = false;
>  
>  	if (plane->state->crtc)
>  		output = drm_crtc_to_virtio_gpu_output(plane->state->crtc);
> @@ -196,10 +238,17 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
>  	if (!drm_atomic_helper_damage_merged(old_state, plane->state, &rect))
>  		return;
>  
> -	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
> +	obj = plane->state->fb->obj[0];
> +	bo = gem_to_virtio_gpu_obj(obj);
>  	if (bo->dumb)
>  		virtio_gpu_update_dumb_bo(vgdev, plane->state, &rect);
>  
> +	if (obj->import_attach) {
> +		updated = virtio_gpu_update_dmabuf_bo(vgdev, obj);
Hi Vivek,

It's possible that objects imported from other devices are used in
ways other than being scanned out (e.g., they might act as texture
resources in 3D contexts in the virtio-GPU back-end).  Thus I think we
should find a better way of updating DMA-BUF objects, such as doing so
in the move_notify callback.

BTW, this patch set is very useful for implementing a virtual display
in the SR-IOV case, especially as it supports sharing device-local
memory between the host and guest.  Thanks for your work, and I really
hope it gets merged one day!

Best regards,
-Weifeng
> +		if (!updated)
> +			return;
> +	}
> +
>  	if (plane->state->fb != old_state->fb ||
>  	    plane->state->src_w != old_state->src_w ||
>  	    plane->state->src_h != old_state->src_h ||
> @@ -239,6 +288,11 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
>  				  rect.y1,
>  				  rect.x2 - rect.x1,
>  				  rect.y2 - rect.y1);
> +
> +	if (obj->import_attach && updated) {
> +		dma_buf_unpin(obj->import_attach);
> +		dma_resv_unlock(obj->import_attach->dmabuf->resv);
> +	}
>  }
>  
>  static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,



Thread overview: 9+ messages
2024-03-28  8:32 [RFC 0/7] drm/virtio: Import scanout buffers from other devices Vivek Kasireddy
2024-03-28  8:32 ` [RFC 1/7] drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd Vivek Kasireddy
2024-03-28  8:32 ` [RFC 2/7] drm/virtio: Add a helper to map and note the dma addrs and lengths Vivek Kasireddy
2024-03-28  8:32 ` [RFC 3/7] drm/virtio: Add helpers to initialize and free the imported object Vivek Kasireddy
2024-03-28  8:32 ` [RFC 4/7] drm/virtio: Import prime buffers from other devices as guest blobs Vivek Kasireddy
2024-03-28  8:32 ` [RFC 5/7] drm/virtio: Ensure that bo's backing store is valid while updating plane Vivek Kasireddy
2024-04-26  6:06   ` Weifeng Liu
2024-03-28  8:32 ` [RFC 6/7] udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions Vivek Kasireddy
2024-03-28  8:33 ` [RFC 7/7] udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl Vivek Kasireddy
