dri-devel.lists.freedesktop.org archive mirror
* [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
@ 2023-04-16 11:52 Dmitry Osipenko
  2023-04-16 11:52 ` [PATCH v6 1/3] drm/virtio: Refactor and optimize job submission code path Dmitry Osipenko
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-04-16 11:52 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

We have multiple Vulkan context types that are awaiting the addition of
sync object DRM UAPI support to the VirtIO-GPU kernel driver:

 1. Venus context
 2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)

Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
generic fencing implementation that we want to utilize.

This patch adds initial sync object support. It lays the foundation for
further fencing improvements. Later on we will want to extend the
VirtIO-GPU fencing API with passing of fence IDs to the host for waiting;
that will be a new additional VirtIO-GPU IOCTL, among other changes. Today
we have several VirtIO-GPU context drivers in the works that require
VirtIO-GPU to support the sync object UAPI.

The patch is heavily inspired by the sync object UAPI implementation of the
MSM driver.

Changelog:

v6: - Added zeroing out of syncobj_desc, as suggested by Emil Velikov.

    - Fixed a memleak in the error code path, spotted by Emil Velikov.

    - Switched to u32/u64 instead of uint_t. Previously I kept the
      uint_t style of virtgpu_ioctl.c, but in the end decided to change
      it because it's not proper kernel coding style after all.

    - Kept a single drm_virtgpu_execbuffer_syncobj struct for both in/out
      sync objects. There was a little concern about whether it would be
      worthwhile to have separate in/out descriptors; in practice it's
      unlikely that we will extend the descriptors in the foreseeable
      future. There is no overhead in using the same struct since we want
      to pad it to 64 bits anyway, and it shouldn't be a problem to
      separate the descriptors later on if we want to.

    - Added r-b from Emil Velikov.

v5: - Factored out dma-fence unwrap API usage into a separate patch, as
      suggested by Emil Velikov.

    - Improved and documented the job submission reorderings, as
      requested by Emil Velikov. The sync file FD is now installed after
      the job is submitted to virtio to further optimize the reordering.

    - Added a comment for the kvcalloc allocation, as requested by
      Emil Velikov.

    - The num_in/out_syncobjs fields are now set only after parsing of
      the pre/post deps completes, as requested by Emil Velikov.

v4: - Added r-b from Rob Clark to the "refactoring" patch.

    - Replaced for/while(ptr && itr) with if (ptr), as suggested by
      Rob Clark.

    - Dropped NOWARN and NORETRY GFP flags and switched syncobj patch
      to use kvmalloc.

    - Removed unused variables from syncobj patch that were borrowed by
      accident from another (upcoming) patch after one of git rebases.

v3: - Switched to using dma_fence_unwrap_for_each(), as suggested by
      Rob Clark.

    - Fixed a missing dma_fence_put() in the error code path, spotted by
      Rob Clark.

    - Removed an obsolete comment from virtio_gpu_execbuffer_ioctl(), as
      suggested by Rob Clark.

v2: - Fixed chain-fence context matching by making use of
      dma_fence_chain_contained().

    - Fixed potential uninitialized variable usage in the error code path
      of parse_post_deps(). The MSM driver had a similar issue that is
      already fixed upstream.

    - Added a new patch that refactors the job submission code path. I
      found it very difficult to add the new/upcoming host-waits feature
      because of how variables are passed around the code, and
      virtgpu_ioctl.c was also growing to an unmanageable size.

Dmitry Osipenko (3):
  drm/virtio: Refactor and optimize job submission code path
  drm/virtio: Wait for each dma-fence of in-fence array individually
  drm/virtio: Support sync objects

 drivers/gpu/drm/virtio/Makefile         |   2 +-
 drivers/gpu/drm/virtio/virtgpu_drv.c    |   3 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h    |   4 +
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 182 --------
 drivers/gpu/drm/virtio/virtgpu_submit.c | 530 ++++++++++++++++++++++++
 include/uapi/drm/virtgpu_drm.h          |  16 +-
 6 files changed, 552 insertions(+), 185 deletions(-)
 create mode 100644 drivers/gpu/drm/virtio/virtgpu_submit.c

-- 
2.39.2


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH v6 1/3] drm/virtio: Refactor and optimize job submission code path
  2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
@ 2023-04-16 11:52 ` Dmitry Osipenko
  2023-04-16 11:52 ` [PATCH v6 2/3] drm/virtio: Wait for each dma-fence of in-fence array individually Dmitry Osipenko
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-04-16 11:52 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

Move virtio_gpu_execbuffer_ioctl() into a separate virtgpu_submit.c file,
refactoring and optimizing the code along the way to ease the addition of
new features to the ioctl.

The optimization reorders the job's submission steps to shorten the code
path from the start of the ioctl to the point of pushing the job to the
virtio queue. The job's initialization is now performed before the
in-fence is awaited, and the out-fence setup is done after the job is sent
out to virtio.

Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/Makefile         |   2 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h    |   4 +
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 182 ---------------
 drivers/gpu/drm/virtio/virtgpu_submit.c | 295 ++++++++++++++++++++++++
 4 files changed, 300 insertions(+), 183 deletions(-)
 create mode 100644 drivers/gpu/drm/virtio/virtgpu_submit.c

diff --git a/drivers/gpu/drm/virtio/Makefile b/drivers/gpu/drm/virtio/Makefile
index b99fa4a73b68..d2e1788a8227 100644
--- a/drivers/gpu/drm/virtio/Makefile
+++ b/drivers/gpu/drm/virtio/Makefile
@@ -6,6 +6,6 @@
 virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o virtgpu_vram.o \
 	virtgpu_display.o virtgpu_vq.o \
 	virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
-	virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o
+	virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o virtgpu_submit.o
 
 obj-$(CONFIG_DRM_VIRTIO_GPU) += virtio-gpu.o
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index af6ffb696086..4126c384286b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -486,4 +486,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
 				   struct sg_table *sgt,
 				   enum dma_data_direction dir);
 
+/* virtgpu_submit.c */
+int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *file);
+
 #endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index da45215a933d..b24b11f25197 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -38,36 +38,6 @@
 				    VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
 				    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
 
-static int virtio_gpu_fence_event_create(struct drm_device *dev,
-					 struct drm_file *file,
-					 struct virtio_gpu_fence *fence,
-					 uint32_t ring_idx)
-{
-	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
-	struct virtio_gpu_fence_event *e = NULL;
-	int ret;
-
-	if (!(vfpriv->ring_idx_mask & BIT_ULL(ring_idx)))
-		return 0;
-
-	e = kzalloc(sizeof(*e), GFP_KERNEL);
-	if (!e)
-		return -ENOMEM;
-
-	e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED;
-	e->event.length = sizeof(e->event);
-
-	ret = drm_event_reserve_init(dev, file, &e->base, &e->event);
-	if (ret)
-		goto free;
-
-	fence->e = e;
-	return 0;
-free:
-	kfree(e);
-	return ret;
-}
-
 /* Must be called with &virtio_gpu_fpriv.struct_mutex held. */
 static void virtio_gpu_create_context_locked(struct virtio_gpu_device *vgdev,
 					     struct virtio_gpu_fpriv *vfpriv)
@@ -108,158 +78,6 @@ static int virtio_gpu_map_ioctl(struct drm_device *dev, void *data,
 					 &virtio_gpu_map->offset);
 }
 
-/*
- * Usage of execbuffer:
- * Relocations need to take into account the full VIRTIO_GPUDrawable size.
- * However, the command as passed from user space must *not* contain the initial
- * VIRTIO_GPUReleaseInfo struct (first XXX bytes)
- */
-static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
-				 struct drm_file *file)
-{
-	struct drm_virtgpu_execbuffer *exbuf = data;
-	struct virtio_gpu_device *vgdev = dev->dev_private;
-	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
-	struct virtio_gpu_fence *out_fence;
-	int ret;
-	uint32_t *bo_handles = NULL;
-	void __user *user_bo_handles = NULL;
-	struct virtio_gpu_object_array *buflist = NULL;
-	struct sync_file *sync_file;
-	int out_fence_fd = -1;
-	void *buf;
-	uint64_t fence_ctx;
-	uint32_t ring_idx;
-
-	fence_ctx = vgdev->fence_drv.context;
-	ring_idx = 0;
-
-	if (vgdev->has_virgl_3d == false)
-		return -ENOSYS;
-
-	if ((exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS))
-		return -EINVAL;
-
-	if ((exbuf->flags & VIRTGPU_EXECBUF_RING_IDX)) {
-		if (exbuf->ring_idx >= vfpriv->num_rings)
-			return -EINVAL;
-
-		if (!vfpriv->base_fence_ctx)
-			return -EINVAL;
-
-		fence_ctx = vfpriv->base_fence_ctx;
-		ring_idx = exbuf->ring_idx;
-	}
-
-	virtio_gpu_create_context(dev, file);
-	if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) {
-		struct dma_fence *in_fence;
-
-		in_fence = sync_file_get_fence(exbuf->fence_fd);
-
-		if (!in_fence)
-			return -EINVAL;
-
-		/*
-		 * Wait if the fence is from a foreign context, or if the fence
-		 * array contains any fence from a foreign context.
-		 */
-		ret = 0;
-		if (!dma_fence_match_context(in_fence, fence_ctx + ring_idx))
-			ret = dma_fence_wait(in_fence, true);
-
-		dma_fence_put(in_fence);
-		if (ret)
-			return ret;
-	}
-
-	if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) {
-		out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
-		if (out_fence_fd < 0)
-			return out_fence_fd;
-	}
-
-	if (exbuf->num_bo_handles) {
-		bo_handles = kvmalloc_array(exbuf->num_bo_handles,
-					    sizeof(uint32_t), GFP_KERNEL);
-		if (!bo_handles) {
-			ret = -ENOMEM;
-			goto out_unused_fd;
-		}
-
-		user_bo_handles = u64_to_user_ptr(exbuf->bo_handles);
-		if (copy_from_user(bo_handles, user_bo_handles,
-				   exbuf->num_bo_handles * sizeof(uint32_t))) {
-			ret = -EFAULT;
-			goto out_unused_fd;
-		}
-
-		buflist = virtio_gpu_array_from_handles(file, bo_handles,
-							exbuf->num_bo_handles);
-		if (!buflist) {
-			ret = -ENOENT;
-			goto out_unused_fd;
-		}
-		kvfree(bo_handles);
-		bo_handles = NULL;
-	}
-
-	buf = vmemdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
-	if (IS_ERR(buf)) {
-		ret = PTR_ERR(buf);
-		goto out_unused_fd;
-	}
-
-	if (buflist) {
-		ret = virtio_gpu_array_lock_resv(buflist);
-		if (ret)
-			goto out_memdup;
-	}
-
-	out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
-	if(!out_fence) {
-		ret = -ENOMEM;
-		goto out_unresv;
-	}
-
-	ret = virtio_gpu_fence_event_create(dev, file, out_fence, ring_idx);
-	if (ret)
-		goto out_unresv;
-
-	if (out_fence_fd >= 0) {
-		sync_file = sync_file_create(&out_fence->f);
-		if (!sync_file) {
-			dma_fence_put(&out_fence->f);
-			ret = -ENOMEM;
-			goto out_unresv;
-		}
-
-		exbuf->fence_fd = out_fence_fd;
-		fd_install(out_fence_fd, sync_file->file);
-	}
-
-	virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
-			      vfpriv->ctx_id, buflist, out_fence);
-	dma_fence_put(&out_fence->f);
-	virtio_gpu_notify(vgdev);
-	return 0;
-
-out_unresv:
-	if (buflist)
-		virtio_gpu_array_unlock_resv(buflist);
-out_memdup:
-	kvfree(buf);
-out_unused_fd:
-	kvfree(bo_handles);
-	if (buflist)
-		virtio_gpu_array_put_free(buflist);
-
-	if (out_fence_fd >= 0)
-		put_unused_fd(out_fence_fd);
-
-	return ret;
-}
-
 static int virtio_gpu_getparam_ioctl(struct drm_device *dev, void *data,
 				     struct drm_file *file)
 {
diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
new file mode 100644
index 000000000000..84e7c4d9d8c7
--- /dev/null
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -0,0 +1,295 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright (C) 2015 Red Hat, Inc.
+ * All Rights Reserved.
+ *
+ * Authors:
+ *    Dave Airlie
+ *    Alon Levy
+ */
+
+#include <linux/dma-fence-unwrap.h>
+#include <linux/file.h>
+#include <linux/sync_file.h>
+#include <linux/uaccess.h>
+
+#include <drm/drm_file.h>
+#include <drm/virtgpu_drm.h>
+
+#include "virtgpu_drv.h"
+
+struct virtio_gpu_submit {
+	struct virtio_gpu_object_array *buflist;
+	struct drm_virtgpu_execbuffer *exbuf;
+	struct virtio_gpu_fence *out_fence;
+	struct virtio_gpu_fpriv *vfpriv;
+	struct virtio_gpu_device *vgdev;
+	struct sync_file *sync_file;
+	struct drm_file *file;
+	int out_fence_fd;
+	u64 fence_ctx;
+	u32 ring_idx;
+	void *buf;
+};
+
+static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
+				     struct dma_fence *in_fence)
+{
+	u32 context = submit->fence_ctx + submit->ring_idx;
+
+	if (dma_fence_match_context(in_fence, context))
+		return 0;
+
+	return dma_fence_wait(in_fence, true);
+}
+
+static int virtio_gpu_fence_event_create(struct drm_device *dev,
+					 struct drm_file *file,
+					 struct virtio_gpu_fence *fence,
+					 u32 ring_idx)
+{
+	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
+	struct virtio_gpu_fence_event *e = NULL;
+	int ret;
+
+	if (!(vfpriv->ring_idx_mask & BIT_ULL(ring_idx)))
+		return 0;
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+	if (!e)
+		return -ENOMEM;
+
+	e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED;
+	e->event.length = sizeof(e->event);
+
+	ret = drm_event_reserve_init(dev, file, &e->base, &e->event);
+	if (ret) {
+		kfree(e);
+		return ret;
+	}
+
+	fence->e = e;
+
+	return 0;
+}
+
+static int virtio_gpu_init_submit_buflist(struct virtio_gpu_submit *submit)
+{
+	struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
+	u32 *bo_handles;
+
+	if (!exbuf->num_bo_handles)
+		return 0;
+
+	bo_handles = kvmalloc_array(exbuf->num_bo_handles, sizeof(*bo_handles),
+				    GFP_KERNEL);
+	if (!bo_handles)
+		return -ENOMEM;
+
+	if (copy_from_user(bo_handles, u64_to_user_ptr(exbuf->bo_handles),
+			   exbuf->num_bo_handles * sizeof(*bo_handles))) {
+		kvfree(bo_handles);
+		return -EFAULT;
+	}
+
+	submit->buflist = virtio_gpu_array_from_handles(submit->file, bo_handles,
+							exbuf->num_bo_handles);
+	if (!submit->buflist) {
+		kvfree(bo_handles);
+		return -ENOENT;
+	}
+
+	kvfree(bo_handles);
+
+	return 0;
+}
+
+static void virtio_gpu_cleanup_submit(struct virtio_gpu_submit *submit)
+{
+	if (!IS_ERR(submit->buf))
+		kvfree(submit->buf);
+
+	if (submit->buflist)
+		virtio_gpu_array_put_free(submit->buflist);
+
+	if (submit->out_fence_fd >= 0)
+		put_unused_fd(submit->out_fence_fd);
+
+	if (submit->out_fence)
+		dma_fence_put(&submit->out_fence->f);
+
+	if (submit->sync_file)
+		fput(submit->sync_file->file);
+}
+
+static void virtio_gpu_submit(struct virtio_gpu_submit *submit)
+{
+	virtio_gpu_cmd_submit(submit->vgdev, submit->buf, submit->exbuf->size,
+			      submit->vfpriv->ctx_id, submit->buflist,
+			      submit->out_fence);
+	virtio_gpu_notify(submit->vgdev);
+}
+
+static void virtio_gpu_complete_submit(struct virtio_gpu_submit *submit)
+{
+	submit->buf = NULL;
+	submit->buflist = NULL;
+	submit->sync_file = NULL;
+	submit->out_fence = NULL;
+	submit->out_fence_fd = -1;
+}
+
+static int virtio_gpu_init_submit(struct virtio_gpu_submit *submit,
+				  struct drm_virtgpu_execbuffer *exbuf,
+				  struct drm_device *dev,
+				  struct drm_file *file,
+				  u64 fence_ctx, u32 ring_idx)
+{
+	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct virtio_gpu_fence *out_fence;
+	int err;
+
+	memset(submit, 0, sizeof(*submit));
+
+	out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
+	if (!out_fence)
+		return -ENOMEM;
+
+	err = virtio_gpu_fence_event_create(dev, file, out_fence, ring_idx);
+	if (err) {
+		dma_fence_put(&out_fence->f);
+		return err;
+	}
+
+	submit->out_fence = out_fence;
+	submit->fence_ctx = fence_ctx;
+	submit->ring_idx = ring_idx;
+	submit->out_fence_fd = -1;
+	submit->vfpriv = vfpriv;
+	submit->vgdev = vgdev;
+	submit->exbuf = exbuf;
+	submit->file = file;
+
+	err = virtio_gpu_init_submit_buflist(submit);
+	if (err)
+		return err;
+
+	submit->buf = vmemdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
+	if (IS_ERR(submit->buf))
+		return PTR_ERR(submit->buf);
+
+	if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) {
+		err = get_unused_fd_flags(O_CLOEXEC);
+		if (err < 0)
+			return err;
+
+		submit->out_fence_fd = err;
+
+		submit->sync_file = sync_file_create(&out_fence->f);
+		if (!submit->sync_file)
+			return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int virtio_gpu_wait_in_fence(struct virtio_gpu_submit *submit)
+{
+	int ret = 0;
+
+	if (submit->exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) {
+		struct dma_fence *in_fence =
+				sync_file_get_fence(submit->exbuf->fence_fd);
+		if (!in_fence)
+			return -EINVAL;
+
+		/*
+		 * Wait if the fence is from a foreign context, or if the
+		 * fence array contains any fence from a foreign context.
+		 */
+		ret = virtio_gpu_dma_fence_wait(submit, in_fence);
+
+		dma_fence_put(in_fence);
+	}
+
+	return ret;
+}
+
+static void virtio_gpu_install_out_fence_fd(struct virtio_gpu_submit *submit)
+{
+	if (submit->sync_file) {
+		submit->exbuf->fence_fd = submit->out_fence_fd;
+		fd_install(submit->out_fence_fd, submit->sync_file->file);
+	}
+}
+
+static int virtio_gpu_lock_buflist(struct virtio_gpu_submit *submit)
+{
+	if (submit->buflist)
+		return virtio_gpu_array_lock_resv(submit->buflist);
+
+	return 0;
+}
+
+int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *file)
+{
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
+	u64 fence_ctx = vgdev->fence_drv.context;
+	struct drm_virtgpu_execbuffer *exbuf = data;
+	struct virtio_gpu_submit submit;
+	u32 ring_idx = 0;
+	int ret = -EINVAL;
+
+	if (!vgdev->has_virgl_3d)
+		return -ENOSYS;
+
+	if (exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS)
+		return ret;
+
+	if (exbuf->flags & VIRTGPU_EXECBUF_RING_IDX) {
+		if (exbuf->ring_idx >= vfpriv->num_rings)
+			return ret;
+
+		if (!vfpriv->base_fence_ctx)
+			return ret;
+
+		fence_ctx = vfpriv->base_fence_ctx;
+		ring_idx = exbuf->ring_idx;
+	}
+
+	virtio_gpu_create_context(dev, file);
+
+	ret = virtio_gpu_init_submit(&submit, exbuf, dev, file,
+				     fence_ctx, ring_idx);
+	if (ret)
+		goto cleanup;
+
+	/*
+	 * Await in-fences in the end of the job submission path to
+	 * optimize the path by proceeding directly to the submission
+	 * to virtio after the waits.
+	 */
+	ret = virtio_gpu_wait_in_fence(&submit);
+	if (ret)
+		goto cleanup;
+
+	ret = virtio_gpu_lock_buflist(&submit);
+	if (ret)
+		goto cleanup;
+
+	virtio_gpu_submit(&submit);
+
+	/*
+	 * Set up usr-out data after submitting the job to optimize
+	 * the job submission path.
+	 */
+	virtio_gpu_install_out_fence_fd(&submit);
+	virtio_gpu_complete_submit(&submit);
+cleanup:
+	virtio_gpu_cleanup_submit(&submit);
+
+	return ret;
+}
-- 
2.39.2



* [PATCH v6 2/3] drm/virtio: Wait for each dma-fence of in-fence array individually
  2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
  2023-04-16 11:52 ` [PATCH v6 1/3] drm/virtio: Refactor and optimize job submission code path Dmitry Osipenko
@ 2023-04-16 11:52 ` Dmitry Osipenko
  2023-04-16 11:52 ` [PATCH v6 3/3] drm/virtio: Support sync objects Dmitry Osipenko
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-04-16 11:52 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

Use the dma-fence-unwrap API to wait for each dma-fence of the in-fence
array individually. A sync file's in-fence array always has a non-matching
fence context ID, which makes it impossible to skip waiting for fences
with a matching context ID in the case of a merged sync file fence.

Suggested-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_submit.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
index 84e7c4d9d8c7..cf3c04b16a7a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -32,8 +32,8 @@ struct virtio_gpu_submit {
 	void *buf;
 };
 
-static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
-				     struct dma_fence *in_fence)
+static int virtio_gpu_do_fence_wait(struct virtio_gpu_submit *submit,
+				    struct dma_fence *in_fence)
 {
 	u32 context = submit->fence_ctx + submit->ring_idx;
 
@@ -43,6 +43,22 @@ static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
 	return dma_fence_wait(in_fence, true);
 }
 
+static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
+				     struct dma_fence *fence)
+{
+	struct dma_fence_unwrap itr;
+	struct dma_fence *f;
+	int err;
+
+	dma_fence_unwrap_for_each(f, &itr, fence) {
+		err = virtio_gpu_do_fence_wait(submit, f);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
 static int virtio_gpu_fence_event_create(struct drm_device *dev,
 					 struct drm_file *file,
 					 struct virtio_gpu_fence *fence,
-- 
2.39.2



* [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
  2023-04-16 11:52 ` [PATCH v6 1/3] drm/virtio: Refactor and optimize job submission code path Dmitry Osipenko
  2023-04-16 11:52 ` [PATCH v6 2/3] drm/virtio: Wait for each dma-fence of in-fence array individually Dmitry Osipenko
@ 2023-04-16 11:52 ` Dmitry Osipenko
  2023-05-01 15:29   ` Dmitry Osipenko
                     ` (2 more replies)
  2023-04-17 23:17 ` [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Gurchetan Singh
                   ` (2 subsequent siblings)
  5 siblings, 3 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-04-16 11:52 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

Add sync object DRM UAPI support to the VirtIO-GPU driver. Sync object
support is needed by the native context VirtIO-GPU Mesa drivers, and it
will also be used by the Venus and Virgl contexts.

Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    |   3 +-
 drivers/gpu/drm/virtio/virtgpu_submit.c | 219 ++++++++++++++++++++++++
 include/uapi/drm/virtgpu_drm.h          |  16 +-
 3 files changed, 236 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index add075681e18..a22155577152 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -176,7 +176,8 @@ static const struct drm_driver driver = {
 	 * If KMS is disabled DRIVER_MODESET and DRIVER_ATOMIC are masked
 	 * out via drm_device::driver_features:
 	 */
-	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
+	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC |
+			   DRIVER_SYNCOBJ | DRIVER_SYNCOBJ_TIMELINE,
 	.open = virtio_gpu_driver_open,
 	.postclose = virtio_gpu_driver_postclose,
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
index cf3c04b16a7a..5a0f2526c1a0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -14,11 +14,24 @@
 #include <linux/uaccess.h>
 
 #include <drm/drm_file.h>
+#include <drm/drm_syncobj.h>
 #include <drm/virtgpu_drm.h>
 
 #include "virtgpu_drv.h"
 
+struct virtio_gpu_submit_post_dep {
+	struct drm_syncobj *syncobj;
+	struct dma_fence_chain *chain;
+	u64 point;
+};
+
 struct virtio_gpu_submit {
+	struct virtio_gpu_submit_post_dep *post_deps;
+	unsigned int num_out_syncobjs;
+
+	struct drm_syncobj **in_syncobjs;
+	unsigned int num_in_syncobjs;
+
 	struct virtio_gpu_object_array *buflist;
 	struct drm_virtgpu_execbuffer *exbuf;
 	struct virtio_gpu_fence *out_fence;
@@ -59,6 +72,199 @@ static int virtio_gpu_dma_fence_wait(struct virtio_gpu_submit *submit,
 	return 0;
 }
 
+static void virtio_gpu_free_syncobjs(struct drm_syncobj **syncobjs,
+				     u32 nr_syncobjs)
+{
+	u32 i = nr_syncobjs;
+
+	while (i--) {
+		if (syncobjs[i])
+			drm_syncobj_put(syncobjs[i]);
+	}
+
+	kvfree(syncobjs);
+}
+
+static int
+virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
+{
+	struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
+	struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
+	size_t syncobj_stride = exbuf->syncobj_stride;
+	u32 num_in_syncobjs = exbuf->num_in_syncobjs;
+	struct drm_syncobj **syncobjs;
+	int ret = 0, i;
+
+	if (!num_in_syncobjs)
+		return 0;
+
+	/*
+	 * kvcalloc() at first tries to allocate memory using kmalloc() and
+	 * falls back to vmalloc only on failure. It also uses __GFP_NOWARN
+	 * internally for allocations larger than a page size, preventing
+	 * storm of KMSG warnings.
+	 */
+	syncobjs = kvcalloc(num_in_syncobjs, sizeof(*syncobjs), GFP_KERNEL);
+	if (!syncobjs)
+		return -ENOMEM;
+
+	for (i = 0; i < num_in_syncobjs; i++) {
+		u64 address = exbuf->in_syncobjs + i * syncobj_stride;
+		struct dma_fence *fence;
+
+		memset(&syncobj_desc, 0, sizeof(syncobj_desc));
+
+		if (copy_from_user(&syncobj_desc,
+				   u64_to_user_ptr(address),
+				   min(syncobj_stride, sizeof(syncobj_desc)))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		if (syncobj_desc.flags & ~VIRTGPU_EXECBUF_SYNCOBJ_FLAGS) {
+			ret = -EINVAL;
+			break;
+		}
+
+		ret = drm_syncobj_find_fence(submit->file, syncobj_desc.handle,
+					     syncobj_desc.point, 0, &fence);
+		if (ret)
+			break;
+
+		ret = virtio_gpu_dma_fence_wait(submit, fence);
+
+		dma_fence_put(fence);
+		if (ret)
+			break;
+
+		if (syncobj_desc.flags & VIRTGPU_EXECBUF_SYNCOBJ_RESET) {
+			syncobjs[i] = drm_syncobj_find(submit->file,
+						       syncobj_desc.handle);
+			if (!syncobjs[i]) {
+				ret = -EINVAL;
+				break;
+			}
+		}
+	}
+
+	if (ret) {
+		virtio_gpu_free_syncobjs(syncobjs, i);
+		return ret;
+	}
+
+	submit->num_in_syncobjs = num_in_syncobjs;
+	submit->in_syncobjs = syncobjs;
+
+	return ret;
+}
+
+static void virtio_gpu_reset_syncobjs(struct drm_syncobj **syncobjs,
+				      u32 nr_syncobjs)
+{
+	u32 i;
+
+	for (i = 0; i < nr_syncobjs; i++) {
+		if (syncobjs[i])
+			drm_syncobj_replace_fence(syncobjs[i], NULL);
+	}
+}
+
+static void
+virtio_gpu_free_post_deps(struct virtio_gpu_submit_post_dep *post_deps,
+			  u32 nr_syncobjs)
+{
+	u32 i = nr_syncobjs;
+
+	while (i--) {
+		kfree(post_deps[i].chain);
+		drm_syncobj_put(post_deps[i].syncobj);
+	}
+
+	kvfree(post_deps);
+}
+
+static int virtio_gpu_parse_post_deps(struct virtio_gpu_submit *submit)
+{
+	struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
+	struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
+	struct virtio_gpu_submit_post_dep *post_deps;
+	u32 num_out_syncobjs = exbuf->num_out_syncobjs;
+	size_t syncobj_stride = exbuf->syncobj_stride;
+	int ret = 0, i;
+
+	if (!num_out_syncobjs)
+		return 0;
+
+	post_deps = kvcalloc(num_out_syncobjs, sizeof(*post_deps), GFP_KERNEL);
+	if (!post_deps)
+		return -ENOMEM;
+
+	for (i = 0; i < num_out_syncobjs; i++) {
+		u64 address = exbuf->out_syncobjs + i * syncobj_stride;
+
+		memset(&syncobj_desc, 0, sizeof(syncobj_desc));
+
+		if (copy_from_user(&syncobj_desc,
+				   u64_to_user_ptr(address),
+				   min(syncobj_stride, sizeof(syncobj_desc)))) {
+			ret = -EFAULT;
+			break;
+		}
+
+		post_deps[i].point = syncobj_desc.point;
+
+		if (syncobj_desc.flags) {
+			ret = -EINVAL;
+			break;
+		}
+
+		if (syncobj_desc.point) {
+			post_deps[i].chain = dma_fence_chain_alloc();
+			if (!post_deps[i].chain) {
+				ret = -ENOMEM;
+				break;
+			}
+		}
+
+		post_deps[i].syncobj = drm_syncobj_find(submit->file,
+							syncobj_desc.handle);
+		if (!post_deps[i].syncobj) {
+			kfree(post_deps[i].chain);
+			ret = -EINVAL;
+			break;
+		}
+	}
+
+	if (ret) {
+		virtio_gpu_free_post_deps(post_deps, i);
+		return ret;
+	}
+
+	submit->num_out_syncobjs = num_out_syncobjs;
+	submit->post_deps = post_deps;
+
+	return 0;
+}
+
+static void
+virtio_gpu_process_post_deps(struct virtio_gpu_submit *submit)
+{
+	struct virtio_gpu_submit_post_dep *post_deps = submit->post_deps;
+	struct dma_fence *fence = &submit->out_fence->f;
+	u32 i;
+
+	for (i = 0; i < submit->num_out_syncobjs; i++) {
+		if (post_deps[i].chain) {
+			drm_syncobj_add_point(post_deps[i].syncobj,
+					      post_deps[i].chain,
+					      fence, post_deps[i].point);
+			post_deps[i].chain = NULL;
+		} else {
+			drm_syncobj_replace_fence(post_deps[i].syncobj, fence);
+		}
+	}
+}
+
 static int virtio_gpu_fence_event_create(struct drm_device *dev,
 					 struct drm_file *file,
 					 struct virtio_gpu_fence *fence,
@@ -122,6 +328,10 @@ static int virtio_gpu_init_submit_buflist(struct virtio_gpu_submit *submit)
 
 static void virtio_gpu_cleanup_submit(struct virtio_gpu_submit *submit)
 {
+	virtio_gpu_reset_syncobjs(submit->in_syncobjs, submit->num_in_syncobjs);
+	virtio_gpu_free_syncobjs(submit->in_syncobjs, submit->num_in_syncobjs);
+	virtio_gpu_free_post_deps(submit->post_deps, submit->num_out_syncobjs);
+
 	if (!IS_ERR(submit->buf))
 		kvfree(submit->buf);
 
@@ -288,6 +498,14 @@ int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	 * optimize the path by proceeding directly to the submission
 	 * to virtio after the waits.
 	 */
+	ret = virtio_gpu_parse_post_deps(&submit);
+	if (ret)
+		goto cleanup;
+
+	ret = virtio_gpu_parse_deps(&submit);
+	if (ret)
+		goto cleanup;
+
 	ret = virtio_gpu_wait_in_fence(&submit);
 	if (ret)
 		goto cleanup;
@@ -303,6 +521,7 @@ int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	 * the job submission path.
 	 */
 	virtio_gpu_install_out_fence_fd(&submit);
+	virtio_gpu_process_post_deps(&submit);
 	virtio_gpu_complete_submit(&submit);
 cleanup:
 	virtio_gpu_cleanup_submit(&submit);
diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 7b158fcb02b4..b1d0e56565bc 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -64,6 +64,16 @@ struct drm_virtgpu_map {
 	__u32 pad;
 };
 
+#define VIRTGPU_EXECBUF_SYNCOBJ_RESET		0x01
+#define VIRTGPU_EXECBUF_SYNCOBJ_FLAGS ( \
+		VIRTGPU_EXECBUF_SYNCOBJ_RESET | \
+		0)
+struct drm_virtgpu_execbuffer_syncobj {
+	__u32 handle;
+	__u32 flags;
+	__u64 point;
+};
+
 /* fence_fd is modified on success if VIRTGPU_EXECBUF_FENCE_FD_OUT flag is set. */
 struct drm_virtgpu_execbuffer {
 	__u32 flags;
@@ -73,7 +83,11 @@ struct drm_virtgpu_execbuffer {
 	__u32 num_bo_handles;
 	__s32 fence_fd; /* in/out fence fd (see VIRTGPU_EXECBUF_FENCE_FD_IN/OUT) */
 	__u32 ring_idx; /* command ring index (see VIRTGPU_EXECBUF_RING_IDX) */
-	__u32 pad;
+	__u32 syncobj_stride; /* size of @drm_virtgpu_execbuffer_syncobj */
+	__u32 num_in_syncobjs;
+	__u32 num_out_syncobjs;
+	__u64 in_syncobjs;
+	__u64 out_syncobjs;
 };
 
 #define VIRTGPU_PARAM_3D_FEATURES 1 /* do we have 3D features in the hw */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
                   ` (2 preceding siblings ...)
  2023-04-16 11:52 ` [PATCH v6 3/3] drm/virtio: Support sync objects Dmitry Osipenko
@ 2023-04-17 23:17 ` Gurchetan Singh
  2023-04-19 21:22   ` Dmitry Osipenko
  2023-05-01 15:38 ` Dmitry Osipenko
  2023-06-03  2:11 ` Dmitry Osipenko
  5 siblings, 1 reply; 28+ messages in thread
From: Gurchetan Singh @ 2023-04-17 23:17 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, Dominik Behr,
	David Airlie, kernel, Emil Velikov


On Sun, Apr 16, 2023 at 4:53 AM Dmitry Osipenko <
dmitry.osipenko@collabora.com> wrote:

> We have multiple Vulkan context types that are awaiting for the addition
> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>
>  1. Venus context
>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>
> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> generic fencing implementation that we want to utilize.
>
> This patch adds initial sync objects support. It creates fundament for a
> further fencing improvements. Later on we will want to extend the
> VirtIO-GPU
> fencing API with passing fence IDs to host for waiting, it will be a new
> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
> context
> drivers in works that require VirtIO-GPU to support sync objects UAPI.
>
> The patch is heavily inspired by the sync object UAPI implementation of the
> MSM driver.
>

The changes seem good, but I would recommend getting a full end-to-end
solution (i.e., you've proxied the host fence with these changes and shared
it with the host compositor) working first.  You never know what you'll
find after completing this exercise.  Or is that the plan already?

Typically, you want to land the uAPI and virtio spec changes last.
Mesa/gfxstream/virglrenderer/crosvm all have the ability to test out
unstable uAPIs ...


>
> Changelog:
>
> v6: - Added zeroing out of syncobj_desc, as was suggested by Emil Velikov.
>
>     - Fixed memleak in error code path which was spotted by Emil Velikov.
>
>     - Switched to u32/u64 instead of uint_t. Previously was keeping
>       uint_t style of the virtgpu_ioctl.c, in the end decided to change
>       it because it's not a proper kernel coding style after all.
>
>     - Kept single drm_virtgpu_execbuffer_syncobj struct for both in/out
>       sync objects. There was a little concern about whether it would be
>       worthwhile to have separate in/out descriptors, in practice it's
>       unlikely that we will extend the descs in a foreseeable future.
>       There is no overhead in using same struct since we want to pad it
>       to 64b anyways and it shouldn't be a problem to separate the descs
>       later on if we will want to do that.
>
>     - Added r-b from Emil Velikov.
>
> v5: - Factored out dma-fence unwrap API usage into separate patch as was
>       suggested by Emil Velikov.
>
>     - Improved and documented the job submission reorderings as was
>       requested by Emil Velikov. Sync file FD is now installed after
>       job is submitted to virtio to further optimize reorderings.
>
>     - Added comment for the kvalloc, as was requested by Emil Velikov.
>
>     - The num_in/out_syncobjs now is set only after completed parsing
>       of pre/post deps, as was requested by Emil Velikov.
>
> v4: - Added r-b from Rob Clark to the "refactoring" patch.
>
>     - Replaced for/while(ptr && itr) with if (ptr), like was suggested by
>       Rob Clark.
>
>     - Dropped NOWARN and NORETRY GFP flags and switched syncobj patch
>       to use kvmalloc.
>
>     - Removed unused variables from syncobj patch that were borrowed by
>       accident from another (upcoming) patch after one of git rebases.
>
> v3: - Switched to use dma_fence_unwrap_for_each(), like was suggested by
>       Rob Clark.
>
>     - Fixed missing dma_fence_put() in error code path that was spotted by
>       Rob Clark.
>
>     - Removed obsoleted comment to virtio_gpu_execbuffer_ioctl(), like was
>       suggested by Rob Clark.
>
> v2: - Fixed chain-fence context matching by making use of
>       dma_fence_chain_contained().
>
>     - Fixed potential uninitialized var usage in error code patch of
>       parse_post_deps(). MSM driver had a similar issue that is fixed
>       already in upstream.
>
>     - Added new patch that refactors job submission code path. I found
>       that it was very difficult to add a new/upcoming host-waits feature
>       because of how variables are passed around the code, the
> virtgpu_ioctl.c
>       also was growing to unmanageable size.
>
> Dmitry Osipenko (3):
>   drm/virtio: Refactor and optimize job submission code path
>   drm/virtio: Wait for each dma-fence of in-fence array individually
>   drm/virtio: Support sync objects
>
>  drivers/gpu/drm/virtio/Makefile         |   2 +-
>  drivers/gpu/drm/virtio/virtgpu_drv.c    |   3 +-
>  drivers/gpu/drm/virtio/virtgpu_drv.h    |   4 +
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 182 --------
>  drivers/gpu/drm/virtio/virtgpu_submit.c | 530 ++++++++++++++++++++++++
>  include/uapi/drm/virtgpu_drm.h          |  16 +-
>  6 files changed, 552 insertions(+), 185 deletions(-)
>  create mode 100644 drivers/gpu/drm/virtio/virtgpu_submit.c
>
> --
> 2.39.2
>
>



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-04-17 23:17 ` [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Gurchetan Singh
@ 2023-04-19 21:22   ` Dmitry Osipenko
  2023-04-24 18:40     ` Gurchetan Singh
  0 siblings, 1 reply; 28+ messages in thread
From: Dmitry Osipenko @ 2023-04-19 21:22 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, Dominik Behr,
	David Airlie, kernel, Emil Velikov

Hello Gurchetan,

On 4/18/23 02:17, Gurchetan Singh wrote:
> On Sun, Apr 16, 2023 at 4:53 AM Dmitry Osipenko <
> dmitry.osipenko@collabora.com> wrote:
> 
>> We have multiple Vulkan context types that are awaiting for the addition
>> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>>
>>  1. Venus context
>>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>>
>> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
>> generic fencing implementation that we want to utilize.
>>
>> This patch adds initial sync objects support. It creates fundament for a
>> further fencing improvements. Later on we will want to extend the
>> VirtIO-GPU
>> fencing API with passing fence IDs to host for waiting, it will be a new
>> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
>> context
>> drivers in works that require VirtIO-GPU to support sync objects UAPI.
>>
>> The patch is heavily inspired by the sync object UAPI implementation of the
>> MSM driver.
>>
> 
> The changes seem good, but I would recommend getting a full end-to-end
> solution (i.e, you've proxied the host fence with these changes and shared
> with the host compositor) working first.  You'll never know what you'll
> find after completing this exercise.  Or is that the plan already?
> 
> Typically, you want to land the uAPI and virtio spec changes last.
> Mesa/gfxstream/virglrenderer/crosvm all have the ability to test out
> unstable uAPIs ...

The proxied host fence isn't directly related to sync objects, though I
prepared code such that it could be extended with a proxied fence later
on, based on a prototype that was made some time ago.

The proxied host fence shouldn't require UAPI changes, only a
virtio-gpu protocol extension. Normally, all in-fences belong to the job's
own context, and thus the waits are skipped by the guest kernel. Hence,
fence proxying is a separate feature from sync objects and can be added
without them.

Sync objects are primarily wanted by the native context drivers because
Mesa relies on the presence of the sync object UAPI. It's one of the
direct blockers for the Intel and AMDGPU drivers, both of which have been
using this sync object UAPI for a few months and now want it to land
upstream.

-- 
Best regards,
Dmitry



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-04-19 21:22   ` Dmitry Osipenko
@ 2023-04-24 18:40     ` Gurchetan Singh
  0 siblings, 0 replies; 28+ messages in thread
From: Gurchetan Singh @ 2023-04-24 18:40 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, Dominik Behr,
	David Airlie, kernel, Emil Velikov

On Wed, Apr 19, 2023 at 2:22 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> Hello Gurchetan,
>
> On 4/18/23 02:17, Gurchetan Singh wrote:
> > On Sun, Apr 16, 2023 at 4:53 AM Dmitry Osipenko <
> > dmitry.osipenko@collabora.com> wrote:
> >
> >> We have multiple Vulkan context types that are awaiting for the addition
> >> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> >>
> >>  1. Venus context
> >>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> >>
> >> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> >> generic fencing implementation that we want to utilize.
> >>
> >> This patch adds initial sync objects support. It creates fundament for a
> >> further fencing improvements. Later on we will want to extend the
> >> VirtIO-GPU
> >> fencing API with passing fence IDs to host for waiting, it will be a new
> >> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
> >> context
> >> drivers in works that require VirtIO-GPU to support sync objects UAPI.
> >>
> >> The patch is heavily inspired by the sync object UAPI implementation of the
> >> MSM driver.
> >>
> >
> > The changes seem good, but I would recommend getting a full end-to-end
> > solution (i.e, you've proxied the host fence with these changes and shared
> > with the host compositor) working first.  You'll never know what you'll
> > find after completing this exercise.  Or is that the plan already?
> >
> > Typically, you want to land the uAPI and virtio spec changes last.
> > Mesa/gfxstream/virglrenderer/crosvm all have the ability to test out
> > unstable uAPIs ...
>
> The proxied host fence isn't directly related to sync objects, though I
> prepared code such that it could be extended with a proxied fence later
> on, based on a prototype that was made some time ago.

Proxying the host fence is the novel bit.  If you have code that does
this, you should rebase/send that out (even as an RFC) so it's easier
to see how the pieces fit.

Right now, if you've only tested synchronization objects within the
same virtio-gpu context (which skips the guest-side wait), I think you
can already do that with the current uAPI (since ideally you'd wait on
the host side and can encode the sync resource in the command stream).

Also, try to come up with a simple test (so we can meet the requirements here
[a]) that showcases the new feature/capability.  An example would be
the virtio-intel native context sharing a fence with KMS or even
Wayland.

[a] https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements

>
> The proxied host fence shouldn't require UAPI changes, but only
> virtio-gpu proto extension. Normally, all in-fences belong to a job's
> context, and thus, waits are skipped by the guest kernel. Hence, fence
> proxying is a separate feature from sync objects, it can be added
> without sync objects.
>
> Sync objects primarily wanted by native context drivers because Mesa
> relies on the sync object UAPI presence. It's one of direct blockers for
> Intel and AMDGPU drivers, both of which has been using this sync object
> UAPI for a few months and now wanting it to land upstream.
>
> --
> Best regards,
> Dmitry
>


* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-04-16 11:52 ` [PATCH v6 3/3] drm/virtio: Support sync objects Dmitry Osipenko
@ 2023-05-01 15:29   ` Dmitry Osipenko
  2023-06-25  8:47   ` Geert Uytterhoeven
  2023-07-31 22:47   ` Dmitry Osipenko
  2 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-05-01 15:29 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

On 4/16/23 14:52, Dmitry Osipenko wrote:
> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
> support is needed by native context VirtIO-GPU Mesa drivers, it also will
> be used by Venus and Virgl contexts.
> 
> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.c    |   3 +-
>  drivers/gpu/drm/virtio/virtgpu_submit.c | 219 ++++++++++++++++++++++++
>  include/uapi/drm/virtgpu_drm.h          |  16 +-
>  3 files changed, 236 insertions(+), 2 deletions(-)

Pierre-Eric tested this v6 patchset with the AMDGPU native context. He
has problems with his email/ML setup and is unable to reply here. I asked
him to provide his t-b on the Mesa MR [1] and am now replicating it here.

[1]
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658#note_1889792

Tested-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

-- 
Best regards,
Dmitry



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
                   ` (3 preceding siblings ...)
  2023-04-17 23:17 ` [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Gurchetan Singh
@ 2023-05-01 15:38 ` Dmitry Osipenko
  2023-05-03  6:51   ` Gerd Hoffmann
  2023-05-03 17:07   ` Gurchetan Singh
  2023-06-03  2:11 ` Dmitry Osipenko
  5 siblings, 2 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-05-01 15:38 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, David Airlie, kernel, virtualization,
	Emil Velikov

On 4/16/23 14:52, Dmitry Osipenko wrote:
> We have multiple Vulkan context types that are awaiting for the addition
> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> 
>  1. Venus context
>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> 
> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> generic fencing implementation that we want to utilize.
> 
> This patch adds initial sync objects support. It creates fundament for a
> further fencing improvements. Later on we will want to extend the VirtIO-GPU
> fencing API with passing fence IDs to host for waiting, it will be a new
> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
> drivers in works that require VirtIO-GPU to support sync objects UAPI.
> 
> The patch is heavily inspired by the sync object UAPI implementation of the
> MSM driver.

Gerd, do you have any objections to merging this series?

We have AMDGPU [1] and Intel [2] native context WIP drivers depending on
the sync object support. It is the only part missing from the kernel today
that is wanted by the native context drivers. Otherwise, there are a few
other things in Qemu and virglrenderer left to sort out.

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
[2] https://gitlab.freedesktop.org/digetx/mesa/-/commits/native-context-iris

-- 
Best regards,
Dmitry



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-01 15:38 ` Dmitry Osipenko
@ 2023-05-03  6:51   ` Gerd Hoffmann
  2023-05-08 12:16     ` Dmitry Osipenko
  2023-05-03 17:07   ` Gurchetan Singh
  1 sibling, 1 reply; 28+ messages in thread
From: Gerd Hoffmann @ 2023-05-03  6:51 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, David Airlie, kernel, virtualization,
	Emil Velikov

On Mon, May 01, 2023 at 06:38:45PM +0300, Dmitry Osipenko wrote:
> On 4/16/23 14:52, Dmitry Osipenko wrote:
> > We have multiple Vulkan context types that are awaiting for the addition
> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> > 
> >  1. Venus context
> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> > 
> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> > generic fencing implementation that we want to utilize.
> > 
> > This patch adds initial sync objects support. It creates fundament for a
> > further fencing improvements. Later on we will want to extend the VirtIO-GPU
> > fencing API with passing fence IDs to host for waiting, it will be a new
> > additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
> > drivers in works that require VirtIO-GPU to support sync objects UAPI.
> > 
> > The patch is heavily inspired by the sync object UAPI implementation of the
> > MSM driver.
> 
> Gerd, do you have any objections to merging this series?

No objections.  Can't spot any issues, but I also don't follow drm closely
enough to be able to review the sync object logic in detail.

Acked-by: Gerd Hoffmann <kraxel@redhat.com>

take care,
  Gerd



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-01 15:38 ` Dmitry Osipenko
  2023-05-03  6:51   ` Gerd Hoffmann
@ 2023-05-03 17:07   ` Gurchetan Singh
  2023-05-08 13:59     ` Rob Clark
  1 sibling, 1 reply; 28+ messages in thread
From: Gurchetan Singh @ 2023-05-03 17:07 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, David Airlie, kernel,
	Emil Velikov


On Mon, May 1, 2023 at 8:38 AM Dmitry Osipenko <
dmitry.osipenko@collabora.com> wrote:

> On 4/16/23 14:52, Dmitry Osipenko wrote:
> > We have multiple Vulkan context types that are awaiting for the addition
> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> >
> >  1. Venus context
> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> >
> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> > generic fencing implementation that we want to utilize.
> >
> > This patch adds initial sync objects support. It creates fundament for a
> > further fencing improvements. Later on we will want to extend the
> VirtIO-GPU
> > fencing API with passing fence IDs to host for waiting, it will be a new
> > additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU
> context
> > drivers in works that require VirtIO-GPU to support sync objects UAPI.
> >
> > The patch is heavily inspired by the sync object UAPI implementation of
> the
> > MSM driver.
>
> Gerd, do you have any objections to merging this series?
>
> We have AMDGPU [1] and Intel [2] native context WIP drivers depending on
> the sync object support. It is the only part missing from kernel today
> that is wanted by the native context drivers. Otherwise, there are few
> other things in Qemu and virglrenderer left to sort out.
>
> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
> [2]
> https://gitlab.freedesktop.org/digetx/mesa/-/commits/native-context-iris


I'm not saying this change isn't good, just that it's probably possible to
implement the native contexts (even up to VK 1.2) without it.  But this
patch series may be the most ergonomic way to do it, given how Mesa is
designed.  You probably want one of the Mesa MRs reviewed first before
merging (I added a comment on the amdgpu change), and that is a requirement
[a].

[a] "The userspace side must be fully reviewed and tested to the standards
of that user space project. For e.g. mesa this means piglit testcases and
review on the mailing list. This is again to ensure that the new interface
actually gets the job done." -- from the requirements


>
>
> --
> Best regards,
> Dmitry
>
>



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-03  6:51   ` Gerd Hoffmann
@ 2023-05-08 12:16     ` Dmitry Osipenko
  0 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-05-08 12:16 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, David Airlie, kernel, virtualization,
	Emil Velikov

On 5/3/23 09:51, Gerd Hoffmann wrote:
> On Mon, May 01, 2023 at 06:38:45PM +0300, Dmitry Osipenko wrote:
>> On 4/16/23 14:52, Dmitry Osipenko wrote:
>>> We have multiple Vulkan context types that are awaiting for the addition
>>> of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>>>
>>>  1. Venus context
>>>  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>>>
>>> Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
>>> generic fencing implementation that we want to utilize.
>>>
>>> This patch adds initial sync objects support. It creates fundament for a
>>> further fencing improvements. Later on we will want to extend the VirtIO-GPU
>>> fencing API with passing fence IDs to host for waiting, it will be a new
>>> additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
>>> drivers in works that require VirtIO-GPU to support sync objects UAPI.
>>>
>>> The patch is heavily inspired by the sync object UAPI implementation of the
>>> MSM driver.
>>
>> Gerd, do you have any objections to merging this series?
> 
> No objections.  Can't spot any issues, but I also don't follow drm close
> enough to be able to review the sync object logic in detail.
> 
> Acked-by: Gerd Hoffmann <kraxel@redhat.com>

Thanks, I'll work with Gurchetan on resolving his questions and will
apply the patches as soon as he gives his ack.

-- 
Best regards,
Dmitry



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-03 17:07   ` Gurchetan Singh
@ 2023-05-08 13:59     ` Rob Clark
  2023-05-12  0:17       ` Gurchetan Singh
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2023-05-08 13:59 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, Dmitry Osipenko,
	David Airlie, kernel, Emil Velikov

On Wed, May 3, 2023 at 10:07 AM Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
>
>
> On Mon, May 1, 2023 at 8:38 AM Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>
>> On 4/16/23 14:52, Dmitry Osipenko wrote:
>> > We have multiple Vulkan context types that are awaiting for the addition
>> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
>> >
>> >  1. Venus context
>> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
>> >
>> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
>> > generic fencing implementation that we want to utilize.
>> >
>> > This patch adds initial sync objects support. It creates fundament for a
>> > further fencing improvements. Later on we will want to extend the VirtIO-GPU
>> > fencing API with passing fence IDs to host for waiting, it will be a new
>> > additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
>> > drivers in works that require VirtIO-GPU to support sync objects UAPI.
>> >
>> > The patch is heavily inspired by the sync object UAPI implementation of the
>> > MSM driver.
>>
>> Gerd, do you have any objections to merging this series?
>>
>> We have AMDGPU [1] and Intel [2] native context WIP drivers depending on
>> the sync object support. It is the only part missing from kernel today
>> that is wanted by the native context drivers. Otherwise, there are few
>> other things in Qemu and virglrenderer left to sort out.
>>
>> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
>> [2] https://gitlab.freedesktop.org/digetx/mesa/-/commits/native-context-iris
>
>
> I'm not saying this change isn't good, just it's probably possible to implement the native contexts (even up to even VK1.2) without it.  But this patch series may be the most ergonomic way to do it, given how Mesa is designed.  But you probably want one of Mesa MRs reviewed first before merging (I added a comment on the amdgpu change) and that is a requirement [a].
>
> [a] "The userspace side must be fully reviewed and tested to the standards of that user space project. For e.g. mesa this means piglit testcases and review on the mailing list. This is again to ensure that the new interface actually gets the job done." -- from the requirements
>

tbh, the syncobj support is all drm core; the only driver-specific part is
the ioctl parsing.  IMHO the existing tests and the two existing consumers
are sufficient.  (Also, considering the additional non-drm
dependencies involved.)

If this was for the core drm syncobj implementation, and not just
driver ioctl parsing and wiring up the core helpers, I would agree
with you.

BR,
-R


* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-08 13:59     ` Rob Clark
@ 2023-05-12  0:17       ` Gurchetan Singh
  2023-05-12  2:33         ` Dmitry Osipenko
  0 siblings, 1 reply; 28+ messages in thread
From: Gurchetan Singh @ 2023-05-12  0:17 UTC (permalink / raw)
  To: Rob Clark
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, Dmitry Osipenko,
	David Airlie, kernel, Emil Velikov


On Mon, May 8, 2023 at 6:59 AM Rob Clark <robdclark@gmail.com> wrote:

> On Wed, May 3, 2023 at 10:07 AM Gurchetan Singh
> <gurchetansingh@chromium.org> wrote:
> >
> >
> >
> > On Mon, May 1, 2023 at 8:38 AM Dmitry Osipenko <
> dmitry.osipenko@collabora.com> wrote:
> >>
> >> On 4/16/23 14:52, Dmitry Osipenko wrote:
> >> > We have multiple Vulkan context types that are awaiting for the
> addition
> >> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> >> >
> >> >  1. Venus context
> >> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> >> >
> >> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers
> with a
> >> > generic fencing implementation that we want to utilize.
> >> >
> >> > This patch adds initial sync objects support. It creates fundament
> for a
> >> > further fencing improvements. Later on we will want to extend the
> VirtIO-GPU
> >> > fencing API with passing fence IDs to host for waiting, it will be a
> new
> >> > additional VirtIO-GPU IOCTL and more. Today we have several
> VirtIO-GPU context
> >> > drivers in works that require VirtIO-GPU to support sync objects UAPI.
> >> >
> >> > The patch is heavily inspired by the sync object UAPI implementation
> of the
> >> > MSM driver.
> >>
> >> Gerd, do you have any objections to merging this series?
> >>
> >> We have AMDGPU [1] and Intel [2] native context WIP drivers depending on
> >> the sync object support. It is the only part missing from kernel today
> >> that is wanted by the native context drivers. Otherwise, there are few
> >> other things in Qemu and virglrenderer left to sort out.
> >>
> >> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21658
> >> [2]
> https://gitlab.freedesktop.org/digetx/mesa/-/commits/native-context-iris
> >
> >
> > I'm not saying this change isn't good, just it's probably possible to
> implement the native contexts (even up to even VK1.2) without it.  But this
> patch series may be the most ergonomic way to do it, given how Mesa is
> designed.  But you probably want one of Mesa MRs reviewed first before
> merging (I added a comment on the amdgpu change) and that is a requirement
> [a].
> >
> > [a] "The userspace side must be fully reviewed and tested to the
> standards of that user space project. For e.g. mesa this means piglit
> testcases and review on the mailing list. This is again to ensure that the
> new interface actually gets the job done." -- from the requirements
> >
>
> tbh, the syncobj support is all drm core, the only driver specifics is
> the ioctl parsing.  IMHO existing tests and the two existing consumers
> are sufficient.  (Also, considering that additional non-drm
> dependencies involved.)
>

Can we get one of the Mesa MRs reviewed first?  There's currently no
virtio-intel MR AFAICT, and the amdgpu one is marked as "Draft:".

Even for the amdgpu, Pierre suggests the feature "will be marked as
experimental both in Mesa and virglrenderer" and we can revise as needed.
The DRM requirements seem to warn against adding a UAPI too hastily...

You can get the deqp-vk 1.2 tests to pass with the current UAPI, if you
just change your mesa <--> virglrenderer protocol a little.  Perhaps that
way is even better, since you plumb the in-syncobjs into the host-side command
submission.

Without inter-context sharing of the fence, this MR really only adds guest
kernel syntactic sugar.

Note I'm not against syntactic sugar, but I just want to point out that you
can likely merge the native context work without any UAPI changes, in case
it's not clear.

If this was for the core drm syncobj implementation, and not just
> driver ioctl parsing and wiring up the core helpers, I would agree
> with you.
>

There are several possible and viable paths to get the features in question
(VK1.2 syncobjs, and inter-context fence sharing).  There are paths
entirely without the syncobj, paths that only use the syncobj for the
inter-context fence sharing case and create host syncobjs for VK1.2, paths
that also use guest syncobjs in every proxied command submission.

It's really hard to tell which one is better.  Here's my suggestion:

1) Get the native contexts reviewed/merged in Mesa/virglrenderer using the
current UAPI.  Options for VK1.2 include: pushing down the syncobjs to the
host, and simulating the syncobj (as already done).  It's fine to mark
these contexts as "experimental" like msm-experimental.  That will allow
you to experiment with the protocols, come up with tests, and hopefully
determine an answer to the host versus guest syncobj question.

2) Once you've completed (1), try to add UAPI changes for features that are
missing or things that are suboptimal with the knowledge gained from doing
(1).

WDYT?


>
> BR,
> -R
>



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-12  0:17       ` Gurchetan Singh
@ 2023-05-12  2:33         ` Dmitry Osipenko
  2023-05-12 21:23           ` Gurchetan Singh
  0 siblings, 1 reply; 28+ messages in thread
From: Dmitry Osipenko @ 2023-05-12  2:33 UTC (permalink / raw)
  To: Gurchetan Singh, Rob Clark
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, David Airlie, kernel,
	Emil Velikov

On 5/12/23 03:17, Gurchetan Singh wrote:
...
> Can we get one of the Mesa MRs reviewed first?  There's currently no
> virtio-intel MR AFAICT, and the amdgpu one is marked as "Draft:".
> 
> Even for the amdgpu, Pierre suggests the feature "will be marked as
> experimental both in Mesa and virglrenderer" and we can revise as needed.
> The DRM requirements seem to warn against adding an UAPI too hastily...
> 
> You can get the deqp-vk 1.2 tests to pass with the current UAPI, if you
> just change your mesa <--> virglrenderer protocol a little.  Perhaps that
> way is even better, since you plumb the in sync-obj into host-side command
> submission.
> 
> Without inter-context sharing of the fence, this MR really only adds guest
> kernel syntactic sugar.
> 
> Note I'm not against syntactic sugar, but I just want to point out that you
> can likely merge the native context work without any UAPI changes, in case
> it's not clear.
> 
> If this was for the core drm syncobj implementation, and not just
>> driver ioctl parsing and wiring up the core helpers, I would agree
>> with you.
>>
> 
> There are several possible and viable paths to get the features in question
> (VK1.2 syncobjs, and inter-context fence sharing).  There are paths
> entirely without the syncobj, paths that only use the syncobj for the
> inter-context fence sharing case and create host syncobjs for VK1.2, paths
> that also use guest syncobjs in every proxied command submission.
> 
> It's really hard to tell which one is better.  Here's my suggestion:
> 
> 1) Get the native contexts reviewed/merged in Mesa/virglrenderer using the
> current UAPI.  Options for VK1.2 include: pushing down the syncobjs to the
> host, and simulating the syncobj (as already done).  It's fine to mark
> these contexts as "experimental" like msm-experimental.  That will allow
> you to experiment with the protocols, come up with tests, and hopefully
> determine an answer to the host versus guest syncobj question.
> 
> 2) Once you've completed (1), try to add UAPI changes for features that are
> missing or things that are suboptimal with the knowledge gained from doing
> (2).
> 
> WDYT?

Having syncobj support available in the DRM driver is a mandatory
requirement for native contexts because userspace (Mesa) relies on the
presence of sync object support. In particular, the Intel Mesa driver
checks whether the DRM driver supports sync objects to decide which
features are available; ANV depends on syncobj support.

I'm not familiar with the history of Venus and its limitations. Perhaps
the reason it's using host-side syncobjs is to have a 1:1 Vulkan API
mapping between guest and host. I'm not sure whether Venus could use
guest syncobjs instead, or whether there are problems with that.

When syncobj was initially added to the kernel, it was driven by the
need to support the Vulkan wait API. For Venus the actual Vulkan driver
is on the host side, while for native contexts it's on the guest side.
Native contexts don't need a syncobj on the host side; it would be
unnecessary overhead for every nctx to have one on the host. Hence, if
there is no good reason for host-side syncobjs, then why do that?

Native contexts pass the deqp synchronization tests; they use sync
objects universally for both GL and VK. Games work, and piglit/deqp
pass. What else do you want to test? Turnip?

The AMDGPU code has been looked at and it looks good. It's a draft for
now because of the missing sync objects UAPI and the other
virglrenderer/QEMU changes required to get KMS working. Maybe it will be
acceptable to merge the Mesa part once the kernel gets sync object
support; we will need to revisit it.

I'm not opening an MR for virtio-intel because it has open questions
that need to be resolved first.

-- 
Best regards,
Dmitry



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-12  2:33         ` Dmitry Osipenko
@ 2023-05-12 21:23           ` Gurchetan Singh
  2023-06-27 17:16             ` Rob Clark
  0 siblings, 1 reply; 28+ messages in thread
From: Gurchetan Singh @ 2023-05-12 21:23 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, David Airlie, kernel,
	Emil Velikov


On Thu, May 11, 2023 at 7:33 PM Dmitry Osipenko <
dmitry.osipenko@collabora.com> wrote:

> On 5/12/23 03:17, Gurchetan Singh wrote:
> ...
> > Can we get one of the Mesa MRs reviewed first?  There's currently no
> > virtio-intel MR AFAICT, and the amdgpu one is marked as "Draft:".
> >
> > Even for the amdgpu, Pierre suggests the feature "will be marked as
> > experimental both in Mesa and virglrenderer" and we can revise as needed.
> > The DRM requirements seem to warn against adding an UAPI too hastily...
> >
> > You can get the deqp-vk 1.2 tests to pass with the current UAPI, if you
> > just change your mesa <--> virglrenderer protocol a little.  Perhaps that
> > way is even better, since you plumb the in sync-obj into host-side
> command
> > submission.
> >
> > Without inter-context sharing of the fence, this MR really only adds
> guest
> > kernel syntactic sugar.
> >
> > Note I'm not against syntactic sugar, but I just want to point out that
> you
> > can likely merge the native context work without any UAPI changes, in
> case
> > it's not clear.
> >
> > If this was for the core drm syncobj implementation, and not just
> >> driver ioctl parsing and wiring up the core helpers, I would agree
> >> with you.
> >>
> >
> > There are several possible and viable paths to get the features in
> question
> > (VK1.2 syncobjs, and inter-context fence sharing).  There are paths
> > entirely without the syncobj, paths that only use the syncobj for the
> > inter-context fence sharing case and create host syncobjs for VK1.2,
> paths
> > that also use guest syncobjs in every proxied command submission.
> >
> > It's really hard to tell which one is better.  Here's my suggestion:
> >
> > 1) Get the native contexts reviewed/merged in Mesa/virglrenderer using
> the
> > current UAPI.  Options for VK1.2 include: pushing down the syncobjs to
> the
> > host, and simulating the syncobj (as already done).  It's fine to mark
> > these contexts as "experimental" like msm-experimental.  That will allow
> > you to experiment with the protocols, come up with tests, and hopefully
> > determine an answer to the host versus guest syncobj question.
> >
> > 2) Once you've completed (1), try to add UAPI changes for features that
> are
> > missing or things that are suboptimal with the knowledge gained from
> doing
> > (2).
> >
> > WDYT?
>
> Having syncobj support available by DRM driver is a mandatory
> requirement for native contexts because userspace (Mesa) relies on sync
> objects support presence. In particular, Intel Mesa driver checks
> whether DRM driver supports sync objects to decide which features are
> available, ANV depends on the syncobj support.


> I'm not familiar with a history of Venus and its limitations. Perhaps
> the reason it's using host-side syncobjs is to have 1:1 Vulkan API
> mapping between guest and host. Not sure if Venus could use guest
> syncobjs instead or there are problems with that.
>

Why not submit a Venus MR?  It's already in-tree, and you can see how your
API works in scenarios with a host side timeline semaphore (aka syncobj).
I think they are also interested in fencing/sync improvements.


>
> When syncobj was initially added to kernel, it was done from the needs
> of supporting Vulkan wait API. For Venus the actual Vulkan driver is on
> host side, while for native contexts it's on guest side. Native contexts
> don't need syncobj on host side, it will be unnecessary overhead for
> every nctx to have it on host. Hence, if there is no good reason for
> host-side syncobjs, then why do that?


Depends on your threading model.  You can have the following scenarios:

1) N guest contexts : 1 host thread
2) N guest contexts : N host threads for each context
3) 1:1 thread

I think the native context is single-threaded (1), IIRC?  If the goal is to
push command submission to the host (for inter-context sharing), I think
you'll at least want (2).  For a 1:1 model (a la gfxstream), one host
thread can put another thread's out_sync_objs as its in_sync_objs (in the
same virtgpu context).  I think that's kind of the goal of timeline
semaphores, with the example given by Khronos as with a compute thread + a
graphics thread.

I'm not saying one threading model is better than any other, perhaps the
native context using the host driver in the guest is so good, it doesn't
matter.  I'm just saying these are the types of discussions we can have if
we tried to get one of the Mesa MRs merged first ;-)


> Native contexts pass deqp synchronization tests, they use sync objects
> universally for both GL and VK. Games work, piglit/deqp passing. What
> else you're wanting to test? Turnip?
>

Turnip would also fulfill the requirements, since most of the native
context stuff is already wired for freedreno.


>
> The AMDGPU code has been looked and it looks good. It's a draft for now
> because of the missing sync objects UAPI and other virglrender/Qemu
> changes required to get KMS working.


Get it out of draft mode then :-).  How long would that take?

Also, there's crosvm which builds on standard Linux, so I wouldn't consider
QEMU patches as a requirement.  Just Mesa/virglrenderer part.


> Maybe it will be acceptable to
> merge the Mesa part once kernel will get sync objects supported, will
> need to revisit it.
>

You can think of my commentary as the following suggestions:

- You can probably get native contexts and deqp-vk 1.2 working with the
current UAPI
- It would be terrific to see inter-context fence sharing working (with the
wait pushed down to the host), that's something the current UAPI can't do
- Work iteratively (i.e., it's fine to merge Mesa/virglrenderer MRs as
"experimental") and in steps, no need to figure everything out at once

Now these are just suggestions, and while I think they are good, you can
safely ignore them.

But there's also the DRM requirements, which state "userspace side must be
fully reviewed and tested to the standards of that user-space project.".
So to meet the minimum requirements, I think we should at least have
one of the following (not all, just one) reviewed:

1) venus using the new uapi
2) gfxstream vk using the new uapi
3) amdgpu nctx out of "draft" mode and using the new uapi.
4) virtio-intel using new uapi
5) turnip using your new uapi

Depending on which one you choose, maybe we can get it done within 1-2 weeks?

> I'm not opening MR for virtio-intel because it has open questions that
> need to be resolved first.
>
> --
> Best regards,
> Dmitry
>
>



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
                   ` (4 preceding siblings ...)
  2023-05-01 15:38 ` Dmitry Osipenko
@ 2023-06-03  2:11 ` Dmitry Osipenko
  5 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-06-03  2:11 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

> Dmitry Osipenko (3):
>   drm/virtio: Refactor and optimize job submission code path
>   drm/virtio: Wait for each dma-fence of in-fence array individually

Applied these two patches to misc-next. The syncobj patch will wait for
the turnip Mesa MR.

-- 
Best regards,
Dmitry



* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-04-16 11:52 ` [PATCH v6 3/3] drm/virtio: Support sync objects Dmitry Osipenko
  2023-05-01 15:29   ` Dmitry Osipenko
@ 2023-06-25  8:47   ` Geert Uytterhoeven
  2023-06-25 12:41     ` Dmitry Osipenko
  2023-07-31 22:47   ` Dmitry Osipenko
  2 siblings, 1 reply; 28+ messages in thread
From: Geert Uytterhoeven @ 2023-06-25  8:47 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, Gerd Hoffmann, David Airlie, kernel,
	virtualization, Emil Velikov

Hi Dmitry,

On Sun, Apr 16, 2023 at 1:55 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
> support is needed by native context VirtIO-GPU Mesa drivers, it also will
> be used by Venus and Virgl contexts.
>
> Reviewed-by; Emil Velikov <emil.velikov@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>

Thanks for your patch!

> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c

> +static int
> +virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
> +{
> +       struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
> +       struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
> +       size_t syncobj_stride = exbuf->syncobj_stride;
> +       u32 num_in_syncobjs = exbuf->num_in_syncobjs;
> +       struct drm_syncobj **syncobjs;
> +       int ret = 0, i;
> +
> +       if (!num_in_syncobjs)
> +               return 0;
> +
> +       /*
> +        * kvalloc at first tries to allocate memory using kmalloc and
> +        * falls back to vmalloc only on failure. It also uses GFP_NOWARN

GFP_NOWARN does not exist.

> +        * internally for allocations larger than a page size, preventing
> +        * storm of KMSG warnings.
> +        */
> +       syncobjs = kvcalloc(num_in_syncobjs, sizeof(*syncobjs), GFP_KERNEL);
> +       if (!syncobjs)
> +               return -ENOMEM;
> +
> +       for (i = 0; i < num_in_syncobjs; i++) {
> +               u64 address = exbuf->in_syncobjs + i * syncobj_stride;
> +               struct dma_fence *fence;
> +

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-06-25  8:47   ` Geert Uytterhoeven
@ 2023-06-25 12:41     ` Dmitry Osipenko
  2023-06-25 15:36       ` Geert Uytterhoeven
  0 siblings, 1 reply; 28+ messages in thread
From: Dmitry Osipenko @ 2023-06-25 12:41 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, Gerd Hoffmann, David Airlie, kernel,
	virtualization, Emil Velikov

On 6/25/23 11:47, Geert Uytterhoeven wrote:
> Hi Dmitry,
> 
> On Sun, Apr 16, 2023 at 1:55 PM Dmitry Osipenko
> <dmitry.osipenko@collabora.com> wrote:
>> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
>> support is needed by native context VirtIO-GPU Mesa drivers, it also will
>> be used by Venus and Virgl contexts.
>>
>> Reviewed-by; Emil Velikov <emil.velikov@collabora.com>
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> 
> Thanks for your patch!
> 
>> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
>> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
> 
>> +static int
>> +virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
>> +{
>> +       struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
>> +       struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
>> +       size_t syncobj_stride = exbuf->syncobj_stride;
>> +       u32 num_in_syncobjs = exbuf->num_in_syncobjs;
>> +       struct drm_syncobj **syncobjs;
>> +       int ret = 0, i;
>> +
>> +       if (!num_in_syncobjs)
>> +               return 0;
>> +
>> +       /*
>> +        * kvalloc at first tries to allocate memory using kmalloc and
>> +        * falls back to vmalloc only on failure. It also uses GFP_NOWARN
> 
> GFP_NOWARN does not exist.

https://elixir.bootlin.com/linux/v6.4-rc7/source/include/linux/gfp_types.h#L38

-- 
Best regards,
Dmitry



* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-06-25 12:41     ` Dmitry Osipenko
@ 2023-06-25 15:36       ` Geert Uytterhoeven
  2023-06-26 16:11         ` Dmitry Osipenko
  0 siblings, 1 reply; 28+ messages in thread
From: Geert Uytterhoeven @ 2023-06-25 15:36 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, Gerd Hoffmann, David Airlie, kernel,
	virtualization, Emil Velikov

Hi Dmitry,

On Sun, Jun 25, 2023 at 2:41 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
> On 6/25/23 11:47, Geert Uytterhoeven wrote:
> > On Sun, Apr 16, 2023 at 1:55 PM Dmitry Osipenko
> > <dmitry.osipenko@collabora.com> wrote:
> >> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
> >> support is needed by native context VirtIO-GPU Mesa drivers, it also will
> >> be used by Venus and Virgl contexts.
> >>
> >> Reviewed-by; Emil Velikov <emil.velikov@collabora.com>
> >> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >
> > Thanks for your patch!
> >
> >> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
> >> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
> >
> >> +static int
> >> +virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
> >> +{
> >> +       struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
> >> +       struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
> >> +       size_t syncobj_stride = exbuf->syncobj_stride;
> >> +       u32 num_in_syncobjs = exbuf->num_in_syncobjs;
> >> +       struct drm_syncobj **syncobjs;
> >> +       int ret = 0, i;
> >> +
> >> +       if (!num_in_syncobjs)
> >> +               return 0;
> >> +
> >> +       /*
> >> +        * kvalloc at first tries to allocate memory using kmalloc and
> >> +        * falls back to vmalloc only on failure. It also uses GFP_NOWARN
> >
> > GFP_NOWARN does not exist.
>
> https://elixir.bootlin.com/linux/v6.4-rc7/source/include/linux/gfp_types.h#L38

That line defines "__GFP_NOWARN", not "GFP_NOWARN".
C is case- and underscore-sensitive, as is "git grep -w" ;-)

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-06-25 15:36       ` Geert Uytterhoeven
@ 2023-06-26 16:11         ` Dmitry Osipenko
  2023-06-27 12:01           ` Geert Uytterhoeven
  0 siblings, 1 reply; 28+ messages in thread
From: Dmitry Osipenko @ 2023-06-26 16:11 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, Gerd Hoffmann, David Airlie, kernel,
	virtualization, Emil Velikov

On 6/25/23 18:36, Geert Uytterhoeven wrote:
> Hi Dmitry,
> 
> On Sun, Jun 25, 2023 at 2:41 PM Dmitry Osipenko
> <dmitry.osipenko@collabora.com> wrote:
>> On 6/25/23 11:47, Geert Uytterhoeven wrote:
>>> On Sun, Apr 16, 2023 at 1:55 PM Dmitry Osipenko
>>> <dmitry.osipenko@collabora.com> wrote:
>>>> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
>>>> support is needed by native context VirtIO-GPU Mesa drivers, it also will
>>>> be used by Venus and Virgl contexts.
>>>>
>>>> Reviewed-by; Emil Velikov <emil.velikov@collabora.com>
>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>
>>> Thanks for your patch!
>>>
>>>> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
>>>> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
>>>
>>>> +static int
>>>> +virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
>>>> +{
>>>> +       struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
>>>> +       struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
>>>> +       size_t syncobj_stride = exbuf->syncobj_stride;
>>>> +       u32 num_in_syncobjs = exbuf->num_in_syncobjs;
>>>> +       struct drm_syncobj **syncobjs;
>>>> +       int ret = 0, i;
>>>> +
>>>> +       if (!num_in_syncobjs)
>>>> +               return 0;
>>>> +
>>>> +       /*
>>>> +        * kvalloc at first tries to allocate memory using kmalloc and
>>>> +        * falls back to vmalloc only on failure. It also uses GFP_NOWARN
>>>
>>> GFP_NOWARN does not exist.
>>
>> https://elixir.bootlin.com/linux/v6.4-rc7/source/include/linux/gfp_types.h#L38
> 
> That line defines "__GFP_NOWARN", not "GFP_NOWARN".
> C is case- and underscore-sensitive. as is "git grep -w" ;-)

The removal of the underscores was intentional, to improve the
readability of the comment.

-- 
Best regards,
Dmitry



* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-06-26 16:11         ` Dmitry Osipenko
@ 2023-06-27 12:01           ` Geert Uytterhoeven
  2023-07-19 18:58             ` Dmitry Osipenko
  0 siblings, 1 reply; 28+ messages in thread
From: Geert Uytterhoeven @ 2023-06-27 12:01 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, Gerd Hoffmann, David Airlie, kernel,
	virtualization, Emil Velikov

Hi Dmitry,

On Mon, Jun 26, 2023 at 6:11 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
> On 6/25/23 18:36, Geert Uytterhoeven wrote:
> > On Sun, Jun 25, 2023 at 2:41 PM Dmitry Osipenko
> > <dmitry.osipenko@collabora.com> wrote:
> >> On 6/25/23 11:47, Geert Uytterhoeven wrote:
> >>> On Sun, Apr 16, 2023 at 1:55 PM Dmitry Osipenko
> >>> <dmitry.osipenko@collabora.com> wrote:
> >>>> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
> >>>> support is needed by native context VirtIO-GPU Mesa drivers, it also will
> >>>> be used by Venus and Virgl contexts.
> >>>>
> >>>> Reviewed-by; Emil Velikov <emil.velikov@collabora.com>
> >>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >>>
> >>> Thanks for your patch!
> >>>
> >>>> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
> >>>> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
> >>>
> >>>> +static int
> >>>> +virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
> >>>> +{
> >>>> +       struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
> >>>> +       struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
> >>>> +       size_t syncobj_stride = exbuf->syncobj_stride;
> >>>> +       u32 num_in_syncobjs = exbuf->num_in_syncobjs;
> >>>> +       struct drm_syncobj **syncobjs;
> >>>> +       int ret = 0, i;
> >>>> +
> >>>> +       if (!num_in_syncobjs)
> >>>> +               return 0;
> >>>> +
> >>>> +       /*
> >>>> +        * kvalloc at first tries to allocate memory using kmalloc and
> >>>> +        * falls back to vmalloc only on failure. It also uses GFP_NOWARN
> >>>
> >>> GFP_NOWARN does not exist.
> >>
> >> https://elixir.bootlin.com/linux/v6.4-rc7/source/include/linux/gfp_types.h#L38
> >
> > That line defines "__GFP_NOWARN", not "GFP_NOWARN".
> > C is case- and underscore-sensitive. as is "git grep -w" ;-)
>
> The removal of underscores was done intentionally for improving
> readability of the comment

Please don't do that, as IMHO it actually hampers readability:
  1. For some xxx, both GFP_xxx and __GFP_xxx are defined,
     so it does matter which one you are referring to,
  2. After dropping the underscores, "git grep -w" can no longer find
     the definition, nor its users.

Thanks!

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-05-12 21:23           ` Gurchetan Singh
@ 2023-06-27 17:16             ` Rob Clark
  2023-07-19 18:58               ` Dmitry Osipenko
  0 siblings, 1 reply; 28+ messages in thread
From: Rob Clark @ 2023-06-27 17:16 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, Dmitry Osipenko,
	David Airlie, kernel, Emil Velikov

On Fri, May 12, 2023 at 2:23 PM Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
>
>
> On Thu, May 11, 2023 at 7:33 PM Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>
>> On 5/12/23 03:17, Gurchetan Singh wrote:
>> ...
>> > Can we get one of the Mesa MRs reviewed first?  There's currently no
>> > virtio-intel MR AFAICT, and the amdgpu one is marked as "Draft:".
>> >
>> > Even for the amdgpu, Pierre suggests the feature "will be marked as
>> > experimental both in Mesa and virglrenderer" and we can revise as needed.
>> > The DRM requirements seem to warn against adding an UAPI too hastily...
>> >
>> > You can get the deqp-vk 1.2 tests to pass with the current UAPI, if you
>> > just change your mesa <--> virglrenderer protocol a little.  Perhaps that
>> > way is even better, since you plumb the in sync-obj into host-side command
>> > submission.
>> >
>> > Without inter-context sharing of the fence, this MR really only adds guest
>> > kernel syntactic sugar.
>> >
>> > Note I'm not against syntactic sugar, but I just want to point out that you
>> > can likely merge the native context work without any UAPI changes, in case
>> > it's not clear.
>> >
>> > If this was for the core drm syncobj implementation, and not just
>> >> driver ioctl parsing and wiring up the core helpers, I would agree
>> >> with you.
>> >>
>> >
>> > There are several possible and viable paths to get the features in question
>> > (VK1.2 syncobjs, and inter-context fence sharing).  There are paths
>> > entirely without the syncobj, paths that only use the syncobj for the
>> > inter-context fence sharing case and create host syncobjs for VK1.2, paths
>> > that also use guest syncobjs in every proxied command submission.
>> >
>> > It's really hard to tell which one is better.  Here's my suggestion:
>> >
>> > 1) Get the native contexts reviewed/merged in Mesa/virglrenderer using the
>> > current UAPI.  Options for VK1.2 include: pushing down the syncobjs to the
>> > host, and simulating the syncobj (as already done).  It's fine to mark
>> > these contexts as "experimental" like msm-experimental.  That will allow
>> > you to experiment with the protocols, come up with tests, and hopefully
>> > determine an answer to the host versus guest syncobj question.
>> >
>> > 2) Once you've completed (1), try to add UAPI changes for features that are
>> > missing or things that are suboptimal with the knowledge gained from doing
>> > (2).
>> >
>> > WDYT?
>>
>> Having syncobj support available by DRM driver is a mandatory
>> requirement for native contexts because userspace (Mesa) relies on sync
>> objects support presence. In particular, Intel Mesa driver checks
>> whether DRM driver supports sync objects to decide which features are
>> available, ANV depends on the syncobj support.
>>
>>
>> I'm not familiar with a history of Venus and its limitations. Perhaps
>> the reason it's using host-side syncobjs is to have 1:1 Vulkan API
>> mapping between guest and host. Not sure if Venus could use guest
>> syncobjs instead or there are problems with that.
>
>
> Why not submit a Venus MR?  It's already in-tree, and you can see how your API works in scenarios with a host side timeline semaphore (aka syncobj).  I think they are also interested in fencing/sync improvements.
>
>>
>>
>> When syncobj was initially added to kernel, it was done from the needs
>> of supporting Vulkan wait API. For Venus the actual Vulkan driver is on
>> host side, while for native contexts it's on guest side. Native contexts
>> don't need syncobj on host side, it will be unnecessary overhead for
>> every nctx to have it on host. Hence, if there is no good reason for
>> host-side syncobjs, then why do that?
>
>
> Depends on your threading model.  You can have the following scenarios:
>
> 1) N guest contexts : 1 host thread
> 2) N guest contexts : N host threads for each context
> 3) 1:1 thread
>
> I think the native context is single-threaded (1), IIRC?  If the goal is to push command submission to the host (for inter-context sharing), I think you'll at-least want (2).  For a 1:1 model (a la gfxstream), one host thread can put another thread's out_sync_objs as it's in_sync_objs (in the same virtgpu context).  I think that's kind of the goal of timeline semaphores, with the example given by Khronos as with a compute thread + a graphics thread.
>
> I'm not saying one threading model is better than any other, perhaps the native context using the host driver in the guest is so good, it doesn't matter.  I'm just saying these are the types of discussions we can have if we tried to get one the Mesa MRs merged first ;-)
>
>>
>> Native contexts pass deqp synchronization tests, they use sync objects
>> universally for both GL and VK. Games work, piglit/deqp passing. What
>> else you're wanting to test? Turnip?
>
>
> Turnip would also fulfill the requirements, since most of the native context stuff is already wired for freedreno.
>
>>
>>
>> The AMDGPU code has been looked and it looks good. It's a draft for now
>> because of the missing sync objects UAPI and other virglrender/Qemu
>> changes required to get KMS working.
>
>
> Get it out of draft mode then :-).  How long would that take?
>
> Also, there's crosvm which builds on standard Linux, so I wouldn't consider QEMU patches as a requirement.  Just Mesa/virglrenderer part.
>
>>
>> Maybe it will be acceptable to
>> merge the Mesa part once kernel will get sync objects supported, will
>> need to revisit it.
>
>
> You can think of my commentary as the following suggestions:
>
> - You can probably get native contexts and deqp-vk 1.2 working with the current UAPI
> - It would be terrific to see inter-context fence sharing working (with the wait pushed down to the host), that's something the current UAPI can't do
> - Work iteratively (i.e, it's fine to merge Mesa/virglrenderer MRs as "experimental") and in steps, no need to figure everything out at once
>
> Now these are just suggestions, and while I think they are good, you can safely ignore them.
>
> But there's also the DRM requirements, which state "userspace side must be fully reviewed and tested to the standards of that user-space project.".  So I think to meet the minimum requirements, I think we should at-least have one of the following (not all, just one) reviewed:
>
> 1) venus using the new uapi
> 2) gfxstream vk using the new uapi
> 3) amdgpu nctx out of "draft" mode and using the new uapi.
> 4) virtio-intel using new uapi
> 5) turnip using your new uapi

forgot to mention this earlier, but
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533

Dmitry, you can also add, if you haven't already:

Tested-by: Rob Clark <robdclark@gmail.com>

> Depending on which one you chose, maybe we can get it done within 1-2 weeks?
>
>> I'm not opening MR for virtio-intel because it has open questions that
>> need to be resolved first.
>>
>> --
>> Best regards,
>> Dmitry
>>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-06-27 17:16             ` Rob Clark
@ 2023-07-19 18:58               ` Dmitry Osipenko
  2023-07-28 22:03                 ` Gurchetan Singh
  0 siblings, 1 reply; 28+ messages in thread
From: Dmitry Osipenko @ 2023-07-19 18:58 UTC (permalink / raw)
  To: Rob Clark, Gurchetan Singh
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, David Airlie, kernel,
	Emil Velikov

27.06.2023 20:16, Rob Clark wrote:
...
>> Now these are just suggestions, and while I think they are good, you can safely ignore them.
>>
>> But there are also the DRM requirements, which state "userspace side must be fully reviewed and tested to the standards of that user-space project.".  So to meet the minimum requirements, I think we should at least have one of the following (not all, just one) reviewed:
>>
>> 1) venus using the new uapi
>> 2) gfxstream vk using the new uapi
>> 3) amdgpu nctx out of "draft" mode and using the new uapi.
>> 4) virtio-intel using new uapi
>> 5) turnip using your new uapi
> 
> forgot to mention this earlier, but
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533
> 
> Dmitry, you can also add, if you haven't already:
> 
> Tested-by: Rob Clark <robdclark@gmail.com>

Gurchetan, Turnip Mesa virtio support is ready to be merged upstream,
and it's using this new syncobj UAPI. Could you please give it your r-b
if you don't have objections?

-- 
Best regards,
Dmitry



* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-06-27 12:01           ` Geert Uytterhoeven
@ 2023-07-19 18:58             ` Dmitry Osipenko
  0 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-07-19 18:58 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, Gurchetan Singh, Gerd Hoffmann, David Airlie, kernel,
	virtualization, Emil Velikov

27.06.2023 15:01, Geert Uytterhoeven wrote:
> Hi Dmitry,
> 
> On Mon, Jun 26, 2023 at 6:11 PM Dmitry Osipenko
> <dmitry.osipenko@collabora.com> wrote:
>> On 6/25/23 18:36, Geert Uytterhoeven wrote:
>>> On Sun, Jun 25, 2023 at 2:41 PM Dmitry Osipenko
>>> <dmitry.osipenko@collabora.com> wrote:
>>>> On 6/25/23 11:47, Geert Uytterhoeven wrote:
>>>>> On Sun, Apr 16, 2023 at 1:55 PM Dmitry Osipenko
>>>>> <dmitry.osipenko@collabora.com> wrote:
>>>>>> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
>>>>>> support is needed by native context VirtIO-GPU Mesa drivers; it will
>>>>>> also be used by Venus and Virgl contexts.
>>>>>>
>>>>>> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
>>>>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>>>>
>>>>> Thanks for your patch!
>>>>>
>>>>>> --- a/drivers/gpu/drm/virtio/virtgpu_submit.c
>>>>>> +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
>>>>>
>>>>>> +static int
>>>>>> +virtio_gpu_parse_deps(struct virtio_gpu_submit *submit)
>>>>>> +{
>>>>>> +       struct drm_virtgpu_execbuffer *exbuf = submit->exbuf;
>>>>>> +       struct drm_virtgpu_execbuffer_syncobj syncobj_desc;
>>>>>> +       size_t syncobj_stride = exbuf->syncobj_stride;
>>>>>> +       u32 num_in_syncobjs = exbuf->num_in_syncobjs;
>>>>>> +       struct drm_syncobj **syncobjs;
>>>>>> +       int ret = 0, i;
>>>>>> +
>>>>>> +       if (!num_in_syncobjs)
>>>>>> +               return 0;
>>>>>> +
>>>>>> +       /*
>>>>>> +        * kvalloc at first tries to allocate memory using kmalloc and
>>>>>> +        * falls back to vmalloc only on failure. It also uses GFP_NOWARN
>>>>>
>>>>> GFP_NOWARN does not exist.
>>>>
>>>> https://elixir.bootlin.com/linux/v6.4-rc7/source/include/linux/gfp_types.h#L38
>>>
>>> That line defines "__GFP_NOWARN", not "GFP_NOWARN".
>>> C is case- and underscore-sensitive, as is "git grep -w" ;-)
>>
>> The removal of underscores was done intentionally to improve the
>> readability of the comment
> 
> Please don't do that, as IMHO it actually hampers readability:
>   1. For some xxx, both GFP_xxx and __GFP_xxx are defined,
>      so it does matter which one you are referring to,
>   2. After dropping the underscores, "git grep -w" can no longer find
>      the definition, nor its users.
> 
> Thanks!

Alright, I'll change it

-- 
Best regards,
Dmitry



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-07-19 18:58               ` Dmitry Osipenko
@ 2023-07-28 22:03                 ` Gurchetan Singh
  2023-07-31 16:26                   ` Dmitry Osipenko
  0 siblings, 1 reply; 28+ messages in thread
From: Gurchetan Singh @ 2023-07-28 22:03 UTC (permalink / raw)
  To: Dmitry Osipenko
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, David Airlie, kernel,
	Emil Velikov


On Wed, Jul 19, 2023 at 11:58 AM Dmitry Osipenko <
dmitry.osipenko@collabora.com> wrote:

> 27.06.2023 20:16, Rob Clark wrote:
> ...
> >> Now these are just suggestions, and while I think they are good, you
> can safely ignore them.
> >>
> >> But there are also the DRM requirements, which state "userspace side
> must be fully reviewed and tested to the standards of that user-space
> project.".  So to meet the minimum requirements, I think we should
> at least have one of the following (not all, just one) reviewed:
> >>
> >> 1) venus using the new uapi
> >> 2) gfxstream vk using the new uapi
> >> 3) amdgpu nctx out of "draft" mode and using the new uapi.
> >> 4) virtio-intel using new uapi
> >> 5) turnip using your new uapi
> >
> > forgot to mention this earlier, but
> > https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533
> >
> > Dmitry, you can also add, if you haven't already:
> >
> > Tested-by: Rob Clark <robdclark@gmail.com>
>
> Gurchetan, Turnip Mesa virtio support is ready to be merged upstream,
> and it's using this new syncobj UAPI. Could you please give it your r-b
> if you don't have objections?
>

Given that the Turnip native contexts have been reviewed using this UAPI,
your change now meets the requirements and is ready to merge.

One thing I noticed is you might need explicit padding between
`num_out_syncobjs` and `in_syncobjs`.  Otherwise, feel free to add my
acked-by.


>
> --
> Best regards,
> Dmitry
>
>



* Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver
  2023-07-28 22:03                 ` Gurchetan Singh
@ 2023-07-31 16:26                   ` Dmitry Osipenko
  0 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-07-31 16:26 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: Pierre-Eric Pelloux-Prayer, Marek Olšák, linux-kernel,
	dri-devel, virtualization, Gerd Hoffmann, David Airlie, kernel,
	Emil Velikov

On 7/29/23 01:03, Gurchetan Singh wrote:
> On Wed, Jul 19, 2023 at 11:58 AM Dmitry Osipenko <
> dmitry.osipenko@collabora.com> wrote:
> 
>> 27.06.2023 20:16, Rob Clark wrote:
>> ...
>>>> Now these are just suggestions, and while I think they are good, you
>> can safely ignore them.
>>>>
>>>> But there are also the DRM requirements, which state "userspace side must
>> be fully reviewed and tested to the standards of that user-space
>> project.".  So to meet the minimum requirements, I think we should
>> at least have one of the following (not all, just one) reviewed:
>>>>
>>>> 1) venus using the new uapi
>>>> 2) gfxstream vk using the new uapi
>>>> 3) amdgpu nctx out of "draft" mode and using the new uapi.
>>>> 4) virtio-intel using new uapi
>>>> 5) turnip using your new uapi
>>>
>>> forgot to mention this earlier, but
>>> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23533
>>>
>>> Dmitry, you can also add, if you haven't already:
>>>
>>> Tested-by: Rob Clark <robdclark@gmail.com>
>>
>> Gurchetan, Turnip Mesa virtio support is ready to be merged upstream,
>> and it's using this new syncobj UAPI. Could you please give it your r-b
>> if you don't have objections?
>>
> 
> Given that the Turnip native contexts have been reviewed using this UAPI,
> your change now meets the requirements and is ready to merge.
> 
> One thing I noticed is you might need explicit padding between
> `num_out_syncobjs` and `in_syncobjs`.  Otherwise, feel free to add my
> acked-by.

The padding looks okay as-is; the struct size and all of the u64 members
are properly aligned. I'll merge the patch soon, thanks.

-- 
Best regards,
Dmitry



* Re: [PATCH v6 3/3] drm/virtio: Support sync objects
  2023-04-16 11:52 ` [PATCH v6 3/3] drm/virtio: Support sync objects Dmitry Osipenko
  2023-05-01 15:29   ` Dmitry Osipenko
  2023-06-25  8:47   ` Geert Uytterhoeven
@ 2023-07-31 22:47   ` Dmitry Osipenko
  2 siblings, 0 replies; 28+ messages in thread
From: Dmitry Osipenko @ 2023-07-31 22:47 UTC (permalink / raw)
  To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Rob Clark, Marek Olšák,
	Pierre-Eric Pelloux-Prayer, Emil Velikov
  Cc: kernel, linux-kernel, dri-devel, virtualization

On 4/16/23 14:52, Dmitry Osipenko wrote:
> Add sync object DRM UAPI support to VirtIO-GPU driver. Sync objects
> support is needed by native context VirtIO-GPU Mesa drivers; it will
> also be used by Venus and Virgl contexts.
> 
> Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.c    |   3 +-
>  drivers/gpu/drm/virtio/virtgpu_submit.c | 219 ++++++++++++++++++++++++
>  include/uapi/drm/virtgpu_drm.h          |  16 +-
>  3 files changed, 236 insertions(+), 2 deletions(-)

Applied to misc-next

Made a minor comment change that was requested by Geert Uytterhoeven, and
took into account that the out-fence could now be NULL after the recent
virtio-gpu changes.

-- 
Best regards,
Dmitry



end of thread, other threads:[~2023-07-31 22:47 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-16 11:52 [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Dmitry Osipenko
2023-04-16 11:52 ` [PATCH v6 1/3] drm/virtio: Refactor and optimize job submission code path Dmitry Osipenko
2023-04-16 11:52 ` [PATCH v6 2/3] drm/virtio: Wait for each dma-fence of in-fence array individually Dmitry Osipenko
2023-04-16 11:52 ` [PATCH v6 3/3] drm/virtio: Support sync objects Dmitry Osipenko
2023-05-01 15:29   ` Dmitry Osipenko
2023-06-25  8:47   ` Geert Uytterhoeven
2023-06-25 12:41     ` Dmitry Osipenko
2023-06-25 15:36       ` Geert Uytterhoeven
2023-06-26 16:11         ` Dmitry Osipenko
2023-06-27 12:01           ` Geert Uytterhoeven
2023-07-19 18:58             ` Dmitry Osipenko
2023-07-31 22:47   ` Dmitry Osipenko
2023-04-17 23:17 ` [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver Gurchetan Singh
2023-04-19 21:22   ` Dmitry Osipenko
2023-04-24 18:40     ` Gurchetan Singh
2023-05-01 15:38 ` Dmitry Osipenko
2023-05-03  6:51   ` Gerd Hoffmann
2023-05-08 12:16     ` Dmitry Osipenko
2023-05-03 17:07   ` Gurchetan Singh
2023-05-08 13:59     ` Rob Clark
2023-05-12  0:17       ` Gurchetan Singh
2023-05-12  2:33         ` Dmitry Osipenko
2023-05-12 21:23           ` Gurchetan Singh
2023-06-27 17:16             ` Rob Clark
2023-07-19 18:58               ` Dmitry Osipenko
2023-07-28 22:03                 ` Gurchetan Singh
2023-07-31 16:26                   ` Dmitry Osipenko
2023-06-03  2:11 ` Dmitry Osipenko
