dri-devel.lists.freedesktop.org archive mirror
* [RFC PATCH v2 0/1] fence per plane state
@ 2023-10-03  1:00 Dongwon Kim
  2023-10-03  1:00 ` [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update Dongwon Kim
  0 siblings, 1 reply; 6+ messages in thread
From: Dongwon Kim @ 2023-10-03  1:00 UTC (permalink / raw)
  To: dri-devel; +Cc: Dongwon Kim, kraxel, dmitry.osipenko, vivek.kasireddy

The patch "drm/virtio: new fence for every plane update" prevents a fence
synchronization problem seen when multiple planes reference a single large FB
(e.g. Xorg with multiple displays configured as one extended surface).

One example of a problematic flow:

1. virtio_gpu_plane_prepare_fb(plane_A) -> a fence for the FB is created (fence A).
2. virtio_gpu_resource_flush(plane_A) -> fence A is emitted, then the code waits for
   fence A to be signaled.
3. virtio_gpu_plane_prepare_fb(plane_B) -> a new fence for the same FB is created
   (fence B) and FB->fence is replaced with fence B.
4. virtio_gpu_resource_flush(plane_A) -> fence A is signaled, but dma_fence_put is
   done on fence B because FB->fence is already fence B.
5. Fence A won't be signaled or released for a long time, which leads to a guest
   display freeze, and dmesg shows fence timeout errors.

The root cause of the problem is that the fence stored in the FB can be replaced with a
new one any time another plane using the same FB is updated. So the proposed fix is to
allocate a new fence per plane state instead of per FB, as described in the patch.
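
To make the failure mode concrete, below is a contrived, self-contained C model of the
old per-FB fence slot (illustration only, not driver code; all names are made up):

#include <stdio.h>

struct fence { int id; };

struct framebuffer {
	struct fence *fence;	/* old scheme: one slot shared by every plane using this FB */
};

/* models what virtio_gpu_plane_prepare_fb() did before this patch */
static void prepare_fb(struct framebuffer *fb, struct fence *f)
{
	fb->fence = f;		/* silently overwrites whatever fence was stored before */
}

int main(void)
{
	struct framebuffer shared_fb = { 0 };
	struct fence fence_a = { .id = 0 }, fence_b = { .id = 1 };

	prepare_fb(&shared_fb, &fence_a);	/* prepare_fb(plane_A) */
	prepare_fb(&shared_fb, &fence_b);	/* prepare_fb(plane_B), same FB */

	/*
	 * resource_flush(plane_A) now waits on and puts FB->fence, i.e. fence B,
	 * while fence A is never put. Moving the fence pointer into the per-plane
	 * state gives each plane update its own slot and avoids the clobbering.
	 */
	printf("plane_A emitted fence %d, but the FB now holds fence %d\n",
	       fence_a.id, shared_fb.fence->id);
	return 0;
}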

Tested system:

Host: QEMU + KVM on Linux, running on an Intel 12th Gen CPU.
Guest: Ubuntu VM running Xorg with 2-3 virtual displays using blob scanouts.
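
A rough guest-side sketch of the extended-surface setup (the connector names are an
assumption and depend on the guest configuration; check the xrandr output first):

xrandr --output Virtual-1 --auto \
       --output Virtual-2 --auto --right-of Virtual-1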

Dongwon Kim (1):
  drm/virtio: new fence for every plane update

 drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 66 +++++++++++++++++---------
 2 files changed, 51 insertions(+), 22 deletions(-)

-- 
2.20.1



* [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
  2023-10-03  1:00 [RFC PATCH v2 0/1] fence per plane state Dongwon Kim
@ 2023-10-03  1:00 ` Dongwon Kim
  2023-10-23 12:24   ` Dmitry Osipenko
  0 siblings, 1 reply; 6+ messages in thread
From: Dongwon Kim @ 2023-10-03  1:00 UTC (permalink / raw)
  To: dri-devel; +Cc: Dongwon Kim, kraxel, dmitry.osipenko, vivek.kasireddy

Having the fence linked to the virtio_gpu_framebuffer in the plane update sequence
causes a conflict when several planes reference the same framebuffer (e.g. an Xorg
screen covering multiple displays configured for extended mode) and those planes are
updated concurrently. So a fence needs to be allocated per plane state instead of
per framebuffer.

A virtio-gpu specific plane state, "struct virtio_gpu_plane_state", is added for this.
It wraps drm_plane_state and carries the reference to the virtio_gpu_fence, which was
previously kept in "struct virtio_gpu_framebuffer".

"virtio_gpu_plane_duplicate_state" is added as well to create a
virtio_gpu_plane_state on top of the duplicated drm plane state.

Several drm helpers were slightly modified accordingly to use the fence in the new
plane state structure. virtio_gpu_plane_cleanup_fb was completely removed, as
dma_fence_put shouldn't be called there since it can mess up the refcount of the
fence. The fence should be put after it is signaled in virtio_gpu_resource_flush,
and then released in virtio_gpu_array_add_fence while the next virtio message is
being queued.

Also, the condition for adding a fence, (plane->state->fb != new_state->fb), was
removed since we now allocate a new fence for the new plane state even if both the
old and new plane states point to the same framebuffer.

v2: dropped the custom plane state destroy helper, as the existing
    drm_atomic_helper_plane_destroy_state does the same.

Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 66 +++++++++++++++++---------
 2 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 8513b671f871..2568ad0c2d44 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -191,6 +191,13 @@ struct virtio_gpu_framebuffer {
 #define to_virtio_gpu_framebuffer(x) \
 	container_of(x, struct virtio_gpu_framebuffer, base)
 
+struct virtio_gpu_plane_state {
+	struct drm_plane_state base;
+	struct virtio_gpu_fence *fence;
+};
+#define to_virtio_gpu_plane_state(x) \
+	container_of(x, struct virtio_gpu_plane_state, base)
+
 struct virtio_gpu_queue {
 	struct virtqueue *vq;
 	spinlock_t qlock;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a2e045f3a000..cd962898023e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -66,11 +66,28 @@ uint32_t virtio_gpu_translate_format(uint32_t drm_fourcc)
 	return format;
 }
 
+static struct
+drm_plane_state *virtio_gpu_plane_duplicate_state(struct drm_plane *plane)
+{
+	struct virtio_gpu_plane_state *new;
+
+	if (WARN_ON(!plane->state))
+		return NULL;
+
+	new = kzalloc(sizeof(*new), GFP_KERNEL);
+	if (!new)
+		return NULL;
+
+	__drm_atomic_helper_plane_duplicate_state(plane, &new->base);
+
+	return &new->base;
+}
+
 static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
 	.update_plane		= drm_atomic_helper_update_plane,
 	.disable_plane		= drm_atomic_helper_disable_plane,
 	.reset			= drm_atomic_helper_plane_reset,
-	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+	.atomic_duplicate_state = virtio_gpu_plane_duplicate_state,
 	.atomic_destroy_state	= drm_atomic_helper_plane_destroy_state,
 };
 
@@ -128,11 +145,13 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
 	struct drm_device *dev = plane->dev;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 	struct virtio_gpu_object *bo;
 
 	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	vgplane_st = to_virtio_gpu_plane_state(plane->state);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (vgfb->fence) {
+	if (vgplane_st->fence) {
 		struct virtio_gpu_object_array *objs;
 
 		objs = virtio_gpu_array_alloc(1);
@@ -141,13 +160,12 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
 		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
 		virtio_gpu_array_lock_resv(objs);
 		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
-					      width, height, objs, vgfb->fence);
+					      width, height, objs,
+					      vgplane_st->fence);
 		virtio_gpu_notify(vgdev);
-
-		dma_fence_wait_timeout(&vgfb->fence->f, true,
+		dma_fence_wait_timeout(&vgplane_st->fence->f, true,
 				       msecs_to_jiffies(50));
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
+		dma_fence_put(&vgplane_st->fence->f);
 	} else {
 		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
 					      width, height, NULL, NULL);
@@ -237,20 +255,23 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 	struct drm_device *dev = plane->dev;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 	struct virtio_gpu_object *bo;
 
 	if (!new_state->fb)
 		return 0;
 
 	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
+	vgplane_st = to_virtio_gpu_plane_state(new_state);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
 	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
 		return 0;
 
-	if (bo->dumb && (plane->state->fb != new_state->fb)) {
-		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
+	if (bo->dumb) {
+		vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
+						     vgdev->fence_drv.context,
 						     0);
-		if (!vgfb->fence)
+		if (!vgplane_st->fence)
 			return -ENOMEM;
 	}
 
@@ -258,17 +279,17 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 }
 
 static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *state)
+                                        struct drm_plane_state *state)
 {
-	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 
 	if (!state->fb)
 		return;
 
-	vgfb = to_virtio_gpu_framebuffer(state->fb);
-	if (vgfb->fence) {
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
+	vgplane_st = to_virtio_gpu_plane_state(state);
+	if (vgplane_st->fence) {
+		dma_fence_put(&vgplane_st->fence->f);
+		vgplane_st->fence = NULL;
 	}
 }
 
@@ -281,6 +302,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_output *output = NULL;
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_plane_state *vgplane_st;
 	struct virtio_gpu_object *bo = NULL;
 	uint32_t handle;
 
@@ -293,6 +315,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 
 	if (plane->state->fb) {
 		vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+		vgplane_st = to_virtio_gpu_plane_state(plane->state);
 		bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
 		handle = bo->hw_res_handle;
 	} else {
@@ -312,11 +335,10 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 			(vgdev, 0,
 			 plane->state->crtc_w,
 			 plane->state->crtc_h,
-			 0, 0, objs, vgfb->fence);
+			 0, 0, objs, vgplane_st->fence);
 		virtio_gpu_notify(vgdev);
-		dma_fence_wait(&vgfb->fence->f, true);
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
+		dma_fence_wait(&vgplane_st->fence->f, true);
+		dma_fence_put(&vgplane_st->fence->f);
 	}
 
 	if (plane->state->fb != old_state->fb) {
@@ -351,14 +373,14 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 
 static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
 	.prepare_fb		= virtio_gpu_plane_prepare_fb,
-	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
+	.cleanup_fb             = virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_primary_plane_update,
 };
 
 static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
 	.prepare_fb		= virtio_gpu_plane_prepare_fb,
-	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
+	.cleanup_fb             = virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_cursor_plane_update,
 };
-- 
2.20.1



* Re: [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
  2023-10-03  1:00 ` [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update Dongwon Kim
@ 2023-10-23 12:24   ` Dmitry Osipenko
  2023-10-23 17:31     ` Kim, Dongwon
  0 siblings, 1 reply; 6+ messages in thread
From: Dmitry Osipenko @ 2023-10-23 12:24 UTC (permalink / raw)
  To: Dongwon Kim, dri-devel; +Cc: kraxel, vivek.kasireddy

On 10/3/23 04:00, Dongwon Kim wrote:
> Having a fence linked to a virtio_gpu_framebuffer in the plane update sequence
> would cause conflict when several planes referencing the same framebuffer
> (e.g. Xorg screen covering multi-displays configured for an extended mode)
> and those planes are updated concurrently. So it is needed to allocate a
> fence for every plane state instead of the framebuffer.
> 
> The plane state for virtio-gpu, "struct virtio_gpu_plane_state" is added for
> this. This structure represents drm_plane_state and it contains the reference
> to virtio_gpu_fence, which was previously in "struct virtio_gpu_framebuffer".
> 
> "virtio_gpu_plane_duplicate_state" is added as well to create a
> virtio_gpu_plane_state on top of duplicated drm plane state.
> 
> Several drm helpers were slightly modified accordingly to use the fence in new
> plane state structure. virtio_gpu_plane_cleanup_fb was completely removed as
> dma_fence_put shouldn't be called here as it can mess up with the ref count
> of the fence. The fence should be put after the fence is signaled in
> virtio_gpu_resource_flush then released in virtio_gpu_array_add_fence while
> the next virtio message is being queued.
> 
> Also, the condition for adding fence, (plane->state->fb != new_state->fb) was
> removed since we now allocate a new fence for the new plane state even if both
> old and new planes are pointing to the same framebuffer.
> 
> v2: removed virtio_gpu_plane_duplicate_state as the existing helper,
>     drm_atomic_helper_plane_destroy_state does the same.
> 
> Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 +++
>  drivers/gpu/drm/virtio/virtgpu_plane.c | 66 +++++++++++++++++---------
>  2 files changed, 51 insertions(+), 22 deletions(-)

Please write a guide how to test it. Are you using spice for the
multi-display viewer?

-- 
Best regards,
Dmitry



* RE: [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
  2023-10-23 12:24   ` Dmitry Osipenko
@ 2023-10-23 17:31     ` Kim, Dongwon
  2023-11-13 16:15       ` Dmitry Osipenko
  0 siblings, 1 reply; 6+ messages in thread
From: Kim, Dongwon @ 2023-10-23 17:31 UTC (permalink / raw)
  To: Dmitry Osipenko, dri-devel; +Cc: kraxel, Kasireddy, Vivek

> -----Original Message-----
> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> Sent: Monday, October 23, 2023 5:24 AM
> To: Kim, Dongwon <dongwon.kim@intel.com>; dri-devel@lists.freedesktop.org
> Cc: kraxel@redhat.com; Kasireddy, Vivek <vivek.kasireddy@intel.com>
> Subject: Re: [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
> 
> On 10/3/23 04:00, Dongwon Kim wrote:
> > Having a fence linked to a virtio_gpu_framebuffer in the plane update
> > sequence would cause conflict when several planes referencing the same
> > framebuffer (e.g. Xorg screen covering multi-displays configured for
> > an extended mode) and those planes are updated concurrently. So it is
> > needed to allocate a fence for every plane state instead of the framebuffer.
> >
> > The plane state for virtio-gpu, "struct virtio_gpu_plane_state" is
> > added for this. This structure represents drm_plane_state and it
> > contains the reference to virtio_gpu_fence, which was previously in "struct
> virtio_gpu_framebuffer".
> >
> > "virtio_gpu_plane_duplicate_state" is added as well to create a
> > virtio_gpu_plane_state on top of duplicated drm plane state.
> >
> > Several drm helpers were slightly modified accordingly to use the
> > fence in new plane state structure. virtio_gpu_plane_cleanup_fb was
> > completely removed as dma_fence_put shouldn't be called here as it can
> > mess up with the ref count of the fence. The fence should be put after
> > the fence is signaled in virtio_gpu_resource_flush then released in
> > virtio_gpu_array_add_fence while the next virtio message is being queued.
> >
> > Also, the condition for adding fence, (plane->state->fb !=
> > new_state->fb) was removed since we now allocate a new fence for the
> > new plane state even if both old and new planes are pointing to the same
> framebuffer.
> >
> > v2: removed virtio_gpu_plane_duplicate_state as the existing helper,
> >     drm_atomic_helper_plane_destroy_state does the same.
> >
> > Cc: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> > Cc: Gerd Hoffmann <kraxel@redhat.com>
> > Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
> > Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> > ---
> >  drivers/gpu/drm/virtio/virtgpu_drv.h   |  7 +++
> >  drivers/gpu/drm/virtio/virtgpu_plane.c | 66
> > +++++++++++++++++---------
> >  2 files changed, 51 insertions(+), 22 deletions(-)
> 
> Please write a guide how to test it. Are you using spice for the multi-display
> viewer?

[DW] Yeah, let me come up with a simple test case. We use virtio-gpu as a KMS device; it is used to share the guest frame with QEMU.
SPICE is one of the client solutions we use, but we primarily use the QEMU GTK-UI with multiple displays (QEMU with these params: '-device virtio-vga,max_outputs=2,blob=true').
Thanks!
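
For reference, a host-side invocation along these lines should bring up the setup (the
memory-backend and display options below are my assumption of a typical blob=true
configuration rather than a copy of our exact scripts, and ubuntu.qcow2 just stands in
for the guest image):

qemu-system-x86_64 -enable-kvm -m 4G \
    -object memory-backend-memfd,id=mem1,size=4G,share=on \
    -machine q35,memory-backend=mem1 \
    -device virtio-vga,max_outputs=2,blob=true \
    -display gtk,gl=on \
    -drive file=ubuntu.qcow2,format=qcow2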

> 
> --
> Best regards,
> Dmitry



* Re: [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
  2023-10-23 17:31     ` Kim, Dongwon
@ 2023-11-13 16:15       ` Dmitry Osipenko
  2023-11-13 18:25         ` Kim, Dongwon
  0 siblings, 1 reply; 6+ messages in thread
From: Dmitry Osipenko @ 2023-11-13 16:15 UTC (permalink / raw)
  To: Kim, Dongwon, dri-devel; +Cc: kraxel, Kasireddy, Vivek

On 10/23/23 20:31, Kim, Dongwon wrote:
...
>> Please write a guide how to test it. Are you using spice for the multi-display
>> viewer?
> 
> [DW] Yeah, let me come up with a simple test case. So we use virtio-gpu as KMS device. It is used to share the guest frame with QEMU.
> SPICE is one of client solutions we use but we primarily use QEMU GTK-UI w/ multi displays (QEMU with this params '-device virtio-vga,max_outputs=2,blob=true').
> Thanks!


I'm having trouble reproducing the issue. For the GTK-UI you should be
using virtio-vga-gl, shouldn't you?

I assume you meant that your simple case is reproducible using a dGPU,
correct? I need a test case that reproduces using virgl and no dGPU.

Using virtio-vga-gl+virgl+max_outputs=2 I don't see any issues. At the
same time, switching to the second virtual display doesn't work with
Qemu's GTK UI; I'm getting a black screen for the second display. In the
KDE settings I have two displays and it says both are active. For the
first display, which works, I don't see wrong frame ordering.

Please give me a test case that reproduces the issue using the latest
version of vanilla/upstream Qemu.

-- 
Best regards,
Dmitry



* RE: [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
  2023-11-13 16:15       ` Dmitry Osipenko
@ 2023-11-13 18:25         ` Kim, Dongwon
  0 siblings, 0 replies; 6+ messages in thread
From: Kim, Dongwon @ 2023-11-13 18:25 UTC (permalink / raw)
  To: Dmitry Osipenko, dri-devel; +Cc: kraxel, Kasireddy, Vivek

Hi Dmitry,


> -----Original Message-----
> From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> Sent: Monday, November 13, 2023 8:16 AM
> To: Kim, Dongwon <dongwon.kim@intel.com>; dri-
> devel@lists.freedesktop.org
> Cc: kraxel@redhat.com; Kasireddy, Vivek <vivek.kasireddy@intel.com>
> Subject: Re: [RFC PATCH v2 1/1] drm/virtio: new fence for every plane update
> 
> On 10/23/23 20:31, Kim, Dongwon wrote:
> ...
> >> Please write a guide how to test it. Are you using spice for the
> >> multi-display viewer?
> >
> > [DW] Yeah, let me come up with a simple test case. So we use virtio-gpu as
> KMS device. It is used to share the guest frame with QEMU.
> > SPICE is one of client solutions we use but we primarily use QEMU GTK-UI
> w/ multi displays (QEMU with this params '-device virtio-
> vga,max_outputs=2,blob=true').
> > Thanks!
> 
> 

> I'm having trouble wit h reproducing the issue. For GTK-UI you should be
> using virtio-vga-gl, shouldn't you?

[Kim, Dongwon] 
I was trying to replicate the issue with a more general setup, since our solution - Mesa kmsro (iris with the render target imported from virtio-gpu), using
i915 as a render-only device and virtio-gpu as a display-only device (https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592) -
hasn't been upstreamed yet. But I had no luck. What I tried was Mesa's sw-rasterizer + virtio-gpu, and it worked just fine without any issue.
I think the reason is some timing difference: the problem happens when one prepare->update-plane->cleanup cycle overlaps with another,
but that didn't seem to happen in the test cases I tried.

> 
> I assume you meant that your simple case is reproducible using dGPU,
> correct? I need a test case that is reproducing using virgl and no dGPU.

[Kim, Dongwon] 
We use the iGPU only, and we don't use virgl. We enabled SR-IOV on the iGPU so the virtual machine sees it as dedicated HW and can run
the HW driver (i915 + Iris) on it.

> 
> Using virtio-vga-gl+virgl+max_outputs=2 I don't see any issues. In the same

[Kim, Dongwon] 
Yeah, this makes me think again that the problem can be replicated only in our current setup - Iris/i915 + virtio-gpu with zero-copy
virtio-gpu display sharing (blob=true).

> time switching to the second virtual display doesn't work with Qemu's GTK UI,
> I'm getting black screen for the second display. In the KDE settings I have two
> displays and it says both are active. For the first display that works, I don't
> see wrong frames ordering.

[Kim, Dongwon] 
You mean you switched to the second virtual display by changing between tabs? There were some issues with the GTK-UI regarding
tab handling. I fixed some of them. What version of QEMU are you using? Have you tried the latest one, like > 8.0?

Wrong frame ordering is actually what the other patch - "drm/virtio: drm_gem_plane_helper_prepare_fb for obj" - solves.
The problem this patch is trying to solve is a complete lockup of virtual display updates and fence timeout errors (i915) in dmesg.

Anyway, I can't say the problem I have been trying to solve happens in most general cases, but wouldn't it make sense
to you that the fence should be assigned per plane update instead of per FB in general? Having a fence per plane update works
the same way as before, but it prevents any potential issue like ours.

> 
> Please give me a test case that is reproducing using latest version of
> vanilla/upstream Qemu.
> 
> --
> Best regards,
> Dmitry



