* [PATCH v2 0/2] drm/virtio: fence handling in case of multi scanouts
@ 2022-06-03 21:18 Dongwon Kim
  2022-06-03 21:18 ` [PATCH v2 1/2] drm/virtio: .release ops for virtgpu fence release Dongwon Kim
  2022-06-03 21:18 ` [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update Dongwon Kim
  0 siblings, 2 replies; 8+ messages in thread
From: Dongwon Kim @ 2022-06-03 21:18 UTC (permalink / raw)
  To: dri-devel; +Cc: Dongwon Kim

The current primary plane update flow, when blob is enabled (for zero-copy
display sharing), shows fence synchronization problems when multiple planes
reference the same single large FB (e.g. multiple displays in extended
mode). This is because only one fence is bound to the FB, and this single
fence is reused asynchronously when flushing all associated planes.

The way to prevent this is to assign a fence to each plane update so that
flushing one plane neither affects nor is affected by another plane's
flush operation.
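
To make the failure mode concrete, the pre-patch flow (a simplified sketch
stitched together from the hunks removed in the 2nd patch, not a complete
function) keeps a single fence in the shared framebuffer:

	/* prepare_fb: one fence stored in the shared framebuffer */
	vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);

	/* resource_flush: every plane scanning out this FB reuses that fence */
	virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
				      width, height, objs, vgfb->fence);
	virtio_gpu_notify(vgdev);
	dma_fence_wait_timeout(&vgfb->fence->f, true, msecs_to_jiffies(50));
	dma_fence_put(&vgfb->fence->f);
	vgfb->fence = NULL;	/* a concurrent flush of another plane on the
				 * same vgfb can see its fence put and cleared
				 * underneath it */

With a fence allocated per plane update instead, each flush waits on its
own fence and this window disappears.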

The 1st patch, "drm/virtio: .release ops for virtgpu fence release", adds a
device-specific release op so that the virtio_gpu fence is freed upon the
last dma_fence_put call.
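
For reference, the dma_fence core invokes the driver's .release op once the
last reference is dropped; a simplified paraphrase of the core path (trimmed
down from include/linux/dma-fence.h and drivers/dma-buf/dma-fence.c, not
verbatim kernel code):

	static inline void dma_fence_put(struct dma_fence *fence)
	{
		if (fence)
			kref_put(&fence->refcount, dma_fence_release);
	}

	void dma_fence_release(struct kref *kref)
	{
		struct dma_fence *fence =
			container_of(kref, struct dma_fence, refcount);

		if (fence->ops->release)
			fence->ops->release(fence);	/* patch 1 hooks in here */
		else
			dma_fence_free(fence);	/* default: kfree_rcu() of the bare dma_fence */
	}

With .release set, virtio-gpu frees the containing struct virtio_gpu_fence
via container_of() instead of relying on the default free of the embedded
dma_fence.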

The 2nd patch, "drm/virtio: fence created per cursor/plane update", contains
the main implementation of the per-plane fence.

Dongwon Kim (2):
  drm/virtio: .release ops for virtgpu fence release
  drm/virtio: fence created per cursor/plane update

 drivers/gpu/drm/virtio/virtgpu_drv.h   |   1 -
 drivers/gpu/drm/virtio/virtgpu_fence.c |   8 ++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 103 ++++++++++---------------
 3 files changed, 47 insertions(+), 65 deletions(-)

-- 
2.20.1



* [PATCH v2 1/2] drm/virtio: .release ops for virtgpu fence release
  2022-06-03 21:18 [PATCH v2 0/2] drm/virtio: fence handling in case of multi scanouts Dongwon Kim
@ 2022-06-03 21:18 ` Dongwon Kim
  2022-06-06 23:54   ` Kasireddy, Vivek
  2022-06-03 21:18 ` [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update Dongwon Kim
  1 sibling, 1 reply; 8+ messages in thread
From: Dongwon Kim @ 2022-06-03 21:18 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann, Dongwon Kim, Gurchetan Singh

virtio_gpu_fence_release is added to free the virtio_gpu_fence when the
last reference to its embedded dma_fence is dropped.

Cc: Gurchetan Singh <gurchetansingh@chromium.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_fence.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
index f28357dbde35..ba659ac2a51d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_fence.c
+++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
@@ -63,12 +63,20 @@ static void virtio_gpu_timeline_value_str(struct dma_fence *f, char *str,
 		 (u64)atomic64_read(&fence->drv->last_fence_id));
 }
 
+static void virtio_gpu_fence_release(struct dma_fence *f)
+{
+	struct virtio_gpu_fence *fence = to_virtio_gpu_fence(f);
+
+	kfree(fence);
+}
+
 static const struct dma_fence_ops virtio_gpu_fence_ops = {
 	.get_driver_name     = virtio_gpu_get_driver_name,
 	.get_timeline_name   = virtio_gpu_get_timeline_name,
 	.signaled            = virtio_gpu_fence_signaled,
 	.fence_value_str     = virtio_gpu_fence_value_str,
 	.timeline_value_str  = virtio_gpu_timeline_value_str,
+	.release	     = virtio_gpu_fence_release,
 };
 
 struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev,
-- 
2.20.1



* [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
  2022-06-03 21:18 [PATCH v2 0/2] drm/virtio: fence handling in case of multi scanouts Dongwon Kim
  2022-06-03 21:18 ` [PATCH v2 1/2] drm/virtio: .release ops for virtgpu fence release Dongwon Kim
@ 2022-06-03 21:18 ` Dongwon Kim
  2022-06-07  0:23   ` Kasireddy, Vivek
  2022-06-09  4:24   ` Gerd Hoffmann
  1 sibling, 2 replies; 8+ messages in thread
From: Dongwon Kim @ 2022-06-03 21:18 UTC (permalink / raw)
  To: dri-devel
  Cc: Dongwon Kim, Vivek Kasireddy, Gurchetan Singh, Gerd Hoffmann,
	Dan Carpenter, kernel test robot

Having one fence for a vgfb would cause a conflict when multiple planes
referencing the same vgfb (e.g. an Xorg screen covering two displays in
extended mode) are flushed simultaneously. So it makes sense to use a
separate fence for each plane update to prevent this.

vgfb->fence is not required anymore with the suggested code change, so
both prepare_fb and cleanup_fb are removed, since fence creation/freeing
was all that was done in there.

v2: - always use the fence as long as guest_blob is enabled on the
      scanout object
    - obj and fence initialized as NULL ptrs to avoid an uninitialized
      ptr problem (reported by Dan Carpenter/kernel test robot)

Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Gurchetan Singh <gurchetansingh@chromium.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |   1 -
 drivers/gpu/drm/virtio/virtgpu_plane.c | 103 ++++++++++---------------
 2 files changed, 39 insertions(+), 65 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 0a194aaad419..4c59c1e67ca5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -186,7 +186,6 @@ struct virtio_gpu_output {
 
 struct virtio_gpu_framebuffer {
 	struct drm_framebuffer base;
-	struct virtio_gpu_fence *fence;
 };
 #define to_virtio_gpu_framebuffer(x) \
 	container_of(x, struct virtio_gpu_framebuffer, base)
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 6d3cc9e238a4..821023b7d57d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -137,29 +137,37 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
 	struct virtio_gpu_object *bo;
+	struct virtio_gpu_object_array *objs = NULL;
+	struct virtio_gpu_fence *fence = NULL;
 
 	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (vgfb->fence) {
-		struct virtio_gpu_object_array *objs;
 
+	if (!bo)
+		return;
+
+	if (bo->dumb && bo->guest_blob)
+		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
+					       0);
+
+	if (fence) {
 		objs = virtio_gpu_array_alloc(1);
-		if (!objs)
+		if (!objs) {
+			kfree(fence);
 			return;
+		}
 		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
 		virtio_gpu_array_lock_resv(objs);
-		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
-					      width, height, objs, vgfb->fence);
-		virtio_gpu_notify(vgdev);
+	}
+
+	virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
+				      width, height, objs, fence);
+	virtio_gpu_notify(vgdev);
 
-		dma_fence_wait_timeout(&vgfb->fence->f, true,
+	if (fence) {
+		dma_fence_wait_timeout(&fence->f, true,
 				       msecs_to_jiffies(50));
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
-	} else {
-		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
-					      width, height, NULL, NULL);
-		virtio_gpu_notify(vgdev);
+		dma_fence_put(&fence->f);
 	}
 }
 
@@ -239,47 +247,6 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 				  rect.y2 - rect.y1);
 }
 
-static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
-				       struct drm_plane_state *new_state)
-{
-	struct drm_device *dev = plane->dev;
-	struct virtio_gpu_device *vgdev = dev->dev_private;
-	struct virtio_gpu_framebuffer *vgfb;
-	struct virtio_gpu_object *bo;
-
-	if (!new_state->fb)
-		return 0;
-
-	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
-	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
-		return 0;
-
-	if (bo->dumb && (plane->state->fb != new_state->fb)) {
-		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
-						     0);
-		if (!vgfb->fence)
-			return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *old_state)
-{
-	struct virtio_gpu_framebuffer *vgfb;
-
-	if (!plane->state->fb)
-		return;
-
-	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
-	if (vgfb->fence) {
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
-	}
-}
-
 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 					   struct drm_atomic_state *state)
 {
@@ -290,6 +257,8 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 	struct virtio_gpu_output *output = NULL;
 	struct virtio_gpu_framebuffer *vgfb;
 	struct virtio_gpu_object *bo = NULL;
+	struct virtio_gpu_object_array *objs = NULL;
+	struct virtio_gpu_fence *fence = NULL;
 	uint32_t handle;
 
 	if (plane->state->crtc)
@@ -309,22 +278,32 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 
 	if (bo && bo->dumb && (plane->state->fb != old_state->fb)) {
 		/* new cursor -- update & wait */
-		struct virtio_gpu_object_array *objs;
+		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
+					       0);
+		if (!fence)
+			return;
 
 		objs = virtio_gpu_array_alloc(1);
-		if (!objs)
+		if (!objs) {
+			if (fence)
+				kfree(fence);
+
 			return;
+		}
+
 		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
 		virtio_gpu_array_lock_resv(objs);
 		virtio_gpu_cmd_transfer_to_host_2d
 			(vgdev, 0,
 			 plane->state->crtc_w,
 			 plane->state->crtc_h,
-			 0, 0, objs, vgfb->fence);
+			 0, 0, objs, fence);
 		virtio_gpu_notify(vgdev);
-		dma_fence_wait(&vgfb->fence->f, true);
-		dma_fence_put(&vgfb->fence->f);
-		vgfb->fence = NULL;
+
+		if (fence) {
+			dma_fence_wait(&fence->f, true);
+			dma_fence_put(&fence->f);
+		}
 	}
 
 	if (plane->state->fb != old_state->fb) {
@@ -358,15 +337,11 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
 }
 
 static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
-	.prepare_fb		= virtio_gpu_plane_prepare_fb,
-	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_primary_plane_update,
 };
 
 static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
-	.prepare_fb		= virtio_gpu_plane_prepare_fb,
-	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
 	.atomic_check		= virtio_gpu_plane_atomic_check,
 	.atomic_update		= virtio_gpu_cursor_plane_update,
 };
-- 
2.20.1



* RE: [PATCH v2 1/2] drm/virtio: .release ops for virtgpu fence release
  2022-06-03 21:18 ` [PATCH v2 1/2] drm/virtio: .release ops for virtgpu fence release Dongwon Kim
@ 2022-06-06 23:54   ` Kasireddy, Vivek
  0 siblings, 0 replies; 8+ messages in thread
From: Kasireddy, Vivek @ 2022-06-06 23:54 UTC (permalink / raw)
  To: Kim, Dongwon, dri-devel; +Cc: Gerd Hoffmann, Gurchetan Singh

> virtio_gpu_fence_release is added to free the virtio_gpu_fence when the
> last reference to its embedded dma_fence is dropped.
> 
> Cc: Gurchetan Singh <gurchetansingh@chromium.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_fence.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
> index f28357dbde35..ba659ac2a51d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_fence.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
> @@ -63,12 +63,20 @@ static void virtio_gpu_timeline_value_str(struct dma_fence *f,
> char *str,
>  		 (u64)atomic64_read(&fence->drv->last_fence_id));
>  }
> 
> +static void virtio_gpu_fence_release(struct dma_fence *f)
> +{
> +	struct virtio_gpu_fence *fence = to_virtio_gpu_fence(f);
> +
> +	kfree(fence);
> +}
> +
>  static const struct dma_fence_ops virtio_gpu_fence_ops = {
>  	.get_driver_name     = virtio_gpu_get_driver_name,
>  	.get_timeline_name   = virtio_gpu_get_timeline_name,
>  	.signaled            = virtio_gpu_fence_signaled,
>  	.fence_value_str     = virtio_gpu_fence_value_str,
>  	.timeline_value_str  = virtio_gpu_timeline_value_str,
> +	.release	     = virtio_gpu_fence_release,

Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>

>  };
> 
>  struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev,
> --
> 2.20.1



* RE: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
  2022-06-03 21:18 ` [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update Dongwon Kim
@ 2022-06-07  0:23   ` Kasireddy, Vivek
  2022-06-09  4:24   ` Gerd Hoffmann
  1 sibling, 0 replies; 8+ messages in thread
From: Kasireddy, Vivek @ 2022-06-07  0:23 UTC (permalink / raw)
  To: Kim, Dongwon, dri-devel
  Cc: Gerd Hoffmann, lkp, Dan Carpenter, Gurchetan Singh

Hi DW,

> Subject: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
> 
> Having one fence for a vgfb would cause a conflict when multiple planes
> referencing the same vgfb (e.g. an Xorg screen covering two displays in
> extended mode) are flushed simultaneously. So it makes sense to use a
> separate fence for each plane update to prevent this.
> 
> vgfb->fence is not required anymore with the suggested code change, so
> both prepare_fb and cleanup_fb are removed, since fence creation/freeing
> was all that was done in there.
> 
> v2: - always use the fence as long as guest_blob is enabled on the
>       scanout object
>     - obj and fence initialized as NULL ptrs to avoid an uninitialized
>       ptr problem (reported by Dan Carpenter/kernel test robot)
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Cc: Gurchetan Singh <gurchetansingh@chromium.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Vivek Kasireddy <vivek.kasireddy@intel.com>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.h   |   1 -
>  drivers/gpu/drm/virtio/virtgpu_plane.c | 103 ++++++++++---------------
>  2 files changed, 39 insertions(+), 65 deletions(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 0a194aaad419..4c59c1e67ca5 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -186,7 +186,6 @@ struct virtio_gpu_output {
> 
>  struct virtio_gpu_framebuffer {
>  	struct drm_framebuffer base;
> -	struct virtio_gpu_fence *fence;
>  };
>  #define to_virtio_gpu_framebuffer(x) \
>  	container_of(x, struct virtio_gpu_framebuffer, base)
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index 6d3cc9e238a4..821023b7d57d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -137,29 +137,37 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
>  	struct virtio_gpu_device *vgdev = dev->dev_private;
>  	struct virtio_gpu_framebuffer *vgfb;
>  	struct virtio_gpu_object *bo;
> +	struct virtio_gpu_object_array *objs = NULL;
> +	struct virtio_gpu_fence *fence = NULL;
> 
>  	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
>  	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> -	if (vgfb->fence) {
> -		struct virtio_gpu_object_array *objs;
> 
> +	if (!bo)
> +		return;
[Kasireddy, Vivek] I think you can drop the above check, as bo is guaranteed
to be valid in resource_flush; the necessary checks are already done earlier
in virtio_gpu_primary_plane_update().

> +
> +	if (bo->dumb && bo->guest_blob)
> +		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
> +					       0);
> +
> +	if (fence) {
>  		objs = virtio_gpu_array_alloc(1);
> -		if (!objs)
> +		if (!objs) {
> +			kfree(fence);
>  			return;
> +		}
>  		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
>  		virtio_gpu_array_lock_resv(objs);
> -		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
> -					      width, height, objs, vgfb->fence);
> -		virtio_gpu_notify(vgdev);
> +	}
> +
> +	virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
> +				      width, height, objs, fence);
> +	virtio_gpu_notify(vgdev);
[Kasireddy, Vivek] I think it is OK to retain the existing style, where all
the statements relevant to if (fence) are lumped together. I understand the
above two statements would be redundant in that case, but it looks a bit cleaner.

> 
> -		dma_fence_wait_timeout(&vgfb->fence->f, true,
> +	if (fence) {
> +		dma_fence_wait_timeout(&fence->f, true,
>  				       msecs_to_jiffies(50));
> -		dma_fence_put(&vgfb->fence->f);
> -		vgfb->fence = NULL;
> -	} else {
> -		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
> -					      width, height, NULL, NULL);
> -		virtio_gpu_notify(vgdev);
> +		dma_fence_put(&fence->f);
>  	}
>  }
> 
> @@ -239,47 +247,6 @@ static void virtio_gpu_primary_plane_update(struct drm_plane
> *plane,
>  				  rect.y2 - rect.y1);
>  }
> 
> -static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
> -				       struct drm_plane_state *new_state)
> -{
> -	struct drm_device *dev = plane->dev;
> -	struct virtio_gpu_device *vgdev = dev->dev_private;
> -	struct virtio_gpu_framebuffer *vgfb;
> -	struct virtio_gpu_object *bo;
> -
> -	if (!new_state->fb)
> -		return 0;
> -
> -	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
> -	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> -	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
> -		return 0;
> -
> -	if (bo->dumb && (plane->state->fb != new_state->fb)) {
> -		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
> -						     0);
> -		if (!vgfb->fence)
> -			return -ENOMEM;
> -	}
> -
> -	return 0;
> -}
> -
> -static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
> -					struct drm_plane_state *old_state)
> -{
> -	struct virtio_gpu_framebuffer *vgfb;
> -
> -	if (!plane->state->fb)
> -		return;
> -
> -	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
> -	if (vgfb->fence) {
> -		dma_fence_put(&vgfb->fence->f);
> -		vgfb->fence = NULL;
> -	}
> -}
> -
>  static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
>  					   struct drm_atomic_state *state)
>  {
> @@ -290,6 +257,8 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane
> *plane,
>  	struct virtio_gpu_output *output = NULL;
>  	struct virtio_gpu_framebuffer *vgfb;
>  	struct virtio_gpu_object *bo = NULL;
> +	struct virtio_gpu_object_array *objs = NULL;
> +	struct virtio_gpu_fence *fence = NULL;
>  	uint32_t handle;
> 
>  	if (plane->state->crtc)
> @@ -309,22 +278,32 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane
> *plane,
> 
>  	if (bo && bo->dumb && (plane->state->fb != old_state->fb)) {
>  		/* new cursor -- update & wait */
> -		struct virtio_gpu_object_array *objs;
> +		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
> +					       0);
> +		if (!fence)
> +			return;
> 
>  		objs = virtio_gpu_array_alloc(1);
> -		if (!objs)
> +		if (!objs) {
> +			if (fence)
[Kasireddy, Vivek] I think you can drop the above check, given that fence
was already checked for NULL right after allocation.

> +				kfree(fence);
> +
>  			return;
> +		}
> +
>  		virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
>  		virtio_gpu_array_lock_resv(objs);
>  		virtio_gpu_cmd_transfer_to_host_2d
>  			(vgdev, 0,
>  			 plane->state->crtc_w,
>  			 plane->state->crtc_h,
> -			 0, 0, objs, vgfb->fence);
> +			 0, 0, objs, fence);
>  		virtio_gpu_notify(vgdev);
> -		dma_fence_wait(&vgfb->fence->f, true);
> -		dma_fence_put(&vgfb->fence->f);
> -		vgfb->fence = NULL;
> +
> +		if (fence) {
[Kasireddy, Vivek] Same as above; i.e., you can drop the if (fence) check,
as we wouldn't get here without a valid fence.

I think with the above changes, the diff may get smaller and simpler. Regardless,
this patch is Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>

> +			dma_fence_wait(&fence->f, true);
> +			dma_fence_put(&fence->f);
> +		}
>  	}
> 
>  	if (plane->state->fb != old_state->fb) {
> @@ -358,15 +337,11 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane
> *plane,
>  }
> 
>  static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
> -	.prepare_fb		= virtio_gpu_plane_prepare_fb,
> -	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
>  	.atomic_check		= virtio_gpu_plane_atomic_check,
>  	.atomic_update		= virtio_gpu_primary_plane_update,
>  };
> 
>  static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
> -	.prepare_fb		= virtio_gpu_plane_prepare_fb,
> -	.cleanup_fb		= virtio_gpu_plane_cleanup_fb,
>  	.atomic_check		= virtio_gpu_plane_atomic_check,
>  	.atomic_update		= virtio_gpu_cursor_plane_update,
>  };
> --
> 2.20.1



* Re: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
  2022-06-03 21:18 ` [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update Dongwon Kim
  2022-06-07  0:23   ` Kasireddy, Vivek
@ 2022-06-09  4:24   ` Gerd Hoffmann
  2022-06-14 19:17     ` Dongwon Kim
  1 sibling, 1 reply; 8+ messages in thread
From: Gerd Hoffmann @ 2022-06-09  4:24 UTC (permalink / raw)
  To: Dongwon Kim
  Cc: Dan Carpenter, Vivek Kasireddy, kernel test robot, dri-devel,
	Gurchetan Singh

On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> Having one fence for a vgfb would cause a conflict when multiple planes
> referencing the same vgfb (e.g. an Xorg screen covering two displays in
> extended mode) are flushed simultaneously. So it makes sense to use a
> separate fence for each plane update to prevent this.
> 
> vgfb->fence is not required anymore with the suggested code change, so
> both prepare_fb and cleanup_fb are removed, since fence creation/freeing
> was all that was done in there.

The fences are allocated and released in prepare_fb + cleanup_fb for a
reason: atomic_update must not fail.

I guess virtio-gpu must be fixed to use drm_plane_state->fence
correctly ...

take care,
  Gerd
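
For context, the mechanism Gerd refers to is consumed by the atomic commit
helpers, which already wait on any fence attached to a plane state. A
simplified paraphrase of drm_atomic_helper_wait_for_fences() (trimmed down
from drivers/gpu/drm/drm_atomic_helper.c, not verbatim kernel code):

	int drm_atomic_helper_wait_for_fences(struct drm_device *dev,
					      struct drm_atomic_state *state,
					      bool pre_swap)
	{
		struct drm_plane *plane;
		struct drm_plane_state *new_plane_state;
		int i, ret;

		for_each_new_plane_in_state(state, plane, new_plane_state, i) {
			if (!new_plane_state->fence)
				continue;

			/* pre_swap makes the wait interruptible */
			ret = dma_fence_wait(new_plane_state->fence, pre_swap);
			if (ret)
				return ret;

			dma_fence_put(new_plane_state->fence);
			new_plane_state->fence = NULL;
		}

		return 0;
	}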



* Re: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
  2022-06-09  4:24   ` Gerd Hoffmann
@ 2022-06-14 19:17     ` Dongwon Kim
  2022-06-15  6:09       ` Kasireddy, Vivek
  0 siblings, 1 reply; 8+ messages in thread
From: Dongwon Kim @ 2022-06-14 19:17 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Dan Carpenter, Vivek Kasireddy, kernel test robot, dri-devel,
	Gurchetan Singh

On Thu, Jun 09, 2022 at 06:24:43AM +0200, Gerd Hoffmann wrote:
> On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> > Having one fence for a vgfb would cause a conflict when multiple planes
> > referencing the same vgfb (e.g. an Xorg screen covering two displays in
> > extended mode) are flushed simultaneously. So it makes sense to use a
> > separate fence for each plane update to prevent this.
> > 
> > vgfb->fence is not required anymore with the suggested code change, so
> > both prepare_fb and cleanup_fb are removed, since fence creation/freeing
> > was all that was done in there.
> 
> The fences are allocated and released in prepare_fb + cleanup_fb for a
> reason: atomic_update must not fail.

If fence allocation fails, it falls back to the non-fence path, so
the primary-plane update won't fail.

For the cursor plane update, it returns if the fence is NULL, but we could
change it to proceed and simply skip the wait, like:

if (fence) {
    dma_fence_wait(&fence->f, true);
    dma_fence_put(&fence->f);
}						

Or maybe I can limit my suggested changes to the primary-plane update only.

What do you think about these?

> 
> I guess virtio-gpu must be fixed to use drm_plane_state->fence
> correctly ...

I was thinking about this too, but the current functions (e.g.
virtio_gpu_cmd_transfer_to_host_2d) take a "struct virtio_gpu_fence".
I'm not sure what the best way is to connect drm_plane_state->fence to
virtio_gpu_fence without changing major function interfaces.

> 
> take care,
>   Gerd
> 


* RE: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update
  2022-06-14 19:17     ` Dongwon Kim
@ 2022-06-15  6:09       ` Kasireddy, Vivek
  0 siblings, 0 replies; 8+ messages in thread
From: Kasireddy, Vivek @ 2022-06-15  6:09 UTC (permalink / raw)
  To: Kim, Dongwon, Gerd Hoffmann
  Cc: Dan Carpenter, lkp, dri-devel, Gurchetan Singh

Hi DW,

> 
> On Thu, Jun 09, 2022 at 06:24:43AM +0200, Gerd Hoffmann wrote:
> > On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> > > Having one fence for a vgfb would cause a conflict when multiple planes
> > > referencing the same vgfb (e.g. an Xorg screen covering two displays in
> > > extended mode) are flushed simultaneously. So it makes sense to use a
> > > separate fence for each plane update to prevent this.
> > >
> > > vgfb->fence is not required anymore with the suggested code change, so
> > > both prepare_fb and cleanup_fb are removed, since fence creation/freeing
> > > was all that was done in there.
> >
> > The fences are allocated and released in prepare_fb + cleanup_fb for a
> > reason: atomic_update must not fail.
> 
> If fence allocation fails, it falls back to the non-fence path, so
> the primary-plane update won't fail.
> 
> For the cursor plane update, it returns if the fence is NULL, but we could
> change it to proceed and simply skip the wait, like:
[Kasireddy, Vivek] But the cursor plane update is always tied to a fence the
way it works now, so we have to fail if there is no fence.

> 
> if (fence) {
>     dma_fence_wait(&fence->f, true);
>     dma_fence_put(&fence->f);
> }
> 
> Or maybe I can limit my suggested changes to the primary-plane update only.
> 
> What do you think about these?
> 
> >
> > I guess virtio-gpu must be fixed to use drm_plane_state->fence
> > correctly ...
> 
> I was thinking about this too, but the current functions (e.g.
> virtio_gpu_cmd_transfer_to_host_2d) take a "struct virtio_gpu_fence".
> I'm not sure what the best way is to connect drm_plane_state->fence to
> virtio_gpu_fence without changing major function interfaces.
[Kasireddy, Vivek] FWIU, we cannot use drm_plane_state->fence, as it is used
by the drm core to handle explicit fences. So, I think a cleaner way is to
subclass the base drm_plane_state and move the fence from virtio_gpu_framebuffer
to a new struct virtio_gpu_plane_state. This way, we can create the fence in
prepare_fb() and use it for synchronization in resource_flush; a rough sketch
of this idea follows at the end of this message.

Thanks,
Vivek

> 
> >
> > take care,
> >   Gerd
> >
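
A minimal sketch of the subclassing suggested above (illustrative only; the
struct, macro, and function names below are assumptions, not existing driver
code):

	struct virtio_gpu_plane_state {
		struct drm_plane_state base;
		struct virtio_gpu_fence *fence;
	};

	#define to_virtio_gpu_plane_state(x) \
		container_of(x, struct virtio_gpu_plane_state, base)

	static struct drm_plane_state *
	virtio_gpu_plane_duplicate_state(struct drm_plane *plane)
	{
		struct virtio_gpu_plane_state *new;

		if (WARN_ON(!plane->state))
			return NULL;

		new = kzalloc(sizeof(*new), GFP_KERNEL);
		if (!new)
			return NULL;

		/* copies the base state and takes the fb reference */
		__drm_atomic_helper_plane_duplicate_state(plane, &new->base);

		return &new->base;
	}

prepare_fb() would then allocate the virtio_gpu_fence into the duplicated
plane state instead of the framebuffer, resource_flush would read it back
via to_virtio_gpu_plane_state(plane->state)->fence, and a matching
.atomic_destroy_state would put any leftover fence, keeping atomic_update
itself allocation-free.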

