* [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs
@ 2021-03-31  3:04 Vivek Kasireddy
  2021-03-31  3:04 ` [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob Vivek Kasireddy
  2021-03-31  7:41 ` [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Gerd Hoffmann
  0 siblings, 2 replies; 20+ messages in thread
From: Vivek Kasireddy @ 2021-03-31  3:04 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

If support for Blob resources is available, then dumb BOs created
by the driver can be considered as guest Blobs. And, for guest
Blobs, there is no need to do any transfers or flushes but we do
need to do set_scanout even if the FB has not changed as part of
plane updates.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_gem.c    |  8 ++++++++
 drivers/gpu/drm/virtio/virtgpu_object.c |  3 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c  | 18 +++++++++++-------
 3 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 8502400b2f9c..5f49fb1cce65 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -64,6 +64,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 {
 	struct drm_gem_object *gobj;
 	struct virtio_gpu_object_params params = { 0 };
+	struct virtio_gpu_device *vgdev = dev->dev_private;
 	int ret;
 	uint32_t pitch;
 
@@ -79,6 +80,13 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 	params.height = args->height;
 	params.size = args->size;
 	params.dumb = true;
+
+	if (vgdev->has_resource_blob) {
+		params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
+		params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
+		params.blob = true;
+	}
+
 	ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
 				    &args->handle);
 	if (ret)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 4ff1ec28e630..f648b0e24447 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -254,6 +254,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	}
 
 	if (params->blob) {
+		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
+			bo->guest_blob = true;
+
 		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
 						    ents, nents);
 	} else if (params->virgl) {
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 4e1b17548007..3731f1a6477d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -161,10 +161,11 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 		return;
 
 	bo = gem_to_virtio_gpu_obj(plane->state->fb->obj[0]);
-	if (bo->dumb)
+	if (bo->dumb && !bo->guest_blob)
 		virtio_gpu_update_dumb_bo(vgdev, plane->state, &rect);
 
-	if (plane->state->fb != old_state->fb ||
+	if ((bo->dumb && bo->guest_blob) ||
+	    plane->state->fb != old_state->fb ||
 	    plane->state->src_w != old_state->src_w ||
 	    plane->state->src_h != old_state->src_h ||
 	    plane->state->src_x != old_state->src_x ||
@@ -198,11 +199,14 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
 		}
 	}
 
-	virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
-				      rect.x1,
-				      rect.y1,
-				      rect.x2 - rect.x1,
-				      rect.y2 - rect.y1);
+	if (!bo->guest_blob) {
+		virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
+					      rect.x1,
+					      rect.y1,
+					      rect.x2 - rect.x1,
+					      rect.y2 - rect.y1);
+	}
+
 	virtio_gpu_notify(vgdev);
 }
 
-- 
2.26.2

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob
  2021-03-31  3:04 [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Vivek Kasireddy
@ 2021-03-31  3:04 ` Vivek Kasireddy
  2021-03-31  7:59   ` Gerd Hoffmann
  2021-03-31  7:41 ` [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Gerd Hoffmann
  1 sibling, 1 reply; 20+ messages in thread
From: Vivek Kasireddy @ 2021-03-31  3:04 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

With new use-cases coming up that include virtio-gpu:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592

the FB associated with a Guest blob may have a modifier. Therefore,
this modifier info needs to be included as part of set_scanout_blob.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_display.c | 3 +++
 drivers/gpu/drm/virtio/virtgpu_vq.c      | 3 ++-
 include/uapi/linux/virtio_gpu.h          | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
index a6caebd4a0dd..e2e7e6c5cb91 100644
--- a/drivers/gpu/drm/virtio/virtgpu_display.c
+++ b/drivers/gpu/drm/virtio/virtgpu_display.c
@@ -344,6 +344,9 @@ int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev)
 	vgdev->ddev->mode_config.max_width = XRES_MAX;
 	vgdev->ddev->mode_config.max_height = YRES_MAX;
 
+	if (vgdev->has_resource_blob)
+		vgdev->ddev->mode_config.allow_fb_modifiers = true;
+
 	for (i = 0 ; i < vgdev->num_scanouts; ++i)
 		vgdev_output_init(vgdev, i);
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index cf84d382dd41..462f1beb9c11 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -34,7 +34,7 @@
 #include "virtgpu_drv.h"
 #include "virtgpu_trace.h"
 
-#define MAX_INLINE_CMD_SIZE   96
+#define MAX_INLINE_CMD_SIZE   112
 #define MAX_INLINE_RESP_SIZE  24
 #define VBUFFER_SIZE          (sizeof(struct virtio_gpu_vbuffer) \
 			       + MAX_INLINE_CMD_SIZE		 \
@@ -1294,6 +1294,7 @@ void virtio_gpu_cmd_set_scanout_blob(struct virtio_gpu_device *vgdev,
 	cmd_p->format = cpu_to_le32(format);
 	cmd_p->width  = cpu_to_le32(fb->width);
 	cmd_p->height = cpu_to_le32(fb->height);
+	cmd_p->modifier = cpu_to_le64(fb->modifier);
 
 	for (i = 0; i < 4; i++) {
 		cmd_p->strides[i] = cpu_to_le32(fb->pitches[i]);
diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index 97523a95781d..c6424d769e62 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -409,6 +409,7 @@ struct virtio_gpu_set_scanout_blob {
 	__le32 width;
 	__le32 height;
 	__le32 format;
+	__le64 modifier;
 	__le32 padding;
 	__le32 strides[4];
 	__le32 offsets[4];
-- 
2.26.2


* Re: [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs
  2021-03-31  3:04 [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Vivek Kasireddy
  2021-03-31  3:04 ` [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob Vivek Kasireddy
@ 2021-03-31  7:41 ` Gerd Hoffmann
  2021-04-01  3:51   ` Kasireddy, Vivek
  1 sibling, 1 reply; 20+ messages in thread
From: Gerd Hoffmann @ 2021-03-31  7:41 UTC (permalink / raw)
  To: Vivek Kasireddy; +Cc: dri-devel

On Tue, Mar 30, 2021 at 08:04:38PM -0700, Vivek Kasireddy wrote:
> If support for Blob resources is available, then dumb BOs created
> by the driver can be considered as guest Blobs. And, for guest
> Blobs, there is no need to do any transfers or flushes

No.  VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in
virtio terms) *can* create a shared mapping.  So, the guest still
needs to send transfer commands, and then the device can shortcut the
transfer commands on the host side in case a shared mapping exists.

flush commands are still needed for dirty tracking.
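To restate this flow in code form -- a self-contained toy model of the control flow only, with hypothetical names (the real driver functions are virtio_gpu_cmd_transfer_to_host_2d() and virtio_gpu_cmd_resource_flush()):

```c
#include <assert.h>
#include <stdbool.h>

/* Which commands one primary-plane update emits, in this toy model. */
struct cmds {
	bool transfer_to_host_2d;
	bool resource_flush;
};

/*
 * Sketch of the behavior described above: dumb BOs always get a transfer
 * (the device may shortcut it when a shared mapping exists), and every
 * update gets a flush so the device can do dirty tracking.
 */
struct cmds plane_update(bool bo_is_dumb, bool bo_is_guest_blob)
{
	struct cmds c;

	/* guest_blob is deliberately NOT used to skip any command */
	(void)bo_is_guest_blob;
	c.transfer_to_host_2d = bo_is_dumb;
	c.resource_flush = true;
	return c;
}
```

The point is that the guest-side command stream is identical whether or not the BO is a guest blob; only the device-side handling of those commands may differ.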

> but we do need to do set_scanout even if the FB has not changed as
> part of plane updates.

Sounds like you workaround host bugs.  This should not be needed with
properly implemented flush.

take care,
  Gerd


* Re: [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob
  2021-03-31  3:04 ` [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob Vivek Kasireddy
@ 2021-03-31  7:59   ` Gerd Hoffmann
  2021-04-02  7:55     ` Zhang, Tina
  0 siblings, 1 reply; 20+ messages in thread
From: Gerd Hoffmann @ 2021-03-31  7:59 UTC (permalink / raw)
  To: Vivek Kasireddy; +Cc: dri-devel, gurchetansingh

  Hi,

> -#define MAX_INLINE_CMD_SIZE   96
> +#define MAX_INLINE_CMD_SIZE   112

Separate patch please.

> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h
> @@ -409,6 +409,7 @@ struct virtio_gpu_set_scanout_blob {
>  	__le32 width;
>  	__le32 height;
>  	__le32 format;
> +	__le64 modifier;
>  	__le32 padding;
>  	__le32 strides[4];
>  	__le32 offsets[4];

Nope.  This breaks the virtio protocol.

We'll need a virtio feature flag to negotiate modifier support between
guest and host.  When supported by both sides it can be used.  The new
field should be appended, not inserted in the middle.  Or we create a
new virtio_gpu_set_scanout_blob2 struct with new command for this.
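For illustration, a hypothetical virtio_gpu_set_scanout_blob2 that appends the modifier at the end would keep all existing field offsets stable (a sketch, not spec text; only the fields from the hunk above are shown):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the kernel's little-endian types, for illustration only. */
typedef uint32_t le32;
typedef uint64_t le64;

/* Tail of the struct as it exists today (fields before "width" omitted). */
struct scanout_blob_v1 {
	le32 width;
	le32 height;
	le32 format;
	le32 padding;
	le32 strides[4];
	le32 offsets[4];
};

/*
 * Hypothetical v2: the modifier is appended at the end, never inserted in
 * the middle, and would only be sent once a feature flag negotiated it.
 */
struct scanout_blob_v2 {
	le32 width;
	le32 height;
	le32 format;
	le32 padding;
	le32 strides[4];
	le32 offsets[4];
	le64 modifier;	/* new, feature-flag gated */
};
```

With this layout every pre-existing field keeps its offset and the new field starts exactly at sizeof(struct scanout_blob_v1), so a device that does not know about the modifier still finds every field it expects in place.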

Also: I guess the device should provide a list of supported modifiers,
maybe as capset?

Note: I think it is already possible to create resources with modifiers
when using virgl commands for that.  Allowing modifiers with virgl=off
too makes sense given your use case, but we should not use diverging
approaches for virgl=on vs. virgl=off.  Specifically I'm not sure
virtio_gpu_set_scanout_blob is the right place, I think with virgl=on
the modifier is linked to the resource not the scanout ...

Cc'ing Gurchetan Singh for comments.

take care,
  Gerd


* RE: [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs
  2021-03-31  7:41 ` [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Gerd Hoffmann
@ 2021-04-01  3:51   ` Kasireddy, Vivek
  2021-04-01  6:53     ` Gerd Hoffmann
  0 siblings, 1 reply; 20+ messages in thread
From: Kasireddy, Vivek @ 2021-04-01  3:51 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: dri-devel

Hi Gerd,

> > If support for Blob resources is available, then dumb BOs created by
> > the driver can be considered as guest Blobs. And, for guest Blobs,
> > there is no need to do any transfers or flushes
> 
> No.  VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in virtio
> terms) *can* create a shared mapping.  So, the guest still needs to send transfer
> commands, and then the device can shortcut the transfer commands on the host side in
> case a shared mapping exists.
[Kasireddy, Vivek] Ok. IIUC, are you saying that the device may or may not create a shared
mapping (meaning res->image) and that the driver should not make any assumptions about
that and thus still do the transfers and flushes?

Also, could you please briefly explain what does VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE
mean given that the spec does not describe these blob_flags clearly? This is what the spec says:

"The driver MUST inform the device if the blob resource is used for
memory access, sharing between driver instances and/or sharing with
other devices. This is done via the \field{blob_flags} field."

And, what should be the default blob_flags value for a dumb bo if the userspace does not
specify them?

> 
> flush commands are still needed for dirty tracking.
> 
> > but we do need to do set_scanout even if the FB has not changed as
> > part of plane updates.
> 
> Sounds like you workaround host bugs.  This should not be needed with properly
> implemented flush.
[Kasireddy, Vivek] With the patches I tested with:
https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg09786.html

I noticed that if we do not have res->image and only have res->blob, we have to
re-submit the blob/dmabuf and update the displaysurface if the guest made updates to it
(in this case the same FB), which can only happen if we call set_scanout_blob. IIUC, flush
only marks the area as dirty but does not re-submit the updated buffer/blob, and I see
a flicker if I let it do dpy_gfx_update().

Thanks,
Vivek

> 
> take care,
>   Gerd


* Re: [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs
  2021-04-01  3:51   ` Kasireddy, Vivek
@ 2021-04-01  6:53     ` Gerd Hoffmann
  2021-04-06 20:36       ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2) Vivek Kasireddy
  0 siblings, 1 reply; 20+ messages in thread
From: Gerd Hoffmann @ 2021-04-01  6:53 UTC (permalink / raw)
  To: Kasireddy, Vivek; +Cc: dri-devel

  Hi,

> > No.  VIRTGPU_BLOB_FLAG_USE_SHAREABLE means the host (aka device in virtio
> > terms) *can* create a shared mapping.  So, the guest still needs to send transfer
> > commands, and then the device can shortcut the transfer commands on the host side in
> > case a shared mapping exists.
> [Kasireddy, Vivek] Ok. IIUC, are you saying that the device may or may not create a shared
> mapping (meaning res->image) and that the driver should not make any assumptions about
> that and thus still do the transfers and flushes?

Yes.

> Also, could you please briefly explain what does
> VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE mean given that the spec does not
> describe these blob_flags clearly? This is what the spec says:

This matters for VIRTIO_GPU_BLOB_MEM_HOST3D resources only.
VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE indicates the guest wants to map the
resource for cpu access.  When the flag is not set, only gpu access is
needed.

> And, what should be the default blob_flags value for a dumb bo if the
> userspace does not specify them?

Just VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE is fine for
VIRTIO_GPU_BLOB_MEM_GUEST.

> [Kasireddy, Vivek] With the patches I tested with:
> https://lists.nongnu.org/archive/html/qemu-devel/2021-03/msg09786.html

Saw the series, not looked in detail yet.

> I noticed that if we do not have res->image and only have res->blob, we have to 
> re-submit the blob/dmabuf and update the displaysurface if guest made updates to it 
> (in this case same FB)

flush must call dpy_gfx_update() or dpy_gl_update().

Oh, and make sure you use qemu master (or 6.0-rc).  In 5.2 + older
display updates might not work properly due to a missing glFlush()
in qemu.

take care,
  Gerd


* RE: [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob
  2021-03-31  7:59   ` Gerd Hoffmann
@ 2021-04-02  7:55     ` Zhang, Tina
  2021-04-02 16:07       ` Gurchetan Singh
  0 siblings, 1 reply; 20+ messages in thread
From: Zhang, Tina @ 2021-04-02  7:55 UTC (permalink / raw)
  To: Gerd Hoffmann, Kasireddy, Vivek; +Cc: dri-devel, gurchetansingh



> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Gerd
> Hoffmann
> Sent: Wednesday, March 31, 2021 4:00 PM
> To: Kasireddy, Vivek <vivek.kasireddy@intel.com>
> Cc: dri-devel@lists.freedesktop.org; gurchetansingh@chromium.org
> Subject: Re: [PATCH 2/2] drm/virtio: Include modifier as part of
> set_scanout_blob
> 
>   Hi,
> 
> > -#define MAX_INLINE_CMD_SIZE   96
> > +#define MAX_INLINE_CMD_SIZE   112
> 
> Separate patch please.
> 
> > --- a/include/uapi/linux/virtio_gpu.h
> > +++ b/include/uapi/linux/virtio_gpu.h
> > @@ -409,6 +409,7 @@ struct virtio_gpu_set_scanout_blob {
> >  	__le32 width;
> >  	__le32 height;
> >  	__le32 format;
> > +	__le64 modifier;
> >  	__le32 padding;
> >  	__le32 strides[4];
> >  	__le32 offsets[4];
> 
> Nope.  This breaks the virtio protocol.
> 
> We'll need a virtio feature flag to negotiate modifier support between guest and
> host.  When supported by both sides it can be used.  The new field should be
> appended, not inserted in the middle.  Or we create a new
> virtio_gpu_set_scanout_blob2 struct with new command for this.
> 
> Also: I guess the device should provide a list of supported modifiers, maybe as
> capset?

Hi,

I agree that we need a way to get the supported modifiers' info to guest user space mesa, specifically to the iris driver working in kmsro mode.
So, from the guest mesa iris driver's point of view, the workflow may look like this:
1) Get the modifier info from a display device through the kms_fd. In our case, the kms_fd comes from the virtio-gpu driver, so the info should come from the virtio-gpu device.
2) When the guest wants to allocate a scan-out buffer, the iris driver needs to check which format and modifier are suitable to be used.
3) Then, iris uses the kms_fd to allocate the scan-out buffer with the desired format.
     Maybe this time we'd better consider using VIRTGPU_RESOURCE_CREATE to allocate the buffer instead of using DRM_IOCTL_MODE_CREATE_DUMB. It seems VIRTGPU_RESOURCE_CREATE can give more fb info.

BR,
Tina

> 
> Note: I think it is already possible to create resources with modifiers when using
> virgl commands for that.  Allowing modifiers with virgl=off too makes sense
> given your use case, but we should not use diverging approaches for virgl=on vs.
> virgl=off.  Specifically I'm not sure virtio_gpu_set_scanout_blob is the right place,
> I think with virgl=on the modifier is linked to the resource not the scanout ...
> 
> Cc'ing Gurchetan Singh for comments.
> 
> take care,
>   Gerd
> 

* Re: [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob
  2021-04-02  7:55     ` Zhang, Tina
@ 2021-04-02 16:07       ` Gurchetan Singh
  0 siblings, 0 replies; 20+ messages in thread
From: Gurchetan Singh @ 2021-04-02 16:07 UTC (permalink / raw)
  To: Zhang, Tina; +Cc: Gerd Hoffmann, dri-devel, Kasireddy, Vivek



On Fri, Apr 2, 2021 at 12:56 AM Zhang, Tina <tina.zhang@intel.com> wrote:

>
>
> > -----Original Message-----
> > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
> Gerd
> > Hoffmann
> > Sent: Wednesday, March 31, 2021 4:00 PM
> > To: Kasireddy, Vivek <vivek.kasireddy@intel.com>
> > Cc: dri-devel@lists.freedesktop.org; gurchetansingh@chromium.org
> > Subject: Re: [PATCH 2/2] drm/virtio: Include modifier as part of
> > set_scanout_blob
> >
> >   Hi,
> >
> > > -#define MAX_INLINE_CMD_SIZE   96
> > > +#define MAX_INLINE_CMD_SIZE   112
> >
> > Separate patch please.
> >
> > > --- a/include/uapi/linux/virtio_gpu.h
> > > +++ b/include/uapi/linux/virtio_gpu.h
> > > @@ -409,6 +409,7 @@ struct virtio_gpu_set_scanout_blob {
> > >     __le32 width;
> > >     __le32 height;
> > >     __le32 format;
> > > +   __le64 modifier;
> > >     __le32 padding;
> > >     __le32 strides[4];
> > >     __le32 offsets[4];
> >
> > Nope.  This breaks the virtio protocol.
> >
> > We'll need a virtio feature flag to negotiate modifier support between
> guest and
> > host.  When supported by both sides it can be used.  The new field
> should be
> > appended, not inserted in the middle.  Or we create a new
> > virtio_gpu_set_scanout_blob2 struct with new command for this.
> >
> > Also: I guess the device should provide a list of supported modifiers,
> maybe as
> > capset?
>
> Hi,
>
> I agree that we need a way to get the supported modifiers' info to guest
> user space mesa, specifically to the iris driver working in kmsro mode.
> So, from the guest mesa iris driver's point of view, the workflow may look
> like this:
> 1) Get the modifier info from a display device through the kms_fd. In our
> case, the kms_fd comes from the virtio-gpu driver. So the info should come
> from virtio-gpu device.
> 2) When guest wants to allocate a scan-out buffer, the iris driver needs
> to check which format and modifier is suitable to be used.
> 3) Then, iris uses the kms_fd to allocate the scan-out buffer with the
> desired format.
>      Maybe this time we'd better consider using VIRTGPU_RESOURCE_CREATE to
> allocate buffer instead of using DRM_IOCTL_MODE_CREATE_DUMB. It seems
> VIRTGPU_RESOURCE_CREATE can give more fb info.
>

Thank you Tina and Vivek for looking into this!  Added some commentary on
your Mesa side MR.


>
> BR,
> Tina
>
> >
> > Note: I think it is already possible to create resources with modifiers
> when using
> > virgl commands for that.  Allowing modifiers with virgl=off too makes
> sense
> > given your use case, but we should not use diverging approaches for
> virgl=on vs.
> > virgl=off.  Specifically I'm not sure virtio_gpu_set_scanout_blob is the
> right place,
> > I think with virgl=on the modifier is linked to the resource not the
> scanout ...
> >
> > Cc'ing Gurchetan Singh for comments.
> >
> > take care,
> >   Gerd
> >
>


* [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
  2021-04-01  6:53     ` Gerd Hoffmann
@ 2021-04-06 20:36       ` Vivek Kasireddy
  2021-04-07  0:34         ` Gurchetan Singh
  0 siblings, 1 reply; 20+ messages in thread
From: Vivek Kasireddy @ 2021-04-06 20:36 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

If support for Blob resources is available, then dumb BOs created
by the driver can be considered as guest Blobs.

v2: Don't skip transfer and flush commands as part of plane update
as the device may have created a shared mapping. (Gerd)

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 8 ++++++++
 drivers/gpu/drm/virtio/virtgpu_object.c | 3 +++
 2 files changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 8502400b2f9c..5f49fb1cce65 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -64,6 +64,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 {
 	struct drm_gem_object *gobj;
 	struct virtio_gpu_object_params params = { 0 };
+	struct virtio_gpu_device *vgdev = dev->dev_private;
 	int ret;
 	uint32_t pitch;
 
@@ -79,6 +80,13 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 	params.height = args->height;
 	params.size = args->size;
 	params.dumb = true;
+
+	if (vgdev->has_resource_blob) {
+		params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
+		params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
+		params.blob = true;
+	}
+
 	ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
 				    &args->handle);
 	if (ret)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 4ff1ec28e630..f648b0e24447 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -254,6 +254,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	}
 
 	if (params->blob) {
+		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
+			bo->guest_blob = true;
+
 		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
 						    ents, nents);
 	} else if (params->virgl) {
-- 
2.26.2


* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
  2021-04-06 20:36       ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2) Vivek Kasireddy
@ 2021-04-07  0:34         ` Gurchetan Singh
  2021-04-08  9:27           ` Gerd Hoffmann
  0 siblings, 1 reply; 20+ messages in thread
From: Gurchetan Singh @ 2021-04-07  0:34 UTC (permalink / raw)
  To: Vivek Kasireddy; +Cc: Gerd Hoffmann, ML dri-devel



On Tue, Apr 6, 2021 at 1:47 PM Vivek Kasireddy <vivek.kasireddy@intel.com>
wrote:

> If support for Blob resources is available, then dumb BOs created
> by the driver can be considered as guest Blobs.
>
> v2: Don't skip transfer and flush commands as part of plane update
> as the device may have created a shared mapping. (Gerd)
>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_gem.c    | 8 ++++++++
>  drivers/gpu/drm/virtio/virtgpu_object.c | 3 +++
>  2 files changed, 11 insertions(+)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c
> b/drivers/gpu/drm/virtio/virtgpu_gem.c
> index 8502400b2f9c..5f49fb1cce65 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> @@ -64,6 +64,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file
> *file_priv,
>  {
>         struct drm_gem_object *gobj;
>         struct virtio_gpu_object_params params = { 0 };
> +       struct virtio_gpu_device *vgdev = dev->dev_private;
>         int ret;
>         uint32_t pitch;
>
> @@ -79,6 +80,13 @@ int virtio_gpu_mode_dumb_create(struct drm_file
> *file_priv,
>         params.height = args->height;
>         params.size = args->size;
>         params.dumb = true;
> +
> +       if (vgdev->has_resource_blob) {
> +               params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
> +               params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
>

This creates some log spam with crosvm + virgl_3d + vanilla linux, since
transfers don't work for guest blobs.  Two options:

a) Add vgdev->has_virgl_3d check and don't create a guest blob in that case.
b) The interactions between TRANSFER_TO_HOST_2D and VIRTGPU_BLOB_MEM_GUEST
are a bit under-defined in the spec.  Though since you don't have a host
side resource, you can safely skip the transfer and crosvm can be modified
to do the right thing in case of RESOURCE_FLUSH.

It makes a ton of sense to have an explicit udmabuf-like flag
("BLOB_FLAG_CREATE_GUEST_HANDLE" or "BLOB_FLAG_HANDLE_FROM_GUEST" -- we want
to be host OS agnostic -- any other ideas?), especially with 3d mode.  For
now, implicit udmabuf + dumb should be fine since the QEMU patches have
been floating around for a while and should land soon for future use cases.



> +               params.blob = true;
> +       }
>



> +
>         ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
>                                     &args->handle);
>         if (ret)
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c
> b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 4ff1ec28e630..f648b0e24447 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -254,6 +254,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device
> *vgdev,
>         }
>
>         if (params->blob) {
> +               if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
> +                       bo->guest_blob = true;
> +
>                 virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
>                                                     ents, nents);
>         } else if (params->virgl) {
> --
> 2.26.2
>


* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
  2021-04-07  0:34         ` Gurchetan Singh
@ 2021-04-08  9:27           ` Gerd Hoffmann
  2021-04-09  0:31             ` Gurchetan Singh
  0 siblings, 1 reply; 20+ messages in thread
From: Gerd Hoffmann @ 2021-04-08  9:27 UTC (permalink / raw)
  To: Gurchetan Singh; +Cc: Vivek Kasireddy, ML dri-devel

> > +
> > +       if (vgdev->has_resource_blob) {
> > +               params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
> > +               params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
> >
> 
> This creates some log spam with crosvm + virgl_3d + vanilla linux, since
> transfers don't work for guest blobs.  Two options:
> 
> a) Add vgdev->has_virgl_3d check and don't create a guest blob in that case.
> b) The interactions between TRANSFER_TO_HOST_2D and VIRTGPU_BLOB_MEM_GUEST
> are a bit under-defined in the spec.

Indeed.

> Though since you don't have a host
> side resource, you can safely skip the transfer and crosvm can be modified
> to do the right thing in case of RESOURCE_FLUSH.

IIRC the VIRTGPU_BLOB_FLAG_USE_SHAREABLE flag means that the host *can*
create a shared mapping (i.e. the host seeing guest-side changes without
explicit transfer doesn't cause problems for the guest).  It does not
mean the host *must* create a shared mapping (note that there is no
negotiation of whether the host supports shared mappings or not).

So the transfer calls are still needed, and the host can decide to
shortcut them in case it can create a shared mapping.  In case there is
no shared mapping (say due to missing udmabuf support) the host can
fallback to copying.

So I think crosvm should be fixed to not consider transfer commands for
VIRTGPU_BLOB_MEM_GUEST resources an error.
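A toy model of the device-side behavior being described -- hypothetical names, not crosvm or QEMU code -- where the transfer is shortcut when a shared mapping exists and falls back to copying otherwise:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define FB_SIZE 16

/* Device-side view of a BLOB_MEM_GUEST resource in this toy model. */
struct resource {
	unsigned char guest_pages[FB_SIZE]; /* guest-visible backing store */
	unsigned char host_copy[FB_SIZE];   /* used only without a shared mapping */
	bool shared_mapping;                /* e.g. udmabuf available */
};

/*
 * TRANSFER_TO_HOST_2D: a no-op when a shared mapping exists, a copy when
 * it does not -- but never an error for BLOB_MEM_GUEST resources.
 */
void transfer_to_host(struct resource *res)
{
	if (!res->shared_mapping)
		memcpy(res->host_copy, res->guest_pages, FB_SIZE);
}

/* RESOURCE_FLUSH scans out from whatever the device currently uses. */
const unsigned char *scanout_data(const struct resource *res)
{
	return res->shared_mapping ? res->guest_pages : res->host_copy;
}

/* Returns true if a guest write followed by a transfer becomes visible. */
bool updates_visible(bool shared)
{
	struct resource res = { .shared_mapping = shared };

	memset(res.guest_pages, 0xab, FB_SIZE);
	transfer_to_host(&res);
	return memcmp(scanout_data(&res), res.guest_pages, FB_SIZE) == 0;
}
```

Under this reading the guest flow is identical in both cases; whether the transfer is a shortcut or a copy is purely a device implementation detail.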

> It makes a ton of sense to have an explicit udmabuf-like flag
> ("BLOB_FLAG_CREATE_GUEST_HANDLE" or "BLOB_FLAG_HANDLE_FROM_GUEST" -- we want
> to be host OS agnostic -- any other ideas?), especially with 3d mode.

Why?  Can't this be simply an host implementation detail which the guest
doesn't need to worry about?

take care,
  Gerd


* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
  2021-04-08  9:27           ` Gerd Hoffmann
@ 2021-04-09  0:31             ` Gurchetan Singh
  2021-04-09  7:48               ` Gerd Hoffmann
  0 siblings, 1 reply; 20+ messages in thread
From: Gurchetan Singh @ 2021-04-09  0:31 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: Vivek Kasireddy, ML dri-devel, Zhang, Tina



On Thu, Apr 8, 2021 at 2:27 AM Gerd Hoffmann <kraxel@redhat.com> wrote:

> > > +
> > > +       if (vgdev->has_resource_blob) {
> > > +               params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
> > > +               params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
> > >
> >
> > This creates some log spam with crosvm + virgl_3d + vanilla linux, since
> > transfers don't work for guest blobs.  Two options:
> >
> > a) Add vgdev->has_virgl_3d check and don't create a guest blob in that
> case.
> > b) The interactions between TRANSFER_TO_HOST_2D and
> VIRTGPU_BLOB_MEM_GUEST
> > are a bit under-defined in the spec.
>
> Indeed.
>
> > Though since you don't have a host
> > side resource, you can safely skip the transfer and crosvm can be
> modified
> > to do the right thing in case of RESOURCE_FLUSH.
>
> IIRC the VIRTGPU_BLOB_FLAG_USE_SHAREABLE flag means that the host *can*
> create a shared mapping (i.e. the host seeing guest-side changes without
> explicit transfer doesn't cause problems for the guest).  It does not
> mean the host *must* create a shared mapping (note that there is no
> negotiation of whether the host supports shared mappings or not).
>

VIRTGPU_BLOB_FLAG_USE_SHAREABLE means guest userspace intends to share the
blob resource with another virtgpu driver instance via drmPrimeHandleToFd.
It's a rough analogue to VkExportMemoryAllocateInfoKHR or
PIPE_BIND_USE_SHARED.

The dumb case is a bit interesting because there is no userspace to provide
that information.  Though I think even VIRTGPU_BLOB_FLAG_USE_MAPPABLE is
fine, since for my vanilla Linux setup, I'm seeing the guest blob is mapped
only and drmPrimeHandleToFd(..) isn't called on it.  We can also modify the
virtio-gpu spec to say "blob_flags may be undefined/zero for BLOB_MEM_GUEST
when 3D mode is not on".

Though all options work for me.  The implicit dumb blob udmabuf case for me
is more about advancing blob development rather than being super rigorous.


>
> So the transfer calls are still needed, and the host can decide to
> shortcut them in case it can create a shared mapping.  In case there is
> no shared mapping (say due to missing udmabuf support) the host can
> fallback to copying.
>

Transfers are a bit under-defined for BLOB_MEM_GUEST.  Even without udmabuf
on the host, there is no host side resource for guest-only blobs?  Before
blob resources, the dumb flow was:

1) update guest side resource
2) TRANSFER_TO_HOST_2D to copy guest side contents to host side private
resource [Pixman??]
3) RESOURCE_FLUSH to copy the host-side contents to the framebuffer and
page-flip

At least for crosvm, this is possible:

1) update guest side resource
2) RESOURCE_FLUSH to copy the guest-side contents to the framebuffer and
pageflip

With implicit udmabuf, it may be possible to do this:

1) update guest side resource
2) RESOURCE_FLUSH to page-flip

> So I think crosvm should be fixed to not consider transfer commands for
> VIRTGPU_BLOB_MEM_GUEST resources an error.
>

It's a simple change to make and we can definitely do it, if TRANSFER_2D is
helpful for the QEMU case.  I haven't looked at the QEMU side patches.


> > It makes a ton of sense to have an explicit udmabuf-like flag
> > ("BLOB_FLAG_CREATE_GUEST_HANDLE" or "BLOB_FLAG_HANDLE_FROM_GUEST" -- we want
> > to be host-OS agnostic -- any other ideas?), especially with 3d mode.
>
> Why?  Can't this be simply a host implementation detail which the guest
> doesn't need to worry about?
>

For 3D mode, it's desirable to create an {EGL image}/{VkDeviceMemory} from
guest memory for certain zero-copy use cases.  If no explicit
guarantee exists for the paravirtualized user-space that there will be a
host-side OS-specific handle associated with guest memory, then guest user
space must fall back to old-style transfers.

For the PCI-passthrough + guest blob case, the end goal is to share it with
the host compositor.  If there is no guarantee the guest memory can be
converted to an OS-handle (to share with the host compositor), then I think
the guest user space should fall back to another technique involving
memcpy() to share the memory.

So essentially, thinking for two new protocol additions:

F_CREATE_GUEST_HANDLE (or F_HANDLE_FROM_GUEST) --> means an OS-specific
udmabuf-like mechanism exists on the host.

BLOB_FLAG_CREATE_GUEST_HANDLE (or BLOB_FLAG_HANDLE_FROM_GUEST) --> tells
host userspace "you must create a udmabuf" [or OS-specific equivalent] upon
success

Though much testing/work remains (both with the PCI passthough case +
virgl3d case), could be a good chance to float the nomenclature by
everyone.  Happy to collaborate further with Tina/Vivek on making such a
thing happen.


>
> take care,
>   Gerd
>
>


* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
  2021-04-09  0:31             ` Gurchetan Singh
@ 2021-04-09  7:48               ` Gerd Hoffmann
  2021-04-13  0:58                 ` Gurchetan Singh
  0 siblings, 1 reply; 20+ messages in thread
From: Gerd Hoffmann @ 2021-04-09  7:48 UTC (permalink / raw)
  To: Gurchetan Singh; +Cc: Vivek Kasireddy, ML dri-devel, Zhang, Tina

  Hi,

> > IIRC the VIRTGPU_BLOB_FLAG_USE_SHAREABLE flag means that the host *can*
> > create a shared mapping (i.e. the host seeing guest-side changes without
> > explicit transfer doesn't cause problems for the guest).  It does not
> > mean the host *must* create a shared mapping (note that there is no
> > negotiation of whether the host supports shared mappings or not).
> >
> 
> VIRTGPU_BLOB_FLAG_USE_SHAREABLE means guest userspace intends to share the
> blob resource with another virtgpu driver instance via drmPrimeHandleToFd.
> It's a rough analogue to VkExportMemoryAllocateInfoKHR or
> PIPE_BIND_USE_SHARED.

Oh.  My memory was failing me then.  We should *really* clarify the spec
for BLOB_MEM_GUEST.

So shared mappings are allowed for all BLOB_MEM_GUEST resources, right?

> > So the transfer calls are still needed, and the host can decide to
> > shortcut them in case it can create a shared mapping.  In case there is
> > no shared mapping (say due to missing udmabuf support) the host can
> > fallback to copying.
> 
> Transfers are a bit under-defined for BLOB_MEM_GUEST.  Even without udmabuf
> on the host, there is no host side resource for guest-only blobs?  Before
> blob resources, the dumb flow was:
> 
> 1) update guest side resource
> 2) TRANSFER_TO_HOST_2D to copy guest side contents to host side private
> resource [Pixman??]
> 3) RESOURCE_FLUSH to copy the host-side contents to the framebuffer and
> page-flip

Yes.

> At least for crosvm, this is possible:
> 
> 1) update guest side resource
> 2) RESOURCE_FLUSH to copy the guest-side contents to the framebuffer and
> pageflip
> 
> With implicit udmabuf, it may be possible to do this:
> 
> 1) update guest side resource
> 2) RESOURCE_FLUSH to page-flip
> 
> > So I think crosvm should be fixed to not consider transfer commands for
> > VIRTGPU_BLOB_MEM_GUEST resources an error.
> 
> It's a simple change to make and we can definitely do it, if TRANSFER_2D is
> helpful for the QEMU case.  I haven't looked at the QEMU side patches.

Well, we have two different cases:

  (1) No udmabuf available.  qemu will have a host-side shadow then and
      the workflow will be largely identical to the non-blob resource
      workflow.

  (2) With udmabuf support.  qemu can create udmabufs for the resources,
      mmap() the dmabuf to get a linear mapping, create a pixman buffer
      backed by that dmabuf (no copying needed then).  Depending on
      capabilities pass either the pixman image (gl=off) or the dmabuf
      handle (gl=on) to the UI code to actually show the guest display.

The guest doesn't need to know any of this, it'll just send transfer and
flush commands.  In case (1) qemu must process the transfer commands and
for case (2) qemu can simply ignore them.

> For the PCI-passthrough + guest blob case, the end goal is to share it with
> the host compositor.  If there is no guarantee the guest memory can be
> converted to an OS-handle (to share with the host compositor), then I think
> the guest user space should fallback to another technique involving
> memcpy() to share the memory.

This is what happens today (using non-blob resources).

> So essentially, thinking for two new protocol additions:
> 
> F_CREATE_GUEST_HANDLE (or F_HANDLE_FROM_GUEST) --> means an OS-specific
> udmabuf-like mechanism exists on the host.
> 
> BLOB_FLAG_CREATE_GUEST_HANDLE (or BLOB_FLAG_HANDLE_FROM_GUEST)--> tells
> host userspace "you must create a udmabuf" [or OS-specific equivalent] upon
> success

Again:  Why do we actually need that?  Is there any benefit other than
the guest knowing it doesn't need to send transfer commands?

I see the whole udmabuf thing as a host-side performance optimization
and I think this should be fully transparent to the guest as the host
can easily just ignore the transfer commands.  Given we batch commands
the extra commands don't lead to extra context switches, so there
shouldn't be much overhead.

If we really want to make the guest aware of the host's udmabuf state I
think this should be designed the other way around:  Add some way for
the host to tell the guest transfer commands are not needed for a
specific BLOB_MEM_GUEST resource.

take care,
  Gerd


* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
  2021-04-09  7:48               ` Gerd Hoffmann
@ 2021-04-13  0:58                 ` Gurchetan Singh
       [not found]                   ` <BN7PR11MB2786499E7902CE53326ACE97894F9@BN7PR11MB2786.namprd11.prod.outlook.com>
  2021-04-13  5:26                   ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3) Vivek Kasireddy
  0 siblings, 2 replies; 20+ messages in thread
From: Gurchetan Singh @ 2021-04-13  0:58 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: Vivek Kasireddy, ML dri-devel, Zhang, Tina



On Fri, Apr 9, 2021 at 12:48 AM Gerd Hoffmann <kraxel@redhat.com> wrote:

>   Hi,
>
> > > IIRC the VIRTGPU_BLOB_FLAG_USE_SHAREABLE flag means that the host *can*
> > > create a shared mapping (i.e. the host seeing guest-side changes without
> > > explicit transfer doesn't cause problems for the guest).  It does not
> > > mean the host *must* create a shared mapping (note that there is no
> > > negotiation of whether the host supports shared mappings or not).
> > >
> >
> > VIRTGPU_BLOB_FLAG_USE_SHAREABLE means guest userspace intends to share
> the
> > blob resource with another virtgpu driver instance via
> drmPrimeHandleToFd.
> > It's a rough analogue to VkExportMemoryAllocateInfoKHR or
> > PIPE_BIND_USE_SHARED.
>
> Oh.  My memory was failing me then.  We should *really* clarify the spec
> for BLOB_MEM_GUEST.


> So shared mappings are allowed for all BLOB_MEM_GUEST resources, right?
>

The guest iovecs are always shared with the host, so they may be copied
to/from directly depending on the operation.  In the case of RESOURCE_FLUSH
+ BLOB_MEM_GUEST, it could be a copy from the guest iovecs to the host
framebuffer [host framebuffer != host shadow memory].


>
> > > So the transfer calls are still needed, and the host can decide to
> > > shortcut them in case it can create a shared mapping.  In case there is
> > > no shared mapping (say due to missing udmabuf support) the host can
> > > fallback to copying.
> >
> > Transfers are a bit under-defined for BLOB_MEM_GUEST.  Even without
> udmabuf
> > on the host, there is no host side resource for guest-only blobs?  Before
> > blob resources, the dumb flow was:
> >
> > 1) update guest side resource
> > 2) TRANSFER_TO_HOST_2D to copy guest side contents to host side private
> > resource [Pixman??]
> > 3) RESOURCE_FLUSH to copy the host-side contents to the framebuffer and
> > page-flip
>
> Yes.
>
> > At least for crosvm, this is possible:
> >
> > 1) update guest side resource
> > 2) RESOURCE_FLUSH to copy the guest-side contents to the framebuffer and
> > pageflip
> >
> > With implicit udmabuf, it may be possible to do this:
> >
> > 1) update guest side resource
> > 2) RESOURCE_FLUSH to page-flip
> >
> > > So I think crosvm should be fixed to not consider transfer commands for
> > > VIRTGPU_BLOB_MEM_GUEST resources an error.
> >
> > It's a simple change to make and we can definitely do it, if TRANSFER_2D
> is
> > helpful for the QEMU case.  I haven't looked at the QEMU side patches.
>
> Well, we have two different cases:
>
>   (1) No udmabuf available.  qemu will have a host-side shadow then and
>       the workflow will be largely identical to the non-blob resource
>       workflow.
>

I think this is the key difference.  With BLOB_MEM_GUEST, crosvm can only
have a guest side iovecs and no host-side shadow memory.  With
BLOB_MEM_GUEST_HOST3D, host-side shadow memory will exist.

I guess it boils down to the Pixman dependency.  crosvm sits right on top of
display APIs (X, wayland) rather than having intermediary layers.  Adding a
new Pixman API takes time too.

There's a bunch of options:

1) Don't use BLOB_MEM_GUEST dumb buffers in 3D mode.
2) virglrenderer or crosvm modified to implicitly ignore
TRANSFER_TO_HOST_2D for BLOB_MEM_GUEST when in 3D mode.
3) It's probably possible to create an implicit udmabuf
for RESOURCE_CREATE_2D resources and ignore the transfer there too.  The
benefit of this is TRANSFER_TO_HOST_2D makes a ton of sense for non-blob
resources.  No kernel side change needed here, just QEMU.
4) modify QEMU display integration

I would choose (1) since it solves the log spam problem and it advances
blob support in QEMU.  Though I leave the decision to QEMU devs.


>
>   (2) With udmabuf support.  qemu can create udmabufs for the resources,
>       mmap() the dmabuf to get a linear mapping, create a pixman buffer
>       backed by that dmabuf (no copying needed then).  Depending on
>       capabilities pass either the pixman image (gl=off) or the dmabuf
>       handle (gl=on) to the UI code to actually show the guest display.
>
> The guest doesn't need to know any of this, it'll just send transfer and
> flush commands.  In case (1) qemu must process the transfer commands and
> for case (2) qemu can simply ignore them.
>
> > For the PCI-passthrough + guest blob case, the end goal is to share it
> with
> > the host compositor.  If there is no guarantee the guest memory can be
> > converted to an OS-handle (to share with the host compositor), then I
> think
> > the guest user space should fallback to another technique involving
> > memcpy() to share the memory.
>
> This is what happens today (using non-blob resources).
>
> > So essentially, thinking for two new protocol additions:
> >
> > F_CREATE_GUEST_HANDLE (or F_HANDLE_FROM_GUEST) --> means an OS-specific
> > udmabuf-like mechanism exists on the host.
> >
> > BLOB_FLAG_CREATE_GUEST_HANDLE (or BLOB_FLAG_HANDLE_FROM_GUEST)--> tells
> > host userspace "you must create a udmabuf" [or OS-specific equivalent]
> upon
> > success
>
> Again:  Why do we actually need that?  Is there any benefit other than
> the guest knowing it doesn't need to send transfer commands?

> I see the whole udmabuf thing as a host-side performance optimization
> and I think this should be fully transparent to the guest as the host
> can easily just ignore the transfer commands.


So the use case I'm most interested in (and Vivek/Tina?) is
tiled/compressed udmabufs, so they may be eventually shared with the host
compositor via the DRM modifier API.

Transfers to linear udmabufs make sense.  Maybe transfers to
tiled/compressed udmabufs shouldn't even be attempted.

It's a complicated case with many ambiguities, especially with PCI
passthrough involved.  Explicit tiled/compressed udmabufs are just an idea,
will have to think more about it / have some proof of concept [with virgl
and PCI passthrough], before making any concrete proposals.  Will keep your
idea of just ignoring transfers on the host in mind.


> Given we batch commands
> the extra commands don't lead to extra context switches, so there
> shouldn't be much overhead.
>
> If we really want to make the guest aware of the host's udmabuf state I
> think this should be designed the other way around:  Add some way for
> the host to tell the guest transfer commands are not needed for a
> specific BLOB_MEM_GUEST resource.
>
> take care,
>   Gerd
>
>


* RE: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
       [not found]                   ` <BN7PR11MB2786499E7902CE53326ACE97894F9@BN7PR11MB2786.namprd11.prod.outlook.com>
@ 2021-04-13  4:57                     ` Zhang, Tina
  0 siblings, 0 replies; 20+ messages in thread
From: Zhang, Tina @ 2021-04-13  4:57 UTC (permalink / raw)
  To: Gurchetan Singh, Gerd Hoffmann; +Cc: Kasireddy, Vivek, ML dri-devel



> -----Original Message-----
> From: Gurchetan Singh <gurchetansingh@chromium.org>
> Sent: Tuesday, April 13, 2021 8:58 AM
> To: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Kasireddy, Vivek <vivek.kasireddy@intel.com>; ML dri-devel <dri-
> devel@lists.freedesktop.org>; Zhang, Tina <tina.zhang@intel.com>
> Subject: Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2)
> 
> 
> 
> On Fri, Apr 9, 2021 at 12:48 AM Gerd Hoffmann <mailto:kraxel@redhat.com>
> wrote:
>   Hi,
> 
> > > IIRC the VIRTGPU_BLOB_FLAG_USE_SHAREABLE flag means that the host *can*
> > > create a shared mapping (i.e. the host seeing guest-side changes without
> > > explicit transfer doesn't cause problems for the guest).  It does not
> > > mean the host *must* create a shared mapping (note that there is no
> > > negotiation of whether the host supports shared mappings or not).
> > >
> >
> > VIRTGPU_BLOB_FLAG_USE_SHAREABLE means guest userspace intends to
> share the
> > blob resource with another virtgpu driver instance via
> drmPrimeHandleToFd.
> > It's a rough analogue to VkExportMemoryAllocateInfoKHR or
> > PIPE_BIND_USE_SHARED.
> 
> Oh.  My memory was failing me then.  We should *really* clarify the spec
> for BLOB_MEM_GUEST.
> 
> So shared mappings are allowed for all BLOB_MEM_GUEST resources, right?
> 
> The guest iovecs are always shared with the host, so they may be copied
> to/from directly depending on the operation.  In the case of
> RESOURCE_FLUSH + BLOB_MEM_GUEST, it could be a copy from the guest
> iovecs to the host framebuffer [host framebuffer != host shadow memory].
> 
> 
> > > So the transfer calls are still needed, and the host can decide to
> > > shortcut them in case it can create a shared mapping.  In case there is
> > > no shared mapping (say due to missing udmabuf support) the host can
> > > fallback to copying.
> >
> > Transfers are a bit under-defined for BLOB_MEM_GUEST.  Even without
> udmabuf
> > on the host, there is no host side resource for guest-only blobs?  Before
> > blob resources, the dumb flow was:
> >
> > 1) update guest side resource
> > 2) TRANSFER_TO_HOST_2D to copy guest side contents to host side private
> > resource [Pixman??]
> > 3) RESOURCE_FLUSH to copy the host-side contents to the framebuffer and
> > page-flip
> 
> Yes.
> 
> > At least for crosvm, this is possible:
> >
> > 1) update guest side resource
> > 2) RESOURCE_FLUSH to copy the guest-side contents to the framebuffer and
> > pageflip
> >
> > With implicit udmabuf, it may be possible to do this:
> >
> > 1) update guest side resource
> > 2) RESOURCE_FLUSH to page-flip
> >
> > > So I think crosvm should be fixed to not consider transfer commands for
> > > VIRTGPU_BLOB_MEM_GUEST resources an error.
> >
> > It's a simple change to make and we can definitely do it, if TRANSFER_2D is
> > helpful for the QEMU case.  I haven't looked at the QEMU side patches.
> 
> Well, we have two different cases:
> 
>   (1) No udmabuf available.  qemu will have a host-side shadow then and
>       the workflow will be largely identical to the non-blob resource
>       workflow.
> 
> I think this is the key difference.  With BLOB_MEM_GUEST, crosvm can only
> have a guest side iovecs and no host-side shadow memory.  With
> BLOB_MEM_GUEST_HOST3D, host-side shadow memory will exist.
> 
> I guess it boils down to the Pixman dependency.  crosvm sits right on top of
> display APIs (X, wayland) rather than having intermediary layers.  Adding a
> new Pixman API takes time too.
> 
> There's a bunch of options:
> 
> 1) Don't use BLOB_MEM_GUEST dumb buffers in 3D mode.
> 2) virglrenderer or crosvm modified to implicitly ignore
> TRANSFER_TO_HOST_2D for BLOB_MEM_GUEST when in 3D mode.
> 3) It's probably possible to create an implicit udmabuf
> for RESOURCE_CREATE_2D resources and ignore the transfer there too.  The
> benefit of this is TRANSFER_TO_HOST_2D makes a ton of sense for non-blob
> resources.  No kernel side change needed here, just QEMU.
> 4) modify QEMU display integration
> 
> I would choose (1) since it solves the log spam problem and it advances blob
> support in QEMU.  Though I leave the decision to QEMU devs.
> 
> 
>   (2) With udmabuf support.  qemu can create udmabufs for the resources,
>       mmap() the dmabuf to get a linear mapping, create a pixman buffer
>       backed by that dmabuf (no copying needed then).  Depending on
>       capabilities pass either the pixman image (gl=off) or the dmabuf
>       handle (gl=on) to the UI code to actually show the guest display.
> 
> The guest doesn't need to know any of this, it'll just send transfer and
> flush commands.  In case (1) qemu must process the transfer commands and
> for case (2) qemu can simply ignore them.
> 
> > For the PCI-passthrough + guest blob case, the end goal is to share it with
> > the host compositor.  If there is no guarantee the guest memory can be
> > converted to an OS-handle (to share with the host compositor), then I think
> > the guest user space should fallback to another technique involving
> > memcpy() to share the memory.
> 
> This is what happens today (using non-blob resources).
> 
> > So essentially, thinking for two new protocol additions:
> >
> > F_CREATE_GUEST_HANDLE (or F_HANDLE_FROM_GUEST) --> means an OS-
> specific
> > udmabuf-like mechanism exists on the host.
> >
> > BLOB_FLAG_CREATE_GUEST_HANDLE (or
> BLOB_FLAG_HANDLE_FROM_GUEST)--> tells
> > host userspace "you must create a udmabuf" [or OS-specific equivalent]
> upon
> > success
> 
> Again:  Why do we actually need that?  Is there any benefit other than
> the guest knowing it doesn't need to send transfer commands?
> I see the whole udmabuf thing as a host-side performance optimization
> and I think this should be fully transparent to the guest as the host
> can easily just ignore the transfer commands.
> 
> So the use case I'm most interested in (and Vivek/Tina?) is tiled/compressed
> udmabufs, so they may be eventually shared with the host compositor via the
> DRM modifier API.
Hi,

Yes, eventually, we would like to have the tiled/compressed framebuffers shared via the udmabuf mechanism.

BR,
Tina
> 
> Transfers to linear udmabufs make sense.  Maybe transfers to
> tiled/compressed udmabufs shouldn't even be attempted.
> 
> It's a complicated case with many ambiguities, especially with PCI
> passthrough involved.  Explicit tiled/compressed udmabufs are just an idea,
> will have to think more about it / have some proof of concept [with virgl and
> PCI passthrough], before making any concrete proposals.  Will keep your idea
> of just ignoring transfers on the host in mind.
> 
> Given we batch commands
> the extra commands don't lead to extra context switches, so there
> shouldn't be much overhead.
> 
> If we really want to make the guest aware of the host's udmabuf state I
> think this should be designed the other way around:  Add some way for
> the host to tell the guest transfer commands are not needed for a
> specific BLOB_MEM_GUEST resource.
> 
> take care,
>   Gerd

* [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3)
  2021-04-13  0:58                 ` Gurchetan Singh
       [not found]                   ` <BN7PR11MB2786499E7902CE53326ACE97894F9@BN7PR11MB2786.namprd11.prod.outlook.com>
@ 2021-04-13  5:26                   ` Vivek Kasireddy
  2021-04-14  6:36                     ` Zhang, Tina
  2021-04-14 23:31                     ` Gurchetan Singh
  1 sibling, 2 replies; 20+ messages in thread
From: Vivek Kasireddy @ 2021-04-13  5:26 UTC (permalink / raw)
  To: dri-devel; +Cc: Vivek Kasireddy, Gerd Hoffmann

If support for Blob resources is available, then dumb BOs created
by the driver can be considered as guest Blobs.

v2: Don't skip transfer and flush commands as part of plane update
as the device may have created a shared mapping. (Gerd)

v3: Don't create dumb BOs as Guest blobs if Virgl is enabled. (Gurchetan)

Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 8 ++++++++
 drivers/gpu/drm/virtio/virtgpu_object.c | 3 +++
 2 files changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 8502400b2f9c..2de61b63ef91 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -64,6 +64,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 {
 	struct drm_gem_object *gobj;
 	struct virtio_gpu_object_params params = { 0 };
+	struct virtio_gpu_device *vgdev = dev->dev_private;
 	int ret;
 	uint32_t pitch;
 
@@ -79,6 +80,13 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
 	params.height = args->height;
 	params.size = args->size;
 	params.dumb = true;
+
+	if (vgdev->has_resource_blob && !vgdev->has_virgl_3d) {
+		params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
+		params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
+		params.blob = true;
+	}
+
 	ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
 				    &args->handle);
 	if (ret)
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 4ff1ec28e630..f648b0e24447 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -254,6 +254,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	}
 
 	if (params->blob) {
+		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
+			bo->guest_blob = true;
+
 		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
 						    ents, nents);
 	} else if (params->virgl) {
-- 
2.26.2


* RE: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3)
  2021-04-13  5:26                   ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3) Vivek Kasireddy
@ 2021-04-14  6:36                     ` Zhang, Tina
  2021-04-14  9:29                       ` Gerd Hoffmann
  2021-04-14 23:31                     ` Gurchetan Singh
  1 sibling, 1 reply; 20+ messages in thread
From: Zhang, Tina @ 2021-04-14  6:36 UTC (permalink / raw)
  To: Kasireddy, Vivek, dri-devel, Gerd Hoffmann
  Cc: Kasireddy, Vivek, Gurchetan Singh

Hi Gerd,

Speaking of modifiers, we noticed that the virtio-gpu driver sets mode_config.allow_fb_modifiers = false, which means virtio-gpu doesn't support modifiers; with allow_fb_modifiers = false, the DRM modifier API fails.

So, do you know if there is any concern about letting virtio-gpu allow modifiers? Thanks.

BR,
Tina

> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
> Vivek Kasireddy
> Sent: Tuesday, April 13, 2021 1:26 PM
> To: dri-devel@lists.freedesktop.org
> Cc: Kasireddy, Vivek <vivek.kasireddy@intel.com>; Gerd Hoffmann
> <kraxel@redhat.com>
> Subject: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3)
> 
> If support for Blob resources is available, then dumb BOs created by the
> driver can be considered as guest Blobs.
> 
> v2: Don't skip transfer and flush commands as part of plane update as the
> device may have created a shared mapping. (Gerd)
> 
> v3: Don't create dumb BOs as Guest blobs if Virgl is enabled. (Gurchetan)
> 
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_gem.c    | 8 ++++++++
>  drivers/gpu/drm/virtio/virtgpu_object.c | 3 +++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c
> b/drivers/gpu/drm/virtio/virtgpu_gem.c
> index 8502400b2f9c..2de61b63ef91 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> @@ -64,6 +64,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file
> *file_priv,  {
>  	struct drm_gem_object *gobj;
>  	struct virtio_gpu_object_params params = { 0 };
> +	struct virtio_gpu_device *vgdev = dev->dev_private;
>  	int ret;
>  	uint32_t pitch;
> 
> @@ -79,6 +80,13 @@ int virtio_gpu_mode_dumb_create(struct drm_file
> *file_priv,
>  	params.height = args->height;
>  	params.size = args->size;
>  	params.dumb = true;
> +
> +	if (vgdev->has_resource_blob && !vgdev->has_virgl_3d) {
> +		params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
> +		params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
> +		params.blob = true;
> +	}
> +
>  	ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
>  				    &args->handle);
>  	if (ret)
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c
> b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 4ff1ec28e630..f648b0e24447 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -254,6 +254,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device
> *vgdev,
>  	}
> 
>  	if (params->blob) {
> +		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
> +			bo->guest_blob = true;
> +
>  		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
>  						    ents, nents);
>  	} else if (params->virgl) {
> --
> 2.26.2
> 

* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3)
  2021-04-14  6:36                     ` Zhang, Tina
@ 2021-04-14  9:29                       ` Gerd Hoffmann
  0 siblings, 0 replies; 20+ messages in thread
From: Gerd Hoffmann @ 2021-04-14  9:29 UTC (permalink / raw)
  To: Zhang, Tina; +Cc: Kasireddy, Vivek, dri-devel, Gurchetan Singh

On Wed, Apr 14, 2021 at 06:36:55AM +0000, Zhang, Tina wrote:
> Hi Gerd,
> 
> Speaking of modifiers, we notice that the virtio-gpu driver sets
> mode_config.allow_fb_modifiers = false, which means virtio-gpu doesn't
> support modifiers: with mode_config.allow_fb_modifiers = false, the DRM
> modifier API would fail.
> 
> So, do you know if there is any concern about letting virtio-gpu allow
> modifiers? Thanks.

Well, virtio-gpu must also provide a list of modifiers then.  We need
some way for virtio-gpu to figure out which modifiers are supported by
the host and which are not.  Otherwise we could only list LINEAR, which
doesn't buy us much ;)

Not sure whether virglrenderer allows that already (via
VIRGL_CCMD*QUERY* or via virgl caps).  If not we could define a new
modifiers capability for that, which could then be used for both virgl
and non-virgl mode.
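
For context on what advertising modifiers would involve: a DRM format modifier is a 64-bit value that packs an 8-bit vendor id into the top byte and a vendor-defined code into the low 56 bits. The sketch below restates that encoding from `drm_fourcc.h` (the two vendor ids shown are real; everything is reimplemented here only for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Vendor ids as defined in drm_fourcc.h. */
#define DRM_FORMAT_MOD_VENDOR_NONE  0x00
#define DRM_FORMAT_MOD_VENDOR_INTEL 0x01

/* Encoding used by fourcc_mod_code() in drm_fourcc.h:
 * top byte = vendor, low 56 bits = vendor-defined code. */
static uint64_t fourcc_mod_code(uint8_t vendor, uint64_t val)
{
	return ((uint64_t)vendor << 56) | (val & 0x00ffffffffffffffULL);
}

/* LINEAR is vendor NONE, code 0 -- i.e. the modifier value 0. */
#define DRM_FORMAT_MOD_LINEAR fourcc_mod_code(DRM_FORMAT_MOD_VENDOR_NONE, 0)
```

This is why "list LINEAR only" buys little: LINEAR is just modifier 0, and the interesting cases are vendor-specific tiling codes the guest can only pass through if it learns which ones the host accepts.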

take care,
  Gerd



* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3)
  2021-04-13  5:26                   ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3) Vivek Kasireddy
  2021-04-14  6:36                     ` Zhang, Tina
@ 2021-04-14 23:31                     ` Gurchetan Singh
  2021-04-15  9:07                       ` Gerd Hoffmann
  1 sibling, 1 reply; 20+ messages in thread
From: Gurchetan Singh @ 2021-04-14 23:31 UTC (permalink / raw)
  To: Vivek Kasireddy; +Cc: Gerd Hoffmann, ML dri-devel



On Mon, Apr 12, 2021 at 10:36 PM Vivek Kasireddy <vivek.kasireddy@intel.com>
wrote:

> If support for Blob resources is available, then dumb BOs created
> by the driver can be considered as guest Blobs.
>
> v2: Don't skip transfer and flush commands as part of plane update
> as the device may have created a shared mapping. (Gerd)
>
> v3: Don't create dumb BOs as Guest blobs if Virgl is enabled. (Gurchetan)
>

I think it is a good start and advances QEMU blobs.  Improvements are
always possible, but can be made at a future time.

Acked-by: Gurchetan Singh <gurchetansingh@chromium.org>


>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> ---
>  drivers/gpu/drm/virtio/virtgpu_gem.c    | 8 ++++++++
>  drivers/gpu/drm/virtio/virtgpu_object.c | 3 +++
>  2 files changed, 11 insertions(+)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
> index 8502400b2f9c..2de61b63ef91 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_gem.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
> @@ -64,6 +64,7 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
>  {
>         struct drm_gem_object *gobj;
>         struct virtio_gpu_object_params params = { 0 };
> +       struct virtio_gpu_device *vgdev = dev->dev_private;
>         int ret;
>         uint32_t pitch;
>
> @@ -79,6 +80,13 @@ int virtio_gpu_mode_dumb_create(struct drm_file *file_priv,
>         params.height = args->height;
>         params.size = args->size;
>         params.dumb = true;
> +
> +       if (vgdev->has_resource_blob && !vgdev->has_virgl_3d) {
> +               params.blob_mem = VIRTGPU_BLOB_MEM_GUEST;
> +               params.blob_flags = VIRTGPU_BLOB_FLAG_USE_SHAREABLE;
> +               params.blob = true;
> +       }
> +
>         ret = virtio_gpu_gem_create(file_priv, dev, &params, &gobj,
>                                     &args->handle);
>         if (ret)
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index 4ff1ec28e630..f648b0e24447 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -254,6 +254,9 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
>         }
>
>         if (params->blob) {
> +               if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
> +                       bo->guest_blob = true;
> +
>                 virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
>                                                     ents, nents);
>         } else if (params->virgl) {
> --
> 2.26.2
>
>
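
For reference, the sizing that `virtio_gpu_mode_dumb_create()` performs just above the quoted hunk (pitch from width, size rounded up) can be sketched as below. The 32-bpp and 4 KiB page-size assumptions are illustrative simplifications, and the function name is invented for the example:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define ALIGN(x, a) (((x) + ((a) - 1)) & ~((a) - 1))

/* Simplified sketch of dumb-BO sizing: 32 bpp gives pitch = width * 4,
 * and the total allocation is rounded up to a whole page. */
static uint32_t dumb_bo_size(uint32_t width, uint32_t height)
{
	uint32_t pitch = width * 4;

	return ALIGN(pitch * height, PAGE_SIZE);
}
```

The blob hunk only changes how this (already computed) allocation is exposed to the host, not how large it is.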



* Re: [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3)
  2021-04-14 23:31                     ` Gurchetan Singh
@ 2021-04-15  9:07                       ` Gerd Hoffmann
  0 siblings, 0 replies; 20+ messages in thread
From: Gerd Hoffmann @ 2021-04-15  9:07 UTC (permalink / raw)
  To: Gurchetan Singh; +Cc: Vivek Kasireddy, ML dri-devel

On Wed, Apr 14, 2021 at 04:31:45PM -0700, Gurchetan Singh wrote:
> On Mon, Apr 12, 2021 at 10:36 PM Vivek Kasireddy <vivek.kasireddy@intel.com>
> wrote:
> 
> > If support for Blob resources is available, then dumb BOs created
> > by the driver can be considered as guest Blobs.
> >
> > v2: Don't skip transfer and flush commands as part of plane update
> > as the device may have created a shared mapping. (Gerd)
> >
> > v3: Don't create dumb BOs as Guest blobs if Virgl is enabled. (Gurchetan)
> >
> 
> I think it is a good start and advances QEMU blobs.  Improvements are
> always possible, but may be made at a future time.
> 
> Acked-by: Gurchetan Singh <gurchetansingh@chromium.org>

Agree.  Future improvements (like maybe using HOST3D blobs for virgl=on)
can easily go on top of this.

Pushed to drm-misc-next.

thanks,
  Gerd



end of thread, other threads:[~2021-04-15  9:08 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-31  3:04 [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Vivek Kasireddy
2021-03-31  3:04 ` [PATCH 2/2] drm/virtio: Include modifier as part of set_scanout_blob Vivek Kasireddy
2021-03-31  7:59   ` Gerd Hoffmann
2021-04-02  7:55     ` Zhang, Tina
2021-04-02 16:07       ` Gurchetan Singh
2021-03-31  7:41 ` [PATCH 1/2] drm/virtio: Create Dumb BOs as guest Blobs Gerd Hoffmann
2021-04-01  3:51   ` Kasireddy, Vivek
2021-04-01  6:53     ` Gerd Hoffmann
2021-04-06 20:36       ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v2) Vivek Kasireddy
2021-04-07  0:34         ` Gurchetan Singh
2021-04-08  9:27           ` Gerd Hoffmann
2021-04-09  0:31             ` Gurchetan Singh
2021-04-09  7:48               ` Gerd Hoffmann
2021-04-13  0:58                 ` Gurchetan Singh
     [not found]                   ` <BN7PR11MB2786499E7902CE53326ACE97894F9@BN7PR11MB2786.namprd11.prod.outlook.com>
2021-04-13  4:57                     ` Zhang, Tina
2021-04-13  5:26                   ` [PATCH] drm/virtio: Create Dumb BOs as guest Blobs (v3) Vivek Kasireddy
2021-04-14  6:36                     ` Zhang, Tina
2021-04-14  9:29                       ` Gerd Hoffmann
2021-04-14 23:31                     ` Gurchetan Singh
2021-04-15  9:07                       ` Gerd Hoffmann
