linux-media.vger.kernel.org archive mirror
* guest / host buffer sharing ...
@ 2019-11-05 10:54 Gerd Hoffmann
  2019-11-05 11:35 ` Geoffrey McRae
                   ` (2 more replies)
  0 siblings, 3 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-05 10:54 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: David Stevens, Tomasz Figa, Dmitry Morozov, Alexandre Courbot,
	Alex Lau, Dylan Reid, Stéphane Marchesin, Pawel Osciak,
	Hans Verkuil, Daniel Vetter, geoff, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

  Hi folks,

The issue of sharing buffers between guests and hosts keeps poping
up again and again in different contexts.  Most recently here:

https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html

So, I'm grabbing the recipient list of the virtio-vdec thread and some
more people I know might be interested in this, hoping to have everyone
included.

Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
resources" is really a good answer for all the different use cases
we have collected over time.  Maybe it is better to have a dedicated
buffer sharing virtio device?  Here is the rough idea:


(1) The virtio device
=====================

Has a single virtio queue, so the guest can send commands to register
and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
has a list of memory ranges for the data.  Each buffer also has some
properties to carry metadata, some fixed (id, size, application), but
also allow free form (name = value, framebuffers would have
width/height/stride/format for example).
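
To make this a little more concrete, here is a purely hypothetical sketch
of what the register command could look like on the wire.  Nothing below
is an existing uapi; the struct names, command code and layout are made up
for illustration only:

  /* hypothetical wire format for the proposed device */
  struct virtio_buffers_mem_entry {
          __le64 addr;             /* guest physical address of the range */
          __le32 length;           /* length of the range in bytes */
          __le32 padding;
  };

  struct virtio_buffers_property {
          __le32 name_len;         /* the name string follows the header ... */
          __le32 value_len;        /* ... then the value string */
  };

  struct virtio_buffers_register {
          __le32 cmd;              /* e.g. VIRTIO_BUFFERS_CMD_REGISTER */
          __le32 buffer_id;        /* fixed property: id */
          __le64 size;             /* fixed property: total size */
          __le32 nr_entries;       /* memory ranges following the header */
          __le32 nr_properties;    /* free-form name=value pairs following */
          /* followed by nr_entries * virtio_buffers_mem_entry and
           * nr_properties * (virtio_buffers_property + name/value strings) */
  };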


(2) The linux guest implementation
==================================

I guess I'd try to make it a drm driver, so we can re-use drm
infrastructure (shmem helpers for example).  Buffers are dumb drm
buffers.  dma-buf import and export is supported (shmem helpers
get us that for free).  Some device-specific ioctls to get/set
properties and to register/unregister the buffers on the host.
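
For the ioctl part, a hypothetical uapi sketch could look like the
following.  Only the DRM_IOWR()/DRM_COMMAND_BASE macros are real; the
"vbuf" name, ioctl numbers and struct layout are invented here:

  #include <drm/drm.h>

  struct drm_vbuf_property {
          __u32 handle;            /* GEM handle of the dumb buffer */
          __u32 pad;
          char  name[64];          /* e.g. "width" */
          char  value[64];         /* value as string, e.g. "1920" */
  };

  struct drm_vbuf_register {
          __u32 handle;            /* GEM handle to publish to the host */
          __u32 flags;
  };

  #define DRM_IOCTL_VBUF_SET_PROPERTY \
          DRM_IOWR(DRM_COMMAND_BASE + 0x00, struct drm_vbuf_property)
  #define DRM_IOCTL_VBUF_REGISTER \
          DRM_IOWR(DRM_COMMAND_BASE + 0x01, struct drm_vbuf_register)
  #define DRM_IOCTL_VBUF_UNREGISTER \
          DRM_IOWR(DRM_COMMAND_BASE + 0x02, struct drm_vbuf_register)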


(3) The qemu host implementation
================================

qemu (likewise other vmms) can use the udmabuf driver to create
host-side dma-bufs for the buffers.  The dma-bufs can be passed to
anyone interested, inside and outside qemu.  We'll need some protocol
for communication between qemu and external users interested in those
buffers, to receive dma-bufs (via unix file descriptor passing) and
update notifications.  Dispatching updates could be done based on the
application property, which could be "virtio-vdec" or "wayland-proxy"
for example.
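
As a rough illustration of that flow, here is a minimal host-side sketch
(assuming guest RAM is backed by a sealable memfd and the buffer is a
single page-aligned range; error handling omitted).  The udmabuf ioctl and
SCM_RIGHTS passing are real interfaces, the function names are not:

  #include <stdint.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <sys/uio.h>
  #include <linux/udmabuf.h>

  /* turn one registered guest buffer into a host dma-buf */
  static int export_buffer(int guest_ram_memfd, uint64_t offset, uint64_t size)
  {
          struct udmabuf_create create = {
                  .memfd  = guest_ram_memfd,       /* needs F_SEAL_SHRINK */
                  .flags  = UDMABUF_FLAGS_CLOEXEC,
                  .offset = offset,                /* page aligned */
                  .size   = size,                  /* page aligned */
          };
          int devfd = open("/dev/udmabuf", O_RDWR);
          int dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

          close(devfd);
          return dmabuf_fd;                        /* a normal dma-buf fd */
  }

  /* hand the dma-buf to an external user over a unix socket */
  static void send_dmabuf(int sock, int dmabuf_fd)
  {
          char byte = 0;
          struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
          char ctrl[CMSG_SPACE(sizeof(int))];
          struct msghdr msg = {
                  .msg_iov = &iov, .msg_iovlen = 1,
                  .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
          };
          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type  = SCM_RIGHTS;
          cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));
          sendmsg(sock, &msg, 0);
  }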


comments?

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
@ 2019-11-05 11:35 ` Geoffrey McRae
  2019-11-06  6:24   ` Gerd Hoffmann
  2019-11-06  8:36 ` David Stevens
  2019-11-06  8:43 ` Stefan Hajnoczi
  2 siblings, 1 reply; 51+ messages in thread
From: Geoffrey McRae @ 2019-11-05 11:35 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Keiichi Watanabe, David Stevens, Tomasz Figa, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

Hi Gerd.

On 2019-11-05 21:54, Gerd Hoffmann wrote:
> Hi folks,
> 
> The issue of sharing buffers between guests and hosts keeps popping
> up again and again in different contexts.  Most recently here:
> 
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
> 
> So, I'm grabbing the recipient list of the virtio-vdec thread and some
> more people I know might be interested in this, hoping to have everyone
> included.
> 
> Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> resources" is really a good answer for all the different use cases
> we have collected over time.  Maybe it is better to have a dedicated
> buffer sharing virtio device?  Here is the rough idea:
> 

This would be the ultimate solution to this; it would also make it the
de facto device, possibly even leading to the deprecation of the IVSHMEM
device.

> 
> (1) The virtio device
> =====================
> 
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each 
> buffer
> has a list of memory ranges for the data.  Each buffer also has some
> properties to carry metadata, some fixed (id, size, application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).
> 

Perfect. However, since it's to be a generic device there also needs to be a
method in the guest to identify which device is the one the application is
interested in without opening the device. Since Windows makes the
subsystem vendor ID and device ID available to the userspace application,
I suggest these be used for this purpose.

To avoid clashes, a simple text file to track reservations of subsystem IDs
for applications/protocols would be recommended.

The device should also support a reset feature allowing the guest to
notify the host application that all buffers have become invalid, such as
on abnormal termination of the guest application that is using the device.

Conversely, on unix socket disconnect qemu should also notify the guest of
this event, allowing each end to properly synchronize.

> 
> (2) The linux guest implementation
> ==================================
> 
> I guess I'd try to make it a drm driver, so we can re-use drm
> infrastructure (shmem helpers for example).  Buffers are dumb drm
> buffers.  dma-buf import and export is supported (shmem helpers
> get us that for free).  Some device-specific ioctls to get/set
> properties and to register/unregister the buffers on the host.
> 

I would be happy to do what I can to implement the Windows driver for this
if nobody else is interested in doing so; however, my abilities in this
field are rather limited and the results may not be that great :)

> 
> (3) The qemu host implementation
> ================================
> 
> qemu (likewise other vmms) can use the udmabuf driver to create
> host-side dma-bufs for the buffers.  The dma-bufs can be passed to
> anyone interested, inside and outside qemu.  We'll need some protocol
> for communication between qemu and external users interested in those
> buffers, to receive dma-bufs (via unix file descriptor passing) and
> update notifications.  Dispatching updates could be done based on the
> application property, which could be "virtio-vdec" or "wayland-proxy"
> for example.

I don't know enough about udmabuf to really comment on this except to ask
a question: would this make guest-to-guest transfers possible without an
intermediate buffer?

-Geoff

> 
> 
> comments?
> 
> cheers,
>   Gerd

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 11:35 ` Geoffrey McRae
@ 2019-11-06  6:24   ` Gerd Hoffmann
  0 siblings, 0 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-06  6:24 UTC (permalink / raw)
  To: Geoffrey McRae
  Cc: Keiichi Watanabe, David Stevens, Tomasz Figa, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

> > (1) The virtio device
> > =====================
> > 
> > Has a single virtio queue, so the guest can send commands to register
> > and unregister buffers.  Buffers are allocated in guest ram.  Each
> > buffer
> > has a list of memory ranges for the data.  Each buffer also has some
> > properties to carry metadata, some fixed (id, size, application), but
> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> > 
> 
> Perfect, however since it's to be a generic device there also needs to be a
> method in the guest to identify which device is the one the application is
> interested in without opening the device.

This is what the application buffer property is supposed to handle, i.e.
you'll have a single device, all applications share it and the property
tells which buffer belongs to which application.

> The device should also support a reset feature allowing the guest to
> notify the host application that all buffers have become invalid such as
> on abnormal termination of the guest application that is using the device.

The guest driver should clean up properly (i.e. unregister all buffers)
when an application terminates, of course, no matter what the reason is
(crash, exit without unregistering buffers, ...).  Doable without a full
device reset.

Independent from that a full reset will be supported of course, it is a
standard virtio feature.

> Conversely, qemu on unix socket disconnect should notify the guest of this
> event also, allowing each end to properly synchronize.

I was thinking more about a simple guest-side publishing of buffers,
without a backchannel.  If more coordination is needed you can use
vsock for that, for example.

> > (3) The qemu host implementation
> > ================================
> > 
> > qemu (likewise other vmms) can use the udmabuf driver to create
> > host-side dma-bufs for the buffers.  The dma-bufs can be passed to
> > anyone interested, inside and outside qemu.  We'll need some protocol
> > for communication between qemu and external users interested in those
> > buffers, to receive dma-bufs (via unix file descriptor passing) and
> > update notifications.

Using vhost for the host-side implementation should be possible too.

> > Dispatching updates could be done based on the
> > application property, which could be "virtio-vdec" or "wayland-proxy"
> > for example.
> 
> I don't know enough about udmabuf to really comment on this except to ask
> a question. Would this make guest to guest transfers without an
> intermediate buffer possible?

Yes.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
  2019-11-05 11:35 ` Geoffrey McRae
@ 2019-11-06  8:36 ` David Stevens
  2019-11-06 12:41   ` Gerd Hoffmann
  2019-11-06  8:43 ` Stefan Hajnoczi
  2 siblings, 1 reply; 51+ messages in thread
From: David Stevens @ 2019-11-06  8:36 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Keiichi Watanabe, Tomasz Figa, Dmitry Morozov, Alexandre Courbot,
	Alex Lau, Dylan Reid, Stéphane Marchesin, Pawel Osciak,
	Hans Verkuil, Daniel Vetter, geoff, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

> (1) The virtio device
> =====================
>
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> has a list of memory ranges for the data. Each buffer also has some

Allocating from guest ram would work most of the time, but I think
it's insufficient for many use cases. It doesn't really support things
such as contiguous allocations, allocations from carveouts or <4GB,
protected buffers, etc.

> properties to carry metadata, some fixed (id, size, application), but

What exactly do you mean by application?

> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).

Is this approach expected to handle allocating buffers with
hardware-specific constraints such as stride/height alignment or
tiling? Or would there need to be some alternative channel for
determining those values and then calculating the appropriate buffer
size?

-David

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
  2019-11-05 11:35 ` Geoffrey McRae
  2019-11-06  8:36 ` David Stevens
@ 2019-11-06  8:43 ` Stefan Hajnoczi
  2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-11  3:04   ` David Stevens
  2 siblings, 2 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-06  8:43 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Keiichi Watanabe, geoff, virtio-dev, Alex Lau, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Hans Verkuil, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Tue, Nov 05, 2019 at 11:54:56AM +0100, Gerd Hoffmann wrote:
> The issue of sharing buffers between guests and hosts keeps popping
> up again and again in different contexts.  Most recently here:
> 
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
> 
> So, I'm grabbing the recipient list of the virtio-vdec thread and some
> more people I know might be interested in this, hoping to have everyone
> included.
> 
> Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> resources" is really a good answer for all the different use cases
> we have collected over time.  Maybe it is better to have a dedicated
> buffer sharing virtio device?  Here is the rough idea:

My concern is that buffer sharing isn't a "device".  It's a primitive
used in building other devices.  When someone asks for just buffer
sharing it's often because they do not intend to upstream a
specification for their device.

If this buffer sharing device's main purpose is for building proprietary
devices without contributing to VIRTIO, then I don't think it makes
sense for the VIRTIO community to assist in its development.

VIRTIO recently gained a shared memory resource concept for access to
host memory.  It is being used in virtio-pmem and virtio-fs (and
virtio-gpu?).  If another flavor of shared memory is required it can be
added to the spec and new VIRTIO device types can use it.  But it's not
clear why this should be its own device.

My question would be "what is the actual problem you are trying to
solve?".

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  8:43 ` Stefan Hajnoczi
@ 2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
                       ` (2 more replies)
  2019-11-11  3:04   ` David Stevens
  1 sibling, 3 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-06  9:51 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> > Reason is:  Meanwhile I'm wondering whenever "just use virtio-gpu
> > resources" is really a good answer for all the different use cases
> > we have collected over time.  Maybe it is better to have a dedicated
> > buffer sharing virtio device?  Here is the rough idea:
> 
> My concern is that buffer sharing isn't a "device".  It's a primitive
> used in building other devices.  When someone asks for just buffer
> sharing it's often because they do not intend to upstream a
> specification for their device.

Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
It is more a service to allow communication between host and guest

That buffer sharing device falls into the same category.  Maybe it even
makes sense to build that as virtio-vsock extension.  Not sure how well
that would work with the multi-transport architecture of vsock though.

> If this buffer sharing device's main purpose is for building proprietary
> devices without contributing to VIRTIO, then I don't think it makes
> sense for the VIRTIO community to assist in its development.

One possible use case would be building a wayland proxy, using vsock for
the wayland protocol messages and virtio-buffers for the shared buffers
(wayland client window content).

It could also simplify buffer sharing between devices (feed decoded
video frames from decoder to gpu), although in that case it is less
clear that it'll actually simplify things because virtio-gpu is
involved anyway.

We can't prevent people from using that for proprietary stuff (same goes
for vsock).

There is the option to use virtio-gpu instead, i.e. add support to qemu
to export dma-buf handles for virtio-gpu resources to other processes
(such as a wayland proxy).  That would provide very similar
functionality (and thereby create the same loophole).

> VIRTIO recently gained a shared memory resource concept for access to
> host memory.  It is being used in virtio-pmem and virtio-fs (and
> virtio-gpu?).

virtio-gpu is unfortunately still in progress (all kinds of fixes for
the qemu drm drivers and virtio-gpu guest driver refactoring kept me
busy for quite a while ...).

> If another flavor of shared memory is required it can be
> added to the spec and new VIRTIO device types can use it.  But it's not
> clear why this should be its own device.

This is not about host memory; the buffers are in guest ram.  Anything
else would make sharing those buffers between drivers inside the guest
(as dma-buf) quite difficult.

> My question would be "what is the actual problem you are trying to
> solve?".

Typical use cases center around sharing graphics data between guest
and host.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-06  9:51   ` Gerd Hoffmann
@ 2019-11-06 10:10     ` Dr. David Alan Gilbert
  2019-11-07 11:11       ` Gerd Hoffmann
  2019-11-06 11:46     ` Stefan Hajnoczi
  2019-11-20 12:11     ` Tomasz Figa
  2 siblings, 1 reply; 51+ messages in thread
From: Dr. David Alan Gilbert @ 2019-11-06 10:10 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Stefan Hajnoczi, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

* Gerd Hoffmann (kraxel@redhat.com) wrote:
>   Hi,
> 
> > > Reason is:  Meanwhile I'm wondering whenever "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time.  Maybe it is better to have a dedicated
> > > buffer sharing virtio device?  Here is the rough idea:
> > 
> > My concern is that buffer sharing isn't a "device".  It's a primitive
> > used in building other devices.  When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
> 
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service to allow communication between host and guest
> 
> That buffer sharing device falls into the same category.  Maybe it even
> makes sense to build that as virtio-vsock extension.  Not sure how well
> that would work with the multi-transport architecture of vsock though.
> 
> > If this buffer sharing device's main purpose is for building proprietary
> > devices without contributing to VIRTIO, then I don't think it makes
> > sense for the VIRTIO community to assist in its development.
> 
> One possible use case would be building a wayland proxy, using vsock for
> the wayland protocol messages and virtio-buffers for the shared buffers
> (wayland client window content).
> 
> It could also simplify buffer sharing between devices (feed decoded
> video frames from decoder to gpu), although in that case it is less
> clear that it'll actually simplify things because virtio-gpu is
> involved anyway.
> 
> We can't prevent people from using that for proprietary stuff (same goes
> for vsock).
> 
> There is the option to use virtio-gpu instead, i.e. add support to qemu
> to export dma-buf handles for virtio-gpu resources to other processes
> (such as a wayland proxy).  That would provide very similar
> functionality (and thereby create the same loophole).
> 
> > VIRTIO recently gained a shared memory resource concept for access to
> > host memory.  It is being used in virtio-pmem and virtio-fs (and
> > virtio-gpu?).
> 
> virtio-gpu is in progress still unfortunately (all kinds of fixes for
> the qemu drm drivers and virtio-gpu guest driver refactoring kept me
> busy for quite a while ...).
> 
> > If another flavor of shared memory is required it can be
> > added to the spec and new VIRTIO device types can use it.  But it's not
> > clear why this should be its own device.
> 
> This is not about host memory, buffers are in guest ram, everything else
> would make sharing those buffers between drivers inside the guest (as
> dma-buf) quite difficult.

Given it's just guest memory, can the guest just have a virt queue on
which it places pointers to the memory it wants to share as elements in
the queue?

Dave

> > My question would be "what is the actual problem you are trying to
> > solve?".
> 
> Typical use cases center around sharing graphics data between guest
> and host.
> 
> cheers,
>   Gerd
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
@ 2019-11-06 11:46     ` Stefan Hajnoczi
  2019-11-06 12:50       ` Gerd Hoffmann
  2019-11-20 12:11     ` Tomasz Figa
  2 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-06 11:46 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Wed, Nov 6, 2019 at 10:51 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > Reason is:  Meanwhile I'm wondering whenever "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time.  Maybe it is better to have a dedicated
> > > buffer sharing virtio device?  Here is the rough idea:
> >
> > My concern is that buffer sharing isn't a "device".  It's a primitive
> > used in building other devices.  When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
>
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service to allow communication between host and guest

There are existing applications and network protocols that can be
easily run over virtio-vsock, virtio-net, and virtio-serial to do
useful things.

If a new device has no use except for writing custom code, then it's
a clue that we're missing the actual use case.

In the graphics buffer sharing use case, how does the other side
determine how to interpret this data?  Shouldn't there be a VIRTIO
device spec for the messaging so compatible implementations can be
written by others?

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  8:36 ` David Stevens
@ 2019-11-06 12:41   ` Gerd Hoffmann
  2019-11-06 22:28     ` Geoffrey McRae
  0 siblings, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-06 12:41 UTC (permalink / raw)
  To: David Stevens
  Cc: Keiichi Watanabe, Tomasz Figa, Dmitry Morozov, Alexandre Courbot,
	Alex Lau, Dylan Reid, Stéphane Marchesin, Pawel Osciak,
	Hans Verkuil, Daniel Vetter, geoff, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > (1) The virtio device
> > =====================
> >
> > Has a single virtio queue, so the guest can send commands to register
> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> > has a list of memory ranges for the data. Each buffer also has some
> 
> Allocating from guest ram would work most of the time, but I think
> it's insufficient for many use cases. It doesn't really support things
> such as contiguous allocations, allocations from carveouts or <4GB,
> protected buffers, etc.

If there are additional constraints (due to gpu hardware I guess)
I think it is better to leave the buffer allocation to virtio-gpu.

virtio-gpu can't do that right now, but we have to improve virtio-gpu
memory management for vulkan support anyway.

> > properties to carry metadata, some fixed (id, size, application), but
> 
> What exactly do you mean by application?

Basically some way to group buffers.  A wayland proxy for example would
add an "application=wayland-proxy" tag to the buffers it creates in the
guest, and the host-side part of the proxy could ask qemu (or another
vmm) to notify it about all buffers with that tag.  So in case multiple
applications are using the device in parallel they don't interfere with
each other.

> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> 
> Is this approach expected to handle allocating buffers with
> hardware-specific constraints such as stride/height alignment or
> tiling? Or would there need to be some alternative channel for
> determining those values and then calculating the appropriate buffer
> size?

No parameter negotiation.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 11:46     ` Stefan Hajnoczi
@ 2019-11-06 12:50       ` Gerd Hoffmann
  2019-11-07 12:10         ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-06 12:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> In the graphics buffer sharing use case, how does the other side
> determine how to interpret this data?

The idea is to have free form properties (name=value, with value being
a string) for that kind of metadata.

> Shouldn't there be a VIRTIO
> device spec for the messaging so compatible implementations can be
> written by others?

Adding a list of common properties to the spec certainly makes sense,
so everybody uses the same names.  Adding struct-ed properties for
common use cases might be useful too.
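
As a strawman (the names here are invented for illustration, not taken
from any spec), the common framebuffer case could boil down to a handful
of agreed-on name=value pairs like:

  /* strawman property set for a framebuffer, names not from any spec */
  static const char *fb_props[][2] = {
          { "application", "wayland-proxy" },  /* which consumer it belongs to */
          { "width",       "1920" },
          { "height",      "1080" },
          { "stride",      "7680" },           /* bytes per scanline */
          { "format",      "XR24" },           /* drm fourcc as string */
  };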

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 12:41   ` Gerd Hoffmann
@ 2019-11-06 22:28     ` Geoffrey McRae
  2019-11-07  6:48       ` Gerd Hoffmann
  2019-11-20 12:13       ` Tomasz Figa
  0 siblings, 2 replies; 51+ messages in thread
From: Geoffrey McRae @ 2019-11-06 22:28 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: David Stevens, Keiichi Watanabe, Tomasz Figa, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel



On 2019-11-06 23:41, Gerd Hoffmann wrote:
> On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
>> > (1) The virtio device
>> > =====================
>> >
>> > Has a single virtio queue, so the guest can send commands to register
>> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
>> > has a list of memory ranges for the data. Each buffer also has some
>> 
>> Allocating from guest ram would work most of the time, but I think
>> it's insufficient for many use cases. It doesn't really support things
>> such as contiguous allocations, allocations from carveouts or <4GB,
>> protected buffers, etc.
> 
> If there are additional constraints (due to gpu hardware I guess)
> I think it is better to leave the buffer allocation to virtio-gpu.

The entire point of this for our purposes is that we cannot allocate the
buffer; it's either provided by the GPU driver or DirectX. If virtio-gpu
were to allocate the buffer we might as well forget all this and continue
using the ivshmem device.

Our use case is niche, and the state of things may change if vendors like
AMD follow through with their promises and give us SR-IOV on consumer
GPUs, but even then we would still need their support to achieve the same
results, as the same issue would still be present.

Also don't forget that QEMU already has a non-virtio generic device
(IVSHMEM). The only difference is that it doesn't allow us to attain
zero-copy transfers.

Currently IVSHMEM is used by two projects that I am aware of: Looking
Glass and SCREAM. While Looking Glass is solving a problem that is out of
scope for QEMU, SCREAM is working around the audio problems in QEMU that
have been present for years now.

While I don't agree with SCREAM being used this way (we really need a
virtio-sound device, and/or intel-hda needs to be fixed), it again is an
example of working around bugs/faults/limitations in QEMU by those of us
who are unable to fix them ourselves and whose needs seem to have low
priority for the QEMU project.

What we are trying to attain is freedom from dual-boot Linux/Windows
systems, not migratable enterprise VPS configurations. The Looking
Glass project has brought attention to several other bugs/problems in
QEMU, some of which were fixed as a direct result of this project (i8042
race, AMD NPT).

Unless there is another solution for getting the guest GPU's framebuffer
back to the host, a device like this will always be required. Since the
landscape could change at any moment, this device should not be an
LG-specific device, but rather a generic device that allows other
workarounds like LG to be developed in the future should they be required.

Is it optimal? No.
Is there a better solution? Not that I am aware of.

> 
> virtio-gpu can't do that right now, but we have to improve virtio-gpu
> memory management for vulkan support anyway.
> 
>> > properties to carry metadata, some fixed (id, size, application), but
>> 
>> What exactly do you mean by application?
> 
> Basically some way to group buffers.  A wayland proxy for example would
> add a "application=wayland-proxy" tag to the buffers it creates in the
> guest, and the host side part of the proxy could ask qemu (or another
> vmm) to notify about all buffers with that tag.  So in case multiple
> applications are using the device in parallel they don't interfere with
> each other.
> 
>> > also allow free form (name = value, framebuffers would have
>> > width/height/stride/format for example).
>> 
>> Is this approach expected to handle allocating buffers with
>> hardware-specific constraints such as stride/height alignment or
>> tiling? Or would there need to be some alternative channel for
>> determining those values and then calculating the appropriate buffer
>> size?
> 
> No parameter negotiation.
> 
> cheers,
>   Gerd

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 22:28     ` Geoffrey McRae
@ 2019-11-07  6:48       ` Gerd Hoffmann
  2019-11-20 12:13       ` Tomasz Figa
  1 sibling, 0 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-07  6:48 UTC (permalink / raw)
  To: Geoffrey McRae
  Cc: David Stevens, Keiichi Watanabe, Tomasz Figa, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > > > (1) The virtio device
> > > > =====================
> > > >
> > > > Has a single virtio queue, so the guest can send commands to register
> > > > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> > > > has a list of memory ranges for the data. Each buffer also has some
> > > 
> > > Allocating from guest ram would work most of the time, but I think
> > > it's insufficient for many use cases. It doesn't really support things
> > > such as contiguous allocations, allocations from carveouts or <4GB,
> > > protected buffers, etc.
> > 
> > If there are additional constrains (due to gpu hardware I guess)
> > I think it is better to leave the buffer allocation to virtio-gpu.
> 
> The entire point of this for our purposes is due to the fact that we can
> not allocate the buffer, it's either provided by the GPU driver or
> DirectX. If virtio-gpu were to allocate the buffer we might as well forget
> all this and continue using the ivshmem device.

Well, virtio-gpu resources are in guest ram, like the buffers of a
virtio-buffers device would be.  So it isn't much of a difference.  If
the buffer provided by the (nvidia/amd/intel) gpu driver lives in ram
you can create a virtio-gpu resource for it.

On the Linux side that is typically handled with dma-buf: one driver
exports the dma-buf and the other imports it.  virtio-gpu doesn't
support that fully yet though (import is being worked on, export is done
and will land upstream in the next merge window).

No clue how this looks for Windows guests ...
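
On the Linux side, the dma-buf handover between two drm drivers is just
the standard PRIME ioctls.  A minimal userspace sketch using the libdrm
wrappers (the device paths and the GEM handle are placeholders):

  #include <stdint.h>
  #include <fcntl.h>
  #include <xf86drm.h>

  int share_between_drivers(uint32_t exporter_handle)
  {
          int exporter = open("/dev/dri/card0", O_RDWR);  /* e.g. gpu driver */
          int importer = open("/dev/dri/card1", O_RDWR);  /* e.g. buffer device */
          int prime_fd;
          uint32_t imported_handle;

          /* GEM handle -> dma-buf file descriptor */
          drmPrimeHandleToFD(exporter, exporter_handle, DRM_CLOEXEC, &prime_fd);

          /* dma-buf file descriptor -> GEM handle in the other driver */
          drmPrimeFDToHandle(importer, prime_fd, &imported_handle);

          return imported_handle;
  }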

> Currently IVSHMEM is used by two projects that I am aware of, Looking
> Glass and SCREAM. While Looking Glass is solving a problem that is out of
> scope for QEMU, SCREAM is working around the audio problems in QEMU that
> have been present for years now.

Side note: sound in qemu 3.1+ should be a lot better than in the 2.x
versions.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
@ 2019-11-07 11:11       ` Gerd Hoffmann
  2019-11-07 11:16         ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-07 11:11 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Hajnoczi, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> > This is not about host memory, buffers are in guest ram, everything else
> > would make sharing those buffers between drivers inside the guest (as
> > dma-buf) quite difficult.
> 
> Given it's just guest memory, can the guest just have a virt queue on
> which it places pointers to the memory it wants to share as elements in
> the queue?

Well, good question.  I'm actually wondering what the best approach is
to handle long-living, large buffers in virtio ...

virtio-blk (and others) are using the approach you describe.  They put a
pointer to the io request header, followed by pointer(s) to the io
buffers, directly into the virtqueue.  That works great with storage for
example.  The queue entries are tagged as being "in" or "out" (driver to
device or vice versa), so the virtio transport can set up dma mappings
accordingly or even transparently copy data if needed.

For long-living buffers where data can potentially flow both ways this
model doesn't fit very well though.  So what virtio-gpu does instead is
transferring the scatter list as virtio payload.  Does feel a bit
unclean as it doesn't really fit the virtio architecture.  It assumes
the host can directly access guest memory for example (which is usually
the case but explicitly not required by virtio).  It also requires
quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
in theory should be handled fully transparently by the virtio-pci
transport.

We could instead have a "create-buffer" command which adds the buffer
pointers as elements to the virtqueue as you describe.  Then simply
continue using the buffer even after completing the "create-buffer"
command.  Which isn't exactly clean either.  It would likewise assume
direct access to guest memory, and it would likewise need quirks for
VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
mappings for the virtqueue entries after command completion.
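
For contrast, here is a rough guest-side sketch of that "create-buffer"
variant, where the buffer ranges go into the virtqueue as descriptors next
to a small command header.  Only sg_init_one() and virtqueue_add_sgs() are
real kernel APIs; the command header and naming are invented:

  #include <linux/virtio.h>
  #include <linux/scatterlist.h>

  struct vbuf_create_hdr {
          __le32 cmd;              /* made-up VBUF_CMD_CREATE */
          __le32 buffer_id;
  };

  /* buf_sgl must be a properly terminated scatterlist of the buffer pages */
  static int vbuf_create(struct virtqueue *vq, struct vbuf_create_hdr *hdr,
                         struct scatterlist *buf_sgl)
  {
          struct scatterlist hdr_sg;
          struct scatterlist *sgs[2];

          sg_init_one(&hdr_sg, hdr, sizeof(*hdr));
          sgs[0] = &hdr_sg;        /* "out": the command header */
          sgs[1] = buf_sgl;        /* "out": the buffer ranges themselves */

          return virtqueue_add_sgs(vq, sgs, 2, 0, hdr, GFP_KERNEL);
  }

Note this only illustrates the submission side; the problems described
above (mapping teardown on completion, direction tagging for long-lived
buffers) are exactly what makes it awkward.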

Comments, suggestions, ideas?

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-07 11:11       ` Gerd Hoffmann
@ 2019-11-07 11:16         ` Dr. David Alan Gilbert
  2019-11-08  6:45           ` Gerd Hoffmann
  0 siblings, 1 reply; 51+ messages in thread
From: Dr. David Alan Gilbert @ 2019-11-07 11:16 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Stefan Hajnoczi, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

* Gerd Hoffmann (kraxel@redhat.com) wrote:
>   Hi,
> 
> > > This is not about host memory, buffers are in guest ram, everything else
> > > would make sharing those buffers between drivers inside the guest (as
> > > dma-buf) quite difficult.
> > 
> > Given it's just guest memory, can the guest just have a virt queue on
> > which it places pointers to the memory it wants to share as elements in
> > the queue?
> 
> Well, good question.  I'm actually wondering what the best approach is
> to handle long-living, large buffers in virtio ...
> 
> virtio-blk (and others) are using the approach you describe.  They put a
> pointer to the io request header, followed by pointer(s) to the io
> buffers directly into the virtqueue.  That works great with storage for
> example.  The queue entries are tagged being "in" or "out" (driver to
> device or vice versa), so the virtio transport can set up dma mappings
> accordingly or even transparently copy data if needed.
> 
> For long-living buffers where data can potentially flow both ways this
> model doesn't fit very well though.  So what virtio-gpu does instead is
> transferring the scatter list as virtio payload.  Does feel a bit
> unclean as it doesn't really fit the virtio architecture.  It assumes
> the host can directly access guest memory for example (which is usually
> the case but explicitly not required by virtio).  It also requires
> quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> in theory should be handled fully transparently by the virtio-pci
> transport.
> 
> We could instead have a "create-buffer" command which adds the buffer
> pointers as elements to the virtqueue as you describe.  Then simply
> continue using the buffer even after completing the "create-buffer"
> command.  Which isn't exactly clean either.  It would likewise assume
> direct access to guest memory, and it would likewise need quirks for
> VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
> mappings for the virtqueue entries after command completion.
> 
> Comments, suggestions, ideas?

What about not completing the command while the device is using the
memory?

Dave

> cheers,
>   Gerd
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 12:50       ` Gerd Hoffmann
@ 2019-11-07 12:10         ` Stefan Hajnoczi
  2019-11-08  7:22           ` Gerd Hoffmann
       [not found]           ` <CAEkmjvU8or7YT7CCBe7aUx-XQ3yJpUrY4CfBOnqk7pUH9d9RGQ@mail.gmail.com>
  0 siblings, 2 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-07 12:10 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Wed, Nov 6, 2019 at 1:50 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > In the graphics buffer sharing use case, how does the other side
> > determine how to interpret this data?
>
> The idea is to have free form properties (name=value, with value being
> a string) for that kind of metadata.
>
> > Shouldn't there be a VIRTIO
> > device spec for the messaging so compatible implementations can be
> > written by others?
>
> Adding a list of common properties to the spec certainly makes sense,
> so everybody uses the same names.  Adding struct-ed properties for
> common use cases might be useful too.

Why not define VIRTIO devices for wayland and friends?

This new device exposes buffer sharing plus properties - effectively a
new device model nested inside VIRTIO.  The VIRTIO device model has
the necessary primitives to solve the buffer sharing problem so I'm
struggling to see the purpose of this new device.

Custom/niche applications that do not wish to standardize their device
type can maintain out-of-tree VIRTIO devices.  Both kernel and
userspace drivers can be written for the device and there is already
VIRTIO driver code that can be reused.  They have access to the full
VIRTIO device model, including feature negotiation and configuration
space.

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-07 11:16         ` Dr. David Alan Gilbert
@ 2019-11-08  6:45           ` Gerd Hoffmann
  0 siblings, 0 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-08  6:45 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Stefan Hajnoczi, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Thu, Nov 07, 2019 at 11:16:18AM +0000, Dr. David Alan Gilbert wrote:
> * Gerd Hoffmann (kraxel@redhat.com) wrote:
> >   Hi,
> > 
> > > > This is not about host memory, buffers are in guest ram, everything else
> > > > would make sharing those buffers between drivers inside the guest (as
> > > > dma-buf) quite difficult.
> > > 
> > > Given it's just guest memory, can the guest just have a virt queue on
> > > which it places pointers to the memory it wants to share as elements in
> > > the queue?
> > 
> > Well, good question.  I'm actually wondering what the best approach is
> > to handle long-living, large buffers in virtio ...
> > 
> > virtio-blk (and others) are using the approach you describe.  They put a
> > pointer to the io request header, followed by pointer(s) to the io
> > buffers directly into the virtqueue.  That works great with storage for
> > example.  The queue entries are tagged being "in" or "out" (driver to
> > device or visa-versa), so the virtio transport can set up dma mappings
> > accordingly or even transparently copy data if needed.
> > 
> > For long-living buffers where data can potentially flow both ways this
> > model doesn't fit very well though.  So what virtio-gpu does instead is
> > transferring the scatter list as virtio payload.  Does feel a bit
> > unclean as it doesn't really fit the virtio architecture.  It assumes
> > the host can directly access guest memory for example (which is usually
> > the case but explicitly not required by virtio).  It also requires
> > quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> > in theory should be handled fully transparently by the virtio-pci
> > transport.
> > 
> > We could instead have a "create-buffer" command which adds the buffer
> > pointers as elements to the virtqueue as you describe.  Then simply
> > continue using the buffer even after completing the "create-buffer"
> > command.  Which isn't exactly clean either.  It would likewise assume
> > direct access to guest memory, and it would likewise need quirks for
> > VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
> > mappings for the virtqueue entries after command completion.
> > 
> > Comments, suggestions, ideas?
> 
> What about not completing the command while the device is using the
> memory?

Thought about that too, but I don't think this is a good idea for
buffers which exist for a long time.

Example #1:  A video decoder would set up a bunch of buffers and use
them round-robin, so they would exist until the video playback is
finished.

Example #2:  virtio-gpu creates a framebuffer for fbcon which exists
forever.  And virtio-gpu potentially needs lots of buffers.  With 3d
active there can be tons of objects.  Although they typically don't
stay around that long, we would still need a pretty big virtqueue to
store them all I guess.

And it also doesn't fully match the virtio spirit: it still assumes
direct guest memory access.  Without direct guest memory access,
updates to the fbcon object would never reach the host for example.
In case an iommu is present we might need additional dma map flushes
for updates happening after submitting the lingering "create-buffer"
command.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-07 12:10         ` Stefan Hajnoczi
@ 2019-11-08  7:22           ` Gerd Hoffmann
  2019-11-08  7:35             ` Stefan Hajnoczi
       [not found]           ` <CAEkmjvU8or7YT7CCBe7aUx-XQ3yJpUrY4CfBOnqk7pUH9d9RGQ@mail.gmail.com>
  1 sibling, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-08  7:22 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> > Adding a list of common properties to the spec certainly makes sense,
> > so everybody uses the same names.  Adding struct-ed properties for
> > common use cases might be useful too.
> 
> Why not define VIRTIO devices for wayland and friends?

There is an out-of-tree implementation of that, so yes, that surely is
an option.

Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
pipe as control channel.  Pretty much the same for X11, except that
shared buffers are optional because the X protocol can also squeeze all
display updates through the stream pipe.

So, if you want to allow guests to talk to the host display server you
can run the stream pipe over vsock.  But there is nothing for the shared
buffers ...
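
For reference, the stream-pipe half over vsock is simple enough.  A
guest-side sketch of connecting to a host proxy over AF_VSOCK (the port
number is an arbitrary example, not an assigned one):

  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/vm_sockets.h>

  static int connect_display_proxy(void)
  {
          struct sockaddr_vm addr = {
                  .svm_family = AF_VSOCK,
                  .svm_cid    = VMADDR_CID_HOST,  /* the host side */
                  .svm_port   = 5555,             /* example port only */
          };
          int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

          if (fd < 0)
                  return -1;
          if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                  close(fd);
                  return -1;
          }
          return fd;  /* wayland/X11 protocol messages flow over this stream */
  }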

We could replicate vsock functionality elsewhere.  I think that happened
in the out-of-tree virtio-wayland implementation.  There also was some
discussion about adding streams to virtio-gpu, slightly pimped up so you
can easily pass around virtio-gpu resource references for buffer
sharing.  But given that getting vsock right isn't exactly trivial
(consider all the fairness issues when multiplexing multiple streams
over a virtqueue for example) I don't think this is a good plan.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-08  7:22           ` Gerd Hoffmann
@ 2019-11-08  7:35             ` Stefan Hajnoczi
  2019-11-09  1:41               ` Stéphane Marchesin
  0 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-08  7:35 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > Adding a list of common properties to the spec certainly makes sense,
> > > so everybody uses the same names.  Adding struct-ed properties for
> > > common use cases might be useful too.
> >
> > Why not define VIRTIO devices for wayland and friends?
>
> There is an out-of-tree implementation of that, so yes, that surely is
> an option.
>
> Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> pipe as control channel.  Pretty much the same for X11, except that
> shared buffers are optional because the X protocol can also squeeze all
> display updates through the stream pipe.
>
> So, if you want to allow guests to talk to the host display server you can run
> the stream pipe over vsock.  But there is nothing for the shared
> buffers ...
>
> We could replicate vsock functionality elsewhere.  I think that happened
> in the out-of-tree virtio-wayland implementation.  There also was some
> discussion about adding streams to virtio-gpu, slightly pimped up so you
> can easily pass around virtio-gpu resource references for buffer
> sharing.  But given that getting vsock right isn't exactly trivial
> (consider all the fairness issues when multiplexing multiple streams
> over a virtqueue for example) I don't think this is a good plan.

I also think vsock isn't the right fit.

Defining a virtio-wayland device makes sense to me: you get the guest
RAM access via virtqueues, plus the VIRTIO infrastructure (device IDs,
configuration space, feature bits, and existing reusable
kernel/userspace/QEMU code).

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-08  7:35             ` Stefan Hajnoczi
@ 2019-11-09  1:41               ` Stéphane Marchesin
  2019-11-09 10:12                 ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Stéphane Marchesin @ 2019-11-09  1:41 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Gerd Hoffmann, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Dylan Reid, Gurchetan Singh,
	Dmitry Morozov, Pawel Osciak, Linux Media Mailing List

On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > Adding a list of common properties to the spec certainly makes sense,
> > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > common use cases might be useful too.
> > >
> > > Why not define VIRTIO devices for wayland and friends?
> >
> > There is an out-of-tree implementation of that, so yes, that surely is
> > an option.
> >
> > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > pipe as control channel.  Pretty much the same for X11, except that
> > shared buffers are optional because the X protocol can also squeeze all
> > display updates through the stream pipe.
> >
> > So, if you want allow guests talk to the host display server you can run
> > the stream pipe over vsock.  But there is nothing for the shared
> > buffers ...
> >
> > We could replicate vsock functionality elsewhere.  I think that happened
> > in the out-of-tree virtio-wayland implementation.  There also was some
> > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > can easily pass around virtio-gpu resource references for buffer
> > sharing.  But given that getting vsock right isn't exactly trivial
> > (consider all the fairness issues when multiplexing multiple streams
> > over a virtqueue for example) I don't think this is a good plan.
>
> I also think vsock isn't the right fit.
>

+1, we are using vsock right now and we have a few pain points because of
it.

I think the high-level problem is that because it is a side channel,
we don't see everything that happens to the buffer in one place
(rendering + display), and we can't do things like reallocate the
format accordingly if needed, or do flushing etc. on that buffer where
needed.

Best,
Stéphane

>
> Defining a virtio-wayland device makes sense to me: you get the guest
> RAM access via virtqueues, plus the VIRTIO infrastructure (device IDs,
> configuration space, feature bits, and existing reusable
> kernel/userspace/QEMU code).
>
> Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-09  1:41               ` Stéphane Marchesin
@ 2019-11-09 10:12                 ` Stefan Hajnoczi
  2019-11-09 11:16                   ` Tomasz Figa
  0 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-09 10:12 UTC (permalink / raw)
  To: Stéphane Marchesin
  Cc: Gerd Hoffmann, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Dylan Reid, Gurchetan Singh,
	Dmitry Morozov, Pawel Osciak, Linux Media Mailing List

On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
>
> On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > common use cases might be useful too.
> > > >
> > > > Why not define VIRTIO devices for wayland and friends?
> > >
> > > There is an out-of-tree implementation of that, so yes, that surely is
> > > an option.
> > >
> > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > pipe as control channel.  Pretty much the same for X11, except that
> > > shared buffers are optional because the X protocol can also squeeze all
> > > display updates through the stream pipe.
> > >
> > > So, if you want allow guests talk to the host display server you can run
> > > the stream pipe over vsock.  But there is nothing for the shared
> > > buffers ...
> > >
> > > We could replicate vsock functionality elsewhere.  I think that happened
> > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > can easily pass around virtio-gpu resource references for buffer
> > > sharing.  But given that getting vsock right isn't exactly trivial
> > > (consider all the fairness issues when multiplexing multiple streams
> > > over a virtqueue for example) I don't think this is a good plan.
> >
> > I also think vsock isn't the right fit.
> >
>
> +1 we are using vsock right now and we have a few pains because of it.
>
> I think the high-level problem is that because it is a side channel,
> we don't see everything that happens to the buffer in one place
> (rendering + display) and we can't do things like reallocate the
> format accordingly if needed, or we can't do flushing etc. on that
> buffer where needed.

Do you think a VIRTIO device designed for your use case is an
appropriate solution?

I have been arguing that these use cases should be addressed with
dedicated VIRTIO devices, but I don't understand the use cases of
everyone on the CC list so maybe I'm missing something :).  If there
are reasons why having a VIRTIO device for your use case does not make
sense then it would be good to discuss them.  Blockers like "VIRTIO is
too heavyweight/complex for us because ...", "Our application can't
make use of VIRTIO devices because ...", etc would be important to
hear.

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-09 10:12                 ` Stefan Hajnoczi
@ 2019-11-09 11:16                   ` Tomasz Figa
  2019-11-09 12:08                     ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Tomasz Figa @ 2019-11-09 11:16 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stéphane Marchesin, Gerd Hoffmann, geoff, virtio-dev,
	Alex Lau, Daniel Vetter, Alexandre Courbot, qemu-devel,
	Keiichi Watanabe, David Stevens, Hans Verkuil, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> >
> > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > common use cases might be useful too.
> > > > >
> > > > > Why not define VIRTIO devices for wayland and friends?
> > > >
> > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > an option.
> > > >
> > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > shared buffers are optional because the X protocol can also squeeze all
> > > > display updates through the stream pipe.
> > > >
> > > > So, if you want allow guests talk to the host display server you can run
> > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > buffers ...
> > > >
> > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > can easily pass around virtio-gpu resource references for buffer
> > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > (consider all the fairness issues when multiplexing multiple streams
> > > > over a virtqueue for example) I don't think this is a good plan.
> > >
> > > I also think vsock isn't the right fit.
> > >
> >
> > +1 we are using vsock right now and we have a few pains because of it.
> >
> > I think the high-level problem is that because it is a side channel,
> > we don't see everything that happens to the buffer in one place
> > (rendering + display) and we can't do things like reallocate the
> > format accordingly if needed, or we can't do flushing etc. on that
> > buffer where needed.
>
> Do you think a VIRTIO device designed for your use case is an
> appropriate solution?
>
> I have been arguing that these use cases should be addressed with
> dedicated VIRTIO devices, but I don't understand the use cases of
> everyone on the CC list so maybe I'm missing something :).  If there
> are reasons why having a VIRTIO device for your use case does not make
> sense then it would be good to discuss them.  Blockers like "VIRTIO is
> too heavyweight/complex for us because ...", "Our application can't
> make use of VIRTIO devices because ...", etc would be important to
> hear.

Do you have any idea on how to model Wayland as a VIRTIO device?

Stephane mentioned that we use vsock, but in fact we have our own
VIRTIO device, except that it's semantically almost the same as vsock,
with a difference being the ability to pass buffers and pipes across
the VM boundary.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-09 11:16                   ` Tomasz Figa
@ 2019-11-09 12:08                     ` Stefan Hajnoczi
  2019-11-09 15:12                       ` Tomasz Figa
  0 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-09 12:08 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Stéphane Marchesin, Gerd Hoffmann, geoff, virtio-dev,
	Alex Lau, Daniel Vetter, Alexandre Courbot, qemu-devel,
	Keiichi Watanabe, David Stevens, Hans Verkuil, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Sat, Nov 9, 2019 at 12:17 PM Tomasz Figa <tfiga@chromium.org> wrote:
> On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> > > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > > common use cases might be useful too.
> > > > > >
> > > > > > Why not define VIRTIO devices for wayland and friends?
> > > > >
> > > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > > an option.
> > > > >
> > > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > > shared buffers are optional because the X protocol can also squeeze all
> > > > > display updates through the stream pipe.
> > > > >
> > > > > So, if you want allow guests talk to the host display server you can run
> > > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > > buffers ...
> > > > >
> > > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > > can easily pass around virtio-gpu resource references for buffer
> > > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > > (consider all the fairness issues when multiplexing multiple streams
> > > > > over a virtqueue for example) I don't think this is a good plan.
> > > >
> > > > I also think vsock isn't the right fit.
> > > >
> > >
> > > +1 we are using vsock right now and we have a few pains because of it.
> > >
> > > I think the high-level problem is that because it is a side channel,
> > > we don't see everything that happens to the buffer in one place
> > > (rendering + display) and we can't do things like reallocate the
> > > format accordingly if needed, or we can't do flushing etc. on that
> > > buffer where needed.
> >
> > Do you think a VIRTIO device designed for your use case is an
> > appropriate solution?
> >
> > I have been arguing that these use cases should be addressed with
> > dedicated VIRTIO devices, but I don't understand the use cases of
> > everyone on the CC list so maybe I'm missing something :).  If there
> > are reasons why having a VIRTIO device for your use case does not make
> > sense then it would be good to discuss them.  Blockers like "VIRTIO is
> > too heavyweight/complex for us because ...", "Our application can't
> > make use of VIRTIO devices because ...", etc would be important to
> > hear.
>
> Do you have any idea on how to model Wayland as a VIRTIO device?
>
> Stephane mentioned that we use vsock, but in fact we have our own
> VIRTIO device, except that it's semantically almost the same as vsock,
> with a difference being the ability to pass buffers and pipes across
> the VM boundary.

I know neither Wayland nor your use case :).

But we can discuss the design of your VIRTIO device.  Please post a
link to the code.

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-09 12:08                     ` Stefan Hajnoczi
@ 2019-11-09 15:12                       ` Tomasz Figa
  2019-11-18 10:20                         ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Tomasz Figa @ 2019-11-09 15:12 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stéphane Marchesin, Gerd Hoffmann, geoff, virtio-dev,
	Alex Lau, Daniel Vetter, Alexandre Courbot, qemu-devel,
	Keiichi Watanabe, David Stevens, Hans Verkuil, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Sat, Nov 9, 2019 at 9:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Sat, Nov 9, 2019 at 12:17 PM Tomasz Figa <tfiga@chromium.org> wrote:
> > On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> > > > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > > > common use cases might be useful too.
> > > > > > >
> > > > > > > Why not define VIRTIO devices for wayland and friends?
> > > > > >
> > > > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > > > an option.
> > > > > >
> > > > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > > > shared buffers are optional because the X protocol can also squeeze all
> > > > > > display updates through the stream pipe.
> > > > > >
> > > > > > So, if you want allow guests talk to the host display server you can run
> > > > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > > > buffers ...
> > > > > >
> > > > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > > > can easily pass around virtio-gpu resource references for buffer
> > > > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > > > (consider all the fairness issues when multiplexing multiple streams
> > > > > > over a virtqueue for example) I don't think this is a good plan.
> > > > >
> > > > > I also think vsock isn't the right fit.
> > > > >
> > > >
> > > > +1 we are using vsock right now and we have a few pains because of it.
> > > >
> > > > I think the high-level problem is that because it is a side channel,
> > > > we don't see everything that happens to the buffer in one place
> > > > (rendering + display) and we can't do things like reallocate the
> > > > format accordingly if needed, or we can't do flushing etc. on that
> > > > buffer where needed.
> > >
> > > Do you think a VIRTIO device designed for your use case is an
> > > appropriate solution?
> > >
> > > I have been arguing that these use cases should be addressed with
> > > dedicated VIRTIO devices, but I don't understand the use cases of
> > > everyone on the CC list so maybe I'm missing something :).  If there
> > > are reasons why having a VIRTIO device for your use case does not make
> > > sense then it would be good to discuss them.  Blockers like "VIRTIO is
> > > too heavyweight/complex for us because ...", "Our application can't
> > > make use of VIRTIO devices because ...", etc would be important to
> > > hear.
> >
> > Do you have any idea on how to model Wayland as a VIRTIO device?
> >
> > Stephane mentioned that we use vsock, but in fact we have our own
> > VIRTIO device, except that it's semantically almost the same as vsock,
> > with a difference being the ability to pass buffers and pipes across
> > the VM boundary.
>
> I know neither Wayland nor your use case :).
>
> But we can discuss the design of your VIRTIO device.  Please post a
> link to the code.

The guest-side driver:
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/drivers/virtio/virtio_wl.c

Protocol definitions:
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/include/uapi/linux/virtio_wl.h

crosvm device implementation:
https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/master/devices/src/virtio/wl.rs

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  8:43 ` Stefan Hajnoczi
  2019-11-06  9:51   ` Gerd Hoffmann
@ 2019-11-11  3:04   ` David Stevens
  2019-11-11 15:36     ` [virtio-dev] " Liam Girdwood
  1 sibling, 1 reply; 51+ messages in thread
From: David Stevens @ 2019-11-11  3:04 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Gerd Hoffmann, Keiichi Watanabe, geoff, virtio-dev, Alex Lau,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Hans Verkuil,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

> My question would be "what is the actual problem you are trying to
> solve?".

One problem that needs to be solved is sharing buffers between
devices. With the out-of-tree Wayland device, to share virtio-gpu
buffers we've been using the virtio resource id. However, that
approach isn't necessarily the right one, especially once there
are more devices allocating/sharing buffers. Specifically, this issue
came up in the recent RFC about adding a virtio video decoder device.

Having a centralized buffer allocator device is one way to deal with
sharing buffers, since it gives a definitive buffer identifier that
can be used by all drivers/devices to refer to the buffer. That being
said, I think the device as proposed is insufficient, as such a
centralized buffer allocator should probably be responsible for
allocating all shared buffers, not just linear guest ram buffers.

-David

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-11  3:04   ` David Stevens
@ 2019-11-11 15:36     ` Liam Girdwood
  2019-11-12  0:54       ` Gurchetan Singh
  0 siblings, 1 reply; 51+ messages in thread
From: Liam Girdwood @ 2019-11-11 15:36 UTC (permalink / raw)
  To: David Stevens, Stefan Hajnoczi
  Cc: Gerd Hoffmann, Keiichi Watanabe, geoff, virtio-dev, Alex Lau,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Hans Verkuil,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Mon, 2019-11-11 at 12:04 +0900, David Stevens wrote:
> Having a centralized buffer allocator device is one way to deal with
> sharing buffers, since it gives a definitive buffer identifier that
> can be used by all drivers/devices to refer to the buffer. That being
> said, I think the device as proposed is insufficient, as such a
> centralized buffer allocator should probably be responsible for
> allocating all shared buffers, not just linear guest ram buffers.

This would work for audio. I need to be able to :-

1) Allocate buffers on guests that I can pass as SG physical pages to
the DMA engine (via a privileged VM driver) for audio data. Can be any
memory as long as it's DMA-able.

2) Export hardware mailbox memory (in a real device PCI BAR) as RO to
each guest, to give guests low-latency information on each audio stream
and to support use cases like voice calls, gaming, system notifications
and general audio processing.

Liam


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-11 15:36     ` [virtio-dev] " Liam Girdwood
@ 2019-11-12  0:54       ` Gurchetan Singh
  2019-11-12 13:56         ` Liam Girdwood
  0 siblings, 1 reply; 51+ messages in thread
From: Gurchetan Singh @ 2019-11-12  0:54 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: David Stevens, Stefan Hajnoczi, Gerd Hoffmann, Keiichi Watanabe,
	geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> Each buffer also has some properties to carry metadata, some fixed (id, size, application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).

Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:

https://patchwork.freedesktop.org/patch/310349/

For virtio-wayland + virtio-vdec, the problem is sharing -- not allocation.

As the buffer reaches a kernel boundary, its properties devolve into
[fd, size].  Userspace can typically handle sharing metadata.  The
issue is the guest dma-buf fd doesn't mean anything on the host.

One scenario could be:

1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
hypercall to the host.
2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
name will be "virtgpu-buffer-${UUID}").
3) When importing, virtio-{vdec, video} reads the dma-buf name in
userspace and converts the fd to a driver handle.  The name is sent to the host via a
hypercall, giving host virtio-{vdec, video} enough information to
identify the buffer.

This solution is entirely userspace -- we can probably come up with
something in kernel space [generate_random_uuid()] if need be.  We
only need two universal IDs: {device ID, buffer ID}.
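
For illustration, a minimal userspace sketch of step 2 (the dmabuf_fd and
the exact naming scheme are assumptions, and current kernels cap dma-buf
name length, so a real scheme may need something shorter than prefix plus
full UUID):

/* Sketch only: tag an exported dma-buf with a per-buffer UUID so any
 * importer can forward a stable identifier to its host-side device. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

static int tag_dmabuf(int dmabuf_fd)
{
        char uuid[37] = { 0 };
        char name[64];
        int fd = open("/proc/sys/kernel/random/uuid", O_RDONLY);

        if (fd < 0)
                return -1;
        if (read(fd, uuid, 36) != 36) {
                close(fd);
                return -1;
        }
        close(fd);

        /* The kernel limits dma-buf name length, so a real scheme may
         * need something shorter than prefix + full UUID. */
        snprintf(name, sizeof(name), "virtgpu-buffer-%s", uuid);
        return ioctl(dmabuf_fd, DMA_BUF_SET_NAME, name);
}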

> On Wed, Nov 6, 2019 at 2:28 PM Geoffrey McRae <geoff@hostfission.com> wrote:
> The entire point of this for our purposes is due to the fact that we can
> not allocate the buffer, it's either provided by the GPU driver or
> DirectX. If virtio-gpu were to allocate the buffer we might as well
> forget
> all this and continue using the ivshmem device.

We have a similar problem with closed source drivers.  As @lfy
mentioned, it's possible to map memory directory into virtio-gpu's PCI
bar and it's actually a planned feature.  Would that work for your use
case?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-12  0:54       ` Gurchetan Singh
@ 2019-11-12 13:56         ` Liam Girdwood
  2019-11-12 22:55           ` Gurchetan Singh
  0 siblings, 1 reply; 51+ messages in thread
From: Liam Girdwood @ 2019-11-12 13:56 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: David Stevens, Stefan Hajnoczi, Gerd Hoffmann, Keiichi Watanabe,
	geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com>
> wrote:
> > Each buffer also has some properties to carry metadata, some fixed
> > (id, size, application), but
> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> 
> Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> 
> https://patchwork.freedesktop.org/patch/310349/
> 
> For virtio-wayland + virtio-vdec, the problem is sharing -- not
> allocation.
> 

Audio also needs to share buffers with firmware running on DSPs.

> As the buffer reaches a kernel boundary, it's properties devolve into
> [fd, size].  Userspace can typically handle sharing metadata.  The
> issue is the guest dma-buf fd doesn't mean anything on the host.
> 
> One scenario could be:
> 
> 1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
> allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
> hypercall to the host.
> 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
> name will be "virtgpu-buffer-${UUID}").
> 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> userspace, and calls fd to handle.  The name is sent to the host via
> a
> hypercall, giving host virtio-{vdec, video} enough information to
> identify the buffer.
> 
> This solution is entirely userspace -- we can probably come up with
> something in kernel space [generate_random_uuid()] if need be.  We
> only need two universal IDs: {device ID, buffer ID}.
> 

I need something where I can take a guest buffer and then convert it to
a physical scatter-gather page list. I can then either pass the SG page
list to the DSP firmware (for DMAC IP programming) or have the host
driver program the DMAC directly using the page list (who programs DMAC
depends on DSP architecture).

DSP FW has no access to userspace so we would need some additional API
on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?

Liam



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-12 13:56         ` Liam Girdwood
@ 2019-11-12 22:55           ` Gurchetan Singh
  2019-11-19 15:31             ` Liam Girdwood
  0 siblings, 1 reply; 51+ messages in thread
From: Gurchetan Singh @ 2019-11-12 22:55 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: David Stevens, Stefan Hajnoczi, Gerd Hoffmann, Keiichi Watanabe,
	geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
<liam.r.girdwood@linux.intel.com> wrote:
>
> On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com>
> > wrote:
> > > Each buffer also has some properties to carry metadata, some fixed
> > > (id, size, application), but
> > > also allow free form (name = value, framebuffers would have
> > > width/height/stride/format for example).
> >
> > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> >
> > https://patchwork.freedesktop.org/patch/310349/
> >
> > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > allocation.
> >
>
> Audio also needs to share buffers with firmware running on DSPs.
>
> > As the buffer reaches a kernel boundary, it's properties devolve into
> > [fd, size].  Userspace can typically handle sharing metadata.  The
> > issue is the guest dma-buf fd doesn't mean anything on the host.
> >
> > One scenario could be:
> >
> > 1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
> > allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
> > hypercall to the host.
> > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
> > name will be "virtgpu-buffer-${UUID}").
> > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > userspace, and calls fd to handle.  The name is sent to the host via
> > a
> > hypercall, giving host virtio-{vdec, video} enough information to
> > identify the buffer.
> >
> > This solution is entirely userspace -- we can probably come up with
> > something in kernel space [generate_random_uuid()] if need be.  We
> > only need two universal IDs: {device ID, buffer ID}.
> >
>
> I need something where I can take a guest buffer and then convert it to
> physical scatter gather page list. I can then either pass the SG page
> list to the DSP firmware (for DMAC IP programming) or have the host
> driver program the DMAC directly using the page list (who programs DMAC
> depends on DSP architecture).

So you need the HW address space from a guest allocation?  Would your
allocation hypercalls use something like the virtio_gpu_mem_entry
(virtio_gpu.h) and the draft virtio_video_mem_entry?

struct virtio_gpu_mem_entry {
        __le64 addr;
        __le32 length;
        __le32 padding;
};

/* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
struct virtio_gpu_resource_attach_backing {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 resource_id;
        __le32 nr_entries;
        /* followed by nr_entries x struct virtio_gpu_mem_entry */
};

struct virtio_video_mem_entry {
    __le64 addr;
    __le32 length;
    __u8 padding[4];
};

struct virtio_video_resource_attach_backing {
    struct virtio_video_ctrl_hdr hdr;
    __le32 resource_id;
    __le32 nr_entries;
};

>
> DSP FW has no access to userspace so we would need some additional API
> on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?

The dma-buf api currently can share guest memory sg-lists.

>
> Liam
>
>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-09 15:12                       ` Tomasz Figa
@ 2019-11-18 10:20                         ` Stefan Hajnoczi
  2019-11-20 10:11                           ` Tomasz Figa
  0 siblings, 1 reply; 51+ messages in thread
From: Stefan Hajnoczi @ 2019-11-18 10:20 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Stéphane Marchesin, Gerd Hoffmann, geoff, virtio-dev,
	Alex Lau, Daniel Vetter, Alexandre Courbot, qemu-devel,
	Keiichi Watanabe, David Stevens, Hans Verkuil, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Sat, Nov 9, 2019 at 3:12 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Sat, Nov 9, 2019 at 9:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Sat, Nov 9, 2019 at 12:17 PM Tomasz Figa <tfiga@chromium.org> wrote:
> > > On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> > > > > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > > > > common use cases might be useful too.
> > > > > > > >
> > > > > > > > Why not define VIRTIO devices for wayland and friends?
> > > > > > >
> > > > > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > > > > an option.
> > > > > > >
> > > > > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > > > > shared buffers are optional because the X protocol can also squeeze all
> > > > > > > display updates through the stream pipe.
> > > > > > >
> > > > > > > So, if you want allow guests talk to the host display server you can run
> > > > > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > > > > buffers ...
> > > > > > >
> > > > > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > > > > can easily pass around virtio-gpu resource references for buffer
> > > > > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > > > > (consider all the fairness issues when multiplexing multiple streams
> > > > > > > over a virtqueue for example) I don't think this is a good plan.
> > > > > >
> > > > > > I also think vsock isn't the right fit.
> > > > > >
> > > > >
> > > > > +1 we are using vsock right now and we have a few pains because of it.
> > > > >
> > > > > I think the high-level problem is that because it is a side channel,
> > > > > we don't see everything that happens to the buffer in one place
> > > > > (rendering + display) and we can't do things like reallocate the
> > > > > format accordingly if needed, or we can't do flushing etc. on that
> > > > > buffer where needed.
> > > >
> > > > Do you think a VIRTIO device designed for your use case is an
> > > > appropriate solution?
> > > >
> > > > I have been arguing that these use cases should be addressed with
> > > > dedicated VIRTIO devices, but I don't understand the use cases of
> > > > everyone on the CC list so maybe I'm missing something :).  If there
> > > > are reasons why having a VIRTIO device for your use case does not make
> > > > sense then it would be good to discuss them.  Blockers like "VIRTIO is
> > > > too heavyweight/complex for us because ...", "Our application can't
> > > > make use of VIRTIO devices because ...", etc would be important to
> > > > hear.
> > >
> > > Do you have any idea on how to model Wayland as a VIRTIO device?
> > >
> > > Stephane mentioned that we use vsock, but in fact we have our own
> > > VIRTIO device, except that it's semantically almost the same as vsock,
> > > with a difference being the ability to pass buffers and pipes across
> > > the VM boundary.
> >
> > I know neither Wayland nor your use case :).
> >
> > But we can discuss the design of your VIRTIO device.  Please post a
> > link to the code.
>
> The guest-side driver:
> https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/drivers/virtio/virtio_wl.c
>
> Protocol definitions:
> https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/include/uapi/linux/virtio_wl.h
>
> crosvm device implementation:
> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/master/devices/src/virtio/wl.rs

Thanks, Tomasz!

Unfortunately I haven't had a chance to look or catch up on this email
thread due to other work that will keep me away for at least another
week :(.

Stefan

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-12 22:55           ` Gurchetan Singh
@ 2019-11-19 15:31             ` Liam Girdwood
  2019-11-20  0:42               ` Gurchetan Singh
  2019-11-20  9:53               ` Gerd Hoffmann
  0 siblings, 2 replies; 51+ messages in thread
From: Liam Girdwood @ 2019-11-19 15:31 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: David Stevens, Stefan Hajnoczi, Gerd Hoffmann, Keiichi Watanabe,
	geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

On Tue, 2019-11-12 at 14:55 -0800, Gurchetan Singh wrote:
> On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
> <liam.r.girdwood@linux.intel.com> wrote:
> > 
> > On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com>
> > > wrote:
> > > > Each buffer also has some properties to carry metadata, some
> > > > fixed
> > > > (id, size, application), but
> > > > also allow free form (name = value, framebuffers would have
> > > > width/height/stride/format for example).
> > > 
> > > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> > > 
> > > https://patchwork.freedesktop.org/patch/310349/
> > > 
> > > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > > allocation.
> > > 
> > 
> > Audio also needs to share buffers with firmware running on DSPs.
> > 
> > > As the buffer reaches a kernel boundary, it's properties devolve
> > > into
> > > [fd, size].  Userspace can typically handle sharing
> > > metadata.  The
> > > issue is the guest dma-buf fd doesn't mean anything on the host.
> > > 
> > > One scenario could be:
> > > 
> > > 1) Guest userspace (say, gralloc) allocates using virtio-
> > > gpu.  When
> > > allocating, we call uuidgen() and then pass that via
> > > RESOURCE_CREATE
> > > hypercall to the host.
> > > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the
> > > buffer
> > > name will be "virtgpu-buffer-${UUID}").
> > > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > > userspace, and calls fd to handle.  The name is sent to the host
> > > via
> > > a
> > > hypercall, giving host virtio-{vdec, video} enough information to
> > > identify the buffer.
> > > 
> > > This solution is entirely userspace -- we can probably come up
> > > with
> > > something in kernel space [generate_random_uuid()] if need
> > > be.  We
> > > only need two universal IDs: {device ID, buffer ID}.
> > > 
> > 
> > I need something where I can take a guest buffer and then convert
> > it to
> > physical scatter gather page list. I can then either pass the SG
> > page
> > list to the DSP firmware (for DMAC IP programming) or have the host
> > driver program the DMAC directly using the page list (who programs
> > DMAC
> > depends on DSP architecture).
> 
> So you need the HW address space from a guest allocation? 

Yes.

>  Would your
> allocation hypercalls use something like the virtio_gpu_mem_entry
> (virtio_gpu.h) and the draft virtio_video_mem_entry (draft)?

IIUC, this looks like generic SG buffer allocation ?

> 
> struct {
>         __le64 addr;
>         __le32 length;
>         __le32 padding;
> };
> 
> /* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
> struct virtio_gpu_resource_attach_backing {
>         struct virtio_gpu_ctrl_hdr hdr;
>         __le32 resource_id;
>         __le32 nr_entries;
>       *struct struct virtio_gpu_mem_entry */
> };
> 
> struct virtio_video_mem_entry {
>     __le64 addr;
>     __le32 length;
>     __u8 padding[4];
> };
> 
> struct virtio_video_resource_attach_backing {
>     struct virtio_video_ctrl_hdr hdr;
>     __le32 resource_id;
>     __le32 nr_entries;
> };
> 
> > 
> > DSP FW has no access to userspace so we would need some additional
> > API
> > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> 
> The dma-buf api currently can share guest memory sg-lists.

Ok, IIUC buffers can either be shared using the proposed GPU APIs
(above) or via userspace using the dma-buf API?  My preference
would be to use the more direct GPU APIs, sending physical page
addresses from the guest to the device driver. I guess this is your use
case too?

Thanks

Liam

> 
> > 
> > Liam
> > 
> > 
> > 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-19 15:31             ` Liam Girdwood
@ 2019-11-20  0:42               ` Gurchetan Singh
  2019-11-20  9:53               ` Gerd Hoffmann
  1 sibling, 0 replies; 51+ messages in thread
From: Gurchetan Singh @ 2019-11-20  0:42 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: David Stevens, Stefan Hajnoczi, Gerd Hoffmann, Keiichi Watanabe,
	geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

On Tue, Nov 19, 2019 at 7:31 AM Liam Girdwood
<liam.r.girdwood@linux.intel.com> wrote:
>
> On Tue, 2019-11-12 at 14:55 -0800, Gurchetan Singh wrote:
> > On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
> > <liam.r.girdwood@linux.intel.com> wrote:
> > >
> > > On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > > > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com>
> > > > wrote:
> > > > > Each buffer also has some properties to carry metadata, some
> > > > > fixed
> > > > > (id, size, application), but
> > > > > also allow free form (name = value, framebuffers would have
> > > > > width/height/stride/format for example).
> > > >
> > > > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> > > >
> > > > https://patchwork.freedesktop.org/patch/310349/
> > > >
> > > > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > > > allocation.
> > > >
> > >
> > > Audio also needs to share buffers with firmware running on DSPs.
> > >
> > > > As the buffer reaches a kernel boundary, it's properties devolve
> > > > into
> > > > [fd, size].  Userspace can typically handle sharing
> > > > metadata.  The
> > > > issue is the guest dma-buf fd doesn't mean anything on the host.
> > > >
> > > > One scenario could be:
> > > >
> > > > 1) Guest userspace (say, gralloc) allocates using virtio-
> > > > gpu.  When
> > > > allocating, we call uuidgen() and then pass that via
> > > > RESOURCE_CREATE
> > > > hypercall to the host.
> > > > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the
> > > > buffer
> > > > name will be "virtgpu-buffer-${UUID}").
> > > > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > > > userspace, and calls fd to handle.  The name is sent to the host
> > > > via
> > > > a
> > > > hypercall, giving host virtio-{vdec, video} enough information to
> > > > identify the buffer.
> > > >
> > > > This solution is entirely userspace -- we can probably come up
> > > > with
> > > > something in kernel space [generate_random_uuid()] if need
> > > > be.  We
> > > > only need two universal IDs: {device ID, buffer ID}.
> > > >
> > >
> > > I need something where I can take a guest buffer and then convert
> > > it to
> > > physical scatter gather page list. I can then either pass the SG
> > > page
> > > list to the DSP firmware (for DMAC IP programming) or have the host
> > > driver program the DMAC directly using the page list (who programs
> > > DMAC
> > > depends on DSP architecture).
> >
> > So you need the HW address space from a guest allocation?
>
> Yes.
>
> >  Would your
> > allocation hypercalls use something like the virtio_gpu_mem_entry
> > (virtio_gpu.h) and the draft virtio_video_mem_entry (draft)?
>
> IIUC, this looks like generic SG buffer allocation ?
>
> >
> > struct {
> >         __le64 addr;
> >         __le32 length;
> >         __le32 padding;
> > };
> >
> > /* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
> > struct virtio_gpu_resource_attach_backing {
> >         struct virtio_gpu_ctrl_hdr hdr;
> >         __le32 resource_id;
> >         __le32 nr_entries;
> >       *struct struct virtio_gpu_mem_entry */
> > };
> >
> > struct virtio_video_mem_entry {
> >     __le64 addr;
> >     __le32 length;
> >     __u8 padding[4];
> > };
> >
> > struct virtio_video_resource_attach_backing {
> >     struct virtio_video_ctrl_hdr hdr;
> >     __le32 resource_id;
> >     __le32 nr_entries;
> > };
> >
> > >
> > > DSP FW has no access to userspace so we would need some additional
> > > API
> > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> >
> > The dma-buf api currently can share guest memory sg-lists.
>
> Ok, IIUC buffers can either be shared using the GPU proposed APIs
> (above) or using the dma-buf API to share via userspace ?

If we restrict ourselves to guest-sg lists only, then the current
dma-buf API is sufficient to share buffers.  For example, virtio-gpu
can allocate with the following hypercall (as it does now):

struct virtio_gpu_resource_attach_backing {
         struct virtio_gpu_ctrl_hdr hdr;
         __le32 resource_id;
         __le32 nr_entries;
       /* followed by nr_entries x struct virtio_gpu_mem_entry */
};

Then in the guest kernel, virtio-{video, snd} can get the sg-list via
dma_buf_map_attachment, and then have a hypercall of its own:

struct virtio_video_resource_import {
         struct virtio_video_ctrl_hdr hdr;
         __le32 video_resource_id;
         __le32 nr_entries;
       /* followed by nr_entries x struct virtio_gpu_mem_entry */
};

Then it can create dmabuf on the host or get the HW address from the SG list.
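
One way this could look in a guest driver, reusing the existing in-kernel
dma-buf API (the device pointer, error cleanup and the actual command
submission are assumed or elided; virtio_gpu_mem_entry is the struct
quoted above):

/* Sketch only: pull the DMA addresses of an imported dma-buf and fill
 * virtio_gpu_mem_entry records for a subsequent import/attach command.
 * Unmap/detach/put cleanup is left out for brevity. */
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/virtio_gpu.h>

static int fill_mem_entries(struct device *dev, int dmabuf_fd,
                            struct virtio_gpu_mem_entry *ents, u32 max)
{
        struct dma_buf *buf = dma_buf_get(dmabuf_fd);
        struct dma_buf_attachment *att;
        struct sg_table *sgt;
        struct scatterlist *sg;
        u32 filled = 0;
        int i;

        if (IS_ERR(buf))
                return PTR_ERR(buf);

        att = dma_buf_attach(buf, dev);
        if (IS_ERR(att))
                return PTR_ERR(att);

        sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt))
                return PTR_ERR(sgt);

        /* sgt->nents is the number of DMA-mapped segments */
        for_each_sg(sgt->sgl, sg, sgt->nents, i) {
                if (filled == max)
                        break;
                ents[filled].addr   = cpu_to_le64(sg_dma_address(sg));
                ents[filled].length = cpu_to_le32(sg_dma_len(sg));
                filled++;
        }

        /* ... hand 'ents' to the device in a resource-import command ... */
        return filled;
}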

The complications come in from sharing host allocated buffers ... for
that we may need a method to translate from guest fds to universal
"virtualized" resource IDs.  I've heard talk about the need to
translate from guest fence fds to host fence fds as well.

> My preference
> would be to use teh more direct GPU APIs sending physical page
> addresses from Guest to device driver. I guess this is your use case
> too ?

For my use case, guest memory is sufficient, especially given the
direction towards modifiers + system memory.  For closed source
drivers, we may need to directly map host buffers.  However, that use
case is restricted to virtio-gpu and won't work with other virtio
devices.


>
> Thanks
>
> Liam
>
> >
> > >
> > > Liam
> > >
> > >
> > >

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-19 15:31             ` Liam Girdwood
  2019-11-20  0:42               ` Gurchetan Singh
@ 2019-11-20  9:53               ` Gerd Hoffmann
  2019-11-25 16:46                 ` Liam Girdwood
  1 sibling, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-20  9:53 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: Gurchetan Singh, David Stevens, Stefan Hajnoczi,
	Keiichi Watanabe, geoff, virtio-dev, Alex Lau, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

  Hi,

> > > DSP FW has no access to userspace so we would need some additional
> > > API
> > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> > 
> > The dma-buf api currently can share guest memory sg-lists.
> 
> Ok, IIUC buffers can either be shared using the GPU proposed APIs
> (above) or using the dma-buf API to share via userspace ? My preference
> would be to use teh more direct GPU APIs sending physical page
> addresses from Guest to device driver. I guess this is your use case
> too ?

I'm not convinced this is useful for audio ...

I basically see two modes of operation which are useful:

  (1) send audio data via virtqueue.
  (2) map host audio buffers into the guest address space.

The audio driver API (i.e. ALSA) typically allows mmap()ing the audio
data buffers, so it is the host audio driver which handles the
allocation.  Letting the audio hardware DMA from/to userspace-allocated
buffers is not possible[1], but we would need that to allow qemu (or
other VMMs) to use guest-allocated buffers.
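
For illustration, a rough alsa-lib sketch (function name and surrounding
setup are assumed): with mmap-style access the application is handed a
pointer into the driver-allocated ring, so it cannot substitute its own,
e.g. guest-allocated, pages.

/* Sketch only (alsa-lib): the mmap path hands the application a pointer
 * into the driver-allocated ring; there is no way to make the hardware
 * DMA from an arbitrary caller-owned (e.g. guest-allocated) buffer. */
#include <alsa/asoundlib.h>

static int push_period(snd_pcm_t *pcm, const int16_t *samples,
                       snd_pcm_uframes_t frames)
{
        const snd_pcm_channel_area_t *areas;
        snd_pcm_uframes_t offset, avail = frames;
        int err = snd_pcm_mmap_begin(pcm, &areas, &offset, &avail);

        if (err < 0)
                return err;

        /* ... copy 'samples' into areas[] at 'offset' here ... */
        (void)samples;

        return (int)snd_pcm_mmap_commit(pcm, offset, avail);
}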

cheers,
  Gerd

[1] Disclaimer: It's been a while since I looked at alsa more closely, so
    there is a chance this might have changed without /me noticing.


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-18 10:20                         ` Stefan Hajnoczi
@ 2019-11-20 10:11                           ` Tomasz Figa
  0 siblings, 0 replies; 51+ messages in thread
From: Tomasz Figa @ 2019-11-20 10:11 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Stéphane Marchesin, Gerd Hoffmann, geoff, virtio-dev,
	Alex Lau, Daniel Vetter, Alexandre Courbot, qemu-devel,
	Keiichi Watanabe, David Stevens, Hans Verkuil, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

Hi Stefan,

On Mon, Nov 18, 2019 at 7:21 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Sat, Nov 9, 2019 at 3:12 PM Tomasz Figa <tfiga@chromium.org> wrote:
> >
> > On Sat, Nov 9, 2019 at 9:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > On Sat, Nov 9, 2019 at 12:17 PM Tomasz Figa <tfiga@chromium.org> wrote:
> > > > On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> > > > > > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > > > > > common use cases might be useful too.
> > > > > > > > >
> > > > > > > > > Why not define VIRTIO devices for wayland and friends?
> > > > > > > >
> > > > > > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > > > > > an option.
> > > > > > > >
> > > > > > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > > > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > > > > > shared buffers are optional because the X protocol can also squeeze all
> > > > > > > > display updates through the stream pipe.
> > > > > > > >
> > > > > > > > So, if you want allow guests talk to the host display server you can run
> > > > > > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > > > > > buffers ...
> > > > > > > >
> > > > > > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > > > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > > > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > > > > > can easily pass around virtio-gpu resource references for buffer
> > > > > > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > > > > > (consider all the fairness issues when multiplexing multiple streams
> > > > > > > > over a virtqueue for example) I don't think this is a good plan.
> > > > > > >
> > > > > > > I also think vsock isn't the right fit.
> > > > > > >
> > > > > >
> > > > > > +1 we are using vsock right now and we have a few pains because of it.
> > > > > >
> > > > > > I think the high-level problem is that because it is a side channel,
> > > > > > we don't see everything that happens to the buffer in one place
> > > > > > (rendering + display) and we can't do things like reallocate the
> > > > > > format accordingly if needed, or we can't do flushing etc. on that
> > > > > > buffer where needed.
> > > > >
> > > > > Do you think a VIRTIO device designed for your use case is an
> > > > > appropriate solution?
> > > > >
> > > > > I have been arguing that these use cases should be addressed with
> > > > > dedicated VIRTIO devices, but I don't understand the use cases of
> > > > > everyone on the CC list so maybe I'm missing something :).  If there
> > > > > are reasons why having a VIRTIO device for your use case does not make
> > > > > sense then it would be good to discuss them.  Blockers like "VIRTIO is
> > > > > too heavyweight/complex for us because ...", "Our application can't
> > > > > make use of VIRTIO devices because ...", etc would be important to
> > > > > hear.
> > > >
> > > > Do you have any idea on how to model Wayland as a VIRTIO device?
> > > >
> > > > Stephane mentioned that we use vsock, but in fact we have our own
> > > > VIRTIO device, except that it's semantically almost the same as vsock,
> > > > with a difference being the ability to pass buffers and pipes across
> > > > the VM boundary.
> > >
> > > I know neither Wayland nor your use case :).
> > >
> > > But we can discuss the design of your VIRTIO device.  Please post a
> > > link to the code.
> >
> > The guest-side driver:
> > https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/drivers/virtio/virtio_wl.c
> >
> > Protocol definitions:
> > https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/include/uapi/linux/virtio_wl.h
> >
> > crosvm device implementation:
> > https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/master/devices/src/virtio/wl.rs
>
> Thanks, Tomasz!
>
> Unfortunately I haven't had a chance to look or catch up on this email
> thread due to other work that will keep me away for at least another
> week :(.

Thanks for the note. Waiting patiently. :)

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
       [not found]           ` <CAEkmjvU8or7YT7CCBe7aUx-XQ3yJpUrY4CfBOnqk7pUH9d9RGQ@mail.gmail.com>
@ 2019-11-20 11:58             ` Tomasz Figa
  0 siblings, 0 replies; 51+ messages in thread
From: Tomasz Figa @ 2019-11-20 11:58 UTC (permalink / raw)
  To: Frank Yang
  Cc: Stefan Hajnoczi, Gerd Hoffmann, geoff, virtio-dev, Alex Lau,
	Alexandre Courbot, qemu-devel, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

Hi Frank,

On Fri, Nov 8, 2019 at 12:10 AM Frank Yang <lfy@google.com> wrote:
>
> So I'm not really sure why people are having issues sharing buffers that live on the GPU. Doesn't that show up as some integer ID on the host, and some $GuestFramework (dmabuf, gralloc) ID on the guest, and it all works out due to maintaining the correspondence in your particular stack of virtual devices? For example, if you want to do video decode in hardware on an Android guest, there should be a gralloc buffer whose handle contains enough information to reconstruct the GPU buffer ID on the host, because gralloc is how processes communicate gpu buffer ids to each other on Android.

I don't think we really have any issues with that. :)

We just need a standard for:
a) assignment of buffer IDs that the guest can refer to,
b) making all virtual devices understand the IDs from a) when the guest
passes them in (rough sketch below).
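
The sketch below is purely illustrative; the command names and layouts
are hypothetical, not an existing virtio interface:

/* Hypothetical commands, for illustration only: */
struct virtio_shared_resource_assign_id {
        struct virtio_gpu_ctrl_hdr hdr;   /* exporting device's ctrl header */
        __le32 resource_id;               /* device-local resource to tag */
        __le32 padding;
};

struct virtio_shared_resource_id_resp {
        struct virtio_gpu_ctrl_hdr hdr;
        __u8 id[16];                      /* ID any importing device accepts */
};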

>
> BTW, if we have a new device just for this, this should also be more flexible than being udmabuf on the host. There are other OSes than Linux. Keep in mind, also, that across different drivers even on Linux, e.g., NVIDIA proprietary, dmabuf might not always be available.
>
> As for host CPU memory that is allocated in various ways, I think Android Emulator has built a very flexible/general solution, esp if we need to share a host CPU buffer allocated via something thats not completely under our control, such as Vulkan. We reserve a PCI BAR for that and map memory directly from the host Vk drier into there, via the address space device. It's
>
> https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/hw/pci/goldfish_address_space.c
> https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emu/android/emulation/address_space_device.cpp#205

I recall that we already agreed on exposing host memory to the guests
using PCI BARs. There should be work-in-progress patches for
virtio-gpu to use that instead of shadow buffers and transfers.

>
> Number of copies is also completely under the user's control, unlike ivshmem. It also is not tied to any particular device such as gpu or codec. Since the memory is owned by the host and directly mapped to the guest PCI without any abstraction, it's contiguous, it doesn't carve out guest RAM, doesn't waste CMA, etc.

That's one of the reasons we use host-based allocations in VMs running
on Chrome OS. That said, I think everyone here agrees that it's a good
optimization that should be specified and implemented.

P.S. The common mailing list netiquette recommends bottom posting and
plain text emails.

Best regards,
Tomasz

>
> On Thu, Nov 7, 2019 at 4:13 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>
>> On Wed, Nov 6, 2019 at 1:50 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>> > > In the graphics buffer sharing use case, how does the other side
>> > > determine how to interpret this data?
>> >
>> > The idea is to have free form properties (name=value, with value being
>> > a string) for that kind of metadata.
>> >
>> > > Shouldn't there be a VIRTIO
>> > > device spec for the messaging so compatible implementations can be
>> > > written by others?
>> >
>> > Adding a list of common properties to the spec certainly makes sense,
>> > so everybody uses the same names.  Adding struct-ed properties for
>> > common use cases might be useful too.
>>
>> Why not define VIRTIO devices for wayland and friends?
>>
>> This new device exposes buffer sharing plus properties - effectively a
>> new device model nested inside VIRTIO.  The VIRTIO device model has
>> the necessary primitives to solve the buffer sharing problem so I'm
>> struggling to see the purpose of this new device.
>>
>> Custom/niche applications that do not wish to standardize their device
>> type can maintain out-of-tree VIRTIO devices.  Both kernel and
>> userspace drivers can be written for the device and there is already
>> VIRTIO driver code that can be reused.  They have access to the full
>> VIRTIO device model, including feature negotiation and configuration
>> space.
>>
>> Stefan
>>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
  2019-11-06 11:46     ` Stefan Hajnoczi
@ 2019-11-20 12:11     ` Tomasz Figa
  2 siblings, 0 replies; 51+ messages in thread
From: Tomasz Figa @ 2019-11-20 12:11 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Stefan Hajnoczi, geoff, virtio-dev, Alex Lau, Daniel Vetter,
	Alexandre Courbot, qemu-devel, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Wed, Nov 6, 2019 at 6:51 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > > Reason is:  Meanwhile I'm wondering whenever "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time.  Maybe it is better to have a dedicated
> > > buffer sharing virtio device?  Here is the rough idea:
> >
> > My concern is that buffer sharing isn't a "device".  It's a primitive
> > used in building other devices.  When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
>
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service to allow communication between host and guest
>
> That buffer sharing device falls into the same category.  Maybe it even
> makes sense to build that as virtio-vsock extension.  Not sure how well
> that would work with the multi-transport architecture of vsock though.
>
> > If this buffer sharing device's main purpose is for building proprietary
> > devices without contributing to VIRTIO, then I don't think it makes
> > sense for the VIRTIO community to assist in its development.
>
> One possible use case would be building a wayland proxy, using vsock for
> the wayland protocol messages and virtio-buffers for the shared buffers
> (wayland client window content).
>
> It could also simplify buffer sharing between devices (feed decoded
> video frames from decoder to gpu), although in that case it is less
> clear that it'll actually simplify things because virtio-gpu is
> involved anyway.
>
> We can't prevent people from using that for proprietary stuff (same goes
> for vsock).
>
> There is the option to use virtio-gpu instead, i.e. add support to qemu
> to export dma-buf handles for virtio-gpu resources to other processes
> (such as a wayland proxy).  That would provide very similar
> functionality (and thereby create the same loophole).
>
> > VIRTIO recently gained a shared memory resource concept for access to
> > host memory.  It is being used in virtio-pmem and virtio-fs (and
> > virtio-gpu?).
>
> virtio-gpu is in progress still unfortunately (all kinds of fixes for
> the qemu drm drivers and virtio-gpu guest driver refactoring kept me
> busy for quite a while ...).
>
> > If another flavor of shared memory is required it can be
> > added to the spec and new VIRTIO device types can use it.  But it's not
> > clear why this should be its own device.
>
> This is not about host memory, buffers are in guest ram, everything else
> would make sharing those buffers between drivers inside the guest (as
> dma-buf) quite difficult.

I wonder if we're not forgetting about the main reason we ended up
with all this chaos - the host-allocated buffers. ;)

Do we really have an issue with sharing guest memory between different
virtio devices? Each of those devices could just accept a scatterlist
of guest pages and import that memory to whatever host component it's
backed by.
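
To make that concrete, a request carrying such a scatterlist could be
as simple as the sketch below (the field names are made up here for
illustration, this is not from any existing spec):

#include <linux/types.h>

struct scatter_entry {
        __le64 gpa;        /* guest physical address of one contiguous range */
        __le32 length;     /* length of the range in bytes */
        __le32 padding;
};

struct share_buffer_req {
        __le32 nr_entries;               /* number of ranges that follow */
        __le32 padding;
        struct scatter_entry entries[];  /* placed in the same descriptor chain */
};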

The case that really needs some support from VIRTIO is when the
buffers are allocated in the host. Sharing buffers from virtio-gpu
with a virtio video decoder or Wayland (be it a dedicated virtio
device or vsock) are some of the examples.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 22:28     ` Geoffrey McRae
  2019-11-07  6:48       ` Gerd Hoffmann
@ 2019-11-20 12:13       ` Tomasz Figa
  2019-11-20 21:41         ` Geoffrey McRae
  1 sibling, 1 reply; 51+ messages in thread
From: Tomasz Figa @ 2019-11-20 12:13 UTC (permalink / raw)
  To: Geoffrey McRae
  Cc: Gerd Hoffmann, David Stevens, Keiichi Watanabe, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

Hi Geoffrey,

On Thu, Nov 7, 2019 at 7:28 AM Geoffrey McRae <geoff@hostfission.com> wrote:
>
>
>
> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> >> > (1) The virtio device
> >> > =====================
> >> >
> >> > Has a single virtio queue, so the guest can send commands to register
> >> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> >> > has a list of memory ranges for the data. Each buffer also has some
> >>
> >> Allocating from guest ram would work most of the time, but I think
> >> it's insufficient for many use cases. It doesn't really support things
> >> such as contiguous allocations, allocations from carveouts or <4GB,
> >> protected buffers, etc.
> >
> > If there are additional constrains (due to gpu hardware I guess)
> > I think it is better to leave the buffer allocation to virtio-gpu.
>
> The entire point of this for our purposes is due to the fact that we can
> not allocate the buffer, it's either provided by the GPU driver or
> DirectX. If virtio-gpu were to allocate the buffer we might as well
> forget
> all this and continue using the ivshmem device.

I don't understand why virtio-gpu couldn't allocate those buffers.
Allocation doesn't necessarily mean creating new memory. Since the
virtio-gpu device on the host talks to the GPU driver (or DirectX?),
why couldn't it return one of the buffers provided by those if
BIND_SCANOUT is requested?

>
> Our use case is niche, and the state of things may change if vendors
> like
> AMD follow through with their promises and give us SR-IOV on consumer
> GPUs, but even then we would still need their support to achieve the
> same
> results as the same issue would still be present.
>
> Also don't forget that QEMU already has a non virtio generic device
> (IVSHMEM). The only difference is, this device doesn't allow us to
> attain
> zero-copy transfers.
>
> Currently IVSHMEM is used by two projects that I am aware of, Looking
> Glass and SCREAM. While Looking Glass is solving a problem that is out
> of
> scope for QEMU, SCREAM is working around the audio problems in QEMU that
> have been present for years now.
>
> While I don't agree with SCREAM being used this way (we really need a
> virtio-sound device, and/or intel-hda needs to be fixed), it again is an
> example of working around bugs/faults/limitations in QEMU by those of us
> that are unable to fix them ourselves and seem to have low priority to
> the
> QEMU project.
>
> What we are trying to attain is freedom from dual boot Linux/Windows
> systems, not migrate-able enterprise VPS configurations. The Looking
> Glass project has brought attention to several other bugs/problems in
> QEMU, some of which were fixed as a direct result of this project (i8042
> race, AMD NPT).
>
> Unless there is another solution to getting the guest GPUs frame-buffer
> back to the host, a device like this will always be required. Since the
> landscape could change at any moment, this device should not be a LG
> specific device, but rather a generic device to allow for other
> workarounds like LG to be developed in the future should they be
> required.
>
> Is it optimal? no
> Is there a better solution? not that I am aware of
>
> >
> > virtio-gpu can't do that right now, but we have to improve virtio-gpu
> > memory management for vulkan support anyway.
> >
> >> > properties to carry metadata, some fixed (id, size, application), but
> >>
> >> What exactly do you mean by application?
> >
> > Basically some way to group buffers.  A wayland proxy for example would
> > add a "application=wayland-proxy" tag to the buffers it creates in the
> > guest, and the host side part of the proxy could ask qemu (or another
> > vmm) to notify about all buffers with that tag.  So in case multiple
> > applications are using the device in parallel they don't interfere with
> > each other.
> >
> >> > also allow free form (name = value, framebuffers would have
> >> > width/height/stride/format for example).
> >>
> >> Is this approach expected to handle allocating buffers with
> >> hardware-specific constraints such as stride/height alignment or
> >> tiling? Or would there need to be some alternative channel for
> >> determining those values and then calculating the appropriate buffer
> >> size?
> >
> > No parameter negotiation.
> >
> > cheers,
> >   Gerd

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-20 12:13       ` Tomasz Figa
@ 2019-11-20 21:41         ` Geoffrey McRae
  2019-11-21  5:51           ` Tomasz Figa
  0 siblings, 1 reply; 51+ messages in thread
From: Geoffrey McRae @ 2019-11-20 21:41 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Gerd Hoffmann, David Stevens, Keiichi Watanabe, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel



On 2019-11-20 23:13, Tomasz Figa wrote:
> Hi Geoffrey,
> 
> On Thu, Nov 7, 2019 at 7:28 AM Geoffrey McRae <geoff@hostfission.com> 
> wrote:
>> 
>> 
>> 
>> On 2019-11-06 23:41, Gerd Hoffmann wrote:
>> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
>> >> > (1) The virtio device
>> >> > =====================
>> >> >
>> >> > Has a single virtio queue, so the guest can send commands to register
>> >> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
>> >> > has a list of memory ranges for the data. Each buffer also has some
>> >>
>> >> Allocating from guest ram would work most of the time, but I think
>> >> it's insufficient for many use cases. It doesn't really support things
>> >> such as contiguous allocations, allocations from carveouts or <4GB,
>> >> protected buffers, etc.
>> >
>> > If there are additional constrains (due to gpu hardware I guess)
>> > I think it is better to leave the buffer allocation to virtio-gpu.
>> 
>> The entire point of this for our purposes is due to the fact that we 
>> can
>> not allocate the buffer, it's either provided by the GPU driver or
>> DirectX. If virtio-gpu were to allocate the buffer we might as well
>> forget
>> all this and continue using the ivshmem device.
> 
> I don't understand why virtio-gpu couldn't allocate those buffers.
> Allocation doesn't necessarily mean creating new memory. Since the
> virtio-gpu device on the host talks to the GPU driver (or DirectX?),
> why couldn't it return one of the buffers provided by those if
> BIND_SCANOUT is requested?
> 

Because in our application we are a user-mode application in windows
that is provided with buffers that were allocated by the video stack in
windows. We are not using a virtual GPU but a physical GPU via vfio
passthrough and as such we are limited in what we can do. Unless I have
completely missed what virtio-gpu does, from what I understand it's
attempting to be a virtual GPU in its own right, which is not at all
suitable for our requirements.

This discussion seems to have moved away completely from the original
simple feature we need, which is to share a random block of guest
allocated ram with the host. While it would be nice if it's contiguous
ram, it's not an issue if it's not, and with udmabuf (now that I understand
it) it can be made to appear contiguous if so desired anyway.
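
For reference, the udmabuf flow is roughly the sketch below (error
handling omitted; in a vmm the memfd would be the one already backing
guest ram, with offset/size selecting the shared pages, rather than a
freshly created one). IIRC there is also a UDMABUF_CREATE_LIST variant
that stitches several ranges into one dma-buf, which is what makes a
non-contiguous guest buffer look contiguous on the host side.

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/udmabuf.h>

int make_udmabuf(size_t size)
{
        /* udmabuf wants a sealed memfd as backing storage */
        int memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING);
        ftruncate(memfd, size);
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        struct udmabuf_create create = {
                .memfd  = memfd,
                .flags  = UDMABUF_FLAGS_CLOEXEC,
                .offset = 0,
                .size   = size,          /* must be page aligned */
        };

        int devfd = open("/dev/udmabuf", O_RDWR);
        int buffd = ioctl(devfd, UDMABUF_CREATE, &create);  /* dma-buf fd */
        close(devfd);
        return buffd;   /* hand this to whoever imports dma-bufs on the host */
}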

vhost-user could be used for this if it is fixed to allow dynamic
remapping, all the other bells and whistles that are virtio-gpu are
useless to us.

>> 
>> Our use case is niche, and the state of things may change if vendors
>> like
>> AMD follow through with their promises and give us SR-IOV on consumer
>> GPUs, but even then we would still need their support to achieve the
>> same
>> results as the same issue would still be present.
>> 
>> Also don't forget that QEMU already has a non virtio generic device
>> (IVSHMEM). The only difference is, this device doesn't allow us to
>> attain
>> zero-copy transfers.
>> 
>> Currently IVSHMEM is used by two projects that I am aware of, Looking
>> Glass and SCREAM. While Looking Glass is solving a problem that is out
>> of
>> scope for QEMU, SCREAM is working around the audio problems in QEMU 
>> that
>> have been present for years now.
>> 
>> While I don't agree with SCREAM being used this way (we really need a
>> virtio-sound device, and/or intel-hda needs to be fixed), it again is 
>> an
>> example of working around bugs/faults/limitations in QEMU by those of 
>> us
>> that are unable to fix them ourselves and seem to have low priority to
>> the
>> QEMU project.
>> 
>> What we are trying to attain is freedom from dual boot Linux/Windows
>> systems, not migrate-able enterprise VPS configurations. The Looking
>> Glass project has brought attention to several other bugs/problems in
>> QEMU, some of which were fixed as a direct result of this project 
>> (i8042
>> race, AMD NPT).
>> 
>> Unless there is another solution to getting the guest GPUs 
>> frame-buffer
>> back to the host, a device like this will always be required. Since 
>> the
>> landscape could change at any moment, this device should not be a LG
>> specific device, but rather a generic device to allow for other
>> workarounds like LG to be developed in the future should they be
>> required.
>> 
>> Is it optimal? no
>> Is there a better solution? not that I am aware of
>> 
>> >
>> > virtio-gpu can't do that right now, but we have to improve virtio-gpu
>> > memory management for vulkan support anyway.
>> >
>> >> > properties to carry metadata, some fixed (id, size, application), but
>> >>
>> >> What exactly do you mean by application?
>> >
>> > Basically some way to group buffers.  A wayland proxy for example would
>> > add a "application=wayland-proxy" tag to the buffers it creates in the
>> > guest, and the host side part of the proxy could ask qemu (or another
>> > vmm) to notify about all buffers with that tag.  So in case multiple
>> > applications are using the device in parallel they don't interfere with
>> > each other.
>> >
>> >> > also allow free form (name = value, framebuffers would have
>> >> > width/height/stride/format for example).
>> >>
>> >> Is this approach expected to handle allocating buffers with
>> >> hardware-specific constraints such as stride/height alignment or
>> >> tiling? Or would there need to be some alternative channel for
>> >> determining those values and then calculating the appropriate buffer
>> >> size?
>> >
>> > No parameter negotiation.
>> >
>> > cheers,
>> >   Gerd

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-20 21:41         ` Geoffrey McRae
@ 2019-11-21  5:51           ` Tomasz Figa
  2019-12-04 22:22             ` Dylan Reid
  0 siblings, 1 reply; 51+ messages in thread
From: Tomasz Figa @ 2019-11-21  5:51 UTC (permalink / raw)
  To: Geoffrey McRae
  Cc: Gerd Hoffmann, David Stevens, Keiichi Watanabe, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

On Thu, Nov 21, 2019 at 6:41 AM Geoffrey McRae <geoff@hostfission.com> wrote:
>
>
>
> On 2019-11-20 23:13, Tomasz Figa wrote:
> > Hi Geoffrey,
> >
> > On Thu, Nov 7, 2019 at 7:28 AM Geoffrey McRae <geoff@hostfission.com>
> > wrote:
> >>
> >>
> >>
> >> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> >> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> >> >> > (1) The virtio device
> >> >> > =====================
> >> >> >
> >> >> > Has a single virtio queue, so the guest can send commands to register
> >> >> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> >> >> > has a list of memory ranges for the data. Each buffer also has some
> >> >>
> >> >> Allocating from guest ram would work most of the time, but I think
> >> >> it's insufficient for many use cases. It doesn't really support things
> >> >> such as contiguous allocations, allocations from carveouts or <4GB,
> >> >> protected buffers, etc.
> >> >
> >> > If there are additional constrains (due to gpu hardware I guess)
> >> > I think it is better to leave the buffer allocation to virtio-gpu.
> >>
> >> The entire point of this for our purposes is due to the fact that we
> >> can
> >> not allocate the buffer, it's either provided by the GPU driver or
> >> DirectX. If virtio-gpu were to allocate the buffer we might as well
> >> forget
> >> all this and continue using the ivshmem device.
> >
> > I don't understand why virtio-gpu couldn't allocate those buffers.
> > Allocation doesn't necessarily mean creating new memory. Since the
> > virtio-gpu device on the host talks to the GPU driver (or DirectX?),
> > why couldn't it return one of the buffers provided by those if
> > BIND_SCANOUT is requested?
> >
>
> Because in our application we are a user-mode application in windows
> that is provided with buffers that were allocated by the video stack in
> windows. We are not using a virtual GPU but a physical GPU via vfio
> passthrough and as such we are limited in what we can do. Unless I have
> completely missed what virtio-gpu does, from what I understand it's
> attempting to be a virtual GPU in its own right, which is not at all
> suitable for our requirements.

Not necessarily. virtio-gpu in its basic shape is an interface for
allocating frame buffers and sending them to the host to display.

It sounds to me like a PRIME-based setup similar to how integrated +
discrete GPUs are handled on regular systems could work for you. The
virtio-gpu device would be used like the integrated GPU that basically
just drives the virtual screen. The guest component that controls the
display of the guest (typically some sort of a compositor) would
allocate the frame buffers using virtio-gpu and then import those to
the vfio GPU when using it for compositing the parts of the screen.
The parts of the screen themselves would be rendered beforehand by
applications into local buffers managed fully by the vfio GPU, so
there wouldn't be any need to involve virtio-gpu there. Only the
compositor would have to be aware of it.
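
A rough sketch of that flow with libdrm, assuming the compositor has
already created a buffer object on virtio-gpu (virtgpu_fd / handle) and
has the passed-through GPU open as render_fd; error handling trimmed:

#include <stdint.h>
#include <xf86drm.h>

int share_scanout(int virtgpu_fd, uint32_t virtgpu_handle, int render_fd)
{
        int prime_fd;
        uint32_t imported_handle;

        /* export: virtio-gpu buffer object -> dma-buf fd */
        if (drmPrimeHandleToFD(virtgpu_fd, virtgpu_handle, DRM_CLOEXEC,
                               &prime_fd))
                return -1;

        /* import: dma-buf fd -> handle usable on the vfio GPU */
        if (drmPrimeFDToHandle(render_fd, prime_fd, &imported_handle))
                return -1;

        /* the compositor now composites into imported_handle and flips
         * the virtio-gpu buffer to the virtual display */
        return 0;
}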

Of course if your guest is not Linux, I have no idea if that can be
handled in any reasonable way. I know those integrated + discrete GPU
setups do work on Windows, but things are obviously 100% proprietary,
so I don't know if one could make them work with virtio-gpu as the
integrated GPU.

>
> This discussion seems to have moved away completely from the original
> simple feature we need, which is to share a random block of guest
> allocated ram with the host. While it would be nice if it's contiguous
> ram, it's not an issue if it's not, and with udmabuf (now I understand
> it) it can be made to appear contigous if it is so desired anyway.
>
> vhost-user could be used for this if it is fixed to allow dynamic
> remapping, all the other bells and whistles that are virtio-gpu are
> useless to us.
>

As far as I followed the thread, my impression is that we don't want
to have an ad-hoc interface just for sending memory to the host. The
thread was started to look for a way to create identifiers for guest
memory, which proper virtio devices could use to refer to the memory
within requests sent to the host.

That said, I'm not really sure if there is any benefit of making it
anything other than just the specific virtio protocol accepting a
scatterlist of guest pages directly.

Putting aside the ability to obtain the shared memory itself, how do you
trigger a copy from the guest frame buffer to the shared memory?

> >>
> >> Our use case is niche, and the state of things may change if vendors
> >> like
> >> AMD follow through with their promises and give us SR-IOV on consumer
> >> GPUs, but even then we would still need their support to achieve the
> >> same
> >> results as the same issue would still be present.
> >>
> >> Also don't forget that QEMU already has a non virtio generic device
> >> (IVSHMEM). The only difference is, this device doesn't allow us to
> >> attain
> >> zero-copy transfers.
> >>
> >> Currently IVSHMEM is used by two projects that I am aware of, Looking
> >> Glass and SCREAM. While Looking Glass is solving a problem that is out
> >> of
> >> scope for QEMU, SCREAM is working around the audio problems in QEMU
> >> that
> >> have been present for years now.
> >>
> >> While I don't agree with SCREAM being used this way (we really need a
> >> virtio-sound device, and/or intel-hda needs to be fixed), it again is
> >> an
> >> example of working around bugs/faults/limitations in QEMU by those of
> >> us
> >> that are unable to fix them ourselves and seem to have low priority to
> >> the
> >> QEMU project.
> >>
> >> What we are trying to attain is freedom from dual boot Linux/Windows
> >> systems, not migrate-able enterprise VPS configurations. The Looking
> >> Glass project has brought attention to several other bugs/problems in
> >> QEMU, some of which were fixed as a direct result of this project
> >> (i8042
> >> race, AMD NPT).
> >>
> >> Unless there is another solution to getting the guest GPUs
> >> frame-buffer
> >> back to the host, a device like this will always be required. Since
> >> the
> >> landscape could change at any moment, this device should not be a LG
> >> specific device, but rather a generic device to allow for other
> >> workarounds like LG to be developed in the future should they be
> >> required.
> >>
> >> Is it optimal? no
> >> Is there a better solution? not that I am aware of
> >>
> >> >
> >> > virtio-gpu can't do that right now, but we have to improve virtio-gpu
> >> > memory management for vulkan support anyway.
> >> >
> >> >> > properties to carry metadata, some fixed (id, size, application), but
> >> >>
> >> >> What exactly do you mean by application?
> >> >
> >> > Basically some way to group buffers.  A wayland proxy for example would
> >> > add a "application=wayland-proxy" tag to the buffers it creates in the
> >> > guest, and the host side part of the proxy could ask qemu (or another
> >> > vmm) to notify about all buffers with that tag.  So in case multiple
> >> > applications are using the device in parallel they don't interfere with
> >> > each other.
> >> >
> >> >> > also allow free form (name = value, framebuffers would have
> >> >> > width/height/stride/format for example).
> >> >>
> >> >> Is this approach expected to handle allocating buffers with
> >> >> hardware-specific constraints such as stride/height alignment or
> >> >> tiling? Or would there need to be some alternative channel for
> >> >> determining those values and then calculating the appropriate buffer
> >> >> size?
> >> >
> >> > No parameter negotiation.
> >> >
> >> > cheers,
> >> >   Gerd

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-20  9:53               ` Gerd Hoffmann
@ 2019-11-25 16:46                 ` Liam Girdwood
  2019-11-27  7:58                   ` Gerd Hoffmann
  0 siblings, 1 reply; 51+ messages in thread
From: Liam Girdwood @ 2019-11-25 16:46 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Gurchetan Singh, David Stevens, Stefan Hajnoczi,
	Keiichi Watanabe, geoff, virtio-dev, Alex Lau, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

On Wed, 2019-11-20 at 10:53 +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > > > DSP FW has no access to userspace so we would need some
> > > > additional
> > > > API
> > > > on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?
> > > 
> > > The dma-buf api currently can share guest memory sg-lists.
> > 
> > Ok, IIUC buffers can either be shared using the GPU proposed APIs
> > (above) or using the dma-buf API to share via userspace ? My
> > preference
> > would be to use teh more direct GPU APIs sending physical page
> > addresses from Guest to device driver. I guess this is your use
> > case
> > too ?
> 
> I'm not convinced this is useful for audio ...
> 
> I basically see two modes of operation which are useful:
> 
>   (1) send audio data via virtqueue.
>   (2) map host audio buffers into the guest address space.
> 
> The audio driver api (i.e. alsa) typically allows to mmap() the audio
> data buffers, so it is the host audio driver which handles the
> allocation. 

Yes, in regular non-VM mode, it's the host driver which allocates the
buffers.

My end goal is to be able to share physical SG pages from host to
guests and HW (including DSP firmwares). 

>  Let the audio hardware dma from/to userspace-allocated
> buffers is not possible[1], but we would need that to allow qemu (or
> other vmms) use guest-allocated buffers.

My misunderstanding here was about how the various proposals being
discussed pass buffers between guests & host. I'm reading that some are
passing buffers via userspace descriptors, and this would not be
workable for audio.

> 
> cheers,
>   Gerd
> 
> [1] Disclaimer: It's been a while I looked at alsa more closely, so
>     there is a chance this might have changed without /me noticing.
> 

You're all good here from the audio side. Disclaimer: I'm new to virtio.

Liam 



^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-25 16:46                 ` Liam Girdwood
@ 2019-11-27  7:58                   ` Gerd Hoffmann
  0 siblings, 0 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-11-27  7:58 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: Gurchetan Singh, David Stevens, Stefan Hajnoczi,
	Keiichi Watanabe, geoff, virtio-dev, Alex Lau, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Hans Verkuil, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Dmitry Morozov,
	Pawel Osciak, Linux Media Mailing List

> > I'm not convinced this is useful for audio ...
> > 
> > I basically see two modes of operation which are useful:
> > 
> >   (1) send audio data via virtqueue.
> >   (2) map host audio buffers into the guest address space.
> > 
> > The audio driver api (i.e. alsa) typically allows to mmap() the audio
> > data buffers, so it is the host audio driver which handles the
> > allocation. 
> 
> Yes, in regular non VM mode, it's the host driver which allocs the
> buffers.
> 
> My end goal is to be able to share physical SG pages from host to
> guests and HW (including DSP firmwares). 

Yep.  So the host driver would allocate the pages, in a way that the hw
can access them of course.  qemu (or another vmm) would mmap() those
buffer pages, using the usual sound app interface, which would be alsa
on linux.

Virtio got support for shared memory recently (it is in the version 1.2
draft), virtio-pci transport uses a pci bar for the shared memory
regions.  qemu (or other vmms) can use that to map the buffer pages into
guest address space.
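
On the guest side a driver would then just need to find out where the
region shows up and map it; something along the lines of the sketch
below, assuming a per-transport helper that reports { addr, len } for a
given shmid (the helper name here is an assumption, not an existing
API):

#include <linux/virtio_config.h>
#include <linux/io.h>

static void *map_host_audio_buffers(struct virtio_device *vdev, u8 shmid)
{
        struct virtio_shm_region region;

        if (!virtio_get_shm_region(vdev, &region, shmid))  /* assumed helper */
                return NULL;

        /* map the host-allocated pages so the guest driver can mmap()
         * them further to userspace */
        return devm_memremap(&vdev->dev, region.addr, region.len,
                             MEMREMAP_WB);
}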

There are plans use shared memory in virtio-gpu too, for pretty much the
same reasons.  Some kinds of gpu buffers must be allocated by the host
gpu driver, to make sure the host hardware can use the buffers as
intended.

> >  Let the audio hardware dma from/to userspace-allocated
> > buffers is not possible[1], but we would need that to allow qemu (or
> > other vmms) use guest-allocated buffers.
> 
> My misunderstanding here on how the various proposals being discussed
> all pass buffers between guests & host. I'm reading that some are
> passing buffers via userspace descriptors and this would not be
> workable for audio.

Yep, dma-buf based buffer passing doesn't help much for audio.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-21  5:51           ` Tomasz Figa
@ 2019-12-04 22:22             ` Dylan Reid
  2019-12-11  5:08               ` David Stevens
  0 siblings, 1 reply; 51+ messages in thread
From: Dylan Reid @ 2019-12-04 22:22 UTC (permalink / raw)
  To: Tomasz Figa, Zach Reizner
  Cc: Geoffrey McRae, Gerd Hoffmann, David Stevens, Keiichi Watanabe,
	Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

On Thu, Nov 21, 2019 at 4:59 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Thu, Nov 21, 2019 at 6:41 AM Geoffrey McRae <geoff@hostfission.com> wrote:
> >
> >
> >
> > On 2019-11-20 23:13, Tomasz Figa wrote:
> > > Hi Geoffrey,
> > >
> > > On Thu, Nov 7, 2019 at 7:28 AM Geoffrey McRae <geoff@hostfission.com>
> > > wrote:
> > >>
> > >>
> > >>
> > >> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > >> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > >> >> > (1) The virtio device
> > >> >> > =====================
> > >> >> >
> > >> >> > Has a single virtio queue, so the guest can send commands to register
> > >> >> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> > >> >> > has a list of memory ranges for the data. Each buffer also has some
> > >> >>
> > >> >> Allocating from guest ram would work most of the time, but I think
> > >> >> it's insufficient for many use cases. It doesn't really support things
> > >> >> such as contiguous allocations, allocations from carveouts or <4GB,
> > >> >> protected buffers, etc.
> > >> >
> > >> > If there are additional constrains (due to gpu hardware I guess)
> > >> > I think it is better to leave the buffer allocation to virtio-gpu.
> > >>
> > >> The entire point of this for our purposes is due to the fact that we
> > >> can
> > >> not allocate the buffer, it's either provided by the GPU driver or
> > >> DirectX. If virtio-gpu were to allocate the buffer we might as well
> > >> forget
> > >> all this and continue using the ivshmem device.
> > >
> > > I don't understand why virtio-gpu couldn't allocate those buffers.
> > > Allocation doesn't necessarily mean creating new memory. Since the
> > > virtio-gpu device on the host talks to the GPU driver (or DirectX?),
> > > why couldn't it return one of the buffers provided by those if
> > > BIND_SCANOUT is requested?
> > >
> >
> > Because in our application we are a user-mode application in windows
> > that is provided with buffers that were allocated by the video stack in
> > windows. We are not using a virtual GPU but a physical GPU via vfio
> > passthrough and as such we are limited in what we can do. Unless I have
> > completely missed what virtio-gpu does, from what I understand it's
> > attempting to be a virtual GPU in its own right, which is not at all
> > suitable for our requirements.
>
> Not necessarily. virtio-gpu in its basic shape is an interface for
> allocating frame buffers and sending them to the host to display.
>
> It sounds to me like a PRIME-based setup similar to how integrated +
> discrete GPUs are handled on regular systems could work for you. The
> virtio-gpu device would be used like the integrated GPU that basically
> just drives the virtual screen. The guest component that controls the
> display of the guest (typically some sort of a compositor) would
> allocate the frame buffers using virtio-gpu and then import those to
> the vfio GPU when using it for compositing the parts of the screen.
> The parts of the screen themselves would be rendered beforehand by
> applications into local buffers managed fully by the vfio GPU, so
> there wouldn't be any need to involve virtio-gpu there. Only the
> compositor would have to be aware of it.
>
> Of course if your guest is not Linux, I have no idea if that can be
> handled in any reasonable way. I know those integrated + discrete GPU
> setups do work on Windows, but things are obviously 100% proprietary,
> so I don't know if one could make them work with virtio-gpu as the
> integrated GPU.
>
> >
> > This discussion seems to have moved away completely from the original
> > simple feature we need, which is to share a random block of guest
> > allocated ram with the host. While it would be nice if it's contiguous
> > ram, it's not an issue if it's not, and with udmabuf (now I understand
> > it) it can be made to appear contigous if it is so desired anyway.
> >
> > vhost-user could be used for this if it is fixed to allow dynamic
> > remapping, all the other bells and whistles that are virtio-gpu are
> > useless to us.
> >
>
> As far as I followed the thread, my impression is that we don't want
> to have an ad-hoc interface just for sending memory to the host. The
> thread was started to look for a way to create identifiers for guest
> memory, which proper virtio devices could use to refer to the memory
> within requests sent to the host.
>
> That said, I'm not really sure if there is any benefit of making it
> anything other than just the specific virtio protocol accepting
> scatterlist of guest pages directly.
>
> Putting the ability to obtain the shared memory itself, how do you
> trigger a copy from the guest frame buffer to the shared memory?

Adding Zach for more background on virtio-wl particular use cases.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-12-04 22:22             ` Dylan Reid
@ 2019-12-11  5:08               ` David Stevens
  2019-12-11  9:26                 ` Gerd Hoffmann
  0 siblings, 1 reply; 51+ messages in thread
From: David Stevens @ 2019-12-11  5:08 UTC (permalink / raw)
  To: Dylan Reid
  Cc: Tomasz Figa, Zach Reizner, Geoffrey McRae, Gerd Hoffmann,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

There are three issues being discussed here that aren't being clearly
delineated: sharing guest allocated memory with the host, sharing host
allocated memory with the guest, and sharing buffers between devices.

Right now, guest allocated memory can be shared with the host through
the virtqueues or by passing a scatterlist in the virtio payload (i.e.
what virtio-gpu does). Host memory can be shared with the guest using
the new shared memory regions. As far as I can tell, these mechanisms
should be sufficient for sharing memory between the guest and host and
vice versa.
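
For reference, the scatterlist-in-the-payload variant is roughly what
virtio-gpu already does with ATTACH_BACKING (quoted from memory, see the
spec / linux uapi headers for the authoritative layout):

struct virtio_gpu_mem_entry {
        __le64 addr;       /* guest physical address of one range */
        __le32 length;
        __le32 padding;
};

struct virtio_gpu_resource_attach_backing {
        struct virtio_gpu_ctrl_hdr hdr;  /* RESOURCE_ATTACH_BACKING */
        __le32 resource_id;
        __le32 nr_entries;
        /* followed by nr_entries * struct virtio_gpu_mem_entry */
};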

Where things are not sufficient is when we talk about sharing buffers
between devices. For starters, a 'buffer' as we're discussing here is
not something that is currently defined by the virtio spec. The
original proposal defines a buffer as a generic object that is guest
ram+id+metadata, and is created by a special buffer allocation device.
With this approach, buffers can be cleanly shared between devices.

An alternative that Tomasz suggested would be to avoid defining a
generic buffer object, and instead state that the scatterlist which
virtio-gpu currently uses is the 'correct' way for virtio device
protocols to define buffers. With this approach, sharing buffers
between devices potentially requires the host to map different
scatterlists back to a consistent representation of a buffer.

None of the proposals directly address the use case of sharing host
allocated buffers between devices, but I think they can be extended to
support it. Host buffers can be identified by the following tuple:
(transport type enum, transport specific device address, shmid,
offset). I think this is sufficient even for host-allocated buffers
that aren't visible to the guest (e.g. protected memory, vram), since
they can still be given address space in some shared memory region,
even if those addresses are actually inaccessible to the guest. At
this point, the host buffer identifier can simply be passed in place
of the guest ram scatterlist with either proposed buffer sharing
mechanism.
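
A possible wire encoding of that tuple, purely for illustration (not
from any spec):

#include <linux/types.h>

struct host_buffer_id {
        __le32 transport_type;    /* pci/mmio/ccw/... enum */
        __le32 device_address;    /* transport specific, e.g. pci bus/dev/fn */
        __u8   shmid;             /* shared memory region within that device */
        __u8   padding[7];
        __le64 offset;            /* offset of the buffer within the region */
};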

I think the main question here is whether or not the complexity of
generic buffers and a buffer sharing device is worth it compared to
the more implicit definition of buffers. Personally, I lean towards
the implicit definition of buffers, since a buffer sharing device
brings a lot of complexity and there aren't any clear clients of the
buffer metadata feature.

Cheers,
David

On Thu, Dec 5, 2019 at 7:22 AM Dylan Reid <dgreid@chromium.org> wrote:
>
> On Thu, Nov 21, 2019 at 4:59 PM Tomasz Figa <tfiga@chromium.org> wrote:
> >
> > On Thu, Nov 21, 2019 at 6:41 AM Geoffrey McRae <geoff@hostfission.com> wrote:
> > >
> > >
> > >
> > > On 2019-11-20 23:13, Tomasz Figa wrote:
> > > > Hi Geoffrey,
> > > >
> > > > On Thu, Nov 7, 2019 at 7:28 AM Geoffrey McRae <geoff@hostfission.com>
> > > > wrote:
> > > >>
> > > >>
> > > >>
> > > >> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > > >> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > > >> >> > (1) The virtio device
> > > >> >> > =====================
> > > >> >> >
> > > >> >> > Has a single virtio queue, so the guest can send commands to register
> > > >> >> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> > > >> >> > has a list of memory ranges for the data. Each buffer also has some
> > > >> >>
> > > >> >> Allocating from guest ram would work most of the time, but I think
> > > >> >> it's insufficient for many use cases. It doesn't really support things
> > > >> >> such as contiguous allocations, allocations from carveouts or <4GB,
> > > >> >> protected buffers, etc.
> > > >> >
> > > >> > If there are additional constrains (due to gpu hardware I guess)
> > > >> > I think it is better to leave the buffer allocation to virtio-gpu.
> > > >>
> > > >> The entire point of this for our purposes is due to the fact that we
> > > >> can
> > > >> not allocate the buffer, it's either provided by the GPU driver or
> > > >> DirectX. If virtio-gpu were to allocate the buffer we might as well
> > > >> forget
> > > >> all this and continue using the ivshmem device.
> > > >
> > > > I don't understand why virtio-gpu couldn't allocate those buffers.
> > > > Allocation doesn't necessarily mean creating new memory. Since the
> > > > virtio-gpu device on the host talks to the GPU driver (or DirectX?),
> > > > why couldn't it return one of the buffers provided by those if
> > > > BIND_SCANOUT is requested?
> > > >
> > >
> > > Because in our application we are a user-mode application in windows
> > > that is provided with buffers that were allocated by the video stack in
> > > windows. We are not using a virtual GPU but a physical GPU via vfio
> > > passthrough and as such we are limited in what we can do. Unless I have
> > > completely missed what virtio-gpu does, from what I understand it's
> > > attempting to be a virtual GPU in its own right, which is not at all
> > > suitable for our requirements.
> >
> > Not necessarily. virtio-gpu in its basic shape is an interface for
> > allocating frame buffers and sending them to the host to display.
> >
> > It sounds to me like a PRIME-based setup similar to how integrated +
> > discrete GPUs are handled on regular systems could work for you. The
> > virtio-gpu device would be used like the integrated GPU that basically
> > just drives the virtual screen. The guest component that controls the
> > display of the guest (typically some sort of a compositor) would
> > allocate the frame buffers using virtio-gpu and then import those to
> > the vfio GPU when using it for compositing the parts of the screen.
> > The parts of the screen themselves would be rendered beforehand by
> > applications into local buffers managed fully by the vfio GPU, so
> > there wouldn't be any need to involve virtio-gpu there. Only the
> > compositor would have to be aware of it.
> >
> > Of course if your guest is not Linux, I have no idea if that can be
> > handled in any reasonable way. I know those integrated + discrete GPU
> > setups do work on Windows, but things are obviously 100% proprietary,
> > so I don't know if one could make them work with virtio-gpu as the
> > integrated GPU.
> >
> > >
> > > This discussion seems to have moved away completely from the original
> > > simple feature we need, which is to share a random block of guest
> > > allocated ram with the host. While it would be nice if it's contiguous
> > > ram, it's not an issue if it's not, and with udmabuf (now I understand
> > > it) it can be made to appear contigous if it is so desired anyway.
> > >
> > > vhost-user could be used for this if it is fixed to allow dynamic
> > > remapping, all the other bells and whistles that are virtio-gpu are
> > > useless to us.
> > >
> >
> > As far as I followed the thread, my impression is that we don't want
> > to have an ad-hoc interface just for sending memory to the host. The
> > thread was started to look for a way to create identifiers for guest
> > memory, which proper virtio devices could use to refer to the memory
> > within requests sent to the host.
> >
> > That said, I'm not really sure if there is any benefit of making it
> > anything other than just the specific virtio protocol accepting
> > scatterlist of guest pages directly.
> >
> > Putting the ability to obtain the shared memory itself, how do you
> > trigger a copy from the guest frame buffer to the shared memory?
>
> Adding Zach for more background on virtio-wl particular use cases.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: guest / host buffer sharing ...
  2019-12-11  5:08               ` David Stevens
@ 2019-12-11  9:26                 ` Gerd Hoffmann
  2019-12-11 16:05                   ` [virtio-dev] " Enrico Granata
  2019-12-12  6:40                   ` David Stevens
  0 siblings, 2 replies; 51+ messages in thread
From: Gerd Hoffmann @ 2019-12-11  9:26 UTC (permalink / raw)
  To: David Stevens
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

  Hi,

> None of the proposals directly address the use case of sharing host
> allocated buffers between devices, but I think they can be extended to
> support it. Host buffers can be identified by the following tuple:
> (transport type enum, transport specific device address, shmid,
> offset). I think this is sufficient even for host-allocated buffers
> that aren't visible to the guest (e.g. protected memory, vram), since
> they can still be given address space in some shared memory region,
> even if those addresses are actually inaccessible to the guest. At
> this point, the host buffer identifier can simply be passed in place
> of the guest ram scatterlist with either proposed buffer sharing
> mechanism.

> I think the main question here is whether or not the complexity of
> generic buffers and a buffer sharing device is worth it compared to
> the more implicit definition of buffers.

Here are two issues mixed up.  First is whether we'll go define a
buffer sharing device or not.  Second is how we are going to address
buffers.

I think defining (and addressing) buffers implicitly is a bad idea.
First the addressing is non-trivial, especially with the "transport
specific device address" in the tuple.  Second I think it is a bad idea
from the security point of view.  When explicitly exporting buffers it
is easy to restrict access to the actual exports.

Instead of using a dedicated buffer sharing device we can also use
virtio-gpu (or any other driver which supports dma-buf exports) to
manage buffers.  virtio-gpu would create an identifier when exporting a
buffer (dma-buf exports inside the guest), attach the identifier to the
dma-buf so other drivers importing the buffer can see and use it.  Maybe
add an ioctl to query, so guest userspace can query the identifier too.
Also send the identifier to the host so it can also be used on the host
side to lookup and access the buffer.

With no central instance (buffer sharing device) being there managing
the buffer identifiers I think using uuids as identifiers would be a
good idea, to avoid clashes.  Also good for security because it's pretty
much impossible to guess buffer identifiers then.
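
As a hypothetical sketch of the exporting side (none of these names
exist today, this is just to illustrate the flow):

#include <linux/uuid.h>
#include <linux/dma-buf.h>

struct exported_buffer {
        struct dma_buf *dmabuf;
        uuid_t uuid;                    /* generated at export time */
};

static void export_with_uuid(struct exported_buffer *buf)
{
        uuid_gen(&buf->uuid);           /* unguessable identifier */

        /* tell the host about (uuid -> resource) via the control queue,
         * attach the uuid to the dma-buf so importing drivers can send
         * the uuid instead of a scatterlist */
}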

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-11  9:26                 ` Gerd Hoffmann
@ 2019-12-11 16:05                   ` Enrico Granata
  2019-12-12  6:40                   ` David Stevens
  1 sibling, 0 replies; 51+ messages in thread
From: Enrico Granata @ 2019-12-11 16:05 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: David Stevens, Dylan Reid, Tomasz Figa, Zach Reizner,
	Geoffrey McRae, Keiichi Watanabe, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

On Wed, Dec 11, 2019 at 1:26 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > None of the proposals directly address the use case of sharing host
> > allocated buffers between devices, but I think they can be extended to
> > support it. Host buffers can be identified by the following tuple:
> > (transport type enum, transport specific device address, shmid,
> > offset). I think this is sufficient even for host-allocated buffers
> > that aren't visible to the guest (e.g. protected memory, vram), since
> > they can still be given address space in some shared memory region,
> > even if those addresses are actually inaccessible to the guest. At
> > this point, the host buffer identifier can simply be passed in place
> > of the guest ram scatterlist with either proposed buffer sharing
> > mechanism.
>
> > I think the main question here is whether or not the complexity of
> > generic buffers and a buffer sharing device is worth it compared to
> > the more implicit definition of buffers.
>
> Here are two issues mixed up.  First is, whenever we'll go define a
> buffer sharing device or not.  Second is how we are going to address
> buffers.
>
> I think defining (and addressing) buffers implicitly is a bad idea.
> First the addressing is non-trivial, especially with the "transport
> specific device address" in the tuple.  Second I think it is a bad idea
> from the security point of view.  When explicitly exporting buffers it
> is easy to restrict access to the actual exports.
>

Strong +1 to the above. There are definitely use cases of interest
where it makes sense to be able to attach security attributes to
buffers.
Having an explicit interface that can handle all of this, instead of
duplicating logic in several subsystems, seems a worthy endeavor to
me.

> Instead of using a dedicated buffer sharing device we can also use
> virtio-gpu (or any other driver which supports dma-buf exports) to
> manage buffers.  virtio-gpu would create an identifier when exporting a
> buffer (dma-buf exports inside the guest), attach the identifier to the
> dma-buf so other drivers importing the buffer can see and use it.  Maybe
> add an ioctl to query, so guest userspace can query the identifier too.
> Also send the identifier to the host so it can also be used on the host
> side to lookup and access the buffer.
>
> With no central instance (buffer sharing device) being there managing
> the buffer identifiers I think using uuids as identifiers would be a
> good idea, to avoid clashes.  Also good for security because it's pretty
> much impossible to guess buffer identifiers then.
>
> cheers,
>   Gerd
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-11  9:26                 ` Gerd Hoffmann
  2019-12-11 16:05                   ` [virtio-dev] " Enrico Granata
@ 2019-12-12  6:40                   ` David Stevens
  2019-12-12  9:41                     ` Gerd Hoffmann
  1 sibling, 1 reply; 51+ messages in thread
From: David Stevens @ 2019-12-12  6:40 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

> First the addressing is non-trivial, especially with the "transport
> specific device address" in the tuple.

There is complexity here, but I think it would also be present in the
buffer sharing device case. With a buffer sharing device, the same
identifying information would need to be provided from the exporting
driver to the buffer sharing driver, so the buffer sharing device
would be able to identify the right device in the vmm. And then in
both import cases, the buffer is just identified by some opaque bytes
that need to be given to a buffer manager in the vmm to resolve the
actual buffer.

> Second I think it is a bad idea
> from the security point of view.  When explicitly exporting buffers it
> is easy to restrict access to the actual exports.

Restricting access to actual exports could perhaps help catch bugs.
However, I don't think it provides any security guarantees, since the
guest can always just export every buffer before using it. Using
implicit addresses doesn't mean that the buffer import actually has to
be allowed - it can be thought of as fusing the buffer export and
buffer import operations into a single operation. The vmm can still
perform exactly the same security checks.

> Instead of using a dedicated buffer sharing device we can also use
> virtio-gpu (or any other driver which supports dma-buf exports) to
> manage buffers.

I don't think adding generic buffer management to virtio-gpu (or any
specific device type) is a good idea, since that device would then
become a requirement for buffer sharing between unrelated devices. For
example, it's easy to imagine a device with a virtio-camera and a
virtio-encoder (although such protocols don't exist today). It
wouldn't make sense to require a virtio-gpu device to allow those two
devices to share buffers.

> With no central instance (buffer sharing device) being there managing
> the buffer identifiers I think using uuids as identifiers would be a
> good idea, to avoid clashes.  Also good for security because it's pretty
> much impossible to guess buffer identifiers then.

Using uuids to identify buffers would work. The fact that it provides
a single way to refer to both guest and host allocated buffers is
nice. And it could also directly apply to sharing resources other than
buffers (e.g. fences). Although unless we're positing that there are
different levels of trust within the guest, I don't think uuids really
provides much security.

If we're talking about uuids, they could also be used to simplify my
proposed implicit addressing scheme. Each device could be assigned a
uuid, which would simplify the shared resource identifier to
(device-uuid, shmid, offset).
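
Which would boil down to something like this (again just illustrative):

#include <linux/types.h>

struct shared_resource_id {
        __u8   device_uuid[16];   /* uuid assigned to the exporting device */
        __u8   shmid;             /* shared memory region within that device */
        __u8   padding[7];
        __le64 offset;
};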

In my opinion, the implicit buffer addressing scheme is fairly similar
to the uuid proposal. As I see it, the difference is that one is
referring to resources as uuids in a global namespace, whereas the
other is referring to resources with fully qualified names. Beyond
that, the implementations would be fairly similar.

-David

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-12  6:40                   ` David Stevens
@ 2019-12-12  9:41                     ` Gerd Hoffmann
  2019-12-12 12:26                       ` David Stevens
  0 siblings, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-12-12  9:41 UTC (permalink / raw)
  To: David Stevens
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

  Hi,

> > First the addressing is non-trivial, especially with the "transport
> > specific device address" in the tuple.
> 
> There is complexity here, but I think it would also be present in the
> buffer sharing device case. With a buffer sharing device, the same
> identifying information would need to be provided from the exporting
> driver to the buffer sharing driver, so the buffer sharing device
> would be able to identify the right device in the vmm.

No.  The idea is that the buffer sharing device will allocate and manage
the buffers (including identifiers), i.e. it will only export buffers,
never import.

> > Second I think it is a bad idea
> > from the security point of view.  When explicitly exporting buffers it
> > is easy to restrict access to the actual exports.
> 
> Restricting access to actual exports could perhaps help catch bugs.
> However, I don't think it provides any security guarantees, since the
> guest can always just export every buffer before using it.

Probably not on the guest/host boundary.

It's important for security inside the guest though.  You don't want
process A being able to access process B's private resources via buffer
sharing support, by guessing implicit buffer identifiers.

With explicit buffer exports that opportunity doesn't exist in the first
place.  Anything not exported can't be accessed via buffer sharing,
period.  And to access the exported buffers you need to know the uuid,
which in turn allows the guest implement any access restrictions it
wants.

> > Instead of using a dedicated buffer sharing device we can also use
> > virtio-gpu (or any other driver which supports dma-buf exports) to
> > manage buffers.
> 
> I don't think adding generic buffer management to virtio-gpu (or any
> specific device type) is a good idea,

There isn't much to add btw.  virtio-gpu has buffer management, buffers
are called "resources" in virtio-gpu terminology.  You can already
export them as dma-bufs (just landed in 5.5-rc1) and import them into
other drivers.

Without buffer sharing support the driver importing a virtio-gpu dma-buf
can send the buffer scatter list to the host.  So both virtio-gpu and
the other device would actually access the same guest pages, but they
are not aware that the buffer is shared between devices.

With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
the importing driver can send the uuid (instead of the scatter list) to
the host.  So the device can simply lookup the buffer on the host side
and use it directly.  Another advantage is that this enables some more
use cases like sharing buffers between devices which are not backed by
guest ram.
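
On the host side that lookup can be trivial; a sketch of what a vmm
could keep (illustrative only, not actual qemu code):

#include <stddef.h>
#include <string.h>

#define MAX_EXPORTS 64

struct exported_resource {
        unsigned char uuid[16];
        int dmabuf_fd;            /* host dma-buf backing the resource */
};

static struct exported_resource exports[MAX_EXPORTS];

int lookup_export(const unsigned char uuid[16])
{
        for (size_t i = 0; i < MAX_EXPORTS; i++)
                if (memcmp(exports[i].uuid, uuid, 16) == 0)
                        return exports[i].dmabuf_fd;
        return -1;                /* unknown uuid -> reject the request */
}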

> since that device would then
> become a requirement for buffer sharing between unrelated devices.

No.  When we drop the buffer sharing device idea (which is quite
likely), then any device can create buffers.  If virtio-gpu is involved
anyway, for example because you want show the images from the
virtio-camera device on the virtio-gpu display, it makes sense to use
virtio-gpu of course.  But any other device can create and export
buffers in a similar way.  Without a buffer sharing device there is no
central instance managing the buffers.  A virtio-video spec (video
encoder/decoder) is in discussion at the moment, it will probably get
resource management similar to virtio-gpu for the video frames, and it
will be able to export/import those buffers (probably not in the first
revision, but it is on the radar).

> > With no central instance (buffer sharing device) being there managing
> > the buffer identifiers I think using uuids as identifiers would be a
> > good idea, to avoid clashes.  Also good for security because it's pretty
> > much impossible to guess buffer identifiers then.
> 
> Using uuids to identify buffers would work. The fact that it provides
> a single way to refer to both guest and host allocated buffers is
> nice. And it could also directly apply to sharing resources other than
> buffers (e.g. fences). Although unless we're positing that there are
> different levels of trust within the guest, I don't think uuids really
> provides much security.

Well, security-wise you want to have buffer identifiers which can't be
easily guessed.  And guessing a uuid is pretty much impossible due to
the namespace being huge.

> If we're talking about uuids, they could also be used to simplify my
> proposed implicit addressing scheme. Each device could be assigned a
> uuid, which would simplify the shared resource identifier to
> (device-uuid, shmid, offset).

See above for the security aspects of implicit vs. explicit buffer
identifiers.

cheers,
  Gerd


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-12  9:41                     ` Gerd Hoffmann
@ 2019-12-12 12:26                       ` David Stevens
  2019-12-12 13:30                         ` Gerd Hoffmann
  0 siblings, 1 reply; 51+ messages in thread
From: David Stevens @ 2019-12-12 12:26 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

> > > Second I think it is a bad idea
> > > from the security point of view.  When explicitly exporting buffers it
> > > is easy to restrict access to the actual exports.
> >
> > Restricting access to actual exports could perhaps help catch bugs.
> > However, I don't think it provides any security guarantees, since the
> > guest can always just export every buffer before using it.
>
> Probably not on the guest/host boundary.
>
> It's important for security inside the guest though.  You don't want
> process A being able to access process B private resources via buffer
> sharing support, by guessing implicit buffer identifiers.

At least for the linux guest implementation, I wouldn't think the
uuids would be exposed from the kernel. To me, it seems like something
that should be handled internally by the virtio drivers. Especially
since the 'export' process would be very much a virtio-specific
action, so it's likely that it wouldn't fit nicely into existing
userspace software. If you use some other guest with untrusted
userspace drivers, or if you're pulling the uuids out of the kernel to
give to some non-virtio transport, then I can see it being a concern.

> > > Instead of using a dedicated buffer sharing device we can also use
> > > virtio-gpu (or any other driver which supports dma-buf exports) to
> > > manage buffers.

Ah, okay. I misunderstood the original statement. I read the sentence
as 'we can use virtio-gpu in place of the dedicated buffer sharing
device', rather than 'every device can manage its own buffers'. I can
agree with the second meaning.

> Without buffer sharing support the driver importing a virtio-gpu dma-buf
> can send the buffer scatter list to the host.  So both virtio-gpu and
> the other device would actually access the same guest pages, but they
> are not aware that the buffer is shared between devices.

With the uuid approach, how should this case be handled? Should it be
equivalent to exporting and importing the buffer which was created
first? Should the spec say it's undefined behavior that might work as
expected but might not, depending on the device implementation? Does
the spec even need to say anything about it?

> With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
> the importing driver can send the uuid (instead of the scatter list) to
> the host.  So the device can simply lookup the buffer on the host side
> and use it directly.  Another advantage is that this enables some more
> use cases like sharing buffers between devices which are not backed by
> guest ram.

Not just buffers not backed by guest ram, but things like fences. I
would suggest the uuids represent 'exported resources' rather than
'exported buffers'.
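
To make that concrete, the spec-level object a uuid names could carry a
type, something roughly like this (illustrative only, names invented):

  #include <linux/types.h>

  /* a uuid names any exported resource, not just a memory buffer */
  enum exported_resource_type {
          EXPORTED_RESOURCE_BUFFER,
          EXPORTED_RESOURCE_FENCE,
  };

  struct exported_resource {
          __u8   uuid[16];
          __le32 type;
  };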

> Well, security-wise you want to have buffer identifiers which can't be
> easily guessed.  And guessing a uuid is pretty much impossible due to
> the namespace being huge.

I guess this depends on what you're passing around within the guest.
If you're passing around the raw uuids, sure. But I would argue it's
better to pass around unforgeable identifiers (e.g. fds), and to
restrict the uuids to when talking directly to the virtio transport.
But I guess there are likely situations where that's not possible.
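
To be clear about what I mean by unforgeable: within the guest, a
dma-buf fd can be handed to exactly the processes that should see it,
for example over a unix socket with SCM_RIGHTS.  A rough userspace
sketch (error handling trimmed, the socket setup is assumed):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* pass an already-open dma-buf fd to a cooperating process */
  static int send_dmabuf_fd(int sock, int dmabuf_fd)
  {
          char dummy = 0;
          struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
          union {
                  struct cmsghdr align;
                  char buf[CMSG_SPACE(sizeof(int))];
          } u;
          struct msghdr msg = {
                  .msg_iov = &iov,
                  .msg_iovlen = 1,
                  .msg_control = u.buf,
                  .msg_controllen = sizeof(u.buf),
          };
          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

          cmsg->cmsg_level = SOL_SOCKET;
          cmsg->cmsg_type = SCM_RIGHTS;
          cmsg->cmsg_len = CMSG_LEN(sizeof(int));
          memcpy(CMSG_DATA(cmsg), &dmabuf_fd, sizeof(int));

          return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
  }

The receiver gets its own fd referring to the same buffer, and nothing
outside that explicit handoff can name the buffer at all.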

-David


* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-12 12:26                       ` David Stevens
@ 2019-12-12 13:30                         ` Gerd Hoffmann
  2019-12-13  3:21                           ` David Stevens
  0 siblings, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-12-12 13:30 UTC (permalink / raw)
  To: David Stevens
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

On Thu, Dec 12, 2019 at 09:26:32PM +0900, David Stevens wrote:
> > > > Second I think it is a bad idea
> > > > from the security point of view.  When explicitly exporting buffers it
> > > > is easy to restrict access to the actual exports.
> > >
> > > Restricting access to actual exports could perhaps help catch bugs.
> > > However, I don't think it provides any security guarantees, since the
> > > guest can always just export every buffer before using it.
> >
> > Probably not on the guest/host boundary.
> >
> > It's important for security inside the guest though.  You don't want
> > process A being able to access process B's private resources via buffer
> > sharing support, by guessing implicit buffer identifiers.
> 
> At least for the linux guest implementation, I wouldn't think the
> uuids would be exposed from the kernel. To me, it seems like something
> that should be handled internally by the virtio drivers.

That would be one possible use case, yes.  The exporting driver attaches
a uuid to the dma-buf.  The importing driver can see the attached uuid
and use it (if supported, otherwise run with the scatter list).  That
will be transparent to userspace; apps will just export/import dma-bufs
as usual and not even notice the uuid.
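
A rough sketch of what attach/lookup could look like on the guest kernel
side, assuming a simple table keyed by the dma-buf pointer (none of
these helpers exist today, the names are made up; locking and teardown
are simplified):

  #include <linux/dma-buf.h>
  #include <linux/hashtable.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>
  #include <linux/uuid.h>

  struct dmabuf_uuid_entry {
          struct hlist_node node;
          struct dma_buf *dmabuf;
          uuid_t uuid;
  };

  static DEFINE_HASHTABLE(dmabuf_uuids, 8);
  static DEFINE_SPINLOCK(dmabuf_uuids_lock);

  /* exporter: remember which uuid was assigned to this dma-buf */
  static int dmabuf_attach_uuid(struct dma_buf *dmabuf, const uuid_t *uuid)
  {
          struct dmabuf_uuid_entry *e = kzalloc(sizeof(*e), GFP_KERNEL);

          if (!e)
                  return -ENOMEM;
          e->dmabuf = dmabuf;
          uuid_copy(&e->uuid, uuid);
          spin_lock(&dmabuf_uuids_lock);
          hash_add(dmabuf_uuids, &e->node, (unsigned long)dmabuf);
          spin_unlock(&dmabuf_uuids_lock);
          return 0;
  }

  /* importer: look up the uuid, caller falls back to the sglist if absent */
  static bool dmabuf_lookup_uuid(struct dma_buf *dmabuf, uuid_t *uuid)
  {
          struct dmabuf_uuid_entry *e;
          bool found = false;

          spin_lock(&dmabuf_uuids_lock);
          hash_for_each_possible(dmabuf_uuids, e, node, (unsigned long)dmabuf) {
                  if (e->dmabuf == dmabuf) {
                          uuid_copy(uuid, &e->uuid);
                          found = true;
                          break;
                  }
          }
          spin_unlock(&dmabuf_uuids_lock);
          return found;
  }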

I can see other valid use cases though:  A wayland proxy could use
virtio-gpu buffer exports for shared memory and send the buffer uuid
to the host over some stream protocol (vsock, tcp, ...).  For that to
work we have to export the uuid to userspace, for example using an ioctl
on the dma-buf file handle.
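
Purely as an illustration of that ioctl idea (nothing like this exists
in the uapi today; the name, number and struct layout are invented),
the proxy side could look like:

  #include <linux/ioctl.h>
  #include <linux/types.h>
  #include <sys/ioctl.h>

  /* hypothetical uapi: query the uuid attached to an exported dma-buf */
  struct dma_buf_uuid {
          __u8 uuid[16];
  };
  #define DMA_BUF_IOCTL_GET_UUID _IOR('b', 0x20, struct dma_buf_uuid)

  /* wayland proxy: fetch the uuid and forward it over vsock/tcp */
  static int get_buffer_uuid(int dmabuf_fd, struct dma_buf_uuid *out)
  {
          return ioctl(dmabuf_fd, DMA_BUF_IOCTL_GET_UUID, out);
  }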

> If you use some other guest with untrusted
> userspace drivers, or if you're pulling the uuids out of the kernel to
> give to some non-virtio transport, then I can see it being a concern.

I strongly prefer a design where we don't have to worry about that
concern in the first place instead of discussing whether we should be
worried or not.

> > Without buffer sharing support the driver importing a virtio-gpu dma-buf
> > can send the buffer scatter list to the host.  So both virtio-gpu and
> > the other device would actually access the same guest pages, but they
> > are not aware that the buffer is shared between devices.
> 
> With the uuid approach, how should this case be handled? Should it be
> equivalent to exporting and importing the buffer which was created
> first? Should the spec say it's undefined behavior that might work as
> expected but might not, depending on the device implementation? Does
> the spec even need to say anything about it?

Using the uuid is an optional optimization.  I'd expect the workflow to be
roughly this:

  (1) exporting driver exports a dma-buf as usual, additionally attaches
      a uuid to it and notifies the host (using device-specific commands).
  (2) importing driver will ask the host to use the buffer referenced by
      the given uuid.
  (3) if (2) fails for some reason use the dma-buf scatter list instead.

Of course only virtio drivers would try step (2), other drivers (when
sharing buffers between intel gvt device and virtio-gpu for example)
would go straight to (3).
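
In code the importing driver's attach path would be shaped roughly like
this, reusing the dmabuf_lookup_uuid() helper from the sketch above; the
my_virtio_* names are placeholders for whatever device-specific commands
the spec ends up defining:

  #include <linux/dma-buf.h>
  #include <linux/uuid.h>

  struct my_virtio_dev;   /* placeholder for the importing device */

  /* placeholders for the device-specific host commands */
  int my_virtio_attach_by_uuid(struct my_virtio_dev *vdev, const uuid_t *uuid);
  int my_virtio_attach_sglist(struct my_virtio_dev *vdev, struct dma_buf *dmabuf);

  static int my_virtio_use_dmabuf(struct my_virtio_dev *vdev,
                                  struct dma_buf *dmabuf)
  {
          uuid_t uuid;

          /* (2) exporter attached a uuid: ask the host to resolve it */
          if (dmabuf_lookup_uuid(dmabuf, &uuid) &&
              my_virtio_attach_by_uuid(vdev, &uuid) == 0)
                  return 0;       /* host already knows this resource */

          /* (3) fall back to sending the guest scatter list as usual */
          return my_virtio_attach_sglist(vdev, dmabuf);
  }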

> > With buffer sharing virtio-gpu would attach a uuid to the dma-buf, and
> > the importing driver can send the uuid (instead of the scatter list) to
> > the host.  So the device can simply lookup the buffer on the host side
> > and use it directly.  Another advantage is that this enables some more
> > use cases like sharing buffers between devices which are not backed by
> > guest ram.
> 
> Not just buffers not backed by guest ram, but things like fences. I
> would suggest the uuids represent 'exported resources' rather than
> 'exported buffers'.

Hmm, I can't see how this is useful.  Care to outline how you envision
this working in a typical use case?

cheers,
  Gerd



* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-12 13:30                         ` Gerd Hoffmann
@ 2019-12-13  3:21                           ` David Stevens
  2019-12-16 13:47                             ` Gerd Hoffmann
  0 siblings, 1 reply; 51+ messages in thread
From: David Stevens @ 2019-12-13  3:21 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

> > > Without buffer sharing support the driver importing a virtio-gpu dma-buf
> > > can send the buffer scatter list to the host.  So both virtio-gpu and
> > > the other device would actually access the same guest pages, but they
> > > are not aware that the buffer is shared between devices.
> >
> > With the uuid approach, how should this case be handled? Should it be
> > equivalent to exporting and importing the buffer which was created
> > first? Should the spec say it's undefined behavior that might work as
> > expected but might not, depending on the device implementation? Does
> > the spec even need to say anything about it?
>
> Using the uuid is an optional optimization.  I'd expect the workflow to be
> roughly this:
>
>   (1) exporting driver exports a dma-buf as usual, additionally attaches
>       a uuid to it and notifies the host (using device-specific commands).
>   (2) importing driver will ask the host to use the buffer referenced by
>       the given uuid.
>   (3) if (2) fails for some reason use the dma-buf scatter list instead.
>
> Of course only virtio drivers would try step (2), other drivers (when
> sharing buffers between intel gvt device and virtio-gpu for example)
> would go straight to (3).

For virtio-gpu as it is today, it's not clear to me that they're
equivalent. As I read it, the virtio-gpu spec makes a distinction
between the guest memory and the host resource. If virtio-gpu is
communicating with non-virtio devices, then obviously you'd just be
working with guest memory. But if it's communicating with another
virtio device, then there are potentially distinct guest and host
buffers that could be used. The spec shouldn't leave any room for
ambiguity as to how this distinction is handled.

> > Not just buffers not backed by guest ram, but things like fences. I
> > would suggest the uuids represent 'exported resources' rather than
> > 'exported buffers'.
>
> Hmm, I can't see how this is useful.  Care to outline how you envision
> this working in a typical use case?

Looking at the spec again, it seems like there's some more work that
would need to be done before this would be possible. But the use case
I was thinking of would be to export a fence from virtio-gpu and share
it with a virtio decoder, to set up a decode pipeline that doesn't
need to go back into the guest for synchronization. I'm fine dropping
this point for now, though, and revisiting it as a separate proposal.

-David


* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-13  3:21                           ` David Stevens
@ 2019-12-16 13:47                             ` Gerd Hoffmann
  2019-12-17 12:59                               ` David Stevens
  0 siblings, 1 reply; 51+ messages in thread
From: Gerd Hoffmann @ 2019-12-16 13:47 UTC (permalink / raw)
  To: David Stevens
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

  Hi,

> > Of course only virtio drivers would try step (2), other drivers (when
> > sharing buffers between intel gvt device and virtio-gpu for example)
> > would go straight to (3).
> 
> For virtio-gpu as it is today, it's not clear to me that they're
> equivalent. As I read it, the virtio-gpu spec makes a distinction
> between the guest memory and the host resource. If virtio-gpu is
> communicating with non-virtio devices, then obviously you'd just be
> working with guest memory. But if it's communicating with another
> virtio device, then there are potentially distinct guest and host
> buffers that could be used. The spec shouldn't leave any room for
> ambiguity as to how this distinction is handled.

Yep.  It should be the host side buffer.  The whole point is to avoid
the round trip through the guest after all.  Or does someone see a
useful use case for the guest buffer?  If so, we might have to add some
way to explicitly specify whether we want the guest or host buffer.
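
If such a use case does show up, it could be as small as a flag in the
device-specific attach command, e.g. (hypothetical layout, nothing like
this is specified anywhere):

  #include <linux/types.h>

  #define MYDEV_ATTACH_F_HOST_RESOURCE  (1 << 0)  /* default: host buffer */
  #define MYDEV_ATTACH_F_GUEST_PAGES    (1 << 1)  /* explicitly use guest ram */

  struct mydev_attach_by_uuid {
          __u8   uuid[16];
          __le32 flags;
          __le32 padding;
  };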

cheers,
  Gerd



* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-12-16 13:47                             ` Gerd Hoffmann
@ 2019-12-17 12:59                               ` David Stevens
  0 siblings, 0 replies; 51+ messages in thread
From: David Stevens @ 2019-12-17 12:59 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Dylan Reid, Tomasz Figa, Zach Reizner, Geoffrey McRae,
	Keiichi Watanabe, Dmitry Morozov, Alexandre Courbot, Alex Lau,
	Stéphane Marchesin, Pawel Osciak, Hans Verkuil,
	Daniel Vetter, Gurchetan Singh, Linux Media Mailing List,
	virtio-dev, qemu-devel

> > > Of course only virtio drivers would try step (2), other drivers (when
> > > sharing buffers between intel gvt device and virtio-gpu for example)
> > > would go straight to (3).
> >
> > For virtio-gpu as it is today, it's not clear to me that they're
> > equivalent. As I read it, the virtio-gpu spec makes a distinction
> > between the guest memory and the host resource. If virtio-gpu is
> > communicating with non-virtio devices, then obviously you'd just be
> > working with guest memory. But if it's communicating with another
> > virtio device, then there are potentially distinct guest and host
> > buffers that could be used. The spec shouldn't leave any room for
> > ambiguity as to how this distinction is handled.
>
> Yep.  It should be the host side buffer.

I agree that it should be the host side buffer. I just want to make
sure that the meaning of 'import' is clear, and to establish the fact
that importing a buffer by uuid is not necessarily the same thing as
creating a new buffer in a different device from the same sglist (for
example, sharing a guest sglist might require more flushes).

-David


end of thread

Thread overview: 51+ messages
2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
2019-11-05 11:35 ` Geoffrey McRae
2019-11-06  6:24   ` Gerd Hoffmann
2019-11-06  8:36 ` David Stevens
2019-11-06 12:41   ` Gerd Hoffmann
2019-11-06 22:28     ` Geoffrey McRae
2019-11-07  6:48       ` Gerd Hoffmann
2019-11-20 12:13       ` Tomasz Figa
2019-11-20 21:41         ` Geoffrey McRae
2019-11-21  5:51           ` Tomasz Figa
2019-12-04 22:22             ` Dylan Reid
2019-12-11  5:08               ` David Stevens
2019-12-11  9:26                 ` Gerd Hoffmann
2019-12-11 16:05                   ` [virtio-dev] " Enrico Granata
2019-12-12  6:40                   ` David Stevens
2019-12-12  9:41                     ` Gerd Hoffmann
2019-12-12 12:26                       ` David Stevens
2019-12-12 13:30                         ` Gerd Hoffmann
2019-12-13  3:21                           ` David Stevens
2019-12-16 13:47                             ` Gerd Hoffmann
2019-12-17 12:59                               ` David Stevens
2019-11-06  8:43 ` Stefan Hajnoczi
2019-11-06  9:51   ` Gerd Hoffmann
2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
2019-11-07 11:11       ` Gerd Hoffmann
2019-11-07 11:16         ` Dr. David Alan Gilbert
2019-11-08  6:45           ` Gerd Hoffmann
2019-11-06 11:46     ` Stefan Hajnoczi
2019-11-06 12:50       ` Gerd Hoffmann
2019-11-07 12:10         ` Stefan Hajnoczi
2019-11-08  7:22           ` Gerd Hoffmann
2019-11-08  7:35             ` Stefan Hajnoczi
2019-11-09  1:41               ` Stéphane Marchesin
2019-11-09 10:12                 ` Stefan Hajnoczi
2019-11-09 11:16                   ` Tomasz Figa
2019-11-09 12:08                     ` Stefan Hajnoczi
2019-11-09 15:12                       ` Tomasz Figa
2019-11-18 10:20                         ` Stefan Hajnoczi
2019-11-20 10:11                           ` Tomasz Figa
     [not found]           ` <CAEkmjvU8or7YT7CCBe7aUx-XQ3yJpUrY4CfBOnqk7pUH9d9RGQ@mail.gmail.com>
2019-11-20 11:58             ` Tomasz Figa
2019-11-20 12:11     ` Tomasz Figa
2019-11-11  3:04   ` David Stevens
2019-11-11 15:36     ` [virtio-dev] " Liam Girdwood
2019-11-12  0:54       ` Gurchetan Singh
2019-11-12 13:56         ` Liam Girdwood
2019-11-12 22:55           ` Gurchetan Singh
2019-11-19 15:31             ` Liam Girdwood
2019-11-20  0:42               ` Gurchetan Singh
2019-11-20  9:53               ` Gerd Hoffmann
2019-11-25 16:46                 ` Liam Girdwood
2019-11-27  7:58                   ` Gerd Hoffmann
