QEMU-Devel Archive on lore.kernel.org
* guest / host buffer sharing ...
@ 2019-11-05 10:54 Gerd Hoffmann
  2019-11-05 11:35 ` Geoffrey McRae
                   ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-05 10:54 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Hans Verkuil, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Dmitry Morozov, Pawel Osciak, Linux Media Mailing List

  Hi folks,

The issue of sharing buffers between guests and hosts keeps popping
up again and again in different contexts.  Most recently here:

https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html

So, I'm grabbing the recipient list of the virtio-vdec thread and some
more people I know might be interested in this, hoping to have everyone
included.

Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
resources" is really a good answer for all the different use cases
we have collected over time.  Maybe it is better to have a dedicated
buffer sharing virtio device?  Here is the rough idea:


(1) The virtio device
=====================

Has a single virtio queue, so the guest can send commands to register
and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
has a list of memory ranges for the data.  Each buffer also has some
properties to carry metadata, some fixed (id, size, application), but
also allow free form (name = value, framebuffers would have
width/height/stride/format for example).
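
To make this a bit more concrete, the register command could look
roughly like the sketch below.  This is purely illustrative; all names
and field sizes are made up, and nothing here is meant as a spec
proposal:

  /* rough sketch only -- names and sizes are illustrative */
  #include <linux/types.h>

  struct vbuf_mem_range {
          __le64 addr;                  /* guest physical address */
          __le64 length;
  };

  struct vbuf_property {
          char name[64];                /* e.g. "width", "format" */
          char value[64];
  };

  struct vbuf_register {
          __le32 type;                  /* VBUF_CMD_REGISTER / _UNREGISTER */
          __le32 buffer_id;
          __le64 size;
          char application[64];         /* e.g. "wayland-proxy" */
          __le32 nr_ranges;
          __le32 nr_properties;
          /* followed by nr_ranges struct vbuf_mem_range and
           * nr_properties struct vbuf_property entries */
  };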


(2) The linux guest implementation
==================================

I guess I'd try to make it a drm driver, so we can re-use drm
infrastructure (shmem helpers for example).  Buffers are dumb drm
buffers.  dma-buf import and export is supported (shmem helpers
get us that for free).  Some device-specific ioctls to get/set
properties and to register/unregister the buffers on the host.
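
A minimal sketch of what those device-specific ioctls could look like,
assuming the dumb-buffer drm driver described above (all names below
are invented for illustration, none of them exist anywhere):

  #include <linux/types.h>
  #include <drm/drm.h>

  struct drm_vbuf_set_property {
          __u32 handle;                 /* dumb buffer handle */
          char  name[64];
          char  value[64];
  };

  struct drm_vbuf_register {
          __u32 handle;                 /* dumb buffer handle */
          __u32 buffer_id;              /* id announced to the host */
  };

  /* driver-private ioctl numbers below are placeholders */
  #define DRM_IOCTL_VBUF_SET_PROPERTY \
          DRM_IOWR(DRM_COMMAND_BASE + 0x00, struct drm_vbuf_set_property)
  #define DRM_IOCTL_VBUF_REGISTER \
          DRM_IOWR(DRM_COMMAND_BASE + 0x01, struct drm_vbuf_register)
  #define DRM_IOCTL_VBUF_UNREGISTER \
          DRM_IOWR(DRM_COMMAND_BASE + 0x02, struct drm_vbuf_register)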


(3) The qemu host implementation
================================

qemu (likewise other vmms) can use the udmabuf driver to create
host-side dma-bufs for the buffers.  The dma-bufs can be passed to
anyone interested, inside and outside qemu.  We'll need some protocol
for communication between qemu and external users interested in those
buffers, to receive dma-bufs (via unix file descriptor passing) and
update notifications.  Dispatching updates could be done based on the
application property, which could be "virtio-vdec" or "wayland-proxy"
for example.
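
For reference, the udmabuf part on the host could be as simple as the
sketch below, assuming guest ram is backed by a memfd (as with
memory-backend-memfd) and with error handling omitted:

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/udmabuf.h>

  /* create a host dma-buf for one page-aligned range of guest ram */
  static int export_guest_range(int ramfd, uint64_t offset, uint64_t size)
  {
          struct udmabuf_create create = {
                  .memfd  = ramfd,
                  .flags  = UDMABUF_FLAGS_CLOEXEC,
                  .offset = offset,
                  .size   = size,
          };
          int devfd = open("/dev/udmabuf", O_RDWR);
          int buf   = ioctl(devfd, UDMABUF_CREATE, &create);

          close(devfd);
          return buf;     /* dma-buf fd, can be passed over a unix socket */
  }

Buffers made of multiple ranges would use UDMABUF_CREATE_LIST instead.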


comments?

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
@ 2019-11-05 11:35 ` Geoffrey McRae
  2019-11-06  6:24   ` Gerd Hoffmann
  2019-11-06  8:36 ` David Stevens
  2019-11-06  8:43 ` Stefan Hajnoczi
  2 siblings, 1 reply; 29+ messages in thread
From: Geoffrey McRae @ 2019-11-05 11:35 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Keiichi Watanabe, David Stevens, Tomasz Figa, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel

Hi Gerd.

On 2019-11-05 21:54, Gerd Hoffmann wrote:
> Hi folks,
> 
> The issue of sharing buffers between guests and hosts keeps popping
> up again and again in different contexts.  Most recently here:
> 
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
> 
> So, I'm grabbing the recipient list of the virtio-vdec thread and some
> more people I know might be interested in this, hoping to have everyone
> included.
> 
> Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> resources" is really a good answer for all the different use cases
> we have collected over time.  Maybe it is better to have a dedicated
> buffer sharing virtio device?  Here is the rough idea:
> 

This would be the ultimate solution to this; it would also make it the
de facto device, possibly even leading to the deprecation of the
IVSHMEM device.

> 
> (1) The virtio device
> =====================
> 
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each 
> buffer
> has a list of memory ranges for the data.  Each buffer also has some
> properties to carry metadata, some fixed (id, size, application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).
> 

Perfect. However, since it's to be a generic device there also needs
to be a method in the guest to identify which device is the one the
application is interested in without opening the device. Since Windows
makes the subsystem vendor ID and device ID available to the userspace
application, I suggest these be used for this purpose.

To avoid clashes, a simple text file to track reservations of
subsystem IDs for applications/protocols would be recommended.

The device should also support a reset feature allowing the guest to
notify the host application that all buffers have become invalid, such
as on abnormal termination of the guest application that is using the
device.

Conversely, on unix socket disconnect qemu should notify the guest of
this event as well, allowing each end to properly synchronize.

> 
> (2) The linux guest implementation
> ==================================
> 
> I guess I'd try to make it a drm driver, so we can re-use drm
> infrastructure (shmem helpers for example).  Buffers are dumb drm
> buffers.  dma-buf import and export is supported (shmem helpers
> get us that for free).  Some device-specific ioctls to get/set
> properties and to register/unregister the buffers on the host.
> 

I would be happy to do what I can to implement the Windows driver for
this if nobody else is interested in doing so; however, my abilities in
this field are rather limited and the results may not be that great :)

> 
> (3) The qemu host implementation
> ================================
> 
> qemu (likewise other vmms) can use the udmabuf driver to create
> host-side dma-bufs for the buffers.  The dma-bufs can be passed to
> anyone interested, inside and outside qemu.  We'll need some protocol
> for communication between qemu and external users interested in those
> buffers, to receive dma-bufs (via unix file descriptor passing) and
> update notifications.  Dispatching updates could be done based on the
> application property, which could be "virtio-vdec" or "wayland-proxy"
> for example.

I don't know enough about udmabuf to really comment on this except to
ask a question: would this make guest-to-guest transfers without an
intermediate buffer possible?

-Geoff

> 
> 
> comments?
> 
> cheers,
>   Gerd


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 11:35 ` Geoffrey McRae
@ 2019-11-06  6:24   ` Gerd Hoffmann
  0 siblings, 0 replies; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-06  6:24 UTC (permalink / raw)
  To: Geoffrey McRae
  Cc: Hans Verkuil, Alex Lau, Alexandre Courbot, virtio-dev,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

> > (1) The virtio device
> > =====================
> > 
> > Has a single virtio queue, so the guest can send commands to register
> > and unregister buffers.  Buffers are allocated in guest ram.  Each
> > buffer
> > has a list of memory ranges for the data.  Each buffer also has some
> > properties to carry metadata, some fixed (id, size, application), but
> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> > 
> 
> Perfect, however since it's to be a generic device there also needs to be a
> method in the guest to identify which device is the one the application is
> interested in without opening the device.

This is what the application buffer property is supposed to handle, i.e.
you'll have a single device, all applications share it and the property
tells which buffer belongs to which application.

> The device should also support a reset feature allowing the guest to
> notify the host application that all buffers have become invalid such as
> on abnormal termination of the guest application that is using the device.

The guest driver should clean up properly (i.e. unregister all buffers)
when an application terminates of course, no matter what the reason is
(crash, exit without unregistering buffers, ...).  Doable without a full
device reset.

Independent of that, a full reset will be supported of course; it is a
standard virtio feature.

> Conversely, qemu on unix socket disconnect should notify the guest of this
> event also, allowing each end to properly synchronize.

I was thinking more about a simple guest-side publishing of buffers,
without a backchannel.  If more coordination is needed you can use
vsock for that, for example.

> > (3) The qemu host implementation
> > ================================
> > 
> > qemu (likewise other vmms) can use the udmabuf driver to create
> > host-side dma-bufs for the buffers.  The dma-bufs can be passed to
> > anyone interested, inside and outside qemu.  We'll need some protocol
> > for communication between qemu and external users interested in those
> > buffers, to receive dma-bufs (via unix file descriptor passing) and
> > update notifications.

Using vhost for the host-side implementation should be possible too.

> > Dispatching updates could be done based on the
> > application property, which could be "virtio-vdec" or "wayland-proxy"
> > for example.
> 
> I don't know enough about udmabuf to really comment on this except to ask
> a question. Would this make guest to guest transfers without an
> intermediate buffer possible?

Yes.

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
  2019-11-05 11:35 ` Geoffrey McRae
@ 2019-11-06  8:36 ` David Stevens
  2019-11-06 12:41   ` Gerd Hoffmann
  2019-11-06  8:43 ` Stefan Hajnoczi
  2 siblings, 1 reply; 29+ messages in thread
From: David Stevens @ 2019-11-06  8:36 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, Hans Verkuil, Alex Lau, Alexandre Courbot, virtio-dev,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Dmitry Morozov, Pawel Osciak, Linux Media Mailing List

> (1) The virtio device
> =====================
>
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> has a list of memory ranges for the data. Each buffer also has some

Allocating from guest ram would work most of the time, but I think
it's insufficient for many use cases. It doesn't really support things
such as contiguous allocations, allocations from carveouts or <4GB,
protected buffers, etc.

> properties to carry metadata, some fixed (id, size, application), but

What exactly do you mean by application?

> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).

Is this approach expected to handle allocating buffers with
hardware-specific constraints such as stride/height alignment or
tiling? Or would there need to be some alternative channel for
determining those values and then calculating the appropriate buffer
size?

-David

On Tue, Nov 5, 2019 at 7:55 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi folks,
>
> The issue of sharing buffers between guests and hosts keeps popping
> up again and again in different contexts.  Most recently here:
>
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
>
> So, I'm grabbing the recipient list of the virtio-vdec thread and some
> more people I know might be interested in this, hoping to have everyone
> included.
>
> Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> resources" is really a good answer for all the different use cases
> we have collected over time.  Maybe it is better to have a dedicated
> buffer sharing virtio device?  Here is the rough idea:
>
>
> (1) The virtio device
> =====================
>
> Has a single virtio queue, so the guest can send commands to register
> and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> has a list of memory ranges for the data.  Each buffer also has some
> properties to carry metadata, some fixed (id, size, application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).
>
>
> (2) The linux guest implementation
> ==================================
>
> I guess I'd try to make it a drm driver, so we can re-use drm
> infrastructure (shmem helpers for example).  Buffers are dumb drm
> buffers.  dma-buf import and export is supported (shmem helpers
> get us that for free).  Some device-specific ioctls to get/set
> properties and to register/unregister the buffers on the host.
>
>
> (3) The qemu host implementation
> ================================
>
> qemu (likewise other vmms) can use the udmabuf driver to create
> host-side dma-bufs for the buffers.  The dma-bufs can be passed to
> anyone interested, inside and outside qemu.  We'll need some protocol
> for communication between qemu and external users interested in those
> buffers, to receive dma-bufs (via unix file descriptor passing) and
> update notifications.  Dispatching updates could be done based on the
> application property, which could be "virtio-vdec" or "wayland-proxy"
> for example.
>
>
> comments?
>
> cheers,
>   Gerd
>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
  2019-11-05 11:35 ` Geoffrey McRae
  2019-11-06  8:36 ` David Stevens
@ 2019-11-06  8:43 ` Stefan Hajnoczi
  2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-11  3:04   ` David Stevens
  2 siblings, 2 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2019-11-06  8:43 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List


On Tue, Nov 05, 2019 at 11:54:56AM +0100, Gerd Hoffmann wrote:
> The issue of sharing buffers between guests and hosts keeps popping
> up again and again in different contexts.  Most recently here:
> 
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg656685.html
> 
> So, I'm grabbing the recipient list of the virtio-vdec thread and some
> more people I know might be interested in this, hoping to have everyone
> included.
> 
> Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> resources" is really a good answer for all the different use cases
> we have collected over time.  Maybe it is better to have a dedicated
> buffer sharing virtio device?  Here is the rough idea:

My concern is that buffer sharing isn't a "device".  It's a primitive
used in building other devices.  When someone asks for just buffer
sharing it's often because they do not intend to upstream a
specification for their device.

If this buffer sharing device's main purpose is for building proprietary
devices without contributing to VIRTIO, then I don't think it makes
sense for the VIRTIO community to assist in its development.

VIRTIO recently gained a shared memory resource concept for access to
host memory.  It is being used in virtio-pmem and virtio-fs (and
virtio-gpu?).  If another flavor of shared memory is required it can be
added to the spec and new VIRTIO device types can use it.  But it's not
clear why this should be its own device.

My question would be "what is the actual problem you are trying to
solve?".

Stefan


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  8:43 ` Stefan Hajnoczi
@ 2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
  2019-11-06 11:46     ` Stefan Hajnoczi
  2019-11-11  3:04   ` David Stevens
  1 sibling, 2 replies; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-06  9:51 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Keiichi Watanabe, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> > Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> > resources" is really a good answer for all the different use cases
> > we have collected over time.  Maybe it is better to have a dedicated
> > buffer sharing virtio device?  Here is the rough idea:
> 
> My concern is that buffer sharing isn't a "device".  It's a primitive
> used in building other devices.  When someone asks for just buffer
> sharing it's often because they do not intend to upstream a
> specification for their device.

Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
It is more a service to allow communication between host and guest.

That buffer sharing device falls into the same category.  Maybe it even
makes sense to build that as virtio-vsock extension.  Not sure how well
that would work with the multi-transport architecture of vsock though.

> If this buffer sharing device's main purpose is for building proprietary
> devices without contributing to VIRTIO, then I don't think it makes
> sense for the VIRTIO community to assist in its development.

One possible use case would be building a wayland proxy, using vsock for
the wayland protocol messages and virtio-buffers for the shared buffers
(wayland client window content).

It could also simplify buffer sharing between devices (feed decoded
video frames from decoder to gpu), although in that case it is less
clear that it'll actually simplify things because virtio-gpu is
involved anyway.

We can't prevent people from using that for proprietary stuff (same goes
for vsock).

There is the option to use virtio-gpu instead, i.e. add support to qemu
to export dma-buf handles for virtio-gpu resources to other processes
(such as a wayland proxy).  That would provide very similar
functionality (and thereby create the same loophole).

> VIRTIO recently gained a shared memory resource concept for access to
> host memory.  It is being used in virtio-pmem and virtio-fs (and
> virtio-gpu?).

virtio-gpu is in progress still unfortunately (all kinds of fixes for
the qemu drm drivers and virtio-gpu guest driver refactoring kept me
busy for quite a while ...).

> If another flavor of shared memory is required it can be
> added to the spec and new VIRTIO device types can use it.  But it's not
> clear why this should be its own device.

This is not about host memory; buffers are in guest ram.  Everything
else would make sharing those buffers between drivers inside the guest
(as dma-buf) quite difficult.

> My question would be "what is the actual problem you are trying to
> solve?".

Typical use cases center around sharing graphics data between guest
and host.

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-06  9:51   ` Gerd Hoffmann
@ 2019-11-06 10:10     ` " Dr. David Alan Gilbert
  2019-11-07 11:11       ` Gerd Hoffmann
  2019-11-06 11:46     ` Stefan Hajnoczi
  1 sibling, 1 reply; 29+ messages in thread
From: Dr. David Alan Gilbert @ 2019-11-06 10:10 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, Stefan Hajnoczi,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

* Gerd Hoffmann (kraxel@redhat.com) wrote:
>   Hi,
> 
> > > Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time.  Maybe it is better to have a dedicated
> > > buffer sharing virtio device?  Here is the rough idea:
> > 
> > My concern is that buffer sharing isn't a "device".  It's a primitive
> > used in building other devices.  When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
> 
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service to allow communication between host and guest
> 
> That buffer sharing device falls into the same category.  Maybe it even
> makes sense to build that as virtio-vsock extension.  Not sure how well
> that would work with the multi-transport architecture of vsock though.
> 
> > If this buffer sharing device's main purpose is for building proprietary
> > devices without contributing to VIRTIO, then I don't think it makes
> > sense for the VIRTIO community to assist in its development.
> 
> One possible use case would be building a wayland proxy, using vsock for
> the wayland protocol messages and virtio-buffers for the shared buffers
> (wayland client window content).
> 
> It could also simplify buffer sharing between devices (feed decoded
> video frames from decoder to gpu), although in that case it is less
> clear that it'll actually simplify things because virtio-gpu is
> involved anyway.
> 
> We can't prevent people from using that for proprietary stuff (same goes
> for vsock).
> 
> There is the option to use virtio-gpu instead, i.e. add support to qemu
> to export dma-buf handles for virtio-gpu resources to other processes
> (such as a wayland proxy).  That would provide very similar
> functionality (and thereby create the same loophole).
> 
> > VIRTIO recently gained a shared memory resource concept for access to
> > host memory.  It is being used in virtio-pmem and virtio-fs (and
> > virtio-gpu?).
> 
> virtio-gpu is in progress still unfortunately (all kinds of fixes for
> the qemu drm drivers and virtio-gpu guest driver refactoring kept me
> busy for quite a while ...).
> 
> > If another flavor of shared memory is required it can be
> > added to the spec and new VIRTIO device types can use it.  But it's not
> > clear why this should be its own device.
> 
> This is not about host memory, buffers are in guest ram, everything else
> would make sharing those buffers between drivers inside the guest (as
> dma-buf) quite difficult.

Given it's just guest memory, can the guest just have a virt queue on
which it places pointers to the memory it wants to share as elements in
the queue?

Dave

> > My question would be "what is the actual problem you are trying to
> > solve?".
> 
> Typical use cases center around sharing graphics data between guest
> and host.
> 
> cheers,
>   Gerd
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  9:51   ` Gerd Hoffmann
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
@ 2019-11-06 11:46     ` Stefan Hajnoczi
  2019-11-06 12:50       ` Gerd Hoffmann
  1 sibling, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2019-11-06 11:46 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Keiichi Watanabe, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Wed, Nov 6, 2019 at 10:51 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > Reason is:  Meanwhile I'm wondering whether "just use virtio-gpu
> > > resources" is really a good answer for all the different use cases
> > > we have collected over time.  Maybe it is better to have a dedicated
> > > buffer sharing virtio device?  Here is the rough idea:
> >
> > My concern is that buffer sharing isn't a "device".  It's a primitive
> > used in building other devices.  When someone asks for just buffer
> > sharing it's often because they do not intend to upstream a
> > specification for their device.
>
> Well, "vsock" isn't a classic device (aka nic/storage/gpu/...) either.
> It is more a service to allow communication between host and guest

There are existing applications and network protocols that can be
easily run over virtio-vsock, virtio-net, and virtio-serial to do
useful things.

If a new device has no use except for writing custom code, then it's
a clue that we're missing the actual use case.

In the graphics buffer sharing use case, how does the other side
determine how to interpret this data?  Shouldn't there be a VIRTIO
device spec for the messaging so compatible implementations can be
written by others?

Stefan


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06  8:36 ` David Stevens
@ 2019-11-06 12:41   ` Gerd Hoffmann
  2019-11-06 22:28     ` Geoffrey McRae
  0 siblings, 1 reply; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-06 12:41 UTC (permalink / raw)
  To: David Stevens
  Cc: geoff, Hans Verkuil, Alex Lau, Alexandre Courbot, virtio-dev,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Dmitry Morozov, Pawel Osciak, Linux Media Mailing List

On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > (1) The virtio device
> > =====================
> >
> > Has a single virtio queue, so the guest can send commands to register
> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> > has a list of memory ranges for the data. Each buffer also has some
> 
> Allocating from guest ram would work most of the time, but I think
> it's insufficient for many use cases. It doesn't really support things
> such as contiguous allocations, allocations from carveouts or <4GB,
> protected buffers, etc.

If there are additional constraints (due to gpu hardware I guess)
I think it is better to leave the buffer allocation to virtio-gpu.

virtio-gpu can't do that right now, but we have to improve virtio-gpu
memory management for vulkan support anyway.

> > properties to carry metadata, some fixed (id, size, application), but
> 
> What exactly do you mean by application?

Basically some way to group buffers.  A wayland proxy for example would
add an "application=wayland-proxy" tag to the buffers it creates in the
guest, and the host-side part of the proxy could ask qemu (or another
vmm) to notify it about all buffers with that tag.  So in case multiple
applications are using the device in parallel they don't interfere with
each other.
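
To illustrate the receiving end of such a notification: the vmm would
hand the dma-buf to the host-side proxy over a unix socket, roughly as
sketched below.  The one-word message layout is made up; only the
SCM_RIGHTS fd passing itself is standard:

  #include <stdint.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* receive one (buffer-id, dma-buf fd) notification from the vmm */
  static int recv_buffer_fd(int sock, uint32_t *buffer_id)
  {
          char ctrl[CMSG_SPACE(sizeof(int))];
          struct iovec iov = {
                  .iov_base = buffer_id,
                  .iov_len  = sizeof(*buffer_id),
          };
          struct msghdr msg = {
                  .msg_iov        = &iov,
                  .msg_iovlen     = 1,
                  .msg_control    = ctrl,
                  .msg_controllen = sizeof(ctrl),
          };
          struct cmsghdr *cmsg;
          int fd;

          if (recvmsg(sock, &msg, 0) <= 0)
                  return -1;
          cmsg = CMSG_FIRSTHDR(&msg);
          if (!cmsg || cmsg->cmsg_level != SOL_SOCKET ||
              cmsg->cmsg_type != SCM_RIGHTS)
                  return -1;
          memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
          return fd;      /* dma-buf fd for the newly registered buffer */
  }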

> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> 
> Is this approach expected to handle allocating buffers with
> hardware-specific constraints such as stride/height alignment or
> tiling? Or would there need to be some alternative channel for
> determining those values and then calculating the appropriate buffer
> size?

No parameter negotiation.

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 11:46     ` Stefan Hajnoczi
@ 2019-11-06 12:50       ` Gerd Hoffmann
  2019-11-07 12:10         ` Stefan Hajnoczi
  0 siblings, 1 reply; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-06 12:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Keiichi Watanabe, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> In the graphics buffer sharing use case, how does the other side
> determine how to interpret this data?

The idea is to have free form properties (name=value, with value being
a string) for that kind of metadata.

> Shouldn't there be a VIRTIO
> device spec for the messaging so compatible implementations can be
> written by others?

Adding a list of common properties to the spec certainly makes sense,
so everybody uses the same names.  Adding struct-ed properties for
common use cases might be useful too.
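
As a strawman, the struct-ed variant for the framebuffer case could be
as small as this (illustrative only, with everything else staying
free-form name=value pairs):

  #include <linux/types.h>

  /* illustrative struct-ed property for framebuffer-type buffers */
  struct vbuf_prop_framebuffer {
          __le32 width;
          __le32 height;
          __le32 stride;
          __le32 format;        /* drm fourcc */
  };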

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 12:41   ` Gerd Hoffmann
@ 2019-11-06 22:28     ` Geoffrey McRae
  2019-11-07  6:48       ` Gerd Hoffmann
  0 siblings, 1 reply; 29+ messages in thread
From: Geoffrey McRae @ 2019-11-06 22:28 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: David Stevens, Keiichi Watanabe, Tomasz Figa, Dmitry Morozov,
	Alexandre Courbot, Alex Lau, Dylan Reid, Stéphane Marchesin,
	Pawel Osciak, Hans Verkuil, Daniel Vetter, Gurchetan Singh,
	Linux Media Mailing List, virtio-dev, qemu-devel



On 2019-11-06 23:41, Gerd Hoffmann wrote:
> On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
>> > (1) The virtio device
>> > =====================
>> >
>> > Has a single virtio queue, so the guest can send commands to register
>> > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
>> > has a list of memory ranges for the data. Each buffer also has some
>> 
>> Allocating from guest ram would work most of the time, but I think
>> it's insufficient for many use cases. It doesn't really support things
>> such as contiguous allocations, allocations from carveouts or <4GB,
>> protected buffers, etc.
> 
> > If there are additional constraints (due to gpu hardware I guess)
> I think it is better to leave the buffer allocation to virtio-gpu.

The entire point of this for our purposes is due to the fact that we
cannot allocate the buffer; it's either provided by the GPU driver or
DirectX. If virtio-gpu were to allocate the buffer we might as well
forget all this and continue using the ivshmem device.

Our use case is niche, and the state of things may change if vendors
like AMD follow through with their promises and give us SR-IOV on
consumer GPUs, but even then we would still need their support to
achieve the same results, as the same issue would still be present.

Also don't forget that QEMU already has a non-virtio generic device
(IVSHMEM). The only difference is that this device doesn't allow us to
attain zero-copy transfers.

Currently IVSHMEM is used by two projects that I am aware of: Looking
Glass and SCREAM. While Looking Glass is solving a problem that is out
of scope for QEMU, SCREAM is working around the audio problems in QEMU
that have been present for years now.

While I don't agree with SCREAM being used this way (we really need a
virtio-sound device, and/or intel-hda needs to be fixed), it again is
an example of working around bugs/faults/limitations in QEMU by those
of us who are unable to fix them ourselves, and which seem to have low
priority for the QEMU project.

What we are trying to attain is freedom from dual-boot Linux/Windows
systems, not migratable enterprise VPS configurations. The Looking
Glass project has brought attention to several other bugs/problems in
QEMU, some of which were fixed as a direct result of this project
(i8042 race, AMD NPT).

Unless there is another solution for getting the guest GPU's
framebuffer back to the host, a device like this will always be
required. Since the landscape could change at any moment, this device
should not be an LG-specific device, but rather a generic device to
allow for other workarounds like LG to be developed in the future
should they be required.

Is it optimal? No.
Is there a better solution? Not that I am aware of.

> 
> virtio-gpu can't do that right now, but we have to improve virtio-gpu
> memory management for vulkan support anyway.
> 
>> > properties to carry metadata, some fixed (id, size, application), but
>> 
>> What exactly do you mean by application?
> 
> Basically some way to group buffers.  A wayland proxy for example would
> add a "application=wayland-proxy" tag to the buffers it creates in the
> guest, and the host side part of the proxy could ask qemu (or another
> vmm) to notify about all buffers with that tag.  So in case multiple
> applications are using the device in parallel they don't interfere with
> each other.
> 
>> > also allow free form (name = value, framebuffers would have
>> > width/height/stride/format for example).
>> 
>> Is this approach expected to handle allocating buffers with
>> hardware-specific constraints such as stride/height alignment or
>> tiling? Or would there need to be some alternative channel for
>> determining those values and then calculating the appropriate buffer
>> size?
> 
> No parameter negotiation.
> 
> cheers,
>   Gerd


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 22:28     ` Geoffrey McRae
@ 2019-11-07  6:48       ` Gerd Hoffmann
  0 siblings, 0 replies; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-07  6:48 UTC (permalink / raw)
  To: Geoffrey McRae
  Cc: Hans Verkuil, Alex Lau, Alexandre Courbot, virtio-dev,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

> On 2019-11-06 23:41, Gerd Hoffmann wrote:
> > On Wed, Nov 06, 2019 at 05:36:22PM +0900, David Stevens wrote:
> > > > (1) The virtio device
> > > > =====================
> > > >
> > > > Has a single virtio queue, so the guest can send commands to register
> > > > and unregister buffers.  Buffers are allocated in guest ram.  Each buffer
> > > > has a list of memory ranges for the data. Each buffer also has some
> > > 
> > > Allocating from guest ram would work most of the time, but I think
> > > it's insufficient for many use cases. It doesn't really support things
> > > such as contiguous allocations, allocations from carveouts or <4GB,
> > > protected buffers, etc.
> > 
> > If there are additional constraints (due to gpu hardware I guess)
> > I think it is better to leave the buffer allocation to virtio-gpu.
> 
> The entire point of this for our purposes is due to the fact that we can
> not allocate the buffer, it's either provided by the GPU driver or
> DirectX. If virtio-gpu were to allocate the buffer we might as well forget
> all this and continue using the ivshmem device.

Well, virtio-gpu resources are in guest ram, like the buffers of a
virtio-buffers device would be.  So it isn't much of a difference.  If
the buffer provided by the (nvidia/amd/intel) gpu driver lives in ram
you can create a virtio-gpu resource for it.

On the linux side that is typically handled with dma-buf, one driver
exports the dma-buf and the other imports it.  virtio-gpu doesn't
support that fully yet though (import is being worked on, export is done
and will land upstream in the next merge window).

No clue what this looks like for Windows guests ...

> Currently IVSHMEM is used by two projects that I am aware of, Looking
> Glass and SCREAM. While Looking Glass is solving a problem that is out of
> scope for QEMU, SCREAM is working around the audio problems in QEMU that
> have been present for years now.

Side note: sound in qemu 3.1+ should be a lot better than in 2.x
versions.

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
@ 2019-11-07 11:11       ` Gerd Hoffmann
  2019-11-07 11:16         ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-07 11:11 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, Stefan Hajnoczi,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> > This is not about host memory, buffers are in guest ram, everything else
> > would make sharing those buffers between drivers inside the guest (as
> > dma-buf) quite difficult.
> 
> Given it's just guest memory, can the guest just have a virt queue on
> which it places pointers to the memory it wants to share as elements in
> the queue?

Well, good question.  I'm actually wondering what the best approach is
to handle long-living, large buffers in virtio ...

virtio-blk (and others) are using the approach you describe.  They put a
pointer to the io request header, followed by pointer(s) to the io
buffers directly into the virtqueue.  That works great with storage for
example.  The queue entries are tagged being "in" or "out" (driver to
device or visa-versa), so the virtio transport can set up dma mappings
accordingly or even transparently copy data if needed.

For long-living buffers where data can potentially flow both ways this
model doesn't fit very well though.  So what virtio-gpu does instead is
transferring the scatter list as virtio payload.  Does feel a bit
unclean as it doesn't really fit the virtio architecture.  It assumes
the host can directly access guest memory for example (which is usually
the case but explicitly not required by virtio).  It also requires
quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
in theory should be handled fully transparently by the virtio-pci
transport.
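
For comparison, the "scatter list as payload" layout is roughly the
following, modeled loosely on virtio-gpu's resource_attach_backing
(names simplified, not taken from any spec):

  /* the scatter list travels inside the command payload,
   * not as separate virtqueue descriptors */
  struct vbuf_mem_entry {
          __le64 addr;          /* guest physical address */
          __le32 length;
          __le32 padding;
  };

  struct vbuf_attach_backing {
          __le32 buffer_id;
          __le32 nr_entries;
          /* followed by nr_entries struct vbuf_mem_entry */
  };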

We could instead have a "create-buffer" command which adds the buffer
pointers as elements to the virtqueue as you describe.  Then simply
continue using the buffer even after completing the "create-buffer"
command.  Which isn't exactly clean either.  It would likewise assume
direct access to guest memory, and it would likewise need quirks for
VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
mappings for the virtqueue entries after command completion.

Comments, suggestions, ideas?

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-07 11:11       ` Gerd Hoffmann
@ 2019-11-07 11:16         ` Dr. David Alan Gilbert
  2019-11-08  6:45           ` Gerd Hoffmann
  0 siblings, 1 reply; 29+ messages in thread
From: Dr. David Alan Gilbert @ 2019-11-07 11:16 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, Stefan Hajnoczi,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

* Gerd Hoffmann (kraxel@redhat.com) wrote:
>   Hi,
> 
> > > This is not about host memory, buffers are in guest ram, everything else
> > > would make sharing those buffers between drivers inside the guest (as
> > > dma-buf) quite difficult.
> > 
> > Given it's just guest memory, can the guest just have a virt queue on
> > which it places pointers to the memory it wants to share as elements in
> > the queue?
> 
> Well, good question.  I'm actually wondering what the best approach is
> to handle long-living, large buffers in virtio ...
> 
> virtio-blk (and others) are using the approach you describe.  They put a
> pointer to the io request header, followed by pointer(s) to the io
> buffers directly into the virtqueue.  That works great with storage for
> example.  The queue entries are tagged being "in" or "out" (driver to
> > device or vice versa), so the virtio transport can set up dma mappings
> accordingly or even transparently copy data if needed.
> 
> For long-living buffers where data can potentially flow both ways this
> model doesn't fit very well though.  So what virtio-gpu does instead is
> transferring the scatter list as virtio payload.  Does feel a bit
> unclean as it doesn't really fit the virtio architecture.  It assumes
> the host can directly access guest memory for example (which is usually
> the case but explicitly not required by virtio).  It also requires
> quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> in theory should be handled fully transparently by the virtio-pci
> transport.
> 
> We could instead have a "create-buffer" command which adds the buffer
> pointers as elements to the virtqueue as you describe.  Then simply
> continue using the buffer even after completing the "create-buffer"
> command.  Which isn't exactly clean either.  It would likewise assume
> direct access to guest memory, and it would likewise need quirks for
> VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
> mappings for the virtqueue entries after command completion.
> 
> Comments, suggestions, ideas?

What about not completing the command while the device is using the
memory?

Dave

> cheers,
>   Gerd
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-06 12:50       ` Gerd Hoffmann
@ 2019-11-07 12:10         ` Stefan Hajnoczi
  2019-11-07 15:10           ` Frank Yang
  2019-11-08  7:22           ` Gerd Hoffmann
  0 siblings, 2 replies; 29+ messages in thread
From: Stefan Hajnoczi @ 2019-11-07 12:10 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Keiichi Watanabe, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Wed, Nov 6, 2019 at 1:50 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > In the graphics buffer sharing use case, how does the other side
> > determine how to interpret this data?
>
> The idea is to have free form properties (name=value, with value being
> a string) for that kind of metadata.
>
> > Shouldn't there be a VIRTIO
> > device spec for the messaging so compatible implementations can be
> > written by others?
>
> Adding a list of common properties to the spec certainly makes sense,
> so everybody uses the same names.  Adding struct-ed properties for
> common use cases might be useful too.

Why not define VIRTIO devices for wayland and friends?

This new device exposes buffer sharing plus properties - effectively a
new device model nested inside VIRTIO.  The VIRTIO device model has
the necessary primitives to solve the buffer sharing problem so I'm
struggling to see the purpose of this new device.

Custom/niche applications that do not wish to standardize their device
type can maintain out-of-tree VIRTIO devices.  Both kernel and
userspace drivers can be written for the device and there is already
VIRTIO driver code that can be reused.  They have access to the full
VIRTIO device model, including feature negotiation and configuration
space.

Stefan


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-07 12:10         ` Stefan Hajnoczi
@ 2019-11-07 15:10           ` Frank Yang
  2019-11-08  7:22           ` Gerd Hoffmann
  1 sibling, 0 replies; 29+ messages in thread
From: Frank Yang @ 2019-11-07 15:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Gerd Hoffmann, geoff, virtio-dev, Alex Lau, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List


So I'm not really sure why people are having issues sharing buffers that
live on the GPU. Doesn't that show up as some integer ID on the host, and
some $GuestFramework (dmabuf, gralloc) ID on the guest, and it all works
out due to maintaining the correspondence in your particular stack of
virtual devices? For example, if you want to do video decode in hardware on
an Android guest, there should be a gralloc buffer whose handle contains
enough information to reconstruct the GPU buffer ID on the host, because
gralloc is how processes communicate gpu buffer ids to each other on
Android.

BTW, if we have a new device just for this, this should also be more
flexible than being udmabuf on the host. There are other OSes than Linux.
Keep in mind, also, that across different drivers even on Linux, e.g.,
NVIDIA proprietary, dmabuf might not always be available.

As for host CPU memory that is allocated in various ways, I think Android
Emulator has built a very flexible/general solution, especially if we need
to share a host CPU buffer allocated via something that's not completely
under our control, such as Vulkan. We reserve a PCI BAR for that and map
memory directly from the host Vk driver into there, via the address space
device. It's

https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/hw/pci/goldfish_address_space.c

https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emu/android/emulation/address_space_device.cpp#205


Number of copies is also completely under the user's control, unlike
ivshmem. It also is not tied to any particular device such as gpu or codec.
Since the memory is owned by the host and directly mapped to the guest PCI
without any abstraction, it's contiguous, it doesn't carve out guest RAM,
doesn't waste CMA, etc.

On Thu, Nov 7, 2019 at 4:13 AM Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Wed, Nov 6, 2019 at 1:50 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > In the graphics buffer sharing use case, how does the other side
> > > determine how to interpret this data?
> >
> > The idea is to have free form properties (name=value, with value being
> > a string) for that kind of metadata.
> >
> > > Shouldn't there be a VIRTIO
> > > device spec for the messaging so compatible implementations can be
> > > written by others?
> >
> > Adding a list of common properties to the spec certainly makes sense,
> > so everybody uses the same names.  Adding struct-ed properties for
> > common use cases might be useful too.
>
> Why not define VIRTIO devices for wayland and friends?
>
> This new device exposes buffer sharing plus properties - effectively a
> new device model nested inside VIRTIO.  The VIRTIO device model has
> the necessary primitives to solve the buffer sharing problem so I'm
> struggling to see the purpose of this new device.
>
> Custom/niche applications that do not wish to standardize their device
> type can maintain out-of-tree VIRTIO devices.  Both kernel and
> userspace drivers can be written for the device and there is already
> VIRTIO driver code that can be reused.  They have access to the full
> VIRTIO device model, including feature negotiation and configuration
> space.
>
> Stefan
>
>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-07 11:16         ` Dr. David Alan Gilbert
@ 2019-11-08  6:45           ` Gerd Hoffmann
  0 siblings, 0 replies; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-08  6:45 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, Stefan Hajnoczi,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, David Stevens,
	Daniel Vetter, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Thu, Nov 07, 2019 at 11:16:18AM +0000, Dr. David Alan Gilbert wrote:
> * Gerd Hoffmann (kraxel@redhat.com) wrote:
> >   Hi,
> > 
> > > > This is not about host memory, buffers are in guest ram, everything else
> > > > would make sharing those buffers between drivers inside the guest (as
> > > > dma-buf) quite difficult.
> > > 
> > > Given it's just guest memory, can the guest just have a virt queue on
> > > which it places pointers to the memory it wants to share as elements in
> > > the queue?
> > 
> > Well, good question.  I'm actually wondering what the best approach is
> > to handle long-living, large buffers in virtio ...
> > 
> > virtio-blk (and others) are using the approach you describe.  They put a
> > pointer to the io request header, followed by pointer(s) to the io
> > buffers directly into the virtqueue.  That works great with storage for
> > example.  The queue entries are tagged being "in" or "out" (driver to
> > device or vice versa), so the virtio transport can set up dma mappings
> > accordingly or even transparently copy data if needed.
> > 
> > For long-living buffers where data can potentially flow both ways this
> > model doesn't fit very well though.  So what virtio-gpu does instead is
> > transferring the scatter list as virtio payload.  Does feel a bit
> > unclean as it doesn't really fit the virtio architecture.  It assumes
> > the host can directly access guest memory for example (which is usually
> > the case but explicitly not required by virtio).  It also requires
> > quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> > in theory should be handled fully transparently by the virtio-pci
> > transport.
> > 
> > We could instead have a "create-buffer" command which adds the buffer
> > pointers as elements to the virtqueue as you describe.  Then simply
> > continue using the buffer even after completing the "create-buffer"
> > command.  Which isn't exactly clean either.  It would likewise assume
> > direct access to guest memory, and it would likewise need quirks for
> > VIRTIO_F_IOMMU_PLATFORM as the virtio-pci transport tears down the dma
> > mappings for the virtqueue entries after command completion.
> > 
> > Comments, suggestions, ideas?
> 
> What about not completing the command while the device is using the
> memory?

Thought about that too, but I don't think this is a good idea for
buffers which exist for a long time.

Example #1:  A video decoder would set up a bunch of buffers and use
them round-robin, so they would exist until the video playback is
finished.

Example #2:  virtio-gpu creates a framebuffer for fbcon which exists
forever.  And virtio-gpu potentially needs lots of buffers.  With 3d
active there can be tons of objects.  Although they typically don't
stay around that long we would still need a pretty big virtqueue to
store them all I guess.

And it also doesn't fully match the virtio spirit; it still assumes
direct guest memory access.  Without direct guest memory access,
updates to the fbcon object would never reach the host, for example.
In case an iommu is present we might need additional dma map flushes
for updates happening after submitting the lingering "create-buffer"
command.

cheers,
  Gerd



^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: guest / host buffer sharing ...
  2019-11-07 12:10         ` Stefan Hajnoczi
  2019-11-07 15:10           ` Frank Yang
@ 2019-11-08  7:22           ` Gerd Hoffmann
  2019-11-08  7:35             ` Stefan Hajnoczi
  1 sibling, 1 reply; 29+ messages in thread
From: Gerd Hoffmann @ 2019-11-08  7:22 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Keiichi Watanabe, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

  Hi,

> > Adding a list of common properties to the spec certainly makes sense,
> > so everybody uses the same names.  Adding struct-ed properties for
> > common use cases might be useful too.
> 
> Why not define VIRTIO devices for wayland and friends?

There is an out-of-tree implementation of that, so yes, that surely is
an option.

Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
pipe as control channel.  Pretty much the same for X11, except that
shared buffers are optional because the X protocol can also squeeze all
display updates through the stream pipe.

So, if you want to allow guests to talk to the host display server you
can run the stream pipe over vsock.  But there is nothing for the shared
buffers ...

We could replicate vsock functionality elsewhere.  I think that happened
in the out-of-tree virtio-wayland implementation.  There was also some
discussion about adding streams to virtio-gpu, slightly pimped up so you
can easily pass around virtio-gpu resource references for buffer
sharing.  But given that getting vsock right isn't exactly trivial
(consider all the fairness issues when multiplexing multiple streams
over a virtqueue, for example), I don't think this is a good plan.

cheers,
  Gerd




* Re: guest / host buffer sharing ...
  2019-11-08  7:22           ` Gerd Hoffmann
@ 2019-11-08  7:35             ` Stefan Hajnoczi
  2019-11-09  1:41               ` Stéphane Marchesin
  0 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2019-11-08  7:35 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Tomasz Figa, Keiichi Watanabe, David Stevens, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > Adding a list of common properties to the spec certainly makes sense,
> > > so everybody uses the same names.  Adding struct-ed properties for
> > > common use cases might be useful too.
> >
> > Why not define VIRTIO devices for wayland and friends?
>
> There is an out-of-tree implementation of that, so yes, that surely is
> an option.
>
> Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> pipe as control channel.  Pretty much the same for X11, except that
> shared buffers are optional because the X protocol can also squeeze all
> display updates through the stream pipe.
>
> So, if you want allow guests talk to the host display server you can run
> the stream pipe over vsock.  But there is nothing for the shared
> buffers ...
>
> We could replicate vsock functionality elsewhere.  I think that happened
> in the out-of-tree virtio-wayland implementation.  There also was some
> discussion about adding streams to virtio-gpu, slightly pimped up so you
> can easily pass around virtio-gpu resource references for buffer
> sharing.  But given that getting vsock right isn't exactly trivial
> (consider all the fairness issues when multiplexing multiple streams
> over a virtqueue for example) I don't think this is a good plan.

I also think vsock isn't the right fit.

Defining a virtio-wayland device makes sense to me: you get the guest
RAM access via virtqueues, plus the VIRTIO infrastructure (device IDs,
configuration space, feature bits, and existing reusable
kernel/userspace/QEMU code).

Stefan



* Re: guest / host buffer sharing ...
  2019-11-08  7:35             ` Stefan Hajnoczi
@ 2019-11-09  1:41               ` Stéphane Marchesin
  2019-11-09 10:12                 ` Stefan Hajnoczi
  0 siblings, 1 reply; 29+ messages in thread
From: Stéphane Marchesin @ 2019-11-09  1:41 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Linux Media Mailing List,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	Gerd Hoffmann, Daniel Vetter, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak, David Stevens

On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > Adding a list of common properties to the spec certainly makes sense,
> > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > common use cases might be useful too.
> > >
> > > Why not define VIRTIO devices for wayland and friends?
> >
> > There is an out-of-tree implementation of that, so yes, that surely is
> > an option.
> >
> > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > pipe as control channel.  Pretty much the same for X11, except that
> > shared buffers are optional because the X protocol can also squeeze all
> > display updates through the stream pipe.
> >
> > So, if you want allow guests talk to the host display server you can run
> > the stream pipe over vsock.  But there is nothing for the shared
> > buffers ...
> >
> > We could replicate vsock functionality elsewhere.  I think that happened
> > in the out-of-tree virtio-wayland implementation.  There also was some
> > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > can easily pass around virtio-gpu resource references for buffer
> > sharing.  But given that getting vsock right isn't exactly trivial
> > (consider all the fairness issues when multiplexing multiple streams
> > over a virtqueue for example) I don't think this is a good plan.
>
> I also think vsock isn't the right fit.
>

+1, we are using vsock right now and we have a few pains because of it.

I think the high-level problem is that, because it is a side channel,
we don't see everything that happens to the buffer in one place
(rendering + display), so we can't do things like reallocate the
buffer in a different format if needed, or do flushing etc. on that
buffer where needed.

Best,
Stéphane

>
> Defining a virtio-wayland device makes sense to me: you get the guest
> RAM access via virtqueues, plus the VIRTIO infrastructure (device IDs,
> configuration space, feature bits, and existing reusable
> kernel/userspace/QEMU code).
>
> Stefan



* Re: guest / host buffer sharing ...
  2019-11-09  1:41               ` Stéphane Marchesin
@ 2019-11-09 10:12                 ` Stefan Hajnoczi
  2019-11-09 11:16                   ` Tomasz Figa
  0 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2019-11-09 10:12 UTC (permalink / raw)
  To: Stéphane Marchesin
  Cc: geoff, virtio-dev, Alex Lau, Linux Media Mailing List,
	Alexandre Courbot, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	Gerd Hoffmann, Daniel Vetter, Dylan Reid, Gurchetan Singh,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak, David Stevens

On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
>
> On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > common use cases might be useful too.
> > > >
> > > > Why not define VIRTIO devices for wayland and friends?
> > >
> > > There is an out-of-tree implementation of that, so yes, that surely is
> > > an option.
> > >
> > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > pipe as control channel.  Pretty much the same for X11, except that
> > > shared buffers are optional because the X protocol can also squeeze all
> > > display updates through the stream pipe.
> > >
> > > So, if you want allow guests talk to the host display server you can run
> > > the stream pipe over vsock.  But there is nothing for the shared
> > > buffers ...
> > >
> > > We could replicate vsock functionality elsewhere.  I think that happened
> > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > can easily pass around virtio-gpu resource references for buffer
> > > sharing.  But given that getting vsock right isn't exactly trivial
> > > (consider all the fairness issues when multiplexing multiple streams
> > > over a virtqueue for example) I don't think this is a good plan.
> >
> > I also think vsock isn't the right fit.
> >
>
> +1 we are using vsock right now and we have a few pains because of it.
>
> I think the high-level problem is that because it is a side channel,
> we don't see everything that happens to the buffer in one place
> (rendering + display) and we can't do things like reallocate the
> format accordingly if needed, or we can't do flushing etc. on that
> buffer where needed.

Do you think a VIRTIO device designed for your use case is an
appropriate solution?

I have been arguing that these use cases should be addressed with
dedicated VIRTIO devices, but I don't understand the use cases of
everyone on the CC list, so maybe I'm missing something :).  If there
are reasons why having a VIRTIO device for your use case does not make
sense, then it would be good to discuss them.  Blockers like "VIRTIO is
too heavyweight/complex for us because ...", "Our application can't
make use of VIRTIO devices because ...", etc. would be important to
hear.

Stefan



* Re: guest / host buffer sharing ...
  2019-11-09 10:12                 ` Stefan Hajnoczi
@ 2019-11-09 11:16                   ` Tomasz Figa
  2019-11-09 12:08                     ` Stefan Hajnoczi
  0 siblings, 1 reply; 29+ messages in thread
From: Tomasz Figa @ 2019-11-09 11:16 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Gurchetan Singh, Keiichi Watanabe, Gerd Hoffmann, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Linux Media Mailing List,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak, David Stevens

On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> >
> > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > >
> > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > common use cases might be useful too.
> > > > >
> > > > > Why not define VIRTIO devices for wayland and friends?
> > > >
> > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > an option.
> > > >
> > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > shared buffers are optional because the X protocol can also squeeze all
> > > > display updates through the stream pipe.
> > > >
> > > > So, if you want allow guests talk to the host display server you can run
> > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > buffers ...
> > > >
> > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > can easily pass around virtio-gpu resource references for buffer
> > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > (consider all the fairness issues when multiplexing multiple streams
> > > > over a virtqueue for example) I don't think this is a good plan.
> > >
> > > I also think vsock isn't the right fit.
> > >
> >
> > +1 we are using vsock right now and we have a few pains because of it.
> >
> > I think the high-level problem is that because it is a side channel,
> > we don't see everything that happens to the buffer in one place
> > (rendering + display) and we can't do things like reallocate the
> > format accordingly if needed, or we can't do flushing etc. on that
> > buffer where needed.
>
> Do you think a VIRTIO device designed for your use case is an
> appropriate solution?
>
> I have been arguing that these use cases should be addressed with
> dedicated VIRTIO devices, but I don't understand the use cases of
> everyone on the CC list so maybe I'm missing something :).  If there
> are reasons why having a VIRTIO device for your use case does not make
> sense then it would be good to discuss them.  Blockers like "VIRTIO is
> too heavyweight/complex for us because ...", "Our application can't
> make use of VIRTIO devices because ...", etc would be important to
> hear.

Do you have any idea on how to model Wayland as a VIRTIO device?

Stephane mentioned that we use vsock, but in fact we have our own
VIRTIO device, except that it's semantically almost the same as vsock,
with a difference being the ability to pass buffers and pipes across
the VM boundary.

Best regards,
Tomasz



* Re: guest / host buffer sharing ...
  2019-11-09 11:16                   ` Tomasz Figa
@ 2019-11-09 12:08                     ` Stefan Hajnoczi
  2019-11-09 15:12                       ` Tomasz Figa
  0 siblings, 1 reply; 29+ messages in thread
From: Stefan Hajnoczi @ 2019-11-09 12:08 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Gurchetan Singh, Keiichi Watanabe, Gerd Hoffmann, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Linux Media Mailing List,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak, David Stevens

On Sat, Nov 9, 2019 at 12:17 PM Tomasz Figa <tfiga@chromium.org> wrote:
> On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> > > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > > common use cases might be useful too.
> > > > > >
> > > > > > Why not define VIRTIO devices for wayland and friends?
> > > > >
> > > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > > an option.
> > > > >
> > > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > > shared buffers are optional because the X protocol can also squeeze all
> > > > > display updates through the stream pipe.
> > > > >
> > > > > So, if you want allow guests talk to the host display server you can run
> > > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > > buffers ...
> > > > >
> > > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > > can easily pass around virtio-gpu resource references for buffer
> > > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > > (consider all the fairness issues when multiplexing multiple streams
> > > > > over a virtqueue for example) I don't think this is a good plan.
> > > >
> > > > I also think vsock isn't the right fit.
> > > >
> > >
> > > +1 we are using vsock right now and we have a few pains because of it.
> > >
> > > I think the high-level problem is that because it is a side channel,
> > > we don't see everything that happens to the buffer in one place
> > > (rendering + display) and we can't do things like reallocate the
> > > format accordingly if needed, or we can't do flushing etc. on that
> > > buffer where needed.
> >
> > Do you think a VIRTIO device designed for your use case is an
> > appropriate solution?
> >
> > I have been arguing that these use cases should be addressed with
> > dedicated VIRTIO devices, but I don't understand the use cases of
> > everyone on the CC list so maybe I'm missing something :).  If there
> > are reasons why having a VIRTIO device for your use case does not make
> > sense then it would be good to discuss them.  Blockers like "VIRTIO is
> > too heavyweight/complex for us because ...", "Our application can't
> > make use of VIRTIO devices because ...", etc would be important to
> > hear.
>
> Do you have any idea on how to model Wayland as a VIRTIO device?
>
> Stephane mentioned that we use vsock, but in fact we have our own
> VIRTIO device, except that it's semantically almost the same as vsock,
> with a difference being the ability to pass buffers and pipes across
> the VM boundary.

I know neither Wayland nor your use case :).

But we can discuss the design of your VIRTIO device.  Please post a
link to the code.

Stefan



* Re: guest / host buffer sharing ...
  2019-11-09 12:08                     ` Stefan Hajnoczi
@ 2019-11-09 15:12                       ` Tomasz Figa
  0 siblings, 0 replies; 29+ messages in thread
From: Tomasz Figa @ 2019-11-09 15:12 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Alexandre Courbot, qemu-devel,
	Gurchetan Singh, Keiichi Watanabe, Gerd Hoffmann, Daniel Vetter,
	Stéphane Marchesin, Dylan Reid, Linux Media Mailing List,
	Hans Verkuil, Dmitry Morozov, Pawel Osciak, David Stevens

On Sat, Nov 9, 2019 at 9:08 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> On Sat, Nov 9, 2019 at 12:17 PM Tomasz Figa <tfiga@chromium.org> wrote:
> > On Sat, Nov 9, 2019 at 7:12 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > On Sat, Nov 9, 2019 at 2:41 AM Stéphane Marchesin <marcheu@chromium.org> wrote:
> > > > On Thu, Nov 7, 2019 at 11:35 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > > > On Fri, Nov 8, 2019 at 8:22 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > > > > > > > Adding a list of common properties to the spec certainly makes sense,
> > > > > > > > so everybody uses the same names.  Adding struct-ed properties for
> > > > > > > > common use cases might be useful too.
> > > > > > >
> > > > > > > Why not define VIRTIO devices for wayland and friends?
> > > > > >
> > > > > > There is an out-of-tree implementation of that, so yes, that surely is
> > > > > > an option.
> > > > > >
> > > > > > Wayland needs (a) shared buffers, mostly for gfx data, and (b) a stream
> > > > > > pipe as control channel.  Pretty much the same for X11, except that
> > > > > > shared buffers are optional because the X protocol can also squeeze all
> > > > > > display updates through the stream pipe.
> > > > > >
> > > > > > So, if you want allow guests talk to the host display server you can run
> > > > > > the stream pipe over vsock.  But there is nothing for the shared
> > > > > > buffers ...
> > > > > >
> > > > > > We could replicate vsock functionality elsewhere.  I think that happened
> > > > > > in the out-of-tree virtio-wayland implementation.  There also was some
> > > > > > discussion about adding streams to virtio-gpu, slightly pimped up so you
> > > > > > can easily pass around virtio-gpu resource references for buffer
> > > > > > sharing.  But given that getting vsock right isn't exactly trivial
> > > > > > (consider all the fairness issues when multiplexing multiple streams
> > > > > > over a virtqueue for example) I don't think this is a good plan.
> > > > >
> > > > > I also think vsock isn't the right fit.
> > > > >
> > > >
> > > > +1 we are using vsock right now and we have a few pains because of it.
> > > >
> > > > I think the high-level problem is that because it is a side channel,
> > > > we don't see everything that happens to the buffer in one place
> > > > (rendering + display) and we can't do things like reallocate the
> > > > format accordingly if needed, or we can't do flushing etc. on that
> > > > buffer where needed.
> > >
> > > Do you think a VIRTIO device designed for your use case is an
> > > appropriate solution?
> > >
> > > I have been arguing that these use cases should be addressed with
> > > dedicated VIRTIO devices, but I don't understand the use cases of
> > > everyone on the CC list so maybe I'm missing something :).  If there
> > > are reasons why having a VIRTIO device for your use case does not make
> > > sense then it would be good to discuss them.  Blockers like "VIRTIO is
> > > too heavyweight/complex for us because ...", "Our application can't
> > > make use of VIRTIO devices because ...", etc would be important to
> > > hear.
> >
> > Do you have any idea on how to model Wayland as a VIRTIO device?
> >
> > Stephane mentioned that we use vsock, but in fact we have our own
> > VIRTIO device, except that it's semantically almost the same as vsock,
> > with a difference being the ability to pass buffers and pipes across
> > the VM boundary.
>
> I know neither Wayland nor your use case :).
>
> But we can discuss the design of your VIRTIO device.  Please post a
> link to the code.

The guest-side driver:
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/drivers/virtio/virtio_wl.c

Protocol definitions:
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-4.19/include/uapi/linux/virtio_wl.h

crosvm device implementation:
https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/master/devices/src/virtio/wl.rs

Best regards,
Tomasz



* Re: guest / host buffer sharing ...
  2019-11-06  8:43 ` Stefan Hajnoczi
  2019-11-06  9:51   ` Gerd Hoffmann
@ 2019-11-11  3:04   ` David Stevens
  2019-11-11 15:36     ` [virtio-dev] " Liam Girdwood
  1 sibling, 1 reply; 29+ messages in thread
From: David Stevens @ 2019-11-11  3:04 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, Gerd Hoffmann,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

> My question would be "what is the actual problem you are trying to
> solve?".

One problem that needs to be solved is sharing buffers between
devices. With the out-of-tree Wayland device, we've been using the
virtio resource id to share virtio-gpu buffers. However, that isn't
necessarily the right approach, especially once more devices are
allocating and sharing buffers. Specifically, this issue came up in
the recent RFC about adding a virtio video decoder device.

Having a centralized buffer allocator device is one way to deal with
sharing buffers, since it gives a definitive buffer identifier that
can be used by all drivers/devices to refer to the buffer. That being
said, I think the device as proposed is insufficient, as such a
centralized buffer allocator should probably be responsible for
allocating all shared buffers, not just linear guest ram buffers.
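
As a purely illustrative sketch, such a definitive identifier could
boil down to something like this (names invented here, not taken from
any spec or proposal):

/* a cross-device buffer reference: enough for any virtio device to
 * tell the host which allocator-owned buffer is meant */
struct virtio_shared_buffer_ref {
        __le32 allocator_device;  /* which virtio device allocated it */
        __le32 buffer_id;         /* identifier local to that allocator */
};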

-David



* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-11  3:04   ` David Stevens
@ 2019-11-11 15:36     ` " Liam Girdwood
  2019-11-12  0:54       ` Gurchetan Singh
  0 siblings, 1 reply; 29+ messages in thread
From: Liam Girdwood @ 2019-11-11 15:36 UTC (permalink / raw)
  To: David Stevens, Stefan Hajnoczi
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	qemu-devel, Tomasz Figa, Keiichi Watanabe, Gerd Hoffmann,
	Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Gurchetan Singh, Dmitry Morozov, Pawel Osciak,
	Linux Media Mailing List

On Mon, 2019-11-11 at 12:04 +0900, David Stevens wrote:
> Having a centralized buffer allocator device is one way to deal with
> sharing buffers, since it gives a definitive buffer identifier that
> can be used by all drivers/devices to refer to the buffer. That being
> said, I think the device as proposed is insufficient, as such a
> centralized buffer allocator should probably be responsible for
> allocating all shared buffers, not just linear guest ram buffers.

This would work for audio. I need to be able to :-

1) Allocate buffers on guests that I can pass as a scatter-gather list
of physical pages to a DMA engine (via a privileged VM driver) for
audio data. Can be any memory as long as it's DMA-able.

2) Export hardware mailbox memory (in a real device PCI BAR) as RO to
each guest, to give guests low-latency information on each audio
stream. This is to support use cases like voice calls, gaming, system
notifications and general audio processing.

Liam




* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-11 15:36     ` [virtio-dev] " Liam Girdwood
@ 2019-11-12  0:54       ` Gurchetan Singh
  2019-11-12 13:56         ` Liam Girdwood
  0 siblings, 1 reply; 29+ messages in thread
From: Gurchetan Singh @ 2019-11-12  0:54 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	Stefan Hajnoczi, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Linux Media Mailing List, Dmitry Morozov, Pawel Osciak,
	Gerd Hoffmann

On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> Each buffer also has some properties to carry metadata, some fixed (id, size, application), but
> also allow free form (name = value, framebuffers would have
> width/height/stride/format for example).

Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:

https://patchwork.freedesktop.org/patch/310349/

For virtio-wayland + virtio-vdec, the problem is sharing -- not allocation.

As the buffer reaches a kernel boundary, its properties devolve into
[fd, size].  Userspace can typically handle sharing metadata.  The
issue is that the guest dma-buf fd doesn't mean anything on the host.

One scenario could be:

1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
allocating, we call uuidgen() and then pass that to the host via the
RESOURCE_CREATE hypercall.
2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
name will be "virtgpu-buffer-${UUID}").
3) When importing, virtio-{vdec, video} reads the dma-buf name in
userspace and converts the fd to a handle.  The name is sent to the
host via a hypercall, giving the host virtio-{vdec, video} device
enough information to identify the buffer.

This solution is entirely userspace -- we can probably come up with
something in kernel space [generate_random_uuid()] if need be.  We
only need two universal IDs: {device ID, buffer ID}.
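
Roughly, the naming part (step 2) could look like this in userspace.
This is a sketch only: it assumes libuuid for the UUID, whereas in the
flow above the UUID would come from the allocation in step 1 rather
than being generated here:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>
#include <uuid/uuid.h>          /* libuuid, link with -luuid */

/* tag an exported dma-buf so importers can identify it by name */
static int tag_dmabuf(int dmabuf_fd)
{
        uuid_t uuid;
        char uuid_str[37];
        char name[DMA_BUF_NAME_LEN];

        uuid_generate(uuid);
        uuid_unparse_lower(uuid, uuid_str);

        /* the kernel caps the name at DMA_BUF_NAME_LEN bytes, so the
         * UUID string gets truncated here; a shorter unique tag may be
         * preferable in practice */
        snprintf(name, sizeof(name), "virtgpu-buffer-%s", uuid_str);

        return ioctl(dmabuf_fd, DMA_BUF_SET_NAME, name);
}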

> On Wed, Nov 6, 2019 at 2:28 PM Geoffrey McRae <geoff@hostfission.com> wrote:
> The entire point of this for our purposes is due to the fact that we can
> not allocate the buffer, it's either provided by the GPU driver or
> DirectX. If virtio-gpu were to allocate the buffer we might as well
> forget
> all this and continue using the ivshmem device.

We have a similar problem with closed-source drivers.  As @lfy
mentioned, it's possible to map memory directly into virtio-gpu's PCI
BAR, and it's actually a planned feature.  Would that work for your
use case?



* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-12  0:54       ` Gurchetan Singh
@ 2019-11-12 13:56         ` Liam Girdwood
  2019-11-12 22:55           ` Gurchetan Singh
  0 siblings, 1 reply; 29+ messages in thread
From: Liam Girdwood @ 2019-11-12 13:56 UTC (permalink / raw)
  To: Gurchetan Singh
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	Stefan Hajnoczi, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Linux Media Mailing List, Dmitry Morozov, Pawel Osciak,
	Gerd Hoffmann

On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com>
> wrote:
> > Each buffer also has some properties to carry metadata, some fixed
> > (id, size, application), but
> > also allow free form (name = value, framebuffers would have
> > width/height/stride/format for example).
> 
> Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> 
> https://patchwork.freedesktop.org/patch/310349/
> 
> For virtio-wayland + virtio-vdec, the problem is sharing -- not
> allocation.
> 

Audio also needs to share buffers with firmware running on DSPs.

> As the buffer reaches a kernel boundary, it's properties devolve into
> [fd, size].  Userspace can typically handle sharing metadata.  The
> issue is the guest dma-buf fd doesn't mean anything on the host.
> 
> One scenario could be:
> 
> 1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
> allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
> hypercall to the host.
> 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
> name will be "virtgpu-buffer-${UUID}").
> 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> userspace, and calls fd to handle.  The name is sent to the host via
> a
> hypercall, giving host virtio-{vdec, video} enough information to
> identify the buffer.
> 
> This solution is entirely userspace -- we can probably come up with
> something in kernel space [generate_random_uuid()] if need be.  We
> only need two universal IDs: {device ID, buffer ID}.
> 

I need something where I can take a guest buffer and then convert it to
a physical scatter-gather page list. I can then either pass the SG page
list to the DSP firmware (for DMAC IP programming) or have the host
driver program the DMAC directly using the page list (who programs the
DMAC depends on the DSP architecture).

The DSP FW has no access to userspace, so we would need some additional
API on top of DMA_BUF_SET_NAME etc. to get at the physical hardware
pages?

Liam





* Re: [virtio-dev] Re: guest / host buffer sharing ...
  2019-11-12 13:56         ` Liam Girdwood
@ 2019-11-12 22:55           ` Gurchetan Singh
  0 siblings, 0 replies; 29+ messages in thread
From: Gurchetan Singh @ 2019-11-12 22:55 UTC (permalink / raw)
  To: Liam Girdwood
  Cc: geoff, virtio-dev, Alex Lau, Daniel Vetter, Alexandre Courbot,
	Stefan Hajnoczi, qemu-devel, Tomasz Figa, Keiichi Watanabe,
	David Stevens, Hans Verkuil, Stéphane Marchesin, Dylan Reid,
	Linux Media Mailing List, Dmitry Morozov, Pawel Osciak,
	Gerd Hoffmann

On Tue, Nov 12, 2019 at 5:56 AM Liam Girdwood
<liam.r.girdwood@linux.intel.com> wrote:
>
> On Mon, 2019-11-11 at 16:54 -0800, Gurchetan Singh wrote:
> > On Tue, Nov 5, 2019 at 2:55 AM Gerd Hoffmann <kraxel@redhat.com>
> > wrote:
> > > Each buffer also has some properties to carry metadata, some fixed
> > > (id, size, application), but
> > > also allow free form (name = value, framebuffers would have
> > > width/height/stride/format for example).
> >
> > Sounds a lot like the recently added DMA_BUF_SET_NAME ioctls:
> >
> > https://patchwork.freedesktop.org/patch/310349/
> >
> > For virtio-wayland + virtio-vdec, the problem is sharing -- not
> > allocation.
> >
>
> Audio also needs to share buffers with firmware running on DSPs.
>
> > As the buffer reaches a kernel boundary, it's properties devolve into
> > [fd, size].  Userspace can typically handle sharing metadata.  The
> > issue is the guest dma-buf fd doesn't mean anything on the host.
> >
> > One scenario could be:
> >
> > 1) Guest userspace (say, gralloc) allocates using virtio-gpu.  When
> > allocating, we call uuidgen() and then pass that via RESOURCE_CREATE
> > hypercall to the host.
> > 2) When exporting the dma-buf, we call DMA_BUF_SET_NAME (the buffer
> > name will be "virtgpu-buffer-${UUID}").
> > 3) When importing, virtio-{vdec, video} reads the dma-buf name in
> > userspace, and calls fd to handle.  The name is sent to the host via
> > a
> > hypercall, giving host virtio-{vdec, video} enough information to
> > identify the buffer.
> >
> > This solution is entirely userspace -- we can probably come up with
> > something in kernel space [generate_random_uuid()] if need be.  We
> > only need two universal IDs: {device ID, buffer ID}.
> >
>
> I need something where I can take a guest buffer and then convert it to
> physical scatter gather page list. I can then either pass the SG page
> list to the DSP firmware (for DMAC IP programming) or have the host
> driver program the DMAC directly using the page list (who programs DMAC
> depends on DSP architecture).

So you need the HW address space from a guest allocation?  Would your
allocation hypercalls use something like virtio_gpu_mem_entry
(virtio_gpu.h) and the virtio_video_mem_entry from the draft video
spec?

struct virtio_gpu_mem_entry {
        __le64 addr;
        __le32 length;
        __le32 padding;
};

/* VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING */
struct virtio_gpu_resource_attach_backing {
        struct virtio_gpu_ctrl_hdr hdr;
        __le32 resource_id;
        __le32 nr_entries;
        /* followed by nr_entries * struct virtio_gpu_mem_entry */
};

struct virtio_video_mem_entry {
    __le64 addr;
    __le32 length;
    __u8 padding[4];
};

struct virtio_video_resource_attach_backing {
    struct virtio_video_ctrl_hdr hdr;
    __le32 resource_id;
    __le32 nr_entries;
};

>
> DSP FW has no access to userspace so we would need some additional API
> on top of DMA_BUF_SET_NAME etc to get physical hardware pages ?

The dma-buf API can currently share guest memory sg-lists.
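
Roughly, the in-kernel importer path to get from a dma-buf to a
DMA-mapped sg-list looks like the sketch below (standard dma-buf
importer API; error handling trimmed, and "dev" is an assumption for
whichever device needs to program the DMAC):

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* sketch: import a dma-buf fd and walk its DMA-mapped scatter list */
static int import_and_walk(int fd, struct device *dev)
{
        struct dma_buf *dmabuf = dma_buf_get(fd);
        struct dma_buf_attachment *att = dma_buf_attach(dmabuf, dev);
        struct sg_table *sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
        struct scatterlist *sg;
        int i;

        for_each_sg(sgt->sgl, sg, sgt->nents, i) {
                dma_addr_t addr = sg_dma_address(sg);
                unsigned int len = sg_dma_len(sg);
                /* hand addr/len to the DMAC or firmware here */
                (void)addr;
                (void)len;
        }

        dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL);
        dma_buf_detach(dmabuf, att);
        dma_buf_put(dmabuf);
        return 0;
}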

>
> Liam
>
>
>



end of thread

Thread overview: 29+ messages
2019-11-05 10:54 guest / host buffer sharing Gerd Hoffmann
2019-11-05 11:35 ` Geoffrey McRae
2019-11-06  6:24   ` Gerd Hoffmann
2019-11-06  8:36 ` David Stevens
2019-11-06 12:41   ` Gerd Hoffmann
2019-11-06 22:28     ` Geoffrey McRae
2019-11-07  6:48       ` Gerd Hoffmann
2019-11-06  8:43 ` Stefan Hajnoczi
2019-11-06  9:51   ` Gerd Hoffmann
2019-11-06 10:10     ` [virtio-dev] " Dr. David Alan Gilbert
2019-11-07 11:11       ` Gerd Hoffmann
2019-11-07 11:16         ` Dr. David Alan Gilbert
2019-11-08  6:45           ` Gerd Hoffmann
2019-11-06 11:46     ` Stefan Hajnoczi
2019-11-06 12:50       ` Gerd Hoffmann
2019-11-07 12:10         ` Stefan Hajnoczi
2019-11-07 15:10           ` Frank Yang
2019-11-08  7:22           ` Gerd Hoffmann
2019-11-08  7:35             ` Stefan Hajnoczi
2019-11-09  1:41               ` Stéphane Marchesin
2019-11-09 10:12                 ` Stefan Hajnoczi
2019-11-09 11:16                   ` Tomasz Figa
2019-11-09 12:08                     ` Stefan Hajnoczi
2019-11-09 15:12                       ` Tomasz Figa
2019-11-11  3:04   ` David Stevens
2019-11-11 15:36     ` [virtio-dev] " Liam Girdwood
2019-11-12  0:54       ` Gurchetan Singh
2019-11-12 13:56         ` Liam Girdwood
2019-11-12 22:55           ` Gurchetan Singh
