* passing FDs across domains
@ 2017-11-13 14:18 Tomeu Vizoso
  2017-11-14  8:02 ` Gerd Hoffmann
  0 siblings, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-13 14:18 UTC (permalink / raw)
  To: kvm; +Cc: kraxel, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier

Hi,

I'm looking at what would be the best way for Wayland clients running in 
a VM to communicate with a Wayland compositor in the host.

The base Wayland protocol is fairly self-contained, with little or no 
references to objects outside the client and the server, so it lends 
itself very well to virtualization.

But there's a problem when passing references to pixel buffers around 
with FD passing, as SCM_RIGHTS isn't currently implemented in AF_VSOCK.

The Wayland project is willing to consider adding AF_VSOCK support to 
the libwayland libraries alongside existing support for AF_UNIX, both on 
client and server.

Any opinions on whether adding SCM_RIGHTS support to AF_VSOCK is a good 
idea? Or other options for letting processes in the host access shmem 
buffers allocated within the guest?

Regarding the mechanics of mapping guest buffers in the host, I was 
hoping the approach described below would work:

http://www.fp7-save.eu/papers/SCALCOM2016.pdf

Thanks,

Tomeu


* Re: passing FDs across domains
  2017-11-13 14:18 passing FDs across domains Tomeu Vizoso
@ 2017-11-14  8:02 ` Gerd Hoffmann
  2017-11-14  8:08   ` Tomeu Vizoso
  0 siblings, 1 reply; 22+ messages in thread
From: Gerd Hoffmann @ 2017-11-14  8:02 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier

On Mon, Nov 13, 2017 at 03:18:46PM +0100, Tomeu Vizoso wrote:
> Hi,
> 
> I'm looking at what would be the best way for Wayland clients running in a
> VM to communicate with a Wayland compositor in the host.
> 
> The base Wayland protocol is fairly self-contained, with little or no
> references to objects outside the client and the server, so it lends itself
> very well to virtualization.
> 
> But there's a problem when passing references to pixel buffers around with
> FD passing, as SCM_RIGHTS isn't currently implemented in AF_VSOCK.
> 
> The Wayland project is willing to consider adding AF_VSOCK support to the
> libwayland libraries alongside existing support for AF_UNIX, both on client
> and server.
> 
> Any opinions on whether adding SCM_RIGHTS support to AF_VSOCK is a good
> idea?

Not going to work.  A file handle is a reference to a kernel object
which can be pretty much anything (file, socket, timer, memory, ...) and
you can't pass that across machine borders.  It's not working for
AF_INET for the same reason.

> Or other options for letting processes in the host access shmem
> buffers allocated within the guest?
> 
> Regarding the mechanics of mapping guest buffers in the host, I was hoping
> the approach described below would work:
> 
> http://www.fp7-save.eu/papers/SCALCOM2016.pdf

Doesn't look that useful to me on a quick glance.

Let's step back and look at the problem you are trying to solve.

It seems you want guest wayland applications to appear seamless on the host
wayland server, correct?

What does the wayland rendering workflow look like?  As far as I know the
wayland protocol doesn't include any rendering.  Rendering happens
client side, into some buffer (one per window), which is then passed to
the server for display compositing.  Correct?  So you basically want to
pass that buffer from guest to host?

What kind of shared memory is used by wayland?
sysv shm?  gbm buffers / dmabufs?

cheers,
  Gerd


* Re: passing FDs across domains
  2017-11-14  8:02 ` Gerd Hoffmann
@ 2017-11-14  8:08   ` Tomeu Vizoso
  2017-11-14  9:33     ` Gerd Hoffmann
  0 siblings, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-14  8:08 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier

On 11/14/2017 09:02 AM, Gerd Hoffmann wrote:
> On Mon, Nov 13, 2017 at 03:18:46PM +0100, Tomeu Vizoso wrote:
>> Hi,
>>
>> I'm looking at what would be the best way for Wayland clients running in a
>> VM to communicate with a Wayland compositor in the host.
>>
>> The base Wayland protocol is fairly self-contained, with little or no
>> references to objects outside the client and the server, so it lends itself
>> very well to virtualization.
>>
>> But there's a problem when passing references to pixel buffers around with
>> FD passing, as SCM_RIGHTS isn't currently implemented in AF_VSOCK.
>>
>> The Wayland project is willing to consider adding AF_VSOCK support to the
>> libwayland libraries alongside existing support for AF_UNIX, both on client
>> and server.
>>
>> Any opinions on whether adding SCM_RIGHTS support to AF_VSOCK is a good
>> idea?
> 
> Not going to work.  A file handle is a reference to a kernel object
> which can be pretty much anything (file, socket, timer, memory, ...) and
> you can't pass that across machine borders.  It's not working for
> AF_INET for the same reason.

The idea was to only allow passing FDs that point to objects that can 
be shared across domains. So shared memory buffers in this case.
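
(To illustrate the kind of restriction I have in mind: on the guest side,
the vsock SCM_RIGHTS path could reject anything that isn't shmem-backed
or, later, a dma-buf. A rough, untested sketch against in-kernel APIs; the
function name is made up:)

    #include <linux/dma-buf.h>
    #include <linux/err.h>
    #include <linux/file.h>
    #include <linux/fs.h>
    #include <linux/shmem_fs.h>

    /* Accept an fd only if it refers to something we know how to share
     * with the host: a shmem-backed file (memfd_create()/shm_open()) or
     * a dma-buf. */
    static bool vsock_fd_is_shareable(int fd)
    {
            struct fd f = fdget(fd);
            struct dma_buf *dmabuf;
            bool shmem;

            if (!f.file)
                    return false;
            shmem = shmem_mapping(file_inode(f.file)->i_mapping);
            fdput(f);
            if (shmem)
                    return true;

            dmabuf = dma_buf_get(fd);       /* fails for anything else */
            if (IS_ERR(dmabuf))
                    return false;
            dma_buf_put(dmabuf);
            return true;
    }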

>> Or other options for letting processes in the host access shmem
>> buffers allocated within the guest?
>>
>> Regarding the mechanics of mapping guest buffers in the host, I was hoping
>> the approach described below would work:
>>
>> http://www.fp7-save.eu/papers/SCALCOM2016.pdf
> 
> Doesn't look that useful to me on a quick glance.
> 
> Let's step back and look at the problem you are trying to solve.
> 
> It seems you want guest wayland applications to appear seamless on the host
> wayland server, correct?

Correct.

> What does the wayland rendering workflow look like?  As far as I know the
> wayland protocol doesn't include any rendering.  Rendering happens
> client side, into some buffer (one per window), which is then passed to
> the server for display compositing.  Correct?  So you basically want to
> pass that buffer from guest to host?

Correct.

> What kind of shared memory is used by wayland?
> sysv shm?  gbm buffers / dmabufs?

Typically, shared memory for CPU-rendered content, and dmabufs for 
GPU-rendered content.

Thanks,

Tomeu


* Re: passing FDs across domains
  2017-11-14  8:08   ` Tomeu Vizoso
@ 2017-11-14  9:33     ` Gerd Hoffmann
  2017-11-14 14:01       ` Tomeu Vizoso
  0 siblings, 1 reply; 22+ messages in thread
From: Gerd Hoffmann @ 2017-11-14  9:33 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

  Hi,

[ Cc'ing David Airlie and Marc-André Lureau ]

> > Let's step back and look at the problem you are trying to solve.
> > 
> > It seems you want guest wayland applications to appear seamless on the host
> > wayland server, correct?
> 
> Correct.
> 
> > What does the wayland rendering workflow look like?  As far as I know the
> > wayland protocol doesn't include any rendering.  Rendering happens
> > client side, into some buffer (one per window), which is then passed to
> > the server for display compositing.  Correct?  So you basically want to
> > pass that buffer from guest to host?
> 
> Correct.
> 
> > What kind of shared memory is used by wayland?
> > sysv shm?  gbm buffers / dmabufs?
> 
> Typically, shared memory for CPU-rendered content, and dmabufs for
> GPU-rendered content.

Ok.  I guess solving this for virtio-gpu (with virgl enabled) is easiest
then.  Due to opengl rendering being offloaded to the host gpu the
guest window content already is in a host gpu buffer.

So we "only" need to export that buffer as dmabuf and pass it to the
wayland server.  That still doesn't look trivial though.  qemu must
manage the dmabufs, so it must be involved somehow; a direct guest
client -> host server wayland connection (alone) will not work.

I think we need either a wayland proxy in qemu which rewrites the buffer
references, or the host wayland server must talk to both guest client
(vsock could work for that) and qemu (for dmabuf management).  Or some
mixed model, such as a separate wayland proxy server talking to qemu for
dmabuf management.

I suspect in any case we need a wayland protocol extension for a new
buffer type as passing buffer references from guest to host can't use
filehandles.

cheers,
  Gerd


* Re: passing FDs across domains
  2017-11-14  9:33     ` Gerd Hoffmann
@ 2017-11-14 14:01       ` Tomeu Vizoso
  2017-11-14 14:19         ` Gerd Hoffmann
  0 siblings, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-14 14:01 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On 11/14/2017 10:33 AM, Gerd Hoffmann wrote:
>    Hi,
> 
> [ Cc'ing David Airlie and Marc-André Lureau ]
> 
>>> Let's step back and look at the problem you are trying to solve.
>>>
>>> It seems you want guest wayland applications to appear seamless on the host
>>> wayland server, correct?
>>
>> Correct.
>>
>>> What does the wayland rendering workflow look like?  As far as I know the
>>> wayland protocol doesn't include any rendering.  Rendering happens
>>> client side, into some buffer (one per window), which is then passed to
>>> the server for display compositing.  Correct?  So you basically want to
>>> pass that buffer from guest to host?
>>
>> Correct.
>>
>>> What kind of shared memory is used by wayland?
>>> sysv shm?  gbm buffers / dmabufs?
>>
>> Typically, shared memory for CPU-rendered content, and dmabufs for
>> GPU-rendered content.
> 
> Ok.  I guess solving this for virtio-gpu (with virgl enabled) is easiest
> then.  Due to opengl rendering being offloaded to the host gpu the
> guest window content already is in a host gpu buffer.

Yes, besides, we already have virtio-gpu in place which can be improved 
as needed.

I'm more worried about CPU-rendered buffers, as the client is just 
putting in the socket the output of shm_open or similar. There I don't 
see any easy solution which is why I tried to get more "creative".

Thanks,

Tomeu

> So we "only" need to export that buffer as dmabuf and pass it to the
> wayland server.  That still doesn't look trivial though.  qemu must
> manage the dmabufs, so it must be involved somehow; a direct guest
> client -> host server wayland connection (alone) will not work.
> 
> I think we need either a wayland proxy in qemu which rewrites the buffer
> references, or the host wayland server must talk to both guest client
> (vsock could work for that) and qemu (for dmabuf management).  Or some
> mixed model, such as a separate wayland proxy server talking to qemu for
> dmabuf management.
> 
> I suspect in any case we need a wayland protocol extension for a new
> buffer type as passing buffer references from guest to host can't use
> filehandles.
> 
> cheers,
>    Gerd
> 


* Re: passing FDs across domains
  2017-11-14 14:01       ` Tomeu Vizoso
@ 2017-11-14 14:19         ` Gerd Hoffmann
  2017-11-15  8:21           ` Tomeu Vizoso
  2017-11-16  9:57           ` Tomeu Vizoso
  0 siblings, 2 replies; 22+ messages in thread
From: Gerd Hoffmann @ 2017-11-14 14:19 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

> > > > What kind of shared memory is used by wayland?
> > > > sysv shm?  gbm buffers / dmabufs?
> > > 
> > > Typically, shared memory for CPU-rendered content, and dmabufs for
> > > GPU-rendered content.
> > 
> > Ok.  I guess solving this for virtio-gpu (with virgl enabled) is easiest
> > then.  Due to opengl rendering being offloaded to the host gpu the
> > guest window content already is in a host gpu buffer.
> 
> Yes, besides, we already have virtio-gpu in place which can be improved as
> needed.
> 
> I'm more worried about CPU-rendered buffers, as the client is just putting
> in the socket the output of shm_open or similar. There I don't see any easy
> solution which is why I tried to get more "creative".

Will clients actually use cpu rendering if opengl is available?
Can clients cpu-render into dumb drm buffers?

cheers,
  Gerd


* Re: passing FDs across domains
  2017-11-14 14:19         ` Gerd Hoffmann
@ 2017-11-15  8:21           ` Tomeu Vizoso
  2017-11-15  9:47             ` Tomeu Vizoso
  2017-11-16  9:57           ` Tomeu Vizoso
  1 sibling, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-15  8:21 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On 11/14/2017 03:19 PM, Gerd Hoffmann wrote:
>>>>> What kind of shared memory is used by wayland?
>>>>> sysv shm?  gbm buffers / dmabufs?
>>>>
>>>> Typically, shared memory for CPU-rendered content, and dmabufs for
>>>> GPU-rendered content.
>>>
>>> Ok.  I guess solving this for virtio-gpu (with virgl enabled) is easiest
>>> then.  Due to opengl rendering being offloaded to the host gpu the
>>> guest window content already is in a host gpu buffer.
>>
>> Yes, besides, we already have virtio-gpu in place which can be improved as
>> needed.
>>
>> I'm more worried about CPU-rendered buffers, as the client is just putting
>> in the socket the output of shm_open or similar. There I don't see any easy
>> solution which is why I tried to get more "creative".
> 
> Will clients actually use cpu rendering if opengl is available?

Don't know of any Wayland clients that would behave in that way.

> Can clients cpu-render into dumb drm buffers?

They could, but dumb buffers aren't generally shareable, besides being 
intended to be just dumb. We could probably use VGEM for that though.

But the problem is that all Wayland clients are currently expected to do 
their CPU rendering to a buffer that was created in /dev/shm/ or with 
the memfd_create syscall.
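
(For reference, the typical client-side flow looks roughly like the sketch
below; error handling omitted, and memfd_create() may need a syscall()
wrapper on older libcs:)

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    /* Typical wl_shm path: the fd handed to the compositor comes straight
     * from memfd_create()/shm_open(), it is not a DRM object. */
    static struct wl_buffer *create_shm_buffer(struct wl_shm *shm,
                                               int width, int height)
    {
            int stride = width * 4;
            int size = stride * height;
            int fd = memfd_create("wl-shm", MFD_CLOEXEC);
            struct wl_shm_pool *pool;
            struct wl_buffer *buffer;
            void *pixels;

            ftruncate(fd, size);
            pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            /* ... CPU rendering into 'pixels' happens here ... */

            pool = wl_shm_create_pool(shm, fd, size);  /* fd crosses the socket */
            buffer = wl_shm_pool_create_buffer(pool, 0, width, height, stride,
                                               WL_SHM_FORMAT_ARGB8888);
            wl_shm_pool_destroy(pool);
            close(fd);
            return buffer;
    }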

I know we cannot support all the kinds of objects that can be referenced 
by FDs, but I think it can be said that all modern protocols that can 
contain references to big areas of memory make use of passing FDs to 
shared memory. Depending on the situation that memory would have been 
allocated in the host or in the client.

Thanks,

Tomeu


* Re: passing FDs across domains
  2017-11-15  8:21           ` Tomeu Vizoso
@ 2017-11-15  9:47             ` Tomeu Vizoso
  0 siblings, 0 replies; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-15  9:47 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On 11/15/2017 09:21 AM, Tomeu Vizoso wrote:
> On 11/14/2017 03:19 PM, Gerd Hoffmann wrote:
>>>>>> What kind of shared memory is used by wayland?
>>>>>> sysv shm?  gbm buffers / dmabufs?
>>>>>
>>>>> Typically, shared memory for CPU-rendered content, and dmabufs for
>>>>> GPU-rendered content.
>>>>
>>>> Ok.  I guess solving this for virtio-gpu (with virgl enabled) is 
>>>> easiest
>>>> then.  Due to opengl rendering being offloaded to the host gpu the
>>>> guest window content already is in a host gpu buffer.
>>>
>>> Yes, besides, we already have virtio-gpu in place which can be 
>>> improved as
>>> needed.
>>>
>>> I'm more worried about CPU-rendered buffers, as the client is just 
>>> putting
>>> in the socket the output of shm_open or similar. There I don't see 
>>> any easy
>>> solution which is why I tried to get more "creative".
>>
>> Will clients actually use cpu rendering if opengl is available?
> 
> Don't know of any Wayland clients that would behave in that way.

Sorry, I'm afraid that this actually says the opposite of what I 
intended to say.

I meant to say that we cannot rely on clients choosing GL rendering over 
CPU rendering when that's available. In most cases, applications will 
either do their rendering on the CPU, or on the GPU.

Regards,

Tomeu


>> Can clients cpu-render into dumb drm buffers?
> 
> They could, but dumb buffers aren't generally shareable, besides being 
> intended to be just dumb. We could probably use VGEM for that though.
> 
> But the problem is that all Wayland clients are currently expected to do 
> their CPU rendering to a buffer that was created in /dev/shm/ or with 
> the memfd_create syscall.
> 
> I know we cannot support all the kinds of objects that can be referenced 
> by FDs, but I think it can be said that all modern protocols that can 
> contain references to big areas of memory make use of passing FDs to 
> shared memory. Depending on the situation that memory would have been 
> allocated in the host or in the client.
> 
> Thanks,
> 
> Tomeu
> 
> 
> 


* Re: passing FDs across domains
  2017-11-14 14:19         ` Gerd Hoffmann
  2017-11-15  8:21           ` Tomeu Vizoso
@ 2017-11-16  9:57           ` Tomeu Vizoso
  2017-11-16 10:49             ` Gerd Hoffmann
  1 sibling, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-16  9:57 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On 11/14/2017 03:19 PM, Gerd Hoffmann wrote:
>>>>> What kind of shared memory is used by wayland?
>>>>> sysv shm?  gbm buffers / dmabufs?
>>>>
>>>> Typically, shared memory for CPU-rendered content, and dmabufs for
>>>> GPU-rendered content.
>>>
>>> Ok.  I guess solving this for virtio-gpu (with virgl enabled) is easiest
>>> then.  Due to opengl rendering being offloaded to the host gpu the
>>> guest window content already is in a host gpu buffer.
>>
>> Yes, besides, we already have virtio-gpu in place which can be improved as
>> needed.
>>
>> I'm more worried about CPU-rendered buffers, as the client is just putting
>> in the socket the output of shm_open or similar. There I don't see any easy
>> solution which is why I tried to get more "creative".
> 
> Will clients actually use cpu rendering if opengl is available?
> Can clients cpu-render into dumb drm buffers?

For the sake of moving forward, let's assume for now that wl_shm clients 
will be rendering to buffers allocated by VGEM or virtio-gpu. How would 
the client in the guest be communicating that to the compositor in the host?

I thought of SCM_RIGHTS on AF_VSOCK, which would return BADF if a FD is 
passed that cannot be shared in the requested direction. Are there any 
better options?

Thanks,

Tomeu


* Re: passing FDs across domains
  2017-11-16  9:57           ` Tomeu Vizoso
@ 2017-11-16 10:49             ` Gerd Hoffmann
  2017-11-16 14:36               ` Tomeu Vizoso
  0 siblings, 1 reply; 22+ messages in thread
From: Gerd Hoffmann @ 2017-11-16 10:49 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

  Hi,

> > > I'm more worried about CPU-rendered buffers, as the client is just putting
> > > in the socket the output of shm_open or similar. There I don't see any easy
> > > solution which is why I tried to get more "creative".
> > 
> > Will clients actually use cpu rendering if opengl is available?
> > Can clients cpu-render into dumb drm buffers?
> 
> For the sake of moving forward, let's assume for now that wl_shm clients
> will be rendering to buffers allocated by VGEM or virtio-gpu. How would the
> client in the guest be communicating that to the compositor in the host?

Export the drm buffer as dma-buf, then pass that.
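
Something along these lines on the guest side (sketch, assuming libdrm; the
gem handle would come from a VGEM or virtio-gpu dumb buffer):

    #include <stdint.h>
    #include <xf86drm.h>

    /* Turn a GEM handle into a dma-buf fd that can then be handed on. */
    static int export_gem_as_dmabuf(int drm_fd, uint32_t gem_handle)
    {
            int dmabuf_fd = -1;

            if (drmPrimeHandleToFD(drm_fd, gem_handle,
                                   DRM_CLOEXEC | DRM_RDWR, &dmabuf_fd))
                    return -1;
            return dmabuf_fd;
    }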

> I thought of SCM_RIGHTS on AF_VSOCK, which would return BADF if a FD is
> passed that cannot be shared in the requested direction. Are there any
> better options?

When limiting this SCM_RIGHTS support to dma-bufs, guest -> host, that
could actually work.  The nice thing about dma-bufs is that the size is
fixed and the pages are pinned.  Which should make it *a lot* simpler to
come up with a robust implementation compared to something using shm_open()
or memfd_create() filehandles.

So, guest vsock driver would import the dma-buf and send the scatter
list associated with the dma-buf over to the host.  host vsock driver
has the guest memory map (like all vhost drivers), so it should be able
to create a dma-buf on the host side and pass it on to the host process.
When the process releases the dma-buf notify the guest driver that it
can drop its dma-buf reference.

Looks doable, also generic enough that it could be useful for more than
just seamless wayland support.
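
Guest side, getting at the pages is the easy part; a rough (untested) sketch
against the in-kernel dma-buf API, with the actual vsock plumbing hand-waved:

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>

    /* Import a dma-buf fd and walk its pages; a real implementation would
     * translate these into guest-physical ranges and send them to the host,
     * keeping the attachment alive until the host releases the buffer. */
    static int vsock_send_dmabuf(struct device *dev, int fd)
    {
            struct dma_buf *dmabuf = dma_buf_get(fd);
            struct dma_buf_attachment *attach;
            struct sg_table *sgt;
            struct scatterlist *sg;
            int i;

            if (IS_ERR(dmabuf))
                    return PTR_ERR(dmabuf);

            attach = dma_buf_attach(dmabuf, dev);
            if (IS_ERR(attach)) {
                    dma_buf_put(dmabuf);
                    return PTR_ERR(attach);
            }

            sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
            if (IS_ERR(sgt)) {
                    dma_buf_detach(dmabuf, attach);
                    dma_buf_put(dmabuf);
                    return PTR_ERR(sgt);
            }

            for_each_sg(sgt->sgl, sg, sgt->nents, i) {
                    /* queue (sg_dma_address(sg), sg_dma_len(sg)) to the host */
            }
            return 0;
    }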

cheers,
  Gerd


* Re: passing FDs across domains
  2017-11-16 10:49             ` Gerd Hoffmann
@ 2017-11-16 14:36               ` Tomeu Vizoso
  2017-11-16 15:51                 ` Gerd Hoffmann
  0 siblings, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-16 14:36 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On 11/16/2017 11:49 AM, Gerd Hoffmann wrote:
>    Hi,
> 
>>>> I'm more worried about CPU-rendered buffers, as the client is just putting
>>>> in the socket the output of shm_open or similar. There I don't see any easy
>>>> solution which is why I tried to get more "creative".
>>>
>>> Will clients actually use cpu rendering if opengl is available?
>>> Can clients cpu-render into dumb drm buffers?
>>
>> For the sake of moving forward, let's assume for now that wl_shm clients
>> will be rendering to buffers allocated by VGEM or virtio-gpu. How would the
>> client in the guest be communicating that to the compositor in the host?
> 
> Export the drm buffer as dma-buf, then pass that.
> 
>> I thought of SCM_RIGHTS on AF_VSOCK, which would return BADF if a FD is
>> passed that cannot be shared in the requested direction. Are there any
>> better options?
> 
> When limiting this SCM_RIGHTS support to dma-bufs, guest -> host, that
> could actually work.

Would you go with SCM_RIGHTS as defined for AF_UNIX, or with a 
different, more specific name?

>  The nice thing about dma-bufs is that the size is
> fixed and the pages are pinned.  

Those pages are permanently pinned? Would have expected them to be 
pinned only while scanning out, and such.

> Which should make it *a lot* simpler to
> come up with a robust implementation compared to something using shm_open()
> or memfd_create() filehandles.

Guess some Wayland proxy in the guest will need to intercept SHM buffers 
and copy them to virtio-gpu buffers then.

Hopefully at a later point we can do without those copies somehow.

> So, guest vsock driver would import the dma-buf and send the scatter
> list associated with the dma-buf over to the host.  host vsock driver
> has the guest memory map (like all vhost drivers), so it should be able
> to create a dma-buf on the host side and pass it on to the host process.
> When the process releases the dma-buf notify the guest driver that it
> can drop its dma-buf reference.
> 
> Looks doable, also generic enough that it could be useful for more than
> just seamless wayland support.

Thanks,

Tomeu


* Re: passing FDs across domains
  2017-11-16 14:36               ` Tomeu Vizoso
@ 2017-11-16 15:51                 ` Gerd Hoffmann
  2017-11-22 10:54                   ` Stefan Hajnoczi
  0 siblings, 1 reply; 22+ messages in thread
From: Gerd Hoffmann @ 2017-11-16 15:51 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: kvm, Stefan Hajnoczi, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

> > > I thought of SCM_RIGHTS on AF_VSOCK, which would return BADF if a FD is
> > > passed that cannot be shared in the requested direction. Are there any
> > > better options?
> > 
> > When limiting this SCM_RIGHTS support to dma-bufs, guest -> host, that
> > could actually work.
> 
> Would you go with SCM_RIGHTS as defined for AF_UNIX, or with a different,
> more specific name?

Hmm, good question.  Maybe a separate name is less confusing.

> >  The nice thing about dma-bufs is that the size is
> > fixed and the pages are pinned.
> 
> Those pages are permanently pinned? Would have expected them to be pinned
> only while scanning out, and such.

Hmm, not fully sure, maybe only when someone holds a reference.
Didn't look at them too deeply from the kernel side.

cheers,
  Gerd


* Re: passing FDs across domains
  2017-11-16 15:51                 ` Gerd Hoffmann
@ 2017-11-22 10:54                   ` Stefan Hajnoczi
  2017-11-22 15:47                     ` Stefan Hajnoczi
       [not found]                     ` <bf6ce187-46b6-f01d-4eba-79b035c94426@collabora.com>
  0 siblings, 2 replies; 22+ messages in thread
From: Stefan Hajnoczi @ 2017-11-22 10:54 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, kvm, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On Thu, Nov 16, 2017 at 3:51 PM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> > > I thought of SCM_RIGHTS on AF_VSOCK, which would return BADF if a FD is
>> > > passed that cannot be shared in the requested direction. Are there any
>> > > better options?
>> >
>> > When limiting this SCM_RIGHTS support to dma-bufs, guest -> host, that
>> > could actually work.
>>
>> Would you go with SCM_RIGHTS as defined for AF_UNIX, or with a different,
>> more specific name?
>
> Hmm, good question.  Maybe a separate name is less confusing.
>
>> >  The nice thing about dma-bufs is that the size is
>> > fixed and the pages are pinned.
>>
>> Those pages are permanently pinned? Would have expected them to be pinned
>> only while scanning out, and such.
>
> Hmm, not fully sure, maybe only when someone holds a reference.
> Didn't look at them too deeply from the kernel side.

From an AF_VSOCK perspective there are two things that worry me:

1. Denial of Service.  A mechanism that involves pinning and relies on
guest cooperation needs to be careful to prevent buggy or malicious
guests from hogging resources that the host cannot reclaim.

Imagine a process on the host is about to access the shared memory and
the guest resets its virtio-vsock device or terminates.  What happens
now?  Does the host process get a SIGBUS upon memory access?  That
would be bad for the host process.  On the other hand, the host
process shouldn't be able to hang the guest either by keeping the
dma-buf alive.

2. dma-buf passing only works in the guest->host direction.  It
doesn't work host->guest or guest<->guest (if we decide to support it
in the future) because a guest cannot "see" memory ranges from the
host or other guests.  I don't like this asymmetry but I guess we
could live with it.

I wonder if it would be cleaner to extend virtio-gpu for this use case
instead of trying to pass buffers over AF_VSOCK.

Stefan


* Re: passing FDs across domains
  2017-11-22 10:54                   ` Stefan Hajnoczi
@ 2017-11-22 15:47                     ` Stefan Hajnoczi
  2017-11-23  8:17                       ` Tomeu Vizoso
       [not found]                     ` <bf6ce187-46b6-f01d-4eba-79b035c94426@collabora.com>
  1 sibling, 1 reply; 22+ messages in thread
From: Stefan Hajnoczi @ 2017-11-22 15:47 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, kvm, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On Wed, Nov 22, 2017 at 10:54 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> I wonder if it would be cleaner to extend virtio-gpu for this use case
> instead of trying to pass buffers over AF_VSOCK.

By the way, what is the status of virtio-wayland and how does this
discussion relate to virtio-wayland?
https://chromium.googlesource.com/chromiumos/platform/crosvm/+/master/devices/src/virtio/wl.rs

Stefan


* Re: passing FDs across domains
  2017-11-22 15:47                     ` Stefan Hajnoczi
@ 2017-11-23  8:17                       ` Tomeu Vizoso
  2017-11-27 10:59                         ` Stefan Hajnoczi
  0 siblings, 1 reply; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-23  8:17 UTC (permalink / raw)
  To: Stefan Hajnoczi, Gerd Hoffmann
  Cc: kvm, Zach Reizner, Helen Mae Koike Fornazier, David Airlie,
	Marc-André Lureau

On 11/22/2017 04:47 PM, Stefan Hajnoczi wrote:
> On Wed, Nov 22, 2017 at 10:54 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> I wonder if it would be cleaner to extend virtio-gpu for this use case
>> instead of trying to pass buffers over AF_VSOCK.
> 
> By the way, what is the status of virtio-wayland 

I can say that by following the instructions in 
https://chromium.googlesource.com/chromiumos/platform/crosvm/, I was 
able to get a simple SHM client presented in the host (with gnome-shell).

But I had to change the SHM allocation to call VIRTWL_IOCTL_NEW_ALLOC 
instead of shm_open or its equivalent.

Zach (on CC) will be able to give a more informed answer.

> and how does this
> discussion relate to virtio-wayland?
> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/master/devices/src/virtio/wl.rs

I'm working on getting the equivalent functionality into mainline.

Thanks,

Tomeu


* Re: passing FDs across domains
  2017-11-23  8:17                       ` Tomeu Vizoso
@ 2017-11-27 10:59                         ` Stefan Hajnoczi
  2017-11-27 11:59                           ` cross-domain Wayland (Re: passing FDs across domains) Tomeu Vizoso
  0 siblings, 1 reply; 22+ messages in thread
From: Stefan Hajnoczi @ 2017-11-27 10:59 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Gerd Hoffmann, kvm, Zach Reizner, Helen Mae Koike Fornazier,
	David Airlie, Marc-André Lureau

On Thu, Nov 23, 2017 at 09:17:02AM +0100, Tomeu Vizoso wrote:
> On 11/22/2017 04:47 PM, Stefan Hajnoczi wrote:
> > On Wed, Nov 22, 2017 at 10:54 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> > > I wonder if it would be cleaner to extend virtio-gpu for this use case
> > > instead of trying to pass buffers over AF_VSOCK.
> > 
> > By the way, what is the status of virtio-wayland
> 
> I can say that by following the instructions in
> https://chromium.googlesource.com/chromiumos/platform/crosvm/, I was able to
> get a simple SHM client presented in the host (with gnome-shell).
> 
> But I had to change the SHM allocation to call VIRTWL_IOCTL_NEW_ALLOC
> instead of shm_open or its equivalent.
> 
> Zach (on CC) will be able to give a more informed answer.
> 
> > and how does this
> > discussion relate to virtio-wayland?
> > https://chromium.googlesource.com/chromiumos/platform/crosvm/+/master/devices/src/virtio/wl.rs
> 
> I'm working on getting the equivalent functionality into mainline.

Why not mainline virtio-wayland?

Stefan



* cross-domain Wayland (Re: passing FDs across domains)
  2017-11-27 10:59                         ` Stefan Hajnoczi
@ 2017-11-27 11:59                           ` Tomeu Vizoso
  0 siblings, 0 replies; 22+ messages in thread
From: Tomeu Vizoso @ 2017-11-27 11:59 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kvm, dri-devel, Gerd Hoffmann, Helen Mae Koike Fornazier, David Airlie

On 11/27/2017 11:59 AM, Stefan Hajnoczi wrote:
> On Thu, Nov 23, 2017 at 09:17:02AM +0100, Tomeu Vizoso wrote:
>> On 11/22/2017 04:47 PM, Stefan Hajnoczi wrote:
>>> On Wed, Nov 22, 2017 at 10:54 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>>> I wonder if it would be cleaner to extend virtio-gpu for this use case
>>>> instead of trying to pass buffers over AF_VSOCK.
>>>
>>> By the way, what is the status of virtio-wayland
>>
>> I can say that by following the instructions in
>> https://chromium.googlesource.com/chromiumos/platform/crosvm/, I was able to
>> get a simple SHM client presented in the host (with gnome-shell).
>>
>> But I had to change the SHM allocation to call VIRTWL_IOCTL_NEW_ALLOC
>> instead of shm_open or its equivalent.
>>
>> Zach (on CC) will be able to give a more informed answer.
>>
>>> and how does this
>>> discussion relate to virtio-wayland?
>>> https://chromium.googlesource.com/chromiumos/platform/crosvm/+/master/devices/src/virtio/wl.rs
>>
>> I'm working on getting the equivalent functionality into mainline.
> 
> Why not mainline virtio-wayland?

Because I can see no reason to have Wayland-specific code in the kernel. 
Everything that virtio-wayland does is needed by other presentation 
protocols such as X.

We could have virtio-graphics-presentation I guess, but virtio-gpu 
already deals with sharing graphics buffers between guest and host, so 
we would be duplicating concerns.

Regards,

Tomeu
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: passing FDs across domains
       [not found]                             ` <20190401161940.c32mxs7locfzlp6z@sirius.home.kraxel.org>
@ 2019-04-16  5:29                               ` Tomasz Figa
  2019-04-17  9:30                                 ` Gerd Hoffmann
  0 siblings, 1 reply; 22+ messages in thread
From: Tomasz Figa @ 2019-04-16  5:29 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Stefan Hajnoczi, kvm, Zach Reizner,
	Helen Mae Koike Fornazier, David Airlie, Marc-André Lureau,
	Pawel Osciak, Lepton Wu, Stéphane Marchesin, dgreid,
	suleiman, Keiichi Watanabe

Hi Gerd,

Sorry for the late reply. It was a crazy two weeks.

On Tue, Apr 2, 2019 at 1:19 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > > Camera was mentioned too.
> >
> > Right, forgot to mention it.
> >
> > Actually for cameras it gets complicated even if we put the buffer
> > sharing aside. The key point is that on modern systems, which need
> > more advanced camera capabilities than a simple UVC webcam, the camera
> > is in fact a whole subsystem of hardware components, i.e. sensors,
> > lenses, raw capture I/F and 1 or more ISPs. Currently the only
> > relatively successful way of standardizing the way to control those is
> > the Android Camera HALv3 API, which is a relatively complex userspace
> > interface. Getting feature parity, which is crucial for the use cases
> > Chrome OS is targeting, is going to require quite a sophisticated
> > interface between the host and guest.
>
> Sounds tricky indeed, especially the signal processor part.
> Any plans already how to tackle that?
>

There are two possible approaches here:

1) a platform specific one - in our case we already have a camera
service in the host that exposes a sort of IPC (Mojo), we already
have clients talking that IPC, and we're just thinking about moving
some of those clients into virtual machines. Since that IPC's usability
is limited to our host and our guests, it doesn't make much sense
to abstract it, which is why I mentioned the generic IPC pass-through before.

2) an abstract one - modelling the camera from the high level
perspective, but without exposing too much detail. Basically we would
want to expose all the functionality and not all the hardware
topology. I would need to think a bit more about this, but the general
idea would be to expose logical cameras that can provide a number of
video streams with certain parameters. That would generally match the
functionality of the Android Camera HALv3 API, which is currently the
most functional and standard camera API used in the consumer market,
but without imposing the API details on the virtio interface.

> > Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> > socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> > various platform handles (e.g. DMA-bufs). The clients exchange
> > DMA-bufs with the service.
>
> Only dma-bufs?
>

Mojo is just a framework that can serialize things and pass various
objects around. What is being passed depends on the particular
interface.

For the camera use case that would be DMA-bufs and fences.

We also have some more general use cases where we actually pass files,
sockets and other objects there. They can be easily handled with a
userspace proxy, though. Not very efficiently, but that's not a
requirement for our use cases.

> Handling dma-bufs looks doable without too much trouble to me.  guest ->
> host can pass a scatter list, host -> guest can map the buffer into
> guest address space using the new shared memory support which is planned
> to be added to virtio (for virtio-fs, and virtio-gpu will most likely
> use that too).
>

In some of our cases we would preallocate those buffers via
virtio-gpu, since they would be later used in the GPU or display
pipeline. In this case, sending a virtio-gpu handle sounds more
straightforward.

> > > >  - crypto hardware accelerators.
> > >
> > > Note: there is virtio-crypto.
> >
> > Thanks, that's a useful pointer.
> >
> > One more aspect is that the nature of some data may require that only
> > the host can access the decrypted data.
>
> What is the use case?  Playback drm-encrypted media, where the host gpu
> handles decryption?
>

Correct.

> > > One problem with sysv shm is that you can resize buffers.  Which in turn
> > > is the reason why we have memfs with sealing these days.
> >
> > Indeed shm is a bit problematic. However, passing file descriptors of
> > pipe-like objects or regular files could be implemented with a
> > reasonable amount of effort, if some performance trade-offs are
> > acceptable.
>
> Pipes could just create a new vsock stream and use that as transport.

Right.

>
> Any ideas or plans for files?
>

This is a very interesting problem and also depends on the direction
of transfer.

Host -> guest should be relatively easy, as reads/writes could go
inline, while guest side mmap could rely on the shared memory to map
the host file mapping into the guest, although I'm not sure how that
would play with the host fs/block subsystems, read aheads, write backs
and so on...

Guest -> host would be more complicated. In the simplest approach one
could just push the data inline and expose the files locally via FUSE.
Not sure if any memory sharing can be reasonably implemented here, due
to guest side fs/block not aware of the host accessing its memory...

> > > Third: Any plan for passing virtio-gpu resources to the host side when
> > > running wayland over virtio-vsock?  With dumb buffers it's probably not
> > > much of a problem, you can grab a list of pages and run with it.  But
> > > for virgl-rendered resources (where the rendered data is stored in a
> > > host texture) I can't see how that will work without copying around the
> > > data.
> >
> > I think it could work the same way as with the virtio-gpu window
> > system pipe being proposed in another thread. The guest vsock driver
> > would figure out that the FD the userspace is trying to pass points to
> > a virtio-gpu resource, convert that to some kind of a resource handle
> > (or descriptor) and pass that to the host. The host vsock
> > implementation would then resolve the resource handle (descriptor)
> > into an object that can be represented as a host file descriptor
> > (DMA-buf?).
>
> Well, when adding wayland stream support to virtio-gpu this is easy.
>
> When using virtio-vsock streams with SCM_RIGHTS this will need some
> cross-driver coordination between virtio-vsock and virtio-gpu on both
> guest and host side.
>
> Possibly such cross-driver coordination is useful for other cases
> too.  virtio-vsock and virtio-fs could likewise work together to allow
> pass-through of handles for regular files.
>

That would also be the case for the virtio-vdec (video decoder) we're
working on.

> > I'd expect that buffers that are used for Wayland surfaces
> > would be more than just a regular GL(ES) texture, since the compositor
> > and virglrenderer would normally be different processes, with the
> > former not having any idea of the latter's textures.
>
> wayland client export the egl frontbuffer as dma-buf.
>

I guess that's also one option. In that case that buffer would come
from the virtio-gpu driver already, right? Now the question is whether
it had the right bind flags set at allocation time, but given that
it's a front buffer, it should. Then it boils down to the same case as
buffers allocated explicitly from virtio-gpu and imported to EGL (via
EGLimage), just the allocation flow changes.

> > By the way, are you perhaps planning to visit the Open Source Summit
> > Japan in July [1]?
>
> No.

Got it. I submitted a CFP about handling multimedia use cases inside
VMs (not approved yet), so thought it could be a good chance to
discuss things. Still, I'd hope we move forward well enough to not
have much need to discuss anymore before July. ;)

Best regards,
Tomasz


* Re: passing FDs across domains
  2019-04-16  5:29                               ` passing FDs across domains Tomasz Figa
@ 2019-04-17  9:30                                 ` Gerd Hoffmann
  2019-04-17 10:12                                   ` Tomasz Figa
  0 siblings, 1 reply; 22+ messages in thread
From: Gerd Hoffmann @ 2019-04-17  9:30 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Tomeu Vizoso, Stefan Hajnoczi, kvm, Zach Reizner,
	Helen Mae Koike Fornazier, David Airlie, Marc-André Lureau,
	Pawel Osciak, Lepton Wu, Stéphane Marchesin, dgreid,
	suleiman, Keiichi Watanabe

> > > Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> > > socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> > > various platform handles (e.g. DMA-bufs). The clients exchange
> > > DMA-bufs with the service.
> >
> > Only dma-bufs?
> >
> 
> Mojo is just a framework that can serialize things and pass various
> objects around. What is being passed depends on the particular
> interface.
> 
> For the camera use case that would be DMA-bufs and fences.

Hmm, fences.  That'll be tricky too.

> We also have some more general use cases where we actually pass files,
> sockets and other objects there. They can be easily handled with a
> userspace proxy, though. Not very efficiently, but that's not a
> requirement for our use cases.

Ok.  So you'll have a userspace proxy anyway?

That pretty much removes the requirement to handle dma-bufs in
virtio-vsock (even though that still might be the best option),
the proxy could also use virtio-gpu or something else.

> > Any ideas or plans for files?
> 
> This is a very interesting problem and also depends on the direction
> of transfer.
> 
> Host -> guest should be relatively easy, as reads/writes could go
> inline, while guest side mmap could rely on the shared memory to map
> the host file mapping into the guest, although I'm not sure how that
> would play with the host fs/block subsystems, read aheads, write backs
> and so on...
> 
> Guest -> host would be more complicated. In the simplest approach one
> could just push the data inline and expose the files locally via FUSE.
> Not sure if any memory sharing can be reasonably implemented here, due
> to guest side fs/block not aware of the host accessing its memory...

One option could be to build on virtio-fs.  For files stored elsewhere,
fall back to inline read/write and maybe simply disallow mmap.

cheers,
  Gerd



* Re: passing FDs across domains
  2019-04-17  9:30                                 ` Gerd Hoffmann
@ 2019-04-17 10:12                                   ` Tomasz Figa
  2019-04-18  6:56                                     ` Tomasz Figa
  0 siblings, 1 reply; 22+ messages in thread
From: Tomasz Figa @ 2019-04-17 10:12 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Stefan Hajnoczi, kvm, Zach Reizner,
	Helen Mae Koike Fornazier, David Airlie, Marc-André Lureau,
	Pawel Osciak, Lepton Wu, Stéphane Marchesin, dgreid,
	suleiman, Keiichi Watanabe

On Wed, Apr 17, 2019 at 6:31 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> > > > Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> > > > socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> > > > various platform handles (e.g. DMA-bufs). The clients exchange
> > > > DMA-bufs with the service.
> > >
> > > Only dma-bufs?
> > >
> >
> > Mojo is just a framework that can serialize things and pass various
> > objects around. What is being passed depends on the particular
> > interface.
> >
> > For the camera use case that would be DMA-bufs and fences.
>
> Hmm, fences.  That'll be tricky too.
>
> > We also have some more general use cases where we actually pass files,
> > sockets and other objects there. They can be easily handled with a
> > userspace proxy, though. Not very efficiently, but that's not a
> > requirement for our use cases.
>
> Ok.  So you'll have a userspace proxy anyway?
>
> That pretty much removes the requirement to handle dma-bufs in
> virtio-vsock (even though that still might be the best option),
> the proxy could also use virtio-gpu or something else.
>

We have a proxy for some other IPC interfaces, like sharing guest
files with the host. It doesn't handle DMA-bufs or fences, but those
could be added. Given that, no, technically we don't need that in
virtio-vsock. It would cost us a bit of latency, but it should work
okay.

Still, we need to solve the same problem of passing buffers and fences
for other use cases that are going to be virtualized properly, like
video and crypto mentioned earlier. I believe the solution could be
reused easily in vsock.

> > > Any ideas or plans for files?
> >
> > This is a very interesting problem and also depends on the direction
> > of transfer.
> >
> > Host -> guest should be relatively easy, as reads/writes could go
> > inline, while guest side mmap could rely on the shared memory to map
> > the host file mapping into the guest, although I'm not sure how that
> > would play with the host fs/block subsystems, read aheads, write backs
> > and so on...
> >
> > Guest -> host would be more complicated. In the simplest approach one
> > could just push the data inline and expose the files locally via FUSE.
> > Not sure if any memory sharing can be reasonably implemented here, due
> > to guest side fs/block not aware of the host accessing its memory...
>
> One option could be to build on virtio-fs.  For files stored elsewhere,
> fall back to inline read/write and maybe simply disallow mmap.

That's an option too.

Best regards,
Tomasz


* Re: passing FDs across domains
  2019-04-17 10:12                                   ` Tomasz Figa
@ 2019-04-18  6:56                                     ` Tomasz Figa
  2019-04-18  7:07                                       ` Lepton Wu
  0 siblings, 1 reply; 22+ messages in thread
From: Tomasz Figa @ 2019-04-18  6:56 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Tomeu Vizoso, Stefan Hajnoczi, kvm, Zach Reizner,
	Helen Mae Koike Fornazier, David Airlie, Marc-André Lureau,
	Pawel Osciak, Lepton Wu, Stéphane Marchesin, dgreid,
	suleiman, Keiichi Watanabe

On Wed, Apr 17, 2019 at 7:12 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Wed, Apr 17, 2019 at 6:31 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > > > > Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> > > > > socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> > > > > various platform handles (e.g. DMA-bufs). The clients exchange
> > > > > DMA-bufs with the service.
> > > >
> > > > Only dma-bufs?
> > > >
> > >
> > > Mojo is just a framework that can serialize things and pass various
> > > objects around. What is being passed depends on the particular
> > > interface.
> > >
> > > For the camera use case that would be DMA-bufs and fences.
> >
> > Hmm, fences.  That'll be tricky too.
> >
> > > We also have some more general use cases where we actually pass files,
> > > sockets and other objects there. They can be easily handled with a
> > > userspace proxy, though. Not very efficiently, but that's not a
> > > requirement for our use cases.
> >
> > Ok.  So you'll have a userspace proxy anyway?
> >
> > That pretty much removes the requirement to handle dma-bufs in
> > virtio-vsock (even though that still might be the best option),
> > the proxy could also use virtio-gpu or something else.
> >
>
> We have a proxy for some other IPC interfaces, like sharing guest
> files with the host. It doesn't handle DMA-bufs or fences, but those
> could be added. Given that, no, technically we don't need that in
> virtio-vsock. It would cost us a bit of latency, but it should work
> okay.

Actually there is one issue with the user space proxy model.

Malicious userspace could still open vsock directly and start guessing
resource handles to get access to buffers. While this could be
prevented by making vsock accessible only for the proxy process, it
would render any other legit use cases for vsock impossible without
making the processes implementing those privileged as well.

Of course that would not be a security issue for the host, since the
malicious process could only get access to the buffers accessible to
the guest as a whole. Still, it would significantly affect the
security level within the guest.

Best regards,
Tomasz


* Re: passing FDs across domains
  2019-04-18  6:56                                     ` Tomasz Figa
@ 2019-04-18  7:07                                       ` Lepton Wu
  0 siblings, 0 replies; 22+ messages in thread
From: Lepton Wu @ 2019-04-18  7:07 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Gerd Hoffmann, Tomeu Vizoso, Stefan Hajnoczi, kvm, Zach Reizner,
	Helen Mae Koike Fornazier, David Airlie, Marc-André Lureau,
	Pawel Osciak, Stéphane Marchesin, dgreid, suleiman,
	Keiichi Watanabe

On Wed, Apr 17, 2019 at 11:57 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Wed, Apr 17, 2019 at 7:12 PM Tomasz Figa <tfiga@chromium.org> wrote:
> >
> > On Wed, Apr 17, 2019 at 6:31 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
> > >
> > > > > > Mojo IPC. Mojo is just yet another IPC designed to work over a Unix
> > > > > > socket, relying on file descriptor passing (SCM_RIGHTS) for passing
> > > > > > various platform handles (e.g. DMA-bufs). The clients exchange
> > > > > > DMA-bufs with the service.
> > > > >
> > > > > Only dma-bufs?
> > > > >
> > > >
> > > > Mojo is just a framework that can serialize things and pass various
> > > > objects around. What is being passed depends on the particular
> > > > interface.
> > > >
> > > > For the camera use case that would be DMA-bufs and fences.
> > >
> > > Hmm, fences.  That'll be tricky too.
> > >
> > > > We also have some more general use cases where we actually pass files,
> > > > sockets and other objects there. They can be easily handled with a
> > > > userspace proxy, though. Not very efficiently, but that's not a
> > > > requirement for our use cases.
> > >
> > > Ok.  So you'll have a userspace proxy anyway?
> > >
> > > That pretty much removes the requirement to handle dma-bufs in
> > > virtio-vsock (even though that still might be the best option),
> > > the proxy could also use virtio-gpu or something else.
> > >
> >
> > We have a proxy for some other IPC interfaces, like sharing guest
> > files with the host. It doesn't handle DMA-bufs or fences, but those
> > could be added. Given that, no, technically we don't need that in
> > virtio-vsock. It would cost us a bit of latency, but it should work
> > okay.
>
> Actually there is one issue with the user space proxy model.
>
> Malicious userspace could still open vsock directly and start guessing
> resource handles to get access to buffers. While this could be
We can just use 1024-bit handles to make them hard to guess. Or we can
have some kind of client/server authentication and then track ownership/access
of buffers on the host side?
> prevented by making vsock accessible only for the proxy process, it
> would render any other legit use cases for vsock impossible without
> making the processes implementing those privileged as well.
>
> Of course that would not be a security issue for the host, since the
> malicious process could only get access to the buffers accessible to
> the guest as a whole. Still, it would significantly affect the
> security level within the guest.
>
> Best regards,
> Tomasz


Thread overview: 22+ messages
2017-11-13 14:18 passing FDs across domains Tomeu Vizoso
2017-11-14  8:02 ` Gerd Hoffmann
2017-11-14  8:08   ` Tomeu Vizoso
2017-11-14  9:33     ` Gerd Hoffmann
2017-11-14 14:01       ` Tomeu Vizoso
2017-11-14 14:19         ` Gerd Hoffmann
2017-11-15  8:21           ` Tomeu Vizoso
2017-11-15  9:47             ` Tomeu Vizoso
2017-11-16  9:57           ` Tomeu Vizoso
2017-11-16 10:49             ` Gerd Hoffmann
2017-11-16 14:36               ` Tomeu Vizoso
2017-11-16 15:51                 ` Gerd Hoffmann
2017-11-22 10:54                   ` Stefan Hajnoczi
2017-11-22 15:47                     ` Stefan Hajnoczi
2017-11-23  8:17                       ` Tomeu Vizoso
2017-11-27 10:59                         ` Stefan Hajnoczi
2017-11-27 11:59                           ` cross-domain Wayland (Re: passing FDs across domains) Tomeu Vizoso
     [not found]                     ` <bf6ce187-46b6-f01d-4eba-79b035c94426@collabora.com>
     [not found]                       ` <CAAFQd5AJFtK524sh1RStOeE8yagknW_hoJVr6mzMXb=+eL03Bg@mail.gmail.com>
     [not found]                         ` <20190320121132.ysny44krkgw6jlje@sirius.home.kraxel.org>
     [not found]                           ` <CAAFQd5BmAfYzF0JprqWjpkmk-_PGAGXzy7HOa-QJJBxfqigVkg@mail.gmail.com>
     [not found]                             ` <20190401161940.c32mxs7locfzlp6z@sirius.home.kraxel.org>
2019-04-16  5:29                               ` passing FDs across domains Tomasz Figa
2019-04-17  9:30                                 ` Gerd Hoffmann
2019-04-17 10:12                                   ` Tomasz Figa
2019-04-18  6:56                                     ` Tomasz Figa
2019-04-18  7:07                                       ` Lepton Wu
