From: Tomeu Vizoso <tomeu.vizoso@collabora.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	David Airlie <airlied@linux.ie>,
	Stefan Hajnoczi <stefanha@gmail.com>,
	Jason Wang <jasowang@redhat.com>,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org, kernel@collabora.com
Subject: Re: [PATCH v3 1/2] drm/virtio: Add window server support
Date: Mon, 5 Feb 2018 15:46:17 +0100
Message-ID: <f07a5347-13fe-76c2-f971-3f02abebd3c9@collabora.com>
In-Reply-To: <20180205122017.4vb5nlpodkq2uhxa@sirius.home.kraxel.org>

On 02/05/2018 01:20 PM, Gerd Hoffmann wrote:
>    Hi,
> 
>>> Why not use virtio-vsock to run the wayland protocol? I don't like
>>> the idea of duplicating something with very similar functionality in
>>> virtio-gpu.
>>
>> The reason for abandoning that approach was that the types of objects
>> that could be shared via virtio-vsock would be extremely limited.
>> Besides being potentially confusing to users, it would mean on the
>> implementation side that either virtio-vsock would gain a dependency on
>> the DRM subsystem, or an appropriate abstraction for shareable buffers
>> would need to be added, for little gain.
> 
> Well, no.  The idea is that virtio-vsock and virtio-gpu are used largely
> as-is, without knowing about each other.  The guest wayland proxy which
> does the buffer management talks to both devices.

Note that the proxy won't know anything about buffers if clients opt in 
to zero-copy support (they allocate their buffers in a way that allows 
them to be shared with the host).

>>> If you have a guest proxy anyway, using virtio-vsock for the protocol
>>> stream and virtio-gpu for buffer sharing (and some day 3D rendering
>>> too) should work fine, I think.
>>
>> If I understand your proposal correctly, virtio-gpu would be used for
>> creating buffers that could be shared across domains, but something
>> equivalent to SCM_RIGHTS would still be needed in virtio-vsock?
> 
> Yes, the proxy would send a reference to the buffer over virtio-vsock.
> I was thinking more of a struct specifying something like
> "resource-id 42 on virtio-gpu-pci device in slot 1:23.0" instead of
> using SCM_RIGHTS.

Can you expand on this? I'm having trouble figuring out how this could 
work in a way that keeps protocol data together with the resources it 
refers to.
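
Just so I'm sure I follow, is something like this what you have in mind? 
(A purely hypothetical layout, just to make sure we're talking about the 
same thing; none of these names exist in any header today.)

  /* Hypothetical sketch of a buffer reference sent over virtio-vsock
   * in place of SCM_RIGHTS; field names and layout are made up for
   * this discussion. */
  #include <stdint.h>

  struct winsrv_buffer_ref {
          uint16_t pci_domain;   /* PCI domain of the virtio-gpu device */
          uint8_t  pci_bus;      /* e.g. bus 1 */
          uint8_t  pci_devfn;    /* e.g. slot 23, function 0 */
          uint32_t resource_id;  /* e.g. resource-id 42 */
  };

Even then, I don't see how the receiver keeps such references ordered 
with respect to the protocol data that mentions them, which is what I 
meant above.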

>> If the mechanics of passing presentation data were very complex, I think
>> this approach would have more merit. But as you can see from the code,
>> it isn't that bad.
> 
> Well, the devil is in the details.  If you have multiple connections you
> don't want one being able to stall the others, for example.  There are
> reasons it took quite a while to bring virtio-vsock to the state where it
> is today.

Yes, but at the same time there are use cases that virtio-vsock has to 
support but that aren't important in this scenario.

>>> What is the plan for the host side? I see basically two options. Either
>>> implement the host wayland proxy directly in qemu, or
>>> implement it as a separate process, which then needs some help from
>>> qemu to get access to the buffers. The latter would allow qemu to run
>>> independently of the desktop session.
>>
>> Regarding synchronizing buffers, this will stop being needed in
>> subsequent commits, as all shared memory is allocated on the host and
>> mapped into the guest via KVM_SET_USER_MEMORY_REGION.
>
> --verbose please.  The qemu patches linked from the cover letter are not
> exactly helpful in understanding how all this is supposed to work.

A client will allocate a buffer with DRM_VIRTGPU_RESOURCE_CREATE, export 
it, and pass the resulting FD to the compositor (via the proxy).

During resource creation, QEMU would allocate a shmem buffer and map it 
into the guest with KVM_SET_USER_MEMORY_REGION.

The client would mmap that resource and render to it. Because it's 
backed by host memory, the compositor would be able to read it without 
any further copies.
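
To make that concrete, the client-side flow would look roughly like this 
(untested sketch; the ioctls are the existing virtio-gpu and PRIME UAPI, 
but all argument values are illustrative and error handling is omitted):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <xf86drm.h>
  #include <drm/virtgpu_drm.h>

  int create_shared_buffer(void **pixels)
  {
          int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);

          /* Allocate a resource; per the above, QEMU would back it
           * with host shmem mapped into the guest. */
          struct drm_virtgpu_resource_create create = {
                  .target     = 2,          /* 2D texture (illustrative) */
                  .format     = 1,          /* e.g. B8G8R8A8 (illustrative) */
                  .bind       = 1 << 1,     /* render target (illustrative) */
                  .width      = 1024,
                  .height     = 768,
                  .depth      = 1,
                  .array_size = 1,
                  .size       = 1024 * 768 * 4,
                  .stride     = 1024 * 4,
          };
          drmIoctl(fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE, &create);

          /* Get an mmap offset for the BO and map it for rendering. */
          struct drm_virtgpu_map map = { .handle = create.bo_handle };
          drmIoctl(fd, DRM_IOCTL_VIRTGPU_MAP, &map);
          *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, map.offset);

          /* Export as a dma-buf FD, to be passed to the compositor
           * via the proxy. */
          int prime_fd;
          drmPrimeHandleToFD(fd, create.bo_handle,
                             DRM_CLOEXEC | DRM_RDWR, &prime_fd);
          return prime_fd;
  }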

>> This is already the case for buffers passed from the compositor to the
>> clients (see patch 2/2), and I'm working on the equivalent for buffers
>> from the guest to the host (clients still have to create buffers with
>> DRM_VIRTGPU_RESOURCE_CREATE, but they will only be backed by host memory,
>> so no calls to DRM_VIRTGPU_TRANSFER_TO_HOST are needed).
> 
> Same here.  --verbose please.

When an FD comes from the compositor, QEMU mmaps it and maps the 
resulting virtual address into the guest via KVM_SET_USER_MEMORY_REGION.
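
At the KVM level that step amounts to something like the sketch below 
(simplified; in QEMU this would go through the memory-region API rather 
than the raw ioctl, and the slot number and guest physical address here 
are placeholders):

  #include <stdint.h>
  #include <stddef.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kvm.h>

  /* Map the pages behind a compositor-provided FD into the guest's
   * physical address space. */
  static int map_compositor_fd(int vm_fd, int compositor_fd, size_t size,
                               uint64_t guest_phys_addr, uint32_t slot)
  {
          /* mmap the FD received from the compositor... */
          void *host_addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, compositor_fd, 0);
          if (host_addr == MAP_FAILED)
                  return -1;

          /* ...and expose those pages to the guest as a memory slot. */
          struct kvm_userspace_memory_region region = {
                  .slot            = slot,
                  .guest_phys_addr = guest_phys_addr,
                  .memory_size     = size,
                  .userspace_addr  = (uint64_t)host_addr,
          };
          return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }

The same mechanism covers the other direction: the shmem that QEMU 
allocates during resource creation is mapped into the guest in exactly 
this way.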

When the guest proxy reads from the winsrv socket, it will get an FD that 
wraps the buffer referenced above.

When the client reads from the guest proxy, it would get an FD that 
references that same buffer and would mmap it. At that point, the client 
is reading from the same physical pages that the compositor wrote to.
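
That last hop is plain Wayland-style FD passing over a local socket; a 
simplified sketch (libwayland normally does this internally):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Receive a single FD over a UNIX socket via SCM_RIGHTS; the caller
   * then mmap()s it to reach the shared pages. */
  static int recv_buffer_fd(int sock)
  {
          char data;
          struct iovec iov = { .iov_base = &data, .iov_len = 1 };
          union {
                  struct cmsghdr align;
                  char buf[CMSG_SPACE(sizeof(int))];
          } u;
          struct msghdr msg = {
                  .msg_iov        = &iov,
                  .msg_iovlen     = 1,
                  .msg_control    = u.buf,
                  .msg_controllen = sizeof(u.buf),
          };

          if (recvmsg(sock, &msg, 0) < 0)
                  return -1;

          struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
          if (!cmsg || cmsg->cmsg_type != SCM_RIGHTS)
                  return -1;

          int fd;
          memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
          return fd;
  }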

To be clear, I'm not against solving this via some form of restricted FD 
passing in virtio-vsock, but Stefan (added to CC) thought that it would 
be cleaner to do it all within virtio-gpu. This is the thread where it 
was discussed:

https://lkml.kernel.org/r/<2d73a3e1-af70-83a1-0e84-98b5932ea20c@collabora.com>

Thanks,

Tomeu