From: Daniel Vetter <daniel@ffwll.ch>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: Daniel Vetter <daniel@ffwll.ch>,
	Vivek Kasireddy <vivek.kasireddy@intel.com>,
	virtualization@lists.linux-foundation.org,
	dri-devel@lists.freedesktop.org, daniel.vetter@intel.com,
	daniel.vetter@ffwll.ch, dongwon.kim@intel.com,
	sumit.semwal@linaro.org, christian.koenig@amd.com,
	linux-media@vger.kernel.org
Subject: Re: [RFC v3 2/3] virtio: Introduce Vdmabuf driver
Date: Mon, 8 Feb 2021 10:38:54 +0100
Message-ID: <YCEGrrT0/eqqz/ok@phenom.ffwll.local>
In-Reply-To: <20210208075748.xejgcb4il2egow2u@sirius.home.kraxel.org>

On Mon, Feb 08, 2021 at 08:57:48AM +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > > +/* extract pages referenced by sgt */
> > > +static struct page **extr_pgs(struct sg_table *sgt, int *nents, int *last_len)
> > 
> > Nack, this doesn't work on dma-buf. And it'll blow up at runtime when you
> > enable the very recently merged CONFIG_DMABUF_DEBUG (would be good to test
> > with that, just to make sure).
> 
> > Aside from this, for virtio/kvm use-cases we've already merged the udmabuf
> > driver. Does this not work for your usecase?
> 
> udmabuf can be used on the host side to make a collection of guest pages
> available as host dmabuf.  It's part of the puzzle, but not a complete
> solution.
> 
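
For reference, the host-side half of that is just an ioctl on /dev/udmabuf.
A rough userspace sketch, assuming the guest pages already back a sealed
memfd (the helper name and parameters are made up for illustration):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Wrap a page-aligned region of a sealed memfd into a host dma-buf. */
static int memfd_to_dmabuf(int memfd, __u64 offset, __u64 size)
{
        struct udmabuf_create create = {
                .memfd  = memfd,   /* needs F_SEAL_SHRINK set */
                .offset = offset,  /* page aligned */
                .size   = size,    /* page aligned */
        };
        int devfd, buffd;

        devfd = open("/dev/udmabuf", O_RDWR | O_CLOEXEC);
        if (devfd < 0)
                return -1;

        buffd = ioctl(devfd, UDMABUF_CREATE, &create);
        close(devfd);
        return buffd;  /* dma-buf fd on success, -1 on error */
}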
> As I understand it the intended workflow is this:
> 
>   (1) guest gpu driver exports some object as dma-buf
>   (2) dma-buf is imported into this new driver.
>   (3) driver sends the pages to the host.
>   (4) hypervisor uses udmabuf to create a host dma-buf.
>   (5) host dma-buf is passed on.
> 
> And step (3) is the problematic one, as it will not work
> when the dma-buf doesn't live in guest RAM but in -- for
> example -- GPU device memory.

Yup, vram or similar special ram is the reason why an importer can't look
at the pages behind a dma-buf sg table.
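
To spell that out: the only thing an importer may do with the sg table is
feed the dma addresses to its own device. A rough sketch of the allowed
pattern (program_device() and importer_dev are placeholders, error handling
trimmed):

#include <linux/dma-buf.h>
#include <linux/scatterlist.h>

static int import_dmabuf(struct dma_buf *dmabuf, struct device *importer_dev)
{
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;
        struct scatterlist *sg;
        int i;

        attach = dma_buf_attach(dmabuf, importer_dev);
        if (IS_ERR(attach))
                return PTR_ERR(attach);

        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                dma_buf_detach(dmabuf, attach);
                return PTR_ERR(sgt);
        }

        /* Only the dma addresses may be used; sg_page() is off limits, and
         * CONFIG_DMABUF_DEBUG mangles the page pointers to catch exactly
         * that. program_device() stands in for the importer's own code. */
        for_each_sgtable_dma_sg(sgt, sg, i)
                program_device(importer_dev, sg_dma_address(sg), sg_dma_len(sg));

        dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
        dma_buf_detach(dmabuf, attach);
        return 0;
}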

> Reversing the driver roles in the guest (virtio driver
> allocates pages and exports the dma-buf to the guest
> gpu driver) should work fine.

Yup, this needs to flow the other way round than in these patches.
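
I.e. the virtio driver would be the exporting side. Very roughly, assuming
a hypothetical vdmabuf_ops dma_buf_ops that maps the driver's own pages in
its map_dma_buf callback:

#include <linux/dma-buf.h>
#include <linux/fcntl.h>

/* hypothetical exporter ops: map_dma_buf/unmap_dma_buf/release elided */
extern const struct dma_buf_ops vdmabuf_ops;

/* Guest side with the roles reversed: the virtio driver allocates the
 * backing pages itself, exports them, and the guest gpu driver imports. */
static int export_guest_pages(void *driver_obj, size_t size)
{
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
        struct dma_buf *dmabuf;

        exp_info.ops   = &vdmabuf_ops;
        exp_info.size  = size;
        exp_info.flags = O_CLOEXEC;
        exp_info.priv  = driver_obj;

        dmabuf = dma_buf_export(&exp_info);
        if (IS_ERR(dmabuf))
                return PTR_ERR(dmabuf);

        /* hand the fd to userspace, which passes it on to the gpu driver */
        return dma_buf_fd(dmabuf, O_CLOEXEC);
}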

> Which btw is something you can do today with virtio-gpu.
> Maybe it makes sense to have the option to run virtio-gpu
> in render-only mode for that use case.

Yeah that sounds like a useful addition.

Also, the same flow should work for real gpus passed through as pci
devices. What we need is some way to surface the dma-buf on the guest
side, which I think doesn't exist yet stand-alone. But this role could be
fulfilled by virtio-gpu in render-only mode I think. And (assuming I've
understood the recent discussions around virtio dma-buf sharing using
virtio ids) this would give you some neat zero-copy tricks for free if you
share multiple devices.

Also if you really want seamless buffer sharing between devices that are
passed to the guest and devices on the host side (like displays I guess?
or maybe video encode if this is for cloud gaming?), then using virtio-gpu
in render mode should also allow you to pass the dma_fence back and forth.
We'll need that too, not just the dma-buf.
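
On the guest kernel side a dma_fence typically travels as a sync_file fd,
roughly like this (error handling kept minimal):

#include <linux/sync_file.h>
#include <linux/file.h>
#include <linux/fcntl.h>

/* wrap a dma_fence in a sync_file so it can be carried around as an fd
 * (and e.g. shipped over whatever virtio transport ends up being used) */
static int fence_to_fd(struct dma_fence *fence)
{
        struct sync_file *sync = sync_file_create(fence);
        int fd;

        if (!sync)
                return -ENOMEM;

        fd = get_unused_fd_flags(O_CLOEXEC);
        if (fd < 0) {
                fput(sync->file);
                return fd;
        }
        fd_install(fd, sync->file);
        return fd;
}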

So at a first guess I'd say "render-only virtio-gpu mode" sounds like
something rather useful. But I might be totally off here.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Thread overview: 19+ messages
2021-02-03  7:35 [RFC v3 0/3] Introduce Virtio based Dmabuf driver Vivek Kasireddy
2021-02-03  7:35 ` [RFC v3 1/3] kvm: Add a notifier for create and destroy VM events Vivek Kasireddy
2021-02-03  7:35 ` [RFC v3 2/3] virtio: Introduce Vdmabuf driver Vivek Kasireddy
2021-02-05 16:03   ` Daniel Vetter
2021-02-08  7:57     ` Gerd Hoffmann
2021-02-08  9:38       ` Daniel Vetter [this message]
2021-02-09  0:25         ` Kasireddy, Vivek
2021-02-09  8:44           ` Gerd Hoffmann
2021-02-10  4:47             ` Kasireddy, Vivek
2021-02-10  8:05               ` Christian König
2021-02-12  8:36                 ` Kasireddy, Vivek
2021-02-12  8:47                   ` Christian König
2021-02-12 10:14                     ` Gerd Hoffmann
2021-02-10  9:16               ` Gerd Hoffmann
2021-02-12  8:15                 ` Kasireddy, Vivek
2021-02-12 11:01                   ` Gerd Hoffmann
2021-02-22  8:52                     ` Kasireddy, Vivek
2021-03-15  2:27                     ` Zhang, Tina
2021-02-03  7:35 ` [RFC v3 3/3] vhost: Add Vdmabuf backend Vivek Kasireddy
