linux-kernel.vger.kernel.org archive mirror
From: Keiichi Watanabe <keiichiw@chromium.org>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: "Tomasz Figa" <tfiga@chromium.org>,
	"David Airlie" <airlied@linux.ie>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	virtualization@lists.linux-foundation.org,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>,
	"David Stevens" <stevensd@chromium.org>,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"Zach Reizner" <zachr@chromium.org>,
	"Pawel Osciak" <posciak@chromium.org>
Subject: Re: [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
Date: Thu, 19 Sep 2019 20:41:19 +0900	[thread overview]
Message-ID: <CAD90VcZYXZDHMOfRUzbOFcauxrJC98OP6zxk_WS0aFKnXboTdg@mail.gmail.com> (raw)
In-Reply-To: <20190913110544.htmslqt4yzejugs4@sirius.home.kraxel.org>

On Fri, Sep 13, 2019 at 8:05 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>   Hi,
>
> > > No.  DMA master address space in virtual machines is pretty much the
> > > same as it is on physical machines.  So, on x86 without an iommu, it is
> > > identical to the (guest) physical address space.  You can't re-define it
> > > like that.
> >
> > That's not true. Even on x86 without an iommu, the DMA address space can
> > be different from the physical address space.
>
> On a standard PC (like the ones emulated by qemu) that is the case.
> It's different on !x86, and it also changes with an iommu being present.
>
> But that is not the main point here.  The point is the dma master
> address already has a definition and you can't simply change that.
>
> > That could be still just
> > a simple addition/subtraction by constant, but still, the two are
> > explicitly defined without any guarantees about a simple mapping
> > between one or another existing.
>
> Sure.
>
> > "A CPU cannot reference a dma_addr_t directly because there may be
> > translation between its physical
> > address space and the DMA address space."
>
> Also note that the dma address space is device-specific.  In case an iommu
> is present in the system you can have *multiple* dma address spaces,
> separating (groups of) devices from each other.  So passing a dma
> address from one device to another isn't going to work.
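
To make that point concrete, a minimal (untested) sketch of how a driver
obtains a dma_addr_t through the DMA mapping API; the returned address is
only valid in the address space of the device that was passed in:

  #include <linux/dma-mapping.h>

  /* Map a kernel buffer for DMA by @dev.  The resulting dma_addr_t lives
   * in @dev's DMA address space and may differ from both the CPU physical
   * address and the dma_addr_t another device would get for the same page. */
  static int example_map_for_device(struct device *dev, void *cpu_addr,
                                    size_t len, dma_addr_t *dma_addr)
  {
          dma_addr_t addr = dma_map_single(dev, cpu_addr, len,
                                           DMA_BIDIRECTIONAL);

          if (dma_mapping_error(dev, addr))
                  return -ENOMEM;

          *dma_addr = addr;
          return 0;
  }
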
>
> > > > However, we could as well introduce a separate DMA address
> > > > space if resource handles are not the right way to refer to the memory
> > > > from other virtio devices.
> > >
> > > s/other virtio devices/other devices/
> > >
> > > dma-bufs are for buffer sharing between devices, not limited to virtio.
> > > You can't re-define that in some virtio-specific way.
> > >
> >
> > We don't need to limit this to virtio devices only. In fact I actually
> > foresee this having a use case with the emulated USB host controller,
> > which isn't a virtio device.
>
> What exactly?
>
> > That said, I deliberately referred to virtio to keep the scope of the
> > problem in control. If there is a solution that could work without
> > such assumption, I'm more than open to discuss it, of course.
>
> But it might lead to taking virtio-specific (or virtualization-specific)
> shortcuts which will hurt in the long run ...
>
> > As per my understanding of the DMA address, anything that lets the DMA
> > master access the target memory would qualify and there would be no
> > need for an IOMMU in between.
>
> Yes.  But that DMA address is already defined by the platform, you can't
> freely re-define it.  Well, you sort-of can when you design your own
> virtual iommu for qemu.  But even then you can't just pass around magic
> cookies masqueraded as dma addresses.  You have to make sure they actually
> form an address space, without buffers overlapping, ...
>
> > Exactly. The very specific first scenario that we want to start with
> > is allocating host memory through virtio-gpu and using that memory
> > both as output of a video decoder and as input (texture) to Virgil3D.
> > The memory needs to be specifically allocated by the host as only the
> > host can know the requirements for memory allocation of the video
> > decode accelerator hardware.
>
> So you probably have some virtio-video-decoder.  You allocate a gpu
> buffer, export it as dma-buf, import it into the decoder, then let the
> video decoder render to it.  Right?

Right. I sent an RFC about virtio-vdec (video decoder) to the virtio-dev ML today:
https://lists.oasis-open.org/archives/virtio-dev/201909/msg00111.html
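
For reference, the in-kernel side of the flow Gerd describes (one driver
exports, the decoder imports) would follow the usual dma-buf API; a rough,
untested sketch with error handling trimmed, not taken verbatim from the
prototype:

  #include <linux/dma-buf.h>

  static struct sg_table *import_gpu_buffer(struct device *dec_dev, int fd,
                                            struct dma_buf_attachment **attach)
  {
          struct dma_buf *dmabuf = dma_buf_get(fd);  /* fd from the exporter */

          if (IS_ERR(dmabuf))
                  return ERR_CAST(dmabuf);

          *attach = dma_buf_attach(dmabuf, dec_dev);
          if (IS_ERR(*attach)) {
                  dma_buf_put(dmabuf);
                  return ERR_CAST(*attach);
          }

          /* The sg_table describes the buffer in dec_dev's DMA address space. */
          return dma_buf_map_attachment(*attach, DMA_BIDIRECTIONAL);
  }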

In the current design, the driver and the device use an integer to identify a
DMA-buf. We call this integer a "resource handle" in the RFC. Our prototype
driver uses virtio-gpu's exposed resource handles for this purpose.
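
Purely as an illustration (the struct and field names below are made up; the
actual protocol is in the RFC linked above), a command referring to such a
buffer carries the integer handle rather than a DMA address:

  /* Hypothetical example, not from the RFC: the guest names the buffer by
   * the resource handle it received when the buffer was allocated. */
  struct virtio_vdec_use_buffer {
          __le32 resource_handle;    /* ID of the exported virtio-gpu resource */
          __le32 plane_offsets[4];   /* per-plane offsets into that resource */
  };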

Regards,
Keiichi

>
> Using dmabufs makes sense for sure.  But we need a separate field in
> struct dma_buf for an (optional) host dmabuf identifier, we can't just
> hijack the dma address.
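
For what it's worth, a purely hypothetical sketch of that suggestion (the
field name is invented and nothing like it exists in the kernel today): the
identifier would sit in its own optional member instead of overloading the
dma address:

  /* Hypothetical sketch only -- not an existing kernel field. */
  struct dma_buf {
          /* ... existing members unchanged ... */
          u64 host_handle;  /* optional exporter-assigned host identifier, 0 if unset */
  };
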
>
> cheers,
>   Gerd
>

Thread overview: 17+ messages
2019-09-12  9:41 [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API Tomasz Figa
2019-09-12 12:38 ` Gerd Hoffmann
2019-09-12 14:04   ` Tomasz Figa
2019-09-13  8:07     ` Gerd Hoffmann
2019-09-13  8:31       ` Tomasz Figa
2019-09-13 11:05         ` Gerd Hoffmann
2019-09-19 11:41           ` Keiichi Watanabe [this message]
2019-09-17 13:23 ` Daniel Vetter
2019-10-05  5:41   ` Tomasz Figa
2019-10-08 10:03     ` Daniel Vetter
2019-10-08 10:49       ` Tomasz Figa
2019-10-08 15:04         ` Daniel Vetter
2019-10-16  3:19           ` Tomasz Figa
2019-10-16  6:12             ` Gerd Hoffmann
2019-10-16  8:27               ` Tomasz Figa
2019-10-16  9:18             ` Daniel Vetter
2019-10-16 10:31               ` Tomasz Figa
