From: Gerd Hoffmann <kraxel@redhat.com>
To: Tomasz Figa <tfiga@chromium.org>
Cc: "Keiichi Watanabe" <keiichiw@chromium.org>,
	virtio-dev@lists.oasis-open.org,
	"Alexandre Courbot" <acourbot@chromium.org>,
	alexlau@chromium.org, dgreid@chromium.org,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"Pawel Osciak" <posciak@chromium.org>,
	stevensd@chromium.org, "Hans Verkuil" <hverkuil@xs4all.nl>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>,
	"Daniel Vetter" <daniel@ffwll.ch>
Subject: Re: [virtio-dev] [PATCH] [RFC RESEND] vdec: Add virtio video decode device specification
Date: Mon, 14 Oct 2019 14:19:14 +0200
Message-ID: <20191014121914.lyptm3gdmekvcu6v@sirius.home.kraxel.org>
In-Reply-To: <CAAFQd5Ba-REZU9=rdm3J6NRqqeAUFdCV7SJ_WdO2BHyKNBN7TQ@mail.gmail.com>

> > Well.  I think before even discussing the protocol details we need a
> > reasonable plan for buffer handling.  I think using virtio-gpu buffers
> > should be an optional optimization and not a requirement.  Also the
> > motivation for that should be clear (Let the host decoder write directly
> > to virtio-gpu resources, to display video without copying around the
> > decoded framebuffers from one device to another).
> 
> Just to make sure we're on the same page, what would the buffers come
> from if we don't use this optimization?
> 
> I can imagine a setup like this:
>  1) host device allocates host memory appropriate for usage with host
> video decoder,
>  2) guest driver allocates arbitrary guest pages for storage
> accessible to the guest software,
>  3) guest userspace writes input for the decoder to guest pages,
>  4) guest driver passes the list of pages for the input and output
> buffers to the host device
>  5) host device copies data from input guest pages to host buffer
>  6) host device runs the decoding
>  7) host device copies decoded frame to output guest pages
>  8) guest userspace can access decoded frame from those pages; back to 3
> 
> Is that something you have in mind?

I don't have any specific workflow in mind.

If you want to display the decoded video frames, you want to use dma-bufs
shared by the video decoder and the gpu, right?  So the userspace
application (probably a video player) would create the buffers using one
of the drivers, export them as dma-bufs, then import them into the other
driver, just like you would do on physical hardware.  So, when using
virtio-gpu buffers:

  (1) guest app creates buffers using virtio-gpu.
  (2) guest app exports the virtio-gpu buffers as dma-bufs.
  (3) guest app imports the dma-bufs into virtio-vdec.
  (4) guest app asks the virtio-vdec driver to write the decoded
      frames into the dma-bufs.
  (5) guest app asks the virtio-gpu driver to display the decoded
      frame.
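
In userspace terms that could look roughly like the sketch below.  It is
only an illustration: the device paths, the dumb-buffer allocation and the
single-planar CAPTURE queue are assumptions, and a real player would go
through the full stateful decoder setup (REQBUFS etc.) and check errors.

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <xf86drm.h>
  #include <linux/videodev2.h>

  int export_and_import(void)
  {
          int gpu = open("/dev/dri/renderD128", O_RDWR);  /* virtio-gpu */
          int dec = open("/dev/video0", O_RDWR);          /* decoder */

          /* (1) create a buffer on the virtio-gpu device */
          struct drm_mode_create_dumb create = {
                  .width = 1920, .height = 1080, .bpp = 32,
          };
          drmIoctl(gpu, DRM_IOCTL_MODE_CREATE_DUMB, &create);

          /* (2) export it as a dma-buf fd via PRIME */
          int dmabuf_fd;
          drmPrimeHandleToFD(gpu, create.handle, DRM_CLOEXEC, &dmabuf_fd);

          /* (3)+(4) hand the dma-buf to the decoder as a CAPTURE buffer */
          struct v4l2_buffer buf;
          memset(&buf, 0, sizeof(buf));
          buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
          buf.memory = V4L2_MEMORY_DMABUF;
          buf.index = 0;
          buf.m.fd = dmabuf_fd;
          ioctl(dec, VIDIOC_QBUF, &buf);

          return 0;
  }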

The guest video decoder driver passes the dma-buf pages to the host, and
it is the host driver's job to fill the buffer.  How exactly this is done
might depend on hardware capabilities (whether a host-allocated bounce
buffer is needed, or whether the hardware can decode directly into the
dma-buf passed by the guest driver) and is an implementation detail.
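
For the page-list variant, the guest-to-host command could be something as
simple as the structs below.  This is a made-up layout (the names and
fields are not from any spec), modeled on how virtio-gpu attaches backing
pages to a resource:

  #include <linux/types.h>

  /* hypothetical, for illustration only */
  struct virtio_vdec_mem_entry {
          __le64 addr;     /* guest physical address of the segment */
          __le32 length;   /* segment length in bytes */
          __le32 padding;
  };

  struct virtio_vdec_set_output_buffer {
          __le32 buffer_id;    /* driver-chosen handle for this dma-buf */
          __le32 nr_entries;   /* number of mem entries that follow */
          /* followed by nr_entries * struct virtio_vdec_mem_entry */
  };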

Now, with cross-device sharing added, virtio-gpu would attach some kind
of identifier to the dma-buf.  virtio-vdec could fetch that identifier
and pass it to the host too, and the host virtio-vdec device could use it
to get a host dma-buf handle for the (virtio-gpu) buffer, then ask the
host video decoder driver to import that host dma-buf.  If all of that
works, it can ask the host hardware to decode directly into the host
virtio-gpu resource.
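
The guest-visible part of that could be as small as one extra command;
again a purely hypothetical layout, just to show how little the decoder
interface would need to know about virtio-gpu:

  #include <linux/types.h>

  /* hypothetical, for illustration only */
  struct virtio_vdec_use_shared_buffer {
          __le32 buffer_id;    /* decoder-local handle */
          __le32 padding;
          __u8   uuid[16];     /* identifier attached by virtio-gpu */
  };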

> > Referencing virtio-gpu buffers needs a better plan than just re-using
> > virtio-gpu resource handles.  The handles are device-specific.  What if
> > there are multiple virtio-gpu devices present in the guest?
> >
> > I think we need a framework for cross-device buffer sharing.  One
> > possible option would be to have some kind of buffer registry, where
> > buffers can be registered for cross-device sharing and get a unique
> > id (a uuid maybe?).  Drivers would typically register buffers on
> > dma-buf export.
> 
> This approach could possibly let us handle this transparently to
> importers, which would work for guest kernel subsystems that rely on
> the ability to handle buffers like native memory (e.g. having a
> sgtable or DMA address) for them.
> 
> How about allocating guest physical addresses for memory corresponding
> to those buffers? On the virtio-gpu example, that could work like
> this:
>  - by default a virtio-gpu buffer has only a resource handle,
>  - VIRTIO_GPU_RESOURCE_EXPORT command could be called to have the
> virtio-gpu device export the buffer to a host framework (inside the
> VMM) that would allocate guest page addresses for it, which the
> command would return in a response to the guest,

Hmm, the cross-device buffer sharing framework I have in mind would
basically be a buffer registry.  virtio-gpu would create buffers as
usual, create an identifier somehow (details to be hashed out), and
attach the identifier to the dma-buf so it can be used as outlined above.
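
On the guest side that could be a small helper the exporter and importer
agree on.  The sketch below is an assumption about how it might look (the
ops wrapper, the get_uuid callback and all names are made up), not an
existing kernel API:

  #include <linux/dma-buf.h>
  #include <linux/errno.h>
  #include <linux/kernel.h>
  #include <linux/uuid.h>

  /* exporter (virtio-gpu) wraps its dma_buf_ops and stores the identifier */
  struct virtio_shared_buf_ops {
          struct dma_buf_ops ops;
          int (*get_uuid)(struct dma_buf *buf, uuid_t *uuid);
  };

  /*
   * importer (virtio-vdec) asks for the identifier; a real implementation
   * would first need a reliable way to check that the dma-buf actually
   * comes from an exporter using this scheme before the container_of().
   */
  static int virtio_shared_buf_get_uuid(struct dma_buf *buf, uuid_t *uuid)
  {
          const struct virtio_shared_buf_ops *ops =
                  container_of(buf->ops,
                               const struct virtio_shared_buf_ops, ops);

          if (!ops->get_uuid)
                  return -EINVAL;
          return ops->get_uuid(buf, uuid);
  }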

Also note that the guest manages the address space, so the host can't
simply allocate guest page addresses.  Mapping host virtio-gpu resources
into the guest address space is planned; it'll most likely use a pci
memory bar to reserve some address space, and the host can map resources
into that bar on guest request.
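
The request for that would presumably also live on the virtio-gpu control
queue; a hypothetical shape (names and layout made up) could be:

  #include <linux/types.h>

  /* hypothetical, for illustration only */
  struct virtio_gpu_resource_map_request {
          __le32 resource_id;   /* host resource to map */
          __le32 padding;
          __le64 offset;        /* offset inside the shared-memory pci bar */
  };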

>  - virtio-gpu driver could then create a regular DMA-buf object for
> such memory, because it's just backed by pages (even though they may
> not be accessible to the guest; just like in the case of TrustZone
> memory protection on bare metal systems),

Hmm, well, pci memory bars are *not* backed by pages.  Maybe we can use
Documentation/driver-api/pci/p2pdma.rst though.  With that we might be
able to look up buffers using the device and dma address, without
explicitly creating some identifier.  Not investigated yet in detail.
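
If that route works out, the lookup would be keyed on the device plus the
dma address rather than on an explicit identifier.  A very rough sketch of
such a registry (nothing like this exists; all names are made up):

  #include <linux/device.h>
  #include <linux/list.h>
  #include <linux/types.h>

  struct shared_buf_entry {
          struct list_head node;
          struct device *dev;      /* exporting device, e.g. virtio-gpu */
          dma_addr_t addr;         /* start address inside the device's bar */
          size_t len;
          u32 resource_id;         /* what it resolves to on the host side */
  };

  static struct shared_buf_entry *
  shared_buf_lookup(struct list_head *registry, struct device *dev,
                    dma_addr_t addr)
  {
          struct shared_buf_entry *e;

          list_for_each_entry(e, registry, node)
                  if (e->dev == dev &&
                      addr >= e->addr && addr < e->addr + e->len)
                          return e;
          return NULL;
  }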

cheers,
  Gerd

