* Qemu Support for Virtio Video V4L2 driver
From: Saket Sinha @ 2020-05-08 23:09 UTC
  To: qemu-devel, Dmitry Sepp, Kiran Pawar, Samiullah Khawaja; +Cc: virtio-dev

Hi,

I am writing to inquire about QEMU support for the Virtio Video V4L2 driver
posted in [1]. I am currently not aware of any upstream effort toward a QEMU
reference implementation and would like to discuss how to proceed.

[1]: https://patchwork.linuxtv.org/patch/61717/

Regards,
Saket Sinha



* Re: Qemu Support for Virtio Video V4L2 driver
From: Saket Sinha @ 2020-05-09 14:09 UTC
  To: Dmitry Sepp, Kiran Pawar, Samiullah Khawaja, qemu-devel
  Cc: virtio-dev, Gerd Hoffmann, Michael S. Tsirkin

Hi,

As suggested on the #qemu-devel IRC channel, I am adding Gerd and
Michael, who can hopefully point us in the right direction on how to move
forward with QEMU support for the Virtio Video V4L2 driver posted in [1].

[1]: https://patchwork.linuxtv.org/patch/61717/

Regards,
Saket Sinha

On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> wrote:
>
> Hi ,
>
> This is to inquire about Qemu support for Virtio Video V4L2 driver
> posted in [1].
> I am currently not aware of any upstream effort for Qemu reference
> implementation and would like to discuss how to proceed with the same.
>
> [1]: https://patchwork.linuxtv.org/patch/61717/
>
> Regards,
> Saket Sinha




* Fwd: Qemu Support for Virtio Video V4L2 driver
From: Saket Sinha @ 2020-05-09 14:11 UTC
  To: Dmitry Sepp, Kiran Pawar, Samiullah Khawaja, qemu-devel, virtio-dev
  Cc: Gerd Hoffmann, Michael S. Tsirkin

Hi,

As suggested on the #qemu-devel IRC channel, I am adding virtio-dev, Gerd,
and Michael, who can hopefully point us in the right direction on how to
move forward with QEMU support for the Virtio Video V4L2 driver posted in [1].

[1]: https://patchwork.linuxtv.org/patch/61717/

Regards,
Saket Sinha

On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> wrote:
>
> Hi ,
>
> This is to inquire about Qemu support for Virtio Video V4L2 driver
> posted in [1].
> I am currently not aware of any upstream effort for Qemu reference
> implementation and would like to discuss how to proceed with the same.
>
> [1]: https://patchwork.linuxtv.org/patch/61717/
>
> Regards,
> Saket Sinha




* Re: Fwd: Qemu Support for Virtio Video V4L2 driver
From: Dmitry Sepp @ 2020-05-11  9:40 UTC
  To: Kiran Pawar, Samiullah Khawaja, qemu-devel, Saket Sinha
  Cc: virtio-dev, Gerd Hoffmann, Michael S. Tsirkin

Hi Saket and all,

As we are working with automotive platforms, we unfortunately have no plans
for a QEMU reference implementation so far.

Of course, we are ready to support the community if any help is needed. Is
there interest in supporting the FWHT format only for testing purposes, or do
you want a full-featured implementation on the QEMU side?

Please note that the spec is not finalized yet; a major update is currently
being discussed with upstream and the Chrome OS team, which is also interested
and deeply involved in the process. The update mostly implies some rewording
and reorganization of data structures, but it will certainly require a driver
rework.

Best regards,
Dmitry.

On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> Hi,
> 
> As suggested on #qemu-devel IRC channel, I am including virtio-dev, Gerd and
> Michael to point in the right direction how to move forward with Qemu
> support for Virtio Video V4L2 driver
> posted in [1].
> 
> [1]: https://patchwork.linuxtv.org/patch/61717/
> 
> Regards,
> Saket Sinha
> 
> On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> wrote:
> > Hi ,
> > 
> > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > posted in [1].
> > I am currently not aware of any upstream effort for Qemu reference
> > implementation and would like to discuss how to proceed with the same.
> > 
> > [1]: https://patchwork.linuxtv.org/patch/61717/
> > 
> > Regards,
> > Saket Sinha






* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
From: Keiichi Watanabe @ 2020-05-11 10:20 UTC
  To: Dmitry Sepp
  Cc: Kiran Pawar, Samiullah Khawaja, qemu-devel, Saket Sinha,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak

Hi Dmitry,

On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com> wrote:
>
> Hi Saket and all,
>
> As we are working with automotive platforms, unfortunately we don't plan any
> Qemu reference implementation so far.
>
> Of course we are ready to support the community if any help is needed. Is
> there interest in support for the FWHT format only for testing purpose or you
> want a full-featured implementation on the QEMU side?

I guess we don't need to implement the codec algorithms in QEMU.
Rather, QEMU would forward virtio-video requests to the host video device
or to a software library such as GStreamer or FFmpeg.
So, what we need to implement in QEMU is a kind of API translation,
which shouldn't need to care much about the actual video formats.

Regarding the FWHT format discussed in the patch thread [1], my
understanding is that Hans suggested having the QEMU implementation forward
requests to the host's vicodec module [2].
Then, we'll be able to test the virtio-video driver in QEMU on Linux
even if the host has no hardware video decoder.
(Please correct me if I'm wrong.)
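
To make the idea a bit more concrete, here is a rough sketch of the
host-side half of such forwarding, assuming the vicodec module is loaded
(e.g. via modprobe vicodec) and exposes its stateful decoder at /dev/video0;
the device path, the 640x480 resolution and the use of the single-planar API
are only illustrative assumptions, not something the driver or spec mandates:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    /* Open the host decoder that a QEMU backend would forward requests to. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/video0");
        return 1;
    }

    /* OUTPUT queue: the compressed FWHT bitstream coming from the guest. */
    struct v4l2_format out;
    memset(&out, 0, sizeof(out));
    out.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    out.fmt.pix.pixelformat = V4L2_PIX_FMT_FWHT;
    out.fmt.pix.width = 640;
    out.fmt.pix.height = 480;
    if (ioctl(fd, VIDIOC_S_FMT, &out) < 0)
        perror("VIDIOC_S_FMT (OUTPUT)");

    /* CAPTURE queue: decoded raw frames to be handed back to the guest. */
    struct v4l2_format cap;
    memset(&cap, 0, sizeof(cap));
    cap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &cap) < 0)
        perror("VIDIOC_G_FMT (CAPTURE)");

    printf("decoded frames: %ux%u, fourcc 0x%08x\n",
           cap.fmt.pix.width, cap.fmt.pix.height, cap.fmt.pix.pixelformat);

    /* A real backend would continue with VIDIOC_REQBUFS, VIDIOC_QBUF and
     * VIDIOC_STREAMON on both queues and shuttle buffers between the
     * virtqueues and the V4L2 device; that part is omitted here. */
    close(fd);
    return 0;
}

The point is only that the QEMU side is mostly plumbing between virtqueue
requests and an existing V4L2 (or GStreamer/FFmpeg) backend, not a codec
implementation.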

Let me add Hans and Linux media ML in CC.

[1]  https://patchwork.linuxtv.org/patch/61717/
[2] https://lwn.net/Articles/760650/

Best regards,
Keiichi

>
> Please note that the spec is not finalized yet and a major update is now
> discussed with upstream and the Chrome OS team, which is also interested and
> deeply involved in the process. The update mostly implies some rewording and
> reorganization of data structures, but for sure will require a driver rework.
>
> Best regards,
> Dmitry.
>
> On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > Hi,
> >
> > As suggested on #qemu-devel IRC channel, I am including virtio-dev, Gerd and
> > Michael to point in the right direction how to move forward with Qemu
> > support for Virtio Video V4L2 driver
> > posted in [1].
> >
> > [1]: https://patchwork.linuxtv.org/patch/61717/
> >
> > Regards,
> > Saket Sinha
> >
> > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> wrote:
> > > Hi ,
> > >
> > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > posted in [1].
> > > I am currently not aware of any upstream effort for Qemu reference
> > > implementation and would like to discuss how to proceed with the same.
> > >
> > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > >
> > > Regards,
> > > Saket Sinha
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
>




* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
From: Saket Sinha @ 2020-05-11 11:05 UTC
  To: Keiichi Watanabe
  Cc: Dmitry Sepp, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak

Hi Keiichi,

I do not support the approach of having the QEMU implementation forward
requests to the host's vicodec module, since this can limit the scope of
the virtio-video device to testing only. Instead, the device can be used
for multiple use cases, such as:

1. A VM gets access to paravirtualized camera devices that share the
video frames coming in through an actual HW camera attached to the host.

2. If the host has multiple video devices (especially on ARM SoCs, over
MIPI or USB interfaces), different VMs can be started or hotplugged with
selective video streams from the actual HW video devices.

Also, instead of using libraries like GStreamer in host userspace, they
can be used inside the VM userspace after it gets access to the
paravirtualized HW camera devices.

Regards,
Saket Sinha

On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org> wrote:
>
> Hi Dmitry,
>
> On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com> wrote:
> >
> > Hi Saket and all,
> >
> > As we are working with automotive platforms, unfortunately we don't plan any
> > Qemu reference implementation so far.
> >
> > Of course we are ready to support the community if any help is needed. Is
> > there interest in support for the FWHT format only for testing purpose or you
> > want a full-featured implementation on the QEMU side?
>
> I guess we don't need to implement the codec algorithm in QEMU.
> Rather, QEMU forwards virtio-video requests to the host video device
> or a software library such as GStreamer or ffmpeg.
> So, what we need to implement in QEMU is a kind of API translation,
> which shouldn't care about actual video formats so much.
>
> Regarding the FWHT format discussed in the patch thread [1], in my
> understanding, Hans suggested to have QEMU implementation forwarding
> requests to the host's vicodec module [2].
> Then, we'll be able to test the virtio-video driver on QEMU on Linux
> even if the host Linux has no hardware video decoder.
> (Please correct me if I'm wrong.)
>
> Let me add Hans and Linux media ML in CC.
>
> [1]  https://patchwork.linuxtv.org/patch/61717/
> [2] https://lwn.net/Articles/760650/
>
> Best regards,
> Keiichi
>
> >
> > Please note that the spec is not finalized yet and a major update is now
> > discussed with upstream and the Chrome OS team, which is also interested and
> > deeply involved in the process. The update mostly implies some rewording and
> > reorganization of data structures, but for sure will require a driver rework.
> >
> > Best regards,
> > Dmitry.
> >
> > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > Hi,
> > >
> > > As suggested on #qemu-devel IRC channel, I am including virtio-dev, Gerd and
> > > Michael to point in the right direction how to move forward with Qemu
> > > support for Virtio Video V4L2 driver
> > > posted in [1].
> > >
> > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > >
> > > Regards,
> > > Saket Sinha
> > >
> > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> wrote:
> > > > Hi ,
> > > >
> > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > posted in [1].
> > > > I am currently not aware of any upstream effort for Qemu reference
> > > > implementation and would like to discuss how to proceed with the same.
> > > >
> > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > >
> > > > Regards,
> > > > Saket Sinha
> >
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> >




* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
From: Dmitry Sepp @ 2020-05-11 11:25 UTC
  To: Saket Sinha
  Cc: Keiichi Watanabe, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak

Hi Saket,

On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> Hi Keiichi,
> 
> I do not support the approach of  QEMU implementation forwarding
> requests to the host's vicodec module since  this can limit the scope
> of the virtio-video device only for testing,

That was my understanding as well.

> which instead can be used with multiple use cases such as -
> 
> 1. VM gets access to paravirtualized  camera devices which shares the
> video frames input through actual HW camera attached to Host.

This use case is out of the scope of virtio-video. Initially I had planned to
support capture-only streams such as cameras as well, but the decision was
later made upstream that the camera should be implemented as a separate device
type. We still plan to implement a simple frame-capture capability as a
downstream patch, though.

> 
> 2. If Host has multiple video devices (especially in ARM SOCs over
> MIPI interfaces or USB), different VM can be started or hotplugged
> with selective video streams from actual HW video devices.

We do support this in our device implementation, but the spec in general has
no requirements or instructions regarding this. It is in fact flexible enough
to provide an abstraction on top of several HW devices.

> 
> Also instead of using libraries like Gstreamer in Host userspace, they
> can also be used inside the VM userspace after getting access to
> paravirtualized HW camera devices .
> 

Regarding the cameras, unfortunately the same as above applies.

Best regards,
Dmitry.

> Regards,
> Saket Sinha
> 
> On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org> 
wrote:
> > Hi Dmitry,
> > 
> > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com> 
wrote:
> > > Hi Saket and all,
> > > 
> > > As we are working with automotive platforms, unfortunately we don't plan
> > > any Qemu reference implementation so far.
> > > 
> > > Of course we are ready to support the community if any help is needed.
> > > Is
> > > there interest in support for the FWHT format only for testing purpose
> > > or you want a full-featured implementation on the QEMU side?
> > 
> > I guess we don't need to implement the codec algorithm in QEMU.
> > Rather, QEMU forwards virtio-video requests to the host video device
> > or a software library such as GStreamer or ffmpeg.
> > So, what we need to implement in QEMU is a kind of API translation,
> > which shouldn't care about actual video formats so much.
> > 
> > Regarding the FWHT format discussed in the patch thread [1], in my
> > understanding, Hans suggested to have QEMU implementation forwarding
> > requests to the host's vicodec module [2].
> > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > even if the host Linux has no hardware video decoder.
> > (Please correct me if I'm wrong.)
> > 
> > Let me add Hans and Linux media ML in CC.
> > 
> > [1]  https://patchwork.linuxtv.org/patch/61717/
> > [2] https://lwn.net/Articles/760650/
> > 
> > Best regards,
> > Keiichi
> > 
> > > Please note that the spec is not finalized yet and a major update is now
> > > discussed with upstream and the Chrome OS team, which is also interested
> > > and deeply involved in the process. The update mostly implies some
> > > rewording and reorganization of data structures, but for sure will
> > > require a driver rework.
> > > 
> > > Best regards,
> > > Dmitry.
> > > 
> > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > Hi,
> > > > 
> > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > Gerd and Michael to point in the right direction how to move forward
> > > > with Qemu support for Virtio Video V4L2 driver
> > > > posted in [1].
> > > > 
> > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > 
> > > > Regards,
> > > > Saket Sinha
> > > > 
> > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> 
wrote:
> > > > > Hi ,
> > > > > 
> > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > posted in [1].
> > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > implementation and would like to discuss how to proceed with the
> > > > > same.
> > > > > 
> > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > 
> > > > > Regards,
> > > > > Saket Sinha
> > > 
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org






* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
From: Michael S. Tsirkin @ 2020-05-11 11:32 UTC
  To: Dmitry Sepp
  Cc: Saket Sinha, Keiichi Watanabe, Kiran Pawar, Samiullah Khawaja,
	qemu-devel, virtio-dev, Gerd Hoffmann, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak

On Mon, May 11, 2020 at 01:25:23PM +0200, Dmitry Sepp wrote:
> Hi Saket,
> 
> On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > Hi Keiichi,
> > 
> > I do not support the approach of  QEMU implementation forwarding
> > requests to the host's vicodec module since  this can limit the scope
> > of the virtio-video device only for testing,
> 
> That was my understanding as well.
> 
> > which instead can be used with multiple use cases such as -
> > 
> > 1. VM gets access to paravirtualized  camera devices which shares the
> > video frames input through actual HW camera attached to Host.
> 
> This use-case is out of the scope of virtio-video. Initially I had a plan to 
> support capture-only streams like camera as well, but later the decision was 
> made upstream that camera should be implemented as separate device type. We 
> still plan to implement a simple frame capture capability as a downstream 
> patch though.

You want to spec out what's in the field; spec-wise, internal
upstream/downstream distinctions are not important.

> > 
> > 2. If Host has multiple video devices (especially in ARM SOCs over
> > MIPI interfaces or USB), different VM can be started or hotplugged
> > with selective video streams from actual HW video devices.
> 
> We do support this in our device implementation. But spec in general has no 
> requirements or instructions regarding this. And it is in fact flexible enough 
> to provide abstraction on top of several HW devices.
> 
> > 
> > Also instead of using libraries like Gstreamer in Host userspace, they
> > can also be used inside the VM userspace after getting access to
> > paravirtualized HW camera devices .
> > 
> 
> Regarding the cameras, unfortunately same as above.
> 
> Best regards,
> Dmitry.
> 
> > Regards,
> > Saket Sinha
> > 
> > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org> 
> wrote:
> > > Hi Dmitry,
> > > 
> > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com> 
> wrote:
> > > > Hi Saket and all,
> > > > 
> > > > As we are working with automotive platforms, unfortunately we don't plan
> > > > any Qemu reference implementation so far.
> > > > 
> > > > Of course we are ready to support the community if any help is needed.
> > > > Is
> > > > there interest in support for the FWHT format only for testing purpose
> > > > or you want a full-featured implementation on the QEMU side?
> > > 
> > > I guess we don't need to implement the codec algorithm in QEMU.
> > > Rather, QEMU forwards virtio-video requests to the host video device
> > > or a software library such as GStreamer or ffmpeg.
> > > So, what we need to implement in QEMU is a kind of API translation,
> > > which shouldn't care about actual video formats so much.
> > > 
> > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > understanding, Hans suggested to have QEMU implementation forwarding
> > > requests to the host's vicodec module [2].
> > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > even if the host Linux has no hardware video decoder.
> > > (Please correct me if I'm wrong.)
> > > 
> > > Let me add Hans and Linux media ML in CC.
> > > 
> > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > [2] https://lwn.net/Articles/760650/
> > > 
> > > Best regards,
> > > Keiichi
> > > 
> > > > Please note that the spec is not finalized yet and a major update is now
> > > > discussed with upstream and the Chrome OS team, which is also interested
> > > > and deeply involved in the process. The update mostly implies some
> > > > rewording and reorganization of data structures, but for sure will
> > > > require a driver rework.
> > > > 
> > > > Best regards,
> > > > Dmitry.
> > > > 
> > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > Hi,
> > > > > 
> > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > posted in [1].
> > > > > 
> > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > 
> > > > > Regards,
> > > > > Saket Sinha
> > > > > 
> > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> 
> wrote:
> > > > > > Hi ,
> > > > > > 
> > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > posted in [1].
> > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > implementation and would like to discuss how to proceed with the
> > > > > > same.
> > > > > > 
> > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > 
> > > > > > Regards,
> > > > > > Saket Sinha
> > > > 
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> 




* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 11:25             ` Dmitry Sepp
@ 2020-05-11 11:34               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 56+ messages in thread
From: Michael S. Tsirkin @ 2020-05-11 11:34 UTC (permalink / raw)
  To: Dmitry Sepp
  Cc: Saket Sinha, Keiichi Watanabe, Kiran Pawar, Samiullah Khawaja,
	qemu-devel, virtio-dev, Gerd Hoffmann, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak

On Mon, May 11, 2020 at 01:25:23PM +0200, Dmitry Sepp wrote:
> Hi Saket,
> 
> On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > Hi Keiichi,
> > 
> > I do not support the approach of  QEMU implementation forwarding
> > requests to the host's vicodec module since  this can limit the scope
> > of the virtio-video device only for testing,
> 
> That was my understanding as well.
> 
> > which instead can be used with multiple use cases such as -
> > 
> > 1. VM gets access to paravirtualized  camera devices which shares the
> > video frames input through actual HW camera attached to Host.
> 
> This use-case is out of the scope of virtio-video. Initially I had a plan to 
> support capture-only streams like camera as well, but later the decision was 
> made upstream that camera should be implemented as separate device type. We 
> still plan to implement a simple frame capture capability as a downstream 
> patch though.
> 
> > 
> > 2. If Host has multiple video devices (especially in ARM SOCs over
> > MIPI interfaces or USB), different VM can be started or hotplugged
> > with selective video streams from actual HW video devices.
> 
> We do support this in our device implementation. But spec in general has no 
> requirements or instructions regarding this. And it is in fact flexible enough 
> to provide abstraction on top of several HW devices.

Hmm, I agree: if it's just for pass-through of host devices, that's a very
limited use case. It is not out of scope for virtio, but let's make it
clear in the device name that it's pass-through, so that if
people want to create a virtualizable interface down the road
they don't feel blocked.



> > 
> > Also instead of using libraries like Gstreamer in Host userspace, they
> > can also be used inside the VM userspace after getting access to
> > paravirtualized HW camera devices .
> > 
> 
> Regarding the cameras, unfortunately same as above.
> 
> Best regards,
> Dmitry.
> 
> > Regards,
> > Saket Sinha
> > 
> > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org> 
> wrote:
> > > Hi Dmitry,
> > > 
> > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com> 
> wrote:
> > > > Hi Saket and all,
> > > > 
> > > > As we are working with automotive platforms, unfortunately we don't plan
> > > > any Qemu reference implementation so far.
> > > > 
> > > > Of course we are ready to support the community if any help is needed.
> > > > Is
> > > > there interest in support for the FWHT format only for testing purpose
> > > > or you want a full-featured implementation on the QEMU side?
> > > 
> > > I guess we don't need to implement the codec algorithm in QEMU.
> > > Rather, QEMU forwards virtio-video requests to the host video device
> > > or a software library such as GStreamer or ffmpeg.
> > > So, what we need to implement in QEMU is a kind of API translation,
> > > which shouldn't care about actual video formats so much.
> > > 
> > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > understanding, Hans suggested to have QEMU implementation forwarding
> > > requests to the host's vicodec module [2].
> > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > even if the host Linux has no hardware video decoder.
> > > (Please correct me if I'm wrong.)
> > > 
> > > Let me add Hans and Linux media ML in CC.
> > > 
> > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > [2] https://lwn.net/Articles/760650/
> > > 
> > > Best regards,
> > > Keiichi
> > > 
> > > > Please note that the spec is not finalized yet and a major update is now
> > > > discussed with upstream and the Chrome OS team, which is also interested
> > > > and deeply involved in the process. The update mostly implies some
> > > > rewording and reorganization of data structures, but for sure will
> > > > require a driver rework.
> > > > 
> > > > Best regards,
> > > > Dmitry.
> > > > 
> > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > Hi,
> > > > > 
> > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > posted in [1].
> > > > > 
> > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > 
> > > > > Regards,
> > > > > Saket Sinha
> > > > > 
> > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com> 
> wrote:
> > > > > > Hi ,
> > > > > > 
> > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > posted in [1].
> > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > implementation and would like to discuss how to proceed with the
> > > > > > same.
> > > > > > 
> > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > 
> > > > > > Regards,
> > > > > > Saket Sinha
> > > > 
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> 


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 11:25             ` Dmitry Sepp
@ 2020-05-11 11:49               ` Keiichi Watanabe
  -1 siblings, 0 replies; 56+ messages in thread
From: Keiichi Watanabe @ 2020-05-11 11:49 UTC (permalink / raw)
  To: Dmitry Sepp
  Cc: Saket Sinha, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak

Hi,

Thanks, Saket, for your feedback. As Dmitry mentioned, we're focusing on
video encoding and decoding, not cameras. So my reply was about how to
implement paravirtualized video codec devices.

On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <dmitry.sepp@opensynergy.com> wrote:
>
> Hi Saket,
>
> On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > Hi Keiichi,
> >
> > I do not support the approach of  QEMU implementation forwarding
> > requests to the host's vicodec module since  this can limit the scope
> > of the virtio-video device only for testing,
>
> That was my understanding as well.

Not really, because the API that vicodec provides is the V4L2 stateful
decoder interface [1], which is also used by other video drivers on
Linux.
The difference between vicodec and actual device drivers is that
vicodec performs decoding in kernel space without using special
video devices. In other words, vicodec is a software decoder in kernel
space which provides the same interface as actual video drivers.
Thus, if the QEMU implementation can forward virtio-video requests to
vicodec, it can forward them to actual V4L2 video decoder devices
as well, and the VM gets access to a paravirtualized video device.

The reason why we discussed vicodec in the previous thread was that it
allows us to test the virtio-video driver without any hardware requirement.

[1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
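
To be concrete, here is a rough sketch of what I have in mind on the host
side (only an illustration; the device node, resolution and formats are my
assumptions, and real code would also handle buffer allocation and streaming).
Forwarding a guest's virtio-video stream would essentially drive the standard
V4L2 stateful decoder ioctls against vicodec or a real decoder node:

/* Hypothetical host-side helper: open a V4L2 stateful decoder (e.g. the node
 * registered by vicodec) and configure its OUTPUT (bitstream) and CAPTURE
 * (decoded frames) queues. Error handling is minimal; paths are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* assumed decoder node */
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("decoder: %s\n", (const char *)cap.card);

    /* OUTPUT queue carries the compressed bitstream (FWHT in the vicodec case). */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_FWHT;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        perror("S_FMT(OUTPUT)");

    /* CAPTURE queue produces decoded frames; ask the driver what it chose. */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)
        printf("capture: %ux%u\n", fmt.fmt.pix.width, fmt.fmt.pix.height);

    /* A real implementation would continue with VIDIOC_REQBUFS, buffer
     * queueing and VIDIOC_STREAMON, driven by the guest's virtio-video
     * requests. */
    close(fd);
    return 0;
}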

>
> > which instead can be used with multiple use cases such as -
> >
> > 1. VM gets access to paravirtualized  camera devices which shares the
> > video frames input through actual HW camera attached to Host.
>
> This use-case is out of the scope of virtio-video. Initially I had a plan to
> support capture-only streams like camera as well, but later the decision was
> made upstream that camera should be implemented as separate device type. We
> still plan to implement a simple frame capture capability as a downstream
> patch though.
>
> >
> > 2. If Host has multiple video devices (especially in ARM SOCs over
> > MIPI interfaces or USB), different VM can be started or hotplugged
> > with selective video streams from actual HW video devices.
>
> We do support this in our device implementation. But spec in general has no
> requirements or instructions regarding this. And it is in fact flexible enough
> to provide abstraction on top of several HW devices.
>
> >
> > Also instead of using libraries like Gstreamer in Host userspace, they
> > can also be used inside the VM userspace after getting access to
> > paravirtualized HW camera devices .

Regarding GStreamer, I meant this video decoding API [2]. If QEMU
can translate virtio-video requests to this API, we can easily support
multiple platforms.
I'm not sure how feasible it is, though, as I have no experience of
using this API myself...

[2] https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
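
As a purely illustrative sketch of that translation idea (the pipeline
string, input source and element choices are my assumptions, not anything
from the spec), the host side could hand the bitstream to decodebin and let
GStreamer pick a hardware decoder where one is available:

/* Minimal GStreamer decode sketch: decodebin selects a hardware decoder
 * (e.g. VA-API) when available and falls back to software otherwise.
 * In a real device implementation the input would come from the guest's
 * virtio-video buffers rather than a test file. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "filesrc location=test.h264 ! h264parse ! decodebin ! "
        "videoconvert ! fakesink", NULL);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Wait until the stream ends or an error occurs. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}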

Best regards,
Keiichi

> >
>
> Regarding the cameras, unfortunately same as above.
>
> Best regards,
> Dmitry.
>
> > Regards,
> > Saket Sinha
> >
> > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org>
> wrote:
> > > Hi Dmitry,
> > >
> > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com>
> wrote:
> > > > Hi Saket and all,
> > > >
> > > > As we are working with automotive platforms, unfortunately we don't plan
> > > > any Qemu reference implementation so far.
> > > >
> > > > Of course we are ready to support the community if any help is needed.
> > > > Is
> > > > there interest in support for the FWHT format only for testing purpose
> > > > or you want a full-featured implementation on the QEMU side?
> > >
> > > I guess we don't need to implement the codec algorithm in QEMU.
> > > Rather, QEMU forwards virtio-video requests to the host video device
> > > or a software library such as GStreamer or ffmpeg.
> > > So, what we need to implement in QEMU is a kind of API translation,
> > > which shouldn't care about actual video formats so much.
> > >
> > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > understanding, Hans suggested to have QEMU implementation forwarding
> > > requests to the host's vicodec module [2].
> > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > even if the host Linux has no hardware video decoder.
> > > (Please correct me if I'm wrong.)
> > >
> > > Let me add Hans and Linux media ML in CC.
> > >
> > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > [2] https://lwn.net/Articles/760650/
> > >
> > > Best regards,
> > > Keiichi
> > >
> > > > Please note that the spec is not finalized yet and a major update is now
> > > > discussed with upstream and the Chrome OS team, which is also interested
> > > > and deeply involved in the process. The update mostly implies some
> > > > rewording and reorganization of data structures, but for sure will
> > > > require a driver rework.
> > > >
> > > > Best regards,
> > > > Dmitry.
> > > >
> > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > Hi,
> > > > >
> > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > posted in [1].
> > > > >
> > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > >
> > > > > Regards,
> > > > > Saket Sinha
> > > > >
> > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com>
> wrote:
> > > > > > Hi ,
> > > > > >
> > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > posted in [1].
> > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > implementation and would like to discuss how to proceed with the
> > > > > > same.
> > > > > >
> > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > >
> > > > > > Regards,
> > > > > > Saket Sinha
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
>
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 11:49               ` Keiichi Watanabe
@ 2020-05-11 12:32                 ` Saket Sinha
  -1 siblings, 0 replies; 56+ messages in thread
From: Saket Sinha @ 2020-05-11 12:32 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: Dmitry Sepp, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak, libcamera-devel

Hi Keiichi,

> > > I do not support the approach of  QEMU implementation forwarding
> > > requests to the host's vicodec module since  this can limit the scope
> > > of the virtio-video device only for testing,
> >
> > That was my understanding as well.
>
> Not really because the API which the vicodec provides is V4L2 stateful
> decoder interface [1], which are also used by other video drivers on
> Linux.
> The difference between vicodec and actual device drivers is that
> vicodec performs decoding in the kernel space without using special
> video devices. In other words, vicodec is a software decoder in kernel
> space which provides the same interface with actual video drivers.
> Thus, if the QEMU implementation can forward virtio-video requests to
> vicodec, it can forward them to the actual V4L2 video decoder devices
> as well and VM gets access to a paravirtualized video device.
>
> The reason why we discussed vicodec in the previous thread was it'll
> allow us to test the virtio-video driver without hardware requirement.
>
> [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
>

Thanks for the clarification.

Could you share your views on whether it would also be possible to support
paravirtualized v4l-subdev devices, which the media controller framework
uses to expose ISP processing blocks to Linux userspace. Of course, we might
need to change the implementation and the spec to support that.
Please refer to [1] for details.
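
To make the question more concrete, here is a minimal sketch (assuming a
media controller node at /dev/media0; purely illustrative, not tied to any
existing virtio spec) of the kind of topology such a paravirtualized
interface would ultimately have to expose to the guest:

/* Hypothetical illustration: enumerate the entities (sensors, ISP blocks,
 * v4l-subdevs) that the host's media controller exposes. A paravirtualized
 * subdev interface would need to carry a similar topology to the guest. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
    int fd = open("/dev/media0", O_RDWR);   /* assumed media controller node */
    if (fd < 0) { perror("open"); return 1; }

    struct media_device_info info;
    if (ioctl(fd, MEDIA_IOC_DEVICE_INFO, &info) == 0)
        printf("media device: %s (driver %s)\n", info.model, info.driver);

    /* Walk all entities using the "next entity" convention. */
    struct media_entity_desc ent;
    memset(&ent, 0, sizeof(ent));
    ent.id = MEDIA_ENT_ID_FLAG_NEXT;
    while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
        printf("entity %u: %s (pads=%u, links=%u)\n",
               ent.id & ~MEDIA_ENT_ID_FLAG_NEXT, ent.name,
               (unsigned)ent.pads, (unsigned)ent.links);
        ent.id |= MEDIA_ENT_ID_FLAG_NEXT;
    }
    return 0;
}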

> >
> > > which instead can be used with multiple use cases such as -
> > >
> > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > video frames input through actual HW camera attached to Host.
> >
> > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > support capture-only streams like camera as well, but later the decision was
> > made upstream that camera should be implemented as separate device type. We
> > still plan to implement a simple frame capture capability as a downstream
> > patch though.
> >
> > >
> > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > with selective video streams from actual HW video devices.
> >
> > We do support this in our device implementation. But spec in general has no
> > requirements or instructions regarding this. And it is in fact flexible enough
> > to provide abstraction on top of several HW devices.
> >
> > >
> > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > can also be used inside the VM userspace after getting access to
> > > paravirtualized HW camera devices .
>
> Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> can translate virtio-video requests to this API, we can easily support
> multiple platforms.
> I'm not sure how feasible it is though, as I have no experience of
> using this API by myself...
>
> [2] https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
>

As pointed out above, GStreamer is not the only framework out there.
We also have the newer libcamera framework [2] and OpenMAX (used in the
Android HAL); see [3] for a comparison.

My intention is to make the implementation generic enough that it can be
used by different frameworks on different platforms.

[1]: https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf
[2]: http://libcamera.org
[3]: https://processors.wiki.ti.com/images/7/7e/OMX_Android_GST_Comparison.pdf

Regards,
Saket Sinha

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-11 12:32                 ` Saket Sinha
  0 siblings, 0 replies; 56+ messages in thread
From: Saket Sinha @ 2020-05-11 12:32 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: Samiullah Khawaja, virtio-dev, Alex Lau, Kiran Pawar,
	Alexandre Courbot, Michael S. Tsirkin, qemu-devel, Tomasz Figa,
	Hans Verkuil, libcamera-devel, Gerd Hoffmann, Dmitry Sepp,
	Pawel Osciak, Linux Media Mailing List

Hi Keiichi,

> > > I do not support the approach of  QEMU implementation forwarding
> > > requests to the host's vicodec module since  this can limit the scope
> > > of the virtio-video device only for testing,
> >
> > That was my understanding as well.
>
> Not really because the API which the vicodec provides is V4L2 stateful
> decoder interface [1], which are also used by other video drivers on
> Linux.
> The difference between vicodec and actual device drivers is that
> vicodec performs decoding in the kernel space without using special
> video devices. In other words, vicodec is a software decoder in kernel
> space which provides the same interface with actual video drivers.
> Thus, if the QEMU implementation can forward virtio-video requests to
> vicodec, it can forward them to the actual V4L2 video decoder devices
> as well and VM gets access to a paravirtualized video device.
>
> The reason why we discussed vicodec in the previous thread was it'll
> allow us to test the virtio-video driver without hardware requirement.
>
> [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
>

Thanks for clarification.

Could  you provide your views if it would be possible to support also
paravirtualized v4l-subdev devices which is enabled by media
controller to expose ISP processing blocks to linux userspace.
Ofcourse, we might need to change implementation and spec to support that
Please refer (1) for details.

> >
> > > which instead can be used with multiple use cases such as -
> > >
> > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > video frames input through actual HW camera attached to Host.
> >
> > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > support capture-only streams like camera as well, but later the decision was
> > made upstream that camera should be implemented as separate device type. We
> > still plan to implement a simple frame capture capability as a downstream
> > patch though.
> >
> > >
> > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > with selective video streams from actual HW video devices.
> >
> > We do support this in our device implementation. But spec in general has no
> > requirements or instructions regarding this. And it is in fact flexible enough
> > to provide abstraction on top of several HW devices.
> >
> > >
> > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > can also be used inside the VM userspace after getting access to
> > > paravirtualized HW camera devices .
>
> Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> can translate virtio-video requests to this API, we can easily support
> multiple platforms.
> I'm not sure how feasible it is though, as I have no experience of
> using this API by myself...
>
> [2] https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
>

Like pointed out above, Gstreamer is not the only framework present there.
We have the newer libcamera framework [2] and then Openmax (used in
Android Hal )
Refer [3] for comparison.

My intentions are to make the implementation more generic so that it
can be used by different frameworks on different platforms.

[1]: https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf
[2]: http://libcamera.org
[3]: https://processors.wiki.ti.com/images/7/7e/OMX_Android_GST_Comparison.pdf

Regards,
Saket Sinha


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-11 12:32                 ` Saket Sinha
  0 siblings, 0 replies; 56+ messages in thread
From: Saket Sinha @ 2020-05-11 12:32 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: Dmitry Sepp, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak, libcamera-devel

Hi Keiichi,

> > > I do not support the approach of  QEMU implementation forwarding
> > > requests to the host's vicodec module since  this can limit the scope
> > > of the virtio-video device only for testing,
> >
> > That was my understanding as well.
>
> Not really because the API which the vicodec provides is V4L2 stateful
> decoder interface [1], which are also used by other video drivers on
> Linux.
> The difference between vicodec and actual device drivers is that
> vicodec performs decoding in the kernel space without using special
> video devices. In other words, vicodec is a software decoder in kernel
> space which provides the same interface with actual video drivers.
> Thus, if the QEMU implementation can forward virtio-video requests to
> vicodec, it can forward them to the actual V4L2 video decoder devices
> as well and VM gets access to a paravirtualized video device.
>
> The reason why we discussed vicodec in the previous thread was it'll
> allow us to test the virtio-video driver without hardware requirement.
>
> [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
>

Thanks for clarification.

Could  you provide your views if it would be possible to support also
paravirtualized v4l-subdev devices which is enabled by media
controller to expose ISP processing blocks to linux userspace.
Ofcourse, we might need to change implementation and spec to support that
Please refer (1) for details.

> >
> > > which instead can be used with multiple use cases such as -
> > >
> > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > video frames input through actual HW camera attached to Host.
> >
> > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > support capture-only streams like camera as well, but later the decision was
> > made upstream that camera should be implemented as separate device type. We
> > still plan to implement a simple frame capture capability as a downstream
> > patch though.
> >
> > >
> > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > with selective video streams from actual HW video devices.
> >
> > We do support this in our device implementation. But spec in general has no
> > requirements or instructions regarding this. And it is in fact flexible enough
> > to provide abstraction on top of several HW devices.
> >
> > >
> > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > can also be used inside the VM userspace after getting access to
> > > paravirtualized HW camera devices .
>
> Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> can translate virtio-video requests to this API, we can easily support
> multiple platforms.
> I'm not sure how feasible it is though, as I have no experience of
> using this API by myself...
>
> [2] https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
>

As pointed out above, GStreamer is not the only framework out there.
We also have the newer libcamera framework [2] and OpenMAX (used in
the Android HAL); refer to [3] for a comparison.

My intention is to make the implementation generic enough that it can
be used by different frameworks on different platforms.

[1]: https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf
[2]: http://libcamera.org
[3]: https://processors.wiki.ti.com/images/7/7e/OMX_Android_GST_Comparison.pdf

Regards,
Saket Sinha


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 12:32                 ` Saket Sinha
@ 2020-05-11 14:06                   ` Keiichi Watanabe
  -1 siblings, 0 replies; 56+ messages in thread
From: Keiichi Watanabe @ 2020-05-11 14:06 UTC (permalink / raw)
  To: Saket Sinha
  Cc: Dmitry Sepp, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak, libcamera-devel

Hi Saket,

On Mon, May 11, 2020 at 9:33 PM Saket Sinha <saket.sinha89@gmail.com> wrote:
>
> Hi Keiichi,
>
> > > > I do not support the approach of  QEMU implementation forwarding
> > > > requests to the host's vicodec module since  this can limit the scope
> > > > of the virtio-video device only for testing,
> > >
> > > That was my understanding as well.
> >
> > Not really because the API which the vicodec provides is V4L2 stateful
> > decoder interface [1], which are also used by other video drivers on
> > Linux.
> > The difference between vicodec and actual device drivers is that
> > vicodec performs decoding in the kernel space without using special
> > video devices. In other words, vicodec is a software decoder in kernel
> > space which provides the same interface with actual video drivers.
> > Thus, if the QEMU implementation can forward virtio-video requests to
> > vicodec, it can forward them to the actual V4L2 video decoder devices
> > as well and VM gets access to a paravirtualized video device.
> >
> > The reason why we discussed vicodec in the previous thread was it'll
> > allow us to test the virtio-video driver without hardware requirement.
> >
> > [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> >
>
> Thanks for clarification.
>
> Could  you provide your views if it would be possible to support also
> paravirtualized v4l-subdev devices which is enabled by media
> controller to expose ISP processing blocks to linux userspace.
> Ofcourse, we might need to change implementation and spec to support that
> Please refer (1) for details.

Again, the current virtio-video protocol and driver only support video
encoding and decoding. We have had no detailed discussion about camera
support yet.
Moreover, I personally disagree with supporting video capture in the
virtio-video protocol. Instead, I believe it's better to have a
separate protocol like "virtio-camera". Decoupling the video codec APIs
from the camera APIs should keep both protocols simpler and easier to
maintain. I suggested this idea in [1].

So, the answer to your question is: no for the virtio-video protocol.
But it's possible to start designing a new "virtio-camera" protocol
that supports camera features, including image processing.

[1] https://markmail.org/message/4q2g5oqniw62pmqd

>
> > >
> > > > which instead can be used with multiple use cases such as -
> > > >
> > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > video frames input through actual HW camera attached to Host.
> > >
> > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > support capture-only streams like camera as well, but later the decision was
> > > made upstream that camera should be implemented as separate device type. We
> > > still plan to implement a simple frame capture capability as a downstream
> > > patch though.
> > >
> > > >
> > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > with selective video streams from actual HW video devices.
> > >
> > > We do support this in our device implementation. But spec in general has no
> > > requirements or instructions regarding this. And it is in fact flexible enough
> > > to provide abstraction on top of several HW devices.
> > >
> > > >
> > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > can also be used inside the VM userspace after getting access to
> > > > paravirtualized HW camera devices .
> >
> > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > can translate virtio-video requests to this API, we can easily support
> > multiple platforms.
> > I'm not sure how feasible it is though, as I have no experience of
> > using this API by myself...
> >
> > [2] https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> >
>
> Like pointed out above, Gstreamer is not the only framework present there.
> We have the newer libcamera framework [2] and then Openmax (used in
> Android Hal )
> Refer [3] for comparison.

It seems that we had a miscommunication here. While I had mentioned
GStreamer as a generic implementation to cover "video decoding" APIs
on various platforms, you were talking about "camera" APIs.
As I said above, virtio-video is NOT designed for cameras.

For abstracting video decoding APIs, I don't know of any better
library than GStreamer. For cameras, libcamera sounds good, but I'm
not so familiar with that area...

Best regards,
Keiichi


>
> My intentions are to make the implementation more generic so that it
> can be used by different frameworks on different platforms.
>
> [1]: https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf
> [2]: http://libcamera.org
> [3]: https://processors.wiki.ti.com/images/7/7e/OMX_Android_GST_Comparison.pdf
>
> Regards,
> Saket Sinha

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [libcamera-devel] [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 14:06                   ` Keiichi Watanabe
@ 2020-05-11 14:31                     ` Laurent Pinchart
  -1 siblings, 0 replies; 56+ messages in thread
From: Laurent Pinchart @ 2020-05-11 14:31 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: Saket Sinha, Samiullah Khawaja, virtio-dev, Alex Lau,
	Kiran Pawar, Alexandre Courbot, Michael S. Tsirkin, qemu-devel,
	libcamera-devel, Gerd Hoffmann, Dmitry Sepp, Pawel Osciak,
	Linux Media Mailing List

Hello,

Jumping in the middle of this thread, so I apologize if some of my
comments are a bit out of context.

On Mon, May 11, 2020 at 11:06:34PM +0900, Keiichi Watanabe wrote:
> On Mon, May 11, 2020 at 9:33 PM Saket Sinha <saket.sinha89@gmail.com> wrote:
> > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > of the virtio-video device only for testing,
> > > >
> > > > That was my understanding as well.
> > >
> > > Not really because the API which the vicodec provides is V4L2 stateful
> > > decoder interface [1], which are also used by other video drivers on
> > > Linux.
> > > The difference between vicodec and actual device drivers is that
> > > vicodec performs decoding in the kernel space without using special
> > > video devices. In other words, vicodec is a software decoder in kernel
> > > space which provides the same interface with actual video drivers.
> > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > as well and VM gets access to a paravirtualized video device.
> > >
> > > The reason why we discussed vicodec in the previous thread was it'll
> > > allow us to test the virtio-video driver without hardware requirement.
> > >
> > > [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > >
> >
> > Thanks for clarification.
> >
> > Could  you provide your views if it would be possible to support also
> > paravirtualized v4l-subdev devices which is enabled by media
> > controller to expose ISP processing blocks to linux userspace.
> > Ofcourse, we might need to change implementation and spec to support that
> > Please refer (1) for details.

I don't think this would be the right level of abstraction. The V4L2 API
is way too low-level when it comes to camera paravirtualization (and may
not be the only API we'll have in the future). I thus recommend
virtualizing cameras with a higher-level API, more or less on top of
libcamera or the Android camera HAL (they both sit at the same level in
the camera stack). Anything lower than that won't be practical.

> Again, the current virtio-video protocol and driver only support video
> encoding and decoding. We had no detailed discussion about camera
> supports.
> Moreover, I personally disagree with supporting video capturing in
> virtio-video protocol. Instead, I believe it's better to have a
> separate protocol like "virtio-camera". Decoupling video codec APIs
> and camera APIs should make protocols simpler and easier to maintain.
> I suggested this idea in [1].
> 
> So, the answer to your question is:
> No in virtio-video protocol. But, it's possible to start designing a
> new "virtio-camera" protocol that supports camera features including
> image processing.
> 
> [1] https://markmail.org/message/4q2g5oqniw62pmqd
> 
> > > > > which instead can be used with multiple use cases such as -
> > > > >
> > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > video frames input through actual HW camera attached to Host.
> > > >
> > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > support capture-only streams like camera as well, but later the decision was
> > > > made upstream that camera should be implemented as separate device type. We
> > > > still plan to implement a simple frame capture capability as a downstream
> > > > patch though.
> > > >
> > > > >
> > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > with selective video streams from actual HW video devices.
> > > >
> > > > We do support this in our device implementation. But spec in general has no
> > > > requirements or instructions regarding this. And it is in fact flexible enough
> > > > to provide abstraction on top of several HW devices.
> > > >
> > > > >
> > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > can also be used inside the VM userspace after getting access to
> > > > > paravirtualized HW camera devices .
> > >
> > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > can translate virtio-video requests to this API, we can easily support
> > > multiple platforms.
> > > I'm not sure how feasible it is though, as I have no experience of
> > > using this API by myself...
> > >
> > > [2] https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> > >
> >
> > Like pointed out above, Gstreamer is not the only framework present there.
> > We have the newer libcamera framework [2] and then Openmax (used in
> > Android Hal )
> > Refer [3] for comparison.
> 
> It seems that we had miscommunication here. While I had mentioned
> Gstreamer as a generic implementation to cover "video decoding" APIs
> on various platforms, you were talking about "camera" APIs.
> As I said above, virtio-video is NOT designed for cameras.
> 
> For abstraction of video decoding APIs, I don't know any better
> library than Gstreamer. For cameras, libcamera sounds good, but I'm
> not so familiar with this area...
> 
> > My intentions are to make the implementation more generic so that it
> > can be used by different frameworks on different platforms.
> >
> > [1]: https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf
> > [2]: http://libcamera.org
> > [3]: https://processors.wiki.ti.com/images/7/7e/OMX_Android_GST_Comparison.pdf

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [libcamera-devel] [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 14:31                     ` Laurent Pinchart
@ 2020-05-12 12:10                       ` Dmitry Sepp
  -1 siblings, 0 replies; 56+ messages in thread
From: Dmitry Sepp @ 2020-05-12 12:10 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Keiichi Watanabe, Saket Sinha, Samiullah Khawaja, virtio-dev,
	Alex Lau, Kiran Pawar, Alexandre Courbot, Michael S. Tsirkin,
	qemu-devel, libcamera-devel, Gerd Hoffmann, Pawel Osciak,
	Linux Media Mailing List

Hi Laurent,

On Monday, 11 May 2020 at 16:31:36 CEST, Laurent Pinchart wrote:
> 
> I don't think this would be the right level of abstraction. The V4L2 API
> is way too low-level when it comes to camera paravirtualization (and may
> not be the only API we'll have in the future). I thus recommend
> virtualizing cameras with a higher-level API, more or less on top of
> libcamera or the Android camera HAL (they both sit at the same level in
> the camera stack). Anything lower than that won't be practical.
> 

I think the main thing to do first would be to define the logic of such a
virtio-camera device and its set of mandatory features. The host-side API is
a bit of a side topic, although libcamera does look like the best fit.
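
To make that a bit more concrete, below is a purely hypothetical sketch
of the kind of control command set such a device would have to define.
None of these names or structures come from an existing spec; they are
invented here only to illustrate the decisions involved:

#include <stdint.h>

/* Hypothetical command identifiers for a virtio-camera control queue. */
enum virtio_camera_cmd_type {
    VIRTIO_CAMERA_CMD_GET_CAPS = 1,     /* enumerate sensors and streams   */
    VIRTIO_CAMERA_CMD_SET_FORMAT,       /* resolution and pixel format     */
    VIRTIO_CAMERA_CMD_SET_CONTROL,      /* exposure, gain, white balance   */
    VIRTIO_CAMERA_CMD_STREAM_ON,
    VIRTIO_CAMERA_CMD_STREAM_OFF,
    VIRTIO_CAMERA_CMD_QUEUE_BUFFER,     /* guest-provided frame buffer     */
};

/* Hypothetical header carried by every command. */
struct virtio_camera_cmd_hdr {
    uint32_t type;                      /* enum virtio_camera_cmd_type     */
    uint32_t stream_id;                 /* which capture stream            */
};

/* Hypothetical payload for VIRTIO_CAMERA_CMD_SET_FORMAT. */
struct virtio_camera_set_format {
    struct virtio_camera_cmd_hdr hdr;
    uint32_t width;
    uint32_t height;
    uint32_t pixel_format;              /* e.g. a FOURCC code              */
};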

Best regards,
Dmitry.



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-11 11:49               ` Keiichi Watanabe
@ 2020-05-14 23:38                 ` Nicolas Dufresne
  -1 siblings, 0 replies; 56+ messages in thread
From: Nicolas Dufresne @ 2020-05-14 23:38 UTC (permalink / raw)
  To: Keiichi Watanabe, Dmitry Sepp
  Cc: Saket Sinha, Kiran Pawar, Samiullah Khawaja, qemu-devel,
	virtio-dev, Gerd Hoffmann, Michael S. Tsirkin, Hans Verkuil,
	Alexandre Courbot, Tomasz Figa, Linux Media Mailing List,
	Alex Lau, Pawel Osciak, Emil Velikov

On Monday, 11 May 2020 at 20:49 +0900, Keiichi Watanabe wrote:
> Hi,
> 
> Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> video encoding and decoding, not camera. So, my reply was about how to
> implement paravirtualized video codec devices.
> 
> On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <dmitry.sepp@opensynergy.com>
> wrote:
> > Hi Saket,
> > 
> > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > Hi Keiichi,
> > > 
> > > I do not support the approach of  QEMU implementation forwarding
> > > requests to the host's vicodec module since  this can limit the scope
> > > of the virtio-video device only for testing,
> > 
> > That was my understanding as well.
> 
> Not really because the API which the vicodec provides is V4L2 stateful
> decoder interface [1], which are also used by other video drivers on
> Linux.
> The difference between vicodec and actual device drivers is that
> vicodec performs decoding in the kernel space without using special
> video devices. In other words, vicodec is a software decoder in kernel
> space which provides the same interface with actual video drivers.
> Thus, if the QEMU implementation can forward virtio-video requests to
> vicodec, it can forward them to the actual V4L2 video decoder devices
> as well and VM gets access to a paravirtualized video device.
> 
> The reason why we discussed vicodec in the previous thread was it'll
> allow us to test the virtio-video driver without hardware requirement.
> 
> [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> 
> > > which instead can be used with multiple use cases such as -
> > > 
> > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > video frames input through actual HW camera attached to Host.
> > 
> > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > support capture-only streams like camera as well, but later the decision was
> > made upstream that camera should be implemented as separate device type. We
> > still plan to implement a simple frame capture capability as a downstream
> > patch though.
> > 
> > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > with selective video streams from actual HW video devices.
> > 
> > We do support this in our device implementation. But spec in general has no
> > requirements or instructions regarding this. And it is in fact flexible
> > enough
> > to provide abstraction on top of several HW devices.
> > 
> > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > can also be used inside the VM userspace after getting access to
> > > paravirtualized HW camera devices .
> 
> Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> can translate virtio-video requests to this API, we can easily support
> multiple platforms.
> I'm not sure how feasible it is though, as I have no experience of
> using this API by myself...

Not sure which API you are aiming at exactly, but what one needs to remember is
that mapping a virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or
other types of "stateless" CODEC is not trivial and can't be done without
userspace. Notably, we don't want to do bitstream parsing in the kernel on the
main CPU, as security would otherwise be very hard to guarantee. The other
drivers using the same API as virtio-video do bitstream parsing on a dedicated
co-processor (through firmware blobs though).

Having a bridge between virtio-video, qemu and some abstraction library like
FFmpeg or GStreamer is certainly the best solution if you want to virtualize any
type of HW accelerated decoder or if you need to virtualize something
proprietary (like NVDEC). Please shout if you need help.
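
For what it's worth, here is a rough sketch of what such a bridge could
look like on the GStreamer side, with the device model pushing coded
buffers into an appsrc and pulling decoded frames from an appsink. The
H.264 caps, the single global pipeline and the memcpy are
simplifications for illustration only:

#include <string.h>
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <gst/app/gstappsink.h>

static GstElement *pipeline, *src, *sink;

/* Build the decode pipeline (one per virtio-video stream in practice). */
static void bridge_init(void)
{
    gst_init(NULL, NULL);
    pipeline = gst_parse_launch(
        "appsrc name=src ! h264parse ! decodebin ! videoconvert ! "
        "appsink name=sink", NULL);
    src  = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");

    GstCaps *caps = gst_caps_from_string(
        "video/x-h264,stream-format=byte-stream,alignment=au");
    g_object_set(src, "caps", caps, "is-live", TRUE, NULL);
    gst_caps_unref(caps);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}

/* Called for each coded buffer the guest queues on its input queue. */
static void bridge_push_coded(const guint8 *data, gsize size)
{
    GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buf, 0, data, size);
    gst_app_src_push_buffer(GST_APP_SRC(src), buf);   /* takes ownership */
}

/* Copy the next decoded frame into a guest output buffer; returns the
 * number of bytes copied, or 0 on EOS/error.  A real implementation
 * would negotiate the format and avoid the copy (dmabuf, etc.). */
static gsize bridge_pull_decoded(guint8 *dst, gsize dst_size)
{
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    GstMapInfo map;
    gsize copied = 0;

    if (!sample)
        return 0;
    GstBuffer *buf = gst_sample_get_buffer(sample);
    if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
        copied = MIN(map.size, dst_size);
        memcpy(dst, map.data, copied);
        gst_buffer_unmap(buf, &map);
    }
    gst_sample_unref(sample);
    return copied;
}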

> 
> [2] 
> https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> 
> Best regards,
> Keiichi
> 
> > 
> > Regarding the cameras, unfortunately same as above.
> > 
> > Best regards,
> > Dmitry.
> > 
> > > Regards,
> > > Saket Sinha
> > > 
> > > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org>
> > wrote:
> > > > Hi Dmitry,
> > > > 
> > > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com
> > > > >
> > wrote:
> > > > > Hi Saket and all,
> > > > > 
> > > > > As we are working with automotive platforms, unfortunately we don't
> > > > > plan
> > > > > any Qemu reference implementation so far.
> > > > > 
> > > > > Of course we are ready to support the community if any help is needed.
> > > > > Is
> > > > > there interest in support for the FWHT format only for testing purpose
> > > > > or you want a full-featured implementation on the QEMU side?
> > > > 
> > > > I guess we don't need to implement the codec algorithm in QEMU.
> > > > Rather, QEMU forwards virtio-video requests to the host video device
> > > > or a software library such as GStreamer or ffmpeg.
> > > > So, what we need to implement in QEMU is a kind of API translation,
> > > > which shouldn't care about actual video formats so much.
> > > > 
> > > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > > understanding, Hans suggested to have QEMU implementation forwarding
> > > > requests to the host's vicodec module [2].
> > > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > > even if the host Linux has no hardware video decoder.
> > > > (Please correct me if I'm wrong.)
> > > > 
> > > > Let me add Hans and Linux media ML in CC.
> > > > 
> > > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > > [2] https://lwn.net/Articles/760650/
> > > > 
> > > > Best regards,
> > > > Keiichi
> > > > 
> > > > > Please note that the spec is not finalized yet and a major update is
> > > > > now
> > > > > discussed with upstream and the Chrome OS team, which is also
> > > > > interested
> > > > > and deeply involved in the process. The update mostly implies some
> > > > > rewording and reorganization of data structures, but for sure will
> > > > > require a driver rework.
> > > > > 
> > > > > Best regards,
> > > > > Dmitry.
> > > > > 
> > > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > > Hi,
> > > > > > 
> > > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > > posted in [1].
> > > > > > 
> > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > 
> > > > > > Regards,
> > > > > > Saket Sinha
> > > > > > 
> > > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com>
> > wrote:
> > > > > > > Hi ,
> > > > > > > 
> > > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > > posted in [1].
> > > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > > implementation and would like to discuss how to proceed with the
> > > > > > > same.
> > > > > > > 
> > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > > 
> > > > > > > Regards,
> > > > > > > Saket Sinha
> > > > > 
> > > > > ---------------------------------------------------------------------
> > > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-14 23:38                 ` Nicolas Dufresne
  (?)
@ 2020-05-19  8:37                   ` Keiichi Watanabe
  -1 siblings, 0 replies; 56+ messages in thread
From: Keiichi Watanabe @ 2020-05-19  8:37 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Dmitry Sepp, Saket Sinha, Kiran Pawar, Samiullah Khawaja,
	qemu-devel, virtio-dev, Gerd Hoffmann, Michael S. Tsirkin,
	Hans Verkuil, Alexandre Courbot, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

Hi Nicolas,

On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > Hi,
> >
> > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > video encoding and decoding, not camera. So, my reply was about how to
> > implement paravirtualized video codec devices.
> >
> > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <dmitry.sepp@opensynergy.com>
> > wrote:
> > > Hi Saket,
> > >
> > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > Hi Keiichi,
> > > >
> > > > I do not support the approach of  QEMU implementation forwarding
> > > > requests to the host's vicodec module since  this can limit the scope
> > > > of the virtio-video device only for testing,
> > >
> > > That was my understanding as well.
> >
> > Not really because the API which the vicodec provides is V4L2 stateful
> > decoder interface [1], which are also used by other video drivers on
> > Linux.
> > The difference between vicodec and actual device drivers is that
> > vicodec performs decoding in the kernel space without using special
> > video devices. In other words, vicodec is a software decoder in kernel
> > space which provides the same interface with actual video drivers.
> > Thus, if the QEMU implementation can forward virtio-video requests to
> > vicodec, it can forward them to the actual V4L2 video decoder devices
> > as well and VM gets access to a paravirtualized video device.
> >
> > The reason why we discussed vicodec in the previous thread was it'll
> > allow us to test the virtio-video driver without hardware requirement.
> >
> > [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> >
> > > > which instead can be used with multiple use cases such as -
> > > >
> > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > video frames input through actual HW camera attached to Host.
> > >
> > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > support capture-only streams like camera as well, but later the decision was
> > > made upstream that camera should be implemented as separate device type. We
> > > still plan to implement a simple frame capture capability as a downstream
> > > patch though.
> > >
> > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > with selective video streams from actual HW video devices.
> > >
> > > We do support this in our device implementation. But spec in general has no
> > > requirements or instructions regarding this. And it is in fact flexible
> > > enough
> > > to provide abstraction on top of several HW devices.
> > >
> > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > can also be used inside the VM userspace after getting access to
> > > > paravirtualized HW camera devices .
> >
> > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > can translate virtio-video requests to this API, we can easily support
> > multiple platforms.
> > I'm not sure how feasible it is though, as I have no experience of
> > using this API by myself...
>
> Not sure which API you aim exactly, but what one need to remember is that
> mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> because we don't want to do bitstream parsing in the kernel on the main CPU as
> security would otherwise be very hard to guaranty. The other driver using same
> API as virtio-video do bitstream parsing on a dedicated co-processor (through
> firmware blobs though).
>
> Having bridges between virtio-video, qemu and some abstraction library like
> FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> type of HW accelerated decoder or if you need to virtualized something
> proprietary (like NVDEC). Please shout if you need help.
>

Yeah, I meant we should map virtio-video commands to a set of
abstracted userspace APIs to avoid having a lot of platform-dependent
code in QEMU.
This is the same as what we implemented in crosvm, a VMM on
ChromiumOS. Crosvm's video device translates virtio-video commands
into our own video decoding APIs [1, 2], which support VAAPI, V4L2
stateful and V4L2 stateless. Unfortunately, since our library depends
heavily on Chrome, we cannot reuse it for QEMU.

So, I agree that using FFmpeg or GStreamer is a good idea. The APIs in
my previous link probably weren't meant for this purpose.
Nicolas, do you know of any good references for FFmpeg's or GStreamer's
abstracted video decoding APIs? Then I may be able to think about how
the virtio-video protocol can be mapped to them.

[1] libvda's C interface:
https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/libvda_decode.h
[2] libvda's Rust interface:
https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/rust/
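
In case it helps the discussion, here is a minimal sketch of the kind
of libavcodec send/receive loop such a QEMU backend could sit on top
of; the H.264 codec choice and the omitted error handling are
assumptions for illustration only:

#include <libavcodec/avcodec.h>

static AVCodecContext *ctx;
static AVFrame *frame;

/* Create one decoder instance per virtio-video stream (H.264 assumed). */
static int dec_init(void)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);

    if (!codec)
        return -1;
    ctx = avcodec_alloc_context3(codec);
    frame = av_frame_alloc();
    return avcodec_open2(ctx, codec, NULL);
}

/* Feed one coded buffer from the guest, then drain any decoded frames. */
static int dec_decode(const uint8_t *data, int size)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    pkt->data = (uint8_t *)data;        /* borrowed from the guest buffer */
    pkt->size = size;
    ret = avcodec_send_packet(ctx, pkt);
    while (ret >= 0) {
        ret = avcodec_receive_frame(ctx, frame);
        if (ret == 0) {
            /* copy or map frame->data[]/frame->linesize[] into the
             * guest's output buffer here */
            av_frame_unref(frame);
        }
    }
    av_packet_free(&pkt);
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}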

Best regards,
Keiichi

> >
> > [2]
> > https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> >
> > Best regards,
> > Keiichi
> >
> > >
> > > Regarding the cameras, unfortunately same as above.
> > >
> > > Best regards,
> > > Dmitry.
> > >
> > > > Regards,
> > > > Saket Sinha
> > > >
> > > > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org>
> > > wrote:
> > > > > Hi Dmitry,
> > > > >
> > > > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com
> > > > > >
> > > wrote:
> > > > > > Hi Saket and all,
> > > > > >
> > > > > > As we are working with automotive platforms, unfortunately we don't
> > > > > > plan
> > > > > > any Qemu reference implementation so far.
> > > > > >
> > > > > > Of course we are ready to support the community if any help is needed.
> > > > > > Is
> > > > > > there interest in support for the FWHT format only for testing purpose
> > > > > > or you want a full-featured implementation on the QEMU side?
> > > > >
> > > > > I guess we don't need to implement the codec algorithm in QEMU.
> > > > > Rather, QEMU forwards virtio-video requests to the host video device
> > > > > or a software library such as GStreamer or ffmpeg.
> > > > > So, what we need to implement in QEMU is a kind of API translation,
> > > > > which shouldn't care about actual video formats so much.
> > > > >
> > > > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > > > understanding, Hans suggested to have QEMU implementation forwarding
> > > > > requests to the host's vicodec module [2].
> > > > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > > > even if the host Linux has no hardware video decoder.
> > > > > (Please correct me if I'm wrong.)
> > > > >
> > > > > Let me add Hans and Linux media ML in CC.
> > > > >
> > > > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > > > [2] https://lwn.net/Articles/760650/
> > > > >
> > > > > Best regards,
> > > > > Keiichi
> > > > >
> > > > > > Please note that the spec is not finalized yet and a major update is
> > > > > > now
> > > > > > discussed with upstream and the Chrome OS team, which is also
> > > > > > interested
> > > > > > and deeply involved in the process. The update mostly implies some
> > > > > > rewording and reorganization of data structures, but for sure will
> > > > > > require a driver rework.
> > > > > >
> > > > > > Best regards,
> > > > > > Dmitry.
> > > > > >
> > > > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > > > posted in [1].
> > > > > > >
> > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > >
> > > > > > > Regards,
> > > > > > > Saket Sinha
> > > > > > >
> > > > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com>
> > > wrote:
> > > > > > > > Hi ,
> > > > > > > >
> > > > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > > > posted in [1].
> > > > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > > > implementation and would like to discuss how to proceed with the
> > > > > > > > same.
> > > > > > > >
> > > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Saket Sinha
> > > > > >
> > > > > > ---------------------------------------------------------------------
> > > > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-19  8:37                   ` Keiichi Watanabe
  0 siblings, 0 replies; 56+ messages in thread
From: Keiichi Watanabe @ 2020-05-19  8:37 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Samiullah Khawaja, virtio-dev, Alex Lau, Kiran Pawar,
	Alexandre Courbot, Michael S. Tsirkin, qemu-devel, Tomasz Figa,
	Saket Sinha, Gerd Hoffmann, Hans Verkuil, Dmitry Sepp,
	Emil Velikov, Pawel Osciak, Linux Media Mailing List

Hi Nicolas,

On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > Hi,
> >
> > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > video encoding and decoding, not camera. So, my reply was about how to
> > implement paravirtualized video codec devices.
> >
> > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <dmitry.sepp@opensynergy.com>
> > wrote:
> > > Hi Saket,
> > >
> > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > Hi Keiichi,
> > > >
> > > > I do not support the approach of  QEMU implementation forwarding
> > > > requests to the host's vicodec module since  this can limit the scope
> > > > of the virtio-video device only for testing,
> > >
> > > That was my understanding as well.
> >
> > Not really because the API which the vicodec provides is V4L2 stateful
> > decoder interface [1], which are also used by other video drivers on
> > Linux.
> > The difference between vicodec and actual device drivers is that
> > vicodec performs decoding in the kernel space without using special
> > video devices. In other words, vicodec is a software decoder in kernel
> > space which provides the same interface with actual video drivers.
> > Thus, if the QEMU implementation can forward virtio-video requests to
> > vicodec, it can forward them to the actual V4L2 video decoder devices
> > as well and VM gets access to a paravirtualized video device.
> >
> > The reason why we discussed vicodec in the previous thread was it'll
> > allow us to test the virtio-video driver without hardware requirement.
> >
> > [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> >
> > > > which instead can be used with multiple use cases such as -
> > > >
> > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > video frames input through actual HW camera attached to Host.
> > >
> > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > support capture-only streams like camera as well, but later the decision was
> > > made upstream that camera should be implemented as separate device type. We
> > > still plan to implement a simple frame capture capability as a downstream
> > > patch though.
> > >
> > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > with selective video streams from actual HW video devices.
> > >
> > > We do support this in our device implementation. But spec in general has no
> > > requirements or instructions regarding this. And it is in fact flexible
> > > enough
> > > to provide abstraction on top of several HW devices.
> > >
> > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > can also be used inside the VM userspace after getting access to
> > > > paravirtualized HW camera devices .
> >
> > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > can translate virtio-video requests to this API, we can easily support
> > multiple platforms.
> > I'm not sure how feasible it is though, as I have no experience of
> > using this API by myself...
>
> Not sure which API you aim exactly, but what one need to remember is that
> mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> because we don't want to do bitstream parsing in the kernel on the main CPU as
> security would otherwise be very hard to guaranty. The other driver using same
> API as virtio-video do bitstream parsing on a dedicated co-processor (through
> firmware blobs though).
>
> Having bridges between virtio-video, qemu and some abstraction library like
> FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> type of HW accelerated decoder or if you need to virtualized something
> proprietary (like NVDEC). Please shout if you need help.
>

Yeah, I meant we should map virtio-video commands to a set of
abstracted userspace APIs to avoid having a lot of platform-dependent
code in QEMU.
This is the same as what we implemented in crosvm, a VMM on
ChromiumOS. Crosvm's video device translates virtio-video commands
into our own video decoding APIs [1, 2], which support VAAPI, V4L2
stateful and V4L2 stateless. Unfortunately, since our library depends
heavily on Chrome, we cannot reuse it for QEMU.

So, I agree that using FFmpeg or GStreamer is a good idea. The APIs in
my previous link probably weren't meant for this purpose.
Nicolas, do you know any good references for FFmpeg's or GStreamer's
abstracted video decoding APIs? Then I may be able to think about how
the virtio-video protocol can be mapped to them.

[1] libvda's C interface:
https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/libvda_decode.h
[2] libvda's Rust interface:
https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/rust/

Best regards,
Keiichi

> >
> > [2]
> > https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> >
> > Best regards,
> > Keiichi
> >
> > >
> > > Regarding the cameras, unfortunately same as above.
> > >
> > > Best regards,
> > > Dmitry.
> > >
> > > > Regards,
> > > > Saket Sinha
> > > >
> > > > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org>
> > > wrote:
> > > > > Hi Dmitry,
> > > > >
> > > > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com
> > > > > >
> > > wrote:
> > > > > > Hi Saket and all,
> > > > > >
> > > > > > As we are working with automotive platforms, unfortunately we don't
> > > > > > plan
> > > > > > any Qemu reference implementation so far.
> > > > > >
> > > > > > Of course we are ready to support the community if any help is needed.
> > > > > > Is
> > > > > > there interest in support for the FWHT format only for testing purpose
> > > > > > or you want a full-featured implementation on the QEMU side?
> > > > >
> > > > > I guess we don't need to implement the codec algorithm in QEMU.
> > > > > Rather, QEMU forwards virtio-video requests to the host video device
> > > > > or a software library such as GStreamer or ffmpeg.
> > > > > So, what we need to implement in QEMU is a kind of API translation,
> > > > > which shouldn't care about actual video formats so much.
> > > > >
> > > > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > > > understanding, Hans suggested to have QEMU implementation forwarding
> > > > > requests to the host's vicodec module [2].
> > > > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > > > even if the host Linux has no hardware video decoder.
> > > > > (Please correct me if I'm wrong.)
> > > > >
> > > > > Let me add Hans and Linux media ML in CC.
> > > > >
> > > > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > > > [2] https://lwn.net/Articles/760650/
> > > > >
> > > > > Best regards,
> > > > > Keiichi
> > > > >
> > > > > > Please note that the spec is not finalized yet and a major update is
> > > > > > now
> > > > > > discussed with upstream and the Chrome OS team, which is also
> > > > > > interested
> > > > > > and deeply involved in the process. The update mostly implies some
> > > > > > rewording and reorganization of data structures, but for sure will
> > > > > > require a driver rework.
> > > > > >
> > > > > > Best regards,
> > > > > > Dmitry.
> > > > > >
> > > > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > > > posted in [1].
> > > > > > >
> > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > >
> > > > > > > Regards,
> > > > > > > Saket Sinha
> > > > > > >
> > > > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com>
> > > wrote:
> > > > > > > > Hi ,
> > > > > > > >
> > > > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > > > posted in [1].
> > > > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > > > implementation and would like to discuss how to proceed with the
> > > > > > > > same.
> > > > > > > >
> > > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Saket Sinha
> > > > > >
> > > > > > ---------------------------------------------------------------------
> > > > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
>


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-19  8:37                   ` Keiichi Watanabe
  0 siblings, 0 replies; 56+ messages in thread
From: Keiichi Watanabe @ 2020-05-19  8:37 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Dmitry Sepp, Saket Sinha, Kiran Pawar, Samiullah Khawaja,
	qemu-devel, virtio-dev, Gerd Hoffmann, Michael S. Tsirkin,
	Hans Verkuil, Alexandre Courbot, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

Hi Nicolas,

On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > Hi,
> >
> > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > video encoding and decoding, not camera. So, my reply was about how to
> > implement paravirtualized video codec devices.
> >
> > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <dmitry.sepp@opensynergy.com>
> > wrote:
> > > Hi Saket,
> > >
> > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > Hi Keiichi,
> > > >
> > > > I do not support the approach of  QEMU implementation forwarding
> > > > requests to the host's vicodec module since  this can limit the scope
> > > > of the virtio-video device only for testing,
> > >
> > > That was my understanding as well.
> >
> > Not really because the API which the vicodec provides is V4L2 stateful
> > decoder interface [1], which are also used by other video drivers on
> > Linux.
> > The difference between vicodec and actual device drivers is that
> > vicodec performs decoding in the kernel space without using special
> > video devices. In other words, vicodec is a software decoder in kernel
> > space which provides the same interface with actual video drivers.
> > Thus, if the QEMU implementation can forward virtio-video requests to
> > vicodec, it can forward them to the actual V4L2 video decoder devices
> > as well and VM gets access to a paravirtualized video device.
> >
> > The reason why we discussed vicodec in the previous thread was it'll
> > allow us to test the virtio-video driver without hardware requirement.
> >
> > [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> >
> > > > which instead can be used with multiple use cases such as -
> > > >
> > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > video frames input through actual HW camera attached to Host.
> > >
> > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > support capture-only streams like camera as well, but later the decision was
> > > made upstream that camera should be implemented as separate device type. We
> > > still plan to implement a simple frame capture capability as a downstream
> > > patch though.
> > >
> > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > with selective video streams from actual HW video devices.
> > >
> > > We do support this in our device implementation. But spec in general has no
> > > requirements or instructions regarding this. And it is in fact flexible
> > > enough
> > > to provide abstraction on top of several HW devices.
> > >
> > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > can also be used inside the VM userspace after getting access to
> > > > paravirtualized HW camera devices .
> >
> > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > can translate virtio-video requests to this API, we can easily support
> > multiple platforms.
> > I'm not sure how feasible it is though, as I have no experience of
> > using this API by myself...
>
> Not sure which API you aim exactly, but what one need to remember is that
> mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> because we don't want to do bitstream parsing in the kernel on the main CPU as
> security would otherwise be very hard to guaranty. The other driver using same
> API as virtio-video do bitstream parsing on a dedicated co-processor (through
> firmware blobs though).
>
> Having bridges between virtio-video, qemu and some abstraction library like
> FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> type of HW accelerated decoder or if you need to virtualized something
> proprietary (like NVDEC). Please shout if you need help.
>

Yeah, I meant we should map virtio-video commands to a set of
abstracted userspace APIs to avoid having a lot of platform-dependent
code in QEMU.
This is the same as what we implemented in crosvm, a VMM on
ChromiumOS. Crosvm's video device translates virtio-video commands
into our own video decoding APIs [1, 2], which support VAAPI, V4L2
stateful and V4L2 stateless. Unfortunately, since our library depends
heavily on Chrome, we cannot reuse it for QEMU.

So, I agree that using FFmpeg or GStreamer is a good idea. The APIs in
my previous link probably weren't meant for this purpose.
Nicolas, do you know any good references for FFmpeg's or GStreamer's
abstracted video decoding APIs? Then I may be able to think about how
the virtio-video protocol can be mapped to them.

[1] libvda's C interface:
https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/libvda_decode.h
[2] libvda's Rust interface:
https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/rust/

Best regards,
Keiichi

> >
> > [2]
> > https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> >
> > Best regards,
> > Keiichi
> >
> > >
> > > Regarding the cameras, unfortunately same as above.
> > >
> > > Best regards,
> > > Dmitry.
> > >
> > > > Regards,
> > > > Saket Sinha
> > > >
> > > > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <keiichiw@chromium.org>
> > > wrote:
> > > > > Hi Dmitry,
> > > > >
> > > > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <dmitry.sepp@opensynergy.com
> > > > > >
> > > wrote:
> > > > > > Hi Saket and all,
> > > > > >
> > > > > > As we are working with automotive platforms, unfortunately we don't
> > > > > > plan
> > > > > > any Qemu reference implementation so far.
> > > > > >
> > > > > > Of course we are ready to support the community if any help is needed.
> > > > > > Is
> > > > > > there interest in support for the FWHT format only for testing purpose
> > > > > > or you want a full-featured implementation on the QEMU side?
> > > > >
> > > > > I guess we don't need to implement the codec algorithm in QEMU.
> > > > > Rather, QEMU forwards virtio-video requests to the host video device
> > > > > or a software library such as GStreamer or ffmpeg.
> > > > > So, what we need to implement in QEMU is a kind of API translation,
> > > > > which shouldn't care about actual video formats so much.
> > > > >
> > > > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > > > understanding, Hans suggested to have QEMU implementation forwarding
> > > > > requests to the host's vicodec module [2].
> > > > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > > > even if the host Linux has no hardware video decoder.
> > > > > (Please correct me if I'm wrong.)
> > > > >
> > > > > Let me add Hans and Linux media ML in CC.
> > > > >
> > > > > [1]  https://patchwork.linuxtv.org/patch/61717/
> > > > > [2] https://lwn.net/Articles/760650/
> > > > >
> > > > > Best regards,
> > > > > Keiichi
> > > > >
> > > > > > Please note that the spec is not finalized yet and a major update is
> > > > > > now
> > > > > > discussed with upstream and the Chrome OS team, which is also
> > > > > > interested
> > > > > > and deeply involved in the process. The update mostly implies some
> > > > > > rewording and reorganization of data structures, but for sure will
> > > > > > require a driver rework.
> > > > > >
> > > > > > Best regards,
> > > > > > Dmitry.
> > > > > >
> > > > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > > > Hi,
> > > > > > >
> > > > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > > > posted in [1].
> > > > > > >
> > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > >
> > > > > > > Regards,
> > > > > > > Saket Sinha
> > > > > > >
> > > > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <saket.sinha89@gmail.com>
> > > wrote:
> > > > > > > > Hi ,
> > > > > > > >
> > > > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > > > posted in [1].
> > > > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > > > implementation and would like to discuss how to proceed with the
> > > > > > > > same.
> > > > > > > >
> > > > > > > > [1]: https://patchwork.linuxtv.org/patch/61717/
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Saket Sinha
> > > > > >
> > > > > > ---------------------------------------------------------------------
> > > > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-19  8:37                   ` Keiichi Watanabe
@ 2020-05-19 17:29                     ` Nicolas Dufresne
  -1 siblings, 0 replies; 56+ messages in thread
From: Nicolas Dufresne @ 2020-05-19 17:29 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: Dmitry Sepp, Saket Sinha, Kiran Pawar, Samiullah Khawaja,
	qemu-devel, virtio-dev, Gerd Hoffmann, Michael S. Tsirkin,
	Hans Verkuil, Alexandre Courbot, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Tuesday, 19 May 2020 at 17:37 +0900, Keiichi Watanabe wrote:
> Hi Nicolas,
> 
> On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> nicolas@ndufresne.ca
> > wrote:
> > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > Hi,
> > > 
> > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > video encoding and decoding, not camera. So, my reply was about how to
> > > implement paravirtualized video codec devices.
> > > 
> > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > dmitry.sepp@opensynergy.com
> > > >
> > > wrote:
> > > > Hi Saket,
> > > > 
> > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > Hi Keiichi,
> > > > > 
> > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > of the virtio-video device only for testing,
> > > > 
> > > > That was my understanding as well.
> > > 
> > > Not really because the API which the vicodec provides is V4L2 stateful
> > > decoder interface [1], which are also used by other video drivers on
> > > Linux.
> > > The difference between vicodec and actual device drivers is that
> > > vicodec performs decoding in the kernel space without using special
> > > video devices. In other words, vicodec is a software decoder in kernel
> > > space which provides the same interface with actual video drivers.
> > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > as well and VM gets access to a paravirtualized video device.
> > > 
> > > The reason why we discussed vicodec in the previous thread was it'll
> > > allow us to test the virtio-video driver without hardware requirement.
> > > 
> > > [1] 
> > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > 
> > > 
> > > > > which instead can be used with multiple use cases such as -
> > > > > 
> > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > video frames input through actual HW camera attached to Host.
> > > > 
> > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > support capture-only streams like camera as well, but later the decision was
> > > > made upstream that camera should be implemented as separate device type. We
> > > > still plan to implement a simple frame capture capability as a downstream
> > > > patch though.
> > > > 
> > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > with selective video streams from actual HW video devices.
> > > > 
> > > > We do support this in our device implementation. But spec in general has no
> > > > requirements or instructions regarding this. And it is in fact flexible
> > > > enough
> > > > to provide abstraction on top of several HW devices.
> > > > 
> > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > can also be used inside the VM userspace after getting access to
> > > > > paravirtualized HW camera devices .
> > > 
> > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > can translate virtio-video requests to this API, we can easily support
> > > multiple platforms.
> > > I'm not sure how feasible it is though, as I have no experience of
> > > using this API by myself...
> > 
> > Not sure which API you aim exactly, but what one need to remember is that
> > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > security would otherwise be very hard to guaranty. The other driver using same
> > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > firmware blobs though).
> > 
> > Having bridges between virtio-video, qemu and some abstraction library like
> > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > type of HW accelerated decoder or if you need to virtualized something
> > proprietary (like NVDEC). Please shout if you need help.
> > 
> 
> Yeah, I meant we should map virtio-video commands to a set of
> abstracted userspace APIs to avoid having many platform-dependent code
> in QEMU.
> This is the same with what we implemented in crosvm, a VMM on
> ChromiumOS. Crosvm's video device translates virtio-video commands
> into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> stateful and V4L2 stateless. Unfortunately, since our library is
> highly depending on Chrome, we cannot reuse this for QEMU.
> 
> So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> APIs in my previous link weren't for this purpose.
> Nicolas, do you know any good references for FFMPEG or GStreamer's
> abstracted video decoding APIs? Then, I may be able to think about how
> virtio-video protocols can be mapped to them.

The FFmpeg API for libavcodec can be found here:

  http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
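
For a sense of scale, the send/receive decode API from that header
boils down to very few calls. The snippet below is only a hedged
sketch (H.264 assumed as the coded format, all error handling
omitted); it is not taken from any existing QEMU or virtio-video code.

  #include <libavcodec/avcodec.h>

  /* Open a software H.264 decoder. */
  static AVCodecContext *open_h264_decoder(void)
  {
      const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
      AVCodecContext *ctx = avcodec_alloc_context3(codec);
      avcodec_open2(ctx, codec, NULL);
      return ctx;
  }

  /* Feed one chunk of bitstream in, pull out any frames that are ready.
   * 'frame' comes from av_frame_alloc(). */
  static void decode_chunk(AVCodecContext *ctx, AVFrame *frame,
                           const uint8_t *data, int size)
  {
      AVPacket *pkt = av_packet_alloc();
      pkt->data = (uint8_t *)data;
      pkt->size = size;

      avcodec_send_packet(ctx, pkt);
      while (avcodec_receive_frame(ctx, frame) == 0) {
          /* frame->data[]/linesize[] now hold a decoded picture; a
           * virtio-video device model would copy or map it into the
           * guest's output buffer here. */
      }
      av_packet_free(&pkt);
  }

In a QEMU device model, something like decode_chunk() would be driven
by the guest's queued input resources, and each decoded frame would
complete a queued output resource.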

GStreamer does not really have such a low-level CODEC API. So while
it's possible to use it (the Wine project uses it for its parsers, for
example, and Firefox used to have CODEC support wrapping GStreamer
CODECs), there will not be any one-to-one mapping. GStreamer is often
chosen because its LGPL code does not directly carry any patented
implementation. It instead relies on plugins, which may be provided by
third parties, allowing you to distribute your project while giving
users the option to install potentially non-free technologies.

But overall, I can describe the GStreamer API for CODEC wrapping
(pipeline-less) as:

  - Push GstCaps describing the stream format
  - Push bitstream buffer on sink pad
  - When ready, buffers will be pushed through the push function 
    callback on src pad

Of course nothing prevents adding something like the vda abstraction in
qemu and making it multi-backend capable (a rough sketch of the decode
flow follows below).
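
As a rough, hedged illustration of that push/pull flow: the fully
pipeline-less GstVideoDecoder route is more involved, so the sketch
below cheats and wraps a tiny appsrc ! parser ! decoder ! appsink
pipeline instead; the element names and caps are assumptions for the
example, and error handling is omitted.

  #include <gst/gst.h>
  #include <gst/app/gstappsrc.h>
  #include <gst/app/gstappsink.h>

  /* Push encoded H.264 data in, pull decoded raw frames out. */
  static void decode_buffer(const guint8 *data, gsize size)
  {
      gst_init(NULL, NULL);
      GstElement *pipe = gst_parse_launch(
          "appsrc name=src ! h264parse ! avdec_h264 ! videoconvert "
          "! appsink name=sink sync=false", NULL);
      GstElement *src  = gst_bin_get_by_name(GST_BIN(pipe), "src");
      GstElement *sink = gst_bin_get_by_name(GST_BIN(pipe), "sink");

      /* 1. Describe the stream format with GstCaps. */
      GstCaps *caps = gst_caps_new_simple("video/x-h264",
          "stream-format", G_TYPE_STRING, "byte-stream", NULL);
      g_object_set(src, "caps", caps, NULL);
      gst_caps_unref(caps);
      gst_element_set_state(pipe, GST_STATE_PLAYING);

      /* 2. Push a bitstream buffer in. */
      GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
      gst_buffer_fill(buf, 0, data, size);
      gst_app_src_push_buffer(GST_APP_SRC(src), buf);   /* takes ownership */
      gst_app_src_end_of_stream(GST_APP_SRC(src));

      /* 3. Decoded buffers come back out on the src side. */
      GstSample *sample;
      while ((sample = gst_app_sink_pull_sample(GST_APP_SINK(sink)))) {
          GstBuffer *out = gst_sample_get_buffer(sample);
          /* map 'out' and hand the raw frame to the guest here */
          (void)out;
          gst_sample_unref(sample);
      }
      gst_element_set_state(pipe, GST_STATE_NULL);
      gst_object_unref(pipe);
  }

Swapping avdec_h264 for a hardware decoder element (vaapih264dec,
v4l2h264dec, ... depending on what plugins the host has) is then only
a string change, which is part of GStreamer's appeal here.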

> 
> [1] libvda's C interface:
> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/libvda_decode.h
> 
> [2] libvda's Rust interface:
> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/rust/
> 
> 
> Best regards,
> Keiichi
> 
> > > [2]
> > > https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> > > 
> > > 
> > > Best regards,
> > > Keiichi
> > > 
> > > > Regarding the cameras, unfortunately same as above.
> > > > 
> > > > Best regards,
> > > > Dmitry.
> > > > 
> > > > > Regards,
> > > > > Saket Sinha
> > > > > 
> > > > > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <
> > > > > keiichiw@chromium.org
> > > > > >
> > > > 
> > > > wrote:
> > > > > > Hi Dmitry,
> > > > > > 
> > > > > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > 
> > > > 
> > > > wrote:
> > > > > > > Hi Saket and all,
> > > > > > > 
> > > > > > > As we are working with automotive platforms, unfortunately we don't
> > > > > > > plan
> > > > > > > any Qemu reference implementation so far.
> > > > > > > 
> > > > > > > Of course we are ready to support the community if any help is needed.
> > > > > > > Is
> > > > > > > there interest in support for the FWHT format only for testing purpose
> > > > > > > or you want a full-featured implementation on the QEMU side?
> > > > > > 
> > > > > > I guess we don't need to implement the codec algorithm in QEMU.
> > > > > > Rather, QEMU forwards virtio-video requests to the host video device
> > > > > > or a software library such as GStreamer or ffmpeg.
> > > > > > So, what we need to implement in QEMU is a kind of API translation,
> > > > > > which shouldn't care about actual video formats so much.
> > > > > > 
> > > > > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > > > > understanding, Hans suggested to have QEMU implementation forwarding
> > > > > > requests to the host's vicodec module [2].
> > > > > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > > > > even if the host Linux has no hardware video decoder.
> > > > > > (Please correct me if I'm wrong.)
> > > > > > 
> > > > > > Let me add Hans and Linux media ML in CC.
> > > > > > 
> > > > > > [1]  
> > > > > > https://patchwork.linuxtv.org/patch/61717/
> > > > > > 
> > > > > > [2] 
> > > > > > https://lwn.net/Articles/760650/
> > > > > > 
> > > > > > 
> > > > > > Best regards,
> > > > > > Keiichi
> > > > > > 
> > > > > > > Please note that the spec is not finalized yet and a major update is
> > > > > > > now
> > > > > > > discussed with upstream and the Chrome OS team, which is also
> > > > > > > interested
> > > > > > > and deeply involved in the process. The update mostly implies some
> > > > > > > rewording and reorganization of data structures, but for sure will
> > > > > > > require a driver rework.
> > > > > > > 
> > > > > > > Best regards,
> > > > > > > Dmitry.
> > > > > > > 
> > > > > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > > > > Hi,
> > > > > > > > 
> > > > > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > > > > posted in [1].
> > > > > > > > 
> > > > > > > > [1]: 
> > > > > > > > https://patchwork.linuxtv.org/patch/61717/
> > > > > > > > 
> > > > > > > > 
> > > > > > > > Regards,
> > > > > > > > Saket Sinha
> > > > > > > > 
> > > > > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <
> > > > > > > > saket.sinha89@gmail.com
> > > > > > > > >
> > > > 
> > > > wrote:
> > > > > > > > > Hi ,
> > > > > > > > > 
> > > > > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > > > > posted in [1].
> > > > > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > > > > implementation and would like to discuss how to proceed with the
> > > > > > > > > same.
> > > > > > > > > 
> > > > > > > > > [1]: 
> > > > > > > > > https://patchwork.linuxtv.org/patch/61717/
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > Regards,
> > > > > > > > > Saket Sinha
> > > > > > > 
> > > > > > > ---------------------------------------------------------------------
> > > > > > > To unsubscribe, e-mail: 
> > > > > > > virtio-dev-unsubscribe@lists.oasis-open.org
> > > > > > > 
> > > > > > > For additional commands, e-mail: 
> > > > > > > virtio-dev-help@lists.oasis-open.org
> > > > > > > 


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-19 17:29                     ` Nicolas Dufresne
  0 siblings, 0 replies; 56+ messages in thread
From: Nicolas Dufresne @ 2020-05-19 17:29 UTC (permalink / raw)
  To: Keiichi Watanabe
  Cc: Samiullah Khawaja, virtio-dev, Alex Lau, Kiran Pawar,
	Alexandre Courbot, Michael S. Tsirkin, qemu-devel, Tomasz Figa,
	Saket Sinha, Gerd Hoffmann, Hans Verkuil, Dmitry Sepp,
	Emil Velikov, Pawel Osciak, Linux Media Mailing List

On Tuesday, 19 May 2020 at 17:37 +0900, Keiichi Watanabe wrote:
> Hi Nicolas,
> 
> On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> nicolas@ndufresne.ca
> > wrote:
> > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > Hi,
> > > 
> > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > video encoding and decoding, not camera. So, my reply was about how to
> > > implement paravirtualized video codec devices.
> > > 
> > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > dmitry.sepp@opensynergy.com
> > > >
> > > wrote:
> > > > Hi Saket,
> > > > 
> > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > Hi Keiichi,
> > > > > 
> > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > of the virtio-video device only for testing,
> > > > 
> > > > That was my understanding as well.
> > > 
> > > Not really because the API which the vicodec provides is V4L2 stateful
> > > decoder interface [1], which are also used by other video drivers on
> > > Linux.
> > > The difference between vicodec and actual device drivers is that
> > > vicodec performs decoding in the kernel space without using special
> > > video devices. In other words, vicodec is a software decoder in kernel
> > > space which provides the same interface with actual video drivers.
> > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > as well and VM gets access to a paravirtualized video device.
> > > 
> > > The reason why we discussed vicodec in the previous thread was it'll
> > > allow us to test the virtio-video driver without hardware requirement.
> > > 
> > > [1] 
> > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > 
> > > 
> > > > > which instead can be used with multiple use cases such as -
> > > > > 
> > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > video frames input through actual HW camera attached to Host.
> > > > 
> > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > support capture-only streams like camera as well, but later the decision was
> > > > made upstream that camera should be implemented as separate device type. We
> > > > still plan to implement a simple frame capture capability as a downstream
> > > > patch though.
> > > > 
> > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > with selective video streams from actual HW video devices.
> > > > 
> > > > We do support this in our device implementation. But spec in general has no
> > > > requirements or instructions regarding this. And it is in fact flexible
> > > > enough
> > > > to provide abstraction on top of several HW devices.
> > > > 
> > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > can also be used inside the VM userspace after getting access to
> > > > > paravirtualized HW camera devices .
> > > 
> > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > can translate virtio-video requests to this API, we can easily support
> > > multiple platforms.
> > > I'm not sure how feasible it is though, as I have no experience of
> > > using this API by myself...
> > 
> > Not sure which API you aim exactly, but what one need to remember is that
> > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > security would otherwise be very hard to guaranty. The other driver using same
> > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > firmware blobs though).
> > 
> > Having bridges between virtio-video, qemu and some abstraction library like
> > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > type of HW accelerated decoder or if you need to virtualized something
> > proprietary (like NVDEC). Please shout if you need help.
> > 
> 
> Yeah, I meant we should map virtio-video commands to a set of
> abstracted userspace APIs to avoid having many platform-dependent code
> in QEMU.
> This is the same with what we implemented in crosvm, a VMM on
> ChromiumOS. Crosvm's video device translates virtio-video commands
> into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> stateful and V4L2 stateless. Unfortunately, since our library is
> highly depending on Chrome, we cannot reuse this for QEMU.
> 
> So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> APIs in my previous link weren't for this purpose.
> Nicolas, do you know any good references for FFMPEG or GStreamer's
> abstracted video decoding APIs? Then, I may be able to think about how
> virtio-video protocols can be mapped to them.

The FFmpeg API for libavcodec can be found here:

  http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h

GStreamer does not really have such a low-level CODEC API. So while
it's possible to use it (the Wine project uses it for its parsers, for
example, and Firefox used to have CODEC support wrapping GStreamer
CODECs), there will not be any one-to-one mapping. GStreamer is often
chosen because its LGPL code does not directly carry any patented
implementation. It instead relies on plugins, which may be provided by
third parties, allowing you to distribute your project while giving
users the option to install potentially non-free technologies.

But overall, I can describe the GStreamer API for CODEC wrapping
(pipeline-less) as:

  - Push GstCaps describing the stream format
  - Push bitstream buffer on sink pad
  - When ready, buffers will be pushed through the push function 
    callback on src pad

Of course nothing prevents adding something like the vda abstraction in
qemu and making it multi-backend capable.

> 
> [1] libvda's C interface:
> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/libvda_decode.h
> 
> [2] libvda's Rust interface:
> https://chromium.googlesource.com/chromiumos/platform2/+/refs/heads/master/arc/vm/libvda/rust/
> 
> 
> Best regards,
> Keiichi
> 
> > > [2]
> > > https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
> > > 
> > > 
> > > Best regards,
> > > Keiichi
> > > 
> > > > Regarding the cameras, unfortunately same as above.
> > > > 
> > > > Best regards,
> > > > Dmitry.
> > > > 
> > > > > Regards,
> > > > > Saket Sinha
> > > > > 
> > > > > On Mon, May 11, 2020 at 12:20 PM Keiichi Watanabe <
> > > > > keiichiw@chromium.org
> > > > > >
> > > > 
> > > > wrote:
> > > > > > Hi Dmitry,
> > > > > > 
> > > > > > On Mon, May 11, 2020 at 6:40 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > 
> > > > 
> > > > wrote:
> > > > > > > Hi Saket and all,
> > > > > > > 
> > > > > > > As we are working with automotive platforms, unfortunately we don't
> > > > > > > plan
> > > > > > > any Qemu reference implementation so far.
> > > > > > > 
> > > > > > > Of course we are ready to support the community if any help is needed.
> > > > > > > Is
> > > > > > > there interest in support for the FWHT format only for testing purpose
> > > > > > > or you want a full-featured implementation on the QEMU side?
> > > > > > 
> > > > > > I guess we don't need to implement the codec algorithm in QEMU.
> > > > > > Rather, QEMU forwards virtio-video requests to the host video device
> > > > > > or a software library such as GStreamer or ffmpeg.
> > > > > > So, what we need to implement in QEMU is a kind of API translation,
> > > > > > which shouldn't care about actual video formats so much.
> > > > > > 
> > > > > > Regarding the FWHT format discussed in the patch thread [1], in my
> > > > > > understanding, Hans suggested to have QEMU implementation forwarding
> > > > > > requests to the host's vicodec module [2].
> > > > > > Then, we'll be able to test the virtio-video driver on QEMU on Linux
> > > > > > even if the host Linux has no hardware video decoder.
> > > > > > (Please correct me if I'm wrong.)
> > > > > > 
> > > > > > Let me add Hans and Linux media ML in CC.
> > > > > > 
> > > > > > [1]  
> > > > > > https://patchwork.linuxtv.org/patch/61717/
> > > > > > 
> > > > > > [2] 
> > > > > > https://lwn.net/Articles/760650/
> > > > > > 
> > > > > > 
> > > > > > Best regards,
> > > > > > Keiichi
> > > > > > 
> > > > > > > Please note that the spec is not finalized yet and a major update is
> > > > > > > now
> > > > > > > discussed with upstream and the Chrome OS team, which is also
> > > > > > > interested
> > > > > > > and deeply involved in the process. The update mostly implies some
> > > > > > > rewording and reorganization of data structures, but for sure will
> > > > > > > require a driver rework.
> > > > > > > 
> > > > > > > Best regards,
> > > > > > > Dmitry.
> > > > > > > 
> > > > > > > On Samstag, 9. Mai 2020 16:11:43 CEST Saket Sinha wrote:
> > > > > > > > Hi,
> > > > > > > > 
> > > > > > > > As suggested on #qemu-devel IRC channel, I am including virtio-dev,
> > > > > > > > Gerd and Michael to point in the right direction how to move forward
> > > > > > > > with Qemu support for Virtio Video V4L2 driver
> > > > > > > > posted in [1].
> > > > > > > > 
> > > > > > > > [1]: 
> > > > > > > > https://patchwork.linuxtv.org/patch/61717/
> > > > > > > > 
> > > > > > > > 
> > > > > > > > Regards,
> > > > > > > > Saket Sinha
> > > > > > > > 
> > > > > > > > On Sat, May 9, 2020 at 1:09 AM Saket Sinha <
> > > > > > > > saket.sinha89@gmail.com
> > > > > > > > >
> > > > 
> > > > wrote:
> > > > > > > > > Hi ,
> > > > > > > > > 
> > > > > > > > > This is to inquire about Qemu support for Virtio Video V4L2 driver
> > > > > > > > > posted in [1].
> > > > > > > > > I am currently not aware of any upstream effort for Qemu reference
> > > > > > > > > implementation and would like to discuss how to proceed with the
> > > > > > > > > same.
> > > > > > > > > 
> > > > > > > > > [1]: 
> > > > > > > > > https://patchwork.linuxtv.org/patch/61717/
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > Regards,
> > > > > > > > > Saket Sinha
> > > > > > > 
> > > > > > > ---------------------------------------------------------------------
> > > > > > > To unsubscribe, e-mail: 
> > > > > > > virtio-dev-unsubscribe@lists.oasis-open.org
> > > > > > > 
> > > > > > > For additional commands, e-mail: 
> > > > > > > virtio-dev-help@lists.oasis-open.org
> > > > > > > 



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-19 17:29                     ` Nicolas Dufresne
  (?)
@ 2020-05-20  3:19                       ` Alexandre Courbot
  -1 siblings, 0 replies; 56+ messages in thread
From: Alexandre Courbot @ 2020-05-20  3:19 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Keiichi Watanabe, Dmitry Sepp, Saket Sinha, Kiran Pawar,
	Samiullah Khawaja, qemu-devel, virtio-dev, Gerd Hoffmann,
	Michael S. Tsirkin, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > Hi Nicolas,
> >
> > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > nicolas@ndufresne.ca
> > > wrote:
> > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > Hi,
> > > >
> > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > implement paravirtualized video codec devices.
> > > >
> > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > dmitry.sepp@opensynergy.com
> > > > >
> > > > wrote:
> > > > > Hi Saket,
> > > > >
> > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > Hi Keiichi,
> > > > > >
> > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > of the virtio-video device only for testing,
> > > > >
> > > > > That was my understanding as well.
> > > >
> > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > decoder interface [1], which are also used by other video drivers on
> > > > Linux.
> > > > The difference between vicodec and actual device drivers is that
> > > > vicodec performs decoding in the kernel space without using special
> > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > space which provides the same interface with actual video drivers.
> > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > as well and VM gets access to a paravirtualized video device.
> > > >
> > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > allow us to test the virtio-video driver without hardware requirement.
> > > >
> > > > [1]
> > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > >
> > > >
> > > > > > which instead can be used with multiple use cases such as -
> > > > > >
> > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > video frames input through actual HW camera attached to Host.
> > > > >
> > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > support capture-only streams like camera as well, but later the decision was
> > > > > made upstream that camera should be implemented as separate device type. We
> > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > patch though.
> > > > >
> > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > with selective video streams from actual HW video devices.
> > > > >
> > > > > We do support this in our device implementation. But spec in general has no
> > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > enough
> > > > > to provide abstraction on top of several HW devices.
> > > > >
> > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > can also be used inside the VM userspace after getting access to
> > > > > > paravirtualized HW camera devices .
> > > >
> > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > can translate virtio-video requests to this API, we can easily support
> > > > multiple platforms.
> > > > I'm not sure how feasible it is though, as I have no experience of
> > > > using this API by myself...
> > >
> > > Not sure which API you aim exactly, but what one need to remember is that
> > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > security would otherwise be very hard to guaranty. The other driver using same
> > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > firmware blobs though).
> > >
> > > Having bridges between virtio-video, qemu and some abstraction library like
> > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > type of HW accelerated decoder or if you need to virtualized something
> > > proprietary (like NVDEC). Please shout if you need help.
> > >
> >
> > Yeah, I meant we should map virtio-video commands to a set of
> > abstracted userspace APIs to avoid having many platform-dependent code
> > in QEMU.
> > This is the same with what we implemented in crosvm, a VMM on
> > ChromiumOS. Crosvm's video device translates virtio-video commands
> > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > stateful and V4L2 stateless. Unfortunately, since our library is
> > highly depending on Chrome, we cannot reuse this for QEMU.
> >
> > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > APIs in my previous link weren't for this purpose.
> > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > abstracted video decoding APIs? Then, I may be able to think about how
> > virtio-video protocols can be mapped to them.
>
> The FFMpeg API for libavcodec can be found here:
>
>   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
>
> GStreamer does not really have such a low level CODEC API. So while
> it's possible to use it (Wine project uses it for it's parsers as an
> example, and Firefox use to have CODEC support wrapping GStreamer
> CODEC), there will not be any one-to-one mapping. GStreamer is often
> chosen as it's LGPL code does not carry directly any patented
> implementation. It instead rely on plugins, which maybe provided as
> third party, allowing to distribute your project while giving uses the
> option to install potentially non-free technologies.
>
> But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> less) as:
>
>   - Push GstCaps describing the stream format
>   - Push bitstream buffer on sink pad
>   - When ready, buffers will be pushed through the push function
>     callback on src pad
>
> Of course nothing prevent adding something like the vda abstraction in
> qemu and make this multi-backend capable.

My understanding is that we don't need a particularly low-level API to
interact with. The host virtual device receives all of the encoded
data, and can thus easily reconstruct the original stream (minus the
container) and pass it to ffmpeg/gstreamer. So we can be pretty
high-level here.

Now the choice of API will also determine whether we want to allow
emulation of codec devices, or whether we stay on a purely
para-virtual track. If we use e.g. gstreamer, then the host can
provide a virtual device that is backed by a purely software
implementation. This can be useful for testing purposes, but for
real-life usage the guest might just as well use gstreamer
itself.

If we want to make sure that there is hardware on the host side, then
an API like libva might make more sense, but it would be more
complicated and may not support all hardware (I don't know if the V4L2
backends are usable for instance).
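
As a hedged sketch of that trade-off: with libavcodec the same decode
context can be pointed at host hardware by attaching a hardware device
context. VAAPI is used below purely as an example, and whether that
covers a given platform is exactly the open question; none of this is
existing QEMU or virtio-video code.

  #include <libavcodec/avcodec.h>
  #include <libavutil/hwcontext.h>

  /* Open an H.264 decoder that tries to use VAAPI on the host.
   * Sketch only: error handling and output-format negotiation omitted.
   * NB: a get_format callback selecting AV_PIX_FMT_VAAPI is also needed
   * for real hardware decode; see FFmpeg's doc/examples/hw_decode.c. */
  static AVCodecContext *open_h264_decoder_vaapi(void)
  {
      const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
      AVCodecContext *ctx  = avcodec_alloc_context3(codec);

      AVBufferRef *hw_dev = NULL;
      if (av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VAAPI,
                                 NULL, NULL, 0) == 0) {
          ctx->hw_device_ctx = av_buffer_ref(hw_dev);   /* decode on the GPU */
          av_buffer_unref(&hw_dev);
      }
      /* else: libavcodec falls back to software decoding */

      avcodec_open2(ctx, codec, NULL);
      return ctx;
  }

Decoded frames then come back as VAAPI surfaces, and
av_hwframe_transfer_data() can copy them to system memory when the
guest needs a plain buffer.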

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-20  3:19                       ` Alexandre Courbot
  0 siblings, 0 replies; 56+ messages in thread
From: Alexandre Courbot @ 2020-05-20  3:19 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Samiullah Khawaja, Saket Sinha, Alex Lau, Kiran Pawar,
	virtio-dev, Michael S. Tsirkin, qemu-devel, Tomasz Figa,
	Keiichi Watanabe, Gerd Hoffmann, Hans Verkuil, Dmitry Sepp,
	Emil Velikov, Pawel Osciak, Linux Media Mailing List

On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > Hi Nicolas,
> >
> > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > nicolas@ndufresne.ca
> > > wrote:
> > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > Hi,
> > > >
> > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > implement paravirtualized video codec devices.
> > > >
> > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > dmitry.sepp@opensynergy.com
> > > > >
> > > > wrote:
> > > > > Hi Saket,
> > > > >
> > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > Hi Keiichi,
> > > > > >
> > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > of the virtio-video device only for testing,
> > > > >
> > > > > That was my understanding as well.
> > > >
> > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > decoder interface [1], which are also used by other video drivers on
> > > > Linux.
> > > > The difference between vicodec and actual device drivers is that
> > > > vicodec performs decoding in the kernel space without using special
> > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > space which provides the same interface with actual video drivers.
> > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > as well and VM gets access to a paravirtualized video device.
> > > >
> > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > allow us to test the virtio-video driver without hardware requirement.
> > > >
> > > > [1]
> > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > >
> > > >
> > > > > > which instead can be used with multiple use cases such as -
> > > > > >
> > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > video frames input through actual HW camera attached to Host.
> > > > >
> > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > support capture-only streams like camera as well, but later the decision was
> > > > > made upstream that camera should be implemented as separate device type. We
> > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > patch though.
> > > > >
> > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > with selective video streams from actual HW video devices.
> > > > >
> > > > > We do support this in our device implementation. But spec in general has no
> > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > enough
> > > > > to provide abstraction on top of several HW devices.
> > > > >
> > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > can also be used inside the VM userspace after getting access to
> > > > > > paravirtualized HW camera devices .
> > > >
> > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > can translate virtio-video requests to this API, we can easily support
> > > > multiple platforms.
> > > > I'm not sure how feasible it is though, as I have no experience of
> > > > using this API by myself...
> > >
> > > Not sure which API you aim exactly, but what one need to remember is that
> > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > security would otherwise be very hard to guaranty. The other driver using same
> > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > firmware blobs though).
> > >
> > > Having bridges between virtio-video, qemu and some abstraction library like
> > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > type of HW accelerated decoder or if you need to virtualized something
> > > proprietary (like NVDEC). Please shout if you need help.
> > >
> >
> > Yeah, I meant we should map virtio-video commands to a set of
> > abstracted userspace APIs to avoid having many platform-dependent code
> > in QEMU.
> > This is the same with what we implemented in crosvm, a VMM on
> > ChromiumOS. Crosvm's video device translates virtio-video commands
> > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > stateful and V4L2 stateless. Unfortunately, since our library is
> > highly depending on Chrome, we cannot reuse this for QEMU.
> >
> > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > APIs in my previous link weren't for this purpose.
> > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > abstracted video decoding APIs? Then, I may be able to think about how
> > virtio-video protocols can be mapped to them.
>
> The FFMpeg API for libavcodec can be found here:
>
>   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
>
> GStreamer does not really have such a low level CODEC API. So while
> it's possible to use it (Wine project uses it for it's parsers as an
> example, and Firefox use to have CODEC support wrapping GStreamer
> CODEC), there will not be any one-to-one mapping. GStreamer is often
> chosen as it's LGPL code does not carry directly any patented
> implementation. It instead rely on plugins, which maybe provided as
> third party, allowing to distribute your project while giving uses the
> option to install potentially non-free technologies.
>
> But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> less) as:
>
>   - Push GstCaps describing the stream format
>   - Push bitstream buffer on sink pad
>   - When ready, buffers will be pushed through the push function
>     callback on src pad
>
> Of course nothing prevent adding something like the vda abstraction in
> qemu and make this multi-backend capable.

My understanding is that we don't need a particularly low-level API to
interact with. The host virtual device receives all of the encoded
data, and can thus easily reconstruct the original stream (minus the
container) and pass it to ffmpeg/gstreamer. So we can be pretty
high-level here.
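
Just to illustrate how little host-side plumbing that implies, here is a
rough, untested sketch of such a path using an appsrc/decodebin/appsink
pipeline, following the push-bitstream / pull-frame model described
above. The helper names, the "h264parse" example and the pipeline string
are assumptions for illustration only; just the GStreamer calls
themselves are the regular 1.x API (real code would also set caps on the
appsrc and watch the bus for errors):

  #include <gst/gst.h>
  #include <gst/app/gstappsrc.h>
  #include <gst/app/gstappsink.h>

  /* Illustrative helpers only; assumes gst_init() was already called. */
  static GstElement *
  create_decode_pipeline(void)
  {
      GError *err = NULL;
      GstElement *pipeline = gst_parse_launch(
          "appsrc name=src ! h264parse ! decodebin ! "
          "videoconvert ! appsink name=sink", &err);

      if (!pipeline) {
          g_printerr("pipeline error: %s\n", err->message);
          g_clear_error(&err);
          return NULL;
      }
      gst_element_set_state(pipeline, GST_STATE_PLAYING);
      return pipeline;
  }

  /* Feed one encoded access unit coming from the guest. */
  static void
  push_access_unit(GstElement *pipeline, const guint8 *data, gsize size)
  {
      GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
      GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);

      gst_buffer_fill(buf, 0, data, size);
      /* appsrc takes ownership of the buffer. */
      gst_app_src_push_buffer(GST_APP_SRC(src), buf);
      gst_object_unref(src);
  }

  /* Blocks until a decoded frame is available (NULL on EOS/error). */
  static GstSample *
  pull_decoded_frame(GstElement *pipeline)
  {
      GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
      GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));

      gst_object_unref(sink);
      return sample;
  }

Whether decodebin ends up picking a software or a hardware decoder is
then purely a host-side policy question.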

Now the choice of API will also determine whether we want to allow
emulation of codec devices, or whether we stay on a purely
para-virtual track. If we use e.g. gstreamer, then the host can
provide a virtual device that is backed by a purely software
implementation. This can be useful for testing purposes, but for
real-life usage the guest might as well use gstreamer itself.

If we want to make sure that there is hardware on the host side, then
an API like libva might make more sense, but it would be more
complicated and may not support all hardware (I don't know if the V4L2
backends are usable for instance).
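
If hardware presence is the concern, with libva it is at least cheap to
check up front whether the host driver advertises any decode
entrypoints. A rough, untested sketch of such a check; the render node
path and the function name are examples only, and a purely software VA
driver could in principle still advertise VLD:

  #include <fcntl.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <va/va.h>
  #include <va/va_drm.h>

  /* Rough sketch: returns 1 if the host VA driver advertises at least
   * one VLD (slice-level decode) entrypoint for any profile. */
  static int host_has_va_decoder(const char *render_node)
  {
      int found = 0;
      int fd = open(render_node, O_RDWR);
      if (fd < 0)
          return 0;

      VADisplay dpy = vaGetDisplayDRM(fd);
      int major, minor;
      if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
          close(fd);
          return 0;
      }

      int num_profiles = vaMaxNumProfiles(dpy);
      int max_entrypoints = vaMaxNumEntrypoints(dpy);
      VAProfile *profiles = malloc(num_profiles * sizeof(*profiles));
      VAEntrypoint *eps = malloc(max_entrypoints * sizeof(*eps));

      if (profiles && eps &&
          vaQueryConfigProfiles(dpy, profiles, &num_profiles) == VA_STATUS_SUCCESS) {
          for (int i = 0; i < num_profiles && !found; i++) {
              int num_ep = max_entrypoints;
              if (vaQueryConfigEntrypoints(dpy, profiles[i], eps,
                                           &num_ep) != VA_STATUS_SUCCESS)
                  continue;
              for (int j = 0; j < num_ep; j++)
                  if (eps[j] == VAEntrypointVLD)
                      found = 1;
          }
      }

      free(profiles);
      free(eps);
      vaTerminate(dpy);
      close(fd);
      return found;
  }

Something like host_has_va_decoder("/dev/dri/renderD128") could then
gate whether the device model exposes the virtio-video device at all.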


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-20  3:19                       ` Alexandre Courbot
@ 2020-05-20 16:21                         ` Nicolas Dufresne
  -1 siblings, 0 replies; 56+ messages in thread
From: Nicolas Dufresne @ 2020-05-20 16:21 UTC (permalink / raw)
  To: Alexandre Courbot
  Cc: Keiichi Watanabe, Dmitry Sepp, Saket Sinha, Kiran Pawar,
	Samiullah Khawaja, qemu-devel, virtio-dev, Gerd Hoffmann,
	Michael S. Tsirkin, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Wednesday, May 20, 2020 at 12:19 +0900, Alexandre Courbot wrote:
> On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > Hi Nicolas,
> > > 
> > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > nicolas@ndufresne.ca
> > > > wrote:
> > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > Hi,
> > > > > 
> > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > implement paravirtualized video codec devices.
> > > > > 
> > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > dmitry.sepp@opensynergy.com
> > > > > wrote:
> > > > > > Hi Saket,
> > > > > > 
> > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > Hi Keiichi,
> > > > > > > 
> > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > of the virtio-video device only for testing,
> > > > > > 
> > > > > > That was my understanding as well.
> > > > > 
> > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > decoder interface [1], which are also used by other video drivers on
> > > > > Linux.
> > > > > The difference between vicodec and actual device drivers is that
> > > > > vicodec performs decoding in the kernel space without using special
> > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > space which provides the same interface with actual video drivers.
> > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > as well and VM gets access to a paravirtualized video device.
> > > > > 
> > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > 
> > > > > [1]
> > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > 
> > > > > 
> > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > 
> > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > 
> > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > patch though.
> > > > > > 
> > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > with selective video streams from actual HW video devices.
> > > > > > 
> > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > enough
> > > > > > to provide abstraction on top of several HW devices.
> > > > > > 
> > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > paravirtualized HW camera devices .
> > > > > 
> > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > can translate virtio-video requests to this API, we can easily support
> > > > > multiple platforms.
> > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > using this API by myself...
> > > > 
> > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > firmware blobs though).
> > > > 
> > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > type of HW accelerated decoder or if you need to virtualized something
> > > > proprietary (like NVDEC). Please shout if you need help.
> > > > 
> > > 
> > > Yeah, I meant we should map virtio-video commands to a set of
> > > abstracted userspace APIs to avoid having many platform-dependent code
> > > in QEMU.
> > > This is the same with what we implemented in crosvm, a VMM on
> > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > 
> > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > APIs in my previous link weren't for this purpose.
> > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > abstracted video decoding APIs? Then, I may be able to think about how
> > > virtio-video protocols can be mapped to them.
> > 
> > The FFMpeg API for libavcodec can be found here:
> > 
> >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > 
> > GStreamer does not really have such a low level CODEC API. So while
> > it's possible to use it (Wine project uses it for it's parsers as an
> > example, and Firefox use to have CODEC support wrapping GStreamer
> > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > chosen as it's LGPL code does not carry directly any patented
> > implementation. It instead rely on plugins, which maybe provided as
> > third party, allowing to distribute your project while giving uses the
> > option to install potentially non-free technologies.
> > 
> > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > less) as:
> > 
> >   - Push GstCaps describing the stream format
> >   - Push bitstream buffer on sink pad
> >   - When ready, buffers will be pushed through the push function
> >     callback on src pad
> > 
> > Of course nothing prevent adding something like the vda abstraction in
> > qemu and make this multi-backend capable.
> 
> My understanding is that we don't need a particularly low-level API to
> interact with. The host virtual device is receiving the whole encoded
> data, and can thus easily reconstruct the original stream (minus the
> container) and pass it to ffmpeg/gstreamer. So we can be pretty
> high-level here.
> 
> Now the choice of API will also determine whether we want to allow
> emulation of codec devices, or whether we stay on a purely
> para-virtual track. If we use e.g. gstreamer, then the host can
> provide a virtual device that is backed by a purely software
> implementation. This can be useful for testing purposes, but for
> real-life usage the guest would be just as well using gstreamer
> itself.

Agreed.

> 
> If we want to make sure that there is hardware on the host side, then
> an API like libva might make more sense, but it would be more
> complicated and may not support all hardware (I don't know if the V4L2
> backends are usable for instance).

To bring VAAPI into Qemu directly you'd have to introduce a bitstream
parser, DPB management and other CODEC-specific bits. I cannot speak
for the project, but that's re-inventing the wheel again, with very
little gain. It is best to open the discussion with them early.

Note that it's relatively simple in both frameworks to choose only the
HW-accelerated CODECs. In ffmpeg, HW-accelerated CODECs can only be used
with a HWContext, so your wrapper needs to know the specific HWContext
for the specific accelerator. In GStreamer, since 1.16, we added
metadata that lets the user know which decoder is hardware accelerated.
(This is usually used to disable HW acceleration at the moment.)
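
To make that concrete on the ffmpeg side, a rough, untested sketch of
opening a decoder only when it can be backed by a hardware device
context; the helper name and the choice of VAAPI as device type are
examples only, while avcodec_get_hw_config() and
av_hwdevice_ctx_create() are the public libavcodec/libavutil API
(FFmpeg 4.0 or newer):

  #include <errno.h>
  #include <libavcodec/avcodec.h>
  #include <libavutil/hwcontext.h>

  /* Rough sketch: open a decoder and refuse to fall back to a pure
   * software decode path. */
  static int open_hw_decoder(AVCodecContext **out, enum AVCodecID id)
  {
      const AVCodec *codec = avcodec_find_decoder(id);
      if (!codec)
          return AVERROR_DECODER_NOT_FOUND;

      /* Make sure this decoder really has a VAAPI hardware config. */
      int supported = 0;
      for (int i = 0;; i++) {
          const AVCodecHWConfig *cfg = avcodec_get_hw_config(codec, i);
          if (!cfg)
              break;
          if (cfg->device_type == AV_HWDEVICE_TYPE_VAAPI &&
              (cfg->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX))
              supported = 1;
      }
      if (!supported)
          return AVERROR(ENOSYS);

      AVCodecContext *ctx = avcodec_alloc_context3(codec);
      if (!ctx)
          return AVERROR(ENOMEM);

      AVBufferRef *hw_dev = NULL;
      int ret = av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VAAPI,
                                       NULL, NULL, 0);
      if (ret < 0) {
          avcodec_free_context(&ctx);
          return ret;
      }
      ctx->hw_device_ctx = av_buffer_ref(hw_dev);
      av_buffer_unref(&hw_dev);          /* ctx keeps its own reference */

      ret = avcodec_open2(ctx, codec, NULL);
      if (ret < 0) {
          avcodec_free_context(&ctx);
          return ret;
      }
      *out = ctx;
      return 0;
  }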


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-20 16:21                         ` Nicolas Dufresne
  (?)
@ 2020-05-20 16:27                           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 56+ messages in thread
From: Michael S. Tsirkin @ 2020-05-20 16:27 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Alexandre Courbot, Keiichi Watanabe, Dmitry Sepp, Saket Sinha,
	Kiran Pawar, Samiullah Khawaja, qemu-devel, virtio-dev,
	Gerd Hoffmann, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Wed, May 20, 2020 at 12:21:05PM -0400, Nicolas Dufresne wrote:
> Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > Hi Nicolas,
> > > > 
> > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > nicolas@ndufresne.ca
> > > > > wrote:
> > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > Hi,
> > > > > > 
> > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > implement paravirtualized video codec devices.
> > > > > > 
> > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > wrote:
> > > > > > > Hi Saket,
> > > > > > > 
> > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > Hi Keiichi,
> > > > > > > > 
> > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > of the virtio-video device only for testing,
> > > > > > > 
> > > > > > > That was my understanding as well.
> > > > > > 
> > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > Linux.
> > > > > > The difference between vicodec and actual device drivers is that
> > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > space which provides the same interface with actual video drivers.
> > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > > 
> > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > > 
> > > > > > [1]
> > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > > 
> > > > > > 
> > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > > 
> > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > > 
> > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > patch though.
> > > > > > > 
> > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > with selective video streams from actual HW video devices.
> > > > > > > 
> > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > enough
> > > > > > > to provide abstraction on top of several HW devices.
> > > > > > > 
> > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > paravirtualized HW camera devices .
> > > > > > 
> > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > multiple platforms.
> > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > using this API by myself...
> > > > > 
> > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > firmware blobs though).
> > > > > 
> > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > > 
> > > > 
> > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > in QEMU.
> > > > This is the same with what we implemented in crosvm, a VMM on
> > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > > 
> > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > APIs in my previous link weren't for this purpose.
> > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > virtio-video protocols can be mapped to them.
> > > 
> > > The FFMpeg API for libavcodec can be found here:
> > > 
> > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > > 
> > > GStreamer does not really have such a low level CODEC API. So while
> > > it's possible to use it (Wine project uses it for it's parsers as an
> > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > chosen as it's LGPL code does not carry directly any patented
> > > implementation. It instead rely on plugins, which maybe provided as
> > > third party, allowing to distribute your project while giving uses the
> > > option to install potentially non-free technologies.
> > > 
> > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > less) as:
> > > 
> > >   - Push GstCaps describing the stream format
> > >   - Push bitstream buffer on sink pad
> > >   - When ready, buffers will be pushed through the push function
> > >     callback on src pad
> > > 
> > > Of course nothing prevent adding something like the vda abstraction in
> > > qemu and make this multi-backend capable.
> > 
> > My understanding is that we don't need a particularly low-level API to
> > interact with. The host virtual device is receiving the whole encoded
> > data, and can thus easily reconstruct the original stream (minus the
> > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > high-level here.
> > 
> > Now the choice of API will also determine whether we want to allow
> > emulation of codec devices, or whether we stay on a purely
> > para-virtual track. If we use e.g. gstreamer, then the host can
> > provide a virtual device that is backed by a purely software
> > implementation. This can be useful for testing purposes, but for
> > real-life usage the guest would be just as well using gstreamer
> > itself.
> 
> Agreed.
> 
> > 
> > If we want to make sure that there is hardware on the host side, then
> > an API like libva might make more sense, but it would be more
> > complicated and may not support all hardware (I don't know if the V4L2
> > backends are usable for instance).
> 
> To bring VAAPI into Qemu directly you'd have to introduce bitstream
> parser, DPB management and other CODEC specific bits. I cannot speak
> for the project, but that's re-inventing the wheel again with very
> little gain. Best is to open the discussion with them early.
> 
> Note that it's relatively simple in both framework to only choose HW
> accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> with HWContext, so your wrapper need to know specific HWContext for the
> specific accelerator. In GStreamer, since 1.16, we add a metadata that
> let the user know which decoder is hardware accelerated. (This is
> usually used to disable HW acceleration at the moment).

I don't know too much about the options here, unfortunately. But I
wonder about the security implications of all these approaches.

We have this issue with other cases such as libusb, where the
library we are using does not expect hostile input and therefore does
not validate it fully.
This is often the case for pass-through approaches.
Do all of the options here expect untrusted input?


-- 
MST


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-20 16:27                           ` Michael S. Tsirkin
  0 siblings, 0 replies; 56+ messages in thread
From: Michael S. Tsirkin @ 2020-05-20 16:27 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Samiullah Khawaja, Saket Sinha, Alex Lau, Kiran Pawar,
	Alexandre Courbot, virtio-dev, qemu-devel, Tomasz Figa,
	Keiichi Watanabe, Gerd Hoffmann, Hans Verkuil, Dmitry Sepp,
	Emil Velikov, Pawel Osciak, Linux Media Mailing List

On Wed, May 20, 2020 at 12:21:05PM -0400, Nicolas Dufresne wrote:
> Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > Hi Nicolas,
> > > > 
> > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > nicolas@ndufresne.ca
> > > > > wrote:
> > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > Hi,
> > > > > > 
> > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > implement paravirtualized video codec devices.
> > > > > > 
> > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > wrote:
> > > > > > > Hi Saket,
> > > > > > > 
> > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > Hi Keiichi,
> > > > > > > > 
> > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > of the virtio-video device only for testing,
> > > > > > > 
> > > > > > > That was my understanding as well.
> > > > > > 
> > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > Linux.
> > > > > > The difference between vicodec and actual device drivers is that
> > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > space which provides the same interface with actual video drivers.
> > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > > 
> > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > > 
> > > > > > [1]
> > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > > 
> > > > > > 
> > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > > 
> > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > > 
> > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > patch though.
> > > > > > > 
> > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > with selective video streams from actual HW video devices.
> > > > > > > 
> > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > enough
> > > > > > > to provide abstraction on top of several HW devices.
> > > > > > > 
> > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > paravirtualized HW camera devices .
> > > > > > 
> > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > multiple platforms.
> > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > using this API by myself...
> > > > > 
> > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > firmware blobs though).
> > > > > 
> > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > > 
> > > > 
> > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > in QEMU.
> > > > This is the same with what we implemented in crosvm, a VMM on
> > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > > 
> > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > APIs in my previous link weren't for this purpose.
> > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > virtio-video protocols can be mapped to them.
> > > 
> > > The FFMpeg API for libavcodec can be found here:
> > > 
> > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > > 
> > > GStreamer does not really have such a low level CODEC API. So while
> > > it's possible to use it (Wine project uses it for it's parsers as an
> > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > chosen as it's LGPL code does not carry directly any patented
> > > implementation. It instead rely on plugins, which maybe provided as
> > > third party, allowing to distribute your project while giving uses the
> > > option to install potentially non-free technologies.
> > > 
> > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > less) as:
> > > 
> > >   - Push GstCaps describing the stream format
> > >   - Push bitstream buffer on sink pad
> > >   - When ready, buffers will be pushed through the push function
> > >     callback on src pad
> > > 
> > > Of course nothing prevent adding something like the vda abstraction in
> > > qemu and make this multi-backend capable.
> > 
> > My understanding is that we don't need a particularly low-level API to
> > interact with. The host virtual device is receiving the whole encoded
> > data, and can thus easily reconstruct the original stream (minus the
> > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > high-level here.
> > 
> > Now the choice of API will also determine whether we want to allow
> > emulation of codec devices, or whether we stay on a purely
> > para-virtual track. If we use e.g. gstreamer, then the host can
> > provide a virtual device that is backed by a purely software
> > implementation. This can be useful for testing purposes, but for
> > real-life usage the guest would be just as well using gstreamer
> > itself.
> 
> Agreed.
> 
> > 
> > If we want to make sure that there is hardware on the host side, then
> > an API like libva might make more sense, but it would be more
> > complicated and may not support all hardware (I don't know if the V4L2
> > backends are usable for instance).
> 
> To bring VAAPI into Qemu directly you'd have to introduce bitstream
> parser, DPB management and other CODEC specific bits. I cannot speak
> for the project, but that's re-inventing the wheel again with very
> little gain. Best is to open the discussion with them early.
> 
> Note that it's relatively simple in both framework to only choose HW
> accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> with HWContext, so your wrapper need to know specific HWContext for the
> specific accelerator. In GStreamer, since 1.16, we add a metadata that
> let the user know which decoder is hardware accelerated. (This is
> usually used to disable HW acceleration at the moment).

I don't know too much about the options here, unfortunately. But I
wonder about security implications of all these approaches.

We have this issue with other cases such as libusb where the
library we are using is not expecting hostile input so does
not validate it fully.
This is often the case for pass-through approaches.
Do all the options here expect untrusted input?


-- 
MST



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-20 16:27                           ` Michael S. Tsirkin
  0 siblings, 0 replies; 56+ messages in thread
From: Michael S. Tsirkin @ 2020-05-20 16:27 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Alexandre Courbot, Keiichi Watanabe, Dmitry Sepp, Saket Sinha,
	Kiran Pawar, Samiullah Khawaja, qemu-devel, virtio-dev,
	Gerd Hoffmann, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Wed, May 20, 2020 at 12:21:05PM -0400, Nicolas Dufresne wrote:
> Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > Hi Nicolas,
> > > > 
> > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > nicolas@ndufresne.ca
> > > > > wrote:
> > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > Hi,
> > > > > > 
> > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > implement paravirtualized video codec devices.
> > > > > > 
> > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > wrote:
> > > > > > > Hi Saket,
> > > > > > > 
> > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > Hi Keiichi,
> > > > > > > > 
> > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > of the virtio-video device only for testing,
> > > > > > > 
> > > > > > > That was my understanding as well.
> > > > > > 
> > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > Linux.
> > > > > > The difference between vicodec and actual device drivers is that
> > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > space which provides the same interface with actual video drivers.
> > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > > 
> > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > > 
> > > > > > [1]
> > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > > 
> > > > > > 
> > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > > 
> > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > > 
> > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > patch though.
> > > > > > > 
> > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > with selective video streams from actual HW video devices.
> > > > > > > 
> > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > enough
> > > > > > > to provide abstraction on top of several HW devices.
> > > > > > > 
> > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > paravirtualized HW camera devices .
> > > > > > 
> > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > multiple platforms.
> > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > using this API by myself...
> > > > > 
> > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > firmware blobs though).
> > > > > 
> > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > > 
> > > > 
> > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > in QEMU.
> > > > This is the same with what we implemented in crosvm, a VMM on
> > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > > 
> > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > APIs in my previous link weren't for this purpose.
> > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > virtio-video protocols can be mapped to them.
> > > 
> > > The FFMpeg API for libavcodec can be found here:
> > > 
> > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > > 
> > > GStreamer does not really have such a low level CODEC API. So while
> > > it's possible to use it (Wine project uses it for it's parsers as an
> > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > chosen as it's LGPL code does not carry directly any patented
> > > implementation. It instead rely on plugins, which maybe provided as
> > > third party, allowing to distribute your project while giving uses the
> > > option to install potentially non-free technologies.
> > > 
> > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > less) as:
> > > 
> > >   - Push GstCaps describing the stream format
> > >   - Push bitstream buffer on sink pad
> > >   - When ready, buffers will be pushed through the push function
> > >     callback on src pad
> > > 
> > > Of course nothing prevent adding something like the vda abstraction in
> > > qemu and make this multi-backend capable.
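
As a rough illustration of that three-step pattern, a minimal sketch
built from stock appsrc/decodebin/appsink elements; the element chain,
the H.264 caps and the buffer plumbing are illustrative assumptions,
not a proposed backend design:

    #include <gst/gst.h>
    #include <gst/app/gstappsrc.h>
    #include <gst/app/gstappsink.h>

    static GstFlowReturn on_decoded(GstAppSink *sink, gpointer user_data)
    {
        /* Step 3: decoded buffers arrive through this callback on the
         * src side. */
        GstSample *sample = gst_app_sink_pull_sample(sink);
        if (sample) {
            GstBuffer *frame = gst_sample_get_buffer(sample);
            g_print("decoded frame of %" G_GSIZE_FORMAT " bytes\n",
                    gst_buffer_get_size(frame));
            gst_sample_unref(sample);
        }
        return GST_FLOW_OK;
    }

    int main(int argc, char **argv)
    {
        gst_init(&argc, &argv);

        GError *err = NULL;
        GstElement *pipe = gst_parse_launch(
            "appsrc name=src ! h264parse ! decodebin ! videoconvert ! "
            "appsink name=sink emit-signals=true", &err);
        if (!pipe) {
            g_printerr("failed to build pipeline: %s\n", err->message);
            return 1;
        }

        GstElement *src  = gst_bin_get_by_name(GST_BIN(pipe), "src");
        GstElement *sink = gst_bin_get_by_name(GST_BIN(pipe), "sink");

        /* Step 1: push GstCaps describing the stream format. */
        GstCaps *caps = gst_caps_from_string(
            "video/x-h264,stream-format=byte-stream,alignment=au");
        gst_app_src_set_caps(GST_APP_SRC(src), caps);
        gst_caps_unref(caps);

        g_signal_connect(sink, "new-sample", G_CALLBACK(on_decoded), NULL);
        gst_element_set_state(pipe, GST_STATE_PLAYING);

        /* Step 2: push bitstream buffers on the sink side (one empty
         * placeholder buffer shown here; a real backend would fill it
         * with an encoded access unit received from the guest). */
        GstBuffer *au = gst_buffer_new_allocate(NULL, 4096, NULL);
        gst_app_src_push_buffer(GST_APP_SRC(src), au);
        gst_app_src_end_of_stream(GST_APP_SRC(src));

        /* A real backend would run a GMainLoop / bus watch here. */
        gst_element_set_state(pipe, GST_STATE_NULL);
        gst_object_unref(src);
        gst_object_unref(sink);
        gst_object_unref(pipe);
        return 0;
    }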
> > 
> > My understanding is that we don't need a particularly low-level API to
> > interact with. The host virtual device is receiving the whole encoded
> > data, and can thus easily reconstruct the original stream (minus the
> > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > high-level here.
> > 
> > Now the choice of API will also determine whether we want to allow
> > emulation of codec devices, or whether we stay on a purely
> > para-virtual track. If we use e.g. gstreamer, then the host can
> > provide a virtual device that is backed by a purely software
> > implementation. This can be useful for testing purposes, but for
> > real-life usage the guest would be just as well using gstreamer
> > itself.
> 
> Agreed.
> 
> > 
> > If we want to make sure that there is hardware on the host side, then
> > an API like libva might make more sense, but it would be more
> > complicated and may not support all hardware (I don't know if the V4L2
> > backends are usable for instance).
> 
> To bring VAAPI into Qemu directly you'd have to introduce bitstream
> parser, DPB management and other CODEC specific bits. I cannot speak
> for the project, but that's re-inventing the wheel again with very
> little gain. Best is to open the discussion with them early.
> 
> Note that it's relatively simple in both framework to only choose HW
> accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> with HWContext, so your wrapper need to know specific HWContext for the
> specific accelerator. In GStreamer, since 1.16, we add a metadata that
> let the user know which decoder is hardware accelerated. (This is
> usually used to disable HW acceleration at the moment).
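
A minimal sketch of such a GStreamer-side check, assuming a 1.16+
installation where hardware-accelerated decoders advertise "Hardware"
in their element klass metadata (the filtering policy itself is an
assumption):

    #include <string.h>
    #include <gst/gst.h>

    int main(int argc, char **argv)
    {
        gst_init(&argc, &argv);

        /* List video decoder factories and keep only those whose klass
         * metadata advertises "Hardware". */
        GList *decoders = gst_element_factory_list_get_elements(
            GST_ELEMENT_FACTORY_TYPE_DECODER |
            GST_ELEMENT_FACTORY_TYPE_MEDIA_VIDEO,
            GST_RANK_MARGINAL);

        for (GList *l = decoders; l != NULL; l = l->next) {
            GstElementFactory *f = GST_ELEMENT_FACTORY(l->data);
            const gchar *klass =
                gst_element_factory_get_metadata(f, GST_ELEMENT_METADATA_KLASS);

            if (klass != NULL && strstr(klass, "Hardware") != NULL)
                g_print("hardware decoder: %s (%s)\n",
                        GST_OBJECT_NAME(f), klass);
        }

        gst_plugin_feature_list_free(decoders);
        return 0;
    }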

I don't know too much about the options here, unfortunately. But I
wonder about security implications of all these approaches.

We have this issue with other cases such as libusb where the
library we are using is not expecting hostile input so does
not validate it fully.
This is often the case for pass-through approaches.
Do all the options here expect untrusted input?


-- 
MST


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-20 16:27                           ` Michael S. Tsirkin
@ 2020-05-20 16:56                             ` Nicolas Dufresne
  -1 siblings, 0 replies; 56+ messages in thread
From: Nicolas Dufresne @ 2020-05-20 16:56 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Alexandre Courbot, Keiichi Watanabe, Dmitry Sepp, Saket Sinha,
	Kiran Pawar, Samiullah Khawaja, qemu-devel, virtio-dev,
	Gerd Hoffmann, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Wednesday, 20 May 2020 at 12:27 -0400, Michael S. Tsirkin wrote:
> On Wed, May 20, 2020 at 12:21:05PM -0400, Nicolas Dufresne wrote:
> > Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > > Hi Nicolas,
> > > > > 
> > > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > > nicolas@ndufresne.ca
> > > > > > wrote:
> > > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > > Hi,
> > > > > > > 
> > > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > > implement paravirtualized video codec devices.
> > > > > > > 
> > > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > > dmitry.sepp@opensynergy.com
> > > > > > > wrote:
> > > > > > > > Hi Saket,
> > > > > > > > 
> > > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > > Hi Keiichi,
> > > > > > > > > 
> > > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > > of the virtio-video device only for testing,
> > > > > > > > 
> > > > > > > > That was my understanding as well.
> > > > > > > 
> > > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > > Linux.
> > > > > > > The difference between vicodec and actual device drivers is that
> > > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > > space which provides the same interface with actual video drivers.
> > > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > > > 
> > > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > > > 
> > > > > > > [1]
> > > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > > > 
> > > > > > > 
> > > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > > > 
> > > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > > > 
> > > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > > patch though.
> > > > > > > > 
> > > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > > with selective video streams from actual HW video devices.
> > > > > > > > 
> > > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > > enough
> > > > > > > > to provide abstraction on top of several HW devices.
> > > > > > > > 
> > > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > > paravirtualized HW camera devices .
> > > > > > > 
> > > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > > multiple platforms.
> > > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > > using this API by myself...
> > > > > > 
> > > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > > firmware blobs though).
> > > > > > 
> > > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > > > 
> > > > > 
> > > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > > in QEMU.
> > > > > This is the same with what we implemented in crosvm, a VMM on
> > > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > > > 
> > > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > > APIs in my previous link weren't for this purpose.
> > > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > > virtio-video protocols can be mapped to them.
> > > > 
> > > > The FFMpeg API for libavcodec can be found here:
> > > > 
> > > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > > > 
> > > > GStreamer does not really have such a low level CODEC API. So while
> > > > it's possible to use it (Wine project uses it for it's parsers as an
> > > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > > chosen as it's LGPL code does not carry directly any patented
> > > > implementation. It instead rely on plugins, which maybe provided as
> > > > third party, allowing to distribute your project while giving uses the
> > > > option to install potentially non-free technologies.
> > > > 
> > > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > > less) as:
> > > > 
> > > >   - Push GstCaps describing the stream format
> > > >   - Push bitstream buffer on sink pad
> > > >   - When ready, buffers will be pushed through the push function
> > > >     callback on src pad
> > > > 
> > > > Of course nothing prevent adding something like the vda abstraction in
> > > > qemu and make this multi-backend capable.
> > > 
> > > My understanding is that we don't need a particularly low-level API to
> > > interact with. The host virtual device is receiving the whole encoded
> > > data, and can thus easily reconstruct the original stream (minus the
> > > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > > high-level here.
> > > 
> > > Now the choice of API will also determine whether we want to allow
> > > emulation of codec devices, or whether we stay on a purely
> > > para-virtual track. If we use e.g. gstreamer, then the host can
> > > provide a virtual device that is backed by a purely software
> > > implementation. This can be useful for testing purposes, but for
> > > real-life usage the guest would be just as well using gstreamer
> > > itself.
> > 
> > Agreed.
> > 
> > > If we want to make sure that there is hardware on the host side, then
> > > an API like libva might make more sense, but it would be more
> > > complicated and may not support all hardware (I don't know if the V4L2
> > > backends are usable for instance).
> > 
> > To bring VAAPI into Qemu directly you'd have to introduce bitstream
> > parser, DPB management and other CODEC specific bits. I cannot speak
> > for the project, but that's re-inventing the wheel again with very
> > little gain. Best is to open the discussion with them early.
> > 
> > Note that it's relatively simple in both framework to only choose HW
> > accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> > with HWContext, so your wrapper need to know specific HWContext for the
> > specific accelerator. In GStreamer, since 1.16, we add a metadata that
> > let the user know which decoder is hardware accelerated. (This is
> > usually used to disable HW acceleration at the moment).
> 
> I don't know too much about the options here, unfortunately. But I
> wonder about security implications of all these approaches.
> 
> We have this issue with other cases such as libusb where the
> library we are using is not expecting hostile input so does
> not validate it fully.
> This is often the case for pass-through approaches.
> Do all the options here expect untrusted input?

Both projects care about this as much as the ChromeOS backend does.
FFMPEG is notably the main backend in Firefox, and GStreamer is used
in many embedded applications. We haven't started a complete rewrite
in Rust (yet) though.

Bitstream parsers (which are strictly required for handling VAAPI and
V4L2 stateless CODECs through virtio-video) will always have potential
security issues: they deal with user-supplied bitstreams and a very
large number of parameters. A Rust rewrite only protects you from
takeover through buffer overflows; it does not mean your code won't
still have a few crashers triggered by a hostile bitstream. The
logical thing to do if this gets integrated into QEMU would be to
sandbox that part. If you already virtualize your GPU, you likely have
larger issues, as on many GPUs malicious shaders can freeze a few GPU
cores for multiple seconds (or forever if you have older GPU drivers
or a GPU without preemption/reset support).

Writing a backend from scratch just for QEMU will likely see little or
no maintenance, as it would be very niche within the project. Relying
strictly on the ChromeOS backend would mean a world without HEVC and
without interlaced content, but in my view that is still better than
redoing it. It is unclear, though, whether Google will maintain a
stable API there, something that GStreamer and FFMPEG seem to do well
now. It was also mentioned in this discussion that it was not really
an option, but I haven't yet captured why.

There are plenty of approaches that could be taken, of course. One
could completely abstract that backend and use PipeWire to stream the
buffers between a sandboxed CODEC manager service and your QEMU
instance (the codec handling could even run in a PipeWire real-time
node to guarantee the lowest latency). Or you could go with a custom,
more targeted design. I think that is all open to whoever implements
it and to their requirements. It also depends on the direction the
QEMU project wants to take for resource management (or whether that is
delegated somehow, I don't know). For CODECs, how resources are made
available can vary quite a bit.

Some V4L2 stateful drivers offer only 1 or 2 instances, which cannot
be multiplexed. The highest resolution and frame rate might also only
be achievable for a single stream. Most VAAPI / V4L2 stateless drivers
can be multiplexed without bound, but will no longer operate in real
time if you have too many streams. So from a QEMU point of view, I
think the backend should expose a few constraints, which in a
real-life deployment will end up having to be configured manually. All
of this needs userspace. Basically, where I want to get to is that the
kernel will never fully offer this service.
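
To make that concrete, here is a minimal sketch of how a host backend
might enumerate candidate mem-to-mem codec nodes. V4L2 reports the
capability bits but not how many concurrent instances the hardware can
sustain, so that policy still has to be configured in userspace; the
device paths and the probing loop are illustrative assumptions:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        char path[32];

        for (int i = 0; i < 64; i++) {
            snprintf(path, sizeof(path), "/dev/video%d", i);
            int fd = open(path, O_RDWR | O_NONBLOCK);
            if (fd < 0)
                continue;

            struct v4l2_capability cap;
            memset(&cap, 0, sizeof(cap));
            if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
                unsigned int caps = (cap.capabilities & V4L2_CAP_DEVICE_CAPS)
                                        ? cap.device_caps : cap.capabilities;
                /* Mem-to-mem nodes are the decoder/encoder candidates. */
                if (caps & (V4L2_CAP_VIDEO_M2M | V4L2_CAP_VIDEO_M2M_MPLANE))
                    printf("%s: m2m codec candidate (%s)\n",
                           path, (const char *)cap.card);
            }
            close(fd);
        }
        return 0;
    }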


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-20 16:56                             ` Nicolas Dufresne
  0 siblings, 0 replies; 56+ messages in thread
From: Nicolas Dufresne @ 2020-05-20 16:56 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Samiullah Khawaja, Saket Sinha, Alex Lau, Kiran Pawar,
	Alexandre Courbot, virtio-dev, qemu-devel, Tomasz Figa,
	Keiichi Watanabe, Gerd Hoffmann, Hans Verkuil, Dmitry Sepp,
	Emil Velikov, Pawel Osciak, Linux Media Mailing List

On Wednesday, 20 May 2020 at 12:27 -0400, Michael S. Tsirkin wrote:
> On Wed, May 20, 2020 at 12:21:05PM -0400, Nicolas Dufresne wrote:
> > Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > > Hi Nicolas,
> > > > > 
> > > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > > nicolas@ndufresne.ca
> > > > > > wrote:
> > > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > > Hi,
> > > > > > > 
> > > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > > implement paravirtualized video codec devices.
> > > > > > > 
> > > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > > dmitry.sepp@opensynergy.com
> > > > > > > wrote:
> > > > > > > > Hi Saket,
> > > > > > > > 
> > > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > > Hi Keiichi,
> > > > > > > > > 
> > > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > > of the virtio-video device only for testing,
> > > > > > > > 
> > > > > > > > That was my understanding as well.
> > > > > > > 
> > > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > > Linux.
> > > > > > > The difference between vicodec and actual device drivers is that
> > > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > > space which provides the same interface with actual video drivers.
> > > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > > > 
> > > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > > > 
> > > > > > > [1]
> > > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > > > 
> > > > > > > 
> > > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > > > 
> > > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > > > 
> > > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > > patch though.
> > > > > > > > 
> > > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > > with selective video streams from actual HW video devices.
> > > > > > > > 
> > > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > > enough
> > > > > > > > to provide abstraction on top of several HW devices.
> > > > > > > > 
> > > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > > paravirtualized HW camera devices .
> > > > > > > 
> > > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > > multiple platforms.
> > > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > > using this API by myself...
> > > > > > 
> > > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > > firmware blobs though).
> > > > > > 
> > > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > > > 
> > > > > 
> > > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > > in QEMU.
> > > > > This is the same with what we implemented in crosvm, a VMM on
> > > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > > > 
> > > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > > APIs in my previous link weren't for this purpose.
> > > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > > virtio-video protocols can be mapped to them.
> > > > 
> > > > The FFMpeg API for libavcodec can be found here:
> > > > 
> > > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > > > 
> > > > GStreamer does not really have such a low level CODEC API. So while
> > > > it's possible to use it (Wine project uses it for it's parsers as an
> > > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > > chosen as it's LGPL code does not carry directly any patented
> > > > implementation. It instead rely on plugins, which maybe provided as
> > > > third party, allowing to distribute your project while giving uses the
> > > > option to install potentially non-free technologies.
> > > > 
> > > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > > less) as:
> > > > 
> > > >   - Push GstCaps describing the stream format
> > > >   - Push bitstream buffer on sink pad
> > > >   - When ready, buffers will be pushed through the push function
> > > >     callback on src pad
> > > > 
> > > > Of course nothing prevent adding something like the vda abstraction in
> > > > qemu and make this multi-backend capable.
> > > 
> > > My understanding is that we don't need a particularly low-level API to
> > > interact with. The host virtual device is receiving the whole encoded
> > > data, and can thus easily reconstruct the original stream (minus the
> > > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > > high-level here.
> > > 
> > > Now the choice of API will also determine whether we want to allow
> > > emulation of codec devices, or whether we stay on a purely
> > > para-virtual track. If we use e.g. gstreamer, then the host can
> > > provide a virtual device that is backed by a purely software
> > > implementation. This can be useful for testing purposes, but for
> > > real-life usage the guest would be just as well using gstreamer
> > > itself.
> > 
> > Agreed.
> > 
> > > If we want to make sure that there is hardware on the host side, then
> > > an API like libva might make more sense, but it would be more
> > > complicated and may not support all hardware (I don't know if the V4L2
> > > backends are usable for instance).
> > 
> > To bring VAAPI into Qemu directly you'd have to introduce bitstream
> > parser, DPB management and other CODEC specific bits. I cannot speak
> > for the project, but that's re-inventing the wheel again with very
> > little gain. Best is to open the discussion with them early.
> > 
> > Note that it's relatively simple in both framework to only choose HW
> > accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> > with HWContext, so your wrapper need to know specific HWContext for the
> > specific accelerator. In GStreamer, since 1.16, we add a metadata that
> > let the user know which decoder is hardware accelerated. (This is
> > usually used to disable HW acceleration at the moment).
> 
> I don't know too much about the options here, unfortunately. But I
> wonder about security implications of all these approaches.
> 
> We have this issue with other cases such as libusb where the
> library we are using is not expecting hostile input so does
> not validate it fully.
> This is often the case for pass-through approaches.
> Do all the options here expect untrusted input?

Both projects care about this as much as the ChromeOS backend does.
FFMPEG is notably the main backend in Firefox, and GStreamer is used
in many embedded applications. We haven't started a complete rewrite
in Rust (yet) though.

Bitstream parsers (which are strictly required for handling VAAPI and
V4L2 stateless CODECs through virtio-video) will always have potential
security issues: they deal with user-supplied bitstreams and a very
large number of parameters. A Rust rewrite only protects you from
takeover through buffer overflows; it does not mean your code won't
still have a few crashers triggered by a hostile bitstream. The
logical thing to do if this gets integrated into QEMU would be to
sandbox that part. If you already virtualize your GPU, you likely have
larger issues, as on many GPUs malicious shaders can freeze a few GPU
cores for multiple seconds (or forever if you have older GPU drivers
or a GPU without preemption/reset support).

Writing a backend from scratch just for QEMU will likely see little or
no maintenance, as it would be very niche within the project. Relying
strictly on the ChromeOS backend would mean a world without HEVC and
without interlaced content, but in my view that is still better than
redoing it. It is unclear, though, whether Google will maintain a
stable API there, something that GStreamer and FFMPEG seem to do well
now. It was also mentioned in this discussion that it was not really
an option, but I haven't yet captured why.

There are plenty of approaches that could be taken, of course. One
could completely abstract that backend and use PipeWire to stream the
buffers between a sandboxed CODEC manager service and your QEMU
instance (the codec handling could even run in a PipeWire real-time
node to guarantee the lowest latency). Or you could go with a custom,
more targeted design. I think that is all open to whoever implements
it and to their requirements. It also depends on the direction the
QEMU project wants to take for resource management (or whether that is
delegated somehow, I don't know). For CODECs, how resources are made
available can vary quite a bit.

Some V4L2 stateful drivers offer only 1 or 2 instances, which cannot
be multiplexed. The highest resolution and frame rate might also only
be achievable for a single stream. Most VAAPI / V4L2 stateless drivers
can be multiplexed without bound, but will no longer operate in real
time if you have too many streams. So from a QEMU point of view, I
think the backend should expose a few constraints, which in a
real-life deployment will end up having to be configured manually. All
of this needs userspace. Basically, where I want to get to is that the
kernel will never fully offer this service.



^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
  2020-05-20 16:21                         ` Nicolas Dufresne
  (?)
@ 2020-05-21  7:08                           ` Alexandre Courbot
  -1 siblings, 0 replies; 56+ messages in thread
From: Alexandre Courbot @ 2020-05-21  7:08 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Keiichi Watanabe, Dmitry Sepp, Saket Sinha, Kiran Pawar,
	Samiullah Khawaja, qemu-devel, virtio-dev, Gerd Hoffmann,
	Michael S. Tsirkin, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Thu, May 21, 2020 at 1:21 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > Hi Nicolas,
> > > >
> > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > nicolas@ndufresne.ca
> > > > > wrote:
> > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > Hi,
> > > > > >
> > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > implement paravirtualized video codec devices.
> > > > > >
> > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > wrote:
> > > > > > > Hi Saket,
> > > > > > >
> > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > Hi Keiichi,
> > > > > > > >
> > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > of the virtio-video device only for testing,
> > > > > > >
> > > > > > > That was my understanding as well.
> > > > > >
> > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > Linux.
> > > > > > The difference between vicodec and actual device drivers is that
> > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > space which provides the same interface with actual video drivers.
> > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > >
> > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > >
> > > > > > [1]
> > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > >
> > > > > >
> > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > >
> > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > >
> > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > patch though.
> > > > > > >
> > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > with selective video streams from actual HW video devices.
> > > > > > >
> > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > enough
> > > > > > > to provide abstraction on top of several HW devices.
> > > > > > >
> > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > paravirtualized HW camera devices .
> > > > > >
> > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > multiple platforms.
> > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > using this API by myself...
> > > > >
> > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > firmware blobs though).
> > > > >
> > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > >
> > > >
> > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > in QEMU.
> > > > This is the same with what we implemented in crosvm, a VMM on
> > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > >
> > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > APIs in my previous link weren't for this purpose.
> > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > virtio-video protocols can be mapped to them.
> > >
> > > The FFMpeg API for libavcodec can be found here:
> > >
> > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > >
> > > GStreamer does not really have such a low level CODEC API. So while
> > > it's possible to use it (Wine project uses it for it's parsers as an
> > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > chosen as it's LGPL code does not carry directly any patented
> > > implementation. It instead rely on plugins, which maybe provided as
> > > third party, allowing to distribute your project while giving uses the
> > > option to install potentially non-free technologies.
> > >
> > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > less) as:
> > >
> > >   - Push GstCaps describing the stream format
> > >   - Push bitstream buffer on sink pad
> > >   - When ready, buffers will be pushed through the push function
> > >     callback on src pad
> > >
> > > Of course nothing prevent adding something like the vda abstraction in
> > > qemu and make this multi-backend capable.
> >
> > My understanding is that we don't need a particularly low-level API to
> > interact with. The host virtual device is receiving the whole encoded
> > data, and can thus easily reconstruct the original stream (minus the
> > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > high-level here.
> >
> > Now the choice of API will also determine whether we want to allow
> > emulation of codec devices, or whether we stay on a purely
> > para-virtual track. If we use e.g. gstreamer, then the host can
> > provide a virtual device that is backed by a purely software
> > implementation. This can be useful for testing purposes, but for
> > real-life usage the guest would be just as well using gstreamer
> > itself.
>
> Agreed.
>
> >
> > If we want to make sure that there is hardware on the host side, then
> > an API like libva might make more sense, but it would be more
> > complicated and may not support all hardware (I don't know if the V4L2
> > backends are usable for instance).
>
> To bring VAAPI into Qemu directly you'd have to introduce bitstream
> parser, DPB management and other CODEC specific bits. I cannot speak
> for the project, but that's re-inventing the wheel again with very
> little gain. Best is to open the discussion with them early.
>
> Note that it's relatively simple in both framework to only choose HW
> accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> with HWContext, so your wrapper need to know specific HWContext for the
> specific accelerator. In GStreamer, since 1.16, we add a metadata that
> let the user know which decoder is hardware accelerated. (This is
> usually used to disable HW acceleration at the moment).

Good point, and that would also not close the door to exposing a
software-backed device for testing purposes.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-21  7:08                           ` Alexandre Courbot
  0 siblings, 0 replies; 56+ messages in thread
From: Alexandre Courbot @ 2020-05-21  7:08 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Samiullah Khawaja, Saket Sinha, Alex Lau, Kiran Pawar,
	virtio-dev, Michael S. Tsirkin, qemu-devel, Tomasz Figa,
	Keiichi Watanabe, Gerd Hoffmann, Hans Verkuil, Dmitry Sepp,
	Emil Velikov, Pawel Osciak, Linux Media Mailing List

On Thu, May 21, 2020 at 1:21 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > Hi Nicolas,
> > > >
> > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > nicolas@ndufresne.ca
> > > > > wrote:
> > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > Hi,
> > > > > >
> > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > implement paravirtualized video codec devices.
> > > > > >
> > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > wrote:
> > > > > > > Hi Saket,
> > > > > > >
> > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > Hi Keiichi,
> > > > > > > >
> > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > of the virtio-video device only for testing,
> > > > > > >
> > > > > > > That was my understanding as well.
> > > > > >
> > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > Linux.
> > > > > > The difference between vicodec and actual device drivers is that
> > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > space which provides the same interface with actual video drivers.
> > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > >
> > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > >
> > > > > > [1]
> > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > >
> > > > > >
> > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > >
> > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > >
> > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > patch though.
> > > > > > >
> > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > with selective video streams from actual HW video devices.
> > > > > > >
> > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > enough
> > > > > > > to provide abstraction on top of several HW devices.
> > > > > > >
> > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > paravirtualized HW camera devices .
> > > > > >
> > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > multiple platforms.
> > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > using this API by myself...
> > > > >
> > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > firmware blobs though).
> > > > >
> > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > >
> > > >
> > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > in QEMU.
> > > > This is the same with what we implemented in crosvm, a VMM on
> > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > >
> > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > APIs in my previous link weren't for this purpose.
> > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > virtio-video protocols can be mapped to them.
> > >
> > > The FFMpeg API for libavcodec can be found here:
> > >
> > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > >
> > > GStreamer does not really have such a low level CODEC API. So while
> > > it's possible to use it (Wine project uses it for it's parsers as an
> > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > chosen as it's LGPL code does not carry directly any patented
> > > implementation. It instead rely on plugins, which maybe provided as
> > > third party, allowing to distribute your project while giving uses the
> > > option to install potentially non-free technologies.
> > >
> > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > less) as:
> > >
> > >   - Push GstCaps describing the stream format
> > >   - Push bitstream buffer on sink pad
> > >   - When ready, buffers will be pushed through the push function
> > >     callback on src pad
> > >
> > > Of course nothing prevent adding something like the vda abstraction in
> > > qemu and make this multi-backend capable.
> >
> > My understanding is that we don't need a particularly low-level API to
> > interact with. The host virtual device is receiving the whole encoded
> > data, and can thus easily reconstruct the original stream (minus the
> > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > high-level here.
> >
> > Now the choice of API will also determine whether we want to allow
> > emulation of codec devices, or whether we stay on a purely
> > para-virtual track. If we use e.g. gstreamer, then the host can
> > provide a virtual device that is backed by a purely software
> > implementation. This can be useful for testing purposes, but for
> > real-life usage the guest would be just as well using gstreamer
> > itself.
>
> Agreed.
>
> >
> > If we want to make sure that there is hardware on the host side, then
> > an API like libva might make more sense, but it would be more
> > complicated and may not support all hardware (I don't know if the V4L2
> > backends are usable for instance).
>
> To bring VAAPI into Qemu directly you'd have to introduce bitstream
> parser, DPB management and other CODEC specific bits. I cannot speak
> for the project, but that's re-inventing the wheel again with very
> little gain. Best is to open the discussion with them early.
>
> Note that it's relatively simple in both framework to only choose HW
> accelerated CODECs. In ffmpeg, HW accelerator codecs can only be used
> with HWContext, so your wrapper need to know specific HWContext for the
> specific accelerator. In GStreamer, since 1.16, we add a metadata that
> let the user know which decoder is hardware accelerated. (This is
> usually used to disable HW acceleration at the moment).

Good point, and that would also not close the door to exposing a
software-backed device for testing purposes.


^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
@ 2020-05-21  7:08                           ` Alexandre Courbot
  0 siblings, 0 replies; 56+ messages in thread
From: Alexandre Courbot @ 2020-05-21  7:08 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Keiichi Watanabe, Dmitry Sepp, Saket Sinha, Kiran Pawar,
	Samiullah Khawaja, qemu-devel, virtio-dev, Gerd Hoffmann,
	Michael S. Tsirkin, Hans Verkuil, Tomasz Figa,
	Linux Media Mailing List, Alex Lau, Pawel Osciak, Emil Velikov

On Thu, May 21, 2020 at 1:21 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mercredi 20 mai 2020 à 12:19 +0900, Alexandre Courbot a écrit :
> > On Wed, May 20, 2020 at 2:29 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mardi 19 mai 2020 à 17:37 +0900, Keiichi Watanabe a écrit :
> > > > Hi Nicolas,
> > > >
> > > > On Fri, May 15, 2020 at 8:38 AM Nicolas Dufresne <
> > > > nicolas@ndufresne.ca
> > > > > wrote:
> > > > > Le lundi 11 mai 2020 à 20:49 +0900, Keiichi Watanabe a écrit :
> > > > > > Hi,
> > > > > >
> > > > > > Thanks Saket for your feedback. As Dmitry mentioned, we're focusing on
> > > > > > video encoding and decoding, not camera. So, my reply was about how to
> > > > > > implement paravirtualized video codec devices.
> > > > > >
> > > > > > On Mon, May 11, 2020 at 8:25 PM Dmitry Sepp <
> > > > > > dmitry.sepp@opensynergy.com
> > > > > > wrote:
> > > > > > > Hi Saket,
> > > > > > >
> > > > > > > On Montag, 11. Mai 2020 13:05:53 CEST Saket Sinha wrote:
> > > > > > > > Hi Keiichi,
> > > > > > > >
> > > > > > > > I do not support the approach of  QEMU implementation forwarding
> > > > > > > > requests to the host's vicodec module since  this can limit the scope
> > > > > > > > of the virtio-video device only for testing,
> > > > > > >
> > > > > > > That was my understanding as well.
> > > > > >
> > > > > > Not really because the API which the vicodec provides is V4L2 stateful
> > > > > > decoder interface [1], which are also used by other video drivers on
> > > > > > Linux.
> > > > > > The difference between vicodec and actual device drivers is that
> > > > > > vicodec performs decoding in the kernel space without using special
> > > > > > video devices. In other words, vicodec is a software decoder in kernel
> > > > > > space which provides the same interface with actual video drivers.
> > > > > > Thus, if the QEMU implementation can forward virtio-video requests to
> > > > > > vicodec, it can forward them to the actual V4L2 video decoder devices
> > > > > > as well and VM gets access to a paravirtualized video device.
> > > > > >
> > > > > > The reason why we discussed vicodec in the previous thread was it'll
> > > > > > allow us to test the virtio-video driver without hardware requirement.
> > > > > >
> > > > > > [1]
> > > > > > https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
> > > > > >
> > > > > >
> > > > > > > > which instead can be used with multiple use cases such as -
> > > > > > > >
> > > > > > > > 1. VM gets access to paravirtualized  camera devices which shares the
> > > > > > > > video frames input through actual HW camera attached to Host.
> > > > > > >
> > > > > > > This use-case is out of the scope of virtio-video. Initially I had a plan to
> > > > > > > support capture-only streams like camera as well, but later the decision was
> > > > > > > made upstream that camera should be implemented as separate device type. We
> > > > > > > still plan to implement a simple frame capture capability as a downstream
> > > > > > > patch though.
> > > > > > >
> > > > > > > > 2. If Host has multiple video devices (especially in ARM SOCs over
> > > > > > > > MIPI interfaces or USB), different VM can be started or hotplugged
> > > > > > > > with selective video streams from actual HW video devices.
> > > > > > >
> > > > > > > We do support this in our device implementation. But spec in general has no
> > > > > > > requirements or instructions regarding this. And it is in fact flexible
> > > > > > > enough
> > > > > > > to provide abstraction on top of several HW devices.
> > > > > > >
> > > > > > > > Also instead of using libraries like Gstreamer in Host userspace, they
> > > > > > > > can also be used inside the VM userspace after getting access to
> > > > > > > > paravirtualized HW camera devices .
> > > > > >
> > > > > > Regarding Gstreamer, I intended this video decoding API [2]. If QEMU
> > > > > > can translate virtio-video requests to this API, we can easily support
> > > > > > multiple platforms.
> > > > > > I'm not sure how feasible it is though, as I have no experience of
> > > > > > using this API by myself...
> > > > >
> > > > > Not sure which API you aim exactly, but what one need to remember is that
> > > > > mapping virtio-video CODEC on top of VAAPI, V4L2 Stateless, NVDEC or other type
> > > > > of "stateless" CODEC is not trivial and can't be done without userspace. Notably
> > > > > because we don't want to do bitstream parsing in the kernel on the main CPU as
> > > > > security would otherwise be very hard to guaranty. The other driver using same
> > > > > API as virtio-video do bitstream parsing on a dedicated co-processor (through
> > > > > firmware blobs though).
> > > > >
> > > > > Having bridges between virtio-video, qemu and some abstraction library like
> > > > > FFMPEG or GStreamer is certainly the best solution if you want to virtualize any
> > > > > type of HW accelerated decoder or if you need to virtualized something
> > > > > proprietary (like NVDEC). Please shout if you need help.
> > > > >
> > > >
> > > > Yeah, I meant we should map virtio-video commands to a set of
> > > > abstracted userspace APIs to avoid having many platform-dependent code
> > > > in QEMU.
> > > > This is the same with what we implemented in crosvm, a VMM on
> > > > ChromiumOS. Crosvm's video device translates virtio-video commands
> > > > into our own video decoding APIs [1, 2] which supports VAAPI, V4L2
> > > > stateful and V4L2 stateless. Unfortunately, since our library is
> > > > highly depending on Chrome, we cannot reuse this for QEMU.
> > > >
> > > > So, I agree that using FFMPEG or GStreamer is a good idea. Probably,
> > > > APIs in my previous link weren't for this purpose.
> > > > Nicolas, do you know any good references for FFMPEG or GStreamer's
> > > > abstracted video decoding APIs? Then, I may be able to think about how
> > > > virtio-video protocols can be mapped to them.
> > >
> > > The FFMpeg API for libavcodec can be found here:
> > >
> > >   http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavcodec/avcodec.h
> > >
> > > GStreamer does not really have such a low level CODEC API. So while
> > > it's possible to use it (Wine project uses it for it's parsers as an
> > > example, and Firefox use to have CODEC support wrapping GStreamer
> > > CODEC), there will not be any one-to-one mapping. GStreamer is often
> > > chosen as it's LGPL code does not carry directly any patented
> > > implementation. It instead rely on plugins, which maybe provided as
> > > third party, allowing to distribute your project while giving uses the
> > > option to install potentially non-free technologies.
> > >
> > > But overall, I can describe GStreamer API for CODEC wrapping (pipeline
> > > less) as:
> > >
> > >   - Push GstCaps describing the stream format
> > >   - Push bitstream buffer on sink pad
> > >   - When ready, buffers will be pushed through the push function
> > >     callback on src pad
> > >
> > > Of course nothing prevent adding something like the vda abstraction in
> > > qemu and make this multi-backend capable.
> >
> > My understanding is that we don't need a particularly low-level API to
> > interact with. The host virtual device is receiving the whole encoded
> > data, and can thus easily reconstruct the original stream (minus the
> > container) and pass it to ffmpeg/gstreamer. So we can be pretty
> > high-level here.
> >
> > Now the choice of API will also determine whether we want to allow
> > emulation of codec devices, or whether we stay on a purely
> > para-virtual track. If we use e.g. gstreamer, then the host can
> > provide a virtual device that is backed by a purely software
> > implementation. This can be useful for testing purposes, but for
> > real-life usage the guest would be just as well using gstreamer
> > itself.
>
> Agreed.
>
> >
> > If we want to make sure that there is hardware on the host side, then
> > an API like libva might make more sense, but it would be more
> > complicated and may not support all hardware (I don't know if the V4L2
> > backends are usable for instance).
>
> To bring VAAPI into Qemu directly you'd have to introduce bitstream
> parser, DPB management and other CODEC specific bits. I cannot speak
> for the project, but that's re-inventing the wheel again with very
> little gain. Best is to open the discussion with them early.
>
> Note that it's relatively simple in both frameworks to choose only
> HW-accelerated CODECs. In ffmpeg, HW-accelerated codecs can only be used
> with an HWContext, so your wrapper needs to know the specific HWContext
> for the specific accelerator. In GStreamer, since 1.16, we add metadata
> that lets the user know which decoders are hardware accelerated. (At the
> moment this is usually used to disable HW acceleration.)
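>
> Roughly, the two checks could look like this (a sketch: treating any
> non-NONE device type as "accelerated" on the ffmpeg side is a
> simplification, and the GStreamer side just matches the "Hardware"
> classifier added to element metadata in 1.16):
>
>   #include <stdbool.h>
>   #include <string.h>
>   #include <libavcodec/avcodec.h>
>   #include <libavutil/hwcontext.h>
>   #include <gst/gst.h>
>
>   /* ffmpeg: a decoder can be HW accelerated if it exposes at least one
>    * AVCodecHWConfig (VAAPI, VDPAU, ...). */
>   static bool av_decoder_has_hw(const AVCodec *codec)
>   {
>       for (int i = 0; ; i++) {
>           const AVCodecHWConfig *cfg = avcodec_get_hw_config(codec, i);
>
>           if (!cfg)
>               return false;
>           if (cfg->device_type != AV_HWDEVICE_TYPE_NONE)
>               return true;
>       }
>   }
>
>   /* GStreamer >= 1.16: hardware decoders advertise "Hardware" in their
>    * klass metadata. */
>   static bool gst_factory_is_hw_decoder(GstElementFactory *factory)
>   {
>       const gchar *klass = gst_element_factory_get_metadata(factory,
>                                GST_ELEMENT_METADATA_KLASS);
>
>       return klass && strstr(klass, "Decoder") && strstr(klass, "Hardware");
>   }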

Good point, and that would also not close the door to exposing a
software-backed device for testing purposes.
