linux-media.vger.kernel.org archive mirror
* Single- vs Multi-planar APIs for mem2mem devices
@ 2019-07-11  9:21 wens Tsai
  2019-07-11  9:38 ` Paul Kocialkowski
  0 siblings, 1 reply; 7+ messages in thread
From: wens Tsai @ 2019-07-11  9:21 UTC (permalink / raw)
  To: Linux Media Mailing List
  Cc: Hans Verkuil, Mauro Carvalho Chehab, Paul Kocialkowski, Maxime Ripard

Hi,

I noticed that recent codec driver additions, such as hantro, meson/vdec,
mtk, used the multi-planar API (mplane) instead of the single-planar API.

Is there a preference for moving towards mplane?

Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
devices to work with single-planar API calls, but there isn't one for the
other direction. Would that be worthwhile to work on? To me it seems that
having one central adapter is better than having applications try to use
both APIs, though some, such as FFmpeg, already do so.

My interest in this is trying to get Chromium's v4l2 video decode
accelerator to work on the Raspberry Pi 3 with their downstream kernel,
which has the bcm2835-v4l2 driver. It would also benefit other platforms
that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
not standard Linux, but still it might be interesting to get it to work.

There's also the v4l2 slice video decode accelerator, which uses the
request API and has the same issue, but let's not get ahead of ourselves.

Regards
ChenYu


* Re: Single- vs Multi-planar APIs for mem2mem devices
  2019-07-11  9:21 Single- vs Multi-planar APIs for mem2mem devices wens Tsai
@ 2019-07-11  9:38 ` Paul Kocialkowski
  2019-07-11 11:51   ` wens Tsai
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Kocialkowski @ 2019-07-11  9:38 UTC (permalink / raw)
  To: wens Tsai
  Cc: Linux Media Mailing List, Hans Verkuil, Mauro Carvalho Chehab,
	Maxime Ripard

Hi,

On Thu 11 Jul 19, 17:21, wens Tsai wrote:
> I noticed that recent codec driver additions, such as hantro, meson/vdec,
> mtk, used the multi-planar API (mplane) instead of the single-planar API.
> 
> Is there a preference for moving towards mplane?

Well, there is a strict technical requirement for using the single-planar API
instead of the multi-planar one in cedrus.

Historically, we started with mplane because we can definitely pass
different addresses for luma and chroma to our decoder. However, we found
out that there are some internal limitations of the decoder block and the
addresses cannot be "too far apart". This means that when we allocated luma
at the beginning of our pool and chroma at the end, the gap was too big
and the resulting decoded frame was not stored at the right address.

We found out about this purely by accident, when one particular sequence of
events led to such a gap and exposed the issue. One possible explanation
would be that the decoder internally derives the chroma address as an offset
from the luma address, stored on a limited number of bits.

So unfortunately, this means we're stuck with having our multi-planar
(in the YUV sense) buffers allocated in a single chunk (the single-planar
V4L2 API). I don't have any better answer than "userspace should cope with
both APIs, as it reflects a hardware constraint", which is indeed what
FFmpeg is already doing.

Cheers,

Paul

> Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
> devices to work with single-planar API calls, but there isn't one for the
> other direction. Would that be worthwhile to work on? To me it seems that
> having one central adapter is better than having applications try to use
> both APIs, though some, such as FFmpeg, already do so.
> 
> My interest in this is trying to get Chromium's v4l2 video decode
> accelerator to work on the Raspberry Pi 3 with their downstream kernel,
> which has the bcm2835-v4l2 driver. It would also benefit other platforms
> that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
> Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
> not standard Linux, but still it might be interesting to get it to work.
> 
> There's also the v4l2 slice video decode accelerator, which uses the
> request API and has the same issue, but let's not get ahead of ourselves.
> 
> Regards
> ChenYu

-- 
Paul Kocialkowski, Bootlin
Embedded Linux and kernel engineering
https://bootlin.com


* Re: Single- vs Multi-planar APIs for mem2mem devices
  2019-07-11  9:38 ` Paul Kocialkowski
@ 2019-07-11 11:51   ` wens Tsai
  2019-07-11 12:19     ` Paul Kocialkowski
  0 siblings, 1 reply; 7+ messages in thread
From: wens Tsai @ 2019-07-11 11:51 UTC (permalink / raw)
  To: Paul Kocialkowski
  Cc: Linux Media Mailing List, Hans Verkuil, Mauro Carvalho Chehab,
	Maxime Ripard

On Thu, Jul 11, 2019 at 5:39 PM Paul Kocialkowski
<paul.kocialkowski@bootlin.com> wrote:
>
> Hi,
>
> On Thu 11 Jul 19, 17:21, wens Tsai wrote:
> > I noticed that recent codec driver additions, such as hantro, meson/vdec,
> > mtk, used the multi-planar API (mplane) instead of the single-planar API.
> >
> > Is there a preference for moving towards mplane?
>
> Well, there is a strict technical requirement for using the single-planar API
> instead of the multi-planar one in cedrus.
>
> Historically, we started with mplane because we can definitely pass
> different addresses for luma and chroma to our decoder. However, we found
> out that there are some internal limitations of the decoder block and the
> addresses cannot be "too far apart". This means that when we allocated luma
> at the beginning of our pool and chroma at the end, the gap was too big
> and the resulting decoded frame was not stored at the right address.
>
> We found out about this purely by accident, when one particular sequence of
> events led to such a gap and exposed the issue. One possible explanation
> would be that the decoder internally derives the chroma address as an offset
> from the luma address, stored on a limited number of bits.
>
> So unfortunately, this means we're stuck with having our multi-planar
> (in the YUV sense) buffers allocated in a single chunk (the single-planar
> V4L2 API). I don't have any better answer than "userspace should cope with
> both APIs, as it reflects a hardware constraint", which is indeed what
> FFmpeg is already doing.

If I understand the API correctly, using the multi-planar API doesn't mean
you have to actually support multi-planar formats such as NVxyM and YUVxyzM.
AFAICT these are the only two families that have non-contiguous planes.

So for all the other existing formats, the buffers will still be contiguous,
which means cedrus can actually support the multi-planar API, right?

ChenYu

> Cheers,
>
> Paul
>
> > Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
> > devices to work with single-planar API calls, but there isn't one for the
> > other direction. Would that be worthwhile to work on? To me it seems that
> > having one central adapter is better than having applications try to use
> > both APIs, though some, such as FFmpeg, already do so.
> >
> > My interest in this is trying to get Chromium's v4l2 video decode
> > accelerator to work on the Raspberry Pi 3 with their downstream kernel,
> > which has the bcm2835-v4l2 driver. It would also benefit other platforms
> > that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
> > Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
> > not standard Linux, but still it might be interesting to get it to work.
> >
> > There's also the v4l2 slice video decode accelerator, which uses the
> > request API and has the same issue, but let's not get ahead of ourselves.
> >
> > Regards
> > ChenYu
>
> --
> Paul Kocialkowski, Bootlin
> Embedded Linux and kernel engineering
> https://bootlin.com


* Re: Single- vs Multi-planar APIs for mem2mem devices
  2019-07-11 11:51   ` wens Tsai
@ 2019-07-11 12:19     ` Paul Kocialkowski
  2019-07-12  4:26       ` wens Tsai
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Kocialkowski @ 2019-07-11 12:19 UTC (permalink / raw)
  To: wens Tsai
  Cc: Linux Media Mailing List, Hans Verkuil, Mauro Carvalho Chehab,
	Maxime Ripard

On Thu 11 Jul 19, 19:51, wens Tsai wrote:
> On Thu, Jul 11, 2019 at 5:39 PM Paul Kocialkowski
> <paul.kocialkowski@bootlin.com> wrote:
> >
> > Hi,
> >
> > On Thu 11 Jul 19, 17:21, wens Tsai wrote:
> > > I noticed that recent codec driver additions, such as hantro, meson/vdec,
> > > mtk, used the multi-planar API (mplane) instead of the single-planar API.
> > >
> > > Is there a preference for moving towards mplane?
> >
> > Well, there is a strict technical requirement for using the single-planar API
> > instead of the multi-planar one in cedrus.
> >
> > Historically, we started with mplane because we can definitely pass
> > different addresses for luma and chroma to our decoder. However, we found
> > out that there are some internal limitations of the decoder block and the
> > addresses cannot be "too far apart". This means that when we allocated luma
> > at the beginning of our pool and chroma at the end, the gap was too big
> > and the resulting decoded frame was not stored at the right address.
> >
> > We found out about this purely by accident, when one particular sequence of
> > events led to such a gap and exposed the issue. One possible explanation
> > would be that the decoder internally derives the chroma address as an offset
> > from the luma address, stored on a limited number of bits.
> >
> > So unfortunately, this means we're stuck with having our multi-planar
> > (in the YUV sense) buffers allocated in a single chunk (the single-planar
> > V4L2 API). I don't have any better answer than "userspace should cope with
> > both APIs, as it reflects a hardware constraint", which is indeed what
> > FFmpeg is already doing.
> 
> If I understand the API correctly, using the multi-planar API doesn't mean
> you have to actually support multi-planar formats such as NVxyM and YUVxyzM.
> AFAICT these are the only two families that have non-contiguous planes.

That is indeed quite right, and maybe switching away from mplane was
unnecessary. In practice we can only use single-planar (non-M) formats,
but we could totally support the mplane semantics for these formats as well.

What I was thinking probably boiled down to the idea that without mplane
support, userspace wouldn't be able to allocate multiple planes and then
try to use them with a single-planar format. I think this is what I had
implemented before, and it "worked" despite being totally wrong with
regard to the API. It looks like the core doesn't check that single-planar
formats are only given one plane by userspace and, as long as the driver
goes along with it, it can still operate like this. It could be nice to
have this kind of checking around.

So in the end, it looks like we can safely bring back mplane support :)

> So for all the other existing formats, the buffers will still be contiguous,
> which means cedrus can actually support the multi-planar API, right?

Thinking twice about it, I think you're right yes.

Cheers,

Paul
 
> ChenYu
> 
> > Cheers,
> >
> > Paul
> >
> > > Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
> > > devices to work with single-planar API calls, but there isn't one for the
> > > other direction. Would that be worthwhile to work on? To me it seems that
> > > having one central adapter is better than having applications try to use
> > > both APIs, though some, such as FFmpeg, already do so.
> > >
> > > My interest in this is trying to get Chromium's v4l2 video decode
> > > accelerator to work on the Raspberry Pi 3 with their downstream kernel,
> > > which has the bcm2835-v4l2 driver. It would also benefit other platforms
> > > that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
> > > Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
> > > not standard Linux, but still it might be interesting to get it to work.
> > >
> > > There's also the v4l2 slice video decode accelerator, which uses the
> > > request API and has the same issue, but let's not get ahead of ourselves.
> > >
> > > Regards
> > > ChenYu
> >
> > --
> > Paul Kocialkowski, Bootlin
> > Embedded Linux and kernel engineering
> > https://bootlin.com

-- 
Paul Kocialkowski, Bootlin
Embedded Linux and kernel engineering
https://bootlin.com


* Re: Single- vs Multi-planar APIs for mem2mem devices
  2019-07-11 12:19     ` Paul Kocialkowski
@ 2019-07-12  4:26       ` wens Tsai
  2019-07-12  9:37         ` Paul Kocialkowski
  0 siblings, 1 reply; 7+ messages in thread
From: wens Tsai @ 2019-07-12  4:26 UTC (permalink / raw)
  To: Paul Kocialkowski
  Cc: Linux Media Mailing List, Hans Verkuil, Mauro Carvalho Chehab,
	Maxime Ripard

On Thu, Jul 11, 2019 at 8:19 PM Paul Kocialkowski
<paul.kocialkowski@bootlin.com> wrote:
>
> On Thu 11 Jul 19, 19:51, wens Tsai wrote:
> > On Thu, Jul 11, 2019 at 5:39 PM Paul Kocialkowski
> > <paul.kocialkowski@bootlin.com> wrote:
> > >
> > > Hi,
> > >
> > > On Thu 11 Jul 19, 17:21, wens Tsai wrote:
> > > > I noticed that recent codec driver additions, such as hantro, meson/vdec,
> > > > mtk, used the multi-planar API (mplane) instead of the single-planar API.
> > > >
> > > > Is there a preference for moving towards mplane?
> > >
> > > Well, there is a strict technical requirement for using the single-planar API
> > > instead of the multi-planar one in cedrus.
> > >
> > > Historically, we started with mplane because we can definitely pass
> > > different addresses for luma and chroma to our decoder. However, we found
> > > out that there are some internal limitations of the decoder block and the
> > > addresses cannot be "too far apart". This means that when we allocated luma
> > > at the beginning of our pool and chroma at the end, the gap was too big
> > > and the resulting decoded frame was not stored at the right address.
> > >
> > > We found out about this purely by accident, when one particular sequence of
> > > events led to such a gap and exposed the issue. One possible explanation
> > > would be that the decoder internally derives the chroma address as an offset
> > > from the luma address, stored on a limited number of bits.
> > >
> > > So unfortunately, this means we're stuck with having our multi-planar
> > > (in the YUV sense) buffers allocated in a single chunk (the single-planar
> > > V4L2 API). I don't have any better answer than "userspace should cope with
> > > both APIs, as it reflects a hardware constraint", which is indeed what
> > > FFmpeg is already doing.
> >
> > If I understand the API correctly, using the multi-planar API doesn't mean
> > you have to actually support multi-planar formats such as NVxyM and YUVxyzM.
> > AFAICT these are the only two families that have non-contiguous planes.
>
> That is indeed quite right, and maybe switching away from mplane was
> unnecessary. In practice we can only use single-planar (non-M) formats,
> but we could totally support the mplane semantics for these formats as well.
>
> What I was thinking probably boiled down to the idea that without mplane
> support, userspace wouldn't be able to allocate multiple planes and then
> try to use them with a single-planar format. I think this is what I had
> implemented before, and it "worked" despite being totally wrong with
> regard to the API. It looks like the core doesn't check that single-planar
> formats are only given one plane by userspace and, as long as the driver
> goes along with it, it can still operate like this. It could be nice to
> have this kind of checking around.
>
> So in the end, it looks like we can safely bring back mplane support :)

This brings me to another question: is there anything (policy or technical)
preventing us from supporting both APIs? I haven't seen any drivers do this,
and with the compatibility plugin in libv4l2, I suppose it isn't necessary.

ChenYu

> > So for all the other existing formats, the buffers will still be contiguous,
> > which means cedrus can actually support the multi-planar API, right?
>
> Thinking twice about it, I think you're right yes.
>
> Cheers,
>
> Paul
>
> > ChenYu
> >
> > > Cheers,
> > >
> > > Paul
> > >
> > > > Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
> > > > devices to work with single-planar API calls, but there isn't one for the
> > > > other direction. Would that be worthwhile to work on? To me it seems that
> > > > having one central adapter is better than having applications try to use
> > > > both APIs, though some, such as FFmpeg, already do so.
> > > >
> > > > My interest in this is trying to get Chromium's v4l2 video decode
> > > > accelerator to work on the Raspberry Pi 3 with their downstream kernel,
> > > > which has the bcm2835-v4l2 driver. It would also benefit other platforms
> > > > that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
> > > > Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
> > > > not standard Linux, but still it might be interesting to get it to work.
> > > >
> > > > There's also the v4l2 slice video decode accelerator, which uses the
> > > > request API and has the same issue, but let's not get ahead of ourselves.
> > > >
> > > > Regards
> > > > ChenYu
> > >
> > > --
> > > Paul Kocialkowski, Bootlin
> > > Embedded Linux and kernel engineering
> > > https://bootlin.com
>
> --
> Paul Kocialkowski, Bootlin
> Embedded Linux and kernel engineering
> https://bootlin.com


* Re: Single- vs Multi-planar APIs for mem2mem devices
  2019-07-12  4:26       ` wens Tsai
@ 2019-07-12  9:37         ` Paul Kocialkowski
  2019-07-12 10:04           ` wens Tsai
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Kocialkowski @ 2019-07-12  9:37 UTC (permalink / raw)
  To: wens Tsai
  Cc: Linux Media Mailing List, Hans Verkuil, Mauro Carvalho Chehab,
	Maxime Ripard

Hi,

On Fri 12 Jul 19, 12:26, wens Tsai wrote:
> On Thu, Jul 11, 2019 at 8:19 PM Paul Kocialkowski
> <paul.kocialkowski@bootlin.com> wrote:
> >
> > On Thu 11 Jul 19, 19:51, wens Tsai wrote:
> > > On Thu, Jul 11, 2019 at 5:39 PM Paul Kocialkowski
> > > <paul.kocialkowski@bootlin.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > On Thu 11 Jul 19, 17:21, wens Tsai wrote:
> > > > > I noticed that recent codec driver additions, such as hantro, meson/vdec,
> > > > > mtk, used the multi-planar API (mplane) instead of the single-planar API.
> > > > >
> > > > > Is there a preference for moving towards mplane?
> > > >
> > > > Well, there is a strict technical requirement for using the single-planar API
> > > > instead of the multi-planar one in cedrus.
> > > >
> > > > Historically, we started with mplane because we can definitely pass
> > > > different addresses for luma and chroma to our decoder. However, we found
> > > > out that there are some internal limitations of the decoder block and the
> > > > addresses cannot be "too far apart". This means that when we allocated luma
> > > > at the beginning of our pool and chroma at the end, the gap was too big
> > > > and the resulting decoded frame was not stored at the right address.
> > > >
> > > > We found out about this purely by accident, when one particular sequence of
> > > > events led to such a gap and exposed the issue. One possible explanation
> > > > would be that the decoder internally derives the chroma address as an offset
> > > > from the luma address, stored on a limited number of bits.
> > > >
> > > > So unfortunately, this means we're stuck with having our multi-planar
> > > > (in the YUV sense) buffers allocated in a single chunk (the single-planar
> > > > V4L2 API). I don't have any better answer than "userspace should cope with
> > > > both APIs, as it reflects a hardware constraint", which is indeed what
> > > > FFmpeg is already doing.
> > >
> > > If I understand the API correctly, using the multi-planar API doesn't mean
> > > you have to actually support multi-planar formats such as NVxyM and YUVxyzM.
> > > AFAICT these are the only two families that have non-contiguous planes.
> >
> > That is indeed quite right, and maybe switching away from mplane was
> > unnecessary. In practice we can only use single-planar (non-M) formats,
> > but we could totally support the mplane semantics for these formats as well.
> >
> > What I was thinking probably boiled down to the idea that without mplane
> > support, userspace wouldn't be able to allocate multiple planes and then
> > try to use them with a single-planar format. I think this is what I had
> > implemented before, and it "worked" despite being totally wrong with
> > regard to the API. It looks like the core doesn't check that single-planar
> > formats are only given one plane by userspace and, as long as the driver
> > goes along with it, it can still operate like this. It could be nice to
> > have this kind of checking around.
> >
> > So in the end, it looks like we can safely bring back mplane support :)
> 
> This brings me to another question: is there anything (policy or technical)
> preventing us from supporting both APIs? I haven't seen any drivers do this,
> and with the compatibility plugin in libv4l2, I suppose it isn't necessary.

So according to Hans, supporting both makes drivers noticeably more complex
to write, so we might want to fully switch over to mplane for the time being.

I'm not sure there are common cases where that would really be beneficial, and
as you said, it's probably not necessary.

Cheers,

Paul

> ChenYu
> 
> > > So for all the other existing formats, the buffers will still be contiguous,
> > > which means cedrus can actually support the multi-planar API, right?
> >
> > Thinking twice about it, I think you're right yes.
> >
> > Cheers,
> >
> > Paul
> >
> > > ChenYu
> > >
> > > > Cheers,
> > > >
> > > > Paul
> > > >
> > > > > Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
> > > > > devices to work with single-planar API calls, but there isn't one for the
> > > > > other direction. Would that be worthwhile to work on? To me it seems that
> > > > > having one central adapter is better than having applications try to use
> > > > > both APIs, though some, such as FFmpeg, already do so.
> > > > >
> > > > > My interest in this is trying to get Chromium's v4l2 video decode
> > > > > accelerator to work on the Raspberry Pi 3 with their downstream kernel,
> > > > > which has the bcm2835-v4l2 driver. It would also benefit other platforms
> > > > > that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
> > > > > Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
> > > > > not standard Linux, but still it might be interesting to get it to work.
> > > > >
> > > > > There's also the v4l2 slice video decode accelerator, which uses the
> > > > > request API and has the same issue, but let's not get ahead of ourselves.
> > > > >
> > > > > Regards
> > > > > ChenYu
> > > >
> > > > --
> > > > Paul Kocialkowski, Bootlin
> > > > Embedded Linux and kernel engineering
> > > > https://bootlin.com
> >
> > --
> > Paul Kocialkowski, Bootlin
> > Embedded Linux and kernel engineering
> > https://bootlin.com

-- 
Paul Kocialkowski, Bootlin
Embedded Linux and kernel engineering
https://bootlin.com


* Re: Single- vs Multi-planar APIs for mem2mem devices
  2019-07-12  9:37         ` Paul Kocialkowski
@ 2019-07-12 10:04           ` wens Tsai
  0 siblings, 0 replies; 7+ messages in thread
From: wens Tsai @ 2019-07-12 10:04 UTC (permalink / raw)
  To: Paul Kocialkowski
  Cc: Linux Media Mailing List, Hans Verkuil, Mauro Carvalho Chehab,
	Maxime Ripard

On Fri, Jul 12, 2019 at 5:37 PM Paul Kocialkowski
<paul.kocialkowski@bootlin.com> wrote:
>
> Hi,
>
> On Fri 12 Jul 19, 12:26, wens Tsai wrote:
> > On Thu, Jul 11, 2019 at 8:19 PM Paul Kocialkowski
> > <paul.kocialkowski@bootlin.com> wrote:
> > >
> > > On Thu 11 Jul 19, 19:51, wens Tsai wrote:
> > > > On Thu, Jul 11, 2019 at 5:39 PM Paul Kocialkowski
> > > > <paul.kocialkowski@bootlin.com> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > On Thu 11 Jul 19, 17:21, wens Tsai wrote:
> > > > > > I noticed that recent codec driver additions, such as hantro, meson/vdec,
> > > > > > mtk, used the multi-planar API (mplane) instead of the single-planar API.
> > > > > >
> > > > > > Is there a preference for moving towards mplane?
> > > > >
> > > > > Well, there is a strict technical requirement for using the single-planar API
> > > > > instead of the multi-planar one in cedrus.
> > > > >
> > > > > Historically, we started with mplane because we can definitely pass
> > > > > different addresses for luma and chroma to our decoder. However, we found
> > > > > out that there are some internal limitations of the decoder block and the
> > > > > addresses cannot be "too far apart". This means that when we allocated luma
> > > > > at the beginning of our pool and chroma at the end, the gap was too big
> > > > > and the resulting decoded frame was not stored at the right address.
> > > > >
> > > > > We found out about this purely by accident, when one particular sequence of
> > > > > events led to such a gap and exposed the issue. One possible explanation
> > > > > would be that the decoder internally derives the chroma address as an offset
> > > > > from the luma address, stored on a limited number of bits.
> > > > >
> > > > > So unfortunately, this means we're stuck with having our multi-planar
> > > > > (in the YUV sense) buffers allocated in a single chunk (the single-planar
> > > > > V4L2 API). I don't have any better answer than "userspace should cope with
> > > > > both APIs, as it reflects a hardware constraint", which is indeed what
> > > > > FFmpeg is already doing.
> > > >
> > > > If I understand the API correctly, using the multi-planar API doesn't mean
> > > > you have to actually support multi-planar formats such as NVxyM and YUVxyzM.
> > > > AFAICT these are the only two families that have non-contiguous planes.
> > >
> > > That is indeed quite right, and maybe switching away from mplane was
> > > unnecessary. In practice we can only use single-planar (non-M) formats,
> > > but we could totally support the mplane semantics for these formats as well.
> > >
> > > What I was thinking probably boiled down to the idea that without mplane
> > > support, userspace wouldn't be able to allocate multiple planes and then
> > > try to use them with a single-planar format. I think this is what I had
> > > implemented before, and it "worked" despite being totally wrong with
> > > regard to the API. It looks like the core doesn't check that single-planar
> > > formats are only given one plane by userspace and, as long as the driver
> > > goes along with it, it can still operate like this. It could be nice to
> > > have this kind of checking around.
> > >
> > > So in the end, it looks like we can safely bring back mplane support :)
> >
> > This brings me to another question: is there anything (policy or technical)
> > preventing us from supporting both APIs? I haven't seen any drivers do this,
> > and with the compatibility plugin in libv4l2, I suppose it isn't necessary.
>
> So according to Hans, supporting both makes drivers noticeably more complex
> to write, so we might want to fully switch over to mplane for the time being.

For cedrus it doesn't seem particularly hard, given that the hardware only
does single-planar-style accesses. The complexity is only in translating
between "struct v4l2_pix_format" and "struct v4l2_pix_format_mplane".

> I'm not sure there are common cases where that would really be beneficial, and
> as you said, it's probably not necessary.

I started adding some code to cedrus to cover both. When I'm done I'll remove
the old single-planar bits. That way I'll have some idea of what needs to be
done for other drivers, such as bcm2835_codec, which was my original goal.
We can also see whether it's actually complicated or not.

ChenYu

> Paul
>
> > ChenYu
> >
> > > > So for all the other existing formats, the buffers will still be contiguous,
> > > > which means cedrus can actually support the multi-planar API, right?
> > >
> > > Thinking twice about it, I think you're right yes.
> > >
> > > Cheers,
> > >
> > > Paul
> > >
> > > > ChenYu
> > > >
> > > > > Cheers,
> > > > >
> > > > > Paul
> > > > >
> > > > > > Also, I noticed that v4l-utils has a plugin that seamlessly adapts mplane
> > > > > > devices to work with single-planar API calls, but there isn't one for the
> > > > > > other direction. Would that be worthwhile to work on? To me it seems that
> > > > > > having one central adapter is better than having applications try to use
> > > > > > both APIs, though some, such as FFmpeg, already do so.
> > > > > >
> > > > > > My interest in this is trying to get Chromium's v4l2 video decode
> > > > > > accelerator to work on the Raspberry Pi 3 with their downstream kernel,
> > > > > > which has the bcm2835-v4l2 driver. It would also benefit other platforms
> > > > > > that have a stateful VPU, such as Amlogic (meson) or Mediatek SoCs.
> > > > > > Chromium's code uses MPLANE exclusively, and it's geared towards ChromeOS,
> > > > > > not standard Linux, but still it might be interesting to get it to work.
> > > > > >
> > > > > > There's also the v4l2 slice video decode accelerator, which uses the
> > > > > > request API and has the same issue, but let's not get ahead of ourselves.
> > > > > >
> > > > > > Regards
> > > > > > ChenYu
> > > > >
> > > > > --
> > > > > Paul Kocialkowski, Bootlin
> > > > > Embedded Linux and kernel engineering
> > > > > https://bootlin.com
> > >
> > > --
> > > Paul Kocialkowski, Bootlin
> > > Embedded Linux and kernel engineering
> > > https://bootlin.com
>
> --
> Paul Kocialkowski, Bootlin
> Embedded Linux and kernel engineering
> https://bootlin.com

