* metadata node
@ 2017-01-11  9:42 Guennadi Liakhovetski
  2017-01-30 17:26 ` Stanimir Varbanov
  0 siblings, 1 reply; 6+ messages in thread
From: Guennadi Liakhovetski @ 2017-01-11  9:42 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: Linux Media Mailing List, sakari.ailus, Hans Verkuil

Hi Laurent,

As you know, I'm working on a project that involves streaming metadata,
obtained from UVC payload headers, to userspace. Luckily, you created
your "metadata node" patches a while ago. The core patch has also been
acked by Hans, so I decided it would be a safe enough bet to base my work
on top of it.

Your patch makes creating /dev/video* metadata device nodes possible, but
it doesn't provide any means to associate metadata nodes with their
respective video (image) nodes. Another important aspect of using
per-frame metadata is synchronisation between metadata and image buffers.
The user is supposed to use buffer sequence numbers for that. That should
be possible, but might be difficult if buffers lose synchronisation at
some point. As a solution to the latter problem the use of requests with
buffers for both nodes has been proposed, which should be possible once
the request API is available.
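To illustrate the pairing by sequence numbers described above, here is a
minimal sketch; it is a toy model, not driver code, and the helper name
`match_meta_index()` and the sequence arrays are purely hypothetical:

```c
/* Toy sketch of pairing an image buffer with a metadata buffer via the
 * v4l2_buffer.sequence field. The helper name and the simulated
 * sequence arrays are hypothetical, for illustration only. */
#include <stddef.h>

/* Find the metadata buffer whose sequence number equals the image
 * buffer's sequence number; returns its index, or -1 if no metadata
 * buffer carries that sequence. */
static int match_meta_index(const unsigned int *meta_seq, size_t n,
                            unsigned int img_seq)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (meta_seq[i] == img_seq)
            return (int)i;

    return -1; /* drift: a frame was dropped on one node only */
}
```

If either node drops a frame independently, the two counters drift apart
and the lookup fails, which is exactly the loss-of-synchronisation
problem that the proposed use of requests is meant to address.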

An alternative approach to metadata support would be possible, e.g.
heterogeneous multi-plane buffers with one plane carrying image data and
another plane carrying metadata. It could also have other uses. E.g. we
have come across cameras streaming buffers that contain multiple images
(e.g. A and B). Both images may have supported fourcc formats, but those
cannot be re-used, because A+B are now transferred in a single buffer.
Instead a new fourcc code has to be invented for such cases to describe
A+B.
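The heterogeneous multi-plane idea above could look roughly like the
following sketch. Everything here is illustrative: the two-plane layout,
the invented "A+B" fourcc, and the struct names are made up, and the
fourcc macro is reproduced locally rather than taken from videodev2.h:

```c
/* Sketch of a heterogeneous two-plane buffer layout: plane 0 carries
 * the image, plane 1 carries per-frame metadata. The fourcc for the
 * hypothetical combined A+B image is invented for illustration. */
#include <stdint.h>

/* Local copy of the usual V4L2-style fourcc construction. */
#define FOURCC(a, b, c, d) \
    ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

struct plane_desc {
    uint32_t sizeimage; /* bytes in this plane */
};

struct hetero_format {
    uint32_t pixelformat; /* fourcc describing the buffer as a whole */
    uint32_t num_planes;
    struct plane_desc plane[2];
};

/* Fill a two-plane format: an invented "A+B" image fourcc plus a
 * metadata plane sized for the per-frame metadata blob. */
static void fill_hetero_format(struct hetero_format *f,
                               uint32_t img_size, uint32_t meta_size)
{
    f->pixelformat = FOURCC('A', 'B', '1', '2'); /* hypothetical code */
    f->num_planes = 2;
    f->plane[0].sizeimage = img_size;
    f->plane[1].sizeimage = meta_size;
}
```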

As an important argument in favour of having a separate video node for
metadata you named cases where metadata has to be obtained before a
complete image is available. Do you remember specifically what those
cases are? Or have you got a link to that discussion?

In any case, _if_ we do keep the current approach of separate /dev/video* 
nodes, we need a way to associate video and metadata nodes. Earlier I 
proposed using media controller links for that. In your implementation of 
the R-Car VSP1 1-D histogram engine metadata node, where did you link it 
in the media controller topology? Could you provide a (snippet of a) 
graph?
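The proposed association via media controller links could be modelled
roughly as below. This is a toy graph model, not the real MC ioctl API
(a real application would enumerate entities and links with
MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS); all entity IDs and
names here are invented:

```c
/* Toy model of finding the metadata node linked to a video node in a
 * media controller graph. The link table and entity IDs are invented;
 * real code would use MEDIA_IOC_ENUM_ENTITIES / MEDIA_IOC_ENUM_LINKS. */
#include <stddef.h>

struct mc_link {
    int source; /* entity id of the link source */
    int sink;   /* entity id of the link sink */
};

/* Return the sink entity linked from the given source, or -1 if the
 * source has no outgoing link in the table. */
static int linked_sink(const struct mc_link *links, size_t n, int source)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (links[i].source == source)
            return links[i].sink;

    return -1;
}
```

With such links in place, an application could walk from the image
node's entity to the metadata node's entity before streaming starts,
i.e. pair the nodes without dequeueing a single buffer.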

Thanks
Guennadi

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: metadata node
  2017-01-11  9:42 metadata node Guennadi Liakhovetski
@ 2017-01-30 17:26 ` Stanimir Varbanov
  2017-02-02 18:35   ` Guennadi Liakhovetski
  0 siblings, 1 reply; 6+ messages in thread
From: Stanimir Varbanov @ 2017-01-30 17:26 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Laurent Pinchart
  Cc: Linux Media Mailing List, sakari.ailus, Hans Verkuil

Hi Guennadi,

On 01/11/2017 11:42 AM, Guennadi Liakhovetski wrote:
> Hi Laurent,
> 
> As you know, I'm working on a project that involves streaming metadata,
> obtained from UVC payload headers, to userspace. Luckily, you created
> your "metadata node" patches a while ago. The core patch has also been
> acked by Hans, so I decided it would be a safe enough bet to base my work
> on top of it.
> 
> Your patch makes creating /dev/video* metadata device nodes possible, but
> it doesn't provide any means to associate metadata nodes with their
> respective video (image) nodes. Another important aspect of using
> per-frame metadata is synchronisation between metadata and image buffers.
> The user is supposed to use buffer sequence numbers for that. That should
> be possible, but might be difficult if buffers lose synchronisation at
> some point. As a solution to the latter problem the use of requests with
> buffers for both nodes has been proposed, which should be possible once
> the request API is available.
> 
> An alternative approach to metadata support would be possible, e.g.
> heterogeneous multi-plane buffers with one plane carrying image data and
> another plane carrying metadata. It could also have other uses. E.g. we
> have come across cameras streaming buffers that contain multiple images
> (e.g. A and B). Both images may have supported fourcc formats, but those
> cannot be re-used, because A+B are now transferred in a single buffer.
> Instead a new fourcc code has to be invented for such cases to describe
> A+B.
> 
> As an important argument in favour of having a separate video node for
> metadata you named cases where metadata has to be obtained before a
> complete image is available. Do you remember specifically what those
> cases are? Or have you got a link to that discussion?
> 
> In any case, _if_ we do keep the current approach of separate /dev/video* 
> nodes, we need a way to associate video and metadata nodes. Earlier I 
> proposed using media controller links for that. In your implementation of 

I don't think that media controller links are a good idea. This metadata
API could be used by mem2mem drivers, which don't have media controller
links, so we will need a generic V4L2 way to bind an image buffer to its
metadata buffer.


-- 
regards,
Stan


* Re: metadata node
  2017-01-30 17:26 ` Stanimir Varbanov
@ 2017-02-02 18:35   ` Guennadi Liakhovetski
  2017-02-10 12:09     ` Stanimir Varbanov
  0 siblings, 1 reply; 6+ messages in thread
From: Guennadi Liakhovetski @ 2017-02-02 18:35 UTC (permalink / raw)
  To: Stanimir Varbanov
  Cc: Laurent Pinchart, Linux Media Mailing List, sakari.ailus, Hans Verkuil

Hi Stanimir,

On Mon, 30 Jan 2017, Stanimir Varbanov wrote:

> Hi Guennadi,
> 
> On 01/11/2017 11:42 AM, Guennadi Liakhovetski wrote:

[snip]

> > In any case, _if_ we do keep the current approach of separate /dev/video* 
> > nodes, we need a way to associate video and metadata nodes. Earlier I 
> > proposed using media controller links for that. In your implementation of 
> 
> I don't think that media controller links are a good idea. This metadata
> API could be used by mem2mem drivers, which don't have media controller
> links, so we will need a generic V4L2 way to bind an image buffer to its
> metadata buffer.

Is there anything that prevents mem2mem drivers from using the MC API?
Arguably, if you need metadata, you have crossed the line into being a
complex enough device to deserve MC support?

Thanks
Guennadi

> -- 
> regards,
> Stan
> 


* Re: metadata node
  2017-02-02 18:35   ` Guennadi Liakhovetski
@ 2017-02-10 12:09     ` Stanimir Varbanov
  2017-02-11 20:38       ` Guennadi Liakhovetski
  2017-02-11 22:07       ` Sakari Ailus
  0 siblings, 2 replies; 6+ messages in thread
From: Stanimir Varbanov @ 2017-02-10 12:09 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Stanimir Varbanov
  Cc: Laurent Pinchart, Linux Media Mailing List, sakari.ailus, Hans Verkuil

Hi Guennadi,

On 02/02/2017 08:35 PM, Guennadi Liakhovetski wrote:
> Hi Stanimir,
> 
> On Mon, 30 Jan 2017, Stanimir Varbanov wrote:
> 
>> Hi Guennadi,
>>
>> On 01/11/2017 11:42 AM, Guennadi Liakhovetski wrote:
> 
> [snip]
> 
>>> In any case, _if_ we do keep the current approach of separate /dev/video* 
>>> nodes, we need a way to associate video and metadata nodes. Earlier I 
>>> proposed using media controller links for that. In your implementation of 
>>
>> I don't think that media controller links are a good idea. This metadata
>> API could be used by mem2mem drivers, which don't have media controller
>> links, so we will need a generic V4L2 way to bind an image buffer to its
>> metadata buffer.
> 
> Is there anything that prevents mem2mem drivers from using the MC API?
> Arguably, if you need metadata, you have crossed the line into being a
> complex enough device to deserve MC support?

Well, I don't want to cross that boundary :), and I don't want to use MC
for such a simple entity with one input and one output. The only reason I
replied to your email was to draw your attention to the drivers which
aren't MC based.

On the other hand, I think that the sequence field in struct
vb2_v4l2_buffer should be sufficient to bind an image buffer to its
metadata buffer.

-- 
regards,
Stan


* Re: metadata node
  2017-02-10 12:09     ` Stanimir Varbanov
@ 2017-02-11 20:38       ` Guennadi Liakhovetski
  2017-02-11 22:07       ` Sakari Ailus
  1 sibling, 0 replies; 6+ messages in thread
From: Guennadi Liakhovetski @ 2017-02-11 20:38 UTC (permalink / raw)
  To: Stanimir Varbanov
  Cc: Laurent Pinchart, Linux Media Mailing List, sakari.ailus, Hans Verkuil

On Fri, 10 Feb 2017, Stanimir Varbanov wrote:

> Hi Guennadi,
> 
> On 02/02/2017 08:35 PM, Guennadi Liakhovetski wrote:
> > Hi Stanimir,
> > 
> > On Mon, 30 Jan 2017, Stanimir Varbanov wrote:
> > 
> >> Hi Guennadi,
> >>
> >> On 01/11/2017 11:42 AM, Guennadi Liakhovetski wrote:
> > 
> > [snip]
> > 
> >>> In any case, _if_ we do keep the current approach of separate /dev/video* 
> >>> nodes, we need a way to associate video and metadata nodes. Earlier I 
> >>> proposed using media controller links for that. In your implementation of 
> >>
> >> I don't think that media controller links are a good idea. This metadata
> >> API could be used by mem2mem drivers, which don't have media controller
> >> links, so we will need a generic V4L2 way to bind an image buffer to its
> >> metadata buffer.
> > 
> > Is there anything that prevents mem2mem drivers from using the MC API?
> > Arguably, if you need metadata, you have crossed the line into being a
> > complex enough device to deserve MC support?
> 
> Well, I don't want to cross that boundary :), and I don't want to use MC
> for such a simple entity with one input and one output. The only reason I
> replied to your email was to draw your attention to the drivers which
> aren't MC based.

Would be nice to hear others' opinions.

> On the other hand, I think that the sequence field in struct
> vb2_v4l2_buffer should be sufficient to bind an image buffer to its
> metadata buffer.

How can that be helpful? Firstly, you have to be able to find node pairs
without streaming. Secondly, you don't want to open all nodes and try to
dequeue buffers on them just to find out which ones begin to stream and
have equal sequence numbers.

Thanks
Guennadi

> -- 
> regards,
> Stan


* Re: metadata node
  2017-02-10 12:09     ` Stanimir Varbanov
  2017-02-11 20:38       ` Guennadi Liakhovetski
@ 2017-02-11 22:07       ` Sakari Ailus
  1 sibling, 0 replies; 6+ messages in thread
From: Sakari Ailus @ 2017-02-11 22:07 UTC (permalink / raw)
  To: Stanimir Varbanov
  Cc: Guennadi Liakhovetski, Laurent Pinchart,
	Linux Media Mailing List, Hans Verkuil

Hi Stan,

It's been a long time. How are you doing? :-)

On Fri, Feb 10, 2017 at 02:09:42PM +0200, Stanimir Varbanov wrote:
> Hi Guennadi,
> 
> On 02/02/2017 08:35 PM, Guennadi Liakhovetski wrote:
> > Hi Stanimir,
> > 
> > On Mon, 30 Jan 2017, Stanimir Varbanov wrote:
> > 
> >> Hi Guennadi,
> >>
> >> On 01/11/2017 11:42 AM, Guennadi Liakhovetski wrote:
> > 
> > [snip]
> > 
> >>> In any case, _if_ we do keep the current approach of separate /dev/video* 
> >>> nodes, we need a way to associate video and metadata nodes. Earlier I 
> >>> proposed using media controller links for that. In your implementation of 
> >>
> >> I don't think that media controller links are a good idea. This metadata
> >> API could be used by mem2mem drivers, which don't have media controller
> >> links, so we will need a generic V4L2 way to bind an image buffer to its
> >> metadata buffer.
> > 
> > Is there anything that prevents mem2mem drivers from using the MC API?
> > Arguably, if you need metadata, you have crossed the line into being a
> > complex enough device to deserve MC support?
> 
> Well, I don't want to cross that boundary :), and I don't want to use MC
> for such a simple entity with one input and one output. The only reason I
> replied to your email was to draw your attention to the drivers which
> aren't MC based.

Do you have a particular use case in mind?

We'll need to continue to support two cases: existing hardware and use
cases employing the mem2mem interface, and more complex hardware that the
mem2mem interface is not enough to support; the latter requires the Media
controller and the request API.

Suppose we extended the mem2mem interface to encompass further
functionality, for instance metadata support. That functionality would
only be available on mem2mem devices. Devices that would not fit into
that envelope would have to use the MC / request API.

Adding more functionality to mem2mem will thus continue to grow two
semantically incompatible APIs within the V4L2 and MC interfaces. That
forces the driver developer to choose which one to use. They may not be
fully aware of the implications of choosing either option, possibly
leading to the hardware not being fully supportable with the chosen API.
The effect is similar for applications: they need to support two
different APIs.

Still, the existing mem2mem framework provides a well-defined,
easy-to-use interface within the scope of the functionality it supports.
As MC combined with the request API will be a lot more generic, it is
also more demanding for applications to use. Applications are required,
if not to know more about the devices, at least to be ready to use a
number of interfaces to fully enumerate a device's capabilities.

> 
> On the other hand, I think that the sequence field in struct
> vb2_v4l2_buffer should be sufficient to bind an image buffer to its
> metadata buffer.

Indeed.

With the request API, you'll be able to use the request field as well.
And that'll work for non-mem2mem devices, too.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

