From: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
To: Sakari Ailus <sakari.ailus@iki.fi>
Cc: Hans Verkuil <hverkuil@xs4all.nl>,
	Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>,
	linux-media@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
	Hans Verkuil <hans.verkuil@cisco.com>,
	Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Subject: Re: [PATCH/RFC v2 0/4] Meta-data video device type
Date: Mon, 23 May 2016 15:00:01 +0200 (CEST)
Message-ID: <Pine.LNX.4.64.1605231200170.18611@axis700.grange>
In-Reply-To: <20160513095242.GV26360@valkosipuli.retiisi.org.uk>

Hi Sakari,

On Fri, 13 May 2016, Sakari Ailus wrote:

> Hi Hans and Laurent,
> 
> On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from image buffers, in order to
> > > make statistics data available to userspace as soon as they're captured. For
> > > those reasons the early consensus we have reached is to use a video device
> > > node with a buffer queue to pass statistics buffers using the V4L2 API, and
> > > this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch has
> > > been split in two. Patch 1/4 now adds a new metadata buffer type and format
> > > (including their support in videobuf2-v4l2), usable with regular V4L2 video
> > > device nodes, while patch 2/4 adds the new metadata video device type.
> > > Metadata buffer queues are thus usable on both the regular V4L2 device nodes
> > > and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use cases
> > > doesn't differentiate between metadata and image data in hardware at the DMA
> > > engine level. With such hardware (CSI-2 receivers in particular, but other bus
> > > types could also fall into this category) a stream containing both metadata
> > > and image data virtual streams is transmitted over a single physical link. The
> > > receiver demultiplexes, filters and routes the virtual streams to further
> > > hardware blocks, and in many cases, directly to DMA engines that are part of
> > > the receiver. Those DMA engines can capture a single virtual stream to memory,
> > > with as many DMA engines physically present in the device as the number of
> > > virtual streams that can be captured concurrently. All those DMA engines are
> > > usually identical and don't care about the type of data they receive and
> > > capture. For that reason limiting the metadata buffer type to metadata device
> > > nodes would require creating two device nodes for each DMA engine (and
> > > possibly more later if we need to capture other types of data). Not only would
> > > this make the API more complex to use for applications, it wouldn't bring any
> > > added value as the video and metadata device nodes associated with a DMA
> > > engine couldn't be used concurrently anyway, as they both correspond to the
> > > same hardware resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node is
> > > useful and desired, and is implemented in patch 1/4 using a dedicated video
> > > buffer queue. In the CSI-2 case a driver will create two buffer queues
> > > internally for the same DMA engine, and can select which one to use based on
> > > the buffer type passed for instance to the REQBUFS ioctl (details still need
> > > to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue; you just change the type depending
> > on what mode it is in (video or metadata). Similar to raw vs sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue type
> > depending on whether raw or sliced VBI is requested. That's probably where I
> > would do this for video vs meta as well.
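
Just to check that I understand the idea correctly, a minimal sketch of 
such an S_FMT handler could look roughly like the below (purely 
illustrative, assuming the V4L2_BUF_TYPE_META_CAPTURE type proposed in 
patch 1/4; foo_is_meta_fmt() and foo_apply_fmt() are made-up driver 
helpers, and in a real driver the vid_cap and meta_cap S_FMT handlers 
would of course be separate ioctl ops):

    static int foo_s_fmt(struct file *file, void *fh,
                         struct v4l2_format *f)
    {
            struct foo_dev *dev = video_drvdata(file);
            struct vb2_queue *q = &dev->queue;

            /* The queue type may only change while no buffers are allocated. */
            if (vb2_is_busy(q))
                    return -EBUSY;

            /*
             * Switch the single vb2 queue between video and metadata
             * capture, analogous to the raw vs. sliced VBI case.
             */
            q->type = foo_is_meta_fmt(f) ? V4L2_BUF_TYPE_META_CAPTURE :
                                           V4L2_BUF_TYPE_VIDEO_CAPTURE;

            return foo_apply_fmt(dev, f);
    }

Userspace would then call S_FMT with either a video or a metadata 
format, issue REQBUFS with the matching buffer type and stream as 
usual, without ever seeing a second device node.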
> > 
> > There is one big thing missing here: how does userspace know in this case whether
> > it will get metadata or video? Who decides which CSI virtual stream is routed
> 
> My first impression would be to say by formats, so that's actually defined
> by the user. The media bus formats do not have such separation between image
> and metadata formats either.

I'm still not sure whether we actually need different formats for 
metadata. E.g. on CSI-2 I expect metadata on all cameras to use the 
8-bit embedded non-image Data Type. So, what should the CSI-2 bridge 
sink pad be configured with? Some sensor-specific type, or just a 
format telling it what to capture on the CSI-2 bus?
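
Just to illustrate the distinction I have in mind, configuring the 
bridge sink pad would look something like this from userspace (a rough 
sketch only: MEDIA_BUS_FMT_FIXED stands in for whatever code we would 
end up defining, be it sensor-specific or a generic "8-bit embedded 
data" one, and the sizes are arbitrary):

    #include <sys/ioctl.h>
    #include <linux/media-bus-format.h>
    #include <linux/videodev2.h>
    #include <linux/v4l2-subdev.h>

    /* Configure the CSI-2 bridge sink pad for the embedded data stream. */
    static int configure_meta_pad(int subdev_fd, unsigned int pad)
    {
            struct v4l2_subdev_format fmt = {
                    .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                    .pad = pad,
                    .format = {
                            .code = MEDIA_BUS_FMT_FIXED, /* placeholder */
                            .width = 2048,  /* bytes per embedded data line */
                            .height = 2,    /* number of embedded data lines */
                            .field = V4L2_FIELD_NONE,
                    },
            };

            return ioctl(subdev_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
    }

The open question is only what .code should carry: a per-sensor value 
or one generic metadata code.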

> VIDIOC_ENUM_FMT should be amended with media bus code as well so that the
> user can figure out which format corresponds to a given media bus code.

I'm not sure what you mean by this correspondence; could you elaborate on 
it a bit, please?
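
Or do you mean extending struct v4l2_fmtdesc with a media bus code 
field, so that the application could enumerate only the pixel formats 
reachable from the code configured on the connected pad? A purely 
hypothetical sketch of what such an amendment could look like (fd is an 
open video device node, and mbus_code is not an existing field):

    struct v4l2_fmtdesc fmt = {
            .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
            .mbus_code = MEDIA_BUS_FMT_SGRBG10_1X10, /* hypothetical field */
    };

    for (fmt.index = 0; !ioctl(fd, VIDIOC_ENUM_FMT, &fmt); fmt.index++)
            printf("0x%08x can be captured from this media bus code\n",
                   fmt.pixelformat);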

> > to which video node?
> 
> I think that should be considered a separate problem, although it will
> require a solution as well. And it's a much bigger problem than this one.

Yes, we did want to revive the stream routing work, didn't we? ;-)

But let me add one more use-case for consideration: UVC. Some UVC cameras 
include per-frame (meta)data in the private part of the payload header, 
even though I can't find anything in the UVC spec that would suggest this 
is an acceptable approach. A more standards-conformant design seems to be 
to transfer metadata using some Stream Based Payload on a separate USB 
Interface and to synchronise it with the video data using the timing 
information from the UVC packet headers. I imagine each manufacturer would 
use a different GUID for their metadata format. Do we really want to 
create a new FOURCC code for each of them? Or just configure the pads with 
a fixed format and configure routing? But if a camera then decides to 
support several metadata formats on a single Input Terminal, the only way 
to distinguish between them would be by size, and even that fails if they 
all happen to have the same size.

Thanks
Guennadi

Thread overview: 21+ messages
2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
2016-05-12  0:18 ` [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format Laurent Pinchart
2016-05-23 10:09   ` Hans Verkuil
2016-05-24 15:28     ` Sakari Ailus
2016-05-24 15:36       ` Hans Verkuil
2016-05-24 16:26         ` Sakari Ailus
2016-06-22 16:51           ` Laurent Pinchart
2016-06-24 23:15             ` Sakari Ailus
2016-06-22 16:32     ` Laurent Pinchart
2016-05-12  0:18 ` [PATCH/RFC v2 2/4] v4l: Add metadata video device type Laurent Pinchart
2016-05-12 21:22   ` Sakari Ailus
2016-05-12  0:18 ` [PATCH/RFC v2 3/4] v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine Laurent Pinchart
2016-05-12  0:18 ` [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support Laurent Pinchart
2016-06-13 15:33   ` Guennadi Liakhovetski
2016-06-24 14:35     ` Laurent Pinchart
2016-05-13  9:26 ` [PATCH/RFC v2 0/4] Meta-data video device type Hans Verkuil
2016-05-13  9:52   ` Sakari Ailus
2016-05-16  9:20     ` Laurent Pinchart
2016-05-23 13:00     ` Guennadi Liakhovetski [this message]
2016-05-16  9:21   ` Laurent Pinchart
2016-05-17  9:26     ` Sakari Ailus
