* [PATCH/RFC v2 0/4] Meta-data video device type
@ 2016-05-12  0:17 Laurent Pinchart
  2016-05-12  0:18 ` [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format Laurent Pinchart
                   ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-12  0:17 UTC (permalink / raw)
  To: linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

Hello,

This RFC patch series is a second attempt at adding support for passing
statistics data to userspace using a standard API.

The core requirements haven't changed. Statistics data capture requires
zero-copy and decoupling statistics buffers from image buffers, in order to
make statistics data available to userspace as soon as it is captured. For
those reasons the early consensus we have reached is to use a video device
node with a buffer queue to pass statistics buffers using the V4L2 API, and
this new RFC version doesn't challenge that.
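
To make the intended usage model concrete, here is a minimal userspace
capture sketch for the proposed API. Error handling and mapping of the
buffers are omitted, the device path is only an example, and the buffer type
constant comes from patch 1/4, so it isn't in mainline headers yet.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#ifndef V4L2_BUF_TYPE_META_CAPTURE
#define V4L2_BUF_TYPE_META_CAPTURE	13	/* from patch 1/4 */
#endif

int main(void)
{
	struct v4l2_requestbuffers reqbufs = {
		.count = 4,
		.type = V4L2_BUF_TYPE_META_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
	};
	struct v4l2_buffer buf;
	int type = V4L2_BUF_TYPE_META_CAPTURE;
	unsigned int i;
	int fd = open("/dev/video0", O_RDWR);

	/* Allocate buffers on the statistics queue and queue them all. */
	ioctl(fd, VIDIOC_REQBUFS, &reqbufs);

	for (i = 0; i < reqbufs.count; ++i) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_META_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		buf.index = i;
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	ioctl(fd, VIDIOC_STREAMON, &type);

	/* Statistics can be dequeued as soon as the hardware fills a
	 * buffer, independently of any image buffer queue.
	 */
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_META_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_DQBUF, &buf);

	return 0;
}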

The major change compared to the previous version is how the first patch has
been split in two. Patch 1/4 now adds a new metadata buffer type and format
(including their support in videobuf2-v4l2), usable with regular V4L2 video
device nodes, while patch 2/4 adds the new metadata video device type.
Metadata buffer queues are thus usable on both the regular V4L2 device nodes
and the new metadata device nodes.

This change was driven by the fact that an important category of use cases
doesn't differentiate between metadata and image data in hardware at the DMA
engine level. With such hardware (CSI-2 receivers in particular, but other bus
types could also fall into this category), metadata and image data are carried
as virtual streams multiplexed over a single physical link. The
receiver demultiplexes, filters and routes the virtual streams to further
hardware blocks, and in many cases, directly to DMA engines that are part of
the receiver. Those DMA engines can capture a single virtual stream to memory,
with as many DMA engines physically present in the device as the number of
virtual streams that can be captured concurrently. All those DMA engines are
usually identical and don't care about the type of data they receive and
capture. For that reason limiting the metadata buffer type to metadata device
nodes would require creating two device nodes for each DMA engine (and
possibly more later if we need to capture other types of data). Not only
would this make the API more complex for applications to use, it wouldn't
bring any added value either, as the video and metadata device nodes
associated with a DMA engine couldn't be used concurrently anyway: they both
correspond to the same hardware resource.

For this reason the ability to capture metadata on a video device node is
useful and desired, and is implemented in patch 1/4 using a dedicated video
buffer queue. In the CSI-2 case a driver will create two buffer queues
internally for the same DMA engine, and can select which one to use based on
the buffer type passed for instance to the REQBUFS ioctl (details still need
to be discussed here). A device that contains DMA engines dedicated to
metadata would create a single buffer queue and implement metadata capture
only.
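
For illustration, a driver-side implementation of that selection could look
like the sketch below. All xxx_* names are hypothetical, and the exact
mechanism (REQBUFS-based or something else) is precisely what still needs to
be agreed on:

struct xxx_dma {
	struct vb2_queue video_queue;
	struct vb2_queue meta_queue;
	struct vb2_queue *active_queue;
};

static int xxx_reqbufs(struct file *file, void *fh,
		       struct v4l2_requestbuffers *rb)
{
	struct xxx_dma *dma = video_drvdata(file);
	struct vb2_queue *q;

	switch (rb->type) {
	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
		q = &dma->video_queue;
		break;
	case V4L2_BUF_TYPE_META_CAPTURE:
		q = &dma->meta_queue;
		break;
	default:
		return -EINVAL;
	}

	/* Both queues share the same DMA engine, only one of them may own
	 * it at a time.
	 */
	if (dma->active_queue && dma->active_queue != q)
		return -EBUSY;

	dma->active_queue = rb->count ? q : NULL;

	return vb2_reqbufs(q, rb);
}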

Patch 2/4 then adds a dedicated metadata device node type that is limited to
metadata capture. Support for metadata on video device nodes isn't removed
though; both device node types support metadata capture. I have included this
patch as the code existed in the previous version of the series (and was
explicitly requested during review of an earlier version), but I don't really
see what value this would bring compared to just using video device nodes.

As before, patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
statistics format as an example, and the new patch 4/4 adds support for the
histogram engine to the VSP1 driver. The implementation uses a metadata device
node, and switching to a video device node wouldn't require more than applying
the following one-liner patch.

-       histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
+       histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

Besides whether patch 2/4 should be included or not (I would prefer dropping
it) and how to select the active queue on a multi-type video device node
(through the REQBUFS ioctl or through a different means), one point that
remains to be discussed is what information to include in the metadata format.
Patch 1/4 defines the new metadata format as

struct v4l2_meta_format {
	__u32				dataformat;
	__u32				buffersize;
	__u8				reserved[24];
} __attribute__ ((packed));

but at least in the CSI-2 case metadata is, like image data, transmitted in
lines, and the receiver needs to be programmed with the line length and the
number of lines for proper operation. We started discussing this on IRC but
haven't reached a conclusion yet.
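
Purely as a strawman for that discussion, the line geometry could for
instance be carved out of the reserved fields. The two new fields below are
hypothetical and only meant to illustrate the idea:

struct v4l2_meta_format {
	__u32				dataformat;
	__u32				buffersize;
	/* Hypothetical, for discussion only: geometry of line-based
	 * metadata such as CSI-2 embedded data.
	 */
	__u32				bytesperline;
	__u32				lines;
	__u8				reserved[16];
} __attribute__ ((packed));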

Laurent Pinchart (4):
  v4l: Add metadata buffer type and format
  v4l: Add metadata video device type
  v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine
  v4l: vsp1: Add HGO support

 Documentation/DocBook/media/v4l/dev-meta.xml       |  97 ++++
 .../DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml     | 307 +++++++++++++
 Documentation/DocBook/media/v4l/pixfmt.xml         |   9 +
 Documentation/DocBook/media/v4l/v4l2.xml           |   1 +
 drivers/media/platform/Kconfig                     |   1 +
 drivers/media/platform/vsp1/Makefile               |   1 +
 drivers/media/platform/vsp1/vsp1.h                 |   3 +
 drivers/media/platform/vsp1/vsp1_drm.c             |   2 +-
 drivers/media/platform/vsp1/vsp1_drv.c             |  37 +-
 drivers/media/platform/vsp1/vsp1_entity.c          | 131 +++++-
 drivers/media/platform/vsp1/vsp1_entity.h          |   7 +-
 drivers/media/platform/vsp1/vsp1_hgo.c             | 496 +++++++++++++++++++++
 drivers/media/platform/vsp1/vsp1_hgo.h             |  50 +++
 drivers/media/platform/vsp1/vsp1_histo.c           | 307 +++++++++++++
 drivers/media/platform/vsp1/vsp1_histo.h           |  68 +++
 drivers/media/platform/vsp1/vsp1_pipe.c            |  30 +-
 drivers/media/platform/vsp1/vsp1_pipe.h            |   2 +
 drivers/media/platform/vsp1/vsp1_regs.h            |  24 +-
 drivers/media/platform/vsp1/vsp1_video.c           |  22 +-
 drivers/media/v4l2-core/v4l2-compat-ioctl32.c      |  19 +
 drivers/media/v4l2-core/v4l2-dev.c                 |  37 +-
 drivers/media/v4l2-core/v4l2-ioctl.c               |  40 ++
 drivers/media/v4l2-core/videobuf2-v4l2.c           |   3 +
 include/media/v4l2-dev.h                           |   3 +-
 include/media/v4l2-ioctl.h                         |   8 +
 include/uapi/linux/media.h                         |   2 +
 include/uapi/linux/videodev2.h                     |  17 +
 27 files changed, 1677 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
 create mode 100644 Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml
 create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.c
 create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.h
 create mode 100644 drivers/media/platform/vsp1/vsp1_histo.c
 create mode 100644 drivers/media/platform/vsp1/vsp1_histo.h

-- 
Regards,

Laurent Pinchart

* [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
@ 2016-05-12  0:18 ` Laurent Pinchart
  2016-05-23 10:09   ` Hans Verkuil
  2016-05-12  0:18 ` [PATCH/RFC v2 2/4] v4l: Add metadata video device type Laurent Pinchart
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-12  0:18 UTC (permalink / raw)
  To: linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

The metadata buffer type is used to transfer metadata between userspace
and kernelspace through a V4L2 buffer queue. It comes with a new
metadata capture capability and format description.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
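For illustration only (not part of the patch), format negotiation from
userspace would look roughly as follows, borrowing the VSP1 HGO format from
patch 3/4. Neither constant is in mainline headers yet, and the usual
<sys/ioctl.h>, <string.h> and <stdio.h> includes are assumed.

static int negotiate_meta_format(int fd)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_META_CAPTURE;
	fmt.fmt.meta.dataformat = V4L2_META_FMT_VSP1_HGO;

	/* The driver adjusts dataformat if it can't support it and reports
	 * the maximum required buffer size in buffersize.
	 */
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		return -1;

	printf("buffer size: %u bytes\n", fmt.fmt.meta.buffersize);

	return 0;
}
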
 Documentation/DocBook/media/v4l/dev-meta.xml  | 93 +++++++++++++++++++++++++++
 Documentation/DocBook/media/v4l/v4l2.xml      |  1 +
 drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 19 ++++++
 drivers/media/v4l2-core/v4l2-dev.c            | 16 +++--
 drivers/media/v4l2-core/v4l2-ioctl.c          | 34 ++++++++++
 drivers/media/v4l2-core/videobuf2-v4l2.c      |  3 +
 include/media/v4l2-ioctl.h                    |  8 +++
 include/uapi/linux/videodev2.h                | 14 ++++
 8 files changed, 182 insertions(+), 6 deletions(-)
 create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml

diff --git a/Documentation/DocBook/media/v4l/dev-meta.xml b/Documentation/DocBook/media/v4l/dev-meta.xml
new file mode 100644
index 000000000000..9b5b1fba2007
--- /dev/null
+++ b/Documentation/DocBook/media/v4l/dev-meta.xml
@@ -0,0 +1,93 @@
+  <title>Metadata Interface</title>
+
+  <note>
+    <title>Experimental</title>
+    <para>This is an <link linkend="experimental"> experimental </link>
+    interface and may change in the future.</para>
+  </note>
+
+  <para>
+Metadata refers to any non-image data that supplements video frames with
+additional information. This may include statistics computed over the image
+or frame capture parameters supplied by the image source. This interface is
+intended for transfer of metadata to userspace and control of that operation.
+  </para>
+
+  <para>
+The metadata interface is implemented on video capture devices. The device can
+be dedicated to metadata or can implement both video and metadata capture as
+specified in its reported capabilities.
+  </para>
+
+  <section>
+    <title>Querying Capabilities</title>
+
+    <para>
+Devices supporting the metadata interface set the
+<constant>V4L2_CAP_META_CAPTURE</constant> flag in the
+<structfield>capabilities</structfield> field of &v4l2-capability;
+returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device can capture
+metadata to memory.
+    </para>
+    <para>
+At least one of the read/write or streaming I/O methods must be supported.
+    </para>
+  </section>
+
+  <section>
+    <title>Data Format Negotiation</title>
+
+    <para>
+The metadata device uses the <link linkend="format">format</link> ioctls to
+select the capture format. The metadata buffer content format is bound to that
+selectable format. In addition to the basic
+<link linkend="format">format</link> ioctls, the &VIDIOC-ENUM-FMT; ioctl
+must be supported as well.
+    </para>
+
+    <para>
+To use the <link linkend="format">format</link> ioctls applications set the
+<structfield>type</structfield> field of a &v4l2-format; to
+<constant>V4L2_BUF_TYPE_META_CAPTURE</constant> and use the &v4l2-meta-format;
+<structfield>meta</structfield> member of the <structfield>fmt</structfield>
+union as needed per the desired operation.
+Currently there are two fields, <structfield>dataformat</structfield> and
+<structfield>buffersize</structfield>, of struct &v4l2-meta-format; that are
+used. The <structfield>dataformat</structfield> field contains the V4L2 FourCC
+code of the data format. The <structfield>buffersize</structfield> field is the
+maximum buffer size in bytes required for data transfer, set by the driver in
+order to inform applications.
+    </para>
+
+    <table pgwide="1" frame="none" id="v4l2-meta-format">
+      <title>struct <structname>v4l2_meta_format</structname></title>
+      <tgroup cols="3">
+        &cs-str;
+        <tbody valign="top">
+          <row>
+            <entry>__u32</entry>
+            <entry><structfield>dataformat</structfield></entry>
+            <entry>
+The data format, set by the application. This is a little endian
+<link linkend="v4l2-fourcc">four character code</link>.
+V4L2 defines metadata formats in <xref linkend="meta-formats" />.
+           </entry>
+          </row>
+          <row>
+            <entry>__u32</entry>
+            <entry><structfield>buffersize</structfield></entry>
+            <entry>
+Maximum size in bytes required for data. Value is set by the driver.
+           </entry>
+          </row>
+          <row>
+            <entry>__u8</entry>
+            <entry><structfield>reserved[24]</structfield></entry>
+            <entry>This array is reserved for future extensions.
+Drivers and applications must set it to zero.</entry>
+          </row>
+        </tbody>
+      </tgroup>
+    </table>
+
+  </section>
diff --git a/Documentation/DocBook/media/v4l/v4l2.xml b/Documentation/DocBook/media/v4l/v4l2.xml
index 42e626d6c936..5c83b5d342dd 100644
--- a/Documentation/DocBook/media/v4l/v4l2.xml
+++ b/Documentation/DocBook/media/v4l/v4l2.xml
@@ -605,6 +605,7 @@ and discussions on the V4L mailing list.</revremark>
     <section id="radio"> &sub-dev-radio; </section>
     <section id="rds"> &sub-dev-rds; </section>
     <section id="sdr"> &sub-dev-sdr; </section>
+    <section id="meta"> &sub-dev-meta; </section>
     <section id="event"> &sub-dev-event; </section>
     <section id="subdev"> &sub-dev-subdev; </section>
   </chapter>
diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
index bacecbd68a6d..da2d836e8887 100644
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -161,6 +161,20 @@ static inline int put_v4l2_sdr_format(struct v4l2_sdr_format *kp, struct v4l2_sd
 	return 0;
 }
 
+static inline int get_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
+{
+	if (copy_from_user(kp, up, sizeof(struct v4l2_meta_format)))
+		return -EFAULT;
+	return 0;
+}
+
+static inline int put_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
+{
+	if (copy_to_user(up, kp, sizeof(struct v4l2_meta_format)))
+		return -EFAULT;
+	return 0;
+}
+
 struct v4l2_format32 {
 	__u32	type;	/* enum v4l2_buf_type */
 	union {
@@ -170,6 +184,7 @@ struct v4l2_format32 {
 		struct v4l2_vbi_format	vbi;
 		struct v4l2_sliced_vbi_format	sliced;
 		struct v4l2_sdr_format	sdr;
+		struct v4l2_meta_format	meta;
 		__u8	raw_data[200];        /* user-defined */
 	} fmt;
 };
@@ -216,6 +231,8 @@ static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __us
 	case V4L2_BUF_TYPE_SDR_CAPTURE:
 	case V4L2_BUF_TYPE_SDR_OUTPUT:
 		return get_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		return get_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
 	default:
 		pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
 								kp->type);
@@ -263,6 +280,8 @@ static int __put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __us
 	case V4L2_BUF_TYPE_SDR_CAPTURE:
 	case V4L2_BUF_TYPE_SDR_OUTPUT:
 		return put_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		return put_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
 	default:
 		pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
 								kp->type);
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index 70b559d7ca80..74b79e60ac38 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -574,30 +574,34 @@ static void determine_valid_ioctls(struct video_device *vdev)
 		set_bit(_IOC_NR(VIDIOC_ENUM_FREQ_BANDS), valid_ioctls);
 
 	if (is_vid) {
-		/* video specific ioctls */
+		/* video and metadata specific ioctls */
 		if ((is_rx && (ops->vidioc_enum_fmt_vid_cap ||
 			       ops->vidioc_enum_fmt_vid_cap_mplane ||
-			       ops->vidioc_enum_fmt_vid_overlay)) ||
+			       ops->vidioc_enum_fmt_vid_overlay ||
+			       ops->vidioc_enum_fmt_meta_cap)) ||
 		    (is_tx && (ops->vidioc_enum_fmt_vid_out ||
 			       ops->vidioc_enum_fmt_vid_out_mplane)))
 			set_bit(_IOC_NR(VIDIOC_ENUM_FMT), valid_ioctls);
 		if ((is_rx && (ops->vidioc_g_fmt_vid_cap ||
 			       ops->vidioc_g_fmt_vid_cap_mplane ||
-			       ops->vidioc_g_fmt_vid_overlay)) ||
+			       ops->vidioc_g_fmt_vid_overlay ||
+			       ops->vidioc_g_fmt_meta_cap)) ||
 		    (is_tx && (ops->vidioc_g_fmt_vid_out ||
 			       ops->vidioc_g_fmt_vid_out_mplane ||
 			       ops->vidioc_g_fmt_vid_out_overlay)))
 			 set_bit(_IOC_NR(VIDIOC_G_FMT), valid_ioctls);
 		if ((is_rx && (ops->vidioc_s_fmt_vid_cap ||
 			       ops->vidioc_s_fmt_vid_cap_mplane ||
-			       ops->vidioc_s_fmt_vid_overlay)) ||
+			       ops->vidioc_s_fmt_vid_overlay ||
+			       ops->vidioc_s_fmt_meta_cap)) ||
 		    (is_tx && (ops->vidioc_s_fmt_vid_out ||
 			       ops->vidioc_s_fmt_vid_out_mplane ||
 			       ops->vidioc_s_fmt_vid_out_overlay)))
 			 set_bit(_IOC_NR(VIDIOC_S_FMT), valid_ioctls);
 		if ((is_rx && (ops->vidioc_try_fmt_vid_cap ||
 			       ops->vidioc_try_fmt_vid_cap_mplane ||
-			       ops->vidioc_try_fmt_vid_overlay)) ||
+			       ops->vidioc_try_fmt_vid_overlay ||
+			       ops->vidioc_try_fmt_meta_cap)) ||
 		    (is_tx && (ops->vidioc_try_fmt_vid_out ||
 			       ops->vidioc_try_fmt_vid_out_mplane ||
 			       ops->vidioc_try_fmt_vid_out_overlay)))
@@ -663,7 +667,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
 	}
 
 	if (is_vid || is_vbi || is_sdr) {
-		/* ioctls valid for video, vbi or sdr */
+		/* ioctls valid for video, metadata, vbi or sdr */
 		SET_VALID_IOCTL(ops, VIDIOC_REQBUFS, vidioc_reqbufs);
 		SET_VALID_IOCTL(ops, VIDIOC_QUERYBUF, vidioc_querybuf);
 		SET_VALID_IOCTL(ops, VIDIOC_QBUF, vidioc_qbuf);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 28e5be2c2eef..5d003152ff68 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -155,6 +155,7 @@ const char *v4l2_type_names[] = {
 	[V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE] = "vid-out-mplane",
 	[V4L2_BUF_TYPE_SDR_CAPTURE]        = "sdr-cap",
 	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
+	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
 };
 EXPORT_SYMBOL(v4l2_type_names);
 
@@ -249,6 +250,7 @@ static void v4l_print_format(const void *arg, bool write_only)
 	const struct v4l2_sliced_vbi_format *sliced;
 	const struct v4l2_window *win;
 	const struct v4l2_sdr_format *sdr;
+	const struct v4l2_meta_format *meta;
 	unsigned i;
 
 	pr_cont("type=%s", prt_names(p->type, v4l2_type_names));
@@ -336,6 +338,15 @@ static void v4l_print_format(const void *arg, bool write_only)
 			(sdr->pixelformat >> 16) & 0xff,
 			(sdr->pixelformat >> 24) & 0xff);
 		break;
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		meta = &p->fmt.meta;
+		pr_cont(", dataformat=%c%c%c%c, buffersize=%u\n",
+			(meta->dataformat >>  0) & 0xff,
+			(meta->dataformat >>  8) & 0xff,
+			(meta->dataformat >> 16) & 0xff,
+			(meta->dataformat >> 24) & 0xff,
+			meta->buffersize);
+		break;
 	}
 }
 
@@ -981,6 +992,10 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 		if (is_sdr && is_tx && ops->vidioc_g_fmt_sdr_out)
 			return 0;
 		break;
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		if (is_vid && is_rx && ops->vidioc_g_fmt_meta_cap)
+			return 0;
+		break;
 	default:
 		break;
 	}
@@ -1349,6 +1364,11 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		ret = ops->vidioc_enum_fmt_sdr_out(file, fh, arg);
 		break;
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_meta_cap))
+			break;
+		ret = ops->vidioc_enum_fmt_meta_cap(file, fh, arg);
+		break;
 	}
 	if (ret == 0)
 		v4l_fill_fmtdesc(p);
@@ -1447,6 +1467,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
 		if (unlikely(!is_tx || !is_sdr || !ops->vidioc_g_fmt_sdr_out))
 			break;
 		return ops->vidioc_g_fmt_sdr_out(file, fh, arg);
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_meta_cap))
+			break;
+		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1534,6 +1558,11 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		CLEAR_AFTER_FIELD(p, fmt.sdr);
 		return ops->vidioc_s_fmt_sdr_out(file, fh, arg);
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_meta_cap))
+			break;
+		CLEAR_AFTER_FIELD(p, fmt.meta);
+		return ops->vidioc_s_fmt_meta_cap(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1618,6 +1647,11 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		CLEAR_AFTER_FIELD(p, fmt.sdr);
 		return ops->vidioc_try_fmt_sdr_out(file, fh, arg);
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_meta_cap))
+			break;
+		CLEAR_AFTER_FIELD(p, fmt.meta);
+		return ops->vidioc_try_fmt_meta_cap(file, fh, arg);
 	}
 	return -EINVAL;
 }
diff --git a/drivers/media/v4l2-core/videobuf2-v4l2.c b/drivers/media/v4l2-core/videobuf2-v4l2.c
index 7f366f1b0377..e4e90d9a3a65 100644
--- a/drivers/media/v4l2-core/videobuf2-v4l2.c
+++ b/drivers/media/v4l2-core/videobuf2-v4l2.c
@@ -575,6 +575,9 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
 	case V4L2_BUF_TYPE_SDR_OUTPUT:
 		requested_sizes[0] = f->fmt.sdr.buffersize;
 		break;
+	case V4L2_BUF_TYPE_META_CAPTURE:
+		requested_sizes[0] = f->fmt.meta.buffersize;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
index 017ffb2220c7..2dd00c73e892 100644
--- a/include/media/v4l2-ioctl.h
+++ b/include/media/v4l2-ioctl.h
@@ -38,6 +38,8 @@ struct v4l2_ioctl_ops {
 					    struct v4l2_fmtdesc *f);
 	int (*vidioc_enum_fmt_sdr_out)     (struct file *file, void *fh,
 					    struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_meta_cap)    (struct file *file, void *fh,
+					    struct v4l2_fmtdesc *f);
 
 	/* VIDIOC_G_FMT handlers */
 	int (*vidioc_g_fmt_vid_cap)    (struct file *file, void *fh,
@@ -64,6 +66,8 @@ struct v4l2_ioctl_ops {
 					struct v4l2_format *f);
 	int (*vidioc_g_fmt_sdr_out)    (struct file *file, void *fh,
 					struct v4l2_format *f);
+	int (*vidioc_g_fmt_meta_cap)   (struct file *file, void *fh,
+					struct v4l2_format *f);
 
 	/* VIDIOC_S_FMT handlers */
 	int (*vidioc_s_fmt_vid_cap)    (struct file *file, void *fh,
@@ -90,6 +94,8 @@ struct v4l2_ioctl_ops {
 					struct v4l2_format *f);
 	int (*vidioc_s_fmt_sdr_out)    (struct file *file, void *fh,
 					struct v4l2_format *f);
+	int (*vidioc_s_fmt_meta_cap)   (struct file *file, void *fh,
+					struct v4l2_format *f);
 
 	/* VIDIOC_TRY_FMT handlers */
 	int (*vidioc_try_fmt_vid_cap)    (struct file *file, void *fh,
@@ -116,6 +122,8 @@ struct v4l2_ioctl_ops {
 					  struct v4l2_format *f);
 	int (*vidioc_try_fmt_sdr_out)    (struct file *file, void *fh,
 					  struct v4l2_format *f);
+	int (*vidioc_try_fmt_meta_cap)   (struct file *file, void *fh,
+					  struct v4l2_format *f);
 
 	/* Buffer handlers */
 	int (*vidioc_reqbufs) (struct file *file, void *fh, struct v4l2_requestbuffers *b);
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 8f951917be74..5fbd30ca9b1e 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -143,6 +143,7 @@ enum v4l2_buf_type {
 	V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE  = 10,
 	V4L2_BUF_TYPE_SDR_CAPTURE          = 11,
 	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
+	V4L2_BUF_TYPE_META_CAPTURE         = 13,
 	/* Deprecated, do not use */
 	V4L2_BUF_TYPE_PRIVATE              = 0x80,
 };
@@ -435,6 +436,7 @@ struct v4l2_capability {
 #define V4L2_CAP_SDR_CAPTURE		0x00100000  /* Is a SDR capture device */
 #define V4L2_CAP_EXT_PIX_FORMAT		0x00200000  /* Supports the extended pixel format */
 #define V4L2_CAP_SDR_OUTPUT		0x00400000  /* Is a SDR output device */
+#define V4L2_CAP_META_CAPTURE		0x00800000  /* Is a metadata capture device */
 
 #define V4L2_CAP_READWRITE              0x01000000  /* read/write systemcalls */
 #define V4L2_CAP_ASYNCIO                0x02000000  /* async I/O */
@@ -2000,6 +2002,17 @@ struct v4l2_sdr_format {
 } __attribute__ ((packed));
 
 /**
+ * struct v4l2_meta_format - metadata format definition
+ * @dataformat:		little endian four character code (fourcc)
+ * @buffersize:		maximum size in bytes required for data
+ */
+struct v4l2_meta_format {
+	__u32				dataformat;
+	__u32				buffersize;
+	__u8				reserved[24];
+} __attribute__ ((packed));
+
+/**
  * struct v4l2_format - stream data format
  * @type:	enum v4l2_buf_type; type of the data stream
  * @pix:	definition of an image format
@@ -2018,6 +2031,7 @@ struct v4l2_format {
 		struct v4l2_vbi_format		vbi;     /* V4L2_BUF_TYPE_VBI_CAPTURE */
 		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
 		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
+		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
 		__u8	raw_data[200];                   /* user-defined */
 	} fmt;
 };
-- 
2.7.3


* [PATCH/RFC v2 2/4] v4l: Add metadata video device type
  2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
  2016-05-12  0:18 ` [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format Laurent Pinchart
@ 2016-05-12  0:18 ` Laurent Pinchart
  2016-05-12 21:22   ` Sakari Ailus
  2016-05-12  0:18 ` [PATCH/RFC v2 3/4] v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine Laurent Pinchart
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-12  0:18 UTC (permalink / raw)
  To: linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

The metadata video device is used to transfer metadata between
userspace and kernelspace. It supports the metadata buffer type only.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
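As an illustration (not part of the patch), registering a metadata device
node from a driver would look like the sketch below. All xxx_* names are
hypothetical and error handling is omitted.

	/* vdev is a struct video_device embedded in the driver. */
	vdev->fops = &xxx_fops;
	vdev->ioctl_ops = &xxx_ioctl_ops;	/* provides *_fmt_meta_cap */
	vdev->vfl_dir = VFL_DIR_RX;

	/* Creates /dev/v4l-metaN instead of /dev/videoN. */
	ret = video_register_device(vdev, VFL_TYPE_META, -1);
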
 Documentation/DocBook/media/v4l/dev-meta.xml | 10 +++++++---
 drivers/media/v4l2-core/v4l2-dev.c           | 21 ++++++++++++++++++++-
 drivers/media/v4l2-core/v4l2-ioctl.c         | 15 ++++++++++-----
 include/media/v4l2-dev.h                     |  3 ++-
 include/uapi/linux/media.h                   |  2 ++
 5 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/Documentation/DocBook/media/v4l/dev-meta.xml b/Documentation/DocBook/media/v4l/dev-meta.xml
index 9b5b1fba2007..75bd22521af7 100644
--- a/Documentation/DocBook/media/v4l/dev-meta.xml
+++ b/Documentation/DocBook/media/v4l/dev-meta.xml
@@ -14,9 +14,13 @@ intended for transfer of metadata to userspace and control of that operation.
   </para>
 
   <para>
-The metadata interface is implemented on video capture devices. The device can
-be dedicated to metadata or can implement both video and metadata capture as
-specified in its reported capabilities.
+The metadata interface can be implemented on video capture devices, metadata
+devices or both, at the discretion of drivers. Metadata devices are accessed
+through character device special files named
+<filename>/dev/v4l-meta[0-9]+</filename> with major number 81 and dynamically
+allocated minor numbers. Video devices that support metadata capture can be
+dedicated to metadata or can implement both metadata capture and video capture
+and/or output, as specified in the device's reported capabilities.
   </para>
 
   <section>
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index 74b79e60ac38..5a8d7b03ab97 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -527,6 +527,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
 	bool is_vbi = vdev->vfl_type == VFL_TYPE_VBI;
 	bool is_radio = vdev->vfl_type == VFL_TYPE_RADIO;
 	bool is_sdr = vdev->vfl_type == VFL_TYPE_SDR;
+	bool is_meta = vdev->vfl_type == VFL_TYPE_META;
 	bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
 
@@ -664,9 +665,18 @@ static void determine_valid_ioctls(struct video_device *vdev)
 			set_bit(_IOC_NR(VIDIOC_S_FMT), valid_ioctls);
 		if (ops->vidioc_try_fmt_sdr_out)
 			set_bit(_IOC_NR(VIDIOC_TRY_FMT), valid_ioctls);
+	} else if (is_meta && is_rx) {
+		if (ops->vidioc_enum_fmt_meta_cap)
+			set_bit(_IOC_NR(VIDIOC_ENUM_FMT), valid_ioctls);
+		if (ops->vidioc_g_fmt_meta_cap)
+			set_bit(_IOC_NR(VIDIOC_G_FMT), valid_ioctls);
+		if (ops->vidioc_s_fmt_meta_cap)
+			set_bit(_IOC_NR(VIDIOC_S_FMT), valid_ioctls);
+		if (ops->vidioc_try_fmt_meta_cap)
+			set_bit(_IOC_NR(VIDIOC_TRY_FMT), valid_ioctls);
 	}
 
-	if (is_vid || is_vbi || is_sdr) {
+	if (is_vid || is_vbi || is_sdr || is_meta) {
 		/* ioctls valid for video, metadata, vbi or sdr */
 		SET_VALID_IOCTL(ops, VIDIOC_REQBUFS, vidioc_reqbufs);
 		SET_VALID_IOCTL(ops, VIDIOC_QUERYBUF, vidioc_querybuf);
@@ -767,6 +777,10 @@ static int video_register_media_controller(struct video_device *vdev, int type)
 		intf_type = MEDIA_INTF_T_V4L_SUBDEV;
 		/* Entity will be created via v4l2_device_register_subdev() */
 		break;
+	case VFL_TYPE_META:
+		intf_type = MEDIA_INTF_T_V4L_META;
+		vdev->entity.function = MEDIA_ENT_F_IO_META;
+		break;
 	default:
 		return 0;
 	}
@@ -849,6 +863,8 @@ static int video_register_media_controller(struct video_device *vdev, int type)
  *	%VFL_TYPE_SUBDEV - A subdevice
  *
  *	%VFL_TYPE_SDR - Software Defined Radio
+ *
+ *	%VFL_TYPE_META - Metadata
  */
 int __video_register_device(struct video_device *vdev, int type, int nr,
 		int warn_if_nr_in_use, struct module *owner)
@@ -892,6 +908,9 @@ int __video_register_device(struct video_device *vdev, int type, int nr,
 		/* Use device name 'swradio' because 'sdr' was already taken. */
 		name_base = "swradio";
 		break;
+	case VFL_TYPE_META:
+		name_base = "v4l-meta";
+		break;
 	default:
 		printk(KERN_ERR "%s called with unknown type: %d\n",
 		       __func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 5d003152ff68..256938fff9e0 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -935,6 +935,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 	bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
 	bool is_vbi = vfd->vfl_type == VFL_TYPE_VBI;
 	bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+	bool is_meta = vfd->vfl_type == VFL_TYPE_META;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
 
@@ -993,7 +994,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 			return 0;
 		break;
 	case V4L2_BUF_TYPE_META_CAPTURE:
-		if (is_vid && is_rx && ops->vidioc_g_fmt_meta_cap)
+		if ((is_vid || is_meta) && is_rx && ops->vidioc_g_fmt_meta_cap)
 			return 0;
 		break;
 	default:
@@ -1324,6 +1325,7 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
 	struct video_device *vfd = video_devdata(file);
 	bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
 	bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+	bool is_meta = vfd->vfl_type == VFL_TYPE_META;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
 	int ret = -EINVAL;
@@ -1365,7 +1367,7 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
 		ret = ops->vidioc_enum_fmt_sdr_out(file, fh, arg);
 		break;
 	case V4L2_BUF_TYPE_META_CAPTURE:
-		if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_meta_cap))
+		if (unlikely(!is_rx || !(is_vid || is_meta) || !ops->vidioc_enum_fmt_meta_cap))
 			break;
 		ret = ops->vidioc_enum_fmt_meta_cap(file, fh, arg);
 		break;
@@ -1382,6 +1384,7 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
 	struct video_device *vfd = video_devdata(file);
 	bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
 	bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+	bool is_meta = vfd->vfl_type == VFL_TYPE_META;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
 	int ret;
@@ -1468,7 +1471,7 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		return ops->vidioc_g_fmt_sdr_out(file, fh, arg);
 	case V4L2_BUF_TYPE_META_CAPTURE:
-		if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_meta_cap))
+		if (unlikely(!is_rx || !(is_vid || is_meta) || !ops->vidioc_g_fmt_meta_cap))
 			break;
 		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
 	}
@@ -1482,6 +1485,7 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
 	struct video_device *vfd = video_devdata(file);
 	bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
 	bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+	bool is_meta = vfd->vfl_type == VFL_TYPE_META;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
 	int ret;
@@ -1559,7 +1563,7 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
 		CLEAR_AFTER_FIELD(p, fmt.sdr);
 		return ops->vidioc_s_fmt_sdr_out(file, fh, arg);
 	case V4L2_BUF_TYPE_META_CAPTURE:
-		if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_meta_cap))
+		if (unlikely(!is_rx || !(is_vid || is_meta) || !ops->vidioc_s_fmt_meta_cap))
 			break;
 		CLEAR_AFTER_FIELD(p, fmt.meta);
 		return ops->vidioc_s_fmt_meta_cap(file, fh, arg);
@@ -1574,6 +1578,7 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
 	struct video_device *vfd = video_devdata(file);
 	bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
 	bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+	bool is_meta = vfd->vfl_type == VFL_TYPE_META;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
 	int ret;
@@ -1648,7 +1653,7 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
 		CLEAR_AFTER_FIELD(p, fmt.sdr);
 		return ops->vidioc_try_fmt_sdr_out(file, fh, arg);
 	case V4L2_BUF_TYPE_META_CAPTURE:
-		if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_meta_cap))
+		if (unlikely(!is_rx || !(is_vid || is_meta) || !ops->vidioc_try_fmt_meta_cap))
 			break;
 		CLEAR_AFTER_FIELD(p, fmt.meta);
 		return ops->vidioc_try_fmt_meta_cap(file, fh, arg);
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index 25a3190308fb..eeab952d4940 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -25,7 +25,8 @@
 #define VFL_TYPE_RADIO		2
 #define VFL_TYPE_SUBDEV		3
 #define VFL_TYPE_SDR		4
-#define VFL_TYPE_MAX		5
+#define VFL_TYPE_META		5
+#define VFL_TYPE_MAX		6
 
 /* Is this a receiver, transmitter or mem-to-mem? */
 /* Ignored for VFL_TYPE_SUBDEV. */
diff --git a/include/uapi/linux/media.h b/include/uapi/linux/media.h
index df59edee25d1..e226bc35c639 100644
--- a/include/uapi/linux/media.h
+++ b/include/uapi/linux/media.h
@@ -77,6 +77,7 @@ struct media_device_info {
 #define MEDIA_ENT_F_IO_DTV		(MEDIA_ENT_F_BASE + 0x01001)
 #define MEDIA_ENT_F_IO_VBI		(MEDIA_ENT_F_BASE + 0x01002)
 #define MEDIA_ENT_F_IO_SWRADIO		(MEDIA_ENT_F_BASE + 0x01003)
+#define MEDIA_ENT_F_IO_META		(MEDIA_ENT_F_BASE + 0x01004)
 
 /*
  * Analog TV IF-PLL decoders
@@ -297,6 +298,7 @@ struct media_links_enum {
 #define MEDIA_INTF_T_V4L_RADIO  (MEDIA_INTF_T_V4L_BASE + 2)
 #define MEDIA_INTF_T_V4L_SUBDEV (MEDIA_INTF_T_V4L_BASE + 3)
 #define MEDIA_INTF_T_V4L_SWRADIO (MEDIA_INTF_T_V4L_BASE + 4)
+#define MEDIA_INTF_T_V4L_META   (MEDIA_INTF_T_V4L_BASE + 5)
 
 #define MEDIA_INTF_T_ALSA_PCM_CAPTURE   (MEDIA_INTF_T_ALSA_BASE)
 #define MEDIA_INTF_T_ALSA_PCM_PLAYBACK  (MEDIA_INTF_T_ALSA_BASE + 1)
-- 
2.7.3


* [PATCH/RFC v2 3/4] v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine
  2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
  2016-05-12  0:18 ` [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format Laurent Pinchart
  2016-05-12  0:18 ` [PATCH/RFC v2 2/4] v4l: Add metadata video device type Laurent Pinchart
@ 2016-05-12  0:18 ` Laurent Pinchart
  2016-05-12  0:18 ` [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support Laurent Pinchart
  2016-05-13  9:26 ` [PATCH/RFC v2 0/4] Meta-data video device type Hans Verkuil
  4 siblings, 0 replies; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-12  0:18 UTC (permalink / raw)
  To: linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

The format is used on the R-Car VSP1 video queues that carry
1-D histogram statistics data.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@linux.intel.com>
---
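For readers who prefer C to tables, the 64 bins normal mode layout documented
in this patch could be mirrored by the following structure. This is
illustrative only and not part of the patch; all values are little endian.

struct vsp1_hgo_64_normal {
	__le32 minmax[3];	/* max in bits [23:16], min in bits [7:0],
				 * for R/Cr/H, G/Y/S and B/Cb/V */
	__le32 sum[3];		/* per-channel sum of all pixels */
	__le32 bins[3][64];	/* per-channel 64-bin histograms */
} __attribute__ ((packed));	/* 792 bytes in total */
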
 .../DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml     | 307 +++++++++++++++++++++
 Documentation/DocBook/media/v4l/pixfmt.xml         |   9 +
 drivers/media/v4l2-core/v4l2-ioctl.c               |   1 +
 include/uapi/linux/videodev2.h                     |   3 +
 4 files changed, 320 insertions(+)
 create mode 100644 Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml

diff --git a/Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml b/Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml
new file mode 100644
index 000000000000..b40bd10695d2
--- /dev/null
+++ b/Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml
@@ -0,0 +1,307 @@
+<refentry id="V4L2-META-FMT-VSP1-HGO">
+  <refmeta>
+    <refentrytitle>V4L2_META_FMT_VSP1_HGO ('VSPH')</refentrytitle>
+    &manvol;
+  </refmeta>
+  <refnamediv>
+    <refname>
+      <constant>V4L2_META_FMT_VSP1_HGO</constant>
+    </refname>
+    <refpurpose>Renesas R-Car VSP1 1-D Histogram Data</refpurpose>
+  </refnamediv>
+  <refsect1>
+    <title>Description</title>
+    <para>
+This format describes histogram data generated by the Renesas R-Car VSP1 1-D
+Histogram (HGO) engine.
+    </para>
+    <para>
+The VSP1 HGO is a histogram computation engine that can operate on RGB, YCrCb
+or HSV data. It operates on a possibly cropped and subsampled input image and
+computes the minimum, maximum and sum of all pixels as well as per-channel
+histograms.
+    </para>
+    <para>
+The HGO can compute histograms independently per channel, on the maximum of the
+three channels (RGB data only) or on the Y channel only (YCbCr only). It can
+additionally output the histogram with 64 or 256 bins, resulting in four
+possible modes of operation.
+      <itemizedlist>
+	<listitem>
+	  <para>
+	    In <emphasis>64 bins normal mode</emphasis>, the HGO operates
+	    on the three channels independently to compute three 64-bins
+	    histograms. RGB, YCbCr and HSV image formats are supported.
+	  </para>
+	</listitem>
+	<listitem>
+	  <para>
+	    In <emphasis>64 bins maximum mode</emphasis>, the HGO operates
+	    on the maximum of the (R, G, B) channels to compute a single
+	    64-bins histogram. Only the RGB image format is supported.
+	  </para>
+	</listitem>
+	<listitem>
+	  <para>
+	    In <emphasis>256 bins normal mode</emphasis>, the HGO operates
+	    on the Y channel to compute a single 256-bins histogram. Only the
+	    YCbCr image format is supported.
+	  </para>
+	</listitem>
+	<listitem>
+	  <para>
+	    In <emphasis>256 bins maximum mode</emphasis>, the HGO operates
+	    on the maximum of the (R, G, B) channels to compute a single
+	    256-bins histogram. Only the RGB image format is supported.
+	  </para>
+	</listitem>
+      </itemizedlist>
+    </para>
+    <para>
+All data is stored in memory in little endian format. Each cell in the tables
+below contains one byte.
+    </para>
+    <table frame="none">
+      <title>VSP1 HGO Data - 64 Bins, Normal Mode (792 bytes)</title>
+      <tgroup cols="5">
+	<colspec colnum="1" colname="offset" align="left" />
+	<colspec colnum="2" colname="b3"     align="center" />
+	<colspec colnum="3" colname="b2"     align="center" />
+	<colspec colnum="4" colname="b1"     align="center" />
+	<colspec colnum="5" colname="b0"     align="center" />
+	<spanspec namest="b3" nameend="b0" spanname="word" colsep="1" align="center" />
+	<thead>
+	  <row>
+	    <entry>Offset</entry>
+	    <entry spanname="word">Memory</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry>[31:24]</entry>
+	    <entry>[23:16]</entry>
+	    <entry>[15:8]</entry>
+	    <entry>[7:0]</entry>
+	  </row>
+	</thead>
+	<tbody valign="top">
+	  <row>
+	    <entry>0</entry>
+	    <entry>-</entry>
+	    <entry>R/Cr/H max [7:0]</entry>
+	    <entry>-</entry>
+	    <entry>R/Cr/H min [7:0]</entry>
+	  </row>
+	  <row>
+	    <entry>4</entry>
+	    <entry>-</entry>
+	    <entry>G/Y/S max [7:0]</entry>
+	    <entry>-</entry>
+	    <entry>G/Y/S min [7:0]</entry>
+	  </row>
+	  <row>
+	    <entry>8</entry>
+	    <entry>-</entry>
+	    <entry>B/Cb/V max [7:0]</entry>
+	    <entry>-</entry>
+	    <entry>B/Cb/V min [7:0]</entry>
+	  </row>
+	  <row>
+	    <entry>12</entry>
+	    <entry spanname="word">R/Cr/H sum [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>16</entry>
+	    <entry spanname="word">G/Y/S sum [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>20</entry>
+	    <entry spanname="word">B/Cb/V sum [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>24</entry>
+	    <entry spanname="word">R/Cr/H bin 0 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry spanname="word">...</entry>
+	  </row>
+	  <row>
+	    <entry>276</entry>
+	    <entry spanname="word">R/Cr/H bin 63 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>280</entry>
+	    <entry spanname="word">G/Y/S bin 0 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry spanname="word">...</entry>
+	  </row>
+	  <row>
+	    <entry>532</entry>
+	    <entry spanname="word">G/Y/S bin 63 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>536</entry>
+	    <entry spanname="word">B/Cb/V bin 0 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry spanname="word">...</entry>
+	  </row>
+	  <row>
+	    <entry>788</entry>
+	    <entry spanname="word">B/Cb/V bin 63 [31:0]</entry>
+	  </row>
+	</tbody>
+      </tgroup>
+    </table>
+    <table frame="none">
+      <title>VSP1 HGO Data - 64 Bins, Max Mode (264 bytes)</title>
+      <tgroup cols="5">
+	<colspec colnum="1" colname="offset" align="left" />
+	<colspec colnum="2" colname="b3"     align="center" />
+	<colspec colnum="3" colname="b2"     align="center" />
+	<colspec colnum="4" colname="b1"     align="center" />
+	<colspec colnum="5" colname="b0"     align="center" />
+	<spanspec namest="b3" nameend="b0" spanname="word" colsep="1" align="center" />
+	<thead>
+	  <row>
+	    <entry>Offset</entry>
+	    <entry spanname="word">Memory</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry>[31:24]</entry>
+	    <entry>[23:16]</entry>
+	    <entry>[15:8]</entry>
+	    <entry>[7:0]</entry>
+	  </row>
+	</thead>
+	<tbody valign="top">
+	  <row>
+	    <entry>0</entry>
+	    <entry>-</entry>
+	    <entry>max(R,G,B) max [7:0]</entry>
+	    <entry>-</entry>
+	    <entry>max(R,G,B) min [7:0]</entry>
+	  </row>
+	  <row>
+	    <entry>4</entry>
+	    <entry spanname="word">max(R,G,B) sum [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>8</entry>
+	    <entry spanname="word">max(R,G,B) bin 0 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry spanname="word">...</entry>
+	  </row>
+	  <row>
+	    <entry>260</entry>
+	    <entry spanname="word">max(R,G,B) bin 63 [31:0]</entry>
+	  </row>
+	</tbody>
+      </tgroup>
+    </table>
+    <table frame="none">
+      <title>VSP1 HGO Data - 256 Bins, Normal Mode (1032 bytes)</title>
+      <tgroup cols="5">
+	<colspec colnum="1" colname="offset" align="left" />
+	<colspec colnum="2" colname="b3"     align="center" />
+	<colspec colnum="3" colname="b2"     align="center" />
+	<colspec colnum="4" colname="b1"     align="center" />
+	<colspec colnum="5" colname="b0"     align="center" />
+	<spanspec namest="b3" nameend="b0" spanname="word" colsep="1" align="center" />
+	<thead>
+	  <row>
+	    <entry>Offset</entry>
+	    <entry spanname="word">Memory</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry>[31:24]</entry>
+	    <entry>[23:16]</entry>
+	    <entry>[15:8]</entry>
+	    <entry>[7:0]</entry>
+	  </row>
+	</thead>
+	<tbody valign="top">
+	  <row>
+	    <entry>0</entry>
+	    <entry>-</entry>
+	    <entry>Y max [7:0]</entry>
+	    <entry>-</entry>
+	    <entry>Y min [7:0]</entry>
+	  </row>
+	  <row>
+	    <entry>4</entry>
+	    <entry spanname="word">Y sum [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>8</entry>
+	    <entry spanname="word">Y bin 0 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry spanname="word">...</entry>
+	  </row>
+	  <row>
+	    <entry>1028</entry>
+	    <entry spanname="word">Y bin 255 [31:0]</entry>
+	  </row>
+	</tbody>
+      </tgroup>
+    </table>
+    <table frame="none">
+      <title>VSP1 HGO Data - 256 Bins, Max Mode (1032 bytes)</title>
+      <tgroup cols="5">
+	<colspec colnum="1" colname="offset" align="left" />
+	<colspec colnum="2" colname="b3"     align="center" />
+	<colspec colnum="3" colname="b2"     align="center" />
+	<colspec colnum="4" colname="b1"     align="center" />
+	<colspec colnum="5" colname="b0"     align="center" />
+	<spanspec namest="b3" nameend="b0" spanname="word" colsep="1" align="center" />
+	<thead>
+	  <row>
+	    <entry>Offset</entry>
+	    <entry spanname="word">Memory</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry>[31:24]</entry>
+	    <entry>[23:16]</entry>
+	    <entry>[15:8]</entry>
+	    <entry>[7:0]</entry>
+	  </row>
+	</thead>
+	<tbody valign="top">
+	  <row>
+	    <entry>0</entry>
+	    <entry>-</entry>
+	    <entry>max(R,G,B) max [7:0]</entry>
+	    <entry>-</entry>
+	    <entry>max(R,G,B) min [7:0]</entry>
+	  </row>
+	  <row>
+	    <entry>4</entry>
+	    <entry spanname="word">max(R,G,B) sum [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry>8</entry>
+	    <entry spanname="word">max(R,G,B) bin 0 [31:0]</entry>
+	  </row>
+	  <row>
+	    <entry></entry>
+	    <entry spanname="word">...</entry>
+	  </row>
+	  <row>
+	    <entry>1028</entry>
+	    <entry spanname="word">max(R,G,B) bin 255 [31:0]</entry>
+	  </row>
+	</tbody>
+      </tgroup>
+    </table>
+  </refsect1>
+</refentry>
diff --git a/Documentation/DocBook/media/v4l/pixfmt.xml b/Documentation/DocBook/media/v4l/pixfmt.xml
index 5a08aeea4360..bd67539abf47 100644
--- a/Documentation/DocBook/media/v4l/pixfmt.xml
+++ b/Documentation/DocBook/media/v4l/pixfmt.xml
@@ -1740,6 +1740,15 @@ extended control <constant>V4L2_CID_MPEG_STREAM_TYPE</constant>, see
     </table>
   </section>
 
+  <section id="meta-formats">
+    <title>Metadata Formats</title>
+
+    <para>These formats are used for metadata.</para>
+
+    &sub-meta-vsp1-hgo;
+
+  </section>
+
   <section id="sdr-formats">
     <title>SDR Formats</title>
 
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 256938fff9e0..19d3aee3b374 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1259,6 +1259,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
 	case V4L2_SDR_FMT_CS8:		descr = "Complex S8"; break;
 	case V4L2_SDR_FMT_CS14LE:	descr = "Complex S14LE"; break;
 	case V4L2_SDR_FMT_RU12LE:	descr = "Real U12LE"; break;
+	case V4L2_META_FMT_VSP1_HGO:	descr = "R-Car VSP1 1-D Histogram"; break;
 
 	default:
 		/* Compressed formats */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 5fbd30ca9b1e..0c72202525c6 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -635,6 +635,9 @@ struct v4l2_pix_format {
 #define V4L2_SDR_FMT_CS14LE       v4l2_fourcc('C', 'S', '1', '4') /* complex s14le */
 #define V4L2_SDR_FMT_RU12LE       v4l2_fourcc('R', 'U', '1', '2') /* real u12le */
 
+/* Meta-data formats */
+#define V4L2_META_FMT_VSP1_HGO    v4l2_fourcc('V', 'S', 'P', 'H') /* R-Car VSP1 Histogram */
+
 /* priv field value to indicates that subsequent fields are valid. */
 #define V4L2_PIX_FMT_PRIV_MAGIC		0xfeedcafe
 
-- 
2.7.3


* [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support
  2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
                   ` (2 preceding siblings ...)
  2016-05-12  0:18 ` [PATCH/RFC v2 3/4] v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine Laurent Pinchart
@ 2016-05-12  0:18 ` Laurent Pinchart
  2016-06-13 15:33   ` Guennadi Liakhovetski
  2016-05-13  9:26 ` [PATCH/RFC v2 0/4] Meta-data video device type Hans Verkuil
  4 siblings, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-12  0:18 UTC (permalink / raw)
  To: linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

The HGO (Histogram Generator One-Dimension) is a statistics engine that
computes per-channel histograms over a configurable region of the image,
with optional subsampling.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
---
 drivers/media/platform/Kconfig            |   1 +
 drivers/media/platform/vsp1/Makefile      |   1 +
 drivers/media/platform/vsp1/vsp1.h        |   3 +
 drivers/media/platform/vsp1/vsp1_drm.c    |   2 +-
 drivers/media/platform/vsp1/vsp1_drv.c    |  37 ++-
 drivers/media/platform/vsp1/vsp1_entity.c | 131 +++++++-
 drivers/media/platform/vsp1/vsp1_entity.h |   7 +-
 drivers/media/platform/vsp1/vsp1_hgo.c    | 496 ++++++++++++++++++++++++++++++
 drivers/media/platform/vsp1/vsp1_hgo.h    |  50 +++
 drivers/media/platform/vsp1/vsp1_histo.c  | 307 ++++++++++++++++++
 drivers/media/platform/vsp1/vsp1_histo.h  |  68 ++++
 drivers/media/platform/vsp1/vsp1_pipe.c   |  30 +-
 drivers/media/platform/vsp1/vsp1_pipe.h   |   2 +
 drivers/media/platform/vsp1/vsp1_regs.h   |  24 +-
 drivers/media/platform/vsp1/vsp1_video.c  |  22 +-
 15 files changed, 1142 insertions(+), 39 deletions(-)
 create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.c
 create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.h
 create mode 100644 drivers/media/platform/vsp1/vsp1_histo.c
 create mode 100644 drivers/media/platform/vsp1/vsp1_histo.h

diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index a3304466e628..0141af8cfdbc 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -266,6 +266,7 @@ config VIDEO_RENESAS_VSP1
 	depends on (ARCH_RENESAS && OF) || COMPILE_TEST
 	depends on !ARM64 || VIDEO_RENESAS_FCP
 	select VIDEOBUF2_DMA_CONTIG
+	select VIDEOBUF2_VMALLOC
 	---help---
 	  This is a V4L2 driver for the Renesas VSP1 video processing engine.
 
diff --git a/drivers/media/platform/vsp1/Makefile b/drivers/media/platform/vsp1/Makefile
index 95b3ac2ea7ef..a12356bf2135 100644
--- a/drivers/media/platform/vsp1/Makefile
+++ b/drivers/media/platform/vsp1/Makefile
@@ -3,5 +3,6 @@ vsp1-y					+= vsp1_dl.o vsp1_drm.o vsp1_video.o
 vsp1-y					+= vsp1_rpf.o vsp1_rwpf.o vsp1_wpf.o
 vsp1-y					+= vsp1_hsit.o vsp1_lif.o vsp1_lut.o
 vsp1-y					+= vsp1_bru.o vsp1_sru.o vsp1_uds.o
+vsp1-y					+= vsp1_hgo.o vsp1_histo.o
 
 obj-$(CONFIG_VIDEO_RENESAS_VSP1)	+= vsp1.o
diff --git a/drivers/media/platform/vsp1/vsp1.h b/drivers/media/platform/vsp1/vsp1.h
index 7cb0f5e428df..6bf6d54c0ae4 100644
--- a/drivers/media/platform/vsp1/vsp1.h
+++ b/drivers/media/platform/vsp1/vsp1.h
@@ -31,6 +31,7 @@ struct vsp1_drm;
 struct vsp1_entity;
 struct vsp1_platform_data;
 struct vsp1_bru;
+struct vsp1_hgo;
 struct vsp1_hsit;
 struct vsp1_lif;
 struct vsp1_lut;
@@ -46,6 +47,7 @@ struct vsp1_uds;
 #define VSP1_HAS_LUT		(1 << 1)
 #define VSP1_HAS_SRU		(1 << 2)
 #define VSP1_HAS_BRU		(1 << 3)
+#define VSP1_HAS_HGO		(1 << 4)
 
 struct vsp1_device_info {
 	u32 version;
@@ -66,6 +68,7 @@ struct vsp1_device {
 	struct rcar_fcp_device *fcp;
 
 	struct vsp1_bru *bru;
+	struct vsp1_hgo *hgo;
 	struct vsp1_hsit *hsi;
 	struct vsp1_hsit *hst;
 	struct vsp1_lif *lif;
diff --git a/drivers/media/platform/vsp1/vsp1_drm.c b/drivers/media/platform/vsp1/vsp1_drm.c
index 14730119687f..ff961b2c084e 100644
--- a/drivers/media/platform/vsp1/vsp1_drm.c
+++ b/drivers/media/platform/vsp1/vsp1_drm.c
@@ -489,7 +489,7 @@ void vsp1_du_atomic_flush(struct device *dev)
 			}
 		}
 
-		vsp1_entity_route_setup(entity, pipe->dl);
+		vsp1_entity_route_setup(entity, pipe, pipe->dl);
 
 		if (entity->ops->configure)
 			entity->ops->configure(entity, pipe, pipe->dl);
diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c
index e655639af7e2..c737f15ad1f4 100644
--- a/drivers/media/platform/vsp1/vsp1_drv.c
+++ b/drivers/media/platform/vsp1/vsp1_drv.c
@@ -29,6 +29,7 @@
 #include "vsp1_bru.h"
 #include "vsp1_dl.h"
 #include "vsp1_drm.h"
+#include "vsp1_hgo.h"
 #include "vsp1_hsit.h"
 #include "vsp1_lif.h"
 #include "vsp1_lut.h"
@@ -104,7 +105,8 @@ static int vsp1_create_sink_links(struct vsp1_device *vsp1,
 		if (source->type == sink->type)
 			continue;
 
-		if (source->type == VSP1_ENTITY_LIF ||
+		if (source->type == VSP1_ENTITY_HGO ||
+		    source->type == VSP1_ENTITY_LIF ||
 		    source->type == VSP1_ENTITY_WPF)
 			continue;
 
@@ -147,6 +149,16 @@ static int vsp1_uapi_create_links(struct vsp1_device *vsp1)
 			return ret;
 	}
 
+	if (vsp1->info->features & VSP1_HAS_HGO) {
+		ret = media_create_pad_link(&vsp1->hgo->entity.subdev.entity,
+					    HGO_PAD_SOURCE,
+					    &vsp1->hgo->histo.video.entity, 0,
+					    MEDIA_LNK_FL_ENABLED |
+					    MEDIA_LNK_FL_IMMUTABLE);
+		if (ret < 0)
+			return ret;
+	}
+
 	if (vsp1->info->features & VSP1_HAS_LIF) {
 		ret = media_create_pad_link(&vsp1->wpf[0]->entity.subdev.entity,
 					    RWPF_PAD_SOURCE,
@@ -270,6 +282,16 @@ static int vsp1_create_entities(struct vsp1_device *vsp1)
 
 	list_add_tail(&vsp1->hst->entity.list_dev, &vsp1->entities);
 
+	if (vsp1->info->features & VSP1_HAS_HGO) {
+		vsp1->hgo = vsp1_hgo_create(vsp1);
+		if (IS_ERR(vsp1->hgo)) {
+			ret = PTR_ERR(vsp1->hgo);
+			goto done;
+		}
+
+		list_add_tail(&vsp1->hgo->entity.list_dev, &vsp1->entities);
+	}
+
 	if (vsp1->info->features & VSP1_HAS_LIF) {
 		vsp1->lif = vsp1_lif_create(vsp1);
 		if (IS_ERR(vsp1->lif)) {
@@ -549,7 +571,8 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
 	{
 		.version = VI6_IP_VERSION_MODEL_VSPS_H2,
 		.gen = 2,
-		.features = VSP1_HAS_BRU | VSP1_HAS_LUT | VSP1_HAS_SRU,
+		.features = VSP1_HAS_BRU | VSP1_HAS_HGO | VSP1_HAS_LUT
+			  | VSP1_HAS_SRU,
 		.rpf_count = 5,
 		.uds_count = 3,
 		.wpf_count = 4,
@@ -567,7 +590,8 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
 	}, {
 		.version = VI6_IP_VERSION_MODEL_VSPD_GEN2,
 		.gen = 2,
-		.features = VSP1_HAS_BRU | VSP1_HAS_LIF | VSP1_HAS_LUT,
+		.features = VSP1_HAS_BRU | VSP1_HAS_HGO | VSP1_HAS_LIF
+			  | VSP1_HAS_LUT,
 		.rpf_count = 4,
 		.uds_count = 1,
 		.wpf_count = 4,
@@ -576,7 +600,8 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
 	}, {
 		.version = VI6_IP_VERSION_MODEL_VSPS_M2,
 		.gen = 2,
-		.features = VSP1_HAS_BRU | VSP1_HAS_LUT | VSP1_HAS_SRU,
+		.features = VSP1_HAS_BRU | VSP1_HAS_HGO | VSP1_HAS_LUT
+			  | VSP1_HAS_SRU,
 		.rpf_count = 5,
 		.uds_count = 3,
 		.wpf_count = 4,
@@ -585,7 +610,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
 	}, {
 		.version = VI6_IP_VERSION_MODEL_VSPI_GEN3,
 		.gen = 3,
-		.features = VSP1_HAS_LUT | VSP1_HAS_SRU,
+		.features = VSP1_HAS_HGO | VSP1_HAS_LUT | VSP1_HAS_SRU,
 		.rpf_count = 1,
 		.uds_count = 1,
 		.wpf_count = 1,
@@ -601,7 +626,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
 	}, {
 		.version = VI6_IP_VERSION_MODEL_VSPBC_GEN3,
 		.gen = 3,
-		.features = VSP1_HAS_BRU | VSP1_HAS_LUT,
+		.features = VSP1_HAS_BRU | VSP1_HAS_HGO | VSP1_HAS_LUT,
 		.rpf_count = 5,
 		.wpf_count = 1,
 		.num_bru_inputs = 5,
diff --git a/drivers/media/platform/vsp1/vsp1_entity.c b/drivers/media/platform/vsp1/vsp1_entity.c
index fd20c0d8aeea..2d278583ce17 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.c
+++ b/drivers/media/platform/vsp1/vsp1_entity.c
@@ -21,6 +21,8 @@
 #include "vsp1.h"
 #include "vsp1_dl.h"
 #include "vsp1_entity.h"
+#include "vsp1_pipe.h"
+#include "vsp1_rwpf.h"
 
 static inline struct vsp1_entity *
 media_entity_to_vsp1_entity(struct media_entity *entity)
@@ -28,11 +30,28 @@ media_entity_to_vsp1_entity(struct media_entity *entity)
 	return container_of(entity, struct vsp1_entity, subdev.entity);
 }
 
-void vsp1_entity_route_setup(struct vsp1_entity *source,
+void vsp1_entity_route_setup(struct vsp1_entity *entity,
+			     struct vsp1_pipeline *pipe,
 			     struct vsp1_dl_list *dl)
 {
+	struct vsp1_entity *source;
 	struct vsp1_entity *sink;
 
+	if (entity->type == VSP1_ENTITY_HGO) {
+		u32 smppt;
+
+		/* The HGO is a special case: its routing is configured on the
+		 * sink pad.
+		 */
+		source = media_entity_to_vsp1_entity(entity->source);
+		smppt = (pipe->output->entity.index << VI6_DPR_SMPPT_TGW_SHIFT)
+		      | (source->route->output << VI6_DPR_SMPPT_PT_SHIFT);
+
+		vsp1_dl_list_write(dl, VI6_DPR_HGO_SMPPT, smppt);
+		return;
+	}
+
+	source = entity;
 	if (source->route->reg == 0)
 		return;
 
@@ -267,25 +286,30 @@ int vsp1_subdev_enum_frame_size(struct v4l2_subdev *subdev,
  * Media Operations
  */
 
-int vsp1_entity_link_setup(struct media_entity *entity,
-			   const struct media_pad *local,
-			   const struct media_pad *remote, u32 flags)
+static int vsp1_entity_link_setup_source(const struct media_pad *source_pad,
+					 const struct media_pad *sink_pad,
+					 u32 flags)
 {
 	struct vsp1_entity *source;
 
-	if (!(local->flags & MEDIA_PAD_FL_SOURCE))
-		return 0;
-
-	source = media_entity_to_vsp1_entity(local->entity);
+	source = media_entity_to_vsp1_entity(source_pad->entity);
 
 	if (!source->route)
 		return 0;
 
 	if (flags & MEDIA_LNK_FL_ENABLED) {
-		if (source->sink)
-			return -EBUSY;
-		source->sink = remote->entity;
-		source->sink_pad = remote->index;
+		struct vsp1_entity *sink
+			= media_entity_to_vsp1_entity(sink_pad->entity);
+
+		/* Fan-out is limited to one for the normal data path plus an
+		 * optional HGO. We ignore the HGO here.
+		 */
+		if (sink->type != VSP1_ENTITY_HGO) {
+			if (source->sink)
+				return -EBUSY;
+			source->sink = sink_pad->entity;
+			source->sink_pad = sink_pad->index;
+		}
 	} else {
 		source->sink = NULL;
 		source->sink_pad = 0;
@@ -294,6 +318,84 @@ int vsp1_entity_link_setup(struct media_entity *entity,
 	return 0;
 }
 
+static int vsp1_entity_link_setup_sink(const struct media_pad *source_pad,
+				       const struct media_pad *sink_pad,
+				       u32 flags)
+{
+	struct vsp1_entity *sink;
+
+	sink = media_entity_to_vsp1_entity(sink_pad->entity);
+
+	if (flags & MEDIA_LNK_FL_ENABLED) {
+		/* Fan-in is limited to one. */
+		if (sink->source)
+			return -EBUSY;
+
+		sink->source = source_pad->entity;
+	} else {
+		sink->source = NULL;
+	}
+
+	return 0;
+}
+
+int vsp1_entity_link_setup(struct media_entity *entity,
+			   const struct media_pad *local,
+			   const struct media_pad *remote, u32 flags)
+{
+	if (local->flags & MEDIA_PAD_FL_SOURCE)
+		return vsp1_entity_link_setup_source(local, remote, flags);
+	else
+		return vsp1_entity_link_setup_sink(remote, local, flags);
+}
+
+/**
+ * vsp1_entity_remote_pad - Find the pad at the remote end of a link
+ * @pad: Pad at the local end of the link
+ *
+ * Search for a remote pad connected to the given pad by iterating over all
+ * links originating or terminating at that pad until an enabled link is found.
+ *
+ * Our link setup implementation guarantees that the output fan-out will not be
+ * higher than one for the data pipelines, except for the link to the HGO that
+ * can be enabled in addition to a regular data link. When traversing outgoing
+ * links this function ignores HGO entities and should thus be used in place of
+ * the generic media_entity_remote_pad() function when traversing data
+ * pipelines.
+ *
+ * Return a pointer to the pad at the remote end of the first found enabled
+ * link, or NULL if no enabled link has been found.
+ */
+struct media_pad *vsp1_entity_remote_pad(struct media_pad *pad)
+{
+	struct media_link *link;
+
+	list_for_each_entry(link, &pad->entity->links, list) {
+		struct vsp1_entity *entity;
+
+		if (!(link->flags & MEDIA_LNK_FL_ENABLED))
+			continue;
+
+		/* If we're the sink the source will never be an HGO. */
+		if (link->sink == pad)
+			return link->source;
+
+		if (link->source != pad)
+			continue;
+
+		/* If the sink isn't a subdevice it can't be an HGO. */
+		if (!is_media_entity_v4l2_subdev(link->sink->entity))
+			return link->sink;
+
+		entity = media_entity_to_vsp1_entity(link->sink->entity);
+		if (entity->type != VSP1_ENTITY_HGO)
+			return link->sink;
+	}
+
+	return NULL;
+}
+
 /* -----------------------------------------------------------------------------
  * Initialization
  */
@@ -319,6 +421,7 @@ static const struct vsp1_route vsp1_routes[] = {
 	  { VI6_DPR_NODE_BRU_IN(0), VI6_DPR_NODE_BRU_IN(1),
 	    VI6_DPR_NODE_BRU_IN(2), VI6_DPR_NODE_BRU_IN(3),
 	    VI6_DPR_NODE_BRU_IN(4) }, VI6_DPR_NODE_BRU_OUT },
+	{ VSP1_ENTITY_HGO, 0, 0, { 0, }, 0 },
 	VSP1_ENTITY_ROUTE(HSI),
 	VSP1_ENTITY_ROUTE(HST),
 	{ VSP1_ENTITY_LIF, 0, 0, { VI6_DPR_NODE_LIF, }, VI6_DPR_NODE_LIF },
@@ -369,7 +472,9 @@ int vsp1_entity_init(struct vsp1_device *vsp1, struct vsp1_entity *entity,
 	for (i = 0; i < num_pads - 1; ++i)
 		entity->pads[i].flags = MEDIA_PAD_FL_SINK;
 
-	entity->pads[num_pads - 1].flags = MEDIA_PAD_FL_SOURCE;
+	/* Single-pad entities only have a sink. */
+	entity->pads[num_pads - 1].flags = num_pads > 1 ? MEDIA_PAD_FL_SOURCE
+					 : MEDIA_PAD_FL_SINK;
 
 	/* Initialize the media entity. */
 	ret = media_entity_pads_init(&entity->subdev.entity, num_pads,
diff --git a/drivers/media/platform/vsp1/vsp1_entity.h b/drivers/media/platform/vsp1/vsp1_entity.h
index a240fc1c59a6..ea3f4125aa1c 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.h
+++ b/drivers/media/platform/vsp1/vsp1_entity.h
@@ -24,6 +24,7 @@ struct vsp1_pipeline;
 
 enum vsp1_entity_type {
 	VSP1_ENTITY_BRU,
+	VSP1_ENTITY_HGO,
 	VSP1_ENTITY_HSI,
 	VSP1_ENTITY_HST,
 	VSP1_ENTITY_LIF,
@@ -90,6 +91,7 @@ struct vsp1_entity {
 	struct media_pad *pads;
 	unsigned int source_pad;
 
+	struct media_entity *source;
 	struct media_entity *sink;
 	unsigned int sink_pad;
 
@@ -128,9 +130,12 @@ vsp1_entity_get_pad_selection(struct vsp1_entity *entity,
 int vsp1_entity_init_cfg(struct v4l2_subdev *subdev,
 			 struct v4l2_subdev_pad_config *cfg);
 
-void vsp1_entity_route_setup(struct vsp1_entity *source,
+void vsp1_entity_route_setup(struct vsp1_entity *entity,
+			     struct vsp1_pipeline *pipe,
 			     struct vsp1_dl_list *dl);
 
+struct media_pad *vsp1_entity_remote_pad(struct media_pad *pad);
+
 int vsp1_subdev_get_pad_format(struct v4l2_subdev *subdev,
 			       struct v4l2_subdev_pad_config *cfg,
 			       struct v4l2_subdev_format *fmt);
diff --git a/drivers/media/platform/vsp1/vsp1_hgo.c b/drivers/media/platform/vsp1/vsp1_hgo.c
new file mode 100644
index 000000000000..8ca0ac3900bd
--- /dev/null
+++ b/drivers/media/platform/vsp1/vsp1_hgo.c
@@ -0,0 +1,496 @@
+/*
+ * vsp1_hgo.c  --  R-Car VSP1 Histogram Generator 1D
+ *
+ * Copyright (C) 2016 Renesas Electronics Corporation
+ *
+ * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/device.h>
+#include <linux/gfp.h>
+
+#include <media/v4l2-subdev.h>
+#include <media/videobuf2-vmalloc.h>
+
+#include "vsp1.h"
+#include "vsp1_dl.h"
+#include "vsp1_hgo.h"
+
+#define HGO_MIN_SIZE				4U
+#define HGO_MAX_SIZE				8192U
+#define HGO_DATA_SIZE				((2 + 256) * 4)
+
+/* -----------------------------------------------------------------------------
+ * Device Access
+ */
+
+static inline u32 vsp1_hgo_read(struct vsp1_hgo *hgo, u32 reg)
+{
+	return vsp1_read(hgo->entity.vsp1, reg);
+}
+
+static inline void vsp1_hgo_write(struct vsp1_hgo *hgo, struct vsp1_dl_list *dl,
+				  u32 reg, u32 data)
+{
+	vsp1_dl_list_write(dl, reg, data);
+}
+
+/* -----------------------------------------------------------------------------
+ * Frame End Handler
+ */
+
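+/*
+ * Read back the histogram data for the completed frame. Three buffer layouts
+ * are possible, depending on the number of bins and on the max RGB mode:
+ *
+ * - 256 bins: G max/min and sum, followed by 256 bin values read through the
+ *   indirect access registers.
+ * - 64 bins with max RGB enabled: G max/min and sum, followed by 64 bin
+ *   values.
+ * - 64 bins with max RGB disabled: R, G and B max/min values and sums,
+ *   followed by 64 bin values for each of the three channels.
+ */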
+void vsp1_hgo_frame_end(struct vsp1_entity *entity)
+{
+	struct vsp1_hgo *hgo = to_hgo(&entity->subdev);
+	struct vsp1_histogram_buffer *buf;
+	unsigned int i;
+	size_t size;
+	u32 *data;
+
+	buf = vsp1_histogram_buffer_get(&hgo->histo);
+	if (!buf)
+		return;
+
+	data = buf->addr;
+
+	if (hgo->num_bins == 256) {
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_MAXMIN);
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_SUM);
+
+		for (i = 0; i < 256; ++i) {
+			vsp1_write(hgo->entity.vsp1, VI6_HGO_EXT_HIST_ADDR, i);
+			*data++ = vsp1_hgo_read(hgo, VI6_HGO_EXT_HIST_DATA);
+		}
+
+		size = (2 + 256) * sizeof(u32);
+	} else if (hgo->max_rgb) {
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_MAXMIN);
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_SUM);
+
+		for (i = 0; i < 64; ++i)
+			*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_HISTO(i));
+
+		size = (2 + 64) * sizeof(u32);
+	} else {
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_R_MAXMIN);
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_MAXMIN);
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_B_MAXMIN);
+
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_R_SUM);
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_G_SUM);
+		*data++ = vsp1_hgo_read(hgo, VI6_HGO_B_SUM);
+
+		for (i = 0; i < 64; ++i) {
+			data[i] = vsp1_hgo_read(hgo, VI6_HGO_R_HISTO(i));
+			data[i+64] = vsp1_hgo_read(hgo, VI6_HGO_G_HISTO(i));
+			data[i+128] = vsp1_hgo_read(hgo, VI6_HGO_B_HISTO(i));
+		}
+
+		size = (6 + 64 * 3) * sizeof(u32);
+	}
+
+	vsp1_histogram_buffer_complete(&hgo->histo, buf, size);
+}
+
+/* -----------------------------------------------------------------------------
+ * Controls
+ */
+
+#define V4L2_CID_VSP1_HGO_MAX_RGB		(V4L2_CID_USER_BASE + 1)
+#define V4L2_CID_VSP1_HGO_NUM_BINS		(V4L2_CID_USER_BASE + 2)
+
+static const struct v4l2_ctrl_config hgo_max_rgb_control = {
+	.id = V4L2_CID_VSP1_HGO_MAX_RGB,
+	.name = "Maximum RGB Mode",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.min = 0,
+	.max = 1,
+	.def = 0,
+	.step = 1,
+};
+
+static const s64 hgo_num_bins[] = {
+	64, 256,
+};
+
+static const struct v4l2_ctrl_config hgo_num_bins_control = {
+	.id = V4L2_CID_VSP1_HGO_NUM_BINS,
+	.name = "Number of Bins",
+	.type = V4L2_CTRL_TYPE_INTEGER_MENU,
+	.min = 0,
+	.max = 1,
+	.def = 0,
+	.qmenu_int = hgo_num_bins,
+};
+
+/* -----------------------------------------------------------------------------
+ * V4L2 Subdevice Operations
+ */
+
+static int hgo_enum_mbus_code(struct v4l2_subdev *subdev,
+			       struct v4l2_subdev_pad_config *cfg,
+			       struct v4l2_subdev_mbus_code_enum *code)
+{
+	static const unsigned int codes[] = {
+		MEDIA_BUS_FMT_ARGB8888_1X32,
+		MEDIA_BUS_FMT_AHSV8888_1X32,
+		MEDIA_BUS_FMT_AYUV8_1X32,
+	};
+
+	if (code->pad == HGO_PAD_SOURCE) {
+		code->code = MEDIA_BUS_FMT_FIXED;
+		return 0;
+	}
+
+	return vsp1_subdev_enum_mbus_code(subdev, cfg, code, codes,
+					  ARRAY_SIZE(codes));
+}
+
+static int hgo_enum_frame_size(struct v4l2_subdev *subdev,
+				struct v4l2_subdev_pad_config *cfg,
+				struct v4l2_subdev_frame_size_enum *fse)
+{
+	if (fse->pad != HGO_PAD_SINK)
+		return -EINVAL;
+
+	return vsp1_subdev_enum_frame_size(subdev, cfg, fse, HGO_MIN_SIZE,
+					   HGO_MIN_SIZE, HGO_MAX_SIZE,
+					   HGO_MAX_SIZE);
+}
+
+static int hgo_get_selection(struct v4l2_subdev *subdev,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_selection *sel)
+{
+	struct vsp1_hgo *hgo = to_hgo(subdev);
+	struct v4l2_subdev_pad_config *config;
+	struct v4l2_mbus_framefmt *format;
+	struct v4l2_rect *crop;
+
+	if (sel->pad != HGO_PAD_SINK)
+		return -EINVAL;
+
+	config = vsp1_entity_get_pad_config(&hgo->entity, cfg, sel->which);
+	if (!config)
+		return -EINVAL;
+
+	switch (sel->target) {
+	case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+	case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+		crop = vsp1_entity_get_pad_selection(&hgo->entity, config,
+						     HGO_PAD_SINK,
+						     V4L2_SEL_TGT_CROP);
+		sel->r.left = 0;
+		sel->r.top = 0;
+		sel->r.width = crop->width;
+		sel->r.height = crop->height;
+		return 0;
+
+	case V4L2_SEL_TGT_CROP_BOUNDS:
+	case V4L2_SEL_TGT_CROP_DEFAULT:
+		format = vsp1_entity_get_pad_format(&hgo->entity, config,
+						    HGO_PAD_SINK);
+		sel->r.left = 0;
+		sel->r.top = 0;
+		sel->r.width = format->width;
+		sel->r.height = format->height;
+		return 0;
+
+	case V4L2_SEL_TGT_COMPOSE:
+	case V4L2_SEL_TGT_CROP:
+		sel->r = *vsp1_entity_get_pad_selection(&hgo->entity, config,
+							sel->pad, sel->target);
+		return 0;
+
+	default:
+		return -EINVAL;
+	}
+}
+
+static int hgo_set_crop(struct v4l2_subdev *subdev,
+			struct v4l2_subdev_pad_config *config,
+			struct v4l2_subdev_selection *sel)
+{
+	struct vsp1_hgo *hgo = to_hgo(subdev);
+	struct v4l2_mbus_framefmt *format;
+	struct v4l2_rect *selection;
+
+	/* The crop rectangle must be inside the input frame. */
+	format = vsp1_entity_get_pad_format(&hgo->entity, config, HGO_PAD_SINK);
+	sel->r.left = clamp_t(unsigned int, sel->r.left, 0, format->width - 1);
+	sel->r.top = clamp_t(unsigned int, sel->r.top, 0, format->height - 1);
+	sel->r.width = clamp_t(unsigned int, sel->r.width, HGO_MIN_SIZE,
+			       format->width - sel->r.left);
+	sel->r.height = clamp_t(unsigned int, sel->r.height, HGO_MIN_SIZE,
+				format->height - sel->r.top);
+
+	/* Set the crop rectangle and reset the compose rectangle. */
+	selection = vsp1_entity_get_pad_selection(&hgo->entity, config,
+						  sel->pad, V4L2_SEL_TGT_CROP);
+	*selection = sel->r;
+
+	selection = vsp1_entity_get_pad_selection(&hgo->entity, config,
+						  sel->pad,
+						  V4L2_SEL_TGT_COMPOSE);
+	*selection = sel->r;
+
+	return 0;
+}
+
+static int hgo_set_compose(struct v4l2_subdev *subdev,
+			   struct v4l2_subdev_pad_config *config,
+			   struct v4l2_subdev_selection *sel)
+{
+	struct vsp1_hgo *hgo = to_hgo(subdev);
+	struct v4l2_rect *compose;
+	struct v4l2_rect *crop;
+	unsigned int ratio;
+
+	/* The compose rectangle is used to configure downscaling, the top left
+	 * corner is fixed to (0,0) and the size to 1/2 or 1/4 of the crop
+	 * rectangle.
+	 */
+	sel->r.left = 0;
+	sel->r.top = 0;
+
+	crop = vsp1_entity_get_pad_selection(&hgo->entity, config, sel->pad,
+					     V4L2_SEL_TGT_CROP);
+
+	/* Clamp the width and height to acceptable values first and then
+	 * compute the closest rounded dividing ratio.
+	 *
+	 * Ratio	Rounded ratio
+	 * --------------------------
+	 * [1.0 1.5[	1
+	 * [1.5 3.0[	2
+	 * [3.0 4.0]	4
+	 *
+	 * The rounded ratio can be computed using
+	 *
+	 * 1 << (ceil(ratio * 2) / 3)
+	 */
+	sel->r.width = clamp(sel->r.width, crop->width / 4, crop->width);
+	ratio = 1 << (crop->width * 2 / sel->r.width / 3);
+	sel->r.width = crop->width / ratio;
+
+	sel->r.height = clamp(sel->r.height, crop->height / 4, crop->height);
+	ratio = 1 << (crop->height * 2 / sel->r.height / 3);
+	sel->r.height = crop->height / ratio;
+
+	compose = vsp1_entity_get_pad_selection(&hgo->entity, config, sel->pad,
+						V4L2_SEL_TGT_COMPOSE);
+	*compose = sel->r;
+
+	return 0;
+}
+
+static int hgo_set_selection(struct v4l2_subdev *subdev,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_selection *sel)
+{
+	struct vsp1_hgo *hgo = to_hgo(subdev);
+	struct v4l2_subdev_pad_config *config;
+
+	if (sel->pad != HGO_PAD_SINK)
+		return -EINVAL;
+
+	config = vsp1_entity_get_pad_config(&hgo->entity, cfg, sel->which);
+	if (!config)
+		return -EINVAL;
+
+	if (sel->target == V4L2_SEL_TGT_CROP)
+		return hgo_set_crop(subdev, config, sel);
+	else if (sel->target == V4L2_SEL_TGT_COMPOSE)
+		return hgo_set_compose(subdev, config, sel);
+	else
+		return -EINVAL;
+}
+
+static int hgo_get_format(struct v4l2_subdev *subdev,
+			   struct v4l2_subdev_pad_config *cfg,
+			   struct v4l2_subdev_format *fmt)
+{
+	if (fmt->pad == HGO_PAD_SOURCE) {
+		fmt->format.code = MEDIA_BUS_FMT_FIXED;
+		fmt->format.width = 0;
+		fmt->format.height = 0;
+		fmt->format.field = V4L2_FIELD_NONE;
+		fmt->format.colorspace = V4L2_COLORSPACE_RAW;
+	}
+
+	return vsp1_subdev_get_pad_format(subdev, cfg, fmt);
+}
+
+static int hgo_set_format(struct v4l2_subdev *subdev,
+			   struct v4l2_subdev_pad_config *cfg,
+			   struct v4l2_subdev_format *fmt)
+{
+	struct vsp1_hgo *hgo = to_hgo(subdev);
+	struct v4l2_subdev_pad_config *config;
+	struct v4l2_mbus_framefmt *format;
+	struct v4l2_rect *selection;
+
+	if (fmt->pad != HGO_PAD_SINK)
+		return hgo_get_format(subdev, cfg, fmt);
+
+	config = vsp1_entity_get_pad_config(&hgo->entity, cfg, fmt->which);
+	if (!config)
+		return -EINVAL;
+
+	/* Default to YUV if the requested format is not supported. */
+	if (fmt->format.code != MEDIA_BUS_FMT_ARGB8888_1X32 &&
+	    fmt->format.code != MEDIA_BUS_FMT_AHSV8888_1X32 &&
+	    fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
+		fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
+
+	format = vsp1_entity_get_pad_format(&hgo->entity, config, fmt->pad);
+
+	format->code = fmt->format.code;
+	format->width = clamp_t(unsigned int, fmt->format.width,
+				HGO_MIN_SIZE, HGO_MAX_SIZE);
+	format->height = clamp_t(unsigned int, fmt->format.height,
+				 HGO_MIN_SIZE, HGO_MAX_SIZE);
+	format->field = V4L2_FIELD_NONE;
+	format->colorspace = V4L2_COLORSPACE_SRGB;
+
+	fmt->format = *format;
+
+	/* Reset the crop and compose rectangles */
+	selection = vsp1_entity_get_pad_selection(&hgo->entity, config,
+						  fmt->pad, V4L2_SEL_TGT_CROP);
+	selection->left = 0;
+	selection->top = 0;
+	selection->width = format->width;
+	selection->height = format->height;
+
+	selection = vsp1_entity_get_pad_selection(&hgo->entity, config,
+						  fmt->pad,
+						  V4L2_SEL_TGT_COMPOSE);
+	selection->left = 0;
+	selection->top = 0;
+	selection->width = format->width;
+	selection->height = format->height;
+
+	return 0;
+}
+
+static struct v4l2_subdev_pad_ops hgo_pad_ops = {
+	.enum_mbus_code = hgo_enum_mbus_code,
+	.enum_frame_size = hgo_enum_frame_size,
+	.get_fmt = hgo_get_format,
+	.set_fmt = hgo_set_format,
+	.get_selection = hgo_get_selection,
+	.set_selection = hgo_set_selection,
+};
+
+static struct v4l2_subdev_ops hgo_ops = {
+	.pad    = &hgo_pad_ops,
+};
+
+/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void hgo_configure(struct vsp1_entity *entity,
+			  struct vsp1_pipeline *pipe,
+			  struct vsp1_dl_list *dl)
+{
+	struct vsp1_hgo *hgo = to_hgo(&entity->subdev);
+	struct v4l2_rect *compose;
+	struct v4l2_rect *crop;
+	unsigned int hratio;
+	unsigned int vratio;
+
+	crop = vsp1_entity_get_pad_selection(entity, entity->config,
+					     HGO_PAD_SINK, V4L2_SEL_TGT_CROP);
+	compose = vsp1_entity_get_pad_selection(entity, entity->config,
+						HGO_PAD_SINK,
+						V4L2_SEL_TGT_COMPOSE);
+
+	vsp1_hgo_write(hgo, dl, VI6_HGO_REGRST, VI6_HGO_REGRST_RCLEA);
+
+	vsp1_hgo_write(hgo, dl, VI6_HGO_OFFSET,
+		       (crop->left << VI6_HGO_OFFSET_HOFFSET_SHIFT) |
+		       (crop->top << VI6_HGO_OFFSET_VOFFSET_SHIFT));
+	vsp1_hgo_write(hgo, dl, VI6_HGO_SIZE,
+		       (crop->width << VI6_HGO_SIZE_HSIZE_SHIFT) |
+		       (crop->height << VI6_HGO_SIZE_VSIZE_SHIFT));
+
+	mutex_lock(hgo->ctrls.handler.lock);
+	hgo->max_rgb = hgo->ctrls.max_rgb->cur.val;
+	if (hgo->ctrls.num_bins)
+		hgo->num_bins = hgo_num_bins[hgo->ctrls.num_bins->cur.val];
+	mutex_unlock(hgo->ctrls.handler.lock);
+
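+	/* With the compose rectangle constrained by hgo_set_compose() the
+	 * crop to compose ratio is exactly 1, 2 or 4, so the expressions
+	 * below evaluate to 0, 1 or 2, i.e. the log2 of the ratio.
+	 */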
+	hratio = crop->width * 2 / compose->width / 3;
+	vratio = crop->height * 2 / compose->height / 3;
+	vsp1_hgo_write(hgo, dl, VI6_HGO_MODE,
+		       (hgo->num_bins == 256 ? VI6_HGO_MODE_STEP : 0) |
+		       (hgo->max_rgb ? VI6_HGO_MODE_MAXRGB : 0) |
+		       (hratio << VI6_HGO_MODE_HRATIO_SHIFT) |
+		       (vratio << VI6_HGO_MODE_VRATIO_SHIFT));
+}
+
+static void hgo_destroy(struct vsp1_entity *entity)
+{
+	struct vsp1_hgo *hgo = to_hgo(&entity->subdev);
+
+	vsp1_histogram_cleanup(&hgo->histo);
+}
+
+static const struct vsp1_entity_operations hgo_entity_ops = {
+	.configure = hgo_configure,
+	.destroy = hgo_destroy,
+};
+
+/* -----------------------------------------------------------------------------
+ * Initialization and Cleanup
+ */
+
+struct vsp1_hgo *vsp1_hgo_create(struct vsp1_device *vsp1)
+{
+	struct vsp1_hgo *hgo;
+	int ret;
+
+	hgo = devm_kzalloc(vsp1->dev, sizeof(*hgo), GFP_KERNEL);
+	if (hgo == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	hgo->entity.ops = &hgo_entity_ops;
+	hgo->entity.type = VSP1_ENTITY_HGO;
+
+	ret = vsp1_entity_init(vsp1, &hgo->entity, "hgo", 2, &hgo_ops);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
+	/* Initialize the control handler. */
+	v4l2_ctrl_handler_init(&hgo->ctrls.handler,
+			       vsp1->info->gen == 3 ? 2 : 1);
+	hgo->ctrls.max_rgb = v4l2_ctrl_new_custom(&hgo->ctrls.handler,
+						  &hgo_max_rgb_control, NULL);
+	if (vsp1->info->gen == 3)
+		hgo->ctrls.num_bins =
+			v4l2_ctrl_new_custom(&hgo->ctrls.handler,
+					     &hgo_num_bins_control, NULL);
+
+	hgo->max_rgb = false;
+	hgo->num_bins = 64;
+
+	hgo->entity.subdev.ctrl_handler = &hgo->ctrls.handler;
+
+	/* Initialize the video device and queue for statistics data. */
+	ret = vsp1_histogram_init(vsp1, &hgo->histo, hgo->entity.subdev.name,
+				  HGO_DATA_SIZE);
+	if (ret < 0) {
+		vsp1_entity_destroy(&hgo->entity);
+		return ERR_PTR(ret);
+	}
+
+	return hgo;
+}
diff --git a/drivers/media/platform/vsp1/vsp1_hgo.h b/drivers/media/platform/vsp1/vsp1_hgo.h
new file mode 100644
index 000000000000..d677b3fe6023
--- /dev/null
+++ b/drivers/media/platform/vsp1/vsp1_hgo.h
@@ -0,0 +1,50 @@
+/*
+ * vsp1_hgo.h  --  R-Car VSP1 Histogram Generator 1D
+ *
+ * Copyright (C) 2016 Renesas Electronics Corporation
+ *
+ * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#ifndef __VSP1_HGO_H__
+#define __VSP1_HGO_H__
+
+#include <media/media-entity.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-subdev.h>
+
+#include "vsp1_entity.h"
+#include "vsp1_histo.h"
+
+struct vsp1_device;
+
+#define HGO_PAD_SINK				0
+#define HGO_PAD_SOURCE				1
+
+struct vsp1_hgo {
+	struct vsp1_entity entity;
+	struct vsp1_histogram histo;
+
+	struct {
+		struct v4l2_ctrl_handler handler;
+		struct v4l2_ctrl *max_rgb;
+		struct v4l2_ctrl *num_bins;
+	} ctrls;
+
+	bool max_rgb;
+	unsigned int num_bins;
+};
+
+static inline struct vsp1_hgo *to_hgo(struct v4l2_subdev *subdev)
+{
+	return container_of(subdev, struct vsp1_hgo, entity.subdev);
+}
+
+struct vsp1_hgo *vsp1_hgo_create(struct vsp1_device *vsp1);
+void vsp1_hgo_frame_end(struct vsp1_entity *hgo);
+
+#endif /* __VSP1_HGO_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_histo.c b/drivers/media/platform/vsp1/vsp1_histo.c
new file mode 100644
index 000000000000..65f30c8e1bdb
--- /dev/null
+++ b/drivers/media/platform/vsp1/vsp1_histo.c
@@ -0,0 +1,307 @@
+/*
+ * vsp1_histo.c  --  R-Car VSP1 Histogram API
+ *
+ * Copyright (C) 2016 Renesas Electronics Corporation
+ * Copyright (C) 2016 Laurent Pinchart
+ *
+ * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/device.h>
+#include <linux/gfp.h>
+
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-subdev.h>
+#include <media/videobuf2-vmalloc.h>
+
+#include "vsp1.h"
+#include "vsp1_histo.h"
+#include "vsp1_pipe.h"
+
+/* -----------------------------------------------------------------------------
+ * Buffer Operations
+ */
+
+static inline struct vsp1_histogram_buffer *
+to_vsp1_histogram_buffer(struct vb2_v4l2_buffer *vbuf)
+{
+	return container_of(vbuf, struct vsp1_histogram_buffer, buf);
+}
+
+struct vsp1_histogram_buffer *
+vsp1_histogram_buffer_get(struct vsp1_histogram *histo)
+{
+	struct vsp1_histogram_buffer *buf = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&histo->irqlock, flags);
+
+	if (list_empty(&histo->irqqueue))
+		goto done;
+
+	buf = list_first_entry(&histo->irqqueue, struct vsp1_histogram_buffer,
+			       queue);
+	list_del(&buf->queue);
+	histo->readout = true;
+
+done:
+	spin_unlock_irqrestore(&histo->irqlock, flags);
+	return buf;
+}
+
+void vsp1_histogram_buffer_complete(struct vsp1_histogram *histo,
+				    struct vsp1_histogram_buffer *buf,
+				    size_t size)
+{
+	struct vsp1_pipeline *pipe = histo->pipe;
+	unsigned long flags;
+
+	/* The pipeline pointer is guaranteed to be valid as this function is
+	 * called from the frame completion interrupt handler, which can only
+	 * occur when video streaming is active.
+	 */
+	buf->buf.sequence = pipe->sequence;
+	buf->buf.vb2_buf.timestamp = ktime_get_ns();
+	vb2_set_plane_payload(&buf->buf.vb2_buf, 0, size);
+	vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);
+
+	spin_lock_irqsave(&histo->irqlock, flags);
+	histo->readout = false;
+	wake_up(&histo->wait_queue);
+	spin_unlock_irqrestore(&histo->irqlock, flags);
+}
+
+/* -----------------------------------------------------------------------------
+ * videobuf2 Queue Operations
+ */
+
+static int histo_queue_setup(struct vb2_queue *vq, unsigned int *nbuffers,
+			     unsigned int *nplanes, unsigned int sizes[],
+			     void *alloc_ctxs[])
+{
+	struct vsp1_histogram *histo = vb2_get_drv_priv(vq);
+
+	if (*nplanes) {
+		if (*nplanes != 1)
+			return -EINVAL;
+
+		if (sizes[0] < histo->data_size)
+			return -EINVAL;
+
+		return 0;
+	}
+
+	*nplanes = 1;
+	sizes[0] = histo->data_size;
+
+	return 0;
+}
+
+static int histo_buffer_prepare(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+	struct vsp1_histogram *histo = vb2_get_drv_priv(vb->vb2_queue);
+	struct vsp1_histogram_buffer *buf = to_vsp1_histogram_buffer(vbuf);
+
+	if (vb->num_planes != 1)
+		return -EINVAL;
+
+	if (vb2_plane_size(vb, 0) < histo->data_size)
+		return -EINVAL;
+
+	buf->addr = vb2_plane_vaddr(vb, 0);
+
+	return 0;
+}
+
+static void histo_buffer_queue(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+	struct vsp1_histogram *histo = vb2_get_drv_priv(vb->vb2_queue);
+	struct vsp1_histogram_buffer *buf = to_vsp1_histogram_buffer(vbuf);
+	unsigned long flags;
+
+	spin_lock_irqsave(&histo->irqlock, flags);
+	list_add_tail(&buf->queue, &histo->irqqueue);
+	spin_unlock_irqrestore(&histo->irqlock, flags);
+}
+
+static int histo_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+	return 0;
+}
+
+static void histo_stop_streaming(struct vb2_queue *vq)
+{
+	struct vsp1_histogram *histo = vb2_get_drv_priv(vq);
+	struct vsp1_histogram_buffer *buffer;
+	unsigned long flags;
+
+	spin_lock_irqsave(&histo->irqlock, flags);
+
+	/* Remove all buffers from the IRQ queue. */
+	list_for_each_entry(buffer, &histo->irqqueue, queue)
+		vb2_buffer_done(&buffer->buf.vb2_buf, VB2_BUF_STATE_ERROR);
+	INIT_LIST_HEAD(&histo->irqqueue);
+
+	/* Wait for the buffer being read out (if any) to complete. */
+	wait_event_lock_irq(histo->wait_queue, !histo->readout, histo->irqlock);
+
+	spin_unlock_irqrestore(&histo->irqlock, flags);
+}
+
+static struct vb2_ops histo_video_queue_qops = {
+	.queue_setup = histo_queue_setup,
+	.buf_prepare = histo_buffer_prepare,
+	.buf_queue = histo_buffer_queue,
+	.wait_prepare = vb2_ops_wait_prepare,
+	.wait_finish = vb2_ops_wait_finish,
+	.start_streaming = histo_start_streaming,
+	.stop_streaming = histo_stop_streaming,
+};
+
+/* -----------------------------------------------------------------------------
+ * V4L2 ioctls
+ */
+
+static int histo_v4l2_querycap(struct file *file, void *fh,
+			       struct v4l2_capability *cap)
+{
+	struct v4l2_fh *vfh = file->private_data;
+	struct vsp1_histogram *histo = to_vsp1_histo(vfh->vdev);
+
+	cap->capabilities = V4L2_CAP_DEVICE_CAPS | V4L2_CAP_STREAMING
+			  | V4L2_CAP_VIDEO_CAPTURE_MPLANE
+			  | V4L2_CAP_VIDEO_OUTPUT_MPLANE
+			  | V4L2_CAP_META_CAPTURE;
+	cap->device_caps = V4L2_CAP_META_CAPTURE
+			 | V4L2_CAP_STREAMING;
+
+	strlcpy(cap->driver, "vsp1", sizeof(cap->driver));
+	strlcpy(cap->card, histo->video.name, sizeof(cap->card));
+	snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s",
+		 dev_name(histo->vsp1->dev));
+
+	return 0;
+}
+
+static int histo_v4l2_get_format(struct file *file, void *fh,
+				 struct v4l2_format *format)
+{
+	struct v4l2_fh *vfh = file->private_data;
+	struct vsp1_histogram *histo = to_vsp1_histo(vfh->vdev);
+	struct v4l2_meta_format *meta = &format->fmt.meta;
+
+	if (format->type != histo->queue.type)
+		return -EINVAL;
+
+	memset(meta, 0, sizeof(*meta));
+
+	meta->dataformat = V4L2_META_FMT_VSP1_HGO;
+	meta->buffersize = histo->data_size;
+
+	return 0;
+}
+
+static const struct v4l2_ioctl_ops histo_v4l2_ioctl_ops = {
+	.vidioc_querycap		= histo_v4l2_querycap,
+	.vidioc_g_fmt_meta_cap		= histo_v4l2_get_format,
+	.vidioc_s_fmt_meta_cap		= histo_v4l2_get_format,
+	.vidioc_try_fmt_meta_cap	= histo_v4l2_get_format,
+	.vidioc_reqbufs			= vb2_ioctl_reqbufs,
+	.vidioc_querybuf		= vb2_ioctl_querybuf,
+	.vidioc_qbuf			= vb2_ioctl_qbuf,
+	.vidioc_dqbuf			= vb2_ioctl_dqbuf,
+	.vidioc_create_bufs		= vb2_ioctl_create_bufs,
+	.vidioc_prepare_buf		= vb2_ioctl_prepare_buf,
+	.vidioc_streamon		= vb2_ioctl_streamon,
+	.vidioc_streamoff		= vb2_ioctl_streamoff,
+};
+
+/* -----------------------------------------------------------------------------
+ * V4L2 File Operations
+ */
+
+static struct v4l2_file_operations histo_v4l2_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = video_ioctl2,
+	.open = v4l2_fh_open,
+	.release = vb2_fop_release,
+	.poll = vb2_fop_poll,
+	.mmap = vb2_fop_mmap,
+};
+
+int vsp1_histogram_init(struct vsp1_device *vsp1, struct vsp1_histogram *histo,
+			const char *name, size_t data_size)
+{
+	int ret;
+
+	histo->vsp1 = vsp1;
+	histo->data_size = data_size;
+
+	histo->pad.flags = MEDIA_PAD_FL_SINK;
+	histo->video.vfl_dir = VFL_DIR_RX;
+
+	mutex_init(&histo->lock);
+	spin_lock_init(&histo->irqlock);
+	INIT_LIST_HEAD(&histo->irqqueue);
+	init_waitqueue_head(&histo->wait_queue);
+
+	/* Initialize the media entity... */
+	ret = media_entity_pads_init(&histo->video.entity, 1, &histo->pad);
+	if (ret < 0)
+		return ret;
+
+	/* ... and the video node... */
+	histo->video.v4l2_dev = &vsp1->v4l2_dev;
+	histo->video.fops = &histo_v4l2_fops;
+	snprintf(histo->video.name, sizeof(histo->video.name),
+		 "%s histo", name);
+	histo->video.vfl_type = VFL_TYPE_META;
+	histo->video.release = video_device_release_empty;
+	histo->video.ioctl_ops = &histo_v4l2_ioctl_ops;
+
+	video_set_drvdata(&histo->video, histo);
+
+	/* ... and the buffers queue... */
+	histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
+	histo->queue.io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+	histo->queue.lock = &histo->lock;
+	histo->queue.drv_priv = histo;
+	histo->queue.buf_struct_size = sizeof(struct vsp1_histogram_buffer);
+	histo->queue.ops = &histo_video_queue_qops;
+	histo->queue.mem_ops = &vb2_vmalloc_memops;
+	histo->queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	ret = vb2_queue_init(&histo->queue);
+	if (ret < 0) {
+		dev_err(histo->vsp1->dev, "failed to initialize vb2 queue\n");
+		goto error;
+	}
+
+	/* ... and register the video device. */
+	histo->video.queue = &histo->queue;
+	ret = video_register_device(&histo->video, VFL_TYPE_META, -1);
+	if (ret < 0) {
+		dev_err(histo->vsp1->dev, "failed to register video device\n");
+		goto error;
+	}
+
+	return 0;
+
+error:
+	vsp1_histogram_cleanup(histo);
+	return ret;
+}
+
+void vsp1_histogram_cleanup(struct vsp1_histogram *histo)
+{
+	if (video_is_registered(&histo->video))
+		video_unregister_device(&histo->video);
+
+	media_entity_cleanup(&histo->video.entity);
+}
diff --git a/drivers/media/platform/vsp1/vsp1_histo.h b/drivers/media/platform/vsp1/vsp1_histo.h
new file mode 100644
index 000000000000..48be31dd5515
--- /dev/null
+++ b/drivers/media/platform/vsp1/vsp1_histo.h
@@ -0,0 +1,68 @@
+/*
+ * vsp1_histo.h  --  R-Car VSP1 Histogram API
+ *
+ * Copyright (C) 2016 Renesas Electronics Corporation
+ * Copyright (C) 2016 Laurent Pinchart
+ *
+ * Contact: Laurent Pinchart (laurent.pinchart@ideasonboard.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#ifndef __VSP1_HISTO_H__
+#define __VSP1_HISTO_H__
+
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
+#include <media/media-entity.h>
+#include <media/v4l2-dev.h>
+#include <media/videobuf2-v4l2.h>
+
+struct vsp1_device;
+struct vsp1_pipeline;
+
+struct vsp1_histogram_buffer {
+	struct vb2_v4l2_buffer buf;
+	struct list_head queue;
+	void *addr;
+};
+
+struct vsp1_histogram {
+	struct vsp1_device *vsp1;
+	struct vsp1_pipeline *pipe;
+
+	struct video_device video;
+	struct media_pad pad;
+
+	size_t data_size;
+
+	struct mutex lock;
+	struct vb2_queue queue;
+
+	spinlock_t irqlock;
+	struct list_head irqqueue;
+
+	wait_queue_head_t wait_queue;
+	bool readout;
+};
+
+static inline struct vsp1_histogram *to_vsp1_histo(struct video_device *vdev)
+{
+	return container_of(vdev, struct vsp1_histogram, video);
+}
+
+int vsp1_histogram_init(struct vsp1_device *vsp1, struct vsp1_histogram *histo,
+			const char *name, size_t data_size);
+void vsp1_histogram_cleanup(struct vsp1_histogram *histo);
+
+struct vsp1_histogram_buffer *
+vsp1_histogram_buffer_get(struct vsp1_histogram *histo);
+void vsp1_histogram_buffer_complete(struct vsp1_histogram *histo,
+				    struct vsp1_histogram_buffer *buf,
+				    size_t size);
+
+#endif /* __VSP1_HISTO_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.c b/drivers/media/platform/vsp1/vsp1_pipe.c
index be47c8a1a812..aadeb7d859fc 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.c
+++ b/drivers/media/platform/vsp1/vsp1_pipe.c
@@ -23,6 +23,7 @@
 #include "vsp1_bru.h"
 #include "vsp1_dl.h"
 #include "vsp1_entity.h"
+#include "vsp1_hgo.h"
 #include "vsp1_pipe.h"
 #include "vsp1_rwpf.h"
 #include "vsp1_uds.h"
@@ -180,11 +181,18 @@ void vsp1_pipeline_reset(struct vsp1_pipeline *pipe)
 	pipe->output->pipe = NULL;
 	pipe->output = NULL;
 
+	if (pipe->hgo) {
+		struct vsp1_hgo *hgo = to_hgo(&pipe->hgo->subdev);
+
+		hgo->histo.pipe = NULL;
+	}
+
 	INIT_LIST_HEAD(&pipe->entities);
 	pipe->state = VSP1_PIPELINE_STOPPED;
 	pipe->buffers_ready = 0;
 	pipe->num_inputs = 0;
 	pipe->bru = NULL;
+	pipe->hgo = NULL;
 	pipe->lif = NULL;
 	pipe->uds = NULL;
 }
@@ -228,6 +236,7 @@ bool vsp1_pipeline_stopped(struct vsp1_pipeline *pipe)
 
 int vsp1_pipeline_stop(struct vsp1_pipeline *pipe)
 {
+	struct vsp1_device *vsp1 = pipe->output->entity.vsp1;
 	struct vsp1_entity *entity;
 	unsigned long flags;
 	int ret;
@@ -236,8 +245,7 @@ int vsp1_pipeline_stop(struct vsp1_pipeline *pipe)
 		/* When using display lists in continuous frame mode the only
 		 * way to stop the pipeline is to reset the hardware.
 		 */
-		ret = vsp1_reset_wpf(pipe->output->entity.vsp1,
-				     pipe->output->entity.index);
+		ret = vsp1_reset_wpf(vsp1, pipe->output->entity.index);
 		if (ret == 0) {
 			spin_lock_irqsave(&pipe->irqlock, flags);
 			pipe->state = VSP1_PIPELINE_STOPPED;
@@ -257,10 +265,15 @@ int vsp1_pipeline_stop(struct vsp1_pipeline *pipe)
 
 	list_for_each_entry(entity, &pipe->entities, list_pipe) {
 		if (entity->route && entity->route->reg)
-			vsp1_write(entity->vsp1, entity->route->reg,
+			vsp1_write(vsp1, entity->route->reg,
 				   VI6_DPR_NODE_UNUSED);
 	}
 
+	if (pipe->hgo)
+		vsp1_write(vsp1, VI6_DPR_HGO_SMPPT,
+			   (7 << VI6_DPR_SMPPT_TGW_SHIFT) |
+			   (VI6_DPR_NODE_UNUSED << VI6_DPR_SMPPT_PT_SHIFT));
+
 	v4l2_subdev_call(&pipe->output->entity.subdev, video, s_stream, 0);
 
 	return ret;
@@ -284,6 +297,9 @@ void vsp1_pipeline_frame_end(struct vsp1_pipeline *pipe)
 
 	vsp1_dlm_irq_frame_end(pipe->output->dlm);
 
+	if (pipe->hgo)
+		vsp1_hgo_frame_end(pipe->hgo);
+
 	if (pipe->frame_end)
 		pipe->frame_end(pipe);
 
@@ -309,7 +325,11 @@ void vsp1_pipeline_propagate_alpha(struct vsp1_pipeline *pipe,
 	struct vsp1_entity *entity;
 	struct media_pad *pad;
 
-	pad = media_entity_remote_pad(&input->pads[RWPF_PAD_SOURCE]);
+	/* The alpha value doesn't need to be propagated to the HGO, use
+	 * vsp1_entity_remote_pad() to traverse the graph.
+	 */
+
+	pad = vsp1_entity_remote_pad(&input->pads[RWPF_PAD_SOURCE]);
 
 	while (pad) {
 		if (!is_media_entity_v4l2_subdev(pad->entity))
@@ -331,7 +351,7 @@ void vsp1_pipeline_propagate_alpha(struct vsp1_pipeline *pipe,
 		}
 
 		pad = &entity->pads[entity->source_pad];
-		pad = media_entity_remote_pad(pad);
+		pad = vsp1_entity_remote_pad(pad);
 	}
 }
 
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.h b/drivers/media/platform/vsp1/vsp1_pipe.h
index febc62f99d6d..3ecd3c1794a9 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.h
+++ b/drivers/media/platform/vsp1/vsp1_pipe.h
@@ -72,6 +72,7 @@ enum vsp1_pipeline_state {
  * @inputs: array of RPFs in the pipeline (indexed by RPF index)
  * @output: WPF at the output of the pipeline
  * @bru: BRU entity, if present
+ * @hgo: HGO entity, if present
  * @lif: LIF entity, if present
  * @uds: UDS entity, if present
  * @uds_input: entity at the input of the UDS, if the UDS is present
@@ -97,6 +98,7 @@ struct vsp1_pipeline {
 	struct vsp1_rwpf *inputs[VSP1_MAX_RPF];
 	struct vsp1_rwpf *output;
 	struct vsp1_entity *bru;
+	struct vsp1_entity *hgo;
 	struct vsp1_entity *lif;
 	struct vsp1_entity *uds;
 	struct vsp1_entity *uds_input;
diff --git a/drivers/media/platform/vsp1/vsp1_regs.h b/drivers/media/platform/vsp1/vsp1_regs.h
index 927b5fb94c48..596533435128 100644
--- a/drivers/media/platform/vsp1/vsp1_regs.h
+++ b/drivers/media/platform/vsp1/vsp1_regs.h
@@ -321,8 +321,8 @@
 #define VI6_DPR_ROUTE_RT_MASK		(0x3f << 0)
 #define VI6_DPR_ROUTE_RT_SHIFT		0
 
-#define VI6_DPR_HGO_SMPPT		0x2050
-#define VI6_DPR_HGT_SMPPT		0x2054
+#define VI6_DPR_HGO_SMPPT		0x2054
+#define VI6_DPR_HGT_SMPPT		0x2058
 #define VI6_DPR_SMPPT_TGW_MASK		(7 << 8)
 #define VI6_DPR_SMPPT_TGW_SHIFT		8
 #define VI6_DPR_SMPPT_PT_MASK		(0x3f << 0)
@@ -574,24 +574,38 @@
  */
 
 #define VI6_HGO_OFFSET			0x3000
+#define VI6_HGO_OFFSET_HOFFSET_SHIFT	16
+#define VI6_HGO_OFFSET_VOFFSET_SHIFT	0
 #define VI6_HGO_SIZE			0x3004
+#define VI6_HGO_SIZE_HSIZE_SHIFT	16
+#define VI6_HGO_SIZE_VSIZE_SHIFT	0
 #define VI6_HGO_MODE			0x3008
+#define VI6_HGO_MODE_STEP		(1 << 10)
+#define VI6_HGO_MODE_MAXRGB		(1 << 7)
+#define VI6_HGO_MODE_OFSB_R		(1 << 6)
+#define VI6_HGO_MODE_OFSB_G		(1 << 5)
+#define VI6_HGO_MODE_OFSB_B		(1 << 4)
+#define VI6_HGO_MODE_HRATIO_SHIFT	2
+#define VI6_HGO_MODE_VRATIO_SHIFT	0
 #define VI6_HGO_LB_TH			0x300c
 #define VI6_HGO_LBn_H(n)		(0x3010 + (n) * 8)
 #define VI6_HGO_LBn_V(n)		(0x3014 + (n) * 8)
-#define VI6_HGO_R_HISTO			0x3030
+#define VI6_HGO_R_HISTO(n)		(0x3030 + (n) * 4)
 #define VI6_HGO_R_MAXMIN		0x3130
 #define VI6_HGO_R_SUM			0x3134
 #define VI6_HGO_R_LB_DET		0x3138
-#define VI6_HGO_G_HISTO			0x3140
+#define VI6_HGO_G_HISTO(n)		(0x3140 + (n) * 4)
 #define VI6_HGO_G_MAXMIN		0x3240
 #define VI6_HGO_G_SUM			0x3244
 #define VI6_HGO_G_LB_DET		0x3248
-#define VI6_HGO_B_HISTO			0x3250
+#define VI6_HGO_B_HISTO(n)		(0x3250 + (n) * 4)
 #define VI6_HGO_B_MAXMIN		0x3350
 #define VI6_HGO_B_SUM			0x3354
 #define VI6_HGO_B_LB_DET		0x3358
+#define VI6_HGO_EXT_HIST_ADDR		0x335c
+#define VI6_HGO_EXT_HIST_DATA		0x3360
 #define VI6_HGO_REGRST			0x33fc
+#define VI6_HGO_REGRST_RCLEA		(1 << 0)
 
 /* -----------------------------------------------------------------------------
  * HGT Control Registers
diff --git a/drivers/media/platform/vsp1/vsp1_video.c b/drivers/media/platform/vsp1/vsp1_video.c
index 34aa6427662d..bcf47e7581b5 100644
--- a/drivers/media/platform/vsp1/vsp1_video.c
+++ b/drivers/media/platform/vsp1/vsp1_video.c
@@ -31,6 +31,7 @@
 #include "vsp1_bru.h"
 #include "vsp1_dl.h"
 #include "vsp1_entity.h"
+#include "vsp1_hgo.h"
 #include "vsp1_pipe.h"
 #include "vsp1_rwpf.h"
 #include "vsp1_uds.h"
@@ -319,7 +320,11 @@ static int vsp1_video_pipeline_build_branch(struct vsp1_pipeline *pipe,
 	if (ret < 0)
 		return ret;
 
-	pad = media_entity_remote_pad(&input->entity.pads[RWPF_PAD_SOURCE]);
+	/* The main data path doesn't include the HGO, use
+	 * vsp1_entity_remote_pad() to traverse the graph.
+	 */
+
+	pad = vsp1_entity_remote_pad(&input->entity.pads[RWPF_PAD_SOURCE]);
 
 	while (1) {
 		if (pad == NULL) {
@@ -371,13 +376,9 @@ static int vsp1_video_pipeline_build_branch(struct vsp1_pipeline *pipe,
 					: &input->entity;
 		}
 
-		/* Follow the source link. The link setup operations ensure
-		 * that the output fan-out can't be more than one, there is thus
-		 * no need to verify here that only a single source link is
-		 * activated.
-		 */
+		/* Follow the source link, ignoring any HGO. */
 		pad = &entity->pads[entity->source_pad];
-		pad = media_entity_remote_pad(pad);
+		pad = vsp1_entity_remote_pad(pad);
 	}
 
 	/* The last entity must be the output WPF. */
@@ -432,6 +433,11 @@ static int vsp1_video_pipeline_build(struct vsp1_pipeline *pipe,
 			pipe->lif = e;
 		} else if (e->type == VSP1_ENTITY_BRU) {
 			pipe->bru = e;
+		} else if (e->type == VSP1_ENTITY_HGO) {
+			struct vsp1_hgo *hgo = to_hgo(subdev);
+
+			pipe->hgo = e;
+			hgo->histo.pipe = pipe;
 		}
 	}
 
@@ -629,7 +635,7 @@ static int vsp1_video_setup_pipeline(struct vsp1_pipeline *pipe)
 	}
 
 	list_for_each_entry(entity, &pipe->entities, list_pipe) {
-		vsp1_entity_route_setup(entity, pipe->dl);
+		vsp1_entity_route_setup(entity, pipe, pipe->dl);
 
 		if (entity->ops->configure)
 			entity->ops->configure(entity, pipe, pipe->dl);
-- 
2.7.3



* Re: [PATCH/RFC v2 2/4] v4l: Add metadata video device type
  2016-05-12  0:18 ` [PATCH/RFC v2 2/4] v4l: Add metadata video device type Laurent Pinchart
@ 2016-05-12 21:22   ` Sakari Ailus
  0 siblings, 0 replies; 21+ messages in thread
From: Sakari Ailus @ 2016-05-12 21:22 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-media, linux-renesas-soc, Hans Verkuil, Mauro Carvalho Chehab

Hi Laurent,

Thanks for the patchset!

On Thu, May 12, 2016 at 03:18:01AM +0300, Laurent Pinchart wrote:
> The metadata video device is used to transfer metadata between
> userspace and kernelspace. It supports the metadata buffer type only.

I remember we briefly discussed this before but I have to bring this up
again: do you think we need a different device node type? The purpose a DMA
engine is used for depends purely on the use case. The receiver
hardware doesn't even really know whether the data is image data or
metadata; the hardware may well simply write all of it to memory and that's
it.

Should we use a different device node type for metadata, every driver which
uses sensors connected over the CSI-2 bus --- that hardware is designed to
receive metadata using a separate data type which is part of the same stream
--- would be required to create one node for image data and another for
metadata, essentially doubling the number of data device nodes.

Doubling doesn't necessarily sound bad, but modern devices tend to have tens
of DMA engines already.

For what it's worth, multi-plane buffers use video device nodes as well, so
this wouldn't be the first time a single device node type makes use of
multiple buffer types.
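
As an aside, nothing at the ioctl ops level prevents this: each buffer type
has its own format callbacks, so a single node could hypothetically expose
both. A rough sketch (the xxx_* handlers are made up; the meta_cap callbacks
are the ones patch 1/4 introduces):

	static const struct v4l2_ioctl_ops xxx_ioctl_ops = {
		.vidioc_querycap	 = xxx_querycap,
		/* Image data path. */
		.vidioc_g_fmt_vid_cap	 = xxx_g_fmt_vid_cap,
		.vidioc_s_fmt_vid_cap	 = xxx_s_fmt_vid_cap,
		/* Metadata path, using the callbacks from patch 1/4. */
		.vidioc_g_fmt_meta_cap	 = xxx_g_fmt_meta_cap,
		.vidioc_s_fmt_meta_cap	 = xxx_s_fmt_meta_cap,
		.vidioc_reqbufs		 = vb2_ioctl_reqbufs,
		.vidioc_qbuf		 = vb2_ioctl_qbuf,
		.vidioc_dqbuf		 = vb2_ioctl_dqbuf,
		.vidioc_streamon	 = vb2_ioctl_streamon,
		.vidioc_streamoff	 = vb2_ioctl_streamoff,
	};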

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk


* Re: [PATCH/RFC v2 0/4] Meta-data video device type
  2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
                   ` (3 preceding siblings ...)
  2016-05-12  0:18 ` [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support Laurent Pinchart
@ 2016-05-13  9:26 ` Hans Verkuil
  2016-05-13  9:52   ` Sakari Ailus
  2016-05-16  9:21   ` Laurent Pinchart
  4 siblings, 2 replies; 21+ messages in thread
From: Hans Verkuil @ 2016-05-13  9:26 UTC (permalink / raw)
  To: Laurent Pinchart, linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> Hello,
> 
> This RFC patch series is a second attempt at adding support for passing
> statistics data to userspace using a standard API.
> 
> The core requirements haven't changed. Statistics data capture requires
> zero-copy and decoupling statistics buffers from images buffers, in order to
> make statistics data available to userspace as soon as they're captured. For
> those reasons the early consensus we have reached is to use a video device
> node with a buffer queue to pass statistics buffers using the V4L2 API, and
> this new RFC version doesn't challenge that.
> 
> The major change compared to the previous version is how the first patch has
> been split in two. Patch 1/4 now adds a new metadata buffer type and format
> (including their support in videobuf2-v4l2), usable with regular V4L2 video
> device nodes, while patch 2/4 adds the new metadata video device type.
> Metadata buffer queues are thus usable on both the regular V4L2 device nodes
> and the new metadata device nodes.
> 
> This change was driven by the fact that an important category of use cases
> doesn't differentiate between metadata and image data in hardware at the DMA
> engine level. With such hardware (CSI-2 receivers in particular, but other bus
> types could also fall into this category) a stream containing both metadata
> and image data virtual streams is transmitted over a single physical link. The
> receiver demultiplexes, filters and routes the virtual streams to further
> hardware blocks, and in many cases, directly to DMA engines that are part of
> the receiver. Those DMA engines can capture a single virtual stream to memory,
> with as many DMA engines physically present in the device as the number of
> virtual streams that can be captured concurrently. All those DMA engines are
> usually identical and don't care about the type of data they receive and
> capture. For that reason limiting the metadata buffer type to metadata device
> nodes would require creating two device nodes for each DMA engine (and
> possibly more later if we need to capture other types of data). Not only would
> this make the API more complex to use for applications, it wouldn't bring any
> added value as the video and metadata device nodes associated with a DMA
> engine couldn't be used concurrently anyway, as they both correspond to the
> same hardware resource.
> 
> For this reason the ability to capture metadata on a video device node is
> useful and desired, and is implemented patch 1/4 using a dedicated video
> buffers queue. In the CSI-2 case a driver will create two buffer queues
> internally for the same DMA engine, and can select which one to use based on
> the buffer type passed for instance to the REQBUFS ioctl (details still need
> to be discussed here).

Not quite. It still has only one vb2_queue; you just change the type depending
on what mode it is in (video or metadata). Similar to raw vs sliced VBI.

In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue type
depending on whether raw or sliced VBI is requested. That's probably where I
would do this for video vs meta as well.
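
To make that concrete, a minimal sketch of such a handler could look as
follows (assuming the meta buffer type and format from patch 1/4; everything
prefixed xxx_ is made up):

	/* Sketch only, not from this series: a common S_FMT handler that
	 * switches the vb2 queue type between video and metadata capture,
	 * mirroring what VBI drivers do for raw vs sliced.
	 */
	static int xxx_s_fmt(struct file *file, void *fh,
			     struct v4l2_format *f)
	{
		struct xxx_stream *stream = video_drvdata(file);

		if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
		    f->type != V4L2_BUF_TYPE_META_CAPTURE)
			return -EINVAL;

		/* The queue type can only be changed while no buffers are
		 * allocated.
		 */
		if (vb2_is_busy(&stream->queue))
			return -EBUSY;

		stream->queue.type = f->type;

		if (f->type == V4L2_BUF_TYPE_META_CAPTURE)
			return xxx_set_meta_format(stream, &f->fmt.meta);

		return xxx_set_video_format(stream, &f->fmt.pix);
	}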

There is one big thing missing here: how does userspace know in this case whether
it will get metadata or video? Who decides which CSI virtual stream is routed
to which video node?

> A device that contains DMA engines dedicated to
> metadata would create a single buffer queue and implement metadata capture
> only.
> 
> Patch 2/4 then adds a dedicated metadata device node type that is limited to
> metadata capture. Support for metadata on video device nodes isn't removed
> though, both device node types support metadata capture. I have included this
> patch as the code existed in the previous version of the series (and was
> explicitly requested during review of an earlier version), but I don't really
> see what value this would bring compared to just using video device nodes.

I'm inclined to agree with you.

> As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
> statistics format as an example, and the new patch 4/4 adds support for the
> histogram engine to the VSP1 driver. The implementation uses a metadata device
> node, and switching to a video device node wouldn't require more than applying
> the following one-liner patch.
> 
> -       histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
> +       histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

You probably mean replacing this:

	histo->video.vfl_type = VFL_TYPE_META;

by this:

	histo->video.vfl_type = VFL_TYPE_GRABBER;

Regards,

	Hans

> 
> Beside whether patch 2/4 should be included or not (I would prefer dropping
> it) and how to select the active queue on a multi-type video device node
> (through the REQBUFS ioctl or through a diffent mean), one point that remains
> to be discussed is what information to include in the metadata format. Patch
> 1/1 defines the new metadata format as
> 
> struct v4l2_meta_format {
> 	__u32				dataformat;
> 	__u32				buffersize;
> 	__u8				reserved[24];
> } __attribute__ ((packed));
> 
> but at least in the CSI-2 case metadata is, as image data, transmitted in
> lines and the receiver needs to be programmed with the line length and the
> number of lines for proper operation. We started discussing this on IRC but
> haven't reached a conclusion yet.
> 
> Laurent Pinchart (4):
>   v4l: Add metadata buffer type and format
>   v4l: Add metadata video device type
>   v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine
>   v4l: vsp1: Add HGO support
> 
>  Documentation/DocBook/media/v4l/dev-meta.xml       |  97 ++++
>  .../DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml     | 307 +++++++++++++
>  Documentation/DocBook/media/v4l/pixfmt.xml         |   9 +
>  Documentation/DocBook/media/v4l/v4l2.xml           |   1 +
>  drivers/media/platform/Kconfig                     |   1 +
>  drivers/media/platform/vsp1/Makefile               |   2 +
>  drivers/media/platform/vsp1/vsp1.h                 |   3 +
>  drivers/media/platform/vsp1/vsp1_drm.c             |   2 +-
>  drivers/media/platform/vsp1/vsp1_drv.c             |  37 +-
>  drivers/media/platform/vsp1/vsp1_entity.c          | 131 +++++-
>  drivers/media/platform/vsp1/vsp1_entity.h          |   7 +-
>  drivers/media/platform/vsp1/vsp1_hgo.c             | 496 +++++++++++++++++++++
>  drivers/media/platform/vsp1/vsp1_hgo.h             |  50 +++
>  drivers/media/platform/vsp1/vsp1_histo.c           | 307 +++++++++++++
>  drivers/media/platform/vsp1/vsp1_histo.h           |  68 +++
>  drivers/media/platform/vsp1/vsp1_pipe.c            |  30 +-
>  drivers/media/platform/vsp1/vsp1_pipe.h            |   2 +
>  drivers/media/platform/vsp1/vsp1_regs.h            |  24 +-
>  drivers/media/platform/vsp1/vsp1_video.c           |  22 +-
>  drivers/media/v4l2-core/v4l2-compat-ioctl32.c      |  19 +
>  drivers/media/v4l2-core/v4l2-dev.c                 |  37 +-
>  drivers/media/v4l2-core/v4l2-ioctl.c               |  40 ++
>  drivers/media/v4l2-core/videobuf2-v4l2.c           |   3 +
>  include/media/v4l2-dev.h                           |   3 +-
>  include/media/v4l2-ioctl.h                         |   8 +
>  include/uapi/linux/media.h                         |   2 +
>  include/uapi/linux/videodev2.h                     |  17 +
>  27 files changed, 1678 insertions(+), 47 deletions(-)
>  create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
>  create mode 100644 Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml
>  create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.c
>  create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.h
>  create mode 100644 drivers/media/platform/vsp1/vsp1_histo.c
>  create mode 100644 drivers/media/platform/vsp1/vsp1_histo.h
> 


* Re: [PATCH/RFC v2 0/4] Meta-data video device type
  2016-05-13  9:26 ` [PATCH/RFC v2 0/4] Meta-data video device type Hans Verkuil
@ 2016-05-13  9:52   ` Sakari Ailus
  2016-05-16  9:20     ` Laurent Pinchart
  2016-05-23 13:00     ` Guennadi Liakhovetski
  2016-05-16  9:21   ` Laurent Pinchart
  1 sibling, 2 replies; 21+ messages in thread
From: Sakari Ailus @ 2016-05-13  9:52 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Hans Verkuil,
	Mauro Carvalho Chehab

Hi Hans and Laurent,

On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > Hello,
> > 
> > This RFC patch series is a second attempt at adding support for passing
> > statistics data to userspace using a standard API.
> > 
> > The core requirements haven't changed. Statistics data capture requires
> > zero-copy and decoupling statistics buffers from images buffers, in order to
> > make statistics data available to userspace as soon as they're captured. For
> > those reasons the early consensus we have reached is to use a video device
> > node with a buffer queue to pass statistics buffers using the V4L2 API, and
> > this new RFC version doesn't challenge that.
> > 
> > The major change compared to the previous version is how the first patch has
> > been split in two. Patch 1/4 now adds a new metadata buffer type and format
> > (including their support in videobuf2-v4l2), usable with regular V4L2 video
> > device nodes, while patch 2/4 adds the new metadata video device type.
> > Metadata buffer queues are thus usable on both the regular V4L2 device nodes
> > and the new metadata device nodes.
> > 
> > This change was driven by the fact that an important category of use cases
> > doesn't differentiate between metadata and image data in hardware at the DMA
> > engine level. With such hardware (CSI-2 receivers in particular, but other bus
> > types could also fall into this category) a stream containing both metadata
> > and image data virtual streams is transmitted over a single physical link. The
> > receiver demultiplexes, filters and routes the virtual streams to further
> > hardware blocks, and in many cases, directly to DMA engines that are part of
> > the receiver. Those DMA engines can capture a single virtual stream to memory,
> > with as many DMA engines physically present in the device as the number of
> > virtual streams that can be captured concurrently. All those DMA engines are
> > usually identical and don't care about the type of data they receive and
> > capture. For that reason limiting the metadata buffer type to metadata device
> > nodes would require creating two device nodes for each DMA engine (and
> > possibly more later if we need to capture other types of data). Not only would
> > this make the API more complex to use for applications, it wouldn't bring any
> > added value as the video and metadata device nodes associated with a DMA
> > engine couldn't be used concurrently anyway, as they both correspond to the
> > same hardware resource.
> > 
> > For this reason the ability to capture metadata on a video device node is
> > useful and desired, and is implemented patch 1/4 using a dedicated video
> > buffers queue. In the CSI-2 case a driver will create two buffer queues
> > internally for the same DMA engine, and can select which one to use based on
> > the buffer type passed for instance to the REQBUFS ioctl (details still need
> > to be discussed here).
> 
> Not quite. It still has only one vb2_queue, you just change the type depending
> on what mode it is in (video or meta data). Similar to raw vs sliced VBI.
> 
> In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue type
> depending on whether raw or sliced VBI is requested. That's probably where I
> would do this for video vs meta as well.
> 
> There is one big thing missing here: how does userspace know in this case whether
> it will get metadata or video? Who decides which CSI virtual stream is routed

My first impression would be to say by formats, so that's actually defined
by the user. The media bus formats do not have such separation between image
and metadata formats either.

VIDIOC_ENUM_FMT should be amended with media bus code as well so that the
user can figure out which format corresponds to a given media bus code.
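
Something like the following, purely hypothetical since struct v4l2_fmtdesc
would have to be extended, with the new field carved out of the reserved
words:

	/* Hypothetical extension, not in any kernel: report the media bus
	 * code that a pixel format corresponds to when enumerating formats.
	 */
	struct v4l2_fmtdesc {
		__u32	index;
		__u32	type;		/* enum v4l2_buf_type */
		__u32	flags;
		__u8	description[32];
		__u32	pixelformat;	/* fourcc */
		__u32	mbus_code;	/* new: matching media bus code */
		__u32	reserved[3];
	};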

> to which video node?

I think that should be considered a separate problem, albeit one that will
require a solution as well. And it's a much bigger problem than this one.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk


* Re: [PATCH/RFC v2 0/4] Meta-data video device type
  2016-05-13  9:52   ` Sakari Ailus
@ 2016-05-16  9:20     ` Laurent Pinchart
  2016-05-23 13:00     ` Guennadi Liakhovetski
  1 sibling, 0 replies; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-16  9:20 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Hans Verkuil, Laurent Pinchart, linux-media, linux-renesas-soc,
	Hans Verkuil, Mauro Carvalho Chehab

Hello,

On Friday 13 May 2016 12:52:42 Sakari Ailus wrote:
> On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from images buffers, in
> > > order to make statistics data available to userspace as soon as they're
> > > captured. For those reasons the early consensus we have reached is to
> > > use a video device node with a buffer queue to pass statistics buffers
> > > using the V4L2 API, and this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch
> > > has been split in two. Patch 1/4 now adds a new metadata buffer type
> > > and format (including their support in videobuf2-v4l2), usable with
> > > regular V4L2 video device nodes, while patch 2/4 adds the new metadata
> > > video device type. Metadata buffer queues are thus usable on both the
> > > regular V4L2 device nodes and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use
> > > cases doesn't differentiate between metadata and image data in hardware
> > > at the DMA engine level. With such hardware (CSI-2 receivers in
> > > particular, but other bus types could also fall into this category) a
> > > stream containing both metadata and image data virtual streams is
> > > transmitted over a single physical link. The receiver demultiplexes,
> > > filters and routes the virtual streams to further hardware blocks, and
> > > in many cases, directly to DMA engines that are part of the receiver.
> > > Those DMA engines can capture a single virtual stream to memory, with as
> > > many DMA engines physically present in the device as the number of
> > > virtual streams that can be captured concurrently. All those DMA engines
> > > are usually identical and don't care about the type of data they receive
> > > and capture. For that reason limiting the metadata buffer type to
> > > metadata device nodes would require creating two device nodes for each
> > > DMA engine (and possibly more later if we need to capture other types
> > > of data). Not only would this make the API more complex to use for
> > > applications, it wouldn't bring any added value as the video and
> > > metadata device nodes associated with a DMA engine couldn't be used
> > > concurrently anyway, as they both correspond to the same hardware
> > > resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node
> > > is useful and desired, and is implemented patch 1/4 using a dedicated
> > > video buffers queue. In the CSI-2 case a driver will create two buffer
> > > queues internally for the same DMA engine, and can select which one to
> > > use based on the buffer type passed for instance to the REQBUFS ioctl
> > > (details still need to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue, you just change the type
> > depending on what mode it is in (video or meta data). Similar to raw vs
> > sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> > type depending on whether raw or sliced VBI is requested. That's probably
> > where I would do this for video vs meta as well.
> > 
> > There is one big thing missing here: how does userspace know in this case
> > whether it will get metadata or video? Who decides which CSI virtual
> > stream is routed
> 
> My first impression would be to say by formats, so that's actually defined
> by the user. The media bus formats do not have such separation between image
> and metadata formats either.

It should be configured by userspace, as ultimately it's userspace that knows
which virtual channels it wants to capture. This is somewhat analogous to the
input selection API: a capture driver that supports multiple inputs can't
decide on its own which input(s) to capture.

We might be able to make an educated guess in the kernel if we have access to
information about what the source produces, but that would just be a default
configuration that userspace would override when needed (and I'm not even sure
that would be a good idea; I'm brainstorming here).

In order to configure virtual channel filtering/selection, userspace would
need to know what the source produces. This knowledge could be hardcoded in
applications that deal with known hardware, but in the general case we
probably need a new API to expose that information to userspace.

The next question is how to configure filtering and routing. We also need a
new API here, and it could make sense to discuss it along with the subdev
routing API I proposed a while back. I don't know whether the two problems
can be solved by the same API, but I believe it's worth a try.
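
Whatever shape the routing API ends up taking, the capture-side selection is
already expressible with the format ioctls from patch 1/4. A rough userspace
sketch (error handling omitted, and assuming the fourcc presumably defined by
patch 3/4):

	struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_META_CAPTURE };
	struct v4l2_requestbuffers reqbufs = {
		.count	= 4,
		.type	= V4L2_BUF_TYPE_META_CAPTURE,
		.memory	= V4L2_MEMORY_MMAP,
	};

	fmt.fmt.meta.dataformat = V4L2_META_FMT_VSP1_HGO;
	ioctl(fd, VIDIOC_S_FMT, &fmt);		/* switches the node to metadata */
	ioctl(fd, VIDIOC_REQBUFS, &reqbufs);	/* buffers sized from fmt.fmt.meta.buffersize */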

> VIDIOC_ENUM_FMT should be amended with media bus code as well so that the
> user can figure out which format corresponds to a given media bus code.
> 
> > to which video node?
> 
> I think that should be considered as a separate problem, although it will
> require a solution as well. And it's a much bigger problem than this one.

I agree. We need a solution, but that's separate from how to capture metadata 
on the video node side.

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 0/4] Meta-data video device type
  2016-05-13  9:26 ` [PATCH/RFC v2 0/4] Meta-data video device type Hans Verkuil
  2016-05-13  9:52   ` Sakari Ailus
@ 2016-05-16  9:21   ` Laurent Pinchart
  2016-05-17  9:26     ` Sakari Ailus
  1 sibling, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2016-05-16  9:21 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Sakari Ailus,
	Hans Verkuil, Mauro Carvalho Chehab

Hi Hans,

On Friday 13 May 2016 11:26:22 Hans Verkuil wrote:
> On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > Hello,
> > 
> > This RFC patch series is a second attempt at adding support for passing
> > statistics data to userspace using a standard API.
> > 
> > The core requirements haven't changed. Statistics data capture requires
> > zero-copy and decoupling statistics buffers from images buffers, in order
> > to make statistics data available to userspace as soon as they're
> > captured. For those reasons the early consensus we have reached is to use
> > a video device node with a buffer queue to pass statistics buffers using
> > the V4L2 API, and this new RFC version doesn't challenge that.
> > 
> > The major change compared to the previous version is how the first patch
> > has been split in two. Patch 1/4 now adds a new metadata buffer type and
> > format (including their support in videobuf2-v4l2), usable with regular
> > V4L2 video device nodes, while patch 2/4 adds the new metadata video
> > device type. Metadata buffer queues are thus usable on both the regular
> > V4L2 device nodes and the new metadata device nodes.
> > 
> > This change was driven by the fact that an important category of use cases
> > doesn't differentiate between metadata and image data in hardware at the
> > DMA engine level. With such hardware (CSI-2 receivers in particular, but
> > other bus types could also fall into this category) a stream containing
> > both metadata and image data virtual streams is transmitted over a single
> > physical link. The receiver demultiplexes, filters and routes the virtual
> > streams to further hardware blocks, and in many cases, directly to DMA
> > engines that are part of the receiver. Those DMA engines can capture a
> > single virtual stream to memory, with as many DMA engines physically
> > present in the device as the number of virtual streams that can be
> > captured concurrently. All those DMA engines are usually identical and
> > don't care about the type of data they receive and capture. For that
> > reason limiting the metadata buffer type to metadata device nodes would
> > require creating two device nodes for each DMA engine (and possibly more
> > later if we need to capture other types of data). Not only would this
> > make the API more complex to use for applications, it wouldn't bring any
> > added value as the video and metadata device nodes associated with a DMA
> > engine couldn't be used concurrently anyway, as they both correspond to
> > the same hardware resource.
> > 
> > For this reason the ability to capture metadata on a video device node is
> > useful and desired, and is implemented patch 1/4 using a dedicated video
> > buffers queue. In the CSI-2 case a driver will create two buffer queues
> > internally for the same DMA engine, and can select which one to use based
> > on the buffer type passed for instance to the REQBUFS ioctl (details
> > still need to be discussed here).
> 
> Not quite. It still has only one vb2_queue, you just change the type
> depending on what mode it is in (video or meta data). Similar to raw vs
> sliced VBI.
> 
> In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> type depending on whether raw or sliced VBI is requested. That's probably
> where I would do this for video vs meta as well.

That sounds good to me. I didn't know we had support for changing the type of 
a vb2 queue at runtime; that's good news :-)
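
I suppose the driver side would then look roughly like this (a minimal sketch
with made-up names; in practice the logic would be shared by the
vidioc_s_fmt_vid_cap and vidioc_s_fmt_meta_cap handlers):

	static int xxx_s_fmt(struct file *file, void *fh, struct v4l2_format *f)
	{
		struct xxx_device *xxx = video_drvdata(file);

		/* The queue type may only change while no buffers are allocated. */
		if (vb2_is_busy(&xxx->queue))
			return -EBUSY;

		xxx->queue.type = f->type;

		if (f->type == V4L2_BUF_TYPE_META_CAPTURE)
			return xxx_set_meta_format(xxx, &f->fmt.meta);

		return xxx_set_video_format(xxx, &f->fmt.pix);
	}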

> There is one big thing missing here: how does userspace know in this case
> whether it will get metadata or video? Who decides which CSI virtual stream
> is routed to which video node?

I've replied to Sakari's e-mail about this.

> > A device that contains DMA engines dedicated to
> > metadata would create a single buffer queue and implement metadata capture
> > only.
> > 
> > Patch 2/4 then adds a dedicated metadata device node type that is limited
> > to metadata capture. Support for metadata on video device nodes isn't
> > removed though, both device node types support metadata capture. I have
> > included this patch as the code existed in the previous version of the
> > series (and was explicitly requested during review of an earlier
> > version), but I don't really see what value this would bring compared to
> > just using video device nodes.
> 
> I'm inclined to agree with you.

Great :-) Should I just drop patch 2/4 then? Sakari, Mauro, would that be 
fine with you?

> > As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
> > statistics format as an example, and the new patch 4/4 adds support for
> > the histogram engine to the VSP1 driver. The implementation uses a
> > metadata device node, and switching to a video device node wouldn't
> > require more than applying the following one-liner patch.
> > 
> > -       histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
> > +       histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
> 
> You probably mean replacing this:
> 
> 	histo->video.vfl_type = VFL_TYPE_META;
> 
> by this:
> 
> 	histo->video.vfl_type = VFL_TYPE_GRABBER;

Yes, of course, my bad.

> > Besides whether patch 2/4 should be included or not (I would prefer
> > dropping it) and how to select the active queue on a multi-type video
> > device node (through the REQBUFS ioctl or through a different means), one
> > point that remains to be discussed is what information to include in the
> > metadata format. Patch 1/4 defines the new metadata format as
> > 
> > struct v4l2_meta_format {
> > 	__u32				dataformat;
> > 	__u32				buffersize;
> > 	__u8				reserved[24];
> > } __attribute__ ((packed));
> > 
> > but at least in the CSI-2 case metadata is, like image data, transmitted in
> > lines and the receiver needs to be programmed with the line length and the
> > number of lines for proper operation. We started discussing this on IRC
> > but haven't reached a conclusion yet.

This is the last problem that needs to be solved. We can possibly postpone it 
as I don't need width/height for now.

> > Laurent Pinchart (4):
> >   v4l: Add metadata buffer type and format
> >   v4l: Add metadata video device type
> >   v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine
> >   v4l: vsp1: Add HGO support
> >  
> >  Documentation/DocBook/media/v4l/dev-meta.xml       |  97 ++++
> >  .../DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml     | 307 +++++++++++++
> >  Documentation/DocBook/media/v4l/pixfmt.xml         |   9 +
> >  Documentation/DocBook/media/v4l/v4l2.xml           |   1 +
> >  drivers/media/platform/Kconfig                     |   1 +
> >  drivers/media/platform/vsp1/Makefile               |   2 +
> >  drivers/media/platform/vsp1/vsp1.h                 |   3 +
> >  drivers/media/platform/vsp1/vsp1_drm.c             |   2 +-
> >  drivers/media/platform/vsp1/vsp1_drv.c             |  37 +-
> >  drivers/media/platform/vsp1/vsp1_entity.c          | 131 +++++-
> >  drivers/media/platform/vsp1/vsp1_entity.h          |   7 +-
> >  drivers/media/platform/vsp1/vsp1_hgo.c             | 496 ++++++++++++++++
> >  drivers/media/platform/vsp1/vsp1_hgo.h             |  50 +++
> >  drivers/media/platform/vsp1/vsp1_histo.c           | 307 +++++++++++++
> >  drivers/media/platform/vsp1/vsp1_histo.h           |  68 +++
> >  drivers/media/platform/vsp1/vsp1_pipe.c            |  30 +-
> >  drivers/media/platform/vsp1/vsp1_pipe.h            |   2 +
> >  drivers/media/platform/vsp1/vsp1_regs.h            |  24 +-
> >  drivers/media/platform/vsp1/vsp1_video.c           |  22 +-
> >  drivers/media/v4l2-core/v4l2-compat-ioctl32.c      |  19 +
> >  drivers/media/v4l2-core/v4l2-dev.c                 |  37 +-
> >  drivers/media/v4l2-core/v4l2-ioctl.c               |  40 ++
> >  drivers/media/v4l2-core/videobuf2-v4l2.c           |   3 +
> >  include/media/v4l2-dev.h                           |   3 +-
> >  include/media/v4l2-ioctl.h                         |   8 +
> >  include/uapi/linux/media.h                         |   2 +
> >  include/uapi/linux/videodev2.h                     |  17 +
> >  27 files changed, 1678 insertions(+), 47 deletions(-)
> >  create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
> >  create mode 100644
> >  Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml
> >  create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.c
> >  create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.h
> >  create mode 100644 drivers/media/platform/vsp1/vsp1_histo.c
> >  create mode 100644 drivers/media/platform/vsp1/vsp1_histo.h

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 0/4] Meta-data video device type
  2016-05-16  9:21   ` Laurent Pinchart
@ 2016-05-17  9:26     ` Sakari Ailus
  0 siblings, 0 replies; 21+ messages in thread
From: Sakari Ailus @ 2016-05-17  9:26 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Laurent Pinchart, linux-media, linux-renesas-soc,
	Hans Verkuil, Mauro Carvalho Chehab

Hi Laurent,

On Mon, May 16, 2016 at 12:21:17PM +0300, Laurent Pinchart wrote:
> Hi Hans,
> 
> On Friday 13 May 2016 11:26:22 Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from images buffers, in order
> > > to make statistics data available to userspace as soon as they're
> > > captured. For those reasons the early consensus we have reached is to use
> > > a video device node with a buffer queue to pass statistics buffers using
> > > the V4L2 API, and this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch
> > > has been split in two. Patch 1/4 now adds a new metadata buffer type and
> > > format (including their support in videobuf2-v4l2), usable with regular
> > > V4L2 video device nodes, while patch 2/4 adds the new metadata video
> > > device type. Metadata buffer queues are thus usable on both the regular
> > > V4L2 device nodes and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use cases
> > > doesn't differentiate between metadata and image data in hardware at the
> > > DMA engine level. With such hardware (CSI-2 receivers in particular, but
> > > other bus types could also fall into this category) a stream containing
> > > both metadata and image data virtual streams is transmitted over a single
> > > physical link. The receiver demultiplexes, filters and routes the virtual
> > > streams to further hardware blocks, and in many cases, directly to DMA
> > > engines that are part of the receiver. Those DMA engines can capture a
> > > single virtual stream to memory, with as many DMA engines physically
> > > present in the device as the number of virtual streams that can be
> > > captured concurrently. All those DMA engines are usually identical and
> > > don't care about the type of data they receive and capture. For that
> > > reason limiting the metadata buffer type to metadata device nodes would
> > > require creating two device nodes for each DMA engine (and possibly more
> > > later if we need to capture other types of data). Not only would this
> > > make the API more complex to use for applications, it wouldn't bring any
> > > added value as the video and metadata device nodes associated with a DMA
> > > engine couldn't be used concurrently anyway, as they both correspond to
> > > the same hardware resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node is
> > > useful and desired, and is implemented patch 1/4 using a dedicated video
> > > buffers queue. In the CSI-2 case a driver will create two buffer queues
> > > internally for the same DMA engine, and can select which one to use based
> > > on the buffer type passed for instance to the REQBUFS ioctl (details
> > > still need to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue, you just change the type
> > depending on what mode it is in (video or meta data). Similar to raw vs
> > sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue
> > type depending on whether raw or sliced VBI is requested. That's probably
> > where I would do this for video vs meta as well.
> 
> That sounds good to me. I didn't know we had support for changing the type of 
> a vb2 queue at runtime, that's good news :-)
> 
> > There is one big thing missing here: how does userspace know in this case
> > whether it will get metadata or video? Who decides which CSI virtual stream
> > is routed to which video node?
> 
> I've replied to Sakari's e-mail about this.
> 
> > > A device that contains DMA engines dedicated to
> > > metadata would create a single buffer queue and implement metadata capture
> > > only.
> > > 
> > > Patch 2/4 then adds a dedicated metadata device node type that is limited
> > > to metadata capture. Support for metadata on video device nodes isn't
> > > removed though, both device node types support metadata capture. I have
> > > included this patch as the code existed in the previous version of the
> > > series (and was explicitly requested during review of an earlier
> > > version), but I don't really see what value this would bring compared to
> > > just using video device nodes.
> > 
> > I'm inclined to agree with you.
> 
> Great :-) Should I just drop patch 2/4 then? Sakari, Mauro, would that be 
> fine with you?

I do encourage dropping that patch, yes!

> 
> > > As before patch 3/4 defines a first metadata format for the R-Car VSP1 1-D
> > > statistics format as an example, and the new patch 4/4 adds support for
> > > the histogram engine to the VSP1 driver. The implementation uses a
> > > metadata device node, and switching to a video device node wouldn't
> > > require more than applying the following one-liner patch.
> > > 
> > > -       histo->queue.type = V4L2_BUF_TYPE_META_CAPTURE;
> > > +       histo->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
> > 
> > You probably mean replacing this:
> > 
> > 	histo->video.vfl_type = VFL_TYPE_META;
> > 
> > by this:
> > 
> > 	histo->video.vfl_type = VFL_TYPE_GRABBER;
> 
> Yes, of course, my bad.
> 
> > > Besides whether patch 2/4 should be included or not (I would prefer
> > > dropping it) and how to select the active queue on a multi-type video
> > > device node (through the REQBUFS ioctl or through a different means), one
> > > point that remains to be discussed is what information to include in the
> > > metadata format. Patch 1/4 defines the new metadata format as
> > > 
> > > struct v4l2_meta_format {
> > > 	__u32				dataformat;
> > > 	__u32				buffersize;
> > > 	__u8				reserved[24];
> > > } __attribute__ ((packed));
> > > 
> > > but at least in the CSI-2 case metadata is, like image data, transmitted in
> > > lines and the receiver needs to be programmed with the line length and the
> > > number of lines for proper operation. We started discussing this on IRC
> > > but haven't reached a conclusion yet.

In two-dimensional DMA there may be padding added at the end of each line. We
also need bytesperline.

Metadata may also be packed the same way as the pixel data for the frame,
even though it only contains 8 bits per "sample", leaving empty bytes in
between.
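
As a made-up example of the arithmetic involved:

	/* Hypothetical 2-D metadata DMA: 700 bytes of payload per line, and
	 * the hardware pads each line to a 128-byte boundary. ALIGN() is the
	 * usual kernel macro. */
	bytesperline = ALIGN(700, 128);		/* 768 bytes, 68 of padding */
	buffersize = bytesperline * num_lines;	/* not 700 * num_lines */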

> 
> This is the last problem that needs to be solved. We can possibly postpone it 
> as I don't need width/height for now.

The three fields already consume half of the reserved space. How about
__u32[16] or such?
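
I.e. something along these lines, just to visualise the space budget (a
sketch, not a final layout proposal):

	struct v4l2_meta_format {
		__u32	dataformat;
		__u32	width;
		__u32	height;
		__u32	bytesperline;
		__u32	buffersize;
		__u32	reserved[16];	/* grown from __u8 reserved[24] */
	} __attribute__ ((packed));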

> 
> > > Laurent Pinchart (4):
> > >   v4l: Add metadata buffer type and format
> > >   v4l: Add metadata video device type
> > >   v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine
> > >   v4l: vsp1: Add HGO support
> > >  
> > >  Documentation/DocBook/media/v4l/dev-meta.xml       |  97 ++++
> > >  .../DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml     | 307 +++++++++++++
> > >  Documentation/DocBook/media/v4l/pixfmt.xml         |   9 +
> > >  Documentation/DocBook/media/v4l/v4l2.xml           |   1 +
> > >  drivers/media/platform/Kconfig                     |   1 +
> > >  drivers/media/platform/vsp1/Makefile               |   2 +
> > >  drivers/media/platform/vsp1/vsp1.h                 |   3 +
> > >  drivers/media/platform/vsp1/vsp1_drm.c             |   2 +-
> > >  drivers/media/platform/vsp1/vsp1_drv.c             |  37 +-
> > >  drivers/media/platform/vsp1/vsp1_entity.c          | 131 +++++-
> > >  drivers/media/platform/vsp1/vsp1_entity.h          |   7 +-
> > >  drivers/media/platform/vsp1/vsp1_hgo.c             | 496 ++++++++++++++++
> > >  drivers/media/platform/vsp1/vsp1_hgo.h             |  50 +++
> > >  drivers/media/platform/vsp1/vsp1_histo.c           | 307 +++++++++++++
> > >  drivers/media/platform/vsp1/vsp1_histo.h           |  68 +++
> > >  drivers/media/platform/vsp1/vsp1_pipe.c            |  30 +-
> > >  drivers/media/platform/vsp1/vsp1_pipe.h            |   2 +
> > >  drivers/media/platform/vsp1/vsp1_regs.h            |  24 +-
> > >  drivers/media/platform/vsp1/vsp1_video.c           |  22 +-
> > >  drivers/media/v4l2-core/v4l2-compat-ioctl32.c      |  19 +
> > >  drivers/media/v4l2-core/v4l2-dev.c                 |  37 +-
> > >  drivers/media/v4l2-core/v4l2-ioctl.c               |  40 ++
> > >  drivers/media/v4l2-core/videobuf2-v4l2.c           |   3 +
> > >  include/media/v4l2-dev.h                           |   3 +-
> > >  include/media/v4l2-ioctl.h                         |   8 +
> > >  include/uapi/linux/media.h                         |   2 +
> > >  include/uapi/linux/videodev2.h                     |  17 +
> > >  27 files changed, 1678 insertions(+), 47 deletions(-)
> > >  create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
> > >  create mode 100644
> > >  Documentation/DocBook/media/v4l/pixfmt-meta-vsp1-hgo.xml
> > >  create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.c
> > >  create mode 100644 drivers/media/platform/vsp1/vsp1_hgo.h
> > >  create mode 100644 drivers/media/platform/vsp1/vsp1_histo.c
> > >  create mode 100644 drivers/media/platform/vsp1/vsp1_histo.h

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-12  0:18 ` [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format Laurent Pinchart
@ 2016-05-23 10:09   ` Hans Verkuil
  2016-05-24 15:28     ` Sakari Ailus
  2016-06-22 16:32     ` Laurent Pinchart
  0 siblings, 2 replies; 21+ messages in thread
From: Hans Verkuil @ 2016-05-23 10:09 UTC (permalink / raw)
  To: Laurent Pinchart, linux-media
  Cc: linux-renesas-soc, Sakari Ailus, Hans Verkuil, Mauro Carvalho Chehab

On 05/12/2016 02:18 AM, Laurent Pinchart wrote:
> The metadata buffer type is used to transfer metadata between userspace
> and kernelspace through a V4L2 buffers queue. It comes with a new
> metadata capture capability and format description.

Thanks for the patch! I have some comments/suggestions below:

> 
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> ---
>  Documentation/DocBook/media/v4l/dev-meta.xml  | 93 +++++++++++++++++++++++++++
>  Documentation/DocBook/media/v4l/v4l2.xml      |  1 +
>  drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 19 ++++++
>  drivers/media/v4l2-core/v4l2-dev.c            | 16 +++--
>  drivers/media/v4l2-core/v4l2-ioctl.c          | 34 ++++++++++
>  drivers/media/v4l2-core/videobuf2-v4l2.c      |  3 +
>  include/media/v4l2-ioctl.h                    |  8 +++
>  include/uapi/linux/videodev2.h                | 14 ++++
>  8 files changed, 182 insertions(+), 6 deletions(-)
>  create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
> 
> diff --git a/Documentation/DocBook/media/v4l/dev-meta.xml b/Documentation/DocBook/media/v4l/dev-meta.xml
> new file mode 100644
> index 000000000000..9b5b1fba2007
> --- /dev/null
> +++ b/Documentation/DocBook/media/v4l/dev-meta.xml
> @@ -0,0 +1,93 @@
> +  <title>Metadata Interface</title>
> +
> +  <note>
> +    <title>Experimental</title>
> +    <para>This is an <link linkend="experimental"> experimental </link>

No space before/after 'experimental'. It will look ugly in the docbook (I tried
it :-) ).

> +    interface and may change in the future.</para>
> +  </note>
> +
> +  <para>
> +Metadata refers to any non-image data that supplements video frames with
> +additional information. This may include statistics computed over the image
> +or frame capture parameters supplied by the image source. This interface is
> +intended for transfer of metadata to userspace and control of that operation.

I think it can also be in the other direction. I'm thinking of metadata needed
by codecs. I'm not sure if it should be mentioned that this is not supported at
the moment and that the ML should be contacted for more info if someone wants this.

> +  </para>
> +
> +  <para>
> +The metadata interface is implemented on video capture devices. The device can

s/on/for/?

> +be dedicated to metadata or can implement both video and metadata capture as
> +specified in its reported capabilities.
> +  </para>
> +
> +  <section>
> +    <title>Querying Capabilities</title>
> +
> +    <para>
> +Devices supporting the metadata interface set the
> +<constant>V4L2_CAP_META_CAPTURE</constant> flag in the
> +<structfield>capabilities</structfield> field of &v4l2-capability;
> +returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device can capture
> +metadata to memory.
> +    </para>

I know this is a copy and paste from the other interface descriptions, but it
is rather outdated. I would write this instead:

 <para>
Device nodes supporting the metadata interface set the
<constant>V4L2_CAP_META_CAPTURE</constant> flag in the
<structfield>device_caps</structfield> field of &v4l2-capability;
returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device node can capture
metadata to memory.
 </para>

Any driver using this will be recent and always have a valid device_caps field
(which, as you know, didn't exist in old kernels).

I find the capabilities field fairly useless in most cases due to the fact that
it contains the capabilities of all device nodes created by the device driver.
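
From the application side the check then simply becomes (a sketch, error
handling omitted):

	struct v4l2_capability cap;

	ioctl(fd, VIDIOC_QUERYCAP, &cap);
	if (cap.device_caps & V4L2_CAP_META_CAPTURE)
		; /* this particular device node can capture metadata */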

> +    <para>
> +At least one of the read/write or streaming I/O methods must be supported.
> +    </para>
> +  </section>
> +
> +  <section>
> +    <title>Data Format Negotiation</title>
> +
> +    <para>
> +The metadata device uses the <link linkend="format">format</link> ioctls to
> +select the capture format. The metadata buffer content format is bound to that
> +selectable format. In addition to the basic
> +<link linkend="format">format</link> ioctls, the &VIDIOC-ENUM-FMT; ioctl
> +must be supported as well.
> +    </para>
> +
> +    <para>
> +To use the <link linkend="format">format</link> ioctls applications set the
> +<structfield>type</structfield> field of a &v4l2-format; to
> +<constant>V4L2_BUF_TYPE_META_CAPTURE</constant> and use the &v4l2-meta-format;
> +<structfield>meta</structfield> member of the <structfield>fmt</structfield>
> +union as needed per the desired operation.
> +Currently there are two fields, <structfield>dataformat</structfield> and
> +<structfield>buffersize</structfield>, of struct &v4l2-meta-format; that are
> +used. Content of the <structfield>dataformat</structfield> is the V4L2 FourCC
> +code of the data format. The <structfield>buffersize</structfield> field is the
> +maximum buffer size in bytes required for data transfer, set by the driver in
> +order to inform applications.

Should it be mentioned here that changing the video format might change the buffersize?
In case the buffersize is always a multiple of the width?

> +    </para>
> +
> +    <table pgwide="1" frame="none" id="v4l2-meta-format">
> +      <title>struct <structname>v4l2_meta_format</structname></title>
> +      <tgroup cols="3">
> +        &cs-str;
> +        <tbody valign="top">
> +          <row>
> +            <entry>__u32</entry>
> +            <entry><structfield>dataformat</structfield></entry>
> +            <entry>
> +The data format, set by the application. This is a little endian
> +<link linkend="v4l2-fourcc">four character code</link>.
> +V4L2 defines metadata formats in <xref linkend="meta-formats" />.
> +           </entry>
> +          </row>
> +          <row>
> +            <entry>__u32</entry>
> +            <entry><structfield>buffersize</structfield></entry>
> +            <entry>
> +Maximum size in bytes required for data. Value is set by the driver.
> +           </entry>
> +          </row>
> +          <row>
> +            <entry>__u8</entry>
> +            <entry><structfield>reserved[24]</structfield></entry>
> +            <entry>This array is reserved for future extensions.
> +Drivers and applications must set it to zero.</entry>
> +          </row>
> +        </tbody>
> +      </tgroup>
> +    </table>
> +
> +  </section>
> diff --git a/Documentation/DocBook/media/v4l/v4l2.xml b/Documentation/DocBook/media/v4l/v4l2.xml
> index 42e626d6c936..5c83b5d342dd 100644
> --- a/Documentation/DocBook/media/v4l/v4l2.xml
> +++ b/Documentation/DocBook/media/v4l/v4l2.xml
> @@ -605,6 +605,7 @@ and discussions on the V4L mailing list.</revremark>
>      <section id="radio"> &sub-dev-radio; </section>
>      <section id="rds"> &sub-dev-rds; </section>
>      <section id="sdr"> &sub-dev-sdr; </section>
> +    <section id="meta"> &sub-dev-meta; </section>
>      <section id="event"> &sub-dev-event; </section>
>      <section id="subdev"> &sub-dev-subdev; </section>
>    </chapter>
> diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
> index bacecbd68a6d..da2d836e8887 100644
> --- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
> +++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
> @@ -161,6 +161,20 @@ static inline int put_v4l2_sdr_format(struct v4l2_sdr_format *kp, struct v4l2_sd
>  	return 0;
>  }
>  
> +static inline int get_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
> +{
> +	if (copy_from_user(kp, up, sizeof(struct v4l2_meta_format)))
> +		return -EFAULT;
> +	return 0;
> +}
> +
> +static inline int put_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
> +{
> +	if (copy_to_user(up, kp, sizeof(struct v4l2_meta_format)))
> +		return -EFAULT;
> +	return 0;
> +}
> +
>  struct v4l2_format32 {
>  	__u32	type;	/* enum v4l2_buf_type */
>  	union {
> @@ -170,6 +184,7 @@ struct v4l2_format32 {
>  		struct v4l2_vbi_format	vbi;
>  		struct v4l2_sliced_vbi_format	sliced;
>  		struct v4l2_sdr_format	sdr;
> +		struct v4l2_meta_format	meta;
>  		__u8	raw_data[200];        /* user-defined */
>  	} fmt;
>  };
> @@ -216,6 +231,8 @@ static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __us
>  	case V4L2_BUF_TYPE_SDR_CAPTURE:
>  	case V4L2_BUF_TYPE_SDR_OUTPUT:
>  		return get_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		return get_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
>  	default:
>  		pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
>  								kp->type);
> @@ -263,6 +280,8 @@ static int __put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __us
>  	case V4L2_BUF_TYPE_SDR_CAPTURE:
>  	case V4L2_BUF_TYPE_SDR_OUTPUT:
>  		return put_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		return put_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
>  	default:
>  		pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
>  								kp->type);
> diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
> index 70b559d7ca80..74b79e60ac38 100644
> --- a/drivers/media/v4l2-core/v4l2-dev.c
> +++ b/drivers/media/v4l2-core/v4l2-dev.c
> @@ -574,30 +574,34 @@ static void determine_valid_ioctls(struct video_device *vdev)
>  		set_bit(_IOC_NR(VIDIOC_ENUM_FREQ_BANDS), valid_ioctls);
>  
>  	if (is_vid) {
> -		/* video specific ioctls */
> +		/* video and metadata specific ioctls */
>  		if ((is_rx && (ops->vidioc_enum_fmt_vid_cap ||
>  			       ops->vidioc_enum_fmt_vid_cap_mplane ||
> -			       ops->vidioc_enum_fmt_vid_overlay)) ||
> +			       ops->vidioc_enum_fmt_vid_overlay ||
> +			       ops->vidioc_enum_fmt_meta_cap)) ||
>  		    (is_tx && (ops->vidioc_enum_fmt_vid_out ||
>  			       ops->vidioc_enum_fmt_vid_out_mplane)))
>  			set_bit(_IOC_NR(VIDIOC_ENUM_FMT), valid_ioctls);
>  		if ((is_rx && (ops->vidioc_g_fmt_vid_cap ||
>  			       ops->vidioc_g_fmt_vid_cap_mplane ||
> -			       ops->vidioc_g_fmt_vid_overlay)) ||
> +			       ops->vidioc_g_fmt_vid_overlay ||
> +			       ops->vidioc_g_fmt_meta_cap)) ||
>  		    (is_tx && (ops->vidioc_g_fmt_vid_out ||
>  			       ops->vidioc_g_fmt_vid_out_mplane ||
>  			       ops->vidioc_g_fmt_vid_out_overlay)))
>  			 set_bit(_IOC_NR(VIDIOC_G_FMT), valid_ioctls);
>  		if ((is_rx && (ops->vidioc_s_fmt_vid_cap ||
>  			       ops->vidioc_s_fmt_vid_cap_mplane ||
> -			       ops->vidioc_s_fmt_vid_overlay)) ||
> +			       ops->vidioc_s_fmt_vid_overlay ||
> +			       ops->vidioc_s_fmt_meta_cap)) ||
>  		    (is_tx && (ops->vidioc_s_fmt_vid_out ||
>  			       ops->vidioc_s_fmt_vid_out_mplane ||
>  			       ops->vidioc_s_fmt_vid_out_overlay)))
>  			 set_bit(_IOC_NR(VIDIOC_S_FMT), valid_ioctls);
>  		if ((is_rx && (ops->vidioc_try_fmt_vid_cap ||
>  			       ops->vidioc_try_fmt_vid_cap_mplane ||
> -			       ops->vidioc_try_fmt_vid_overlay)) ||
> +			       ops->vidioc_try_fmt_vid_overlay ||
> +			       ops->vidioc_try_fmt_meta_cap)) ||
>  		    (is_tx && (ops->vidioc_try_fmt_vid_out ||
>  			       ops->vidioc_try_fmt_vid_out_mplane ||
>  			       ops->vidioc_try_fmt_vid_out_overlay)))
> @@ -663,7 +667,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
>  	}
>  
>  	if (is_vid || is_vbi || is_sdr) {
> -		/* ioctls valid for video, vbi or sdr */
> +		/* ioctls valid for video, metadata, vbi or sdr */
>  		SET_VALID_IOCTL(ops, VIDIOC_REQBUFS, vidioc_reqbufs);
>  		SET_VALID_IOCTL(ops, VIDIOC_QUERYBUF, vidioc_querybuf);
>  		SET_VALID_IOCTL(ops, VIDIOC_QBUF, vidioc_qbuf);
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index 28e5be2c2eef..5d003152ff68 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -155,6 +155,7 @@ const char *v4l2_type_names[] = {
>  	[V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE] = "vid-out-mplane",
>  	[V4L2_BUF_TYPE_SDR_CAPTURE]        = "sdr-cap",
>  	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
> +	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
>  };
>  EXPORT_SYMBOL(v4l2_type_names);
>  
> @@ -249,6 +250,7 @@ static void v4l_print_format(const void *arg, bool write_only)
>  	const struct v4l2_sliced_vbi_format *sliced;
>  	const struct v4l2_window *win;
>  	const struct v4l2_sdr_format *sdr;
> +	const struct v4l2_meta_format *meta;
>  	unsigned i;
>  
>  	pr_cont("type=%s", prt_names(p->type, v4l2_type_names));
> @@ -336,6 +338,15 @@ static void v4l_print_format(const void *arg, bool write_only)
>  			(sdr->pixelformat >> 16) & 0xff,
>  			(sdr->pixelformat >> 24) & 0xff);
>  		break;
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		meta = &p->fmt.meta;
> +		pr_cont(", dataformat=%c%c%c%c, buffersize=%u\n",
> +			(meta->dataformat >>  0) & 0xff,
> +			(meta->dataformat >>  8) & 0xff,
> +			(meta->dataformat >> 16) & 0xff,
> +			(meta->dataformat >> 24) & 0xff,
> +			meta->buffersize);
> +		break;
>  	}
>  }
>  
> @@ -981,6 +992,10 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
>  		if (is_sdr && is_tx && ops->vidioc_g_fmt_sdr_out)
>  			return 0;
>  		break;
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		if (is_vid && is_rx && ops->vidioc_g_fmt_meta_cap)
> +			return 0;
> +		break;
>  	default:
>  		break;
>  	}
> @@ -1349,6 +1364,11 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		ret = ops->vidioc_enum_fmt_sdr_out(file, fh, arg);
>  		break;
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_meta_cap))
> +			break;
> +		ret = ops->vidioc_enum_fmt_meta_cap(file, fh, arg);
> +		break;
>  	}
>  	if (ret == 0)
>  		v4l_fill_fmtdesc(p);
> @@ -1447,6 +1467,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
>  		if (unlikely(!is_tx || !is_sdr || !ops->vidioc_g_fmt_sdr_out))
>  			break;
>  		return ops->vidioc_g_fmt_sdr_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_meta_cap))
> +			break;
> +		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> @@ -1534,6 +1558,11 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		CLEAR_AFTER_FIELD(p, fmt.sdr);
>  		return ops->vidioc_s_fmt_sdr_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_meta_cap))
> +			break;
> +		CLEAR_AFTER_FIELD(p, fmt.meta);

I would suggest using "CLEAR_FROM_FIELD(p, fmt.meta.reserved)" here. Of course, you'd
need to add a new CLEAR_FROM_FIELD macro for this.

I think this also should be used for the SDR_CAPTURE/OUTPUT types and possibly for
other types as well (can be done later of course).

This would also zero the reserved field, so drivers don't need to care about it.
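
Something like this, next to the existing CLEAR_AFTER_FIELD in v4l2-ioctl.c
(an untested sketch):

	/* Zero everything from 'field' (inclusive) to the end of *p. */
	#define CLEAR_FROM_FIELD(p, field) \
		memset((u8 *)(p) + offsetof(typeof(*(p)), field), 0, \
		       sizeof(*(p)) - offsetof(typeof(*(p)), field))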

> +		return ops->vidioc_s_fmt_meta_cap(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> @@ -1618,6 +1647,11 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
>  			break;
>  		CLEAR_AFTER_FIELD(p, fmt.sdr);
>  		return ops->vidioc_try_fmt_sdr_out(file, fh, arg);
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_meta_cap))
> +			break;
> +		CLEAR_AFTER_FIELD(p, fmt.meta);
> +		return ops->vidioc_try_fmt_meta_cap(file, fh, arg);
>  	}
>  	return -EINVAL;
>  }
> diff --git a/drivers/media/v4l2-core/videobuf2-v4l2.c b/drivers/media/v4l2-core/videobuf2-v4l2.c
> index 7f366f1b0377..e4e90d9a3a65 100644
> --- a/drivers/media/v4l2-core/videobuf2-v4l2.c
> +++ b/drivers/media/v4l2-core/videobuf2-v4l2.c
> @@ -575,6 +575,9 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
>  	case V4L2_BUF_TYPE_SDR_OUTPUT:
>  		requested_sizes[0] = f->fmt.sdr.buffersize;
>  		break;
> +	case V4L2_BUF_TYPE_META_CAPTURE:
> +		requested_sizes[0] = f->fmt.meta.buffersize;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
> index 017ffb2220c7..2dd00c73e892 100644
> --- a/include/media/v4l2-ioctl.h
> +++ b/include/media/v4l2-ioctl.h
> @@ -38,6 +38,8 @@ struct v4l2_ioctl_ops {
>  					    struct v4l2_fmtdesc *f);
>  	int (*vidioc_enum_fmt_sdr_out)     (struct file *file, void *fh,
>  					    struct v4l2_fmtdesc *f);
> +	int (*vidioc_enum_fmt_meta_cap)    (struct file *file, void *fh,
> +					    struct v4l2_fmtdesc *f);
>  
>  	/* VIDIOC_G_FMT handlers */
>  	int (*vidioc_g_fmt_vid_cap)    (struct file *file, void *fh,
> @@ -64,6 +66,8 @@ struct v4l2_ioctl_ops {
>  					struct v4l2_format *f);
>  	int (*vidioc_g_fmt_sdr_out)    (struct file *file, void *fh,
>  					struct v4l2_format *f);
> +	int (*vidioc_g_fmt_meta_cap)   (struct file *file, void *fh,
> +					struct v4l2_format *f);
>  
>  	/* VIDIOC_S_FMT handlers */
>  	int (*vidioc_s_fmt_vid_cap)    (struct file *file, void *fh,
> @@ -90,6 +94,8 @@ struct v4l2_ioctl_ops {
>  					struct v4l2_format *f);
>  	int (*vidioc_s_fmt_sdr_out)    (struct file *file, void *fh,
>  					struct v4l2_format *f);
> +	int (*vidioc_s_fmt_meta_cap)   (struct file *file, void *fh,
> +					struct v4l2_format *f);
>  
>  	/* VIDIOC_TRY_FMT handlers */
>  	int (*vidioc_try_fmt_vid_cap)    (struct file *file, void *fh,
> @@ -116,6 +122,8 @@ struct v4l2_ioctl_ops {
>  					  struct v4l2_format *f);
>  	int (*vidioc_try_fmt_sdr_out)    (struct file *file, void *fh,
>  					  struct v4l2_format *f);
> +	int (*vidioc_try_fmt_meta_cap)   (struct file *file, void *fh,
> +					  struct v4l2_format *f);
>  
>  	/* Buffer handlers */
>  	int (*vidioc_reqbufs) (struct file *file, void *fh, struct v4l2_requestbuffers *b);
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 8f951917be74..5fbd30ca9b1e 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -143,6 +143,7 @@ enum v4l2_buf_type {
>  	V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE  = 10,
>  	V4L2_BUF_TYPE_SDR_CAPTURE          = 11,
>  	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
> +	V4L2_BUF_TYPE_META_CAPTURE         = 13,
>  	/* Deprecated, do not use */
>  	V4L2_BUF_TYPE_PRIVATE              = 0x80,
>  };
> @@ -435,6 +436,7 @@ struct v4l2_capability {
>  #define V4L2_CAP_SDR_CAPTURE		0x00100000  /* Is a SDR capture device */
>  #define V4L2_CAP_EXT_PIX_FORMAT		0x00200000  /* Supports the extended pixel format */
>  #define V4L2_CAP_SDR_OUTPUT		0x00400000  /* Is a SDR output device */
> +#define V4L2_CAP_META_CAPTURE		0x00800000  /* Is a metadata capture device */
>  
>  #define V4L2_CAP_READWRITE              0x01000000  /* read/write systemcalls */
>  #define V4L2_CAP_ASYNCIO                0x02000000  /* async I/O */
> @@ -2000,6 +2002,17 @@ struct v4l2_sdr_format {
>  } __attribute__ ((packed));
>  
>  /**
> + * struct v4l2_meta_format - metadata format definition
> + * @dataformat:		little endian four character code (fourcc)
> + * @buffersize:		maximum size in bytes required for data
> + */
> +struct v4l2_meta_format {
> +	__u32				dataformat;
> +	__u32				buffersize;
> +	__u8				reserved[24];
> +} __attribute__ ((packed));
> +
> +/**
>   * struct v4l2_format - stream data format
>   * @type:	enum v4l2_buf_type; type of the data stream
>   * @pix:	definition of an image format
> @@ -2018,6 +2031,7 @@ struct v4l2_format {
>  		struct v4l2_vbi_format		vbi;     /* V4L2_BUF_TYPE_VBI_CAPTURE */
>  		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
>  		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
> +		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
>  		__u8	raw_data[200];                   /* user-defined */
>  	} fmt;
>  };
> 

Regards,

	Hans

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 0/4] Meta-data video device type
  2016-05-13  9:52   ` Sakari Ailus
  2016-05-16  9:20     ` Laurent Pinchart
@ 2016-05-23 13:00     ` Guennadi Liakhovetski
  1 sibling, 0 replies; 21+ messages in thread
From: Guennadi Liakhovetski @ 2016-05-23 13:00 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Hans Verkuil, Laurent Pinchart, linux-media, linux-renesas-soc,
	Hans Verkuil, Mauro Carvalho Chehab

Hi Sakari,

On Fri, 13 May 2016, Sakari Ailus wrote:

> Hi Hans and Laurent,
> 
> On Fri, May 13, 2016 at 11:26:22AM +0200, Hans Verkuil wrote:
> > On 05/12/2016 02:17 AM, Laurent Pinchart wrote:
> > > Hello,
> > > 
> > > This RFC patch series is a second attempt at adding support for passing
> > > statistics data to userspace using a standard API.
> > > 
> > > The core requirements haven't changed. Statistics data capture requires
> > > zero-copy and decoupling statistics buffers from images buffers, in order to
> > > make statistics data available to userspace as soon as they're captured. For
> > > those reasons the early consensus we have reached is to use a video device
> > > node with a buffer queue to pass statistics buffers using the V4L2 API, and
> > > this new RFC version doesn't challenge that.
> > > 
> > > The major change compared to the previous version is how the first patch has
> > > been split in two. Patch 1/4 now adds a new metadata buffer type and format
> > > (including their support in videobuf2-v4l2), usable with regular V4L2 video
> > > device nodes, while patch 2/4 adds the new metadata video device type.
> > > Metadata buffer queues are thus usable on both the regular V4L2 device nodes
> > > and the new metadata device nodes.
> > > 
> > > This change was driven by the fact that an important category of use cases
> > > doesn't differentiate between metadata and image data in hardware at the DMA
> > > engine level. With such hardware (CSI-2 receivers in particular, but other bus
> > > types could also fall into this category) a stream containing both metadata
> > > and image data virtual streams is transmitted over a single physical link. The
> > > receiver demultiplexes, filters and routes the virtual streams to further
> > > hardware blocks, and in many cases, directly to DMA engines that are part of
> > > the receiver. Those DMA engines can capture a single virtual stream to memory,
> > > with as many DMA engines physically present in the device as the number of
> > > virtual streams that can be captured concurrently. All those DMA engines are
> > > usually identical and don't care about the type of data they receive and
> > > capture. For that reason limiting the metadata buffer type to metadata device
> > > nodes would require creating two device nodes for each DMA engine (and
> > > possibly more later if we need to capture other types of data). Not only would
> > > this make the API more complex to use for applications, it wouldn't bring any
> > > added value as the video and metadata device nodes associated with a DMA
> > > engine couldn't be used concurrently anyway, as they both correspond to the
> > > same hardware resource.
> > > 
> > > For this reason the ability to capture metadata on a video device node is
> > > useful and desired, and is implemented patch 1/4 using a dedicated video
> > > buffers queue. In the CSI-2 case a driver will create two buffer queues
> > > internally for the same DMA engine, and can select which one to use based on
> > > the buffer type passed for instance to the REQBUFS ioctl (details still need
> > > to be discussed here).
> > 
> > Not quite. It still has only one vb2_queue, you just change the type depending
> > on what mode it is in (video or meta data). Similar to raw vs sliced VBI.
> > 
> > In the latter case it is the VIDIOC_S_FMT call that changes the vb2_queue type
> > depending on whether raw or sliced VBI is requested. That's probably where I
> > would do this for video vs meta as well.
> > 
> > There is one big thing missing here: how does userspace know in this case whether
> > it will get metadata or video? Who decides which CSI virtual stream is routed
> 
> My first impression would be to say by formats, so that's actually defined
> by the user. The media bus formats do not have such separation between image
> and metadata formats either.

I'm still not sure whether we actually need different formats for 
metadata. E.g. on CSI-2 I expect metadata to use the 8-bit embedded 
non-image Data Type on all cameras. So, what should the CSI-2 bridge 
sink pad be configured with? Some sensor-specific type, or just a generic 
format telling it what to capture on the CSI-2 bus?
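
E.g. something like this, if a generic metadata media bus code existed (a 
sketch; MEDIA_BUS_FMT_META_8 and the pad index are hypothetical):

	struct v4l2_subdev_format fmt = {
		.which	= V4L2_SUBDEV_FORMAT_ACTIVE,
		.pad	= CSI2_SINK_PAD,
	};

	fmt.format.width = 640;			/* embedded data line length */
	fmt.format.height = 2;			/* number of embedded data lines */
	fmt.format.code = MEDIA_BUS_FMT_META_8;	/* 8-bit embedded non-image data */
	ioctl(subdev_fd, VIDIOC_SUBDEV_S_FMT, &fmt);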

> VIDIOC_ENUM_FMT should be amended with media bus code as well so that the
> user can figure out which format corresponds to a given media bus code.

I'm not sure what you mean by this correspondence; could you elaborate on 
this a bit, please?

> > to which video node?
> 
> I think that should be considered as a separate problem, although it will
> require a solution as well. And it's a much bigger problem than this one.

Yes, we did want to revive the stream routing work, didn't we? ;-)

But let me add one more use-case for consideration: UVC. Some UVC cameras 
include per-frame (meta)data in the private part of the payload header, 
even though I can't find anything in the UVC spec that would suggest this 
is an acceptable approach. A more standard-conformant design seems to be to 
transfer metadata using some Stream Based Payload on a separate USB 
Interface and synchronise it with video data using the timing information 
from UVC packet headers. I imagine each manufacturer would use a different 
GUID for their metadata format. Do we really want to create a new FOURCC 
code for each of them? Or just configure the pads with a fixed format and 
configure routing? But if a camera then decides to support several 
metadata formats on a single Input Terminal, we would only be able to 
distinguish between them by size, unless they all have the same size.
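
For the synchronisation part, pairing the two streams in userspace could come 
down to nearest-timestamp matching, e.g. (a sketch; assumes both queues take 
their v4l2_buffer timestamps from the same clock, and MAX_SKEW_NS is an 
application-chosen tolerance):

	static int64_t buf_ns(const struct v4l2_buffer *buf)
	{
		return buf->timestamp.tv_sec * 1000000000LL +
		       buf->timestamp.tv_usec * 1000LL;
	}

	/* meta describes frame if their timestamps are close enough */
	bool match = llabs(buf_ns(&meta) - buf_ns(&frame)) < MAX_SKEW_NS;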

Thanks
Guennadi

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-23 10:09   ` Hans Verkuil
@ 2016-05-24 15:28     ` Sakari Ailus
  2016-05-24 15:36       ` Hans Verkuil
  2016-06-22 16:32     ` Laurent Pinchart
  1 sibling, 1 reply; 21+ messages in thread
From: Sakari Ailus @ 2016-05-24 15:28 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Hans Verkuil,
	Mauro Carvalho Chehab

Hi Hans,

On Mon, May 23, 2016 at 12:09:16PM +0200, Hans Verkuil wrote:
> On 05/12/2016 02:18 AM, Laurent Pinchart wrote:
> > The metadata buffer type is used to transfer metadata between userspace
> > and kernelspace through a V4L2 buffers queue. It comes with a new
> > metadata capture capability and format description.
> 
> Thanks for the patch! I have some comments/suggestions below:
> 
> > 
> > Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> > ---
> >  Documentation/DocBook/media/v4l/dev-meta.xml  | 93 +++++++++++++++++++++++++++
> >  Documentation/DocBook/media/v4l/v4l2.xml      |  1 +
> >  drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 19 ++++++
> >  drivers/media/v4l2-core/v4l2-dev.c            | 16 +++--
> >  drivers/media/v4l2-core/v4l2-ioctl.c          | 34 ++++++++++
> >  drivers/media/v4l2-core/videobuf2-v4l2.c      |  3 +
> >  include/media/v4l2-ioctl.h                    |  8 +++
> >  include/uapi/linux/videodev2.h                | 14 ++++
> >  8 files changed, 182 insertions(+), 6 deletions(-)
> >  create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
> > 
> > diff --git a/Documentation/DocBook/media/v4l/dev-meta.xml b/Documentation/DocBook/media/v4l/dev-meta.xml
> > new file mode 100644
> > index 000000000000..9b5b1fba2007
> > --- /dev/null
> > +++ b/Documentation/DocBook/media/v4l/dev-meta.xml
> > @@ -0,0 +1,93 @@
> > +  <title>Metadata Interface</title>
> > +
> > +  <note>
> > +    <title>Experimental</title>
> > +    <para>This is an <link linkend="experimental"> experimental </link>
> 
> No space before/after 'experimental'. It will look ugly in the docbook (I tried
> it :-) ).
> 
> > +    interface and may change in the future.</para>
> > +  </note>
> > +
> > +  <para>
> > +Metadata refers to any non-image data that supplements video frames with
> > +additional information. This may include statistics computed over the image
> > +or frame capture parameters supplied by the image source. This interface is
> > +intended for transfer of metadata to userspace and control of that operation.
> 
> I think it can also be in the other direction. I'm thinking of metadata needed
> by codecs. I'm not sure if it should be mentioned that this is not supported at
> the moment and that the ML should be contacted for more info if someone wants this.

Good point.

> 
> > +  </para>
> > +
> > +  <para>
> > +The metadata interface is implemented on video capture devices. The device can
> 
> s/on/for/?
> 
> > +be dedicated to metadata or can implement both video and metadata capture as
> > +specified in its reported capabilities.
> > +  </para>
> > +
> > +  <section>
> > +    <title>Querying Capabilities</title>
> > +
> > +    <para>
> > +Devices supporting the metadata interface set the
> > +<constant>V4L2_CAP_META_CAPTURE</constant> flag in the
> > +<structfield>capabilities</structfield> field of &v4l2-capability;
> > +returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device can capture
> > +metadata to memory.
> > +    </para>
> 
> I know this is a copy and paste from the other interface descriptions, but it
> is rather outdated. I would write this instead:
> 
>  <para>
> Device nodes supporting the metadata interface set the
> <constant>V4L2_CAP_META_CAPTURE</constant> flag in the
> <structfield>device_caps</structfield> field of &v4l2-capability;
> returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device node can capture
> metadata to memory.
>  </para>
> 
> Any driver using this will be recent and always have a valid device_caps field
> (which, as you know, didn't exist in old kernels).
> 
> I find the capabilities field fairly useless in most cases due to the fact that
> it contains the capabilities of all device nodes created by the device driver.
> 
> > +    <para>
> > +At least one of the read/write or streaming I/O methods must be supported.
> > +    </para>
> > +  </section>
> > +
> > +  <section>
> > +    <title>Data Format Negotiation</title>
> > +
> > +    <para>
> > +The metadata device uses the <link linkend="format">format</link> ioctls to
> > +select the capture format. The metadata buffer content format is bound to that
> > +selectable format. In addition to the basic
> > +<link linkend="format">format</link> ioctls, the &VIDIOC-ENUM-FMT; ioctl
> > +must be supported as well.
> > +    </para>
> > +
> > +    <para>
> > +To use the <link linkend="format">format</link> ioctls applications set the
> > +<structfield>type</structfield> field of a &v4l2-format; to
> > +<constant>V4L2_BUF_TYPE_META_CAPTURE</constant> and use the &v4l2-meta-format;
> > +<structfield>meta</structfield> member of the <structfield>fmt</structfield>
> > +union as needed per the desired operation.
> > +Currently there are two fields, <structfield>dataformat</structfield> and
> > +<structfield>buffersize</structfield>, of struct &v4l2-meta-format; that are
> > +used. Content of the <structfield>dataformat</structfield> is the V4L2 FourCC
> > +code of the data format. The <structfield>buffersize</structfield> field is the
> > +maximum buffer size in bytes required for data transfer, set by the driver in
> > +order to inform applications.
> 
> Should it be mentioned here that changing the video format might change
> the buffersize? In case the buffersize is always a multiple of the width?

Isn't that the case in general, as with pixel formats? buffersize could also
be something other than a multiple of width (there's no width for metadata
formats) due to e.g. padding required by hardware.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-24 15:28     ` Sakari Ailus
@ 2016-05-24 15:36       ` Hans Verkuil
  2016-05-24 16:26         ` Sakari Ailus
  0 siblings, 1 reply; 21+ messages in thread
From: Hans Verkuil @ 2016-05-24 15:36 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Hans Verkuil,
	Mauro Carvalho Chehab

On 05/24/2016 05:28 PM, Sakari Ailus wrote:
> Hi Hans,
> 
>> Should it be mentioned here that changing the video format might change
>> the buffersize? In case the buffersize is always a multiple of the width?
> 
> Isn't that the case in general, as with pixel formats? buffersize could also
> be something other than a multiple of width (there's no width for metadata
> formats) due to e.g. padding required by hardware.

Well, I don't think it is obvious that the metadata buffersize depends on the
video width. Perhaps developers who are experienced with CSI know this, but
if you know little or nothing about CSI, then it can be unexpected (hey, that
was the case for me!).

I think it doesn't hurt to mention this relation.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-24 15:36       ` Hans Verkuil
@ 2016-05-24 16:26         ` Sakari Ailus
  2016-06-22 16:51           ` Laurent Pinchart
  0 siblings, 1 reply; 21+ messages in thread
From: Sakari Ailus @ 2016-05-24 16:26 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Hans Verkuil,
	Mauro Carvalho Chehab

Hi Hans,

On Tue, May 24, 2016 at 05:36:42PM +0200, Hans Verkuil wrote:
> On 05/24/2016 05:28 PM, Sakari Ailus wrote:
> > Hi Hans,
> > 
> >> Should it be mentioned here that changing the video format might change
> >> the buffersize? In case the buffersize is always a multiple of the width?
> > 
> > Isn't that the case in general, as with pixel formats? buffersize could also
> > be something other than a multiple of width (there's no width for metadata
> > formats) due to e.g. padding required by hardware.
> 
> Well, I don't think it is obvious that the metadata buffersize depends on the
> video width. Perhaps developers who are experienced with CSI know this, but
> if you know little or nothing about CSI, then it can be unexpected (hey, that
> was the case for me!).
> 
> I think it doesn't hurt to mention this relation.

Ah, I think I misunderstood you at first.

Typically the metadata width is the same as the image data width, that's
true. And it's how the hardware works. This is still visible in the media
bus format and the solution belongs rather to how multiple streams over a
single link are supported.

It's just that setting the image media bus format affects the metadata media
bus format. I guess that could be mentioned, albeit it's hardware-specific:
on some sensors the metadata width is independent of the image width. Even then
this is not where I'd put it. I'd get back to the topic when documenting how
the API for multiple streams over a single link works.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support
  2016-05-12  0:18 ` [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support Laurent Pinchart
@ 2016-06-13 15:33   ` Guennadi Liakhovetski
  2016-06-24 14:35     ` Laurent Pinchart
  0 siblings, 1 reply; 21+ messages in thread
From: Guennadi Liakhovetski @ 2016-06-13 15:33 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-media, linux-renesas-soc, Sakari Ailus, Hans Verkuil,
	Mauro Carvalho Chehab

Hi Laurent,

On Thu, 12 May 2016, Laurent Pinchart wrote:

> The HGO is a Histogram Generator One-Dimension. It computes per-channel
> histograms over a configurable region of the image with optional
> subsampling.
> 
> Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>

[snip]

Do I understand this correctly: with this, if such a metadata device 
is opened while no video data is streaming, all calls will succeed, but 
no buffers will be dequeued, i.e. the user-space application will hang in 
DQBUF or poll() / select() indefinitely?

Thanks
Guennadi

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-23 10:09   ` Hans Verkuil
  2016-05-24 15:28     ` Sakari Ailus
@ 2016-06-22 16:32     ` Laurent Pinchart
  1 sibling, 0 replies; 21+ messages in thread
From: Laurent Pinchart @ 2016-06-22 16:32 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Sakari Ailus,
	Hans Verkuil, Mauro Carvalho Chehab

Hi Hans,

Thank you for the review.

On Monday 23 May 2016 12:09:16 Hans Verkuil wrote:
> On 05/12/2016 02:18 AM, Laurent Pinchart wrote:
> > The metadata buffer type is used to transfer metadata between userspace
> > and kernelspace through a V4L2 buffers queue. It comes with a new
> > metadata capture capability and format description.
> 
> Thanks for the patch! I have some comments/suggestions below:
> > Signed-off-by: Laurent Pinchart
> > <laurent.pinchart+renesas@ideasonboard.com>
> > ---
> > 
> >  Documentation/DocBook/media/v4l/dev-meta.xml  | 93 ++++++++++++++++++++++
> >  Documentation/DocBook/media/v4l/v4l2.xml      |  1 +
> >  drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 19 ++++++
> >  drivers/media/v4l2-core/v4l2-dev.c            | 16 +++--
> >  drivers/media/v4l2-core/v4l2-ioctl.c          | 34 ++++++++++
> >  drivers/media/v4l2-core/videobuf2-v4l2.c      |  3 +
> >  include/media/v4l2-ioctl.h                    |  8 +++
> >  include/uapi/linux/videodev2.h                | 14 ++++
> >  8 files changed, 182 insertions(+), 6 deletions(-)
> >  create mode 100644 Documentation/DocBook/media/v4l/dev-meta.xml
> > 
> > diff --git a/Documentation/DocBook/media/v4l/dev-meta.xml b/Documentation/DocBook/media/v4l/dev-meta.xml
> > new file mode 100644
> > index 000000000000..9b5b1fba2007
> > --- /dev/null
> > +++ b/Documentation/DocBook/media/v4l/dev-meta.xml
> > @@ -0,0 +1,93 @@
> > +  <title>Metadata Interface</title>
> > +
> > +  <note>
> > +    <title>Experimental</title>
> > +    <para>This is an <link linkend="experimental"> experimental </link>
> 
> No space before/after 'experimental'. It will look ugly in the docbook (I
> tried it :-) ).

Will be fixed in the next version.

> > +    interface and may change in the future.</para>
> > +  </note>
> > +
> > +  <para>
> > +Metadata refers to any non-image data that supplements video frames with
> > +additional information. This may include statistics computed over the
> > +image or frame capture parameters supplied by the image source. This
> > +interface is intended for transfer of metadata to userspace and control
> > +of that operation.
>
> I think it can also be in the other direction. I'm thinking of metadata
> needed by codecs. I'm not sure if it should be mentioned that this is not
> supported at the moment and that the ML should be contacted for more info
> if someone wants this.

Metadata for codecs make sense, but given that the topic hasn't been 
researched yet, I've kept metadata support focused on the capture side only 
for now. I propose revisiting the topic if (or rather when) someone needs 
metadata support on output video nodes.

> > +  </para>
> > +
> > +  <para>
> > +The metadata interface is implemented on video capture devices. The device can
>
> s/on/for/?

I meant "on video capture device nodes"; I'll fix that.
 
> > +be dedicated to metadata or can implement both video and metadata capture
> > +as specified in its reported capabilities.
> > +  </para>
> > +
> > +  <section>
> > +    <title>Querying Capabilities</title>
> > +
> > +    <para>
> > +Devices supporting the metadata interface set the
> > +<constant>V4L2_CAP_META_CAPTURE</constant> flag in the
> > +<structfield>capabilities</structfield> field of &v4l2-capability;
> > +returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device can
> > +capture metadata to memory.
> > +    </para>
> 
> I know this is a copy and paste from the other interface descriptions, but
> it is rather outdated. I would write this instead:
> 
>  <para>
> Device nodes supporting the metadata interface set the
> <constant>V4L2_CAP_META_CAPTURE</constant> flag in the
> <structfield>device_caps</structfield> field of &v4l2-capability;
> returned by the &VIDIOC-QUERYCAP; ioctl. That flag means the device node can
> capture metadata to memory.
>  </para>

Will be fixed in the next version.
 
> Any driver using this will be recent and always have a valid device_caps
> field (which, as you know, didn't exist in old kernels).
> 
> I find the capabilities field fairly useless in most cases due to the fact
> that it contains the capabilities of all device nodes created by the device
> driver.

I agree.
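
For illustration, the device_caps check suggested above could look like this
from userspace. A minimal sketch only: V4L2_CAP_META_CAPTURE is the flag
proposed in this series, the device path is up to the application, and error
reporting is trimmed.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int supports_meta_capture(const char *devnode)
{
	struct v4l2_capability cap;
	int fd = open(devnode, O_RDWR);
	int ret;

	if (fd < 0)
		return 0;

	memset(&cap, 0, sizeof(cap));
	ret = ioctl(fd, VIDIOC_QUERYCAP, &cap);
	close(fd);

	/* Check device_caps, not capabilities: the latter aggregates the
	 * capabilities of all device nodes created by the driver. */
	return ret == 0 && (cap.device_caps & V4L2_CAP_META_CAPTURE);
}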

> > +    <para>
> > +At least one of the read/write or streaming I/O methods must be supported.
> > +    </para>
> > +  </section>
> > +
> > +  <section>
> > +    <title>Data Format Negotiation</title>
> > +
> > +    <para>
> > +The metadata device uses the <link linkend="format">format</link> ioctls to
> > +select the capture format. The metadata buffer content format is bound to that
> > +selectable format. In addition to the basic
> > +<link linkend="format">format</link> ioctls, the &VIDIOC-ENUM-FMT; ioctl
> > +must be supported as well.
> > +    </para>
> > +
> > +    <para>
> > +To use the <link linkend="format">format</link> ioctls applications set the
> > +<structfield>type</structfield> field of a &v4l2-format; to
> > +<constant>V4L2_BUF_TYPE_META_CAPTURE</constant> and use the &v4l2-meta-format;
> > +<structfield>meta</structfield> member of the <structfield>fmt</structfield>
> > +union as needed per the desired operation.
> > +Currently there are two fields, <structfield>dataformat</structfield> and
> > +<structfield>buffersize</structfield>, of struct &v4l2-meta-format; that are
> > +used. Content of the <structfield>dataformat</structfield> is the V4L2 FourCC
> > +code of the data format. The <structfield>buffersize</structfield> field is the
> > +maximum buffer size in bytes required for data transfer, set by the driver in
> > +order to inform applications.
> 
> Should it be mentioned here that changing the video format might change the
> buffersize? In case the buffersize is always a multiple of the width?

I'll reply to Sakari's mail further down in this thread about this.
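
As an aside, the negotiation described above could look as follows from
userspace; a minimal sketch against the API proposed in this patch (fd is an
open metadata-capable device node, includes and error handling omitted):

	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_META_CAPTURE;

	/* The driver reports the current data format and the maximum
	 * buffer size it requires for that format. */
	if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)
		printf("dataformat %4.4s, buffersize %u bytes\n",
		       (const char *)&fmt.fmt.meta.dataformat,
		       fmt.fmt.meta.buffersize);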

> > +    </para>
> > +
> > +    <table pgwide="1" frame="none" id="v4l2-meta-format">
> > +      <title>struct <structname>v4l2_meta_format</structname></title>
> > +      <tgroup cols="3">
> > +        &cs-str;
> > +        <tbody valign="top">
> > +          <row>
> > +            <entry>__u32</entry>
> > +            <entry><structfield>dataformat</structfield></entry>
> > +            <entry>
> > +The data format, set by the application. This is a little endian
> > +<link linkend="v4l2-fourcc">four character code</link>.
> > +V4L2 defines metadata formats in <xref linkend="meta-formats" />.
> > +           </entry>
> > +          </row>
> > +          <row>
> > +            <entry>__u32</entry>
> > +            <entry><structfield>buffersize</structfield></entry>
> > +            <entry>
> > +Maximum size in bytes required for data. Value is set by the driver.
> > +           </entry>
> > +          </row>
> > +          <row>
> > +            <entry>__u8</entry>
> > +            <entry><structfield>reserved[24]</structfield></entry>
> > +            <entry>This array is reserved for future extensions.
> > +Drivers and applications must set it to zero.</entry>
> > +          </row>
> > +        </tbody>
> > +      </tgroup>
> > +    </table>
> > +
> > +  </section>

[snip]

> > diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> > index 28e5be2c2eef..5d003152ff68 100644
> > --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> > +++ b/drivers/media/v4l2-core/v4l2-ioctl.c

[snip]

> > @@ -1534,6 +1558,11 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
> >  			break;
> >  		CLEAR_AFTER_FIELD(p, fmt.sdr);
> >  		return ops->vidioc_s_fmt_sdr_out(file, fh, arg);
> > +	case V4L2_BUF_TYPE_META_CAPTURE:
> > +		if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_meta_cap))
> > +			break;
> > +		CLEAR_AFTER_FIELD(p, fmt.meta);
> 
> I would suggest using "CLEAR_FROM_FIELD(p, fmt.meta.reserved)" here. Of
> course, you'd need to add a new CLEAR_FROM_FIELD macro for this.
> 
> I think this also should be used for the SDR_CAPTURE/OUTPUT types and
> possibly for other types as well (can be done later of course).
> 
> This would also zero the reserved field, so drivers don't need to care about
> it.

Will be fixed in the next version (in v4l_g_fmt() too).
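
For reference, one possible definition of the suggested macro, modeled on the
existing CLEAR_AFTER_FIELD; a sketch only, not the actual change:

	/* Zero everything from 'field' (inclusive) to the end of the
	 * structure, so trailing reserved fields are cleared for drivers. */
	#define CLEAR_FROM_FIELD(p, field) \
		memset(&(p)->field, 0, \
		       sizeof(*(p)) - offsetof(typeof(*(p)), field))

	/* usage: CLEAR_FROM_FIELD(p, fmt.meta.reserved); */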

> > +		return ops->vidioc_s_fmt_meta_cap(file, fh, arg);
> >  	}
> >  	return -EINVAL;
> >  }

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-05-24 16:26         ` Sakari Ailus
@ 2016-06-22 16:51           ` Laurent Pinchart
  2016-06-24 23:15             ` Sakari Ailus
  0 siblings, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2016-06-22 16:51 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Hans Verkuil, Laurent Pinchart, linux-media, linux-renesas-soc,
	Hans Verkuil, Mauro Carvalho Chehab

Hello,

On Tuesday 24 May 2016 19:26:32 Sakari Ailus wrote:
> On Tue, May 24, 2016 at 05:36:42PM +0200, Hans Verkuil wrote:
> > On 05/24/2016 05:28 PM, Sakari Ailus wrote:
> > > Hi Hans,
> > > 
> > >> Should it be mentioned here that changing the video format might change
> > >> the buffersize? In case the buffersize is always a multiple of the
> > >> width?
> > > 
> > > Isn't that the case in general, as with pixel formats? buffersize could
> > > also be something other than a multiple of width (there's no width for
> > > metadata formats) due to e.g. padding required by hardware.
> > 
> > Well, I don't think it is obvious that the metadata buffersize depends on
> > the video width. Perhaps developers who are experienced with CSI know
> > this, but if you know little or nothing about CSI, then it can be
> > unexpected (hey, that was the case for me!).
> > 
> > I think it doesn't hurt to mention this relation.
> 
> Ah, I think I misunderstood you at first.
> 
> Typically the metadata width is the same as the image data width, that's
> true. And it's how the hardware works. This is still visible in the media
> bus format and the solution belongs rather to how multiple streams over a
> single link are supported.

Let me clarify this.

In the general case there's no concept of metadata width when stored in 
memory. The two most common use cases for metadata store register values (or 
similar information), or statistics. The former is just a byte stream in some 
kind of TLV (Type Length Value) format. The latter is a set of values or arrays 
computed either on the full image or on subwindows, possibly laid out as a 
grid.

When transported over a bus, however, metadata can sometimes have a width and 
height. That's the case for CSI-2, which is a line-oriented format. Metadata 
then need to be broken into chunks transmitted in a CSI-2 line packet, even if 
they don't correspond to a line of an image. The line width on the bus is just 
the number of bytes transmitted in a single packet, which could be chosen 
freely (within the range allowed by CSI-2). In practice, to simplify the 
implementation, the line width is chosen to be identical to the line width of 
the image frames that the metadata correspond to, but that's not a requirement 
of either CSI-2 or the metadata format itself.
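
As a purely illustrative example, 8192 bytes of metadata could be transmitted
as four 2048-byte line packets or as two 4096-byte packets; either split is
valid, and the choice doesn't change how the data ends up laid out in memory.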

We thus need to expose metadata width and height on subdevs to ensure proper 
configuration of the transmitter and receiver, but that's not strictly 
mandatory on video nodes.

The metadata buffer size itself doesn't depend on the width and height of the 
corresponding image frames. A histogram using 64 bins on 3 components will be 
stored exactly the same way regardless of whether it's computed on a VGA or 
1080p frame. The buffer size depends on the configuration of the metadata 
source only, which in the case of the histogram generator in the VSP would 
include a control that decides whether to compute the histogram with 64 bins 
or 256 bins (the latter needs a 4 times larger buffer).
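
As a worked example, assuming for illustration 32-bit bin counters and no
header: 64 bins x 3 components x 4 bytes = 768 bytes, while 256 bins x 3
components x 4 bytes = 3072 bytes, the 4 times larger buffer mentioned above.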

For metadata computed on a variable number of subwindows, the buffer size will 
depend on the number of subwindows, which may in turn be influenced by the 
size of the image. It could make sense to use fewer subwindows to compute AF 
data on a VGA image than on a 4k image. That is not, however, a requirement, 
and there's no direct mapping between image size and metadata size, the number 
of subwindows usually being configured by userspace.

I hope this clarifies the problem. Please let me know if you have additional 
questions or thoughts, and if you see anything that should be worded 
differently in the documentation (or even structures that should get new 
fields).

> It's just that setting the image media bus format affects the metadata media
> bus format. I guess that could be mentioned, albeit it's hardware-specific:
> on some sensors the metadata width is independent of the image width. Even then
> this is not where I'd put it. I'd get back to the topic when documenting
> how the API for multiple streams over a single link works.

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support
  2016-06-13 15:33   ` Guennadi Liakhovetski
@ 2016-06-24 14:35     ` Laurent Pinchart
  0 siblings, 0 replies; 21+ messages in thread
From: Laurent Pinchart @ 2016-06-24 14:35 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Laurent Pinchart, linux-media, linux-renesas-soc, Sakari Ailus,
	Hans Verkuil, Mauro Carvalho Chehab

Hi Guennadi,

On Monday 13 Jun 2016 17:33:01 Guennadi Liakhovetski wrote:
> On Thu, 12 May 2016, Laurent Pinchart wrote:
> > The HGO is a Histogram Generator One-Dimension. It computes per-channel
> > histograms over a configurable region of the image with optional
> > subsampling.
> > 
> > Signed-off-by: Laurent Pinchart
> > <laurent.pinchart+renesas@ideasonboard.com>
> 
> [snip]
> 
> Do I understand this correctly: with this, if such a metadata device
> is opened while no video data is streaming, all calls will succeed, but
> no buffers will be dequeued, i.e. the user-space application will hang in
> DQBUF or poll() / select() indefinitely?

That's correct. Same as with a mem-to-mem device, if you open the capture 
node, queue buffers and start streaming, DQBUF will wait until you start 
supplying buffers on the node at the other end of the pipeline.
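
An application that wants to guard against blocking forever can poll() with a
timeout before dequeuing. A hypothetical sketch (meta_fd is an open metadata
device node):

	#include <poll.h>
	#include <stdio.h>

	/* Wait up to two seconds for a metadata buffer before giving up. */
	static int wait_for_metadata(int meta_fd)
	{
		struct pollfd pfd = { .fd = meta_fd, .events = POLLIN };
		int ret = poll(&pfd, 1, 2000);

		if (ret == 0)
			fprintf(stderr, "no metadata yet, is the producer streaming?\n");

		/* When this returns non-zero, VIDIOC_DQBUF won't block. */
		return ret > 0 && (pfd.revents & POLLIN);
	}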

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format
  2016-06-22 16:51           ` Laurent Pinchart
@ 2016-06-24 23:15             ` Sakari Ailus
  0 siblings, 0 replies; 21+ messages in thread
From: Sakari Ailus @ 2016-06-24 23:15 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Hans Verkuil, Laurent Pinchart, linux-media, linux-renesas-soc,
	Hans Verkuil, Mauro Carvalho Chehab

Hi Laurent,

On Wed, Jun 22, 2016 at 07:51:06PM +0300, Laurent Pinchart wrote:
> Hello,
> 
> On Tuesday 24 May 2016 19:26:32 Sakari Ailus wrote:
> > On Tue, May 24, 2016 at 05:36:42PM +0200, Hans Verkuil wrote:
> > > On 05/24/2016 05:28 PM, Sakari Ailus wrote:
> > > > Hi Hans,
> > > > 
> > > >> Should it be mentioned here that changing the video format might change
> > > >> the buffersize? In case the buffersize is always a multiple of the
> > > >> width?
> > > > 
> > > > Isn't that the case in general, as with pixel formats? buffersize could
> > > > also be something other than a multiple of width (there's no width for
> > > > metadata formats) due to e.g. padding required by hardware.
> > > 
> > > Well, I don't think it is obvious that the metadata buffersize depends on
> > > the video width. Perhaps developers who are experienced with CSI know
> > > this, but if you know little or nothing about CSI, then it can be
> > > unexpected (hey, that was the case for me!).
> > > 
> > > I think it doesn't hurt to mention this relation.
> > 
> > Ah, I think I misunderstood you at first.
> > 
> > Typically the metadata width is the same as the image data width, that's
> > true. And it's how the hardware works. This is still visible in the media
> > bus format and the solution belongs rather to how multiple streams over a
> > single link are supported.
> 
> Let me clarify this.
> 
> In the general case there's no concept of metadata width when stored in 
> memory. The two most common use cases for metadata store register values (or 
> similar information), or statistics. The former is just a byte stream in some 

In the general case, perhaps not.

But in specific cases there is: sensor-produced embedded data, for instance,
is indeed line-based, with each line having a preamble and a predetermined
amount of data after the preamble. In this respect it's very much like image data.

> kind of TLV (Type Length Value) format. The latter is a set of values or arrays 
> computed either on the full image or on subwindows, possibly laid out as a 
> grid.
> 
> When transported over a bus, however, metadata can sometimes have a width and 
> height. That's the case for CSI-2, which is a line-oriented format. Metadata 
> then need to be broken into chunks transmitted in a CSI-2 line packet, even if 
> they don't correspond to a line of an image. The line width on the bus is just 
> the number of bytes transmitted in a single packet, which could be chosen 
> freely (within the range allowed by CSI-2). In practice, to simplify the 
> implementation, the line width is chosen to be identical to the line width of 
> the image frames that the metadata correspond to, but that's not a requirement 
> of either CSI-2 or the metadata format itself.
> 
> We thus need to expose metadata width and height on subdevs to ensure proper 
> configuration of the transmitter and receiver, but that's not strictly 
> mandatory on video nodes.

To support sensor embedded data, I think we do need bytesperline, width and
height. They might not be applicable to e.g. some types of statistics.

The rest looks good to me.
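
To make that concrete, a purely illustrative sketch of what a line-based
extension of the proposed structure could look like; the width, height and
bytesperline fields below are hypothetical, not part of this series:

	struct v4l2_meta_format {
		__u32	dataformat;	/* FourCC of the metadata format */
		__u32	buffersize;	/* max size in bytes, set by the driver */
		/* hypothetical additions for line-based metadata such as
		 * sensor embedded data: */
		__u32	width;		/* samples per metadata line */
		__u32	height;		/* number of metadata lines */
		__u32	bytesperline;	/* line stride in memory, incl. padding */
		__u8	reserved[12];
	} __attribute__ ((packed));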

> 
> The metadata buffer size itself doesn't depend on the width and height of the 
> corresponding image frames. A histogram using 64 bins on 3 components will be 
> stored exactly the same way regardless of whether it's computed on a VGA or 
> 1080p frame. The buffer size depends on the configuration of the metadata 
> source only, which in the case of the histogram generator in the VSP would 
> include a control that decides whether to compute the histogram with 64 bins 
> or 256 bins (the latter needs a 4 times larger buffer).
> 
> For metadata computed on a variable number of subwindows, the buffer size will 
> depend on the number of subwindows, which may in turn be influenced by the 
> size of the image. It could make sense to use fewer subwindows to compute AF 
> data on a VGA image than on a 4k image. That is not, however, a requirement, 
> and there's no direct mapping between image size and metadata size, the number 
> of subwindows usually being configured by userspace.
> 
> I hope this clarifies the problem. Please let me know if you have additional 
> questions or thoughts, and if you see anything that should be worded 
> differently in the documentation (or even structures that should get new 
> fields).
> 
> > It's just that setting the image media bus format affects the metadata media
> > bus format. I guess that could be mentioned, albeit it's hardware-specific:
> > on some sensors the metadata width is independent of the image width. Even then
> > this is not where I'd put it. I'd get back to the topic when documenting
> > how the API for multiple streams over a single link works.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2016-06-24 23:15 UTC | newest]

Thread overview: 21+ messages
2016-05-12  0:17 [PATCH/RFC v2 0/4] Meta-data video device type Laurent Pinchart
2016-05-12  0:18 ` [PATCH/RFC v2 1/4] v4l: Add metadata buffer type and format Laurent Pinchart
2016-05-23 10:09   ` Hans Verkuil
2016-05-24 15:28     ` Sakari Ailus
2016-05-24 15:36       ` Hans Verkuil
2016-05-24 16:26         ` Sakari Ailus
2016-06-22 16:51           ` Laurent Pinchart
2016-06-24 23:15             ` Sakari Ailus
2016-06-22 16:32     ` Laurent Pinchart
2016-05-12  0:18 ` [PATCH/RFC v2 2/4] v4l: Add metadata video device type Laurent Pinchart
2016-05-12 21:22   ` Sakari Ailus
2016-05-12  0:18 ` [PATCH/RFC v2 3/4] v4l: Define a pixel format for the R-Car VSP1 1-D histogram engine Laurent Pinchart
2016-05-12  0:18 ` [PATCH/RFC v2 4/4] v4l: vsp1: Add HGO support Laurent Pinchart
2016-06-13 15:33   ` Guennadi Liakhovetski
2016-06-24 14:35     ` Laurent Pinchart
2016-05-13  9:26 ` [PATCH/RFC v2 0/4] Meta-data video device type Hans Verkuil
2016-05-13  9:52   ` Sakari Ailus
2016-05-16  9:20     ` Laurent Pinchart
2016-05-23 13:00     ` Guennadi Liakhovetski
2016-05-16  9:21   ` Laurent Pinchart
2016-05-17  9:26     ` Sakari Ailus
