* [RFC v2] v4l2: Compressed data formats on the video bus
@ 2011-12-01 10:20 Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 1/4] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
                   ` (3 more replies)
  0 siblings, 4 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-01 10:20 UTC (permalink / raw)
  To: linux-media
  Cc: mchehab, laurent.pinchart, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, sw0312.kim, s.nawrocki

Hello,

this is an updated version of the changeset extending struct v4l2_mbus_framefmt
with a new framesamples field.

Changes since v1:
 - Docbook documentation improvements
 - drivers that are exposing a sub-device node are modified to initialize
   the new struct v4l2_mbus_framefmt member to 0 if they support only
   uncompressed formats
 
The proposed semantics for the framesamples parameter are as follows:

 - the value is propagated across video pipeline entities wherever 'code'
   indicates a compressed format;
 - the subdevs adjust the value if needed;
 - although there is currently only one compressed data format on the media
   bus - V4L2_MBUS_FMT_JPEG_1X8, which corresponds to V4L2_PIX_FMT_JPEG and
   for which one sample on the media bus equals one byte in memory - it is
   assumed that the host knows exactly what the framesamples/sizeimage ratio
   is and will validate the framesamples/sizeimage values before starting
   streaming;
 - the host will internally query the relevant subdev to properly handle
   'sizeimage' in the VIDIOC_TRY/S_FMT ioctls (a minimal application-side
   sketch follows below).
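
For illustration, here is a minimal application-side sketch of the flow
described above. It is not part of the patch set: it assumes uAPI headers
that already carry the proposed framesamples field, and the pad number,
file descriptors and multi-planar capture usage are only an example.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

/*
 * Query the active format at the pipeline's data source (the sensor
 * subdev, pad 0) and use the reported framesamples value as sizeimage
 * for a JPEG capture format, relying on the 1:1 sample-to-byte ratio
 * of V4L2_MBUS_FMT_JPEG_1X8.
 */
int propagate_jpeg_buffer_size(int subdev_fd, int video_fd,
			       unsigned int width, unsigned int height)
{
	struct v4l2_subdev_format sfmt;
	struct v4l2_format fmt;

	memset(&sfmt, 0, sizeof(sfmt));
	sfmt.pad = 0;
	sfmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	if (ioctl(subdev_fd, VIDIOC_SUBDEV_G_FMT, &sfmt) < 0)
		return -1;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.width = width;
	fmt.fmt.pix_mp.height = height;
	fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_JPEG;
	fmt.fmt.pix_mp.num_planes = 1;
	fmt.fmt.pix_mp.plane_fmt[0].sizeimage = sfmt.format.framesamples;

	return ioctl(video_fd, VIDIOC_S_FMT, &fmt);
}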
      
And here is a link to my initial RFC:
http://www.mail-archive.com/linux-media@vger.kernel.org/msg39321.html


-- 
Regards, 

Sylwester Nawrocki 
Samsung Poland R&D Center


* [PATCH/RFC v2 1/4] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-01 10:20 [RFC v2] v4l2: Compressed data formats on the video bus Sylwester Nawrocki
@ 2011-12-01 10:20 ` Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 2/4] m5mols: Add support for buffer size configuration for compressed formats Sylwester Nawrocki
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-01 10:20 UTC (permalink / raw)
  To: linux-media
  Cc: mchehab, laurent.pinchart, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, sw0312.kim, s.nawrocki,
	Kyungmin Park

The purpose of the new field is to allow the video pipeline elements
to negotiate memory buffer size for compressed data frames, where
the buffer size cannot be derived from pixel width and height and
the pixel code.

For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
framesamples parameter should be calculated by the driver from pixel
width, height, color format and other parameters if required and
returned to the caller. This applies to compressed data formats only.

The application should propagate the framesamples value as returned
by the first sub-device within a data pipeline, i.e. at the
pipeline's data source.

For compressed data formats the host drivers should internally
validate the framesamples parameter values before streaming is
enabled, to make sure the memory buffer size requirements are
satisfied along the pipeline.
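
For illustration, the kind of check meant here could look roughly as
follows. This is only a sketch, not part of the patch: the helper name
is hypothetical, and it assumes the 1:1 sample-to-byte ratio of
V4L2_MBUS_FMT_JPEG_1X8.

#include <linux/errno.h>
#include <linux/v4l2-mediabus.h>
#include <media/v4l2-subdev.h>

/*
 * Compare the buffer payload allocated by the host with the
 * framesamples value reported by the pipeline's data source.
 */
static int check_compressed_buffer_size(struct v4l2_subdev *source,
					unsigned long payload)
{
	struct v4l2_subdev_format fmt = {
		.pad = 0,
		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
	};
	int ret;

	ret = v4l2_subdev_call(source, pad, get_fmt, NULL, &fmt);
	if (ret < 0)
		return ret;

	if (fmt.format.code == V4L2_MBUS_FMT_JPEG_1X8 &&
	    payload < fmt.format.framesamples)
		return -EPIPE;

	return 0;
}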

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
 Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
 Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
 include/linux/v4l2-mediabus.h                      |    4 +++-
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml b/Documentation/DocBook/media/v4l/dev-subdev.xml
index 0916a73..b9d24eb 100644
--- a/Documentation/DocBook/media/v4l/dev-subdev.xml
+++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
@@ -113,7 +113,7 @@
     <para>Drivers that implement the <link linkend="media-controller-intro">media
     API</link> can expose pad-level image format configuration to applications.
     When they do, applications can use the &VIDIOC-SUBDEV-G-FMT; and
-    &VIDIOC-SUBDEV-S-FMT; ioctls. to negotiate formats on a per-pad basis.</para>
+    &VIDIOC-SUBDEV-S-FMT; ioctls to negotiate formats on a per-pad basis.</para>
 
     <para>Applications are responsible for configuring coherent parameters on
     the whole pipeline and making sure that connected pads have compatible
@@ -160,7 +160,13 @@
       guaranteed to be supported by the device. In particular, drivers guarantee
       that a returned format will not be further changed if passed to an
       &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such as
-      formats on other pads or links' configuration are not changed).</para>
+      formats on other pads or links' configuration are not changed). When a
+      device contains a data encoder, the <structfield>
+      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
+      </structfield> field value may be further changed, if parameters of the
+      encoding process are changed after the format has been negotiated. In
+      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
+      query an updated format.</para>
 
       <para>Drivers automatically propagate formats inside sub-devices. When a
       try or active format is set on a pad, corresponding formats on other pads
diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml b/Documentation/DocBook/media/v4l/subdev-formats.xml
index 49c532e..7c202a1 100644
--- a/Documentation/DocBook/media/v4l/subdev-formats.xml
+++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
@@ -33,9 +33,16 @@
 	  <entry>Image colorspace, from &v4l2-colorspace;. See
 	  <xref linkend="colorspaces" /> for details.</entry>
 	</row>
+	<row id="v4l2-mbus-framefmt-framesamples">
+	  <entry>__u32</entry>
+	  <entry><structfield>framesamples</structfield></entry>
+	  <entry>Maximum number of bus samples per frame for compressed data
+	    formats. For uncompressed formats drivers and applications must
+	    set this parameter to zero.</entry>
+	</row>
 	<row>
 	  <entry>__u32</entry>
-	  <entry><structfield>reserved</structfield>[7]</entry>
+	  <entry><structfield>reserved</structfield>[6]</entry>
 	  <entry>Reserved for future extensions. Applications and drivers must
 	  set the array to zero.</entry>
 	</row>
diff --git a/include/linux/v4l2-mediabus.h b/include/linux/v4l2-mediabus.h
index 5ea7f75..f18d6cd 100644
--- a/include/linux/v4l2-mediabus.h
+++ b/include/linux/v4l2-mediabus.h
@@ -101,6 +101,7 @@ enum v4l2_mbus_pixelcode {
  * @code:	data format code (from enum v4l2_mbus_pixelcode)
  * @field:	used interlacing type (from enum v4l2_field)
  * @colorspace:	colorspace of the data (from enum v4l2_colorspace)
+ * @framesamples: maximum number of bus samples per frame
  */
 struct v4l2_mbus_framefmt {
 	__u32			width;
@@ -108,7 +109,8 @@ struct v4l2_mbus_framefmt {
 	__u32			code;
 	__u32			field;
 	__u32			colorspace;
-	__u32			reserved[7];
+	__u32			framesamples;
+	__u32			reserved[6];
 };
 
 #endif
-- 
1.7.7.2



* [PATCH/RFC v2 2/4] m5mols: Add support for buffer size configuration for compressed formats
  2011-12-01 10:20 [RFC v2] v4l2: Compressed data formats on the video bus Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 1/4] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
@ 2011-12-01 10:20 ` Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 3/4] s5p-fimc: Add support for media bus framesamples parameter Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle " Sylwester Nawrocki
  3 siblings, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-01 10:20 UTC (permalink / raw)
  To: linux-media
  Cc: mchehab, laurent.pinchart, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, sw0312.kim, s.nawrocki,
	Kyungmin Park

Use the new framesamples field of struct v4l2_mbus_framefmt to return
the maximum size of data transmitted per single frame. The framesamples
value is adjusted according to the pixel format and configured in the
ISP registers.

Apart from the pixel width and height, the frame size can also be made
dependent on the JPEG image quality, when a corresponding control is
available at the sub-device interface.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
 drivers/media/video/m5mols/m5mols.h         |    8 ++++++++
 drivers/media/video/m5mols/m5mols_capture.c |    4 ++++
 drivers/media/video/m5mols/m5mols_core.c    |   15 +++++++++++++++
 drivers/media/video/m5mols/m5mols_reg.h     |    1 +
 4 files changed, 28 insertions(+), 0 deletions(-)

diff --git a/drivers/media/video/m5mols/m5mols.h b/drivers/media/video/m5mols/m5mols.h
index 42c494c..3e71151 100644
--- a/drivers/media/video/m5mols/m5mols.h
+++ b/drivers/media/video/m5mols/m5mols.h
@@ -16,9 +16,17 @@
 #ifndef M5MOLS_H
 #define M5MOLS_H
 
+#include <asm/sizes.h>
 #include <media/v4l2-subdev.h>
 #include "m5mols_reg.h"
 
+/*
+ * This is an amount of data transmitted in addition to the configured
+ * JPEG_SIZE_MAX value.
+ */
+#define M5MOLS_JPEG_TAGS_SIZE		0x20000
+#define M5MOLS_MAIN_JPEG_SIZE_MAX	(5 * SZ_1M)
+
 extern int m5mols_debug;
 
 #define to_m5mols(__sd)	container_of(__sd, struct m5mols_info, sd)
diff --git a/drivers/media/video/m5mols/m5mols_capture.c b/drivers/media/video/m5mols/m5mols_capture.c
index 3248ac8..c8da22f 100644
--- a/drivers/media/video/m5mols/m5mols_capture.c
+++ b/drivers/media/video/m5mols/m5mols_capture.c
@@ -119,6 +119,7 @@ static int m5mols_capture_info(struct m5mols_info *info)
 
 int m5mols_start_capture(struct m5mols_info *info)
 {
+	struct v4l2_mbus_framefmt *mf = &info->ffmt[info->res_type];
 	struct v4l2_subdev *sd = &info->sd;
 	u8 resolution = info->resolution;
 	int timeout;
@@ -170,6 +171,9 @@ int m5mols_start_capture(struct m5mols_info *info)
 	if (!ret)
 		ret = m5mols_write(sd, CAPP_MAIN_IMAGE_SIZE, resolution);
 	if (!ret)
+		ret = m5mols_write(sd, CAPP_JPEG_SIZE_MAX,
+				   mf->framesamples - M5MOLS_JPEG_TAGS_SIZE);
+	if (!ret)
 		ret = m5mols_enable_interrupt(sd, REG_INT_CAPTURE);
 	if (!ret)
 		ret = m5mols_write(sd, CAPC_START, REG_CAP_START_MAIN);
diff --git a/drivers/media/video/m5mols/m5mols_core.c b/drivers/media/video/m5mols/m5mols_core.c
index 3344df5..c230a44 100644
--- a/drivers/media/video/m5mols/m5mols_core.c
+++ b/drivers/media/video/m5mols/m5mols_core.c
@@ -505,6 +505,16 @@ static struct v4l2_mbus_framefmt *__find_format(struct m5mols_info *info,
 	return &info->ffmt[type];
 }
 
+static inline void m5mols_get_buffer_size(struct m5mols_info *info,
+					  struct v4l2_mbus_framefmt *mf)
+{
+	u32 min_sz = mf->width * mf->height + M5MOLS_JPEG_TAGS_SIZE;
+
+	/* Clamp provided value to the maximum needed */
+	mf->framesamples = clamp_t(u32, mf->framesamples, min_sz,
+				   M5MOLS_MAIN_JPEG_SIZE_MAX);
+}
+
 static int m5mols_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 			  struct v4l2_subdev_format *fmt)
 {
@@ -515,6 +525,9 @@ static int m5mols_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	if (!format)
 		return -EINVAL;
 
+	if (format->code == V4L2_MBUS_FMT_JPEG_1X8)
+		m5mols_get_buffer_size(info, format);
+
 	fmt->format = *format;
 	return 0;
 }
@@ -537,6 +550,8 @@ static int m5mols_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	if (!sfmt)
 		return 0;
 
+	if (format->code == V4L2_MBUS_FMT_JPEG_1X8)
+		m5mols_get_buffer_size(info, format);
 
 	format->code = m5mols_default_ffmt[type].code;
 	format->colorspace = V4L2_COLORSPACE_JPEG;
diff --git a/drivers/media/video/m5mols/m5mols_reg.h b/drivers/media/video/m5mols/m5mols_reg.h
index 642d8c5..43a6445 100644
--- a/drivers/media/video/m5mols/m5mols_reg.h
+++ b/drivers/media/video/m5mols/m5mols_reg.h
@@ -310,6 +310,7 @@
 #define REG_JPEG		0x10
 
 #define CAPP_MAIN_IMAGE_SIZE	I2C_REG(CAT_CAPT_PARM, 0x01, 1)
+#define CAPP_JPEG_SIZE_MAX	I2C_REG(CAT_CAPT_PARM, 0x0f, 4)
 
 #define CAPP_MCC_MODE		I2C_REG(CAT_CAPT_PARM, 0x1d, 1)
 #define REG_MCC_OFF		0x00
-- 
1.7.7.2



* [PATCH/RFC v2 3/4] s5p-fimc: Add support for media bus framesamples parameter
  2011-12-01 10:20 [RFC v2] v4l2: Compressed data formats on the video bus Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 1/4] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 2/4] m5mols: Add support for buffer size configuration for compressed formats Sylwester Nawrocki
@ 2011-12-01 10:20 ` Sylwester Nawrocki
  2011-12-01 10:20 ` [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle " Sylwester Nawrocki
  3 siblings, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-01 10:20 UTC (permalink / raw)
  To: linux-media
  Cc: mchehab, laurent.pinchart, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, sw0312.kim, s.nawrocki,
	Kyungmin Park

The framesamples field of struct v4l2_mbus_framefmt is used to
retrieve from the sensor subdev the exact maximum memory buffer
size required for a compressed data frame. This makes it possible
to avoid allocating excessively large buffers in the host driver.

To make sure the size of the allocated buffers matches the subdev
configuration, the video pipeline validation performed during the
VIDIOC_STREAMON ioctl has been extended with an additional check.

The FMT_FLAGS_COMPRESSED flag indicates that the buffer size should
be determined from the framesamples member of struct
v4l2_mbus_framefmt.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
 drivers/media/video/s5p-fimc/fimc-capture.c |   53 +++++++++++++++++++++++++--
 drivers/media/video/s5p-fimc/fimc-core.c    |    7 +++-
 drivers/media/video/s5p-fimc/fimc-core.h    |    5 ++-
 3 files changed, 59 insertions(+), 6 deletions(-)

diff --git a/drivers/media/video/s5p-fimc/fimc-capture.c b/drivers/media/video/s5p-fimc/fimc-capture.c
index 8ca4d32..48b2592 100644
--- a/drivers/media/video/s5p-fimc/fimc-capture.c
+++ b/drivers/media/video/s5p-fimc/fimc-capture.c
@@ -250,6 +250,10 @@ static unsigned int get_plane_size(struct fimc_frame *fr, unsigned int plane)
 {
 	if (!fr || plane >= fr->fmt->memplanes)
 		return 0;
+
+	if (fimc_fmt_is_jpeg(fr->fmt->color))
+		return fr->payload[0];
+
 	return fr->f_width * fr->f_height * fr->fmt->depth[plane] / 8;
 }
 
@@ -722,6 +726,29 @@ static int fimc_pipeline_try_format(struct fimc_ctx *ctx,
 	return 0;
 }
 
+/* Query the sensor for required buffer size (applicable to compressed data). */
+static int fimc_capture_get_sizeimage(struct fimc_dev *fimc, unsigned int *size)
+{
+	struct v4l2_subdev *sd = fimc->pipeline.sensor;
+	struct v4l2_subdev_format sfmt;
+	int ret;
+
+	sfmt.pad = 0;
+	sfmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sfmt);
+	if (ret < 0)
+		return ret;
+
+	if (sfmt.format.framesamples > FIMC_MAX_JPEG_BUF_SIZE) {
+		v4l2_err(sd, "Unsupported frame buffer size\n");
+		return -EINVAL;
+	}
+
+	*size = sfmt.format.framesamples;
+
+	return 0;
+}
+
 static int fimc_cap_g_fmt_mplane(struct file *file, void *fh,
 				 struct v4l2_format *f)
 {
@@ -774,7 +801,11 @@ static int fimc_cap_try_fmt_mplane(struct file *file, void *fh,
 	}
 
 	fimc_adjust_mplane_format(ffmt, pix->width, pix->height, pix);
-	return 0;
+
+	if (!(ffmt->flags & FMT_FLAGS_COMPRESSED))
+		return 0;
+
+	return fimc_capture_get_sizeimage(fimc, &pix->plane_fmt[0].sizeimage);
 }
 
 static void fimc_capture_mark_jpeg_xfer(struct fimc_ctx *ctx, bool jpeg)
@@ -837,9 +868,17 @@ static int fimc_capture_set_format(struct fimc_dev *fimc, struct v4l2_format *f)
 		pix->height = mf->height;
 	}
 	fimc_adjust_mplane_format(ff->fmt, pix->width, pix->height, pix);
-	for (i = 0; i < ff->fmt->colplanes; i++)
-		ff->payload[i] =
-			(pix->width * pix->height * ff->fmt->depth[i]) / 8;
+
+	if (ff->fmt->flags & FMT_FLAGS_COMPRESSED) {
+		ret = fimc_capture_get_sizeimage(fimc, &ff->payload[0]);
+		if (ret < 0)
+			return ret;
+		pix->plane_fmt[0].sizeimage = ff->payload[0];
+	} else {
+		for (i = 0; i < ff->fmt->colplanes; i++)
+			ff->payload[i] = pix->width * pix->height *
+					 ff->fmt->depth[i] / 8;
+	}
 
 	set_frame_bounds(ff, pix->width, pix->height);
 	/* Reset the composition rectangle if not yet configured */
@@ -948,6 +987,12 @@ static int fimc_pipeline_validate(struct fimc_dev *fimc)
 		    src_fmt.format.height != sink_fmt.format.height ||
 		    src_fmt.format.code != sink_fmt.format.code)
 			return -EPIPE;
+
+		if (sd == fimc->pipeline.sensor &&
+		    src_fmt.format.code == V4L2_MBUS_FMT_JPEG_1X8 &&
+		    vid_cap->ctx->d_frame.payload[0] <
+		    src_fmt.format.framesamples)
+			return -EPIPE;
 	}
 	return 0;
 }
diff --git a/drivers/media/video/s5p-fimc/fimc-core.c b/drivers/media/video/s5p-fimc/fimc-core.c
index caf1c08..331beda 100644
--- a/drivers/media/video/s5p-fimc/fimc-core.c
+++ b/drivers/media/video/s5p-fimc/fimc-core.c
@@ -183,7 +183,7 @@ static struct fimc_fmt fimc_formats[] = {
 		.memplanes	= 1,
 		.colplanes	= 1,
 		.mbus_code	= V4L2_MBUS_FMT_JPEG_1X8,
-		.flags		= FMT_FLAGS_CAM,
+		.flags		= FMT_FLAGS_CAM | FMT_FLAGS_COMPRESSED,
 	},
 };
 
@@ -965,6 +965,11 @@ int fimc_fill_format(struct fimc_frame *frame, struct v4l2_format *f)
 		if (frame->fmt->colplanes == 1) /* packed formats */
 			bpl = (bpl * frame->fmt->depth[0]) / 8;
 		pixm->plane_fmt[i].bytesperline = bpl;
+
+		if (frame->fmt->flags & FMT_FLAGS_COMPRESSED) {
+			pixm->plane_fmt[i].sizeimage = frame->payload[i];
+			continue;
+		}
 		pixm->plane_fmt[i].sizeimage = (frame->o_width *
 			frame->o_height * frame->fmt->depth[i]) / 8;
 	}
diff --git a/drivers/media/video/s5p-fimc/fimc-core.h b/drivers/media/video/s5p-fimc/fimc-core.h
index 21e4ad4..643dc0e 100644
--- a/drivers/media/video/s5p-fimc/fimc-core.h
+++ b/drivers/media/video/s5p-fimc/fimc-core.h
@@ -17,6 +17,7 @@
 #include <linux/types.h>
 #include <linux/videodev2.h>
 #include <linux/io.h>
+#include <asm/sizes.h>
 
 #include <media/media-entity.h>
 #include <media/videobuf2-core.h>
@@ -44,6 +45,7 @@
 #define SCALER_MAX_VRATIO	64
 #define DMA_MIN_SIZE		8
 #define FIMC_CAMIF_MAX_HEIGHT	0x2000
+#define FIMC_MAX_JPEG_BUF_SIZE	(10 * SZ_1M)
 
 /* indices to the clocks array */
 enum {
@@ -168,6 +170,7 @@ struct fimc_fmt {
 #define FMT_FLAGS_M2M_OUT	(1 << 2)
 #define FMT_FLAGS_M2M		(1 << 1 | 1 << 2)
 #define FMT_HAS_ALPHA		(1 << 3)
+#define FMT_FLAGS_COMPRESSED	(1 << 4)
 };
 
 /**
@@ -285,7 +288,7 @@ struct fimc_frame {
 	u32	offs_v;
 	u32	width;
 	u32	height;
-	unsigned long		payload[VIDEO_MAX_PLANES];
+	unsigned int		payload[VIDEO_MAX_PLANES];
 	struct fimc_addr	paddr;
 	struct fimc_dma_offset	dma_offset;
 	struct fimc_fmt		*fmt;
-- 
1.7.7.2



* [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-01 10:20 [RFC v2] v4l2: Compressed data formats on the video bus Sylwester Nawrocki
                   ` (2 preceding siblings ...)
  2011-12-01 10:20 ` [PATCH/RFC v2 3/4] s5p-fimc: Add support for media bus framesamples parameter Sylwester Nawrocki
@ 2011-12-01 10:20 ` Sylwester Nawrocki
  2011-12-06 16:12   ` Laurent Pinchart
  3 siblings, 1 reply; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-01 10:20 UTC (permalink / raw)
  To: linux-media
  Cc: mchehab, laurent.pinchart, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, sw0312.kim, s.nawrocki,
	Kyungmin Park

Update the sub-device drivers having a devnode enabled so they properly
handle the new framesamples field of struct v4l2_mbus_framefmt.
These drivers don't support compressed (entropy encoded) formats so the
framesamples field is simply initialized to 0.

There are a few other drivers that expose a devnode (mt9p031, mt9t001,
mt9v032), but they already implicitly initialize the new data structure
field to 0, so they don't need to be touched.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
 drivers/media/video/noon010pc30.c         |    6 ++++--
 drivers/media/video/omap3isp/ispccdc.c    |    1 +
 drivers/media/video/omap3isp/ispccp2.c    |    1 +
 drivers/media/video/omap3isp/ispcsi2.c    |    1 +
 drivers/media/video/omap3isp/isppreview.c |    1 +
 drivers/media/video/omap3isp/ispresizer.c |    1 +
 drivers/media/video/s5k6aa.c              |    1 +
 7 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/media/video/noon010pc30.c b/drivers/media/video/noon010pc30.c
index 50838bf..ad94ffe 100644
--- a/drivers/media/video/noon010pc30.c
+++ b/drivers/media/video/noon010pc30.c
@@ -523,9 +523,10 @@ static int noon010_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	mf->height = info->curr_win->height;
 	mf->code = info->curr_fmt->code;
 	mf->colorspace = info->curr_fmt->colorspace;
-	mf->field = V4L2_FIELD_NONE;
-
 	mutex_unlock(&info->lock);
+
+	mf->field = V4L2_FIELD_NONE;
+	mf->framesamples = 0;
 	return 0;
 }
 
@@ -555,6 +556,7 @@ static int noon010_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	nf = noon010_try_fmt(sd, &fmt->format);
 	noon010_try_frame_size(&fmt->format, &size);
 	fmt->format.colorspace = V4L2_COLORSPACE_JPEG;
+	fmt->format.framesamples = 0;
 
 	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
 		if (fh) {
diff --git a/drivers/media/video/omap3isp/ispccdc.c b/drivers/media/video/omap3isp/ispccdc.c
index b0b0fa5..3dff028 100644
--- a/drivers/media/video/omap3isp/ispccdc.c
+++ b/drivers/media/video/omap3isp/ispccdc.c
@@ -1863,6 +1863,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 	 */
 	fmt->colorspace = V4L2_COLORSPACE_SRGB;
 	fmt->field = V4L2_FIELD_NONE;
+	fmt->framesamples = 0;
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/ispccp2.c b/drivers/media/video/omap3isp/ispccp2.c
index 904ca8c..fd9dba6 100644
--- a/drivers/media/video/omap3isp/ispccp2.c
+++ b/drivers/media/video/omap3isp/ispccp2.c
@@ -711,6 +711,7 @@ static void ccp2_try_format(struct isp_ccp2_device *ccp2,
 
 	fmt->field = V4L2_FIELD_NONE;
 	fmt->colorspace = V4L2_COLORSPACE_SRGB;
+	fmt->framesamples = 0;
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/ispcsi2.c b/drivers/media/video/omap3isp/ispcsi2.c
index 0c5f1cb..6b973f5 100644
--- a/drivers/media/video/omap3isp/ispcsi2.c
+++ b/drivers/media/video/omap3isp/ispcsi2.c
@@ -888,6 +888,7 @@ csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
 	/* RGB, non-interlaced */
 	fmt->colorspace = V4L2_COLORSPACE_SRGB;
 	fmt->field = V4L2_FIELD_NONE;
+	fmt->framesamples = 0;
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/isppreview.c b/drivers/media/video/omap3isp/isppreview.c
index ccb876f..6f4bdf0 100644
--- a/drivers/media/video/omap3isp/isppreview.c
+++ b/drivers/media/video/omap3isp/isppreview.c
@@ -1720,6 +1720,7 @@ static void preview_try_format(struct isp_prev_device *prev,
 	}
 
 	fmt->field = V4L2_FIELD_NONE;
+	fmt->framesamples = 0;
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/ispresizer.c b/drivers/media/video/omap3isp/ispresizer.c
index 50e593b..923ba1b 100644
--- a/drivers/media/video/omap3isp/ispresizer.c
+++ b/drivers/media/video/omap3isp/ispresizer.c
@@ -1363,6 +1363,7 @@ static void resizer_try_format(struct isp_res_device *res,
 
 	fmt->colorspace = V4L2_COLORSPACE_JPEG;
 	fmt->field = V4L2_FIELD_NONE;
+	fmt->framesamples = 0;
 }
 
 /*
diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c
index 86ee35b..efc5ba3 100644
--- a/drivers/media/video/s5k6aa.c
+++ b/drivers/media/video/s5k6aa.c
@@ -1087,6 +1087,7 @@ static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
 	mf->colorspace	= s5k6aa_formats[index].colorspace;
 	mf->code	= s5k6aa_formats[index].code;
 	mf->field	= V4L2_FIELD_NONE;
+	mf->framesamples = 0;
 }
 
 static int s5k6aa_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
-- 
1.7.7.2



* Re: [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-01 10:20 ` [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle " Sylwester Nawrocki
@ 2011-12-06 16:12   ` Laurent Pinchart
  2011-12-06 17:36     ` Sylwester Nawrocki
  2011-12-09 17:59     ` [PATCH/RFC v3 " Sylwester Nawrocki
  0 siblings, 2 replies; 25+ messages in thread
From: Laurent Pinchart @ 2011-12-06 16:12 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: linux-media, mchehab, g.liakhovetski, sakari.ailus, m.szyprowski,
	riverful.kim, sw0312.kim, Kyungmin Park

Hi Sylwester,

On Thursday 01 December 2011 11:20:53 Sylwester Nawrocki wrote:
> Update the sub-device drivers having a devnode enabled so they properly
> handle the new framesamples field of struct v4l2_mbus_framefmt.
> These drivers don't support compressed (entropy encoded) formats so the
> framesamples field is simply initialized to 0.

Wouldn't it be better to memset the whole structure before filling it? This
would handle the reserved fields as well. One option would be to make the
caller zero the structure; I think that would likely result in a smaller patch.
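
For reference, the per-driver variant could look roughly like this (a
sketch only; the driver name and the fixed format values are made up to
keep the example complete):

#include <linux/string.h>
#include <media/v4l2-subdev.h>

/* Zero the whole format so framesamples and reserved[] are covered in
 * one place, then fill in the fields the driver actually cares about.
 */
static int foo_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
		       struct v4l2_subdev_format *fmt)
{
	struct v4l2_mbus_framefmt *mf = &fmt->format;

	memset(mf, 0, sizeof(*mf));
	mf->width = 640;
	mf->height = 480;
	mf->code = V4L2_MBUS_FMT_UYVY8_2X8;
	mf->field = V4L2_FIELD_NONE;
	mf->colorspace = V4L2_COLORSPACE_JPEG;

	return 0;
}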

> There is a few other drivers that expose a devnode (mt9p031, mt9t001,
> mt9v032) but they already implicitly initialize the new data structure
> field to 0, so they don't need to be touched.
> 
> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> ---
>  drivers/media/video/noon010pc30.c         |    6 ++++--
>  drivers/media/video/omap3isp/ispccdc.c    |    1 +
>  drivers/media/video/omap3isp/ispccp2.c    |    1 +
>  drivers/media/video/omap3isp/ispcsi2.c    |    1 +
>  drivers/media/video/omap3isp/isppreview.c |    1 +
>  drivers/media/video/omap3isp/ispresizer.c |    1 +
>  drivers/media/video/s5k6aa.c              |    1 +
>  7 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/media/video/noon010pc30.c
> b/drivers/media/video/noon010pc30.c index 50838bf..ad94ffe 100644
> --- a/drivers/media/video/noon010pc30.c
> +++ b/drivers/media/video/noon010pc30.c
> @@ -523,9 +523,10 @@ static int noon010_get_fmt(struct v4l2_subdev *sd,
> struct v4l2_subdev_fh *fh, mf->height = info->curr_win->height;
>  	mf->code = info->curr_fmt->code;
>  	mf->colorspace = info->curr_fmt->colorspace;
> -	mf->field = V4L2_FIELD_NONE;
> -
>  	mutex_unlock(&info->lock);
> +
> +	mf->field = V4L2_FIELD_NONE;
> +	mf->framesamples = 0;
>  	return 0;
>  }
> 
> @@ -555,6 +556,7 @@ static int noon010_set_fmt(struct v4l2_subdev *sd,
> struct v4l2_subdev_fh *fh, nf = noon010_try_fmt(sd, &fmt->format);
>  	noon010_try_frame_size(&fmt->format, &size);
>  	fmt->format.colorspace = V4L2_COLORSPACE_JPEG;
> +	fmt->format.framesamples = 0;
> 
>  	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
>  		if (fh) {
> diff --git a/drivers/media/video/omap3isp/ispccdc.c
> b/drivers/media/video/omap3isp/ispccdc.c index b0b0fa5..3dff028 100644
> --- a/drivers/media/video/omap3isp/ispccdc.c
> +++ b/drivers/media/video/omap3isp/ispccdc.c
> @@ -1863,6 +1863,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct
> v4l2_subdev_fh *fh, */
>  	fmt->colorspace = V4L2_COLORSPACE_SRGB;
>  	fmt->field = V4L2_FIELD_NONE;
> +	fmt->framesamples = 0;
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/ispccp2.c
> b/drivers/media/video/omap3isp/ispccp2.c index 904ca8c..fd9dba6 100644
> --- a/drivers/media/video/omap3isp/ispccp2.c
> +++ b/drivers/media/video/omap3isp/ispccp2.c
> @@ -711,6 +711,7 @@ static void ccp2_try_format(struct isp_ccp2_device
> *ccp2,
> 
>  	fmt->field = V4L2_FIELD_NONE;
>  	fmt->colorspace = V4L2_COLORSPACE_SRGB;
> +	fmt->framesamples = 0;
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/ispcsi2.c
> b/drivers/media/video/omap3isp/ispcsi2.c index 0c5f1cb..6b973f5 100644
> --- a/drivers/media/video/omap3isp/ispcsi2.c
> +++ b/drivers/media/video/omap3isp/ispcsi2.c
> @@ -888,6 +888,7 @@ csi2_try_format(struct isp_csi2_device *csi2, struct
> v4l2_subdev_fh *fh, /* RGB, non-interlaced */
>  	fmt->colorspace = V4L2_COLORSPACE_SRGB;
>  	fmt->field = V4L2_FIELD_NONE;
> +	fmt->framesamples = 0;
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/isppreview.c
> b/drivers/media/video/omap3isp/isppreview.c index ccb876f..6f4bdf0 100644
> --- a/drivers/media/video/omap3isp/isppreview.c
> +++ b/drivers/media/video/omap3isp/isppreview.c
> @@ -1720,6 +1720,7 @@ static void preview_try_format(struct isp_prev_device
> *prev, }
> 
>  	fmt->field = V4L2_FIELD_NONE;
> +	fmt->framesamples = 0;
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/ispresizer.c
> b/drivers/media/video/omap3isp/ispresizer.c index 50e593b..923ba1b 100644
> --- a/drivers/media/video/omap3isp/ispresizer.c
> +++ b/drivers/media/video/omap3isp/ispresizer.c
> @@ -1363,6 +1363,7 @@ static void resizer_try_format(struct isp_res_device
> *res,
> 
>  	fmt->colorspace = V4L2_COLORSPACE_JPEG;
>  	fmt->field = V4L2_FIELD_NONE;
> +	fmt->framesamples = 0;
>  }
> 
>  /*
> diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c
> index 86ee35b..efc5ba3 100644
> --- a/drivers/media/video/s5k6aa.c
> +++ b/drivers/media/video/s5k6aa.c
> @@ -1087,6 +1087,7 @@ static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
>  	mf->colorspace	= s5k6aa_formats[index].colorspace;
>  	mf->code	= s5k6aa_formats[index].code;
>  	mf->field	= V4L2_FIELD_NONE;
> +	mf->framesamples = 0;
>  }
> 
>  static int s5k6aa_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh
> *fh,

-- 
Regards,

Laurent Pinchart


* Re: [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-06 16:12   ` Laurent Pinchart
@ 2011-12-06 17:36     ` Sylwester Nawrocki
  2011-12-09 17:59     ` [PATCH/RFC v3 " Sylwester Nawrocki
  1 sibling, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-06 17:36 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-media, mchehab, g.liakhovetski, sakari.ailus, m.szyprowski,
	riverful.kim, sw0312.kim, Kyungmin Park

Hi Laurent,

On 12/06/2011 05:12 PM, Laurent Pinchart wrote:
> On Thursday 01 December 2011 11:20:53 Sylwester Nawrocki wrote:
>> Update the sub-device drivers having a devnode enabled so they properly
>> handle the new framesamples field of struct v4l2_mbus_framefmt.
>> These drivers don't support compressed (entropy encoded) formats so the
>> framesamples field is simply initialized to 0.
> 
> Wouldn't it be better to memset the whole structure before filling it ? This 
> would handle reserved fields as well. One option would be to make the caller 

Sounds like a good improvement to me. Then we wouldn't have to do any
modifications when, in the future, someone converts a reserved field into
something useful.

> zero the structure, I think that would likely result in a smaller patch.

Do you mean to memset the whole structure before v4l2_subdev_call(), in
subdev_do_ioctl()? I guess not, since it could only be done for get_fmt.

> 
>> There is a few other drivers that expose a devnode (mt9p031, mt9t001,
>> mt9v032) but they already implicitly initialize the new data structure
>> field to 0, so they don't need to be touched.


Regards,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center


* [PATCH/RFC v3 4/4] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-06 16:12   ` Laurent Pinchart
  2011-12-06 17:36     ` Sylwester Nawrocki
@ 2011-12-09 17:59     ` Sylwester Nawrocki
  2011-12-12  0:31       ` Laurent Pinchart
  1 sibling, 1 reply; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-09 17:59 UTC (permalink / raw)
  To: linux-media
  Cc: laurent.pinchart, g.liakhovetski, sakari.ailus, riverful.kim,
	sw0312.kim, m.szyprowski, s.nawrocki, Kyungmin Park

Update the sub-device drivers having a devnode enabled so they properly
handle the new framesamples field of struct v4l2_mbus_framefmt.
These drivers don't support compressed (entropy encoded) formats so the
framesamples field is simply initialized to 0, together with the
reserved structure member.

There are a few other drivers that expose a devnode (mt9p031, mt9t001,
mt9v032), but they already implicitly initialize the new data structure
field to 0, so they don't need to be touched.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
Hi,

In this version the whole reserved[] member in struct v4l2_mbus_framefmt
is also cleared, rather than only setting framesamples to 0.

The omap3isp driver changes have only been compile-tested.

Thanks,
Sylwester
---
 drivers/media/video/noon010pc30.c         |    5 ++++-
 drivers/media/video/omap3isp/ispccdc.c    |    2 ++
 drivers/media/video/omap3isp/ispccp2.c    |    2 ++
 drivers/media/video/omap3isp/ispcsi2.c    |    2 ++
 drivers/media/video/omap3isp/isppreview.c |    2 ++
 drivers/media/video/omap3isp/ispresizer.c |    2 ++
 drivers/media/video/s5k6aa.c              |    2 ++
 7 files changed, 16 insertions(+), 1 deletions(-)

diff --git a/drivers/media/video/noon010pc30.c b/drivers/media/video/noon010pc30.c
index 50838bf..5af9b60 100644
--- a/drivers/media/video/noon010pc30.c
+++ b/drivers/media/video/noon010pc30.c
@@ -519,13 +519,14 @@ static int noon010_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	mf = &fmt->format;
 
 	mutex_lock(&info->lock);
+	memset(mf, 0, sizeof(mf));
 	mf->width = info->curr_win->width;
 	mf->height = info->curr_win->height;
 	mf->code = info->curr_fmt->code;
 	mf->colorspace = info->curr_fmt->colorspace;
 	mf->field = V4L2_FIELD_NONE;
-
 	mutex_unlock(&info->lock);
+
 	return 0;
 }
 
@@ -546,12 +547,14 @@ static const struct noon010_format *noon010_try_fmt(struct v4l2_subdev *sd,
 static int noon010_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 			   struct v4l2_subdev_format *fmt)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct noon010_info *info = to_noon010(sd);
 	const struct noon010_frmsize *size = NULL;
 	const struct noon010_format *nf;
 	struct v4l2_mbus_framefmt *mf;
 	int ret = 0;
 
+	memset(&fmt->format + offset, 0, sizeof(fmt->format) - offset);
 	nf = noon010_try_fmt(sd, &fmt->format);
 	noon010_try_frame_size(&fmt->format, &size);
 	fmt->format.colorspace = V4L2_COLORSPACE_JPEG;
diff --git a/drivers/media/video/omap3isp/ispccdc.c b/drivers/media/video/omap3isp/ispccdc.c
index b0b0fa5..a608149 100644
--- a/drivers/media/video/omap3isp/ispccdc.c
+++ b/drivers/media/video/omap3isp/ispccdc.c
@@ -1802,6 +1802,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 		unsigned int pad, struct v4l2_mbus_framefmt *fmt,
 		enum v4l2_subdev_format_whence which)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct v4l2_mbus_framefmt *format;
 	const struct isp_format_info *info;
 	unsigned int width = fmt->width;
@@ -1863,6 +1864,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 	 */
 	fmt->colorspace = V4L2_COLORSPACE_SRGB;
 	fmt->field = V4L2_FIELD_NONE;
+	memset(fmt + offset, 0, sizeof(*fmt) - offset);
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/ispccp2.c b/drivers/media/video/omap3isp/ispccp2.c
index 904ca8c..a56a6ad 100644
--- a/drivers/media/video/omap3isp/ispccp2.c
+++ b/drivers/media/video/omap3isp/ispccp2.c
@@ -673,6 +673,7 @@ static void ccp2_try_format(struct isp_ccp2_device *ccp2,
 			       struct v4l2_mbus_framefmt *fmt,
 			       enum v4l2_subdev_format_whence which)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct v4l2_mbus_framefmt *format;
 
 	switch (pad) {
@@ -711,6 +712,7 @@ static void ccp2_try_format(struct isp_ccp2_device *ccp2,
 
 	fmt->field = V4L2_FIELD_NONE;
 	fmt->colorspace = V4L2_COLORSPACE_SRGB;
+	memset(fmt + offset, 0, sizeof(*fmt) - offset);
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/ispcsi2.c b/drivers/media/video/omap3isp/ispcsi2.c
index 0c5f1cb..c41443b 100644
--- a/drivers/media/video/omap3isp/ispcsi2.c
+++ b/drivers/media/video/omap3isp/ispcsi2.c
@@ -846,6 +846,7 @@ csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
 		unsigned int pad, struct v4l2_mbus_framefmt *fmt,
 		enum v4l2_subdev_format_whence which)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	enum v4l2_mbus_pixelcode pixelcode;
 	struct v4l2_mbus_framefmt *format;
 	const struct isp_format_info *info;
@@ -888,6 +889,7 @@ csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
 	/* RGB, non-interlaced */
 	fmt->colorspace = V4L2_COLORSPACE_SRGB;
 	fmt->field = V4L2_FIELD_NONE;
+	memset(fmt + offset, 0, sizeof(*fmt) - offset);
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/isppreview.c b/drivers/media/video/omap3isp/isppreview.c
index ccb876f..23861c4 100644
--- a/drivers/media/video/omap3isp/isppreview.c
+++ b/drivers/media/video/omap3isp/isppreview.c
@@ -1656,6 +1656,7 @@ static void preview_try_format(struct isp_prev_device *prev,
 			       struct v4l2_mbus_framefmt *fmt,
 			       enum v4l2_subdev_format_whence which)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	enum v4l2_mbus_pixelcode pixelcode;
 	struct v4l2_rect *crop;
 	unsigned int i;
@@ -1720,6 +1721,7 @@ static void preview_try_format(struct isp_prev_device *prev,
 	}
 
 	fmt->field = V4L2_FIELD_NONE;
+	memset(fmt + offset, 0, sizeof(*fmt) - offset);
 }
 
 /*
diff --git a/drivers/media/video/omap3isp/ispresizer.c b/drivers/media/video/omap3isp/ispresizer.c
index 50e593b..fff46e5 100644
--- a/drivers/media/video/omap3isp/ispresizer.c
+++ b/drivers/media/video/omap3isp/ispresizer.c
@@ -1336,6 +1336,7 @@ static void resizer_try_format(struct isp_res_device *res,
 			       struct v4l2_mbus_framefmt *fmt,
 			       enum v4l2_subdev_format_whence which)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct v4l2_mbus_framefmt *format;
 	struct resizer_ratio ratio;
 	struct v4l2_rect crop;
@@ -1363,6 +1364,7 @@ static void resizer_try_format(struct isp_res_device *res,
 
 	fmt->colorspace = V4L2_COLORSPACE_JPEG;
 	fmt->field = V4L2_FIELD_NONE;
+	memset(fmt + offset, 0, sizeof(*fmt) - offset);
 }
 
 /*
diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c
index 0df7f2a..b9d1f03 100644
--- a/drivers/media/video/s5k6aa.c
+++ b/drivers/media/video/s5k6aa.c
@@ -1070,8 +1070,10 @@ __s5k6aa_get_crop_rect(struct s5k6aa *s5k6aa, struct v4l2_subdev_fh *fh,
 static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
 			      struct v4l2_mbus_framefmt *mf)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	unsigned int index;
 
+	memset(mf + offset, 0, sizeof(*mf) - offset);
 	v4l_bound_align_image(&mf->width, S5K6AA_WIN_WIDTH_MIN,
 			      S5K6AA_WIN_WIDTH_MAX, 1,
 			      &mf->height, S5K6AA_WIN_HEIGHT_MIN,
-- 
1.7.8



* Re: [PATCH/RFC v3 4/4] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-09 17:59     ` [PATCH/RFC v3 " Sylwester Nawrocki
@ 2011-12-12  0:31       ` Laurent Pinchart
  2011-12-12 14:39         ` Sylwester Nawrocki
  2011-12-14 12:23         ` [PATCH/RFC v4 0/2] v4l2: Extend media bus format with framesamples field Sylwester Nawrocki
  0 siblings, 2 replies; 25+ messages in thread
From: Laurent Pinchart @ 2011-12-12  0:31 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: linux-media, g.liakhovetski, sakari.ailus, riverful.kim,
	sw0312.kim, m.szyprowski, Kyungmin Park

Hi Sylwester,

On Friday 09 December 2011 18:59:52 Sylwester Nawrocki wrote:
> Update the sub-device drivers having a devnode enabled so they properly
> handle the new framesamples field of struct v4l2_mbus_framefmt.
> These drivers don't support compressed (entropy encoded) formats so the
> framesamples field is simply initialized to 0, altogether with the
> reserved structure member.
> 
> There is a few other drivers that expose a devnode (mt9p031, mt9t001,
> mt9v032), but they already implicitly initialize the new data structure
> field to 0, so they don't need to be touched.
> 
> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> ---
> Hi,
> 
> In this version the whole reserved field in struct v4l2_mbus_framefmt
> is also cleared, rather than setting only framesamples to 0.
> 
> The omap3isp driver changes have been only compile tested.
> 
> Thanks,
> Sylwester
> ---
>  drivers/media/video/noon010pc30.c         |    5 ++++-
>  drivers/media/video/omap3isp/ispccdc.c    |    2 ++
>  drivers/media/video/omap3isp/ispccp2.c    |    2 ++
>  drivers/media/video/omap3isp/ispcsi2.c    |    2 ++
>  drivers/media/video/omap3isp/isppreview.c |    2 ++
>  drivers/media/video/omap3isp/ispresizer.c |    2 ++
>  drivers/media/video/s5k6aa.c              |    2 ++
>  7 files changed, 16 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/media/video/noon010pc30.c
> b/drivers/media/video/noon010pc30.c index 50838bf..5af9b60 100644
> --- a/drivers/media/video/noon010pc30.c
> +++ b/drivers/media/video/noon010pc30.c
> @@ -519,13 +519,14 @@ static int noon010_get_fmt(struct v4l2_subdev *sd,
> struct v4l2_subdev_fh *fh, mf = &fmt->format;
> 
>  	mutex_lock(&info->lock);
> +	memset(mf, 0, sizeof(mf));
>  	mf->width = info->curr_win->width;
>  	mf->height = info->curr_win->height;
>  	mf->code = info->curr_fmt->code;
>  	mf->colorspace = info->curr_fmt->colorspace;
>  	mf->field = V4L2_FIELD_NONE;
> -
>  	mutex_unlock(&info->lock);
> +
>  	return 0;
>  }
> 
> @@ -546,12 +547,14 @@ static const struct noon010_format
> *noon010_try_fmt(struct v4l2_subdev *sd, static int noon010_set_fmt(struct
> v4l2_subdev *sd, struct v4l2_subdev_fh *fh, struct v4l2_subdev_format
> *fmt)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	struct noon010_info *info = to_noon010(sd);
>  	const struct noon010_frmsize *size = NULL;
>  	const struct noon010_format *nf;
>  	struct v4l2_mbus_framefmt *mf;
>  	int ret = 0;
> 
> +	memset(&fmt->format + offset, 0, sizeof(fmt->format) - offset);

I'm not sure this is a good idea, as it will break when a new field is
added to struct v4l2_mbus_framefmt.

Wouldn't it be better to zero the whole structure in the callers instead?

>  	nf = noon010_try_fmt(sd, &fmt->format);
>  	noon010_try_frame_size(&fmt->format, &size);
>  	fmt->format.colorspace = V4L2_COLORSPACE_JPEG;
> diff --git a/drivers/media/video/omap3isp/ispccdc.c
> b/drivers/media/video/omap3isp/ispccdc.c index b0b0fa5..a608149 100644
> --- a/drivers/media/video/omap3isp/ispccdc.c
> +++ b/drivers/media/video/omap3isp/ispccdc.c
> @@ -1802,6 +1802,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct
> v4l2_subdev_fh *fh, unsigned int pad, struct v4l2_mbus_framefmt *fmt,
>  		enum v4l2_subdev_format_whence which)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	struct v4l2_mbus_framefmt *format;
>  	const struct isp_format_info *info;
>  	unsigned int width = fmt->width;
> @@ -1863,6 +1864,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct
> v4l2_subdev_fh *fh, */
>  	fmt->colorspace = V4L2_COLORSPACE_SRGB;
>  	fmt->field = V4L2_FIELD_NONE;
> +	memset(fmt + offset, 0, sizeof(*fmt) - offset);
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/ispccp2.c
> b/drivers/media/video/omap3isp/ispccp2.c index 904ca8c..a56a6ad 100644
> --- a/drivers/media/video/omap3isp/ispccp2.c
> +++ b/drivers/media/video/omap3isp/ispccp2.c
> @@ -673,6 +673,7 @@ static void ccp2_try_format(struct isp_ccp2_device
> *ccp2, struct v4l2_mbus_framefmt *fmt,
>  			       enum v4l2_subdev_format_whence which)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	struct v4l2_mbus_framefmt *format;
> 
>  	switch (pad) {
> @@ -711,6 +712,7 @@ static void ccp2_try_format(struct isp_ccp2_device
> *ccp2,
> 
>  	fmt->field = V4L2_FIELD_NONE;
>  	fmt->colorspace = V4L2_COLORSPACE_SRGB;
> +	memset(fmt + offset, 0, sizeof(*fmt) - offset);
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/ispcsi2.c
> b/drivers/media/video/omap3isp/ispcsi2.c index 0c5f1cb..c41443b 100644
> --- a/drivers/media/video/omap3isp/ispcsi2.c
> +++ b/drivers/media/video/omap3isp/ispcsi2.c
> @@ -846,6 +846,7 @@ csi2_try_format(struct isp_csi2_device *csi2, struct
> v4l2_subdev_fh *fh, unsigned int pad, struct v4l2_mbus_framefmt *fmt,
>  		enum v4l2_subdev_format_whence which)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	enum v4l2_mbus_pixelcode pixelcode;
>  	struct v4l2_mbus_framefmt *format;
>  	const struct isp_format_info *info;
> @@ -888,6 +889,7 @@ csi2_try_format(struct isp_csi2_device *csi2, struct
> v4l2_subdev_fh *fh, /* RGB, non-interlaced */
>  	fmt->colorspace = V4L2_COLORSPACE_SRGB;
>  	fmt->field = V4L2_FIELD_NONE;
> +	memset(fmt + offset, 0, sizeof(*fmt) - offset);
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/isppreview.c
> b/drivers/media/video/omap3isp/isppreview.c index ccb876f..23861c4 100644
> --- a/drivers/media/video/omap3isp/isppreview.c
> +++ b/drivers/media/video/omap3isp/isppreview.c
> @@ -1656,6 +1656,7 @@ static void preview_try_format(struct isp_prev_device
> *prev, struct v4l2_mbus_framefmt *fmt,
>  			       enum v4l2_subdev_format_whence which)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	enum v4l2_mbus_pixelcode pixelcode;
>  	struct v4l2_rect *crop;
>  	unsigned int i;
> @@ -1720,6 +1721,7 @@ static void preview_try_format(struct isp_prev_device
> *prev, }
> 
>  	fmt->field = V4L2_FIELD_NONE;
> +	memset(fmt + offset, 0, sizeof(*fmt) - offset);
>  }
> 
>  /*
> diff --git a/drivers/media/video/omap3isp/ispresizer.c
> b/drivers/media/video/omap3isp/ispresizer.c index 50e593b..fff46e5 100644
> --- a/drivers/media/video/omap3isp/ispresizer.c
> +++ b/drivers/media/video/omap3isp/ispresizer.c
> @@ -1336,6 +1336,7 @@ static void resizer_try_format(struct isp_res_device
> *res, struct v4l2_mbus_framefmt *fmt,
>  			       enum v4l2_subdev_format_whence which)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	struct v4l2_mbus_framefmt *format;
>  	struct resizer_ratio ratio;
>  	struct v4l2_rect crop;
> @@ -1363,6 +1364,7 @@ static void resizer_try_format(struct isp_res_device
> *res,
> 
>  	fmt->colorspace = V4L2_COLORSPACE_JPEG;
>  	fmt->field = V4L2_FIELD_NONE;
> +	memset(fmt + offset, 0, sizeof(*fmt) - offset);
>  }
> 
>  /*
> diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c
> index 0df7f2a..b9d1f03 100644
> --- a/drivers/media/video/s5k6aa.c
> +++ b/drivers/media/video/s5k6aa.c
> @@ -1070,8 +1070,10 @@ __s5k6aa_get_crop_rect(struct s5k6aa *s5k6aa, struct
> v4l2_subdev_fh *fh, static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
>  			      struct v4l2_mbus_framefmt *mf)
>  {
> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>  	unsigned int index;
> 
> +	memset(mf + offset, 0, sizeof(*mf) - offset);
>  	v4l_bound_align_image(&mf->width, S5K6AA_WIN_WIDTH_MIN,
>  			      S5K6AA_WIN_WIDTH_MAX, 1,
>  			      &mf->height, S5K6AA_WIN_HEIGHT_MIN,

-- 
Regards,

Laurent Pinchart


* Re: [PATCH/RFC v3 4/4] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-12  0:31       ` Laurent Pinchart
@ 2011-12-12 14:39         ` Sylwester Nawrocki
  2011-12-14 12:23         ` [PATCH/RFC v4 0/2] v4l2: Extend media bus format with framesamples field Sylwester Nawrocki
  1 sibling, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-12 14:39 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-media, g.liakhovetski, sakari.ailus, riverful.kim,
	sw0312.kim, m.szyprowski, Kyungmin Park

Hi Laurent,

On 12/12/2011 01:31 AM, Laurent Pinchart wrote:
> On Friday 09 December 2011 18:59:52 Sylwester Nawrocki wrote:
>> Update the sub-device drivers having a devnode enabled so they properly
>> handle the new framesamples field of struct v4l2_mbus_framefmt.
>> These drivers don't support compressed (entropy encoded) formats so the
>> framesamples field is simply initialized to 0, altogether with the
>> reserved structure member.
>>
>> There is a few other drivers that expose a devnode (mt9p031, mt9t001,
>> mt9v032), but they already implicitly initialize the new data structure
>> field to 0, so they don't need to be touched.
>>
>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
>> ---
>> Hi,
>>
>> In this version the whole reserved field in struct v4l2_mbus_framefmt
>> is also cleared, rather than setting only framesamples to 0.
>>
>> The omap3isp driver changes have been only compile tested.
>>
>> Thanks,
>> Sylwester
>> ---
>>  drivers/media/video/noon010pc30.c         |    5 ++++-
>>  drivers/media/video/omap3isp/ispccdc.c    |    2 ++
>>  drivers/media/video/omap3isp/ispccp2.c    |    2 ++
>>  drivers/media/video/omap3isp/ispcsi2.c    |    2 ++
>>  drivers/media/video/omap3isp/isppreview.c |    2 ++
>>  drivers/media/video/omap3isp/ispresizer.c |    2 ++
>>  drivers/media/video/s5k6aa.c              |    2 ++
>>  7 files changed, 16 insertions(+), 1 deletions(-)
>>
>> diff --git a/drivers/media/video/noon010pc30.c
>> b/drivers/media/video/noon010pc30.c index 50838bf..5af9b60 100644
>> --- a/drivers/media/video/noon010pc30.c
>> +++ b/drivers/media/video/noon010pc30.c
>> @@ -519,13 +519,14 @@ static int noon010_get_fmt(struct v4l2_subdev *sd,
>> struct v4l2_subdev_fh *fh, mf = &fmt->format;
>>
>>  	mutex_lock(&info->lock);
>> +	memset(mf, 0, sizeof(mf));
>>  	mf->width = info->curr_win->width;
>>  	mf->height = info->curr_win->height;
>>  	mf->code = info->curr_fmt->code;
>>  	mf->colorspace = info->curr_fmt->colorspace;
>>  	mf->field = V4L2_FIELD_NONE;
>> -
>>  	mutex_unlock(&info->lock);
>> +
>>  	return 0;
>>  }
>>
>> @@ -546,12 +547,14 @@ static const struct noon010_format
>> *noon010_try_fmt(struct v4l2_subdev *sd, static int noon010_set_fmt(struct
>> v4l2_subdev *sd, struct v4l2_subdev_fh *fh, struct v4l2_subdev_format
>> *fmt)
>>  {
>> +	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
>>  	struct noon010_info *info = to_noon010(sd);
>>  	const struct noon010_frmsize *size = NULL;
>>  	const struct noon010_format *nf;
>>  	struct v4l2_mbus_framefmt *mf;
>>  	int ret = 0;
>>
>> +	memset(&fmt->format + offset, 0, sizeof(fmt->format) - offset);
> 
> I'm not sure this is a good idea, as it will break when a new field will be 
> added to struct v4l2_mbus_framefmt.

I'm not sure it will break. Right now everything after (and including) the
framesamples field is cleared.

struct v4l2_mbus_framefmt {
        __u32                   width;
        __u32                   height;
        __u32                   code;
        __u32                   field;
        __u32                   colorspace;
        __u32                   framesamples;
        __u32                   reserved[6];
};

Assuming we convert reserved[0] to new_field

struct v4l2_mbus_framefmt {
        __u32                   width;
        __u32                   height;
        __u32                   code;
        __u32                   field;
        __u32                   colorspace;
        __u32                   framesamples;
        __u32                   new_field;
        __u32                   reserved[5];
};

the code:

const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
memset(&fmt->format + offset, 0, sizeof(fmt->format) - offset);

would still clear 7 u32s at the end of the structure, wouldn't it?

> 
> Wouldn't it be better to zero the whoel structure in the callers instead ?

-- 

Regards
Sylwester


* [PATCH/RFC v4 0/2] v4l2: Extend media bus format with framesamples field
  2011-12-12  0:31       ` Laurent Pinchart
  2011-12-12 14:39         ` Sylwester Nawrocki
@ 2011-12-14 12:23         ` Sylwester Nawrocki
  2011-12-14 12:23           ` [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
  2011-12-14 12:23           ` [PATCHv4 2/2] v4l: Update subdev drivers to handle framesamples parameter Sylwester Nawrocki
  1 sibling, 2 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-14 12:23 UTC (permalink / raw)
  To: linux-media
  Cc: laurent.pinchart, g.liakhovetski, sakari.ailus, m.szyprowski,
	riverful.kim, s.nawrocki

Hello,

this is an updated version of the changeset extending struct v4l2_mbus_framefmt
with a new framesamples field.

Changes since v1:
 - Docbook documentation improvements
 - drivers that are exposing a sub-device node are modified to initialize
   the new struct v4l2_mbus_framefmt member to 0 if they support only
   uncompressed formats
Changes since v3:
 - dropped the patches for the m5mols and s5p-fimc drivers, which are unchanged;
 - in the subdev drivers use a local copy of the format data structure to clear
   all structure members unused by a driver, in a way that doesn't break when
   new fields are added to struct v4l2_mbus_framefmt (see the sketch below);
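
The "local copy" pattern referred to above could look roughly like the
following (a hypothetical example, not taken verbatim from the omap3isp
changes; the format limits and media bus code are made up):

#include <linux/kernel.h>
#include <media/v4l2-subdev.h>

/* Build the result in an on-stack structure and copy it over the
 * caller's one, so members the driver does not set - framesamples,
 * reserved[] and any future fields - end up zeroed.
 */
static void foo_try_format(struct v4l2_mbus_framefmt *fmt)
{
	struct v4l2_mbus_framefmt out = {
		.width		= clamp_t(u32, fmt->width, 32, 4096),
		.height		= clamp_t(u32, fmt->height, 32, 4096),
		.code		= V4L2_MBUS_FMT_SGRBG10_1X10,
		.field		= V4L2_FIELD_NONE,
		.colorspace	= V4L2_COLORSPACE_SRGB,
	};

	*fmt = out;
}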

The omap3isp changes are only compile-tested. I'd like to ask someone who
has access to the hardware to test the patch.


The proposed semantics for the framesamples parameter are as follows:

 - the value is propagated across video pipeline entities wherever 'code'
   indicates a compressed format;
 - the subdevs adjust the value if needed;
 - although there is currently only one compressed data format on the media
   bus - V4L2_MBUS_FMT_JPEG_1X8, which corresponds to V4L2_PIX_FMT_JPEG and
   for which one sample on the media bus equals one byte in memory - it is
   assumed that the host knows exactly what the framesamples/sizeimage ratio
   is and will validate the framesamples/sizeimage values before starting
   streaming;
 - the host will internally query the relevant subdev to properly handle
   'sizeimage' in the VIDIOC_TRY/S_FMT ioctls.
      
The initial RFC can be found here:
http://www.mail-archive.com/linux-media@vger.kernel.org/msg39321.html

Sylwester Nawrocki (2):
  v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  v4l: Update subdev drivers to handle framesamples parameter

 Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 +++++-
 Documentation/DocBook/media/v4l/subdev-formats.xml |    9 +++++-
 drivers/media/video/noon010pc30.c                  |    3 ++
 drivers/media/video/omap3isp/ispccdc.c             |   12 +++++--
 drivers/media/video/omap3isp/ispccp2.c             |   31 +++++++++++--------
 drivers/media/video/omap3isp/ispcsi2.c             |   12 +++++--
 drivers/media/video/omap3isp/isppreview.c          |   23 +++++++++-----
 drivers/media/video/omap3isp/ispresizer.c          |   19 +++++++++---
 drivers/media/video/s5k6aa.c                       |    2 +
 include/linux/v4l2-mediabus.h                      |    4 ++-
 10 files changed, 87 insertions(+), 38 deletions(-)

-- 
1.7.8


-- 
Best regards, 

Sylwester Nawrocki 


* [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-14 12:23         ` [PATCH/RFC v4 0/2] v4l2: Extend media bus format with framesamples field Sylwester Nawrocki
@ 2011-12-14 12:23           ` Sylwester Nawrocki
  2011-12-21  0:20             ` Laurent Pinchart
  2011-12-14 12:23           ` [PATCHv4 2/2] v4l: Update subdev drivers to handle framesamples parameter Sylwester Nawrocki
  1 sibling, 1 reply; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-14 12:23 UTC (permalink / raw)
  To: linux-media
  Cc: laurent.pinchart, g.liakhovetski, sakari.ailus, m.szyprowski,
	riverful.kim, s.nawrocki, Kyungmin Park

The purpose of the new field is to allow the video pipeline elements
to negotiate memory buffer size for compressed data frames, where
the buffer size cannot be derived from pixel width and height and
the pixel code.

For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
framesamples parameter should be calculated by the driver from pixel
width, height, color format and other parameters if required and
returned to the caller. This applies to compressed data formats only.

The application should propagate the framesamples value as returned
by the first sub-device within a data pipeline, i.e. at the
pipeline's data source.

For compressed data formats the host drivers should internally
validate the framesamples parameter values before streaming is
enabled, to make sure the memory buffer size requirements are
satisfied along the pipeline.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
--
There are no changes in this patch compared to v3.
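
As an aside, and purely as an illustration (this helper is not part of the
patch; the 16 bpp source, the 2:1 minimum compression ratio and the header
margin are made-up numbers), a sensor driver could derive the value for
V4L2_MBUS_FMT_JPEG_1X8 roughly like this:

/* Hypothetical example: worst-case framesamples for JPEG_1X8, where one
 * bus sample corresponds to one byte in memory. */
static u32 example_jpeg_framesamples(u32 width, u32 height)
{
	u32 raw = width * height * 2;		/* assumed 16 bpp YUV source */

	return raw / 2 + 64 * 1024;		/* 2:1 ratio + header margin */
}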
---
 Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
 Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
 include/linux/v4l2-mediabus.h                      |    4 +++-
 3 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml b/Documentation/DocBook/media/v4l/dev-subdev.xml
index 0916a73..b9d24eb 100644
--- a/Documentation/DocBook/media/v4l/dev-subdev.xml
+++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
@@ -113,7 +113,7 @@
     <para>Drivers that implement the <link linkend="media-controller-intro">media
     API</link> can expose pad-level image format configuration to applications.
     When they do, applications can use the &VIDIOC-SUBDEV-G-FMT; and
-    &VIDIOC-SUBDEV-S-FMT; ioctls. to negotiate formats on a per-pad basis.</para>
+    &VIDIOC-SUBDEV-S-FMT; ioctls to negotiate formats on a per-pad basis.</para>
 
     <para>Applications are responsible for configuring coherent parameters on
     the whole pipeline and making sure that connected pads have compatible
@@ -160,7 +160,13 @@
       guaranteed to be supported by the device. In particular, drivers guarantee
       that a returned format will not be further changed if passed to an
       &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such as
-      formats on other pads or links' configuration are not changed).</para>
+      formats on other pads or links' configuration are not changed). When a
+      device contains a data encoder, the <structfield>
+      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
+      </structfield> field value may be further changed, if parameters of the
+      encoding process are changed after the format has been negotiated. In
+      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
+      query an updated format.</para>
 
       <para>Drivers automatically propagate formats inside sub-devices. When a
       try or active format is set on a pad, corresponding formats on other pads
diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml b/Documentation/DocBook/media/v4l/subdev-formats.xml
index 49c532e..7c202a1 100644
--- a/Documentation/DocBook/media/v4l/subdev-formats.xml
+++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
@@ -33,9 +33,16 @@
 	  <entry>Image colorspace, from &v4l2-colorspace;. See
 	  <xref linkend="colorspaces" /> for details.</entry>
 	</row>
+	<row id="v4l2-mbus-framefmt-framesamples">
+	  <entry>__u32</entry>
+	  <entry><structfield>framesamples</structfield></entry>
+	  <entry>Maximum number of bus samples per frame for compressed data
+	    formats. For uncompressed formats drivers and applications must
+	    set this parameter to zero. </entry>
+	</row>
 	<row>
 	  <entry>__u32</entry>
-	  <entry><structfield>reserved</structfield>[7]</entry>
+	  <entry><structfield>reserved</structfield>[6]</entry>
 	  <entry>Reserved for future extensions. Applications and drivers must
 	  set the array to zero.</entry>
 	</row>
diff --git a/include/linux/v4l2-mediabus.h b/include/linux/v4l2-mediabus.h
index 5ea7f75..f18d6cd 100644
--- a/include/linux/v4l2-mediabus.h
+++ b/include/linux/v4l2-mediabus.h
@@ -101,6 +101,7 @@ enum v4l2_mbus_pixelcode {
  * @code:	data format code (from enum v4l2_mbus_pixelcode)
  * @field:	used interlacing type (from enum v4l2_field)
  * @colorspace:	colorspace of the data (from enum v4l2_colorspace)
+ * @framesamples: maximum number of bus samples per frame
  */
 struct v4l2_mbus_framefmt {
 	__u32			width;
@@ -108,7 +109,8 @@ struct v4l2_mbus_framefmt {
 	__u32			code;
 	__u32			field;
 	__u32			colorspace;
-	__u32			reserved[7];
+	__u32			framesamples;
+	__u32			reserved[6];
 };
 
 #endif
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv4 2/2] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-14 12:23         ` [PATCH/RFC v4 0/2] v4l2: Extend media bus format with framesamples field Sylwester Nawrocki
  2011-12-14 12:23           ` [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
@ 2011-12-14 12:23           ` Sylwester Nawrocki
  2011-12-15 10:14             ` [PATCHv5] " Sylwester Nawrocki
  1 sibling, 1 reply; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-14 12:23 UTC (permalink / raw)
  To: linux-media
  Cc: laurent.pinchart, g.liakhovetski, sakari.ailus, m.szyprowski,
	riverful.kim, s.nawrocki, Kyungmin Park

Update the sub-device drivers that have a devnode enabled so they properly
handle the new framesamples field of struct v4l2_mbus_framefmt.
These drivers don't support compressed (entropy encoded) formats, so the
framesamples field is simply initialized to 0, together with the
reserved field.

There are a few other drivers that expose a devnode (mt9p031, mt9t001,
mt9v032), but they already implicitly initialize the new data structure
field to 0, so they don't need to be touched.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
--

Changes since v3:
- use a local copy of the format data structure to clear all structure
  members which are unused by a driver, in a way that doesn't break
  after new fields are added to struct v4l2_mbus_framefmt;

The omap3isp changes are only compile tested. I'd like to ask someone who 
has access to the hardware to test the patch.

Regards,
Sylwester
---
 drivers/media/video/noon010pc30.c         |    3 ++
 drivers/media/video/omap3isp/ispccdc.c    |   12 +++++++---
 drivers/media/video/omap3isp/ispccp2.c    |   31 ++++++++++++++++------------
 drivers/media/video/omap3isp/ispcsi2.c    |   12 +++++++---
 drivers/media/video/omap3isp/isppreview.c |   23 ++++++++++++++-------
 drivers/media/video/omap3isp/ispresizer.c |   19 +++++++++++++----
 drivers/media/video/s5k6aa.c              |    2 +
 7 files changed, 68 insertions(+), 34 deletions(-)

diff --git a/drivers/media/video/noon010pc30.c b/drivers/media/video/noon010pc30.c
index 50838bf..9a6a7ac 100644
--- a/drivers/media/video/noon010pc30.c
+++ b/drivers/media/video/noon010pc30.c
@@ -519,6 +519,7 @@ static int noon010_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	mf = &fmt->format;
 
 	mutex_lock(&info->lock);
+	memset(mf, 0, sizeof(*mf));
 	mf->width = info->curr_win->width;
 	mf->height = info->curr_win->height;
 	mf->code = info->curr_fmt->code;
@@ -546,12 +547,14 @@ static const struct noon010_format *noon010_try_fmt(struct v4l2_subdev *sd,
 static int noon010_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 			   struct v4l2_subdev_format *fmt)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct noon010_info *info = to_noon010(sd);
 	const struct noon010_frmsize *size = NULL;
 	const struct noon010_format *nf;
 	struct v4l2_mbus_framefmt *mf;
 	int ret = 0;
 
+	memset((u8 *)&fmt->format + offset, 0, sizeof(fmt->format) - offset);
 	nf = noon010_try_fmt(sd, &fmt->format);
 	noon010_try_frame_size(&fmt->format, &size);
 	fmt->format.colorspace = V4L2_COLORSPACE_JPEG;
diff --git a/drivers/media/video/omap3isp/ispccdc.c b/drivers/media/video/omap3isp/ispccdc.c
index b0b0fa5..0663278 100644
--- a/drivers/media/video/omap3isp/ispccdc.c
+++ b/drivers/media/video/omap3isp/ispccdc.c
@@ -1806,10 +1806,12 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 	const struct isp_format_info *info;
 	unsigned int width = fmt->width;
 	unsigned int height = fmt->height;
+	struct v4l2_mbus_framefmt mf;
 	unsigned int i;
 
 	switch (pad) {
 	case CCDC_PAD_SINK:
+		memset(&mf, 0, sizeof(mf));
 		/* TODO: If the CCDC output formatter pad is connected directly
 		 * to the resizer, only YUV formats can be used.
 		 */
@@ -1820,11 +1822,13 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 
 		/* If not found, use SGRBG10 as default */
 		if (i >= ARRAY_SIZE(ccdc_fmts))
-			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
-
+			mf.code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			mf.code = fmt->code;
 		/* Clamp the input size. */
-		fmt->width = clamp_t(u32, width, 32, 4096);
-		fmt->height = clamp_t(u32, height, 32, 4096);
+		mf.width = clamp_t(u32, width, 32, 4096);
+		mf.height = clamp_t(u32, height, 32, 4096);
+		*fmt = mf;
 		break;
 
 	case CCDC_PAD_SOURCE_OF:
diff --git a/drivers/media/video/omap3isp/ispccp2.c b/drivers/media/video/omap3isp/ispccp2.c
index 904ca8c..90599e9 100644
--- a/drivers/media/video/omap3isp/ispccp2.c
+++ b/drivers/media/video/omap3isp/ispccp2.c
@@ -674,27 +674,32 @@ static void ccp2_try_format(struct isp_ccp2_device *ccp2,
 			       enum v4l2_subdev_format_whence which)
 {
 	struct v4l2_mbus_framefmt *format;
+	struct v4l2_mbus_framefmt mf;
 
 	switch (pad) {
 	case CCP2_PAD_SINK:
+		memset(&mf, 0, sizeof(mf));
 		if (fmt->code != V4L2_MBUS_FMT_SGRBG10_DPCM8_1X8)
-			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
+			mf.code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			mf.code = fmt->code;
 
 		if (ccp2->input == CCP2_INPUT_SENSOR) {
-			fmt->width = clamp_t(u32, fmt->width,
-					     ISPCCP2_DAT_START_MIN,
-					     ISPCCP2_DAT_START_MAX);
-			fmt->height = clamp_t(u32, fmt->height,
-					      ISPCCP2_DAT_SIZE_MIN,
-					      ISPCCP2_DAT_SIZE_MAX);
+			mf.width = clamp_t(u32, fmt->width,
+					   ISPCCP2_DAT_START_MIN,
+					   ISPCCP2_DAT_START_MAX);
+			mf.height = clamp_t(u32, fmt->height,
+					    ISPCCP2_DAT_SIZE_MIN,
+					    ISPCCP2_DAT_SIZE_MAX);
 		} else if (ccp2->input == CCP2_INPUT_MEMORY) {
-			fmt->width = clamp_t(u32, fmt->width,
-					     ISPCCP2_LCM_HSIZE_COUNT_MIN,
-					     ISPCCP2_LCM_HSIZE_COUNT_MAX);
-			fmt->height = clamp_t(u32, fmt->height,
-					      ISPCCP2_LCM_VSIZE_MIN,
-					      ISPCCP2_LCM_VSIZE_MAX);
+			mf.width = clamp_t(u32, fmt->width,
+					   ISPCCP2_LCM_HSIZE_COUNT_MIN,
+					   ISPCCP2_LCM_HSIZE_COUNT_MAX);
+			mf.height = clamp_t(u32, fmt->height,
+					    ISPCCP2_LCM_VSIZE_MIN,
+					    ISPCCP2_LCM_VSIZE_MAX);
 		}
+		*fmt = mf;
 		break;
 
 	case CCP2_PAD_SOURCE:
diff --git a/drivers/media/video/omap3isp/ispcsi2.c b/drivers/media/video/omap3isp/ispcsi2.c
index 0c5f1cb..b989ef7 100644
--- a/drivers/media/video/omap3isp/ispcsi2.c
+++ b/drivers/media/video/omap3isp/ispcsi2.c
@@ -849,10 +849,12 @@ csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
 	enum v4l2_mbus_pixelcode pixelcode;
 	struct v4l2_mbus_framefmt *format;
 	const struct isp_format_info *info;
+	struct v4l2_mbus_framefmt mf;
 	unsigned int i;
 
 	switch (pad) {
 	case CSI2_PAD_SINK:
+		memset(&mf, 0, sizeof(mf));
 		/* Clamp the width and height to valid range (1-8191). */
 		for (i = 0; i < ARRAY_SIZE(csi2_input_fmts); i++) {
 			if (fmt->code == csi2_input_fmts[i])
@@ -861,10 +863,12 @@ csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
 
 		/* If not found, use SGRBG10 as default */
 		if (i >= ARRAY_SIZE(csi2_input_fmts))
-			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
-
-		fmt->width = clamp_t(u32, fmt->width, 1, 8191);
-		fmt->height = clamp_t(u32, fmt->height, 1, 8191);
+			mf.code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			mf.code = fmt->code;
+		mf.width = clamp_t(u32, fmt->width, 1, 8191);
+		mf.height = clamp_t(u32, fmt->height, 1, 8191);
+		*fmt = mf;
 		break;
 
 	case CSI2_PAD_SOURCE:
diff --git a/drivers/media/video/omap3isp/isppreview.c b/drivers/media/video/omap3isp/isppreview.c
index ccb876f..31f2f5c 100644
--- a/drivers/media/video/omap3isp/isppreview.c
+++ b/drivers/media/video/omap3isp/isppreview.c
@@ -1657,9 +1657,12 @@ static void preview_try_format(struct isp_prev_device *prev,
 			       enum v4l2_subdev_format_whence which)
 {
 	enum v4l2_mbus_pixelcode pixelcode;
+	struct v4l2_mbus_framefmt mf;
 	struct v4l2_rect *crop;
 	unsigned int i;
 
+	memset(&mf, 0, sizeof(mf));
+
 	switch (pad) {
 	case PREV_PAD_SINK:
 		/* When reading data from the CCDC, the input size has already
@@ -1672,15 +1675,13 @@ static void preview_try_format(struct isp_prev_device *prev,
 		 * filter array interpolation.
 		 */
 		if (prev->input == PREVIEW_INPUT_MEMORY) {
-			fmt->width = clamp_t(u32, fmt->width, PREV_MIN_IN_WIDTH,
-					     preview_max_out_width(prev));
-			fmt->height = clamp_t(u32, fmt->height,
-					      PREV_MIN_IN_HEIGHT,
-					      PREV_MAX_IN_HEIGHT);
+			mf.width = clamp_t(u32, fmt->width, PREV_MIN_IN_WIDTH,
+					   preview_max_out_width(prev));
+			mf.height = clamp_t(u32, fmt->height,
+					    PREV_MIN_IN_HEIGHT,
+					    PREV_MAX_IN_HEIGHT);
 		}
 
-		fmt->colorspace = V4L2_COLORSPACE_SRGB;
-
 		for (i = 0; i < ARRAY_SIZE(preview_input_fmts); i++) {
 			if (fmt->code == preview_input_fmts[i])
 				break;
@@ -1688,11 +1689,17 @@ static void preview_try_format(struct isp_prev_device *prev,
 
 		/* If not found, use SGRBG10 as default */
 		if (i >= ARRAY_SIZE(preview_input_fmts))
-			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
+			mf.code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			mf.code = fmt->code;
+
+		*fmt = mf;
+		fmt->colorspace = V4L2_COLORSPACE_SRGB;
 		break;
 
 	case PREV_PAD_SOURCE:
 		pixelcode = fmt->code;
+		memset(fmt, 0, sizeof(mf));
 		*fmt = *__preview_get_format(prev, fh, PREV_PAD_SINK, which);
 
 		switch (pixelcode) {
diff --git a/drivers/media/video/omap3isp/ispresizer.c b/drivers/media/video/omap3isp/ispresizer.c
index 50e593b..761b00d 100644
--- a/drivers/media/video/omap3isp/ispresizer.c
+++ b/drivers/media/video/omap3isp/ispresizer.c
@@ -1337,19 +1337,24 @@ static void resizer_try_format(struct isp_res_device *res,
 			       enum v4l2_subdev_format_whence which)
 {
 	struct v4l2_mbus_framefmt *format;
+	struct v4l2_mbus_framefmt mf;
 	struct resizer_ratio ratio;
 	struct v4l2_rect crop;
 
+	memset(&mf, 0, sizeof(mf));
+
 	switch (pad) {
 	case RESZ_PAD_SINK:
 		if (fmt->code != V4L2_MBUS_FMT_YUYV8_1X16 &&
 		    fmt->code != V4L2_MBUS_FMT_UYVY8_1X16)
-			fmt->code = V4L2_MBUS_FMT_YUYV8_1X16;
+			mf.code = V4L2_MBUS_FMT_YUYV8_1X16;
+		else
+			mf.code = fmt->code;
 
-		fmt->width = clamp_t(u32, fmt->width, MIN_IN_WIDTH,
-				     resizer_max_in_width(res));
-		fmt->height = clamp_t(u32, fmt->height, MIN_IN_HEIGHT,
-				      MAX_IN_HEIGHT);
+		mf.width = clamp_t(u32, fmt->width, MIN_IN_WIDTH,
+				   resizer_max_in_width(res));
+		mf.height = clamp_t(u32, fmt->height, MIN_IN_HEIGHT,
+				    MAX_IN_HEIGHT);
 		break;
 
 	case RESZ_PAD_SOURCE:
@@ -1358,9 +1363,13 @@ static void resizer_try_format(struct isp_res_device *res,
 
 		crop = *__resizer_get_crop(res, fh, which);
 		resizer_calc_ratios(res, &crop, fmt, &ratio);
+		mf.width = fmt->width;
+		mf.height = fmt->height;
+		mf.code = fmt->code;
 		break;
 	}
 
+	*fmt = mf;
 	fmt->colorspace = V4L2_COLORSPACE_JPEG;
 	fmt->field = V4L2_FIELD_NONE;
 }
diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c
index 0df7f2a..b9d1f03 100644
--- a/drivers/media/video/s5k6aa.c
+++ b/drivers/media/video/s5k6aa.c
@@ -1070,8 +1070,10 @@ __s5k6aa_get_crop_rect(struct s5k6aa *s5k6aa, struct v4l2_subdev_fh *fh,
 static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
 			      struct v4l2_mbus_framefmt *mf)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	unsigned int index;
 
+	memset((u8 *)mf + offset, 0, sizeof(*mf) - offset);
 	v4l_bound_align_image(&mf->width, S5K6AA_WIN_WIDTH_MIN,
 			      S5K6AA_WIN_WIDTH_MAX, 1,
 			      &mf->height, S5K6AA_WIN_HEIGHT_MIN,
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCHv5] v4l: Update subdev drivers to handle framesamples parameter
  2011-12-14 12:23           ` [PATCHv4 2/2] v4l: Update subdev drivers to handle framesamples parameter Sylwester Nawrocki
@ 2011-12-15 10:14             ` Sylwester Nawrocki
  0 siblings, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-15 10:14 UTC (permalink / raw)
  To: linux-media
  Cc: laurent.pinchart, sakari.ailus, riverful.kim, s.nawrocki, Kyungmin Park

Update the sub-device drivers that have a devnode enabled so they properly
handle the new framesamples field of struct v4l2_mbus_framefmt.
These drivers don't support compressed (entropy encoded) formats, so the
framesamples field is simply initialized to 0, together with the
reserved field.

There are a few other drivers that expose a devnode (mt9p031, mt9t001,
mt9v032), but they already implicitly initialize the new data structure
field to 0, so they don't need to be touched.

Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
---
Changes since v4:
 - in the try format functions use local copies of only width, height and code
   rather than caching the whole struct v4l2_mbus_framefmt

Changes since v3:
 - use a local copy of the format data structure to clear all structure
   members which are unused by a driver, in a way that doesn't break
   after new fields are added to struct v4l2_mbus_framefmt;

The omap3isp changes are only compile tested. Thus I'd like to ask someone 
who has access to the hardware to test the patch and provide Ack or 
Tested-by :)

Thanks,
Sylwester
---
 drivers/media/video/noon010pc30.c         |    3 +++
 drivers/media/video/omap3isp/ispccdc.c    |    8 ++++++--
 drivers/media/video/omap3isp/ispccp2.c    |   16 +++++++++++-----
 drivers/media/video/omap3isp/ispcsi2.c    |   14 +++++++++-----
 drivers/media/video/omap3isp/isppreview.c |   15 ++++++++++-----
 drivers/media/video/omap3isp/ispresizer.c |    3 +++
 drivers/media/video/s5k6aa.c              |    2 ++
 7 files changed, 44 insertions(+), 17 deletions(-)

diff --git a/drivers/media/video/noon010pc30.c b/drivers/media/video/noon010pc30.c
index 50838bf..9a6a7ac 100644
--- a/drivers/media/video/noon010pc30.c
+++ b/drivers/media/video/noon010pc30.c
@@ -519,6 +519,7 @@ static int noon010_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 	mf = &fmt->format;
 
 	mutex_lock(&info->lock);
+	memset(mf, 0, sizeof(*mf));
 	mf->width = info->curr_win->width;
 	mf->height = info->curr_win->height;
 	mf->code = info->curr_fmt->code;
@@ -546,12 +547,14 @@ static const struct noon010_format *noon010_try_fmt(struct v4l2_subdev *sd,
 static int noon010_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
 			   struct v4l2_subdev_format *fmt)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct noon010_info *info = to_noon010(sd);
 	const struct noon010_frmsize *size = NULL;
 	const struct noon010_format *nf;
 	struct v4l2_mbus_framefmt *mf;
 	int ret = 0;
 
+	memset((u8 *)&fmt->format + offset, 0, sizeof(fmt->format) - offset);
 	nf = noon010_try_fmt(sd, &fmt->format);
 	noon010_try_frame_size(&fmt->format, &size);
 	fmt->format.colorspace = V4L2_COLORSPACE_JPEG;
diff --git a/drivers/media/video/omap3isp/ispccdc.c b/drivers/media/video/omap3isp/ispccdc.c
index b0b0fa5..9e12548 100644
--- a/drivers/media/video/omap3isp/ispccdc.c
+++ b/drivers/media/video/omap3isp/ispccdc.c
@@ -1802,6 +1802,7 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 		unsigned int pad, struct v4l2_mbus_framefmt *fmt,
 		enum v4l2_subdev_format_whence which)
 {
+	enum v4l2_mbus_pixelcode pixelcode = fmt->code;
 	struct v4l2_mbus_framefmt *format;
 	const struct isp_format_info *info;
 	unsigned int width = fmt->width;
@@ -1810,21 +1811,24 @@ ccdc_try_format(struct isp_ccdc_device *ccdc, struct v4l2_subdev_fh *fh,
 
 	switch (pad) {
 	case CCDC_PAD_SINK:
+		memset(fmt, 0, sizeof(*fmt));
+
 		/* TODO: If the CCDC output formatter pad is connected directly
 		 * to the resizer, only YUV formats can be used.
 		 */
 		for (i = 0; i < ARRAY_SIZE(ccdc_fmts); i++) {
-			if (fmt->code == ccdc_fmts[i])
+			if (pixelcode == ccdc_fmts[i])
 				break;
 		}
 
 		/* If not found, use SGRBG10 as default */
 		if (i >= ARRAY_SIZE(ccdc_fmts))
-			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
+			pixelcode = V4L2_MBUS_FMT_SGRBG10_1X10;
 
 		/* Clamp the input size. */
 		fmt->width = clamp_t(u32, width, 32, 4096);
 		fmt->height = clamp_t(u32, height, 32, 4096);
+		fmt->code = pixelcode;
 		break;
 
 	case CCDC_PAD_SOURCE_OF:
diff --git a/drivers/media/video/omap3isp/ispccp2.c b/drivers/media/video/omap3isp/ispccp2.c
index 904ca8c..864b220 100644
--- a/drivers/media/video/omap3isp/ispccp2.c
+++ b/drivers/media/video/omap3isp/ispccp2.c
@@ -673,25 +673,31 @@ static void ccp2_try_format(struct isp_ccp2_device *ccp2,
 			       struct v4l2_mbus_framefmt *fmt,
 			       enum v4l2_subdev_format_whence which)
 {
+	enum v4l2_mbus_pixelcode pixelcode = fmt->code;
+	unsigned int height = fmt->height;
+	unsigned int width = fmt->width;
 	struct v4l2_mbus_framefmt *format;
 
 	switch (pad) {
 	case CCP2_PAD_SINK:
-		if (fmt->code != V4L2_MBUS_FMT_SGRBG10_DPCM8_1X8)
+		memset(fmt, 0, sizeof(*fmt));
+		if (pixelcode != V4L2_MBUS_FMT_SGRBG10_DPCM8_1X8)
 			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			fmt->code = pixelcode;
 
 		if (ccp2->input == CCP2_INPUT_SENSOR) {
-			fmt->width = clamp_t(u32, fmt->width,
+			fmt->width = clamp_t(u32, width,
 					     ISPCCP2_DAT_START_MIN,
 					     ISPCCP2_DAT_START_MAX);
-			fmt->height = clamp_t(u32, fmt->height,
+			fmt->height = clamp_t(u32, height,
 					      ISPCCP2_DAT_SIZE_MIN,
 					      ISPCCP2_DAT_SIZE_MAX);
 		} else if (ccp2->input == CCP2_INPUT_MEMORY) {
-			fmt->width = clamp_t(u32, fmt->width,
+			fmt->width = clamp_t(u32, width,
 					     ISPCCP2_LCM_HSIZE_COUNT_MIN,
 					     ISPCCP2_LCM_HSIZE_COUNT_MAX);
-			fmt->height = clamp_t(u32, fmt->height,
+			fmt->height = clamp_t(u32, height,
 					      ISPCCP2_LCM_VSIZE_MIN,
 					      ISPCCP2_LCM_VSIZE_MAX);
 		}
diff --git a/drivers/media/video/omap3isp/ispcsi2.c b/drivers/media/video/omap3isp/ispcsi2.c
index 0c5f1cb..1d37c6b 100644
--- a/drivers/media/video/omap3isp/ispcsi2.c
+++ b/drivers/media/video/omap3isp/ispcsi2.c
@@ -846,32 +846,36 @@ csi2_try_format(struct isp_csi2_device *csi2, struct v4l2_subdev_fh *fh,
 		unsigned int pad, struct v4l2_mbus_framefmt *fmt,
 		enum v4l2_subdev_format_whence which)
 {
-	enum v4l2_mbus_pixelcode pixelcode;
+	enum v4l2_mbus_pixelcode pixelcode = fmt->code;
 	struct v4l2_mbus_framefmt *format;
 	const struct isp_format_info *info;
+	u32 width = fmt->width;
+	u32 height = fmt->height;
 	unsigned int i;
 
 	switch (pad) {
 	case CSI2_PAD_SINK:
+		memset(fmt, 0, sizeof(*fmt));
 		/* Clamp the width and height to valid range (1-8191). */
 		for (i = 0; i < ARRAY_SIZE(csi2_input_fmts); i++) {
-			if (fmt->code == csi2_input_fmts[i])
+			if (pixelcode == csi2_input_fmts[i])
 				break;
 		}
 
 		/* If not found, use SGRBG10 as default */
 		if (i >= ARRAY_SIZE(csi2_input_fmts))
 			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			fmt->code = pixelcode;
 
-		fmt->width = clamp_t(u32, fmt->width, 1, 8191);
-		fmt->height = clamp_t(u32, fmt->height, 1, 8191);
+		fmt->width = clamp_t(u32, width, 1, 8191);
+		fmt->height = clamp_t(u32, height, 1, 8191);
 		break;
 
 	case CSI2_PAD_SOURCE:
 		/* Source format same as sink format, except for DPCM
 		 * compression.
 		 */
-		pixelcode = fmt->code;
 		format = __csi2_get_format(csi2, fh, CSI2_PAD_SINK, which);
 		memcpy(fmt, format, sizeof(*fmt));
 
diff --git a/drivers/media/video/omap3isp/isppreview.c b/drivers/media/video/omap3isp/isppreview.c
index ccb876f..84e1095 100644
--- a/drivers/media/video/omap3isp/isppreview.c
+++ b/drivers/media/video/omap3isp/isppreview.c
@@ -1656,10 +1656,14 @@ static void preview_try_format(struct isp_prev_device *prev,
 			       struct v4l2_mbus_framefmt *fmt,
 			       enum v4l2_subdev_format_whence which)
 {
-	enum v4l2_mbus_pixelcode pixelcode;
+	enum v4l2_mbus_pixelcode pixelcode = fmt->code;
+	u32 height = fmt->height;
+	u32 width = fmt->width;
 	struct v4l2_rect *crop;
 	unsigned int i;
 
+	memset(fmt, 0, sizeof(*fmt));
+
 	switch (pad) {
 	case PREV_PAD_SINK:
 		/* When reading data from the CCDC, the input size has already
@@ -1672,9 +1676,9 @@ static void preview_try_format(struct isp_prev_device *prev,
 		 * filter array interpolation.
 		 */
 		if (prev->input == PREVIEW_INPUT_MEMORY) {
-			fmt->width = clamp_t(u32, fmt->width, PREV_MIN_IN_WIDTH,
+			fmt->width = clamp_t(u32, width, PREV_MIN_IN_WIDTH,
 					     preview_max_out_width(prev));
-			fmt->height = clamp_t(u32, fmt->height,
+			fmt->height = clamp_t(u32, height,
 					      PREV_MIN_IN_HEIGHT,
 					      PREV_MAX_IN_HEIGHT);
 		}
@@ -1682,17 +1686,18 @@ static void preview_try_format(struct isp_prev_device *prev,
 		fmt->colorspace = V4L2_COLORSPACE_SRGB;
 
 		for (i = 0; i < ARRAY_SIZE(preview_input_fmts); i++) {
-			if (fmt->code == preview_input_fmts[i])
+			if (pixelcode == preview_input_fmts[i])
 				break;
 		}
 
 		/* If not found, use SGRBG10 as default */
 		if (i >= ARRAY_SIZE(preview_input_fmts))
 			fmt->code = V4L2_MBUS_FMT_SGRBG10_1X10;
+		else
+			fmt->code = pixelcode;
 		break;
 
 	case PREV_PAD_SOURCE:
-		pixelcode = fmt->code;
 		*fmt = *__preview_get_format(prev, fh, PREV_PAD_SINK, which);
 
 		switch (pixelcode) {
diff --git a/drivers/media/video/omap3isp/ispresizer.c b/drivers/media/video/omap3isp/ispresizer.c
index 50e593b..82f6985 100644
--- a/drivers/media/video/omap3isp/ispresizer.c
+++ b/drivers/media/video/omap3isp/ispresizer.c
@@ -1336,6 +1336,7 @@ static void resizer_try_format(struct isp_res_device *res,
 			       struct v4l2_mbus_framefmt *fmt,
 			       enum v4l2_subdev_format_whence which)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	struct v4l2_mbus_framefmt *format;
 	struct resizer_ratio ratio;
 	struct v4l2_rect crop;
@@ -1363,6 +1364,8 @@ static void resizer_try_format(struct isp_res_device *res,
 
 	fmt->colorspace = V4L2_COLORSPACE_JPEG;
 	fmt->field = V4L2_FIELD_NONE;
+
+	memset((u8 *)fmt + offset, 0, sizeof(*fmt) - offset);
 }
 
 /*
diff --git a/drivers/media/video/s5k6aa.c b/drivers/media/video/s5k6aa.c
index 0df7f2a..b9d1f03 100644
--- a/drivers/media/video/s5k6aa.c
+++ b/drivers/media/video/s5k6aa.c
@@ -1070,8 +1070,10 @@ __s5k6aa_get_crop_rect(struct s5k6aa *s5k6aa, struct v4l2_subdev_fh *fh,
 static void s5k6aa_try_format(struct s5k6aa *s5k6aa,
 			      struct v4l2_mbus_framefmt *mf)
 {
+	const int offset = offsetof(struct v4l2_mbus_framefmt, framesamples);
 	unsigned int index;
 
+	memset((u8 *)mf + offset, 0, sizeof(*mf) - offset);
 	v4l_bound_align_image(&mf->width, S5K6AA_WIN_WIDTH_MIN,
 			      S5K6AA_WIN_WIDTH_MAX, 1,
 			      &mf->height, S5K6AA_WIN_HEIGHT_MIN,
-- 
1.7.8


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-14 12:23           ` [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
@ 2011-12-21  0:20             ` Laurent Pinchart
  2011-12-22 11:35               ` Sylwester Nawrocki
                                 ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Laurent Pinchart @ 2011-12-21  0:20 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: linux-media, g.liakhovetski, sakari.ailus, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Sylwester,

On Wednesday 14 December 2011 13:23:07 Sylwester Nawrocki wrote:
> The purpose of the new field is to allow the video pipeline elements
> to negotiate memory buffer size for compressed data frames, where
> the buffer size cannot be derived from pixel width and height and
> the pixel code.
> 
> For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
> framesamples parameter should be calculated by the driver from pixel
> width, height, color format and other parameters if required and
> returned to the caller. This applies to compressed data formats only.
> 
> The application should propagate the framesamples value, whatever
> returned at the first sub-device within a data pipeline, i.e. at the
> pipeline's data source.
> 
> For compressed data formats the host drivers should internally
> validate the framesamples parameter values before streaming is
> enabled, to make sure the memory buffer size requirements are
> satisfied along the pipeline.
> 
> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> --
> There is no changes in this patch comparing to v3.
> ---
>  Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
>  Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
>  include/linux/v4l2-mediabus.h                      |    4 +++-
>  3 files changed, 19 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml
> b/Documentation/DocBook/media/v4l/dev-subdev.xml index 0916a73..b9d24eb
> 100644
> --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
> +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml

> @@ -160,7 +160,13 @@
>        guaranteed to be supported by the device. In particular, drivers
> guarantee that a returned format will not be further changed if passed to
> an &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such
> as
> -      formats on other pads or links' configuration are not changed).
> </para>
> +      formats on other pads or links' configuration are not changed). When
> +      a device contains a data encoder, the <structfield>
> +      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
> +      </structfield> field value may be further changed, if parameters of
> the
> +      encoding process are changed after the format has been negotiated. In
> +      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
> +      query an updated format.</para>

Sorry for answering so late. I've been thinking about this topic (as well as 
the proposed new pixelclock field) quite a lot, and one question strikes me 
here (please don't hate me): does userspace need to care about the 
framesamples field ? It looks like the value is only used inside the kernel, 
and we're relying on userspace to propagate those values between subdevs.

If that's the case, wouldn't it be better to have an in-kernel API to handle 
this ? I'm a bit concerned about forcing userspace to handle internal
information if there's no reason to do so.

What's the rationale behind your solution? Is there a need for the
framesamples information in userspace ?

>        <para>Drivers automatically propagate formats inside sub-devices.
> When a try or active format is set on a pad, corresponding formats on
> other pads diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml
> b/Documentation/DocBook/media/v4l/subdev-formats.xml index
> 49c532e..7c202a1 100644
> --- a/Documentation/DocBook/media/v4l/subdev-formats.xml
> +++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
> @@ -33,9 +33,16 @@
>  	  <entry>Image colorspace, from &v4l2-colorspace;. See
>  	  <xref linkend="colorspaces" /> for details.</entry>
>  	</row>
> +	<row id="v4l2-mbus-framefmt-framesamples">
> +	  <entry>__u32</entry>
> +	  <entry><structfield>framesamples</structfield></entry>
> +	  <entry>Maximum number of bus samples per frame for compressed data
> +	    formats. For uncompressed formats drivers and applications must
> +	    set this parameter to zero. </entry>
> +	</row>
>  	<row>
>  	  <entry>__u32</entry>
> -	  <entry><structfield>reserved</structfield>[7]</entry>
> +	  <entry><structfield>reserved</structfield>[6]</entry>
>  	  <entry>Reserved for future extensions. Applications and drivers must
>  	  set the array to zero.</entry>
>  	</row>
> diff --git a/include/linux/v4l2-mediabus.h b/include/linux/v4l2-mediabus.h
> index 5ea7f75..f18d6cd 100644
> --- a/include/linux/v4l2-mediabus.h
> +++ b/include/linux/v4l2-mediabus.h
> @@ -101,6 +101,7 @@ enum v4l2_mbus_pixelcode {
>   * @code:	data format code (from enum v4l2_mbus_pixelcode)
>   * @field:	used interlacing type (from enum v4l2_field)
>   * @colorspace:	colorspace of the data (from enum v4l2_colorspace)
> + * @framesamples: maximum number of bus samples per frame
>   */
>  struct v4l2_mbus_framefmt {
>  	__u32			width;
> @@ -108,7 +109,8 @@ struct v4l2_mbus_framefmt {
>  	__u32			code;
>  	__u32			field;
>  	__u32			colorspace;
> -	__u32			reserved[7];
> +	__u32			framesamples;
> +	__u32			reserved[6];
>  };
> 
>  #endif

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-21  0:20             ` Laurent Pinchart
@ 2011-12-22 11:35               ` Sylwester Nawrocki
  2011-12-26 12:53               ` Sakari Ailus
  2012-01-06 14:04               ` Sylwester Nawrocki
  2 siblings, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-22 11:35 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, linux-media, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, Kyungmin Park

Hi Laurent,

On 12/21/2011 01:20 AM, Laurent Pinchart wrote:
> On Wednesday 14 December 2011 13:23:07 Sylwester Nawrocki wrote:
>> The purpose of the new field is to allow the video pipeline elements
>> to negotiate memory buffer size for compressed data frames, where
>> the buffer size cannot be derived from pixel width and height and
>> the pixel code.
>>
>> For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
>> framesamples parameter should be calculated by the driver from pixel
>> width, height, color format and other parameters if required and
>> returned to the caller. This applies to compressed data formats only.
>>
>> The application should propagate the framesamples value, whatever
>> returned at the first sub-device within a data pipeline, i.e. at the
>> pipeline's data source.
>>
>> For compressed data formats the host drivers should internally
>> validate the framesamples parameter values before streaming is
>> enabled, to make sure the memory buffer size requirements are
>> satisfied along the pipeline.
>>
>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
...
>> --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
>> +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
> 
>> @@ -160,7 +160,13 @@
>>        guaranteed to be supported by the device. In particular, drivers
>> guarantee that a returned format will not be further changed if passed to
>> an &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such
>> as
>> -      formats on other pads or links' configuration are not changed).
>> </para>
>> +      formats on other pads or links' configuration are not changed). When
>> +      a device contains a data encoder, the <structfield>
>> +      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
>> +      </structfield> field value may be further changed, if parameters of
>> the
>> +      encoding process are changed after the format has been negotiated. In
>> +      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
>> +      query an updated format.</para>
> 
> Sorry for answering so late. I've been thinking about this topic (as well as 
> the proposed new pixelclock field) quite a lot, and one question strikes me

Sure, that's OK. I knew from the beginning you had your doubts about adding
those new fields.

> here (please don't hate me): does userspace need to care about the 

No problem, the patch probably deserved a NACK.. ;)

> framesamples field ? It looks like the value is only used inside the kernel, 
> and we're relying on on userspace to propagate those values between subdevs.

It's mostly used in the kernel, yes. But I also had in mind retrieving some
metadata directly from a sensor subdev, e.g. using controls. And that metadata
would be related to the maximum frame length output by the camera.

But that could be handled differently, e.g. by retrieving metadata from
a sensor through a subdev callback and appending it to a user buffer as
a separate plane.

> If that's the case, wouldn't it be better to have an in-kernel API to handle 
> this ? I'm a bit concerned about forcing userspace to handle internal 
> information to userspace if there's no reason to do so.

The maximum frame length is relevant only at the image source and the host (video
node), hence there should not really be a need to propagate anything. The host
would retrieve the frame length internally from the subdev.

What I wanted to avoid was creating another pair of subdev callbacks that
would look very similar to the pad level set_fmt/get_fmt or try/g/s_mbus_fmt
operations.

I've found it a bit difficult to handle VIDIOC_TRY_FMT at the pipeline's final
video node.
By having framesamples (framelength is probably a better name) in
struct v4l2_mbus_framefmt the subdev has most of the information needed to
calculate the framelength in one data structure, and thus the pad level
set_fmt(TRY) op can be used to ask a subdev what the framelength is for a given
media bus pixel format.
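
Roughly along these lines, as a sketch only (the pad number, the fh argument
handling and hardcoding the JPEG code are assumptions of the example, not
something a real host would necessarily do):

static int example_try_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
			   struct v4l2_pix_format *pix)
{
	struct v4l2_subdev_format sd_fmt = {
		.pad = 0,				/* assumed source pad */
		.which = V4L2_SUBDEV_FORMAT_TRY,
	};
	int ret;

	sd_fmt.format.width = pix->width;
	sd_fmt.format.height = pix->height;
	sd_fmt.format.code = V4L2_MBUS_FMT_JPEG_1X8;

	ret = v4l2_subdev_call(sd, pad, set_fmt, fh, &sd_fmt);
	if (ret)
		return ret;

	/* JPEG_1X8: one sample per byte, so framesamples bounds sizeimage */
	if (pix->sizeimage < sd_fmt.format.framesamples)
		pix->sizeimage = sd_fmt.format.framesamples;

	return 0;
}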

I realize this slightly cleaner in-kernel API adds complexity in user space,
which might not be worth it. But if we shouldn't have framelength in the media
bus format then it's even more questionable to have pixelrate there.

> What's the rationale between your solution, is there a need for the 
> framesamples information in userspace ?

It might be useful to control the compression quality and file size trade-off
directly at a sensor subdev. However, this could also be done with the sizeimage
field in struct v4l2_pix_format, which seems the proper way, since it's a video
node that deals with memory, not a subdev. But Samsung sensors (well, cameras)
buffer the image data internally, mix various data (like JPEG and YUV) in one
memory plane and then send it out in one go. Thus a memory view may also be
applicable to them.


--

Thanks,
Sylwester

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-21  0:20             ` Laurent Pinchart
  2011-12-22 11:35               ` Sylwester Nawrocki
@ 2011-12-26 12:53               ` Sakari Ailus
  2011-12-28 17:09                 ` Sylwester Nawrocki
  2012-01-06 14:04               ` Sylwester Nawrocki
  2 siblings, 1 reply; 25+ messages in thread
From: Sakari Ailus @ 2011-12-26 12:53 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, linux-media, g.liakhovetski, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Laurent and Sylwester,

On Wed, Dec 21, 2011 at 01:20:56AM +0100, Laurent Pinchart wrote:
> Hi Sylwester,
> 
> On Wednesday 14 December 2011 13:23:07 Sylwester Nawrocki wrote:
> > The purpose of the new field is to allow the video pipeline elements
> > to negotiate memory buffer size for compressed data frames, where
> > the buffer size cannot be derived from pixel width and height and
> > the pixel code.
> > 
> > For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
> > framesamples parameter should be calculated by the driver from pixel
> > width, height, color format and other parameters if required and
> > returned to the caller. This applies to compressed data formats only.
> > 
> > The application should propagate the framesamples value, whatever
> > returned at the first sub-device within a data pipeline, i.e. at the
> > pipeline's data source.
> > 
> > For compressed data formats the host drivers should internally
> > validate the framesamples parameter values before streaming is
> > enabled, to make sure the memory buffer size requirements are
> > satisfied along the pipeline.
> > 
> > Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> > Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> > --
> > There is no changes in this patch comparing to v3.
> > ---
> >  Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
> >  Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
> >  include/linux/v4l2-mediabus.h                      |    4 +++-
> >  3 files changed, 19 insertions(+), 4 deletions(-)
> > 
> > diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml
> > b/Documentation/DocBook/media/v4l/dev-subdev.xml index 0916a73..b9d24eb
> > 100644
> > --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
> > +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
> 
> > @@ -160,7 +160,13 @@
> >        guaranteed to be supported by the device. In particular, drivers
> > guarantee that a returned format will not be further changed if passed to
> > an &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such
> > as
> > -      formats on other pads or links' configuration are not changed).
> > </para>
> > +      formats on other pads or links' configuration are not changed). When
> > +      a device contains a data encoder, the <structfield>
> > +      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
> > +      </structfield> field value may be further changed, if parameters of
> > the
> > +      encoding process are changed after the format has been negotiated. In
> > +      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
> > +      query an updated format.</para>
> 
> Sorry for answering so late. I've been thinking about this topic (as well as 
> the proposed new pixelclock field) quite a lot, and one question strikes me 
> here (please don't hate me): does userspace need to care about the 
> framesamples field ? It looks like the value is only used inside the kernel, 
> and we're relying on on userspace to propagate those values between subdevs.
> 
> If that's the case, wouldn't it be better to have an in-kernel API to handle 
> this ? I'm a bit concerned about forcing userspace to handle internal 
> information to userspace if there's no reason to do so.

I feel partly the same about pixelrate --- there are sound reasons to export
that to user space still, but the method to do that could be something other
than putting it in v4l2_mbus_framefmt.

I could think of an in-kernel counterpart for v4l2_mbus_framefmt, say,
v4l2_mbus_framedesc. This could then be passed from one subdev to another using
a new subdev op.

Something else that should probably belong there is information on the frame
format: contrary to what I've previously thought, the sensor metadata is
often sent as part of the same CSI-2 channel. There can also be other types
of data, such as dummy data and data for black level calibration. I wouldn't
want to export all this to the user space --- it probably shouldn't need to
care about it.

The transmitter of the data (sensor) has this information and the CSI-2
receiver needs it. Same for the framesamples, as far as I understand.

Pixelrate is also used to figure out whether a pipeline can do streaming or
not; the pixel rate originating from the sensor could be higher than the
maximum of the ISP. For this reason, as well as for providing timing
information, access to pixelrate is required in user space.

Configuring the framesamples could be done on the sensor using a control if
necessary.

Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
some time?

Cheers,

-- 
Sakari Ailus
e-mail: sakari.ailus@iki.fi	jabber/XMPP/Gmail: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-26 12:53               ` Sakari Ailus
@ 2011-12-28 17:09                 ` Sylwester Nawrocki
  2011-12-31 13:16                   ` Sakari Ailus
  0 siblings, 1 reply; 25+ messages in thread
From: Sylwester Nawrocki @ 2011-12-28 17:09 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, Sylwester Nawrocki, linux-media,
	g.liakhovetski, m.szyprowski, riverful.kim, Kyungmin Park

Hi Sakari,

On 12/26/2011 01:53 PM, Sakari Ailus wrote:
> On Wed, Dec 21, 2011 at 01:20:56AM +0100, Laurent Pinchart wrote:
>> On Wednesday 14 December 2011 13:23:07 Sylwester Nawrocki wrote:
>>> The purpose of the new field is to allow the video pipeline elements
>>> to negotiate memory buffer size for compressed data frames, where
>>> the buffer size cannot be derived from pixel width and height and
>>> the pixel code.
>>>
>>> For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
>>> framesamples parameter should be calculated by the driver from pixel
>>> width, height, color format and other parameters if required and
>>> returned to the caller. This applies to compressed data formats only.
>>>
>>> The application should propagate the framesamples value, whatever
>>> returned at the first sub-device within a data pipeline, i.e. at the
>>> pipeline's data source.
>>>
>>> For compressed data formats the host drivers should internally
>>> validate the framesamples parameter values before streaming is
>>> enabled, to make sure the memory buffer size requirements are
>>> satisfied along the pipeline.
>>>
>>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
>>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
>>> --
>>> There is no changes in this patch comparing to v3.
>>> ---
>>>  Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
>>>  Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
>>>  include/linux/v4l2-mediabus.h                      |    4 +++-
>>>  3 files changed, 19 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml
>>> b/Documentation/DocBook/media/v4l/dev-subdev.xml index 0916a73..b9d24eb
>>> 100644
>>> --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
>>> +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
>>
>>> @@ -160,7 +160,13 @@
>>>        guaranteed to be supported by the device. In particular, drivers
>>> guarantee that a returned format will not be further changed if passed to
>>> an &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such
>>> as
>>> -      formats on other pads or links' configuration are not changed).
>>> </para>
>>> +      formats on other pads or links' configuration are not changed). When
>>> +      a device contains a data encoder, the <structfield>
>>> +      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
>>> +      </structfield> field value may be further changed, if parameters of
>>> the
>>> +      encoding process are changed after the format has been negotiated. In
>>> +      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
>>> +      query an updated format.</para>
>>
>> Sorry for answering so late. I've been thinking about this topic (as well as 
>> the proposed new pixelclock field) quite a lot, and one question strikes me 
>> here (please don't hate me): does userspace need to care about the 
>> framesamples field ? It looks like the value is only used inside the kernel, 
>> and we're relying on on userspace to propagate those values between subdevs.
>>
>> If that's the case, wouldn't it be better to have an in-kernel API to handle 
>> this ? I'm a bit concerned about forcing userspace to handle internal 
>> information to userspace if there's no reason to do so.
> 
> I feel partly the same about pixelrate --- there are sound reasons to export
> that to user space still, but the method to do that could be something else
> than putting it to v4l2_mbus_framefmt.
> 
> I could think of an in-kernel counterpart for v4l2_mbus_framefmt, say,
> v4l2_mbus_framedesc. This could then be passed from subdev to another using
> a new subdev op.

That might be needed eventually. But I'm not a great fan in general of yet
another set of callbacks for media bus frame format setup.

> Something else that should probably belong there is information on the frame
> format: contrary to what I've previously thought, the sensor metadata is
> often sent as part of the same CSI-2 channel. There also can be other types
> of data, such as dummy data and data for black level calibration. I wouldn't
> want to export all this to the user space --- it shouldn't probably need to
> care about it.
> 
> The transmitter of the data (sensor) has this information and the CSI-2
> receiver needs it. Same for the framesamples, as far as I understand.

We could try to design some standard data structure for frame metadata -
that's how I understood the meaning of struct v4l2_mbus_framedesc.
But I doubt such attempts will be successful. And how can we distinguish
which data is valid and applicable when there is a lot of weird stuff in one
data structure ? Using the media bus pixel code only ?

> Pixelrate is also used to figure out whether a pipeline can do streaming or
> not; the pixel rate originating from the sensor could be higher than the
> maximum of the ISP. For this reason, as well as for providing timing
> information, access to pixelrate is reequired in the user space.
> 
> Configuring the framesamples could be done on the sensor using a control if
> necessary.

Sure, that could work. But as I mentioned before, the host drivers would have
to get such a control value internally from subdevs. Not so nice IMHO, although
I'm not in big opposition to that either.

Grepping for v4l2_ctrl_g_ctrl(), all the drivers appear to use it locally only.
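
For what it's worth, the host-side read would be tiny; something like the
snippet below, where the control ID is completely made up and only the
existing control framework calls are real:

#include <media/v4l2-ctrls.h>

#define EXAMPLE_CID_FRAMESAMPLES	(V4L2_CID_PRIVATE_BASE + 1)	/* made up */

static int example_get_framesamples(struct v4l2_subdev *sd, s32 *framesamples)
{
	struct v4l2_ctrl *ctrl;

	ctrl = v4l2_ctrl_find(sd->ctrl_handler, EXAMPLE_CID_FRAMESAMPLES);
	if (!ctrl)
		return -ENOENT;

	*framesamples = v4l2_ctrl_g_ctrl(ctrl);
	return 0;
}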

> Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
> some time?

I'm available any time this week. :)

--
Thanks,
Sylwester

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-28 17:09                 ` Sylwester Nawrocki
@ 2011-12-31 13:16                   ` Sakari Ailus
  2012-01-01 18:56                     ` Sylwester Nawrocki
  0 siblings, 1 reply; 25+ messages in thread
From: Sakari Ailus @ 2011-12-31 13:16 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, Sylwester Nawrocki, linux-media,
	g.liakhovetski, m.szyprowski, riverful.kim, Kyungmin Park

Hi Sylwester,

On Wed, Dec 28, 2011 at 06:09:17PM +0100, Sylwester Nawrocki wrote:
> On 12/26/2011 01:53 PM, Sakari Ailus wrote:
> > On Wed, Dec 21, 2011 at 01:20:56AM +0100, Laurent Pinchart wrote:
> >> On Wednesday 14 December 2011 13:23:07 Sylwester Nawrocki wrote:
> >>> The purpose of the new field is to allow the video pipeline elements
> >>> to negotiate memory buffer size for compressed data frames, where
> >>> the buffer size cannot be derived from pixel width and height and
> >>> the pixel code.
> >>>
> >>> For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
> >>> framesamples parameter should be calculated by the driver from pixel
> >>> width, height, color format and other parameters if required and
> >>> returned to the caller. This applies to compressed data formats only.
> >>>
> >>> The application should propagate the framesamples value, whatever
> >>> returned at the first sub-device within a data pipeline, i.e. at the
> >>> pipeline's data source.
> >>>
> >>> For compressed data formats the host drivers should internally
> >>> validate the framesamples parameter values before streaming is
> >>> enabled, to make sure the memory buffer size requirements are
> >>> satisfied along the pipeline.
> >>>
> >>> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
> >>> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> >>> --
> >>> There is no changes in this patch comparing to v3.
> >>> ---
> >>>  Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
> >>>  Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
> >>>  include/linux/v4l2-mediabus.h                      |    4 +++-
> >>>  3 files changed, 19 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml
> >>> b/Documentation/DocBook/media/v4l/dev-subdev.xml index 0916a73..b9d24eb
> >>> 100644
> >>> --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
> >>> +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
> >>
> >>> @@ -160,7 +160,13 @@
> >>>        guaranteed to be supported by the device. In particular, drivers
> >>> guarantee that a returned format will not be further changed if passed to
> >>> an &VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such
> >>> as
> >>> -      formats on other pads or links' configuration are not changed).
> >>> </para>
> >>> +      formats on other pads or links' configuration are not changed). When
> >>> +      a device contains a data encoder, the <structfield>
> >>> +      <link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
> >>> +      </structfield> field value may be further changed, if parameters of
> >>> the
> >>> +      encoding process are changed after the format has been negotiated. In
> >>> +      such situation applications should use &VIDIOC-SUBDEV-G-FMT; ioctl to
> >>> +      query an updated format.</para>
> >>
> >> Sorry for answering so late. I've been thinking about this topic (as well as 
> >> the proposed new pixelclock field) quite a lot, and one question strikes me 
> >> here (please don't hate me): does userspace need to care about the 
> >> framesamples field ? It looks like the value is only used inside the kernel, 
> >> and we're relying on on userspace to propagate those values between subdevs.
> >>
> >> If that's the case, wouldn't it be better to have an in-kernel API to handle 
> >> this ? I'm a bit concerned about forcing userspace to handle internal 
> >> information to userspace if there's no reason to do so.
> > 
> > I feel partly the same about pixelrate --- there are sound reasons to export
> > that to user space still, but the method to do that could be something else
> > than putting it to v4l2_mbus_framefmt.
> > 
> > I could think of an in-kernel counterpart for v4l2_mbus_framefmt, say,
> > v4l2_mbus_framedesc. This could then be passed from subdev to another using
> > a new subdev op.
> 
> That might be needed eventually. But I'm not a great fan in general of yet
> another set of callbacks for media bus frame format set up.
> 
> > Something else that should probably belong there is information on the frame
> > format: contrary to what I've previously thought, the sensor metadata is
> > often sent as part of the same CSI-2 channel. There also can be other types
> > of data, such as dummy data and data for black level calibration. I wouldn't
> > want to export all this to the user space --- it shouldn't probably need to
> > care about it.
> > 
> > The transmitter of the data (sensor) has this information and the CSI-2
> > receiver needs it. Same for the framesamples, as far as I understand.
> 
> We could try to design some standard data structure for frame metadata -
> that's how I understood the meaning of struct v4l2_mbus_framedesc.
> But I doubt such attempts will be sucessful. And how can we distinguish
> which data is valid and applicable when there is lots of weird stuff in one
> data structure ? Using media bus pixel code only ?

I think the media bus pixel code which is exported to the user space should
not be involved with the metadata.

The metadata is something that the user is likely interested in only in the
form it has in system memory. It won't be processed in any way before
it gets written to memory. The chosen mbus code may affect the format of the
metadata, but that's something the sensor driver knows -- and I've yet to
see a case where the user could choose the desired metadata format.

Alternatively we could make the metadata path a separate path from the image
data. I wonder how feasible that approach would be --- the subdevs would
still be the same.

> > Pixelrate is also used to figure out whether a pipeline can do streaming or
> > not; the pixel rate originating from the sensor could be higher than the
> > maximum of the ISP. For this reason, as well as for providing timing
> > information, access to pixelrate is reequired in the user space.
> > 
> > Configuring the framesamples could be done on the sensor using a control if
> > necessary.
> 
> Sure, that could work. But as I mentioned before, the host drivers would have
> to be getting such control internally from subdevs. Not so nice IMHO. Although
> I'm not in big opposition to that too.
> 
> Grepping for v4l2_ctrl_g_ctrl() all the drivers appear to use it locally only.

I don't think there's anything that really would prohibit doing this. There
would need to be a way for the host to make a control read-only, to prevent
changing framesamples while streaming.

Pad-specific controls likely require more work than this.

> > Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
> > some time?
> 
> I'm available any time this week. :)

I think the solution could be related to frame metadata if we intend to
specify the frame format. Btw. how does the framesamples relate to blanking?

The metadata in a regular frame spans a few lines in the top and sometimes
also on the bottom of that frame.

Late next week is fine for me.

-- 
Sakari Ailus
e-mail: sakari.ailus@iki.fi	jabber/XMPP/Gmail: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-31 13:16                   ` Sakari Ailus
@ 2012-01-01 18:56                     ` Sylwester Nawrocki
  2012-01-04 12:21                       ` Sakari Ailus
  2012-01-11 13:20                       ` Laurent Pinchart
  0 siblings, 2 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2012-01-01 18:56 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, linux-media, g.liakhovetski, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Sakari,

On 12/31/2011 02:16 PM, Sakari Ailus wrote:
>>> I could think of an in-kernel counterpart for v4l2_mbus_framefmt, say,
>>> v4l2_mbus_framedesc. This could then be passed from subdev to another using
>>> a new subdev op.
>>
>> That might be needed eventually. But I'm not a great fan in general of yet
>> another set of callbacks for media bus frame format set up.
>>
>>> Something else that should probably belong there is information on the frame
>>> format: contrary to what I've previously thought, the sensor metadata is
>>> often sent as part of the same CSI-2 channel. There also can be other types
>>> of data, such as dummy data and data for black level calibration. I wouldn't
>>> want to export all this to the user space --- it shouldn't probably need to
>>> care about it.
>>>
>>> The transmitter of the data (sensor) has this information and the CSI-2
>>> receiver needs it. Same for the framesamples, as far as I understand.
>>
>> We could try to design some standard data structure for frame metadata -
>> that's how I understood the meaning of struct v4l2_mbus_framedesc.
>> But I doubt such attempts will be sucessful. And how can we distinguish
>> which data is valid and applicable when there is lots of weird stuff in one
>> data structure ? Using media bus pixel code only ?
> 
> I think the media bus pixel code which is exported to the user space should
> not be involved with the metadata.

Then we need to find some method to distinguish streams with metadata on the
media bus, to be able to discard it before sending to user space.
I assume this is where struct v4l2_mbus_framedesc and related ops would help ?

Maybe we could create v4l2_mbus_framedesc with length (framesamples) member
in it and additionally 994 reserved bytes for future extensions ;-), e.g.

struct v4l2_mbus_framedesc {
	unsigned int length;
	unsigned int reserved[994];
};

struct v4l2_subdev_pad_ops {
	  ....
	int get_framedesc(int pad, struct v4l2_framedesc *fdesc);
	int set_framedesc(int pad, struct v4l2_framedesc fdesc);
};

This would ensure the same media bus format code regardless of frame metadata
presence.

In case metadata is sent in the same CSI channel, the required buffer length
might be greater than what the width/height and pixel code would suggest.
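
On the host side it could then be used roughly like below, to validate or
derive sizeimage for a compressed stream. Just a sketch of the idea -- none
of this exists yet, and the real op would of course take the struct
v4l2_subdev pointer as its first argument like the other pad ops do:

static int host_adjust_sizeimage(struct v4l2_subdev *sd, unsigned int pad,
				 struct v4l2_pix_format *pix)
{
	struct v4l2_mbus_framedesc fdesc;
	int ret;

	ret = v4l2_subdev_call(sd, pad, get_framedesc, pad, &fdesc);
	if (ret < 0)
		return ret;

	/* For V4L2_MBUS_FMT_JPEG_1X8 one bus sample equals one byte. */
	pix->sizeimage = max_t(u32, pix->sizeimage, fdesc.length);
	return 0;
}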

> The metadata is something that the user is likely interested only in the
> form it is in the system memory. It won't be processed in any way before
> it gets written to memory. The chosen mbus code may affect the format of the
> metadata, but that's something the sensor driver knows  -- and I've yet to
> see a case where the user could choose the desired metadata format.

> Alternatively we could make the metadata path a separate path from the image
> data. I wonder how feasible that approach would be --- the subdevs would
> still be the same.

I was also considering metadata as a sensor-specific data structure retrieved
by the host after a frame has been captured, and appending that data to a user
buffer. For such buffers a separate fourcc would be needed.

>>> Pixelrate is also used to figure out whether a pipeline can do streaming or
>>> not; the pixel rate originating from the sensor could be higher than the
>>> maximum of the ISP. For this reason, as well as for providing timing
>>> information, access to pixelrate is reequired in the user space.
>>>
>>> Configuring the framesamples could be done on the sensor using a control if
>>> necessary.
>>
>> Sure, that could work. But as I mentioned before, the host drivers would have
>> to be getting such control internally from subdevs. Not so nice IMHO. Although
>> I'm not in big opposition to that too.
>>
>> Grepping for v4l2_ctrl_g_ctrl() all the drivers appear to use it locally only.
> 
> I don't think there's anything that really would prohibit doing this. There
> would need to be a way for the host to make a control read-only, to prevent
> changing framesamples while streaming.

I would rather make the subdev driver ensure that all negotiated parameters
which, if changed during streaming, could crash the system stay unchanged after
streaming has started. It's as simple as checking the entity stream_count in
s_ctrl() and prohibiting a change of the control value if stream_count > 0.
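
Roughly like this in the subdev driver (sketch only -- the control id and the
helpers are made up, the point is just the stream_count check):

static int sensor_s_ctrl(struct v4l2_ctrl *ctrl)
{
	struct sensor_state *state = ctrl_to_state(ctrl);	/* made-up helper */

	switch (ctrl->id) {
	case V4L2_CID_FRAME_LENGTH:	/* made-up control id */
		/* Refuse to renegotiate the frame length while streaming. */
		if (state->sd.entity.stream_count > 0)
			return -EBUSY;
		return sensor_update_frame_length(state, ctrl->val);
	default:
		return -EINVAL;
	}
}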

> Pad-specific controls likely require more work than this.

Hmm, I'd forgotten that the fact framesamples is per-pad was an argument against
using a v4l2 control for this parameter. We still need per-pad controls for the
blanking controls, but for framesamples maybe it's better to just add a subdev
callback, as accessing this parameter on subdevs directly from user space isn't
really essential.

>>> Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
>>> some time?
>>
>> I'm available any time this week. :)
> 
> I think the solution could be related to frame metadata if we intend to
> specify the frame format. Btw. how does the framesamples relate to blanking?

Framesamples and blanking are on completely different levels. Framesamples
takes into account only active frame data, so H/V blanking doesn't matter here.
Framesamples is not intended for raw formats where blanking is applicable.

Framesamples only determines length of compressed stream, and blanking doesn't
really affect the data passed to and generated by a jpeg encoder.

> The metadata in a regular frame spans a few lines in the top and sometimes
> also on the bottom of that frame.

How do you handle it now, i.e. how does the host find out how much memory it
needs for a frame ? Or is the metadata just overwriting "valid" lines ?

> Late next week is fine for me.

Ok, I'll try to reserve some time then.

--

Regards,
Sylwester

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2012-01-01 18:56                     ` Sylwester Nawrocki
@ 2012-01-04 12:21                       ` Sakari Ailus
  2012-01-04 22:51                         ` Sylwester Nawrocki
  2012-01-11 13:20                       ` Laurent Pinchart
  1 sibling, 1 reply; 25+ messages in thread
From: Sakari Ailus @ 2012-01-04 12:21 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, linux-media, g.liakhovetski, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Sylwester,

On Sun, Jan 01, 2012 at 07:56:03PM +0100, Sylwester Nawrocki wrote:
> Hi Sakari,
> 
> On 12/31/2011 02:16 PM, Sakari Ailus wrote:
> >>> I could think of an in-kernel counterpart for v4l2_mbus_framefmt, say,
> >>> v4l2_mbus_framedesc. This could then be passed from subdev to another using
> >>> a new subdev op.
> >>
> >> That might be needed eventually. But I'm not a great fan in general of yet
> >> another set of callbacks for media bus frame format set up.
> >>
> >>> Something else that should probably belong there is information on the frame
> >>> format: contrary to what I've previously thought, the sensor metadata is
> >>> often sent as part of the same CSI-2 channel. There also can be other types
> >>> of data, such as dummy data and data for black level calibration. I wouldn't
> >>> want to export all this to the user space --- it shouldn't probably need to
> >>> care about it.
> >>>
> >>> The transmitter of the data (sensor) has this information and the CSI-2
> >>> receiver needs it. Same for the framesamples, as far as I understand.
> >>
> >> We could try to design some standard data structure for frame metadata -
> >> that's how I understood the meaning of struct v4l2_mbus_framedesc.
> >> But I doubt such attempts will be sucessful. And how can we distinguish
> >> which data is valid and applicable when there is lots of weird stuff in one
> >> data structure ? Using media bus pixel code only ?
> > 
> > I think the media bus pixel code which is exported to the user space should
> > not be involved with the metadata.
> 
> Then we need to find some method to distinguish streams with metadata on the
> media bus, to be able to discard it before sending to user space.
> I assume this is where struct v4l2_mbus_framedesc and related ops would help ?

I'd think so.

> Maybe we could create v4l2_mbus_framedesc with length (framesamples) member
> in it and additionally 994 reserved bytes for future extensions ;-), e.g.
> 
> struct v4l2_mbus_framedesc {
> 	unsigned int length;
> 	unsigned int rserved[994];
> };

Do we need to export this to the user space? In the first phase I'd like to
keep that static (i.e. only get op would be supported) and only visible in
the kernel. That would leave much more room for changes later on, if needed.

> struct v4l2_subdev_pad_ops {
> 	  ....
> 	int get_framedesc(int pad, struct v4l2_framedesc *fdesc);
> 	int set_framedesc(int pad, struct v4l2_framedesc fdesc);
> };
> 
> This would ensure same media bus format code regardless of frame meta data
> presence.
> 
> In case metadata is sent in same CSI channel, the required buffer length
> might be greater than what would width/height and pixel code suggest.

Partly for this reason we have g_skip_top_lines() op in sensor ops. It
instructs the receiver to discard the metadata, and possibly other data
which isn't as interesting --- could be just dummy.

Some CSI-2 receivers are able to write this to a different memory location;
we could expose this as a different video node. I'm proposing a different
video node since this is a separate queue: the format (in-memory pixel
format and dimensions) is different, and it is beneficial to have access to
this data as soon as possible. There is a caveat, though, if we also wish to
support metadata which is appended to the frame, rather than prepended.

> > The metadata is something that the user is likely interested only in the
> > form it is in the system memory. It won't be processed in any way before
> > it gets written to memory. The chosen mbus code may affect the format of the
> > metadata, but that's something the sensor driver knows  -- and I've yet to
> > see a case where the user could choose the desired metadata format.
> 
> > Alternatively we could make the metadata path a separate path from the image
> > data. I wonder how feasible that approach would be --- the subdevs would
> > still be the same.
> 
> I was also considering metadata as sensor specific data structure retrieved
> by the host after a frame has been captured and appending that data to a user
> buffer. For such buffers a separate fourcc would be needed.

Why after?

There are benefits in getting this to the user space without extra delays?

> >>> Pixelrate is also used to figure out whether a pipeline can do streaming or
> >>> not; the pixel rate originating from the sensor could be higher than the
> >>> maximum of the ISP. For this reason, as well as for providing timing
> >>> information, access to pixelrate is reequired in the user space.
> >>>
> >>> Configuring the framesamples could be done on the sensor using a control if
> >>> necessary.
> >>
> >> Sure, that could work. But as I mentioned before, the host drivers would have
> >> to be getting such control internally from subdevs. Not so nice IMHO. Although
> >> I'm not in big opposition to that too.
> >>
> >> Grepping for v4l2_ctrl_g_ctrl() all the drivers appear to use it locally only.
> > 
> > I don't think there's anything that really would prohibit doing this. There
> > would need to be a way for the host to make a control read-only, to prevent
> > changing framesamples while streaming.
> 
> I would rather make subdev driver to ensure all negotiated paramaters, which
> changed during streaming could crash the system, stay unchanged after streaming
> started. It's as simple as checking entity stream_count in s_ctrl() and
> prohibiting change of control value if stream_count > 0.

That's easy, but the values of these controls could still change between
pipeline validation and stream startup: the sensor driver always will be the
last one to start streaming.

> > Pad-specific controls likely require more work than this.
> 
> Hym, I'd forgotten, the fact framesamples are per pad was an argument against
> using v4l2 control for this parameter. We still need per pad controls for the
> blanking controls, but for framesamples maybe it's better to just add subdev
> callback, as acessing this parameter on subdevs directly from space isn't
> really essential..

Blanking controls are subdev-specific, not pad-specific. In practice the
pixel array subdevs will always have just one pad, but there is still the
principal difference. :-)

Pixel rate could be another per-pad control. That might have to be checked
before stream startup, just like framesamples. That's information which is
mostly needed in the kernel space, but the user space still would sometimes
like to take a look at it. Giving the user the responsibility to carry
the pixel rate through the whole pipeline, without a way to even modify it,
would be a little excessive.
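
On the sensor side that could look more or less like below -- just a sketch,
assuming a standard 'pixel rate' integer control id (say V4L2_CID_PIXEL_RATE)
gets defined; the limits, ops and state structure here are made up:

static int sensor_init_controls(struct sensor_state *state)
{
	v4l2_ctrl_handler_init(&state->handler, 1);

	state->pixel_rate =
		v4l2_ctrl_new_std(&state->handler, &sensor_ctrl_ops,
				  V4L2_CID_PIXEL_RATE, 1,
				  SENSOR_PIXEL_RATE_MAX, 1,
				  SENSOR_PIXEL_RATE_DEF);
	if (state->pixel_rate)
		state->pixel_rate->flags |= V4L2_CTRL_FLAG_READ_ONLY;

	state->sd.ctrl_handler = &state->handler;
	return state->handler.error;
}

The host could then look the control up internally on the subdev's handler
with v4l2_ctrl_find() and read it before validating the pipeline.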

> >>> Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
> >>> some time?
> >>
> >> I'm available any time this week. :)
> > 
> > I think the solution could be related to frame metadata if we intend to
> > specify the frame format. Btw. how does the framesamples relate to blanking?
> 
> Framesamples and blanking are on completely different levels. Framesamples
> takes into account only active frame data, so H/V blanking doesn't matter here.
> Framesamples is not intended for raw formats where blanking is applicable.
> 
> Framesamples only determines length of compressed stream, and blanking doesn't
> really affect the data passed to and generated by a jpeg encoder.

So in fact the blanking controls make no difference, but the hardware might
add some extra blanking, say, if it's not able to send the whole image over
the bus as one chunk?

> > The metadata in a regular frame spans a few lines in the top and sometimes
> > also on the bottom of that frame.
> 
> How do you handle it now, i.e. how the host finds out how much memory it needs
> for a frame ? Or is the metadata just overwriting "valid" lines ?

Well... we don't handle it. ;-) All that's being done is that it's
discarded.

Regards,

-- 
Sakari Ailus
e-mail: sakari.ailus@iki.fi	jabber/XMPP/Gmail: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2012-01-04 12:21                       ` Sakari Ailus
@ 2012-01-04 22:51                         ` Sylwester Nawrocki
  2012-01-06 15:44                           ` Sakari Ailus
  0 siblings, 1 reply; 25+ messages in thread
From: Sylwester Nawrocki @ 2012-01-04 22:51 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Laurent Pinchart, linux-media, g.liakhovetski, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Sakari,

On 01/04/2012 01:21 PM, Sakari Ailus wrote:
> On Sun, Jan 01, 2012 at 07:56:03PM +0100, Sylwester Nawrocki wrote:
>> On 12/31/2011 02:16 PM, Sakari Ailus wrote:
>>>>> Something else that should probably belong there is information on the frame
>>>>> format: contrary to what I've previously thought, the sensor metadata is
>>>>> often sent as part of the same CSI-2 channel. There also can be other types
>>>>> of data, such as dummy data and data for black level calibration. I wouldn't
>>>>> want to export all this to the user space --- it shouldn't probably need to
>>>>> care about it.
>>>>>
>>>>> The transmitter of the data (sensor) has this information and the CSI-2
>>>>> receiver needs it. Same for the framesamples, as far as I understand.
>>>>
>>>> We could try to design some standard data structure for frame metadata -
>>>> that's how I understood the meaning of struct v4l2_mbus_framedesc.
>>>> But I doubt such attempts will be sucessful. And how can we distinguish
>>>> which data is valid and applicable when there is lots of weird stuff in one
>>>> data structure ? Using media bus pixel code only ?
>>>
>>> I think the media bus pixel code which is exported to the user space should
>>> not be involved with the metadata.
>>
>> Then we need to find some method to distinguish streams with metadata on the
>> media bus, to be able to discard it before sending to user space.
>> I assume this is where struct v4l2_mbus_framedesc and related ops would help ?
> 
> I'd think so.
> 
>> Maybe we could create v4l2_mbus_framedesc with length (framesamples) member
>> in it and additionally 994 reserved bytes for future extensions ;-), e.g.
>>
>> struct v4l2_mbus_framedesc {
>> 	unsigned int length;
>> 	unsigned int rserved[994];
>> };
> 
> Do we need to export this to the user space? In the first phase I'd like to

No, that wasn't my intention. The reserved field was supposed to be a joke,
we of course don't need any reserved members in the kernel space.

> keep that static (i.e. only get op would be supported) and only visible in
> the kernel. That would leave much more room for changes later on, if needed.

I'd prefer it to be R/W, i.e. having both get and set ops available. Maybe
not for all fields though.

>> struct v4l2_subdev_pad_ops {
>> 	  ....
>> 	int get_framedesc(int pad, struct v4l2_framedesc *fdesc);
>> 	int set_framedesc(int pad, struct v4l2_framedesc fdesc);
>> };
>>
>> This would ensure same media bus format code regardless of frame meta data
>> presence.
>>
>> In case metadata is sent in same CSI channel, the required buffer length
>> might be greater than what would width/height and pixel code suggest.
> 
> Partly for this reason we have g_skip_top_lines() op in sensor ops. It
> instructs the receiver to discard the metadata, and possibly other data
> which isn't as interesting --- could be just dummy.

I see.

> Some CSI-2 receivers are able to write this to a different memory location;
> we could expose this as a different video node. I'm proposing a different
> video node since this is a separate queue: the format (in-memory pixel
> format and dimensions) is different, and it is beneficial to have access to
> this data as soon as possible. There is a caveat, though, if we also wish to
> support metadata which is appended to the frame, rather than prependeded.

I think it is a recurring topic in our discussions; I guess nobody really needs
it since it hasn't been implemented yet. ;)

Multi-planar buffers were also meant for handling metadata; only support for a
variable number of planes would need to be added. For instance the driver could
pass only the buffer with the metadata plane if required.
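
From the application point of view it could look more or less like this
(illustration only -- the fourcc, the resolution and the plane sizes are all
made up, fd is the already opened capture node; the point is just the extra
metadata plane):

	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
	fmt.fmt.pix_mp.width = 1920;
	fmt.fmt.pix_mp.height = 1080;
	fmt.fmt.pix_mp.pixelformat = v4l2_fourcc('Y', 'M', 'T', 'A'); /* made up */
	fmt.fmt.pix_mp.num_planes = 2;
	fmt.fmt.pix_mp.plane_fmt[0].sizeimage = 1920 * 1080 * 2; /* image data */
	fmt.fmt.pix_mp.plane_fmt[1].sizeimage = 2 * 1920 * 2;    /* 2 lines of metadata */

	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		perror("VIDIOC_S_FMT");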

>>> The metadata is something that the user is likely interested only in the
>>> form it is in the system memory. It won't be processed in any way before
>>> it gets written to memory. The chosen mbus code may affect the format of the
>>> metadata, but that's something the sensor driver knows  -- and I've yet to
>>> see a case where the user could choose the desired metadata format.
>>
>>> Alternatively we could make the metadata path a separate path from the image
>>> data. I wonder how feasible that approach would be --- the subdevs would
>>> still be the same.
>>
>> I was also considering metadata as sensor specific data structure retrieved
>> by the host after a frame has been captured and appending that data to a user
>> buffer. For such buffers a separate fourcc would be needed.
> 
> Why after?

Because there is no way to retrieve it before ? :)

> There are benefits in getting this to the user space without extra delays?

It doesn't matter that much because the image data is already post-processed.
And the case I was mentioning was about still capture, assisted in the sensor
(SoC).

>>>>> Pixelrate is also used to figure out whether a pipeline can do streaming or
>>>>> not; the pixel rate originating from the sensor could be higher than the
>>>>> maximum of the ISP. For this reason, as well as for providing timing
>>>>> information, access to pixelrate is reequired in the user space.
>>>>>
>>>>> Configuring the framesamples could be done on the sensor using a control if
>>>>> necessary.
>>>>
>>>> Sure, that could work. But as I mentioned before, the host drivers would have
>>>> to be getting such control internally from subdevs. Not so nice IMHO. Although
>>>> I'm not in big opposition to that too.
>>>>
>>>> Grepping for v4l2_ctrl_g_ctrl() all the drivers appear to use it locally only.
>>>
>>> I don't think there's anything that really would prohibit doing this. There
>>> would need to be a way for the host to make a control read-only, to prevent
>>> changing framesamples while streaming.
>>
>> I would rather make subdev driver to ensure all negotiated paramaters, which
>> changed during streaming could crash the system, stay unchanged after streaming
>> started. It's as simple as checking entity stream_count in s_ctrl() and
>> prohibiting change of control value if stream_count > 0.
> 
> That's easy, but the values of these controls could still change between
> pipeline validation and stream startup: the sensor driver always will be the
> last one to start streaming.

Are you sure ? The host first calls media_pipeline_start(), which increments
stream_count on all subdevs, and only after that the host performs pipeline
validation. At streamoff media_pipeline_stop() is called and the controls
may be changed again. It only requires special treatment of stream_count
at the subdevs.
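
I.e. roughly the following order in the host driver (simplified sketch, the
host_* helpers are made up; media_entity_pipeline_start/stop() are the media
core helpers mentioned above):

static int host_streamon(struct host_video *vid)
{
	int ret;

	/* From this point on the subdevs see stream_count > 0 and can
	 * refuse control changes. */
	media_entity_pipeline_start(&vid->vdev.entity, &vid->pipe);

	ret = host_validate_pipeline(vid);	/* formats, framesamples, ... */
	if (ret < 0)
		goto err;

	ret = host_set_stream(vid, 1);		/* the sensor is started last */
	if (ret < 0)
		goto err;

	return 0;
err:
	media_entity_pipeline_stop(&vid->vdev.entity);
	return ret;
}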

Btw, for setting controls busy the V4L2_CTRL_FLAG_GRABBED flag can be used.
However, IMHO it shouldn't be the host's business to mess with its subdevs'
control properties. If controls aren't inherited by the host and they belong
to a subdev it's probably better to leave the low level control operations
to the subdev driver only.

>>> Pad-specific controls likely require more work than this.
>>
>> Hym, I'd forgotten, the fact framesamples are per pad was an argument against
>> using v4l2 control for this parameter. We still need per pad controls for the
>> blanking controls, but for framesamples maybe it's better to just add subdev
>> callback, as acessing this parameter on subdevs directly from space isn't
>> really essential..
> 
> Blanking controls are subdev-specific, not pad-specific. In practice the
> pixel array subdevs will always have just one pad, but there is still the
> principal difference. :-)

Yeah, that makes sense. :-) Even in the case when we have two output pads from
a MIPI-CSI transmitter, each for a separate channel, with per-pad media bus
formats, per-pad blanking wouldn't really make sense.

> Pixel rate could be another per-pad control. That might have to be checked
> before stream startup, just like framesamples. That's information which is
> mostly needed in the kernel space, but the user space still would sometimes
> like to take a look at it. Giving the responsibility to the user to carry
> the pixel rate through the whole pipeline without a way to even modify would
> be is a little excessive.

But it's supposed to be read-only ? What would be a reason to propagate it then ?

>>>>> Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
>>>>> some time?
>>>>
>>>> I'm available any time this week. :)
>>>
>>> I think the solution could be related to frame metadata if we intend to
>>> specify the frame format. Btw. how does the framesamples relate to blanking?
>>
>> Framesamples and blanking are on completely different levels. Framesamples
>> takes into account only active frame data, so H/V blanking doesn't matter here.
>> Framesamples is not intended for raw formats where blanking is applicable.
>>
>> Framesamples only determines length of compressed stream, and blanking doesn't
>> really affect the data passed to and generated by a jpeg encoder.
> 
> So in fact the blanking controls make no difference, but the hardware might

Yeah.

> add some extra blanking, say, if it's not able to send the whole image over
> the bus as one chunk?

The concept of blanking really doesn't make sense on the media bus when sending
a compressed stream. The transmission of one frame could be performed in any
number of bursts of any length.

>>> The metadata in a regular frame spans a few lines in the top and sometimes
>>> also on the bottom of that frame.
>>
>> How do you handle it now, i.e. how the host finds out how much memory it needs
>> for a frame ? Or is the metadata just overwriting "valid" lines ?
> 
> Well... we don't handle it. ;-) All that's being done is that it's
> discarded.

OK, that's the simpler way then:)

--

Regards,
Sylwester

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2011-12-21  0:20             ` Laurent Pinchart
  2011-12-22 11:35               ` Sylwester Nawrocki
  2011-12-26 12:53               ` Sakari Ailus
@ 2012-01-06 14:04               ` Sylwester Nawrocki
  2 siblings, 0 replies; 25+ messages in thread
From: Sylwester Nawrocki @ 2012-01-06 14:04 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sylwester Nawrocki, linux-media, g.liakhovetski, sakari.ailus,
	m.szyprowski, riverful.kim, Kyungmin Park

Hi Laurent,

On 12/21/2011 01:20 AM, Laurent Pinchart wrote:
> On Wednesday 14 December 2011 13:23:07 Sylwester Nawrocki wrote:
>> The purpose of the new field is to allow the video pipeline elements
>> to negotiate memory buffer size for compressed data frames, where
>> the buffer size cannot be derived from pixel width and height and
>> the pixel code.
>>
>> For VIDIOC_SUBDEV_S_FMT and VIDIOC_SUBDEV_G_FMT ioctls, the
>> framesamples parameter should be calculated by the driver from pixel
>> width, height, color format and other parameters if required and
>> returned to the caller. This applies to compressed data formats only.
>>
>> The application should propagate the framesamples value, whatever
>> returned at the first sub-device within a data pipeline, i.e. at the
>> pipeline's data source.
>>
>> For compressed data formats the host drivers should internally
>> validate the framesamples parameter values before streaming is
>> enabled, to make sure the memory buffer size requirements are
>> satisfied along the pipeline.
>>
>> Signed-off-by: Sylwester Nawrocki<s.nawrocki@samsung.com>
>> Signed-off-by: Kyungmin Park<kyungmin.park@samsung.com>
>> --
>> There is no changes in this patch comparing to v3.
>> ---
>>   Documentation/DocBook/media/v4l/dev-subdev.xml     |   10 ++++++++--
>>   Documentation/DocBook/media/v4l/subdev-formats.xml |    9 ++++++++-
>>   include/linux/v4l2-mediabus.h                      |    4 +++-
>>   3 files changed, 19 insertions(+), 4 deletions(-)
>>
>> diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml
>> b/Documentation/DocBook/media/v4l/dev-subdev.xml index 0916a73..b9d24eb
>> 100644
>> --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
>> +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
>
>> @@ -160,7 +160,13 @@
>>         guaranteed to be supported by the device. In particular, drivers
>> guarantee that a returned format will not be further changed if passed to
>> an&VIDIOC-SUBDEV-S-FMT; call as-is (as long as external parameters, such
>> as
>> -      formats on other pads or links' configuration are not changed).
>> </para>
>> +      formats on other pads or links' configuration are not changed). When
>> +      a device contains a data encoder, the<structfield>
>> +<link linkend="v4l2-mbus-framefmt-framesamples">framesamples</link>
>> +</structfield>  field value may be further changed, if parameters of
>> the
>> +      encoding process are changed after the format has been negotiated. In
>> +      such situation applications should use&VIDIOC-SUBDEV-G-FMT; ioctl to
>> +      query an updated format.</para>
>
> Sorry for answering so late. I've been thinking about this topic (as well as
> the proposed new pixelclock field) quite a lot, and one question strikes me
> here (please don't hate me): does userspace need to care about the
> framesamples field ? It looks like the value is only used inside the kernel,
> and we're relying on on userspace to propagate those values between subdevs.

How about a requirement for applications to configure the frame length only
on the sensor (data source) subdev ? The sensor subdev would adjust it if it
weren't consistent with the other parameters in struct v4l2_mbus_framefmt.
And how about having it "undefined" for non-compressed formats rather than
requiring subdevs to set it to 0 ?
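
From the application side that would boil down to something like the following
(sketch only -- the pad number and the resolution are made up, sensor_fd is the
opened sensor subdev node):

	struct v4l2_subdev_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.pad = SENSOR_SOURCE_PAD;		/* made up */
	fmt.format.width = 1920;
	fmt.format.height = 1080;
	fmt.format.code = V4L2_MBUS_FMT_JPEG_1X8;
	fmt.format.framesamples = 0;	/* let the sensor driver compute/adjust it */

	if (ioctl(sensor_fd, VIDIOC_SUBDEV_S_FMT, &fmt) == 0)
		printf("framesamples adjusted to %u\n", fmt.format.framesamples);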

A standard function in the media core could be implemented, if ever needed,
to set framesamples on any remaining subdevs in the pipeline. 

Also the name "framesamples" is a bit odd, just "length" sounds better to me.

> If that's the case, wouldn't it be better to have an in-kernel API to handle
> this ? I'm a bit concerned about forcing userspace to handle internal
> information to userspace if there's no reason to do so.
>
> What's the rationale between your solution, is there a need for the
> framesamples information in userspace ?

Yes, it would be useful. And the control API doesn't seem relevant for it.
The maximum frame length is really a property of the data frame on the media
bus, which struct v4l2_mbus_framefmt describes.
Some sensors allow fine-grained configuration of their embedded JPEG
encoders, and having the frame length configurable directly on subdevs would
be useful.

--
Regards,
Sylwester

>>         <para>Drivers automatically propagate formats inside sub-devices.
>> When a try or active format is set on a pad, corresponding formats on
>> other pads diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml
>> b/Documentation/DocBook/media/v4l/subdev-formats.xml index
>> 49c532e..7c202a1 100644
>> --- a/Documentation/DocBook/media/v4l/subdev-formats.xml
>> +++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
>> @@ -33,9 +33,16 @@
>>   	<entry>Image colorspace, from&v4l2-colorspace;. See
>>   	<xref linkend="colorspaces" />  for details.</entry>
>>   	</row>
>> +	<row id="v4l2-mbus-framefmt-framesamples">
>> +	<entry>__u32</entry>
>> +	<entry><structfield>framesamples</structfield></entry>
>> +	<entry>Maximum number of bus samples per frame for compressed data
>> +	    formats. For uncompressed formats drivers and applications must
>> +	    set this parameter to zero.</entry>
>> +	</row>
>>   	<row>
>>   	<entry>__u32</entry>
>> -	<entry><structfield>reserved</structfield>[7]</entry>
>> +	<entry><structfield>reserved</structfield>[6]</entry>
>>   	<entry>Reserved for future extensions. Applications and drivers must
>>   	  set the array to zero.</entry>
>>   	</row>
>> diff --git a/include/linux/v4l2-mediabus.h b/include/linux/v4l2-mediabus.h
>> index 5ea7f75..f18d6cd 100644
>> --- a/include/linux/v4l2-mediabus.h
>> +++ b/include/linux/v4l2-mediabus.h
>> @@ -101,6 +101,7 @@ enum v4l2_mbus_pixelcode {
>>    * @code:	data format code (from enum v4l2_mbus_pixelcode)
>>    * @field:	used interlacing type (from enum v4l2_field)
>>    * @colorspace:	colorspace of the data (from enum v4l2_colorspace)
>> + * @framesamples: maximum number of bus samples per frame
>>    */
>>   struct v4l2_mbus_framefmt {
>>   	__u32			width;
>> @@ -108,7 +109,8 @@ struct v4l2_mbus_framefmt {
>>   	__u32			code;
>>   	__u32			field;
>>   	__u32			colorspace;
>> -	__u32			reserved[7];
>> +	__u32			framesamples;
>> +	__u32			reserved[6];
>>   };
>>
>>   #endif


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2012-01-04 22:51                         ` Sylwester Nawrocki
@ 2012-01-06 15:44                           ` Sakari Ailus
  0 siblings, 0 replies; 25+ messages in thread
From: Sakari Ailus @ 2012-01-06 15:44 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Laurent Pinchart, linux-media, g.liakhovetski, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Sylwester,

On Wed, Jan 04, 2012 at 11:51:00PM +0100, Sylwester Nawrocki wrote:
> On 01/04/2012 01:21 PM, Sakari Ailus wrote:
> > On Sun, Jan 01, 2012 at 07:56:03PM +0100, Sylwester Nawrocki wrote:
> >> On 12/31/2011 02:16 PM, Sakari Ailus wrote:
> >>>>> Something else that should probably belong there is information on the frame
> >>>>> format: contrary to what I've previously thought, the sensor metadata is
> >>>>> often sent as part of the same CSI-2 channel. There also can be other types
> >>>>> of data, such as dummy data and data for black level calibration. I wouldn't
> >>>>> want to export all this to the user space --- it shouldn't probably need to
> >>>>> care about it.
> >>>>>
> >>>>> The transmitter of the data (sensor) has this information and the CSI-2
> >>>>> receiver needs it. Same for the framesamples, as far as I understand.
> >>>>
> >>>> We could try to design some standard data structure for frame metadata -
> >>>> that's how I understood the meaning of struct v4l2_mbus_framedesc.
> >>>> But I doubt such attempts will be sucessful. And how can we distinguish
> >>>> which data is valid and applicable when there is lots of weird stuff in one
> >>>> data structure ? Using media bus pixel code only ?
> >>>
> >>> I think the media bus pixel code which is exported to the user space should
> >>> not be involved with the metadata.
> >>
> >> Then we need to find some method to distinguish streams with metadata on the
> >> media bus, to be able to discard it before sending to user space.
> >> I assume this is where struct v4l2_mbus_framedesc and related ops would help ?
> > 
> > I'd think so.
> > 
> >> Maybe we could create v4l2_mbus_framedesc with length (framesamples) member
> >> in it and additionally 994 reserved bytes for future extensions ;-), e.g.
> >>
> >> struct v4l2_mbus_framedesc {
> >> 	unsigned int length;
> >> 	unsigned int rserved[994];
> >> };
> > 
> > Do we need to export this to the user space? In the first phase I'd like to
> 
> No, that wasn't my intention. The reserved field was supposed to be a joke,
> we of course don't need any reserved members in the kernel space.

Oh, I have to admit I missed the joke completely. ;)

> > keep that static (i.e. only get op would be supported) and only visible in
> > the kernel. That would leave much more room for changes later on, if needed.
> 
> I'd prefer it to be R/W, i.e. having both get and set ops available. Maybe
> not for all fields though.

I guess we should gather all the requirements and write an RFC based on the
findings. I know at least about three different scenarios:

- metadata as part of the image, receiver writes it to the same buffer,
- metadata as part of the image, receiver writes it to a different buffer and
- metadata arriving through a separate CSI-2 or CCP2 channel

All may include metadata at the bottom and/or top of the image.
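
Purely as an illustration of what such a descriptor would need to be able to
express (none of these types exist, the names are made up):

enum metadata_path {
	METADATA_IN_IMAGE_SAME_BUFFER,		/* written together with the image */
	METADATA_IN_IMAGE_SEPARATE_BUFFER,	/* receiver splits it out */
	METADATA_SEPARATE_CHANNEL,		/* own CSI-2 / CCP2 channel */
};

struct metadata_desc {
	enum metadata_path path;
	unsigned int lines_top;		/* metadata lines before the image */
	unsigned int lines_bottom;	/* metadata lines after the image */
};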

> >> struct v4l2_subdev_pad_ops {
> >> 	  ....
> >> 	int get_framedesc(int pad, struct v4l2_framedesc *fdesc);
> >> 	int set_framedesc(int pad, struct v4l2_framedesc fdesc);
> >> };
> >>
> >> This would ensure same media bus format code regardless of frame meta data
> >> presence.
> >>
> >> In case metadata is sent in same CSI channel, the required buffer length
> >> might be greater than what would width/height and pixel code suggest.
> > 
> > Partly for this reason we have g_skip_top_lines() op in sensor ops. It
> > instructs the receiver to discard the metadata, and possibly other data
> > which isn't as interesting --- could be just dummy.
> 
> I see.
> 
> > Some CSI-2 receivers are able to write this to a different memory location;
> > we could expose this as a different video node. I'm proposing a different
> > video node since this is a separate queue: the format (in-memory pixel
> > format and dimensions) is different, and it is beneficial to have access to
> > this data as soon as possible. There is a caveat, though, if we also wish to
> > support metadata which is appended to the frame, rather than prependeded.
> 
> I think it is recurring topic in our discussions, I guess nobody really needs
> it since it haven't been implemented yet. ;)

It is needed, especially if you have camera control algorithms in the user
space. You can get around it in some cases, but the end result is neither
pretty nor reliable.

> Multi-planar buffers were meant also for handling meta data, only variable
> number of planes support would need to be added. For instance the driver could
> pass only the buffer with meta data plane if required.

Multi-planar buffers are definitely a part of the solution when the metadata
is part of the same memory buffer.

What I think would be needed is separation of the pixel formats of different
planes: we do not want to create a new pixel format out of every possible
combination of metadata and pixel format.

> >>> The metadata is something that the user is likely interested only in the
> >>> form it is in the system memory. It won't be processed in any way before
> >>> it gets written to memory. The chosen mbus code may affect the format of the
> >>> metadata, but that's something the sensor driver knows  -- and I've yet to
> >>> see a case where the user could choose the desired metadata format.
> >>
> >>> Alternatively we could make the metadata path a separate path from the image
> >>> data. I wonder how feasible that approach would be --- the subdevs would
> >>> still be the same.
> >>
> >> I was also considering metadata as sensor specific data structure retrieved
> >> by the host after a frame has been captured and appending that data to a user
> >> buffer. For such buffers a separate fourcc would be needed.
> > 
> > Why after?
> 
> Because there is no way to retrieve it before ? :)
> 
> > There are benefits in getting this to the user space without extra delays?
> 
> It doesn't matter that much because image data is already post-processed.
> And the case I was mentioning was about still capture, asisted in the sensor
> (SoC).

Your use case is different and it might not make a difference in that one.
However, I'm thinking of camera control algorithms; they need to know how
the frame is exposed so they can program new settings for the sensor. To
avoid introducing unnecessary delays in the process, this needs to be done
as soon as possible.

> >>>>> Pixelrate is also used to figure out whether a pipeline can do streaming or
> >>>>> not; the pixel rate originating from the sensor could be higher than the
> >>>>> maximum of the ISP. For this reason, as well as for providing timing
> >>>>> information, access to pixelrate is reequired in the user space.
> >>>>>
> >>>>> Configuring the framesamples could be done on the sensor using a control if
> >>>>> necessary.
> >>>>
> >>>> Sure, that could work. But as I mentioned before, the host drivers would have
> >>>> to be getting such control internally from subdevs. Not so nice IMHO. Although
> >>>> I'm not in big opposition to that too.
> >>>>
> >>>> Grepping for v4l2_ctrl_g_ctrl() all the drivers appear to use it locally only.
> >>>
> >>> I don't think there's anything that really would prohibit doing this. There
> >>> would need to be a way for the host to make a control read-only, to prevent
> >>> changing framesamples while streaming.
> >>
> >> I would rather make subdev driver to ensure all negotiated paramaters, which
> >> changed during streaming could crash the system, stay unchanged after streaming
> >> started. It's as simple as checking entity stream_count in s_ctrl() and
> >> prohibiting change of control value if stream_count > 0.
> > 
> > That's easy, but the values of these controls could still change between
> > pipeline validation and stream startup: the sensor driver always will be the
> > last one to start streaming.
> 
> Are you sure ? The host first calls media_pipeline_start(), this increments
> stream_count on all subdevs, and only after that the host performs pipeline
> validation. At streamoff media_pipeline_stop() is called and the controls
> may be changed again. It only requires special treatment of stream_count
> at the subdevs.

Good point. The driver itself could make these controls return EBUSY.

> Btw, for setting controls busy the V4L2_CTRL_FLAG_GRABBED flag can be used.
> However, IMHO it shouldn't be the host's business to mess with its subdevs'
> control properties. If controls aren't inherited by the host and they belong
> to a subdev it's probably better to leave the low level control operations
> to the subdev driver only.

In some cases I can see no other way --- Scott Jiang had such a
case: the parallel receiver cannot tolerate changes to blanking while
streaming. All other such receivers can, so it definitely mustn't be the
sensor's decision to deny such changes.

> >>> Pad-specific controls likely require more work than this.
> >>
> >> Hym, I'd forgotten, the fact framesamples are per pad was an argument against
> >> using v4l2 control for this parameter. We still need per pad controls for the
> >> blanking controls, but for framesamples maybe it's better to just add subdev
> >> callback, as acessing this parameter on subdevs directly from space isn't
> >> really essential..
> > 
> > Blanking controls are subdev-specific, not pad-specific. In practice the
> > pixel array subdevs will always have just one pad, but there is still the
> > principal difference. :-)
> 
> Yeah, that makes sense. :-) Even in case when we have two output pads from
> a MIPI-CSI  transmitter, each for a separate channel, and per pad media bus
> formats the per pad blanking wouldn't rather make sense.

Should we have more pads in that case? There's still just a single bus...

> > Pixel rate could be another per-pad control. That might have to be checked
> > before stream startup, just like framesamples. That's information which is
> > mostly needed in the kernel space, but the user space still would sometimes
> > like to take a look at it. Giving the responsibility to the user to carry
> > the pixel rate through the whole pipeline without a way to even modify would
> > be is a little excessive.
> 
> But it's supposed to be read-only ? What would be a reason to propagate it then ?

It is read-only but it still has to be propagated. It is used to configure
subdevs along the pipeline, as well as to check the pixel rate isn't too
high for some of them.

If it is not propagated by the user, the subdevs that need the information
will never get it since no propagation will be done by the kernel.

> >>>>> Just my 5 euro cents. Perhaps we could discuss the topic on #v4l-meeting
> >>>>> some time?
> >>>>
> >>>> I'm available any time this week. :)
> >>>
> >>> I think the solution could be related to frame metadata if we intend to
> >>> specify the frame format. Btw. how does the framesamples relate to blanking?
> >>
> >> Framesamples and blanking are on completely different levels. Framesamples
> >> takes into account only active frame data, so H/V blanking doesn't matter here.
> >> Framesamples is not intended for raw formats where blanking is applicable.
> >>
> >> Framesamples only determines length of compressed stream, and blanking doesn't
> >> really affect the data passed to and generated by a jpeg encoder.
> > 
> > So in fact the blanking controls make no difference, but the hardware might
> 
> Yeah.
> 
> > add some extra blanking, say, if it's not able to send the whole image over
> > the bus as one chunk?
> 
> Concept of blanking really doesn't make sense on media bus when sending compressed
> stream. The transmission of one frame could be performed in any number of bursts
> of any length.
> 
> >>> The metadata in a regular frame spans a few lines in the top and sometimes
> >>> also on the bottom of that frame.
> >>
> >> How do you handle it now, i.e. how the host finds out how much memory it needs
> >> for a frame ? Or is the metadata just overwriting "valid" lines ?
> > 
> > Well... we don't handle it. ;-) All that's being done is that it's
> > discarded.
> 
> OK, that's the simpler way then:)

From the drivers' point of view, yes. However, this causes issues elsewhere,
including extra delays and needing to know detailed hardware-specific timing
information in user space.

Kind regards,

-- 
Sakari Ailus
e-mail: sakari.ailus@iki.fi	jabber/XMPP/Gmail: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt
  2012-01-01 18:56                     ` Sylwester Nawrocki
  2012-01-04 12:21                       ` Sakari Ailus
@ 2012-01-11 13:20                       ` Laurent Pinchart
  1 sibling, 0 replies; 25+ messages in thread
From: Laurent Pinchart @ 2012-01-11 13:20 UTC (permalink / raw)
  To: Sylwester Nawrocki
  Cc: Sakari Ailus, linux-media, g.liakhovetski, m.szyprowski,
	riverful.kim, Kyungmin Park

Hi Sylwester and Sakari,

On Sunday 01 January 2012 19:56:03 Sylwester Nawrocki wrote:
> On 12/31/2011 02:16 PM, Sakari Ailus wrote:
> >>> I could think of an in-kernel counterpart for v4l2_mbus_framefmt, say,
> >>> v4l2_mbus_framedesc. This could then be passed from subdev to another
> >>> using a new subdev op.
> >> 
> >> That might be needed eventually. But I'm not a great fan in general of
> >> yet another set of callbacks for media bus frame format set up.
> >> 
> >>> Something else that should probably belong there is information on the
> >>> frame format: contrary to what I've previously thought, the sensor
> >>> metadata is often sent as part of the same CSI-2 channel. There also
> >>> can be other types of data, such as dummy data and data for black
> >>> level calibration. I wouldn't want to export all this to the user
> >>> space --- it shouldn't probably need to care about it.
> >>> 
> >>> The transmitter of the data (sensor) has this information and the CSI-2
> >>> receiver needs it. Same for the framesamples, as far as I understand.
> >> 
> >> We could try to design some standard data structure for frame metadata -
> >> that's how I understood the meaning of struct v4l2_mbus_framedesc.
> >> But I doubt such attempts will be sucessful. And how can we distinguish
> >> which data is valid and applicable when there is lots of weird stuff in
> >> one data structure ? Using media bus pixel code only ?
> > 
> > I think the media bus pixel code which is exported to the user space
> > should not be involved with the metadata.
> 
> Then we need to find some method to distinguish streams with metadata on
> the media bus, to be able to discard it before sending to user space.
> I assume this is where struct v4l2_mbus_framedesc and related ops would
> help ?
> 
> Maybe we could create v4l2_mbus_framedesc with length (framesamples) member
> in it and additionally 994 reserved bytes for future extensions ;-), e.g.
> 
> struct v4l2_mbus_framedesc {
> 	unsigned int length;
> 	unsigned int rserved[994];
> };
> 
> struct v4l2_subdev_pad_ops {
> 	  ....
> 	int get_framedesc(int pad, struct v4l2_framedesc *fdesc);
> 	int set_framedesc(int pad, struct v4l2_framedesc fdesc);
> };
> 
> This would ensure same media bus format code regardless of frame meta data
> presence.
> 
> In case metadata is sent in same CSI channel, the required buffer length
> might be greater than what would width/height and pixel code suggest.
> 
> > The metadata is something that the user is likely interested only in the
> > form it is in the system memory. It won't be processed in any way before
> > it gets written to memory. The chosen mbus code may affect the format of
> > the metadata, but that's something the sensor driver knows  -- and I've
> > yet to see a case where the user could choose the desired metadata
> > format.
> > 
> > Alternatively we could make the metadata path a separate path from the
> > image data. I wonder how feasible that approach would be --- the subdevs
> > would still be the same.
> 
> I was also considering metadata as sensor specific data structure retrieved
> by the host after a frame has been captured and appending that data to a
> user buffer. For such buffers a separate fourcc would be needed.
> 
> >>> Pixelrate is also used to figure out whether a pipeline can do
> >>> streaming or not; the pixel rate originating from the sensor could be
> >>> higher than the maximum of the ISP. For this reason, as well as for
> >>> providing timing information, access to pixelrate is reequired in the
> >>> user space.
> >>> 
> >>> Configuring the framesamples could be done on the sensor using a
> >>> control if necessary.
> >> 
> >> Sure, that could work. But as I mentioned before, the host drivers would
> >> have to be getting such control internally from subdevs. Not so nice
> >> IMHO. Although I'm not in big opposition to that too.
> >> 
> >> Grepping for v4l2_ctrl_g_ctrl() all the drivers appear to use it locally
> >> only.
> > 
> > I don't think there's anything that really would prohibit doing this.
> > There would need to be a way for the host to make a control read-only,
> > to prevent changing framesamples while streaming.
> 
> I would rather make subdev driver to ensure all negotiated paramaters,
> which changed during streaming could crash the system, stay unchanged
> after streaming started. It's as simple as checking entity stream_count in
> s_ctrl() and prohibiting change of control value if stream_count > 0.
> 
> > Pad-specific controls likely require more work than this.
> 
> Hym, I'd forgotten, the fact framesamples are per pad was an argument
> against using v4l2 control for this parameter. We still need per pad
> controls for the blanking controls, but for framesamples maybe it's better
> to just add subdev callback, as acessing this parameter on subdevs
> directly from space isn't really essential..

All this has been going around in my mind for some time now, and I'm starting
to see the situation from a slightly different point of view. I'll try to
explain that first, and then we'll see what kind of APIs we need.

With the media devices being split into subdevs, the V4L/MC APIs have evolved 
quite a lot lately. We have subdev-level controls and pad-level formats, and 
will soon get pad-level selections. If you take a step back, we essentially 
have 3 logical elements (entities, pads and links) and properties associated 
with those elements. Those properties are currently classified into

- subdev properties: controls and old-style (pad-unaware) formats and crop 
rectangles

- pad properties: pad-level formats and crop rectangles, and soon pad-level 
selections

- link properties: just a couple of flags for now

Recent discussions showed that we will need pad-level controls, and that some 
controls (such as setting the AF area) would be better handled by the 
selection API than the control API.

Controls are usually exposed to userspace, but we're seeing an increasing need 
to have in-kernel controls that should not be exposed to applications.

If I had to redesign all this right now, I would create an in-kernel 
properties API with a way to expose selected properties to userspace. This 
would also solve the control/format relationship and atomicity issues.

Is this something we should consider for the in-kernel APIs ? I'm of course 
not advocating dropping our current interfaces, or even deprecating them, but 
maybe this should be seen as a target to keep in mind for all APIs we add or 
modify.

Or maybe I should just spend my nights sleeping instead of thinking :-)

> >>> Just my 5 euro cents. Perhaps we could discuss the topic on
> >>> #v4l-meeting some time?
> >> 
> >> I'm available any time this week. :)
> > 
> > I think the solution could be related to frame metadata if we intend to
> > specify the frame format. Btw. how does the framesamples relate to
> > blanking?
> 
> Framesamples and blanking are on completely different levels. Framesamples
> takes into account only active frame data, so H/V blanking doesn't matter
> here. Framesamples is not intended for raw formats where blanking is
> applicable.
> 
> Framesamples only determines length of compressed stream, and blanking
> doesn't really affect the data passed to and generated by a jpeg encoder.
> 
> > The metadata in a regular frame spans a few lines in the top and
> > sometimes also on the bottom of that frame.
> 
> How do you handle it now, i.e. how the host finds out how much memory it
> needs for a frame ? Or is the metadata just overwriting "valid" lines ?
> 
> > Late next week is fine for me.
> 
> Ok, I'll try to reserve some time then.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 25+ messages in thread

Thread overview: 25+ messages
2011-12-01 10:20 [RFC v2] v4l2: Compressed data formats on the video bus Sylwester Nawrocki
2011-12-01 10:20 ` [PATCH/RFC v2 1/4] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
2011-12-01 10:20 ` [PATCH/RFC v2 2/4] m5mols: Add support for buffer size configuration for compressed formats Sylwester Nawrocki
2011-12-01 10:20 ` [PATCH/RFC v2 3/4] s5p-fimc: Add support for media bus framesamples parameter Sylwester Nawrocki
2011-12-01 10:20 ` [PATCH/RFC v2 4/4] v4l: Update subdev drivers to handle " Sylwester Nawrocki
2011-12-06 16:12   ` Laurent Pinchart
2011-12-06 17:36     ` Sylwester Nawrocki
2011-12-09 17:59     ` [PATCH/RFC v3 " Sylwester Nawrocki
2011-12-12  0:31       ` Laurent Pinchart
2011-12-12 14:39         ` Sylwester Nawrocki
2011-12-14 12:23         ` [PATCH/RFC v4 0/2] v4l2: Extend media bus format with framesamples field Sylwester Nawrocki
2011-12-14 12:23           ` [PATCHv4 1/2] v4l: Add new framesamples field to struct v4l2_mbus_framefmt Sylwester Nawrocki
2011-12-21  0:20             ` Laurent Pinchart
2011-12-22 11:35               ` Sylwester Nawrocki
2011-12-26 12:53               ` Sakari Ailus
2011-12-28 17:09                 ` Sylwester Nawrocki
2011-12-31 13:16                   ` Sakari Ailus
2012-01-01 18:56                     ` Sylwester Nawrocki
2012-01-04 12:21                       ` Sakari Ailus
2012-01-04 22:51                         ` Sylwester Nawrocki
2012-01-06 15:44                           ` Sakari Ailus
2012-01-11 13:20                       ` Laurent Pinchart
2012-01-06 14:04               ` Sylwester Nawrocki
2011-12-14 12:23           ` [PATCHv4 2/2] v4l: Update subdev drivers to handle framesamples parameter Sylwester Nawrocki
2011-12-15 10:14             ` [PATCHv5] " Sylwester Nawrocki
