linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/36] i.MX Media Driver
@ 2017-02-16  2:19 Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver Steve Longerbeam
                   ` (37 more replies)
  0 siblings, 38 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

In version 4:

Changes suggested by Philipp Zabel <p.zabel@pengutronix.de> and
Jean-Michel Hautbois <jean-michel.hautbois@veo-labs.com>:

- split out VDIC from imx-ic-prpvf into a distinct VDIC subdev.

Changes suggested by Philipp Zabel <p.zabel@pengutronix.de>:

- Reorganized the pre-processing entities. Created an ipuX_ic_prp entity
  that receives on a single sink pad from the CSIs and the VDIC.
  Two source pads route to ipuX_ic_prpenc and ipuX_ic_prpvf. The
  code for ipuX_ic_prpenc and ipuX_ic_prpvf is now identical, which
  adds rotation support to ipuX_ic_prpvf.

- renamed media node in DT to capture-subsystem, compatible string to
  "fsl,imx-capture-subsystem".

- the ov564x subdevs get the xclk rate from clk_get_rate() instead of
  attempting to change the rate. The "xclk" property in the ov564x DT
  nodes is removed.

- changed "pix" clock to IMX6QDL_CLK_EIM_PODF in mipi_csi node.

- added compatible string "snps,dw-mipi-csi2" to mipi_csi node in DT.

- silenced many of the v4l2_info() messages.

- move conversion of ALTERNATE field type to SEQ_BT/TB to output pad
  of ipuX_csiY entity.

- added bounds checks to set_fmt in ipuX_csiY and ipuX_vdic entities.

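  A minimal sketch of that kind of bounds check (the limits and names
  below are placeholders, not the driver's actual values):

	#include <linux/kernel.h>
	#include <media/v4l2-mediabus.h>

	#define SKETCH_MIN_W	32
	#define SKETCH_MIN_H	32
	#define SKETCH_MAX_W	4096
	#define SKETCH_MAX_H	4096

	/* Clamp a requested frame size to the supported range in set_fmt. */
	static void sketch_clamp_fmt(struct v4l2_mbus_framefmt *fmt)
	{
		fmt->width = clamp_t(u32, fmt->width,
				     SKETCH_MIN_W, SKETCH_MAX_W);
		fmt->height = clamp_t(u32, fmt->height,
				      SKETCH_MIN_H, SKETCH_MAX_H);
	}
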
- Get rid of the SMFC entity. CSI frame output via the SMFC and IDMAC
  channel is now built into the CSI entities via a new source pad. So the
  CSI entities now have two source pads: direct and IDMAC.

- the IPU internal pads (direct between subunits, not via IDMAC channels)
  now accept only the pixel formats used internally by the IPU:
  MEDIA_BUS_FMT_AYUV8_1X32 and MEDIA_BUS_FMT_ARGB8888_1X32.

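  As a rough illustration of that restriction (the helper is made up; the
  real checks live in the imx-media subdev set_fmt/enum_mbus_code code):

	#include <linux/kernel.h>
	#include <linux/media-bus-format.h>

	/* Only the IPU-internal 32-bit codes are valid on direct pads. */
	static const u32 sketch_ipu_internal_codes[] = {
		MEDIA_BUS_FMT_AYUV8_1X32,
		MEDIA_BUS_FMT_ARGB8888_1X32,
	};

	static bool sketch_code_is_ipu_internal(u32 code)
	{
		unsigned int i;

		for (i = 0; i < ARRAY_SIZE(sketch_ipu_internal_codes); i++)
			if (code == sketch_ipu_internal_codes[i])
				return true;
		return false;
	}
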
- export V4L2_EVENT_IMX_EOF_TIMEOUT as V4L2_EVENT_FRAME_TIMEOUT for
  general use.

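  From userspace, a capture application could then watch for that event
  roughly as in the hypothetical sketch below (V4L2_EVENT_FRAME_TIMEOUT is
  the name this series exports; the rest is the standard V4L2 event API):

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	/* Subscribe to frame timeout events on an open video device fd;
	 * events are then dequeued with VIDIOC_DQEVENT as usual. */
	static int sketch_subscribe_timeout(int video_fd)
	{
		struct v4l2_event_subscription sub;

		memset(&sub, 0, sizeof(sub));
		sub.type = V4L2_EVENT_FRAME_TIMEOUT;

		return ioctl(video_fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
	}
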
- export imx_media_inherit_controls() as v4l2_pipeline_inherit_controls()
  for general use.

- completely removed dma_buf ring support. There are no capture interface
  or ic-pp subdevs any longer. The CSI and ic-prp enc/vf subdevs now attach
  directly to a capture device node from their IDMAC (non-direct) source pads.


Changes suggested by Javier Martinez Canillas <javier@dowhile0.org>:

- add missing MODULE_DEVICE_TABLE() to video mux subdev.

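  For reference, the fix follows the standard pattern sketched below (the
  compatible string matches the one used in the DT patches; the table and
  variable names are illustrative):

	#include <linux/module.h>
	#include <linux/of.h>

	static const struct of_device_id sketch_vidmux_dt_ids[] = {
		{ .compatible = "video-multiplexer" },
		{ /* sentinel */ }
	};
	/* Without this table the module cannot be auto-loaded on OF match. */
	MODULE_DEVICE_TABLE(of, sketch_vidmux_dt_ids);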

Changes suggested by Hans Verkuil <hverkuil@xs4all.nl>:

- entity function type MEDIA_ENT_F_MUX renamed to MEDIA_ENT_F_VID_MUX.

- removed use of the g_mbus_config subdev op. The sensor bus config is
  instead retrieved from the sensor DT node via v4l2_of_parse_endpoint().

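  Roughly, the bridge can now parse the sensor's endpoint along the lines
  below (a sketch against the v4l2-of API of this kernel generation, not
  the driver's exact code):

	#include <linux/errno.h>
	#include <linux/of.h>
	#include <linux/of_graph.h>
	#include <media/v4l2-of.h>

	/* Read the sensor bus config from its DT endpoint instead of
	 * calling the g_mbus_config subdev op. */
	static int sketch_parse_sensor_bus(struct device_node *sensor_node,
					   struct v4l2_of_endpoint *ep)
	{
		struct device_node *endpoint;
		int ret;

		endpoint = of_graph_get_next_endpoint(sensor_node, NULL);
		if (!endpoint)
			return -EINVAL;

		ret = v4l2_of_parse_endpoint(endpoint, ep);
		of_node_put(endpoint);

		/* On success, ep->bus_type and ep->bus.parallel.flags (or
		 * ep->bus.mipi_csi2) describe the sensor bus. */
		return ret;
	}
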
- use v4l2_ctrl_handler_setup() for restoring current control values
  in the ov564x subdevs, rather than a custom control cache.

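  The restore path then reduces to a single call after power-up (assuming
  the control handler is embedded in the sensor's private struct, as is
  typical; names here are illustrative):

	#include <media/v4l2-ctrls.h>
	#include <media/v4l2-subdev.h>

	struct sketch_sensor {
		struct v4l2_subdev sd;
		struct v4l2_ctrl_handler ctrl_hdl;
	};

	/* Re-apply every control's current value to the sensor hardware,
	 * e.g. at the end of s_power(1). */
	static int sketch_restore_controls(struct sketch_sensor *sensor)
	{
		return v4l2_ctrl_handler_setup(&sensor->ctrl_hdl);
	}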

Changes suggested by Russell King <linux@armlinux.org.uk>:

- re-ordered clock lane and data lane # assignments in device tree.

- fixed module unload.

- propagate the return code from ipu_ic_task_idma_init(); don't start
  streaming if it returns an error.


Changes suggested by Laurent Pinchart <laurent.pinchart@ideasonboard.com>:

- ov5640 subdev is improved and moved to drivers/media/i2c, along with
  binding docs. The ov5642 subdev has been dropped for now.

- regulator DT properties are now required in the ov5640 subdev, and
  reset/power GPIOs are optional. Created dummy regulator nodes in
  imx6qdl-sabrelite.dtsi for the ov5640 node (the ov5640 regulators are
  fixed regulators on the OV5640 module for sabrelite).

- removed use of endpoint ID in device tree as a way to specify a MIPI CSI-2
  virtual channel for the OV5640. The ov5640 subdev now hard-codes the
  virtual channel to 1 until a new subdev API becomes available to allow
  run-time virtual channel selection.


Other changes:

- v4l2-compliance fixes.

- since dma_buf ring support is gone, the VDIC subdev is modified to
  potentially receive frames from a future output device node on its
  IDMAC sink pad.

- fixed mbus pixel format enumeration and selection. The source pads
  and capture device select the correct formats based on the sink
  formats. For example the capture device can only report and allow
  selecting an RGB format if the attached source pad's format is RGB.
  Likewise for YUV space, with the added benefit that the capture
  device can select a YUV planar format in this case, and the attached
  subdev will comply and output planar.

- stripped out sensor input OF properties and parsing for now. It is
  problematic since there is currently no subdev op that lets the bridge
  retrieve this information and use it for VIDIOC_{ENUM|S|G}_INPUT.

- modified imx6-mipi-csi2 subdev to comply strictly with the MIPI CSI-2
  startup sequence described in the i.MX6 reference manual.



Philipp Zabel (6):
  ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their
    connections
  add mux and video interface bridge entity functions
  platform: add video-multiplexer subdevice driver
  media: imx: csi: fix crop rectangle changes in set_fmt
  media: imx: csi: add frame skipping support
  media: imx: csi: fix crop rectangle reset in sink set_fmt

Russell King (3):
  media: imx: add support for bayer formats
  media: imx: csi: add support for bayer formats
  media: imx: mipi-csi2: enable setting and getting of frame rates

Steve Longerbeam (27):
  [media] dt-bindings: Add bindings for i.MX media driver
  ARM: dts: imx6qdl: Add compatible, clocks, irqs to MIPI CSI-2 node
  ARM: dts: imx6qdl: add capture-subsystem device
  ARM: dts: imx6qdl-sabrelite: remove erratum ERR006687 workaround
  ARM: dts: imx6-sabrelite: add OV5642 and OV5640 camera sensors
  ARM: dts: imx6-sabresd: add OV5642 and OV5640 camera sensors
  ARM: dts: imx6-sabreauto: create i2cmux for i2c3
  ARM: dts: imx6-sabreauto: add reset-gpios property for max7310_b
  ARM: dts: imx6-sabreauto: add pinctrl for gpt input capture
  ARM: dts: imx6-sabreauto: add the ADV7180 video decoder
  [media] v4l2: add a frame timeout event
  [media] v4l2-mc: add a function to inherit controls from a pipeline
  UAPI: Add media UAPI Kbuild file
  media: Add userspace header file for i.MX
  media: Add i.MX media core driver
  media: imx: Add Capture Device Interface
  media: imx: Add CSI subdev driver
  media: imx: Add VDIC subdev driver
  media: imx: Add IC subdev drivers
  media: imx: Add MIPI CSI-2 Receiver subdev driver
  [media] add Omnivision OV5640 sensor driver
  ARM: imx_v6_v7_defconfig: Enable staging video4linux drivers
  media: imx: update capture dev format on IDMAC output pad set_fmt
  media: imx: csi: add __csi_get_fmt
  media: imx: csi/fim: add support for frame intervals
  media: imx: redo pixel format enumeration and negotiation
  media: imx: propagate sink pad formats to source pads

 .../devicetree/bindings/media/i2c/ov5640.txt       |   43 +
 Documentation/devicetree/bindings/media/imx.txt    |   66 +
 .../bindings/media/video-multiplexer.txt           |   59 +
 Documentation/media/uapi/mediactl/media-types.rst  |   22 +
 Documentation/media/uapi/v4l/vidioc-dqevent.rst    |    5 +
 Documentation/media/v4l-drivers/imx.rst            |  542 +++++
 Documentation/media/videodev2.h.rst.exceptions     |    1 +
 arch/arm/boot/dts/imx6dl-sabrelite.dts             |    5 +
 arch/arm/boot/dts/imx6dl-sabresd.dts               |    5 +
 arch/arm/boot/dts/imx6dl.dtsi                      |  185 ++
 arch/arm/boot/dts/imx6q-sabrelite.dts              |    5 +
 arch/arm/boot/dts/imx6q-sabresd.dts                |    5 +
 arch/arm/boot/dts/imx6q.dtsi                       |  121 ++
 arch/arm/boot/dts/imx6qdl-sabreauto.dtsi           |  144 +-
 arch/arm/boot/dts/imx6qdl-sabrelite.dtsi           |  152 +-
 arch/arm/boot/dts/imx6qdl-sabresd.dtsi             |  114 +-
 arch/arm/boot/dts/imx6qdl.dtsi                     |   17 +-
 arch/arm/configs/imx_v6_v7_defconfig               |   14 +-
 drivers/media/i2c/Kconfig                          |    7 +
 drivers/media/i2c/Makefile                         |    1 +
 drivers/media/i2c/ov5640.c                         | 2109 ++++++++++++++++++++
 drivers/media/platform/Kconfig                     |    8 +
 drivers/media/platform/Makefile                    |    2 +
 drivers/media/platform/video-multiplexer.c         |  474 +++++
 drivers/media/v4l2-core/v4l2-mc.c                  |   48 +
 drivers/staging/media/Kconfig                      |    2 +
 drivers/staging/media/Makefile                     |    1 +
 drivers/staging/media/imx/Kconfig                  |   20 +
 drivers/staging/media/imx/Makefile                 |   12 +
 drivers/staging/media/imx/TODO                     |   36 +
 drivers/staging/media/imx/imx-ic-common.c          |  113 ++
 drivers/staging/media/imx/imx-ic-prp.c             |  458 +++++
 drivers/staging/media/imx/imx-ic-prpencvf.c        | 1138 +++++++++++
 drivers/staging/media/imx/imx-ic.h                 |   38 +
 drivers/staging/media/imx/imx-media-capture.c      |  671 +++++++
 drivers/staging/media/imx/imx-media-csi.c          | 1483 ++++++++++++++
 drivers/staging/media/imx/imx-media-dev.c          |  487 +++++
 drivers/staging/media/imx/imx-media-fim.c          |  463 +++++
 drivers/staging/media/imx/imx-media-internal-sd.c  |  349 ++++
 drivers/staging/media/imx/imx-media-of.c           |  267 +++
 drivers/staging/media/imx/imx-media-utils.c        |  930 +++++++++
 drivers/staging/media/imx/imx-media-vdic.c         |  915 +++++++++
 drivers/staging/media/imx/imx-media.h              |  305 +++
 drivers/staging/media/imx/imx6-mipi-csi2.c         |  600 ++++++
 include/media/imx.h                                |   15 +
 include/media/v4l2-mc.h                            |   25 +
 include/uapi/Kbuild                                |    1 +
 include/uapi/linux/media.h                         |    6 +
 include/uapi/linux/v4l2-controls.h                 |    4 +
 include/uapi/linux/videodev2.h                     |    1 +
 include/uapi/media/Kbuild                          |    2 +
 include/uapi/media/imx.h                           |   29 +
 52 files changed, 12496 insertions(+), 29 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/media/i2c/ov5640.txt
 create mode 100644 Documentation/devicetree/bindings/media/imx.txt
 create mode 100644 Documentation/devicetree/bindings/media/video-multiplexer.txt
 create mode 100644 Documentation/media/v4l-drivers/imx.rst
 create mode 100644 drivers/media/i2c/ov5640.c
 create mode 100644 drivers/media/platform/video-multiplexer.c
 create mode 100644 drivers/staging/media/imx/Kconfig
 create mode 100644 drivers/staging/media/imx/Makefile
 create mode 100644 drivers/staging/media/imx/TODO
 create mode 100644 drivers/staging/media/imx/imx-ic-common.c
 create mode 100644 drivers/staging/media/imx/imx-ic-prp.c
 create mode 100644 drivers/staging/media/imx/imx-ic-prpencvf.c
 create mode 100644 drivers/staging/media/imx/imx-ic.h
 create mode 100644 drivers/staging/media/imx/imx-media-capture.c
 create mode 100644 drivers/staging/media/imx/imx-media-csi.c
 create mode 100644 drivers/staging/media/imx/imx-media-dev.c
 create mode 100644 drivers/staging/media/imx/imx-media-fim.c
 create mode 100644 drivers/staging/media/imx/imx-media-internal-sd.c
 create mode 100644 drivers/staging/media/imx/imx-media-of.c
 create mode 100644 drivers/staging/media/imx/imx-media-utils.c
 create mode 100644 drivers/staging/media/imx/imx-media-vdic.c
 create mode 100644 drivers/staging/media/imx/imx-media.h
 create mode 100644 drivers/staging/media/imx/imx6-mipi-csi2.c
 create mode 100644 include/media/imx.h
 create mode 100644 include/uapi/media/Kbuild
 create mode 100644 include/uapi/media/imx.h

-- 
2.7.4


* [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 11:54   ` Philipp Zabel
  2017-02-27 14:38   ` Rob Herring
  2017-02-16  2:19 ` [PATCH v4 02/36] ARM: dts: imx6qdl: Add compatible, clocks, irqs to MIPI CSI-2 node Steve Longerbeam
                   ` (36 subsequent siblings)
  37 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add bindings documentation for the i.MX media driver.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 Documentation/devicetree/bindings/media/imx.txt | 66 +++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/media/imx.txt

diff --git a/Documentation/devicetree/bindings/media/imx.txt b/Documentation/devicetree/bindings/media/imx.txt
new file mode 100644
index 0000000..fd5af50
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/imx.txt
@@ -0,0 +1,66 @@
+Freescale i.MX Media Video Device
+=================================
+
+Video Media Controller node
+---------------------------
+
+This is the media controller node for video capture support. It is a
+virtual device that lists the camera serial interface nodes that the
+media device will control.
+
+Required properties:
+- compatible : "fsl,imx-capture-subsystem";
+- ports      : Should contain a list of phandles pointing to camera
+		sensor interface ports of IPU devices
+
+example:
+
+capture-subsystem {
+	compatible = "fsl,imx-capture-subsystem";
+	ports = <&ipu1_csi0>, <&ipu1_csi1>;
+};
+
+fim child node
+--------------
+
+This is an optional child node of the ipu_csi port nodes. If present and
+available, it enables the Frame Interval Monitor. Its properties can be
+used to modify the method in which the FIM measures frame intervals.
+Refer to Documentation/media/v4l-drivers/imx.rst for more info on the
+Frame Interval Monitor.
+
+Optional properties:
+- fsl,input-capture-channel: an input capture channel and channel flags,
+			     specified as <chan flags>. The channel number
+			     must be 0 or 1. The flags can be
+			     IRQ_TYPE_EDGE_RISING, IRQ_TYPE_EDGE_FALLING, or
+			     IRQ_TYPE_EDGE_BOTH, and specify which input
+			     capture signal edge will trigger the input
+			     capture event. If an input capture channel is
+			     specified, the FIM will use this method to
+			     measure frame intervals instead of via the EOF
+			     interrupt. The input capture method is much
+			     preferred over EOF as it is not subject to
+			     interrupt latency errors. However it requires
+			     routing the VSYNC or FIELD output signals of
+			     the camera sensor to one of the i.MX input
+			     capture pads (SD1_DAT0, SD1_DAT1), which also
+			     gives up support for SD1.
+
+
+mipi_csi2 node
+--------------
+
+This is the device node for the MIPI CSI-2 Receiver, required for MIPI
+CSI-2 sensors.
+
+Required properties:
+- compatible	: "fsl,imx6-mipi-csi2", "snps,dw-mipi-csi2";
+- reg           : physical base address and length of the register set;
+- clocks	: the MIPI CSI-2 receiver requires three clocks: hsi_tx
+                  (the DPHY clock), video_27m, and eim_podf;
+- clock-names	: must contain "dphy", "cfg", "pix";
+
+Optional properties:
+- interrupts	: must contain two level-triggered interrupts,
+                  in order: 100 and 101;
-- 
2.7.4


* [PATCH v4 02/36] ARM: dts: imx6qdl: Add compatible, clocks, irqs to MIPI CSI-2 node
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 03/36] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Steve Longerbeam
                   ` (35 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add to the MIPI CSI2 receiver node: compatible strings,
interrupt sources, and clocks.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6qdl.dtsi | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
index 61569c8..aac70b9 100644
--- a/arch/arm/boot/dts/imx6qdl.dtsi
+++ b/arch/arm/boot/dts/imx6qdl.dtsi
@@ -1125,7 +1125,14 @@
 			};
 
 			mipi_csi: mipi@021dc000 {
+				compatible = "fsl,imx6-mipi-csi2", "snps,dw-mipi-csi2";
 				reg = <0x021dc000 0x4000>;
+				interrupts = <0 100 0x04>, <0 101 0x04>;
+				clocks = <&clks IMX6QDL_CLK_HSI_TX>,
+					 <&clks IMX6QDL_CLK_VIDEO_27M>,
+					 <&clks IMX6QDL_CLK_EIM_PODF>;
+				clock-names = "dphy", "cfg", "pix";
+				status = "disabled";
 			};
 
 			mipi_dsi: mipi@021e0000 {
-- 
2.7.4


* [PATCH v4 03/36] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 02/36] ARM: dts: imx6qdl: Add compatible, clocks, irqs to MIPI CSI-2 node Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 04/36] ARM: dts: imx6qdl: add capture-subsystem device Steve Longerbeam
                   ` (34 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

From: Philipp Zabel <p.zabel@pengutronix.de>

This patch adds the device tree graph connecting the input multiplexers
to the IPU CSIs and the MIPI-CSI2 gasket on i.MX6. The MIPI_IPU
multiplexers are added as children of the iomuxc-gpr syscon device node.
On i.MX6Q/D two two-input multiplexers in front of IPU1 CSI0 and IPU2
CSI1 allow to select between CSI0/1 parallel input pads and the MIPI
CSI-2 virtual channels 0/3.
On i.MX6DL/S two five-input multiplexers in front of IPU1 CSI0 and IPU1
CSI1 allow to select between CSI0/1 parallel input pads and any of the
four MIPI CSI-2 virtual channels.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>

--

- Removed some dangling/unused endpoints (ipu2_csi0_from_csi2ipu)
- Renamed the mipi virtual channel endpoint labels, from "mipi_csiX_..."
  to "mipi_vcX...".
- Added input endpoint anchors to the video muxes for the connections
  from parallel sensors.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6dl.dtsi  | 180 +++++++++++++++++++++++++++++++++++++++++
 arch/arm/boot/dts/imx6q.dtsi   | 116 ++++++++++++++++++++++++++
 arch/arm/boot/dts/imx6qdl.dtsi |  10 ++-
 3 files changed, 305 insertions(+), 1 deletion(-)

diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
index 1ade195..371288a 100644
--- a/arch/arm/boot/dts/imx6dl.dtsi
+++ b/arch/arm/boot/dts/imx6dl.dtsi
@@ -181,6 +181,186 @@
 		      "di0", "di1";
 };
 
+&gpr {
+	ipu1_csi0_mux: ipu1_csi0_mux@34 {
+		compatible = "video-multiplexer";
+		reg = <0x34>;
+		bit-mask = <0x7>;
+		bit-shift = <0>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+		status = "okay";
+
+		port@0 {
+			reg = <0>;
+
+			ipu1_csi0_mux_from_mipi_vc0: endpoint {
+				remote-endpoint = <&mipi_vc0_to_ipu1_csi0_mux>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			ipu1_csi0_mux_from_mipi_vc1: endpoint {
+				remote-endpoint = <&mipi_vc1_to_ipu1_csi0_mux>;
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			ipu1_csi0_mux_from_mipi_vc2: endpoint {
+				remote-endpoint = <&mipi_vc2_to_ipu1_csi0_mux>;
+			};
+		};
+
+		port@3 {
+			reg = <3>;
+
+			ipu1_csi0_mux_from_mipi_vc3: endpoint {
+				remote-endpoint = <&mipi_vc3_to_ipu1_csi0_mux>;
+			};
+		};
+
+		port@4 {
+			reg = <4>;
+
+			ipu1_csi0_mux_from_parallel_sensor: endpoint {
+			};
+		};
+
+		port@5 {
+			reg = <5>;
+
+			ipu1_csi0_mux_to_ipu1_csi0: endpoint {
+				remote-endpoint = <&ipu1_csi0_from_ipu1_csi0_mux>;
+			};
+		};
+	};
+
+	ipu1_csi1_mux: ipu1_csi1_mux@34 {
+		compatible = "video-multiplexer";
+		reg = <0x34>;
+		bit-mask = <0x7>;
+		bit-shift = <3>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+		status = "okay";
+
+		port@0 {
+			reg = <0>;
+
+			ipu1_csi1_mux_from_mipi_vc0: endpoint {
+				remote-endpoint = <&mipi_vc0_to_ipu1_csi1_mux>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			ipu1_csi1_mux_from_mipi_vc1: endpoint {
+				remote-endpoint = <&mipi_vc1_to_ipu1_csi1_mux>;
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			ipu1_csi1_mux_from_mipi_vc2: endpoint {
+				remote-endpoint = <&mipi_vc2_to_ipu1_csi1_mux>;
+			};
+		};
+
+		port@3 {
+			reg = <3>;
+
+			ipu1_csi1_mux_from_mipi_vc3: endpoint {
+				remote-endpoint = <&mipi_vc3_to_ipu1_csi1_mux>;
+			};
+		};
+
+		port@4 {
+			reg = <4>;
+
+			ipu1_csi1_mux_from_parallel_sensor: endpoint {
+			};
+		};
+
+		port@5 {
+			reg = <5>;
+
+			ipu1_csi1_mux_to_ipu1_csi1: endpoint {
+				remote-endpoint = <&ipu1_csi1_from_ipu1_csi1_mux>;
+			};
+		};
+	};
+};
+
+&ipu1_csi1 {
+	ipu1_csi1_from_ipu1_csi1_mux: endpoint {
+		remote-endpoint = <&ipu1_csi1_mux_to_ipu1_csi1>;
+	};
+};
+
+&mipi_csi {
+	port@1 {
+		reg = <1>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_vc0_to_ipu1_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu1_csi0_mux_from_mipi_vc0>;
+		};
+
+		mipi_vc0_to_ipu1_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu1_csi1_mux_from_mipi_vc0>;
+		};
+	};
+
+	port@2 {
+		reg = <2>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_vc1_to_ipu1_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu1_csi0_mux_from_mipi_vc1>;
+		};
+
+		mipi_vc1_to_ipu1_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu1_csi1_mux_from_mipi_vc1>;
+		};
+	};
+
+	port@3 {
+		reg = <3>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_vc2_to_ipu1_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu1_csi0_mux_from_mipi_vc2>;
+		};
+
+		mipi_vc2_to_ipu1_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu1_csi1_mux_from_mipi_vc2>;
+		};
+	};
+
+	port@4 {
+		reg = <4>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_vc3_to_ipu1_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu1_csi0_mux_from_mipi_vc3>;
+		};
+
+		mipi_vc3_to_ipu1_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu1_csi1_mux_from_mipi_vc3>;
+		};
+	};
+};
+
 &vpu {
 	compatible = "fsl,imx6dl-vpu", "cnm,coda960";
 };
diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
index e9a5d0b..b833b0d 100644
--- a/arch/arm/boot/dts/imx6q.dtsi
+++ b/arch/arm/boot/dts/imx6q.dtsi
@@ -143,10 +143,18 @@
 
 			ipu2_csi0: port@0 {
 				reg = <0>;
+
+				ipu2_csi0_from_mipi_vc2: endpoint {
+					remote-endpoint = <&mipi_vc2_to_ipu2_csi0>;
+				};
 			};
 
 			ipu2_csi1: port@1 {
 				reg = <1>;
+
+				ipu2_csi1_from_ipu2_csi1_mux: endpoint {
+					remote-endpoint = <&ipu2_csi1_mux_to_ipu2_csi1>;
+				};
 			};
 
 			ipu2_di0: port@2 {
@@ -266,6 +274,80 @@
 	};
 };
 
+&gpr {
+	ipu1_csi0_mux: ipu1_csi0_mux@4 {
+		compatible = "video-multiplexer";
+		reg = <0x04>;
+		bit-mask = <1>;
+		bit-shift = <19>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+		status = "okay";
+
+		port@0 {
+			reg = <0>;
+
+			ipu1_csi0_mux_from_mipi_vc0: endpoint {
+				remote-endpoint = <&mipi_vc0_to_ipu1_csi0_mux>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			ipu1_csi0_mux_from_parallel_sensor: endpoint {
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			ipu1_csi0_mux_to_ipu1_csi0: endpoint {
+				remote-endpoint = <&ipu1_csi0_from_ipu1_csi0_mux>;
+			};
+		};
+	};
+
+	ipu2_csi1_mux: ipu2_csi1_mux@4 {
+		compatible = "video-multiplexer";
+		reg = <0x04>;
+		bit-mask = <1>;
+		bit-shift = <20>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+		status = "okay";
+
+		port@0 {
+			reg = <0>;
+
+			ipu2_csi1_mux_from_mipi_vc3: endpoint {
+				remote-endpoint = <&mipi_vc3_to_ipu2_csi1_mux>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			ipu2_csi1_mux_from_parallel_sensor: endpoint {
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			ipu2_csi1_mux_to_ipu2_csi1: endpoint {
+				remote-endpoint = <&ipu2_csi1_from_ipu2_csi1_mux>;
+			};
+		};
+	};
+};
+
+&ipu1_csi1 {
+	ipu1_csi1_from_mipi_vc1: endpoint {
+		remote-endpoint = <&mipi_vc1_to_ipu1_csi1>;
+	};
+};
+
 &ldb {
 	clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, <&clks IMX6QDL_CLK_LDB_DI1_SEL>,
 		 <&clks IMX6QDL_CLK_IPU1_DI0_SEL>, <&clks IMX6QDL_CLK_IPU1_DI1_SEL>,
@@ -312,6 +394,40 @@
 	};
 };
 
+&mipi_csi {
+	port@1 {
+		reg = <1>;
+
+		mipi_vc0_to_ipu1_csi0_mux: endpoint {
+			remote-endpoint = <&ipu1_csi0_mux_from_mipi_vc0>;
+		};
+	};
+
+	port@2 {
+		reg = <2>;
+
+		mipi_vc1_to_ipu1_csi1: endpoint {
+			remote-endpoint = <&ipu1_csi1_from_mipi_vc1>;
+		};
+	};
+
+	port@3 {
+		reg = <3>;
+
+		mipi_vc2_to_ipu2_csi0: endpoint {
+			remote-endpoint = <&ipu2_csi0_from_mipi_vc2>;
+		};
+	};
+
+	port@4 {
+		reg = <4>;
+
+		mipi_vc3_to_ipu2_csi1_mux: endpoint {
+			remote-endpoint = <&ipu2_csi1_mux_from_mipi_vc3>;
+		};
+	};
+};
+
 &mipi_dsi {
 	ports {
 		port@2 {
diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
index aac70b9..c4130d5 100644
--- a/arch/arm/boot/dts/imx6qdl.dtsi
+++ b/arch/arm/boot/dts/imx6qdl.dtsi
@@ -799,8 +799,10 @@
 			};
 
 			gpr: iomuxc-gpr@020e0000 {
-				compatible = "fsl,imx6q-iomuxc-gpr", "syscon";
+				compatible = "fsl,imx6q-iomuxc-gpr", "syscon", "simple-mfd";
 				reg = <0x020e0000 0x38>;
+				#address-cells = <1>;
+				#size-cells = <0>;
 			};
 
 			iomuxc: iomuxc@020e0000 {
@@ -1127,6 +1129,8 @@
 			mipi_csi: mipi@021dc000 {
 				compatible = "fsl,imx6-mipi-csi2", "snps,dw-mipi-csi2";
 				reg = <0x021dc000 0x4000>;
+				#address-cells = <1>;
+				#size-cells = <0>;
 				interrupts = <0 100 0x04>, <0 101 0x04>;
 				clocks = <&clks IMX6QDL_CLK_HSI_TX>,
 					 <&clks IMX6QDL_CLK_VIDEO_27M>,
@@ -1234,6 +1238,10 @@
 
 			ipu1_csi0: port@0 {
 				reg = <0>;
+
+				ipu1_csi0_from_ipu1_csi0_mux: endpoint {
+					remote-endpoint = <&ipu1_csi0_mux_to_ipu1_csi0>;
+				};
 			};
 
 			ipu1_csi1: port@1 {
-- 
2.7.4


* [PATCH v4 04/36] ARM: dts: imx6qdl: add capture-subsystem device
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (2 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 03/36] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 05/36] ARM: dts: imx6qdl-sabrelite: remove erratum ERR006687 workaround Steve Longerbeam
                   ` (33 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6dl.dtsi | 5 +++++
 arch/arm/boot/dts/imx6q.dtsi  | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
index 371288a..f1743fc 100644
--- a/arch/arm/boot/dts/imx6dl.dtsi
+++ b/arch/arm/boot/dts/imx6dl.dtsi
@@ -100,6 +100,11 @@
 		};
 	};
 
+	capture-subsystem {
+		compatible = "fsl,imx-capture-subsystem";
+		ports = <&ipu1_csi0>, <&ipu1_csi1>;
+	};
+
 	display-subsystem {
 		compatible = "fsl,imx-display-subsystem";
 		ports = <&ipu1_di0>, <&ipu1_di1>;
diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
index b833b0d..4cc6579 100644
--- a/arch/arm/boot/dts/imx6q.dtsi
+++ b/arch/arm/boot/dts/imx6q.dtsi
@@ -206,6 +206,11 @@
 		};
 	};
 
+	capture-subsystem {
+		compatible = "fsl,imx-capture-subsystem";
+		ports = <&ipu1_csi0>, <&ipu1_csi1>, <&ipu2_csi0>, <&ipu2_csi1>;
+	};
+
 	display-subsystem {
 		compatible = "fsl,imx-display-subsystem";
 		ports = <&ipu1_di0>, <&ipu1_di1>, <&ipu2_di0>, <&ipu2_di1>;
-- 
2.7.4


* [PATCH v4 05/36] ARM: dts: imx6qdl-sabrelite: remove erratum ERR006687 workaround
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (3 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 04/36] ARM: dts: imx6qdl: add capture-subsystem device Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 06/36] ARM: dts: imx6-sabrelite: add OV5642 and OV5640 camera sensors Steve Longerbeam
                   ` (32 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

There is a pin conflict with GPIO_6. This pin functions as a power
input pin to the OV5642 camera sensor, but ENET uses it as the h/w
workaround for erratum ERR006687, to wake-up the ARM cores on normal
RX and TX packet done events. So we need to remove the h/w workaround
to support the OV5642. The result is that the CPUidle driver will no
longer allow entering the deep idle states on the sabrelite.

This is a partial revert of

commit 6261c4c8f13e ("ARM: dts: imx6qdl-sabrelite: use GPIO_6 for FEC
			interrupt.")
commit a28eeb43ee57 ("ARM: dts: imx6: tag boards that have the HW workaround
			for ERR006687")

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6qdl-sabrelite.dtsi | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
index 1f9076e..795b5a5 100644
--- a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
@@ -271,9 +271,6 @@
 	txd1-skew-ps = <0>;
 	txd2-skew-ps = <0>;
 	txd3-skew-ps = <0>;
-	interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
-			      <&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
-	fsl,err006687-workaround-present;
 	status = "okay";
 };
 
@@ -374,7 +371,6 @@
 				MX6QDL_PAD_RGMII_RX_CTL__RGMII_RX_CTL	0x1b030
 				/* Phy reset */
 				MX6QDL_PAD_EIM_D23__GPIO3_IO23		0x000b0
-				MX6QDL_PAD_GPIO_6__ENET_IRQ		0x000b1
 			>;
 		};
 
-- 
2.7.4


* [PATCH v4 06/36] ARM: dts: imx6-sabrelite: add OV5642 and OV5640 camera sensors
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (4 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 05/36] ARM: dts: imx6qdl-sabrelite: remove erratum ERR006687 workaround Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 07/36] ARM: dts: imx6-sabresd: " Steve Longerbeam
                   ` (31 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Adds the OV5642 parallel-bus sensor, and the OV5640 MIPI CSI-2 sensor.
Both hang off the same i2c2 bus, so they require different (and non-
default) i2c slave addresses.

The OV5642 connects to the parallel-bus mux input port on ipu1_csi0_mux.

The OV5640 connects to the input port on the MIPI CSI-2 receiver on
mipi_csi.

The OV5642 node is temporarily disabled until its subdev driver is
cleaned up and submitted.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6dl-sabrelite.dts   |   5 ++
 arch/arm/boot/dts/imx6q-sabrelite.dts    |   5 ++
 arch/arm/boot/dts/imx6qdl-sabrelite.dtsi | 148 +++++++++++++++++++++++++++++++
 3 files changed, 158 insertions(+)

diff --git a/arch/arm/boot/dts/imx6dl-sabrelite.dts b/arch/arm/boot/dts/imx6dl-sabrelite.dts
index 0f06ca5..cfd5110 100644
--- a/arch/arm/boot/dts/imx6dl-sabrelite.dts
+++ b/arch/arm/boot/dts/imx6dl-sabrelite.dts
@@ -48,3 +48,8 @@
 	model = "Freescale i.MX6 DualLite SABRE Lite Board";
 	compatible = "fsl,imx6dl-sabrelite", "fsl,imx6dl";
 };
+
+&ipu1_csi1_from_ipu1_csi1_mux {
+	clock-lanes = <0>;
+	data-lanes = <1 2>;
+};
diff --git a/arch/arm/boot/dts/imx6q-sabrelite.dts b/arch/arm/boot/dts/imx6q-sabrelite.dts
index 66d10d8..e00fc06 100644
--- a/arch/arm/boot/dts/imx6q-sabrelite.dts
+++ b/arch/arm/boot/dts/imx6q-sabrelite.dts
@@ -52,3 +52,8 @@
 &sata {
 	status = "okay";
 };
+
+&ipu1_csi1_from_mipi_vc1 {
+	clock-lanes = <0>;
+	data-lanes = <1 2>;
+};
diff --git a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
index 795b5a5..7958a0c 100644
--- a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
@@ -39,6 +39,8 @@
  *     FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
  *     OTHER DEALINGS IN THE SOFTWARE.
  */
+
+#include <dt-bindings/clock/imx6qdl-clock.h>
 #include <dt-bindings/gpio/gpio.h>
 #include <dt-bindings/input/input.h>
 
@@ -94,6 +96,42 @@
 			pinctrl-0 = <&pinctrl_can_xcvr>;
 			gpio = <&gpio1 2 GPIO_ACTIVE_LOW>;
 		};
+
+		reg_1p5v: regulator@4 {
+			compatible = "regulator-fixed";
+			reg = <4>;
+			regulator-name = "1P5V";
+			regulator-min-microvolt = <1500000>;
+			regulator-max-microvolt = <1500000>;
+			regulator-always-on;
+		};
+
+		reg_1p8v: regulator@5 {
+			compatible = "regulator-fixed";
+			reg = <5>;
+			regulator-name = "1P8V";
+			regulator-min-microvolt = <1800000>;
+			regulator-max-microvolt = <1800000>;
+			regulator-always-on;
+		};
+
+		reg_2p8v: regulator@6 {
+			compatible = "regulator-fixed";
+			reg = <6>;
+			regulator-name = "2P8V";
+			regulator-min-microvolt = <2800000>;
+			regulator-max-microvolt = <2800000>;
+			regulator-always-on;
+		};
+	};
+
+	mipi_xclk: mipi_xclk {
+		compatible = "pwm-clock";
+		#clock-cells = <0>;
+		clock-frequency = <22000000>;
+		clock-output-names = "mipi_pwm3";
+		pwms = <&pwm3 0 45>; /* 1 / 45 ns = 22 MHz */
+		status = "okay";
 	};
 
 	gpio-keys {
@@ -220,6 +258,22 @@
 	};
 };
 
+&ipu1_csi0_from_ipu1_csi0_mux {
+	bus-width = <8>;
+	data-shift = <12>; /* Lines 19:12 used */
+	hsync-active = <1>;
+	vsync-active = <1>;
+};
+
+&ipu1_csi0_mux_from_parallel_sensor {
+	remote-endpoint = <&ov5642_to_ipu1_csi0_mux>;
+};
+
+&ipu1_csi0 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_ipu1_csi0>;
+};
+
 &audmux {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_audmux>;
@@ -299,6 +353,53 @@
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c2>;
 	status = "okay";
+
+	ov5640: camera@40 {
+		compatible = "ovti,ov5640";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_ov5640>;
+		reg = <0x40>;
+		clocks = <&mipi_xclk>;
+		clock-names = "xclk";
+		DOVDD-supply = <&reg_1p8v>;
+		AVDD-supply = <&reg_2p8v>;
+		DVDD-supply = <&reg_1p5v>;
+		reset-gpios = <&gpio2 5 GPIO_ACTIVE_LOW>; /* NANDF_D5 */
+		pwdn-gpios = <&gpio6 9 GPIO_ACTIVE_HIGH>; /* NANDF_WP_B */
+
+		port {
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			ov5640_to_mipi_csi2: endpoint {
+				remote-endpoint = <&mipi_csi2_in>;
+				clock-lanes = <0>;
+				data-lanes = <1 2>;
+			};
+		};
+	};
+
+	ov5642: camera@42 {
+		compatible = "ovti,ov5642";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_ov5642>;
+		clocks = <&clks IMX6QDL_CLK_CKO2>;
+		clock-names = "xclk";
+		reg = <0x42>;
+		reset-gpios = <&gpio1 8 GPIO_ACTIVE_LOW>;
+		pwdn-gpios = <&gpio1 6 GPIO_ACTIVE_HIGH>;
+		gp-gpios = <&gpio1 16 GPIO_ACTIVE_HIGH>;
+		status = "disabled";
+
+		port {
+			ov5642_to_ipu1_csi0_mux: endpoint {
+				remote-endpoint = <&ipu1_csi0_mux_from_parallel_sensor>;
+				bus-width = <8>;
+				hsync-active = <1>;
+				vsync-active = <1>;
+			};
+		};
+	};
 };
 
 &i2c3 {
@@ -412,6 +513,23 @@
 			>;
 		};
 
+		pinctrl_ipu1_csi0: ipu1csi0grp {
+			fsl,pins = <
+				MX6QDL_PAD_CSI0_DAT12__IPU1_CSI0_DATA12    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT13__IPU1_CSI0_DATA13    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT14__IPU1_CSI0_DATA14    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT15__IPU1_CSI0_DATA15    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT16__IPU1_CSI0_DATA16    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT17__IPU1_CSI0_DATA17    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT18__IPU1_CSI0_DATA18    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT19__IPU1_CSI0_DATA19    0x1b0b0
+				MX6QDL_PAD_CSI0_PIXCLK__IPU1_CSI0_PIXCLK   0x1b0b0
+				MX6QDL_PAD_CSI0_MCLK__IPU1_CSI0_HSYNC      0x1b0b0
+				MX6QDL_PAD_CSI0_VSYNC__IPU1_CSI0_VSYNC     0x1b0b0
+				MX6QDL_PAD_CSI0_DATA_EN__IPU1_CSI0_DATA_EN 0x1b0b0
+			>;
+		};
+
 		pinctrl_j15: j15grp {
 			fsl,pins = <
 				MX6QDL_PAD_DI0_DISP_CLK__IPU1_DI0_DISP_CLK 0x10
@@ -445,6 +563,22 @@
 			>;
 		};
 
+		pinctrl_ov5640: ov5640grp {
+			fsl,pins = <
+				MX6QDL_PAD_NANDF_D5__GPIO2_IO05   0x000b0
+				MX6QDL_PAD_NANDF_WP_B__GPIO6_IO09 0x0b0b0
+			>;
+		};
+
+		pinctrl_ov5642: ov5642grp {
+			fsl,pins = <
+				MX6QDL_PAD_SD1_DAT0__GPIO1_IO16 0x1b0b0
+				MX6QDL_PAD_GPIO_6__GPIO1_IO06   0x1b0b0
+				MX6QDL_PAD_GPIO_8__GPIO1_IO08   0x130b0
+				MX6QDL_PAD_GPIO_3__CCM_CLKO2    0x000b0
+			>;
+		};
+
 		pinctrl_pwm1: pwm1grp {
 			fsl,pins = <
 				MX6QDL_PAD_SD1_DAT3__PWM1_OUT 0x1b0b1
@@ -601,3 +735,17 @@
 	vmmc-supply = <&reg_3p3v>;
 	status = "okay";
 };
+
+&mipi_csi {
+	status = "okay";
+
+	port@0 {
+		reg = <0>;
+
+		mipi_csi2_in: endpoint {
+			remote-endpoint = <&ov5640_to_mipi_csi2>;
+			clock-lanes = <0>;
+			data-lanes = <1 2>;
+		};
+	};
+};
-- 
2.7.4


* [PATCH v4 07/36] ARM: dts: imx6-sabresd: add OV5642 and OV5640 camera sensors
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (5 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 06/36] ARM: dts: imx6-sabrelite: add OV5642 and OV5640 camera sensors Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-17  0:51   ` Fabio Estevam
  2017-02-16  2:19 ` [PATCH v4 08/36] ARM: dts: imx6-sabreauto: create i2cmux for i2c3 Steve Longerbeam
                   ` (30 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Enables the OV5642 parallel-bus sensor, and the OV5640 MIPI CSI-2 sensor.

The OV5642 connects to the parallel-bus mux input port on ipu1_csi0_mux.

The OV5640 connects to the input port on the MIPI CSI-2 receiver on
mipi_csi.

Until an OV5642 sensor module compatible with the SabreSD becomes
available for testing, the ov5642 node is disabled.
---
 arch/arm/boot/dts/imx6dl-sabresd.dts   |   5 ++
 arch/arm/boot/dts/imx6q-sabresd.dts    |   5 ++
 arch/arm/boot/dts/imx6qdl-sabresd.dtsi | 114 ++++++++++++++++++++++++++++++++-
 3 files changed, 123 insertions(+), 1 deletion(-)

diff --git a/arch/arm/boot/dts/imx6dl-sabresd.dts b/arch/arm/boot/dts/imx6dl-sabresd.dts
index 1e45f2f..9607afe 100644
--- a/arch/arm/boot/dts/imx6dl-sabresd.dts
+++ b/arch/arm/boot/dts/imx6dl-sabresd.dts
@@ -15,3 +15,8 @@
 	model = "Freescale i.MX6 DualLite SABRE Smart Device Board";
 	compatible = "fsl,imx6dl-sabresd", "fsl,imx6dl";
 };
+
+&ipu1_csi1_from_ipu1_csi1_mux {
+	clock-lanes = <0>;
+	data-lanes = <1 2>;
+};
diff --git a/arch/arm/boot/dts/imx6q-sabresd.dts b/arch/arm/boot/dts/imx6q-sabresd.dts
index 9cbdfe7..527772b 100644
--- a/arch/arm/boot/dts/imx6q-sabresd.dts
+++ b/arch/arm/boot/dts/imx6q-sabresd.dts
@@ -23,3 +23,8 @@
 &sata {
 	status = "okay";
 };
+
+&ipu1_csi1_from_mipi_vc1 {
+	clock-lanes = <0>;
+	data-lanes = <1 2>;
+};
diff --git a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
index 55ef535..f4e13c6 100644
--- a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
@@ -10,6 +10,7 @@
  * http://www.gnu.org/copyleft/gpl.html
  */
 
+#include <dt-bindings/clock/imx6qdl-clock.h>
 #include <dt-bindings/gpio/gpio.h>
 #include <dt-bindings/input/input.h>
 
@@ -146,6 +147,36 @@
 	};
 };
 
+&ipu1_csi0_from_ipu1_csi0_mux {
+	bus-width = <8>;
+	data-shift = <12>; /* Lines 19:12 used */
+	hsync-active = <1>;
+	vsync-active = <1>;
+};
+
+&ipu1_csi0_mux_from_parallel_sensor {
+	remote-endpoint = <&ov5642_to_ipu1_csi0_mux>;
+};
+
+&ipu1_csi0 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_ipu1_csi0>;
+};
+
+&mipi_csi {
+	status = "okay";
+
+	port@0 {
+		reg = <0>;
+
+		mipi_csi2_in: endpoint {
+			remote-endpoint = <&ov5640_to_mipi_csi2>;
+			clock-lanes = <0>;
+			data-lanes = <1 2>;
+		};
+	};
+};
+
 &audmux {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_audmux>;
@@ -214,7 +245,32 @@
 			0x8014 /* 4:FN_DMICCDAT */
 			0x0000 /* 5:Default */
 		>;
-       };
+	};
+
+	ov5642: camera@3c {
+		compatible = "ovti,ov5642";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_ov5642>;
+		clocks = <&clks IMX6QDL_CLK_CKO>;
+		clock-names = "xclk";
+		reg = <0x3c>;
+		DOVDD-supply = <&vgen4_reg>; /* 1.8v */
+		AVDD-supply = <&vgen3_reg>;  /* 2.8v, rev C board is VGEN3
+						rev B board is VGEN5 */
+		DVDD-supply = <&vgen2_reg>;  /* 1.5v*/
+		pwdn-gpios = <&gpio1 16 GPIO_ACTIVE_HIGH>;
+		reset-gpios = <&gpio1 17 GPIO_ACTIVE_LOW>;
+		status = "disabled";
+
+		port {
+			ov5642_to_ipu1_csi0_mux: endpoint {
+				remote-endpoint = <&ipu1_csi0_mux_from_parallel_sensor>;
+				bus-width = <8>;
+				hsync-active = <1>;
+				vsync-active = <1>;
+			};
+		};
+	};
 };
 
 &i2c2 {
@@ -223,6 +279,32 @@
 	pinctrl-0 = <&pinctrl_i2c2>;
 	status = "okay";
 
+	ov5640: camera@3c {
+		compatible = "ovti,ov5640";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_ov5640>;
+		reg = <0x3c>;
+		clocks = <&clks IMX6QDL_CLK_CKO>;
+		clock-names = "xclk";
+		DOVDD-supply = <&vgen4_reg>; /* 1.8v */
+		AVDD-supply = <&vgen3_reg>;  /* 2.8v, rev C board is VGEN3
+						rev B board is VGEN5 */
+		DVDD-supply = <&vgen2_reg>;  /* 1.5v*/
+		pwdn-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>;
+		reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>;
+
+		port {
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			ov5640_to_mipi_csi2: endpoint {
+				remote-endpoint = <&mipi_csi2_in>;
+				clock-lanes = <0>;
+				data-lanes = <1 2>;
+			};
+		};
+	};
+
 	pmic: pfuze100@08 {
 		compatible = "fsl,pfuze100";
 		reg = <0x08>;
@@ -426,6 +508,36 @@
 			>;
 		};
 
+		pinctrl_ipu1_csi0: ipu1csi0grp {
+			fsl,pins = <
+				MX6QDL_PAD_CSI0_DAT12__IPU1_CSI0_DATA12    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT13__IPU1_CSI0_DATA13    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT14__IPU1_CSI0_DATA14    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT15__IPU1_CSI0_DATA15    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT16__IPU1_CSI0_DATA16    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT17__IPU1_CSI0_DATA17    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT18__IPU1_CSI0_DATA18    0x1b0b0
+				MX6QDL_PAD_CSI0_DAT19__IPU1_CSI0_DATA19    0x1b0b0
+				MX6QDL_PAD_CSI0_PIXCLK__IPU1_CSI0_PIXCLK   0x1b0b0
+				MX6QDL_PAD_CSI0_MCLK__IPU1_CSI0_HSYNC      0x1b0b0
+				MX6QDL_PAD_CSI0_VSYNC__IPU1_CSI0_VSYNC     0x1b0b0
+			>;
+		};
+
+		pinctrl_ov5640: ov5640grp {
+			fsl,pins = <
+				MX6QDL_PAD_SD1_DAT2__GPIO1_IO19 0x1b0b0
+				MX6QDL_PAD_SD1_CLK__GPIO1_IO20  0x1b0b0
+			>;
+		};
+
+		pinctrl_ov5642: ov5642grp {
+			fsl,pins = <
+				MX6QDL_PAD_SD1_DAT0__GPIO1_IO16 0x1b0b0
+				MX6QDL_PAD_SD1_DAT1__GPIO1_IO17 0x1b0b0
+			>;
+		};
+
 		pinctrl_pcie: pciegrp {
 			fsl,pins = <
 				MX6QDL_PAD_GPIO_17__GPIO7_IO12	0x1b0b0
-- 
2.7.4


* [PATCH v4 08/36] ARM: dts: imx6-sabreauto: create i2cmux for i2c3
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (6 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 07/36] ARM: dts: imx6-sabresd: " Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 09/36] ARM: dts: imx6-sabreauto: add reset-gpios property for max7310_b Steve Longerbeam
                   ` (29 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

The sabreauto uses a steering pin to select between the SDA signal on
the i2c3 bus and a data-in pin for an SPI NOR chip. Use i2cmux to control
this steering pin. The idle state of the i2cmux selects SPI NOR. This is
not a classic way to use i2cmux, since one side of the mux selects
something other than an i2c bus, but it works and is probably the cleanest
solution. Note that if one thread is attempting to access SPI NOR while
another thread is accessing i2c3, the SPI NOR access will fail since the
i2cmux has selected the SDA pin rather than SPI NOR data-in. This couldn't
be avoided in any case; the board is not designed to allow concurrent
i2c3 and SPI NOR functions (and the default device-tree does not enable
SPI NOR anyway).

Devices hanging off i2c3 should now be defined under i2cmux, so
that the steering pin can be properly controlled to access those
devices. The port expanders (MAX7310) are thus moved into i2cmux.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6qdl-sabreauto.dtsi | 65 +++++++++++++++++++++-----------
 1 file changed, 44 insertions(+), 21 deletions(-)

diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
index 52390ba..cace88c 100644
--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
@@ -108,6 +108,44 @@
 		default-brightness-level = <7>;
 		status = "okay";
 	};
+
+	i2cmux {
+		compatible = "i2c-mux-gpio";
+		#address-cells = <1>;
+		#size-cells = <0>;
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_i2c3mux>;
+		mux-gpios = <&gpio5 4 0>;
+		i2c-parent = <&i2c3>;
+		idle-state = <0>;
+
+		i2c@1 {
+			#address-cells = <1>;
+			#size-cells = <0>;
+			reg = <1>;
+
+			max7310_a: gpio@30 {
+				compatible = "maxim,max7310";
+				reg = <0x30>;
+				gpio-controller;
+				#gpio-cells = <2>;
+			};
+
+			max7310_b: gpio@32 {
+				compatible = "maxim,max7310";
+				reg = <0x32>;
+				gpio-controller;
+				#gpio-cells = <2>;
+			};
+
+			max7310_c: gpio@34 {
+				compatible = "maxim,max7310";
+				reg = <0x34>;
+				gpio-controller;
+				#gpio-cells = <2>;
+			};
+		};
+	};
 };
 
 &clks {
@@ -291,27 +329,6 @@
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c3>;
 	status = "okay";
-
-	max7310_a: gpio@30 {
-		compatible = "maxim,max7310";
-		reg = <0x30>;
-		gpio-controller;
-		#gpio-cells = <2>;
-	};
-
-	max7310_b: gpio@32 {
-		compatible = "maxim,max7310";
-		reg = <0x32>;
-		gpio-controller;
-		#gpio-cells = <2>;
-	};
-
-	max7310_c: gpio@34 {
-		compatible = "maxim,max7310";
-		reg = <0x34>;
-		gpio-controller;
-		#gpio-cells = <2>;
-	};
 };
 
 &iomuxc {
@@ -419,6 +436,12 @@
 			>;
 		};
 
+		pinctrl_i2c3mux: i2c3muxgrp {
+			fsl,pins = <
+				MX6QDL_PAD_EIM_A24__GPIO5_IO04 0x0b0b1
+			>;
+		};
+
 		pinctrl_pwm3: pwm1grp {
 			fsl,pins = <
 				MX6QDL_PAD_SD4_DAT1__PWM3_OUT		0x1b0b1
-- 
2.7.4


* [PATCH v4 09/36] ARM: dts: imx6-sabreauto: add reset-gpios property for max7310_b
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (7 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 08/36] ARM: dts: imx6-sabreauto: create i2cmux for i2c3 Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 10/36] ARM: dts: imx6-sabreauto: add pinctrl for gpt input capture Steve Longerbeam
                   ` (28 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

The reset pin of the port expander chip (MAX7310) is controlled by a gpio,
so define a reset-gpios property to control it. There are three MAX7310s
on the SabreAuto CPU card (max7310_[abc]), but all use the same pin for
their reset. Since they can't all acquire the same pin, assign it to
max7310_b, as that chip is needed by more functions (USB and ADV7180).

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6qdl-sabreauto.dtsi | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
index cace88c..967c3b8 100644
--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
@@ -136,6 +136,9 @@
 				reg = <0x32>;
 				gpio-controller;
 				#gpio-cells = <2>;
+				pinctrl-names = "default";
+				pinctrl-0 = <&pinctrl_max7310>;
+				reset-gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
 			};
 
 			max7310_c: gpio@34 {
@@ -442,6 +445,12 @@
 			>;
 		};
 
+		pinctrl_max7310: max7310grp {
+			fsl,pins = <
+				MX6QDL_PAD_SD2_DAT0__GPIO1_IO15 0x1b0b0
+			>;
+		};
+
 		pinctrl_pwm3: pwm1grp {
 			fsl,pins = <
 				MX6QDL_PAD_SD4_DAT1__PWM3_OUT		0x1b0b1
-- 
2.7.4


* [PATCH v4 10/36] ARM: dts: imx6-sabreauto: add pinctrl for gpt input capture
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (8 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 09/36] ARM: dts: imx6-sabreauto: add reset-gpios property for max7310_b Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 11/36] ARM: dts: imx6-sabreauto: add the ADV7180 video decoder Steve Longerbeam
                   ` (27 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add pinctrl groups for both GPT input capture channels.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6qdl-sabreauto.dtsi | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
index 967c3b8..495709f 100644
--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
@@ -457,6 +457,18 @@
 			>;
 		};
 
+		pinctrl_gpt_input_capture0: gptinputcapture0grp {
+			fsl,pins = <
+				MX6QDL_PAD_SD1_DAT0__GPT_CAPTURE1	0x1b0b0
+			>;
+		};
+
+		pinctrl_gpt_input_capture1: gptinputcapture1grp {
+			fsl,pins = <
+				MX6QDL_PAD_SD1_DAT1__GPT_CAPTURE2	0x1b0b0
+			>;
+		};
+
 		pinctrl_spdif: spdifgrp {
 			fsl,pins = <
 				MX6QDL_PAD_KEY_COL3__SPDIF_IN 0x1b0b0
-- 
2.7.4


* [PATCH v4 11/36] ARM: dts: imx6-sabreauto: add the ADV7180 video decoder
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (9 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 10/36] ARM: dts: imx6-sabreauto: add pinctrl for gpt input capture Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 12/36] add mux and video interface bridge entity functions Steve Longerbeam
                   ` (26 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Enables the ADV7180 video decoder. The ADV7180 connects to the
parallel-bus mux input on ipu1_csi0_mux.

The ADV7180 power-down pin is controlled via the max7310_b port expander.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/boot/dts/imx6qdl-sabreauto.dtsi | 58 ++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
index 495709f..f03057b 100644
--- a/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
@@ -124,6 +124,21 @@
 			#size-cells = <0>;
 			reg = <1>;
 
+			adv7180: camera@21 {
+				compatible = "adi,adv7180";
+				reg = <0x21>;
+				powerdown-gpios = <&max7310_b 2 GPIO_ACTIVE_LOW>;
+				interrupt-parent = <&gpio1>;
+				interrupts = <27 0x8>;
+
+				port {
+					adv7180_to_ipu1_csi0_mux: endpoint {
+						remote-endpoint = <&ipu1_csi0_mux_from_parallel_sensor>;
+						bus-width = <8>;
+					};
+				};
+			};
+
 			max7310_a: gpio@30 {
 				compatible = "maxim,max7310";
 				reg = <0x30>;
@@ -151,6 +166,25 @@
 	};
 };
 
+&ipu1_csi0_from_ipu1_csi0_mux {
+	bus-width = <8>;
+};
+
+&ipu1_csi0_mux_from_parallel_sensor {
+	remote-endpoint = <&adv7180_to_ipu1_csi0_mux>;
+	bus-width = <8>;
+};
+
+&ipu1_csi0 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_ipu1_csi0>;
+
+	/* enable frame interval monitor on this port */
+	fim {
+		status = "okay";
+	};
+};
+
 &clks {
 	assigned-clocks = <&clks IMX6QDL_PLL4_BYPASS_SRC>,
 			  <&clks IMX6QDL_PLL4_BYPASS>,
@@ -445,6 +479,30 @@
 			>;
 		};
 
+		pinctrl_ipu1_csi0: ipu1csi0grp {
+			fsl,pins = <
+				MX6QDL_PAD_CSI0_DAT4__IPU1_CSI0_DATA04   0x1b0b0
+				MX6QDL_PAD_CSI0_DAT5__IPU1_CSI0_DATA05   0x1b0b0
+				MX6QDL_PAD_CSI0_DAT6__IPU1_CSI0_DATA06   0x1b0b0
+				MX6QDL_PAD_CSI0_DAT7__IPU1_CSI0_DATA07   0x1b0b0
+				MX6QDL_PAD_CSI0_DAT8__IPU1_CSI0_DATA08   0x1b0b0
+				MX6QDL_PAD_CSI0_DAT9__IPU1_CSI0_DATA09   0x1b0b0
+				MX6QDL_PAD_CSI0_DAT10__IPU1_CSI0_DATA10  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT11__IPU1_CSI0_DATA11  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT12__IPU1_CSI0_DATA12  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT13__IPU1_CSI0_DATA13  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT14__IPU1_CSI0_DATA14  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT15__IPU1_CSI0_DATA15  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT16__IPU1_CSI0_DATA16  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT17__IPU1_CSI0_DATA17  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT18__IPU1_CSI0_DATA18  0x1b0b0
+				MX6QDL_PAD_CSI0_DAT19__IPU1_CSI0_DATA19  0x1b0b0
+				MX6QDL_PAD_CSI0_PIXCLK__IPU1_CSI0_PIXCLK 0x1b0b0
+				MX6QDL_PAD_CSI0_MCLK__IPU1_CSI0_HSYNC    0x1b0b0
+				MX6QDL_PAD_CSI0_VSYNC__IPU1_CSI0_VSYNC   0x1b0b0
+			>;
+		};
+
 		pinctrl_max7310: max7310grp {
 			fsl,pins = <
 				MX6QDL_PAD_SD2_DAT0__GPIO1_IO15 0x1b0b0
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 12/36] add mux and video interface bridge entity functions
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (10 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 11/36] ARM: dts: imx6-sabreauto: add the ADV7180 video decoder Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-19 21:28   ` Pavel Machek
  2017-02-16  2:19 ` [PATCH v4 13/36] [media] v4l2: add a frame timeout event Steve Longerbeam
                   ` (25 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

From: Philipp Zabel <p.zabel@pengutronix.de>

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>

- renamed MEDIA_ENT_F_MUX to MEDIA_ENT_F_VID_MUX

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 Documentation/media/uapi/mediactl/media-types.rst | 22 ++++++++++++++++++++++
 include/uapi/linux/media.h                        |  6 ++++++
 2 files changed, 28 insertions(+)

diff --git a/Documentation/media/uapi/mediactl/media-types.rst b/Documentation/media/uapi/mediactl/media-types.rst
index 3e03dc2..023be29 100644
--- a/Documentation/media/uapi/mediactl/media-types.rst
+++ b/Documentation/media/uapi/mediactl/media-types.rst
@@ -298,6 +298,28 @@ Types and flags used to represent the media graph elements
 	  received on its sink pad and outputs the statistics data on
 	  its source pad.
 
+    -  ..  row 29
+
+       ..  _MEDIA-ENT-F-VID-MUX:
+
+       -  ``MEDIA_ENT_F_VID_MUX``
+
+       - Video multiplexer. An entity capable of multiplexing must have at
+         least two sink pads and one source pad, and must pass the video
+         frame(s) received from the active sink pad to the source pad. Video
+         frame(s) from the inactive sink pads are discarded.
+
+    -  ..  row 30
+
+       ..  _MEDIA-ENT-F-VID-IF-BRIDGE:
+
+       -  ``MEDIA_ENT_F_VID_IF_BRIDGE``
+
+       - Video interface bridge. A video interface bridge entity must have at
+         least one sink pad and one source pad. It receives video frame(s) on
+         its sink pad in one bus format (HDMI, eDP, MIPI CSI-2, ...) and
+         converts them and outputs them on its source pad in another bus format
+         (eDP, MIPI CSI-2, parallel, ...).
 
 ..  tabularcolumns:: |p{5.5cm}|p{12.0cm}|
 
diff --git a/include/uapi/linux/media.h b/include/uapi/linux/media.h
index 4890787..fac96c6 100644
--- a/include/uapi/linux/media.h
+++ b/include/uapi/linux/media.h
@@ -105,6 +105,12 @@ struct media_device_info {
 #define MEDIA_ENT_F_PROC_VIDEO_STATISTICS	(MEDIA_ENT_F_BASE + 0x4006)
 
 /*
+ * Switch and bridge entities
+ */
+#define MEDIA_ENT_F_VID_MUX			(MEDIA_ENT_F_BASE + 0x5001)
+#define MEDIA_ENT_F_VID_IF_BRIDGE		(MEDIA_ENT_F_BASE + 0x5002)
+
+/*
  * Connectors
  */
 /* It is a responsibility of the entity drivers to add connectors and links */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (11 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 12/36] add mux and video interface bridge entity functions Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-03-02 15:53   ` Sakari Ailus
  2017-02-16  2:19 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Steve Longerbeam
                   ` (24 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add a new FRAME_TIMEOUT event to signal that a video capture or
output device has timed out waiting for reception or transmit
completion of a video frame.
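
As a rough illustration (not part of this patch), userspace could consume
the new event along these lines, using only the existing
VIDIOC_SUBSCRIBE_EVENT and VIDIOC_DQEVENT ioctls; "fd" is assumed to be an
already-open capture video node:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Block until the driver signals a frame timeout on this video node. */
static int wait_for_frame_timeout(int fd)
{
	struct v4l2_event_subscription sub;
	struct v4l2_event ev;

	memset(&sub, 0, sizeof(sub));
	sub.type = V4L2_EVENT_FRAME_TIMEOUT;
	if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
		return -1;

	/* blocks until an event is queued (fd opened without O_NONBLOCK) */
	if (ioctl(fd, VIDIOC_DQEVENT, &ev) < 0)
		return -1;

	/* ev.type == V4L2_EVENT_FRAME_TIMEOUT: restart streaming here */
	return 0;
}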

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
 Documentation/media/videodev2.h.rst.exceptions  | 1 +
 include/uapi/linux/videodev2.h                  | 1 +
 3 files changed, 7 insertions(+)

diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
index 8d663a7..dd77d9b 100644
--- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
+++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
@@ -197,6 +197,11 @@ call.
 	the regions changes. This event has a struct
 	:c:type:`v4l2_event_motion_det`
 	associated with it.
+    * - ``V4L2_EVENT_FRAME_TIMEOUT``
+      - 7
+      - This event is triggered when the video capture or output device
+	has timed out waiting for the reception or transmit completion of
+	a frame of video.
     * - ``V4L2_EVENT_PRIVATE_START``
       - 0x08000000
       - Base event number for driver-private events.
diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
index e11a0d0..5b0f767 100644
--- a/Documentation/media/videodev2.h.rst.exceptions
+++ b/Documentation/media/videodev2.h.rst.exceptions
@@ -459,6 +459,7 @@ replace define V4L2_EVENT_CTRL event-type
 replace define V4L2_EVENT_FRAME_SYNC event-type
 replace define V4L2_EVENT_SOURCE_CHANGE event-type
 replace define V4L2_EVENT_MOTION_DET event-type
+replace define V4L2_EVENT_FRAME_TIMEOUT event-type
 replace define V4L2_EVENT_PRIVATE_START event-type
 
 replace define V4L2_EVENT_CTRL_CH_VALUE ctrl-changes-flags
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 46e8a2e3..e174c45 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -2132,6 +2132,7 @@ struct v4l2_streamparm {
 #define V4L2_EVENT_FRAME_SYNC			4
 #define V4L2_EVENT_SOURCE_CHANGE		5
 #define V4L2_EVENT_MOTION_DET			6
+#define V4L2_EVENT_FRAME_TIMEOUT		7
 #define V4L2_EVENT_PRIVATE_START		0x08000000
 
 /* Payload for V4L2_EVENT_VSYNC */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (12 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 13/36] [media] v4l2: add a frame timeout event Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-19 21:44   ` Pavel Machek
  2017-03-02 16:02   ` Sakari Ailus
  2017-02-16  2:19 ` [PATCH v4 15/36] platform: add video-multiplexer subdevice driver Steve Longerbeam
                   ` (23 subsequent siblings)
  37 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

v4l2_pipeline_inherit_controls() will add the v4l2 controls from
all subdev entities in a pipeline to a given video device.
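
For illustration only (not part of this patch), a capture driver would
call the new helper once the pipeline links to its video node are
established; the structure and function names below are hypothetical:

#include <media/media-entity.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-mc.h>

/* Hypothetical capture driver private data; vfd.ctrl_handler is set up. */
struct my_capture_priv {
	struct video_device vfd;
};

/* Called after the capture node's pipeline links are established. */
static int my_capture_inherit_controls(struct my_capture_priv *priv,
					struct media_entity *src_entity)
{
	/*
	 * Walk the media graph from src_entity and add every connected
	 * subdev's controls to the capture video device.
	 */
	return v4l2_pipeline_inherit_controls(&priv->vfd, src_entity);
}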

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
 include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
 2 files changed, 73 insertions(+)

diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
index 303980b..09d4d97 100644
--- a/drivers/media/v4l2-core/v4l2-mc.c
+++ b/drivers/media/v4l2-core/v4l2-mc.c
@@ -22,6 +22,7 @@
 #include <linux/usb.h>
 #include <media/media-device.h>
 #include <media/media-entity.h>
+#include <media/v4l2-ctrls.h>
 #include <media/v4l2-fh.h>
 #include <media/v4l2-mc.h>
 #include <media/v4l2-subdev.h>
@@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
 }
 EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
 
+int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
+				     struct media_entity *start_entity)
+{
+	struct media_device *mdev = start_entity->graph_obj.mdev;
+	struct media_entity *entity;
+	struct media_graph graph;
+	struct v4l2_subdev *sd;
+	int ret;
+
+	ret = media_graph_walk_init(&graph, mdev);
+	if (ret)
+		return ret;
+
+	media_graph_walk_start(&graph, start_entity);
+
+	while ((entity = media_graph_walk_next(&graph))) {
+		if (!is_media_entity_v4l2_subdev(entity))
+			continue;
+
+		sd = media_entity_to_v4l2_subdev(entity);
+
+		ret = v4l2_ctrl_add_handler(vfd->ctrl_handler,
+					    sd->ctrl_handler,
+					    NULL);
+		if (ret)
+			break;
+	}
+
+	media_graph_walk_cleanup(&graph);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__v4l2_pipeline_inherit_controls);
+
+int v4l2_pipeline_inherit_controls(struct video_device *vfd,
+				   struct media_entity *start_entity)
+{
+	struct media_device *mdev = start_entity->graph_obj.mdev;
+	int ret;
+
+	mutex_lock(&mdev->graph_mutex);
+	ret = __v4l2_pipeline_inherit_controls(vfd, start_entity);
+	mutex_unlock(&mdev->graph_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(v4l2_pipeline_inherit_controls);
+
 /* -----------------------------------------------------------------------------
  * Pipeline power management
  *
diff --git a/include/media/v4l2-mc.h b/include/media/v4l2-mc.h
index 2634d9d..9848e77 100644
--- a/include/media/v4l2-mc.h
+++ b/include/media/v4l2-mc.h
@@ -171,6 +171,17 @@ void v4l_disable_media_source(struct video_device *vdev);
  */
 int v4l_vb2q_enable_media_source(struct vb2_queue *q);
 
+/**
+ * v4l2_pipeline_inherit_controls - Add the v4l2 controls from all
+ *				    subdev entities in a pipeline to
+ *				    the given video device.
+ * @vfd: the video device
+ * @start_entity: Starting entity
+ */
+int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
+				     struct media_entity *start_entity);
+int v4l2_pipeline_inherit_controls(struct video_device *vfd,
+				   struct media_entity *start_entity);
 
 /**
  * v4l2_pipeline_pm_use - Update the use count of an entity
@@ -231,6 +242,20 @@ static inline int v4l_vb2q_enable_media_source(struct vb2_queue *q)
 	return 0;
 }
 
+static inline int __v4l2_pipeline_inherit_controls(
+	struct video_device *vfd,
+	struct media_entity *start_entity)
+{
+	return 0;
+}
+
+static inline int v4l2_pipeline_inherit_controls(
+	struct video_device *vfd,
+	struct media_entity *start_entity)
+{
+	return 0;
+}
+
 static inline int v4l2_pipeline_pm_use(struct media_entity *entity, int use)
 {
 	return 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 15/36] platform: add video-multiplexer subdevice driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (13 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-19 22:02   ` Pavel Machek
  2017-02-27 14:41   ` Rob Herring
  2017-02-16  2:19 ` [PATCH v4 16/36] UAPI: Add media UAPI Kbuild file Steve Longerbeam
                   ` (22 subsequent siblings)
  37 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Sascha Hauer, Steve Longerbeam

From: Philipp Zabel <p.zabel@pengutronix.de>

This driver can handle SoC internal and external video bus multiplexers,
controlled either by register bit fields or by a GPIO. The subdevice
passes through frame interval and mbus configuration of the active input
to the output side.
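
For reference (not part of this patch), the active input is selected from
userspace by enabling the corresponding sink-pad link, typically with
media-ctl. A minimal raw-ioctl sketch, with placeholder entity IDs and pad
indexes, could look like this:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/*
 * Enable the link from a source entity/pad to one of the multiplexer's
 * sink pads; the driver's link_setup then switches the mux to that input
 * and disables the previously active one. All IDs here are placeholders.
 */
static int select_mux_input(int media_fd, __u32 src_entity, __u16 src_pad,
			    __u32 mux_entity, __u16 mux_sink_pad)
{
	struct media_link_desc link;

	memset(&link, 0, sizeof(link));
	link.source.entity = src_entity;
	link.source.index = src_pad;
	link.sink.entity = mux_entity;
	link.sink.index = mux_sink_pad;
	link.flags = MEDIA_LNK_FL_ENABLED;

	return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}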

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>

--

- fixed a cut&paste error in vidsw_remove(): v4l2_async_register_subdev()
  should be unregister.

- added media_entity_cleanup() and v4l2_device_unregister_subdev()
  to vidsw_remove().

- added missing MODULE_DEVICE_TABLE().
  Suggested-by: Javier Martinez Canillas <javier@dowhile0.org>

- removed a line left over from a previous iteration
  (num_pads = of_get_child_count(np)) that overrode the new way of
  determining the pad count just before it.

- Philipp Zabel has developed a set of patches that allow adding
  to the subdev async notifier waiting list using a chaining method
  from the async registered callbacks (v4l2_of_subdev_registered()
  and the prep patches for that). For now, I've removed the use of
  v4l2_of_subdev_registered() for the vidmux driver's registered
  callback. This doesn't affect the functionality of this driver,
  but allows for it to be merged now, before adding the chaining
  support.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 .../bindings/media/video-multiplexer.txt           |  59 +++
 drivers/media/platform/Kconfig                     |   8 +
 drivers/media/platform/Makefile                    |   2 +
 drivers/media/platform/video-multiplexer.c         | 474 +++++++++++++++++++++
 4 files changed, 543 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/media/video-multiplexer.txt
 create mode 100644 drivers/media/platform/video-multiplexer.c

diff --git a/Documentation/devicetree/bindings/media/video-multiplexer.txt b/Documentation/devicetree/bindings/media/video-multiplexer.txt
new file mode 100644
index 0000000..9d133d9
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/video-multiplexer.txt
@@ -0,0 +1,59 @@
+Video Multiplexer
+=================
+
+Video multiplexers allow selecting between multiple input ports. Video received
+on the active input port is passed through to the output port. Muxes described
+by this binding may be controlled by a syscon register bitfield or by a GPIO.
+
+Required properties:
+- compatible : should be "video-multiplexer"
+- reg: should be register base of the register containing the control bitfield
+- bit-mask: bitmask of the control bitfield in the control register
+- bit-shift: bit offset of the control bitfield in the control register
+- gpios: alternatively to reg, bit-mask, and bit-shift, a single GPIO phandle
+  may be given to switch between two inputs
+- #address-cells: should be <1>
+- #size-cells: should be <0>
+- port@*: at least three port nodes containing endpoints connecting to the
+  source and sink devices according to of_graph bindings. The last port is
+  the output port, all others are inputs.
+
+Example:
+
+syscon {
+	compatible = "syscon", "simple-mfd";
+
+	mux {
+		compatible = "video-multiplexer";
+		/* Single bit (1 << 19) in syscon register 0x04: */
+		reg = <0x04>;
+		bit-mask = <1>;
+		bit-shift = <19>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
+			reg = <0>;
+
+			mux_in0: endpoint {
+				remote-endpoint = <&video_source0_out>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			mux_in1: endpoint {
+				remote-endpoint = <&video_source1_out>;
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			mux_out: endpoint {
+				remote-endpoint = <&capture_interface_in>;
+			};
+		};
+	};
+};
diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index c9106e1..3d60d4c 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -74,6 +74,14 @@ config VIDEO_M32R_AR_M64278
 	  To compile this driver as a module, choose M here: the
 	  module will be called arv.
 
+config VIDEO_MULTIPLEXER
+	tristate "Video Multiplexer"
+	depends on VIDEO_V4L2_SUBDEV_API && MEDIA_CONTROLLER
+	help
+	  This driver provides support for SoC internal N:1 video bus
+	  multiplexers controlled by register bitfields as well as external
+	  2:1 video multiplexers controlled by a single GPIO.
+
 config VIDEO_OMAP3
 	tristate "OMAP 3 Camera support"
 	depends on VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API && ARCH_OMAP3
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 349ddf6..31bfa99 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -27,6 +27,8 @@ obj-$(CONFIG_VIDEO_SH_VEU)		+= sh_veu.o
 
 obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE)	+= m2m-deinterlace.o
 
+obj-$(CONFIG_VIDEO_MULTIPLEXER)		+= video-multiplexer.o
+
 obj-$(CONFIG_VIDEO_S3C_CAMIF) 		+= s3c-camif/
 obj-$(CONFIG_VIDEO_SAMSUNG_EXYNOS4_IS) 	+= exynos4-is/
 obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG)	+= s5p-jpeg/
diff --git a/drivers/media/platform/video-multiplexer.c b/drivers/media/platform/video-multiplexer.c
new file mode 100644
index 0000000..6cc2821
--- /dev/null
+++ b/drivers/media/platform/video-multiplexer.c
@@ -0,0 +1,474 @@
+/*
+ * video stream multiplexer controlled via gpio or syscon
+ *
+ * Copyright (C) 2013 Pengutronix, Sascha Hauer <kernel@pengutronix.de>
+ * Copyright (C) 2016 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/err.h>
+#include <linux/gpio/consumer.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <media/v4l2-async.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-subdev.h>
+#include <media/v4l2-of.h>
+
+struct vidsw {
+	struct v4l2_subdev subdev;
+	unsigned int num_pads;
+	struct media_pad *pads;
+	struct v4l2_mbus_framefmt *format_mbus;
+	struct v4l2_fract timeperframe;
+	struct v4l2_of_endpoint *endpoint;
+	struct regmap_field *field;
+	struct gpio_desc *gpio;
+	int active;
+};
+
+static inline struct vidsw *v4l2_subdev_to_vidsw(struct v4l2_subdev *sd)
+{
+	return container_of(sd, struct vidsw, subdev);
+}
+
+static void vidsw_set_active(struct vidsw *vidsw, int active)
+{
+	vidsw->active = active;
+	if (active < 0)
+		return;
+
+	dev_dbg(vidsw->subdev.dev, "setting %d active\n", active);
+
+	if (vidsw->field)
+		regmap_field_write(vidsw->field, active);
+	else if (vidsw->gpio)
+		gpiod_set_value(vidsw->gpio, active);
+}
+
+static int vidsw_link_setup(struct media_entity *entity,
+			    const struct media_pad *local,
+			    const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	/* We have no limitations on enabling or disabling our output link */
+	if (local->index == vidsw->num_pads - 1)
+		return 0;
+
+	dev_dbg(sd->dev, "link setup %s -> %s", remote->entity->name,
+		local->entity->name);
+
+	if (!(flags & MEDIA_LNK_FL_ENABLED)) {
+		if (local->index == vidsw->active) {
+			dev_dbg(sd->dev, "going inactive\n");
+			vidsw->active = -1;
+		}
+		return 0;
+	}
+
+	if (vidsw->active >= 0) {
+		struct media_pad *pad;
+
+		if (vidsw->active == local->index)
+			return 0;
+
+		pad = media_entity_remote_pad(&vidsw->pads[vidsw->active]);
+		if (pad) {
+			struct media_link *link;
+			int ret;
+
+			link = media_entity_find_link(pad,
+						&vidsw->pads[vidsw->active]);
+			if (link) {
+				ret = __media_entity_setup_link(link, 0);
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	vidsw_set_active(vidsw, local->index);
+
+	return 0;
+}
+
+static struct media_entity_operations vidsw_ops = {
+	.link_setup = vidsw_link_setup,
+};
+
+static bool vidsw_endpoint_disabled(struct device_node *ep)
+{
+	struct device_node *rpp;
+
+	if (!of_device_is_available(ep))
+		return true;
+
+	rpp = of_graph_get_remote_port_parent(ep);
+	if (!rpp)
+		return true;
+
+	return !of_device_is_available(rpp);
+}
+
+static int vidsw_async_init(struct vidsw *vidsw, struct device_node *node)
+{
+	struct device_node *ep;
+	u32 portno;
+	int numports;
+	int ret;
+	int i;
+	bool active_link = false;
+
+	numports = vidsw->num_pads;
+
+	for (i = 0; i < numports - 1; i++)
+		vidsw->pads[i].flags = MEDIA_PAD_FL_SINK;
+	vidsw->pads[numports - 1].flags = MEDIA_PAD_FL_SOURCE;
+
+	vidsw->subdev.entity.function = MEDIA_ENT_F_VID_MUX;
+	ret = media_entity_pads_init(&vidsw->subdev.entity, numports,
+				     vidsw->pads);
+	if (ret < 0)
+		return ret;
+
+	vidsw->subdev.entity.ops = &vidsw_ops;
+
+	for_each_endpoint_of_node(node, ep) {
+		struct v4l2_of_endpoint endpoint;
+
+		v4l2_of_parse_endpoint(ep, &endpoint);
+
+		portno = endpoint.base.port;
+		if (portno >= numports - 1)
+			continue;
+
+		if (vidsw_endpoint_disabled(ep)) {
+			dev_dbg(vidsw->subdev.dev,
+				"port %d disabled\n", portno);
+			continue;
+		}
+
+		vidsw->endpoint[portno] = endpoint;
+
+		if (portno == vidsw->active)
+			active_link = true;
+	}
+
+	for (portno = 0; portno < numports - 1; portno++) {
+		if (!vidsw->endpoint[portno].base.local_node)
+			continue;
+
+		/* If the active input is not connected, use another */
+		if (!active_link) {
+			vidsw_set_active(vidsw, portno);
+			active_link = true;
+		}
+	}
+
+	return v4l2_async_register_subdev(&vidsw->subdev);
+}
+
+int vidsw_g_mbus_config(struct v4l2_subdev *sd, struct v4l2_mbus_config *cfg)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+	struct media_pad *pad;
+	int ret;
+
+	if (vidsw->active == -1) {
+		dev_err(sd->dev, "no configuration for inactive mux\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Retrieve media bus configuration from the entity connected to the
+	 * active input
+	 */
+	pad = media_entity_remote_pad(&vidsw->pads[vidsw->active]);
+	if (pad) {
+		sd = media_entity_to_v4l2_subdev(pad->entity);
+		ret = v4l2_subdev_call(sd, video, g_mbus_config, cfg);
+		if (ret == -ENOIOCTLCMD)
+			pad = NULL;
+		else if (ret < 0) {
+			dev_err(sd->dev, "failed to get source configuration\n");
+			return ret;
+		}
+	}
+	if (!pad) {
+		/* Mirror the input side on the output side */
+		cfg->type = vidsw->endpoint[vidsw->active].bus_type;
+		if (cfg->type == V4L2_MBUS_PARALLEL ||
+		    cfg->type == V4L2_MBUS_BT656)
+			cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
+	}
+
+	return 0;
+}
+
+static int vidsw_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+	struct v4l2_subdev *upstream_sd;
+	struct media_pad *pad;
+
+	if (vidsw->active == -1) {
+		dev_err(sd->dev, "Can not start streaming on inactive mux\n");
+		return -EINVAL;
+	}
+
+	pad = media_entity_remote_pad(&sd->entity.pads[vidsw->active]);
+	if (!pad) {
+		dev_err(sd->dev, "Failed to find remote source pad\n");
+		return -ENOLINK;
+	}
+
+	if (!is_media_entity_v4l2_subdev(pad->entity)) {
+		dev_err(sd->dev, "Upstream entity is not a v4l2 subdev\n");
+		return -ENODEV;
+	}
+
+	upstream_sd = media_entity_to_v4l2_subdev(pad->entity);
+
+	return v4l2_subdev_call(upstream_sd, video, s_stream, enable);
+}
+
+static int vidsw_g_frame_interval(struct v4l2_subdev *sd,
+				  struct v4l2_subdev_frame_interval *fi)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	fi->interval = vidsw->timeperframe;
+
+	return 0;
+}
+
+static int vidsw_s_frame_interval(struct v4l2_subdev *sd,
+				  struct v4l2_subdev_frame_interval *fi)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	vidsw->timeperframe = fi->interval;
+
+	return 0;
+}
+
+static const struct v4l2_subdev_video_ops vidsw_subdev_video_ops = {
+	.g_mbus_config = vidsw_g_mbus_config,
+	.s_stream = vidsw_s_stream,
+	.g_frame_interval = vidsw_g_frame_interval,
+	.s_frame_interval = vidsw_s_frame_interval,
+};
+
+static struct v4l2_mbus_framefmt *
+__vidsw_get_pad_format(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       unsigned int pad, u32 which)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	switch (which) {
+	case V4L2_SUBDEV_FORMAT_TRY:
+		return v4l2_subdev_get_try_format(sd, cfg, pad);
+	case V4L2_SUBDEV_FORMAT_ACTIVE:
+		return &vidsw->format_mbus[pad];
+	default:
+		return NULL;
+	}
+}
+
+static int vidsw_get_format(struct v4l2_subdev *sd,
+			    struct v4l2_subdev_pad_config *cfg,
+			    struct v4l2_subdev_format *sdformat)
+{
+	sdformat->format = *__vidsw_get_pad_format(sd, cfg, sdformat->pad,
+						   sdformat->which);
+	return 0;
+}
+
+static int vidsw_set_format(struct v4l2_subdev *sd,
+			    struct v4l2_subdev_pad_config *cfg,
+			    struct v4l2_subdev_format *sdformat)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+	struct v4l2_mbus_framefmt *mbusformat;
+
+	if (sdformat->pad >= vidsw->num_pads)
+		return -EINVAL;
+
+	mbusformat = __vidsw_get_pad_format(sd, cfg, sdformat->pad,
+					    sdformat->which);
+	if (!mbusformat)
+		return -EINVAL;
+
+	/* Output pad mirrors active input pad, no limitations on input pads */
+	if (sdformat->pad == (vidsw->num_pads - 1) && vidsw->active >= 0)
+		sdformat->format = vidsw->format_mbus[vidsw->active];
+
+	*mbusformat = sdformat->format;
+
+	return 0;
+}
+
+static struct v4l2_subdev_pad_ops vidsw_pad_ops = {
+	.get_fmt = vidsw_get_format,
+	.set_fmt = vidsw_set_format,
+};
+
+static struct v4l2_subdev_ops vidsw_subdev_ops = {
+	.pad = &vidsw_pad_ops,
+	.video = &vidsw_subdev_video_ops,
+};
+
+static int of_get_reg_field(struct device_node *node, struct reg_field *field)
+{
+	u32 bit_mask;
+	int ret;
+
+	ret = of_property_read_u32(node, "reg", &field->reg);
+	if (ret < 0)
+		return ret;
+
+	ret = of_property_read_u32(node, "bit-mask", &bit_mask);
+	if (ret < 0)
+		return ret;
+
+	ret = of_property_read_u32(node, "bit-shift", &field->lsb);
+	if (ret < 0)
+		return ret;
+
+	field->msb = field->lsb + fls(bit_mask) - 1;
+
+	return 0;
+}
+
+static int vidsw_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct of_endpoint endpoint;
+	struct device_node *ep;
+	struct reg_field field;
+	struct vidsw *vidsw;
+	struct regmap *map;
+	unsigned int num_pads;
+	int ret;
+
+	vidsw = devm_kzalloc(&pdev->dev, sizeof(*vidsw), GFP_KERNEL);
+	if (!vidsw)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, vidsw);
+
+	v4l2_subdev_init(&vidsw->subdev, &vidsw_subdev_ops);
+	snprintf(vidsw->subdev.name, sizeof(vidsw->subdev.name), "%s",
+			np->name);
+	vidsw->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+	vidsw->subdev.dev = &pdev->dev;
+
+	/*
+	 * The largest numbered port is the output port. It determines
+	 * total number of pads
+	 */
+	num_pads = 0;
+	for_each_endpoint_of_node(np, ep) {
+		of_graph_parse_endpoint(ep, &endpoint);
+		num_pads = max(num_pads, endpoint.port + 1);
+	}
+
+	if (num_pads < 2) {
+		dev_err(&pdev->dev, "Not enough ports %d\n", num_pads);
+		return -EINVAL;
+	}
+
+	ret = of_get_reg_field(np, &field);
+	if (ret == 0) {
+		map = syscon_node_to_regmap(np->parent);
+		if (IS_ERR(map)) {
+			dev_err(&pdev->dev, "Failed to get syscon register map\n");
+			return PTR_ERR(map);
+		}
+
+		vidsw->field = devm_regmap_field_alloc(&pdev->dev, map, field);
+		if (IS_ERR(vidsw->field)) {
+			dev_err(&pdev->dev, "Failed to allocate regmap field\n");
+			return PTR_ERR(vidsw->field);
+		}
+
+		regmap_field_read(vidsw->field, &vidsw->active);
+	} else {
+		if (num_pads > 3) {
+			dev_err(&pdev->dev, "Too many ports %d\n", num_pads);
+			return -EINVAL;
+		}
+
+		vidsw->gpio = devm_gpiod_get(&pdev->dev, NULL, GPIOD_OUT_LOW);
+		if (IS_ERR(vidsw->gpio)) {
+			dev_warn(&pdev->dev,
+				 "could not request control gpio: %d\n", ret);
+			vidsw->gpio = NULL;
+		}
+
+		vidsw->active = gpiod_get_value(vidsw->gpio) ? 1 : 0;
+	}
+
+	vidsw->num_pads = num_pads;
+	vidsw->pads = devm_kzalloc(&pdev->dev, sizeof(*vidsw->pads) * num_pads,
+			GFP_KERNEL);
+	vidsw->format_mbus = devm_kzalloc(&pdev->dev,
+			sizeof(*vidsw->format_mbus) * num_pads, GFP_KERNEL);
+	vidsw->endpoint = devm_kzalloc(&pdev->dev,
+			sizeof(*vidsw->endpoint) * (num_pads - 1), GFP_KERNEL);
+
+	ret = vidsw_async_init(vidsw, np);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int vidsw_remove(struct platform_device *pdev)
+{
+	struct vidsw *vidsw = platform_get_drvdata(pdev);
+	struct v4l2_subdev *sd = &vidsw->subdev;
+
+	v4l2_async_unregister_subdev(sd);
+	media_entity_cleanup(&sd->entity);
+	v4l2_device_unregister_subdev(sd);
+
+	return 0;
+}
+
+static const struct of_device_id vidsw_dt_ids[] = {
+	{ .compatible = "video-multiplexer", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, vidsw_dt_ids);
+
+static struct platform_driver vidsw_driver = {
+	.probe		= vidsw_probe,
+	.remove		= vidsw_remove,
+	.driver		= {
+		.of_match_table = vidsw_dt_ids,
+		.name = "video-multiplexer",
+	},
+};
+
+module_platform_driver(vidsw_driver);
+
+MODULE_DESCRIPTION("video stream multiplexer");
+MODULE_AUTHOR("Sascha Hauer, Pengutronix");
+MODULE_AUTHOR("Philipp Zabel, Pengutronix");
+MODULE_LICENSE("GPL");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 16/36] UAPI: Add media UAPI Kbuild file
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (14 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 15/36] platform: add video-multiplexer subdevice driver Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 17/36] media: Add userspace header file for i.MX Steve Longerbeam
                   ` (21 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add an empty UAPI Kbuild file for media UAPI headers.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 include/uapi/Kbuild       | 1 +
 include/uapi/media/Kbuild | 1 +
 2 files changed, 2 insertions(+)
 create mode 100644 include/uapi/media/Kbuild

diff --git a/include/uapi/Kbuild b/include/uapi/Kbuild
index 245aa6e..9a51957 100644
--- a/include/uapi/Kbuild
+++ b/include/uapi/Kbuild
@@ -6,6 +6,7 @@
 header-y += asm-generic/
 header-y += linux/
 header-y += sound/
+header-y += media/
 header-y += mtd/
 header-y += rdma/
 header-y += video/
diff --git a/include/uapi/media/Kbuild b/include/uapi/media/Kbuild
new file mode 100644
index 0000000..aafaa5a
--- /dev/null
+++ b/include/uapi/media/Kbuild
@@ -0,0 +1 @@
+# UAPI Header export list
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 17/36] media: Add userspace header file for i.MX
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (15 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 16/36] UAPI: Add media UAPI Kbuild file Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 11:33   ` Philipp Zabel
  2017-02-16  2:19 ` [PATCH v4 18/36] media: Add i.MX media core driver Steve Longerbeam
                   ` (20 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

This adds a header file for use by userspace programs wanting to interact
with the i.MX media driver. It defines custom v4l2 controls and events
generated by the i.MX v4l2 subdevices.
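
As a rough usage sketch (not part of this patch), a userspace program
could enable the frame interval monitor and subscribe to its event on a
CSI subdev node roughly as follows; the device path is a placeholder and
the header is assumed to be exported as <media/imx.h>:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <media/imx.h>

/* Enable the FIM and subscribe to its event on an ipuX_csiY subdev node. */
static int enable_fim(const char *csi_subdev_path)
{
	struct v4l2_control ctrl;
	struct v4l2_event_subscription sub;
	int fd = open(csi_subdev_path, O_RDWR);

	if (fd < 0)
		return -1;

	memset(&ctrl, 0, sizeof(ctrl));
	ctrl.id = V4L2_CID_IMX_FIM_ENABLE;
	ctrl.value = 1;
	if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
		goto err;

	memset(&sub, 0, sizeof(sub));
	sub.type = V4L2_EVENT_IMX_FRAME_INTERVAL;
	if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
		goto err;

	return fd;	/* poll() and VIDIOC_DQEVENT on this fd for FIM events */
err:
	close(fd);
	return -1;
}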

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 include/uapi/media/Kbuild |  1 +
 include/uapi/media/imx.h  | 29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)
 create mode 100644 include/uapi/media/imx.h

diff --git a/include/uapi/media/Kbuild b/include/uapi/media/Kbuild
index aafaa5a..fa78958 100644
--- a/include/uapi/media/Kbuild
+++ b/include/uapi/media/Kbuild
@@ -1 +1,2 @@
 # UAPI Header export list
+header-y += imx.h
diff --git a/include/uapi/media/imx.h b/include/uapi/media/imx.h
new file mode 100644
index 0000000..1fdd1c1
--- /dev/null
+++ b/include/uapi/media/imx.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright (c) 2014-2015 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#ifndef __UAPI_MEDIA_IMX_H__
+#define __UAPI_MEDIA_IMX_H__
+
+/*
+ * events from the subdevs
+ */
+#define V4L2_EVENT_IMX_CLASS          V4L2_EVENT_PRIVATE_START
+#define V4L2_EVENT_IMX_NFB4EOF        (V4L2_EVENT_IMX_CLASS + 1)
+#define V4L2_EVENT_IMX_FRAME_INTERVAL (V4L2_EVENT_IMX_CLASS + 2)
+
+enum imx_ctrl_id {
+	V4L2_CID_IMX_MOTION = (V4L2_CID_USER_IMX_BASE + 0),
+	V4L2_CID_IMX_FIM_ENABLE,
+	V4L2_CID_IMX_FIM_NUM,
+	V4L2_CID_IMX_FIM_TOLERANCE_MIN,
+	V4L2_CID_IMX_FIM_TOLERANCE_MAX,
+	V4L2_CID_IMX_FIM_NUM_SKIP,
+};
+
+#endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (16 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 17/36] media: Add userspace header file for i.MX Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 10:27   ` Russell King - ARM Linux
  2017-02-16 13:02   ` Philipp Zabel
  2017-02-16  2:19 ` [PATCH v4 19/36] media: imx: Add Capture Device Interface Steve Longerbeam
                   ` (19 subsequent siblings)
  37 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add the core media driver for the i.MX SoC.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 Documentation/media/v4l-drivers/imx.rst           | 542 +++++++++++++++++
 drivers/staging/media/Kconfig                     |   2 +
 drivers/staging/media/Makefile                    |   1 +
 drivers/staging/media/imx/Kconfig                 |   7 +
 drivers/staging/media/imx/Makefile                |   6 +
 drivers/staging/media/imx/TODO                    |  36 ++
 drivers/staging/media/imx/imx-media-dev.c         | 487 +++++++++++++++
 drivers/staging/media/imx/imx-media-fim.c         | 471 +++++++++++++++
 drivers/staging/media/imx/imx-media-internal-sd.c | 349 +++++++++++
 drivers/staging/media/imx/imx-media-of.c          | 267 ++++++++
 drivers/staging/media/imx/imx-media-utils.c       | 701 ++++++++++++++++++++++
 drivers/staging/media/imx/imx-media.h             | 297 +++++++++
 include/media/imx.h                               |  15 +
 include/uapi/linux/v4l2-controls.h                |   4 +
 14 files changed, 3185 insertions(+)
 create mode 100644 Documentation/media/v4l-drivers/imx.rst
 create mode 100644 drivers/staging/media/imx/Kconfig
 create mode 100644 drivers/staging/media/imx/Makefile
 create mode 100644 drivers/staging/media/imx/TODO
 create mode 100644 drivers/staging/media/imx/imx-media-dev.c
 create mode 100644 drivers/staging/media/imx/imx-media-fim.c
 create mode 100644 drivers/staging/media/imx/imx-media-internal-sd.c
 create mode 100644 drivers/staging/media/imx/imx-media-of.c
 create mode 100644 drivers/staging/media/imx/imx-media-utils.c
 create mode 100644 drivers/staging/media/imx/imx-media.h
 create mode 100644 include/media/imx.h

diff --git a/Documentation/media/v4l-drivers/imx.rst b/Documentation/media/v4l-drivers/imx.rst
new file mode 100644
index 0000000..f085e43
--- /dev/null
+++ b/Documentation/media/v4l-drivers/imx.rst
@@ -0,0 +1,542 @@
+i.MX Video Capture Driver
+=========================
+
+Introduction
+------------
+
+The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
+handles the flow of image frames to and from capture devices and
+display devices.
+
+For image capture, the IPU contains the following internal subunits:
+
+- Image DMA Controller (IDMAC)
+- Camera Serial Interface (CSI)
+- Image Converter (IC)
+- Sensor Multi-FIFO Controller (SMFC)
+- Image Rotator (IRT)
+- Video De-Interlacing or Combining Block (VDIC)
+
+The IDMAC is the DMA controller for transfer of image frames to and from
+memory. Various dedicated DMA channels exist for both video capture and
+display paths. During transfer, the IDMAC is also capable of vertical
+image flip, 8x8 block transfer (see IRT description), pixel component
+re-ordering (for example UYVY to YUYV) within the same colorspace, and
+even packed <--> planar conversion. It can also perform a simple
+de-interlacing by interleaving even and odd lines during transfer
+(without motion compensation which requires the VDIC).
+
+The CSI is the backend capture unit that interfaces directly with
+camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 busses.
+
+The IC handles color-space conversion, resizing (downscaling and
+upscaling), horizontal flip, and 90/270 degree rotation operations.
+
+There are three independent "tasks" within the IC that can carry out
+conversions concurrently: pre-process encoding, pre-process viewfinder,
+and post-processing. Within each task, conversions are split into three
+sections: downsizing section, main section (upsizing, flip, colorspace
+conversion, and graphics plane combining), and rotation section.
+
+The IPU time-shares the IC task operations. The time-slice granularity
+is one burst of eight pixels in the downsizing section, one image line
+in the main processing section, one image frame in the rotation section.
+
+The SMFC is composed of four independent FIFOs that each can transfer
+captured frames from sensors directly to memory concurrently via four
+IDMAC channels.
+
+The IRT carries out 90 and 270 degree image rotation operations. The
+rotation operation is carried out on 8x8 pixel blocks at a time. This
+operation is supported by the IDMAC which handles the 8x8 block transfer
+along with block reordering, in coordination with vertical flip.
+
+The VDIC handles the conversion of interlaced video to progressive, with
+support for different motion compensation modes (low, medium, and high
+motion). The deinterlaced output frames from the VDIC can be sent to the
+IC pre-process viewfinder task for further conversions. The VDIC also
+contains a Combiner that combines two image planes, with alpha blending
+and color keying.
+
+In addition to the IPU internal subunits, there are also two units
+outside the IPU that are involved in video capture on i.MX:
+
+- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
+  interface. This is a Synopsys DesignWare core.
+- Two video multiplexers for selecting among multiple sensor inputs
+  to send to a CSI.
+
+For more info, refer to the latest versions of the i.MX5/6 reference
+manuals listed under References.
+
+
+Features
+--------
+
+Some of the features of this driver include:
+
+- Many different pipelines can be configured via media controller API,
+  that correspond to the hardware video capture pipelines supported in
+  the i.MX.
+
+- Supports parallel, BT.656, and MIPI CSI-2 interfaces.
+
+- Up to four concurrent sensor acquisitions, by configuring each
+  sensor's pipeline using independent entities. This is currently
+  demonstrated with the SabreSD and SabreLite reference boards with
+  independent OV5642 and MIPI CSI-2 OV5640 sensor modules.
+
+- Scaling, color-space conversion, horizontal and vertical flip, and
+  image rotation via IC task subdevs.
+
+- Many pixel formats supported (RGB, packed and planar YUV, partial
+  planar YUV).
+
+- The VDIC subdev supports motion compensated de-interlacing, with three
+  motion compensation modes: low, medium, and high motion. The mode is
+  specified with a custom control. Pipelines are defined that allow
+  sending frames to the VDIC subdev directly from the CSI or from
+  memory buffers via an output/mem2mem device node. For low and medium
+  motion modes, the VDIC must receive from memory buffers via a device
+  node.
+
+- Includes a Frame Interval Monitor (FIM) that can correct vertical sync
+  problems with the ADV718x video decoders. See below for a description
+  of the FIM.
+
+
+Entities
+--------
+
+imx6-mipi-csi2
+--------------
+
+This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
+the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
+four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
+channel outputs.
+
+This entity actually consists of two sub-blocks. One is the MIPI CSI-2
+core. This is a Synopsys Designware MIPI CSI-2 core. The other sub-block
+is a "CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the
+four virtual channels streams, providing four separate parallel buses
+containing each virtual channel that are routed to CSIs or video
+multiplexers as described below.
+
+On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
+two video multiplexers. Both CSI0 and CSI1 can receive any virtual
+channel, as selected by the video multiplexers.
+
+On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after being selected
+by a video mux), virtual channels 1 and 2 are hard-wired to IPU1-CSI1
+and IPU2-CSI0, respectively, and virtual channel 3 is routed to
+IPU2-CSI1 (again selected by a video mux).
+
+ipuX_csiY_mux
+-------------
+
+These are the video multiplexers. They have two or more sink pads to
+select from either camera sensors with a parallel interface, or from
+MIPI CSI-2 virtual channels from imx6-mipi-csi2 entity. They have a
+single source pad that routes to a CSI (ipuX_csiY entities).
+
+On i.MX6 solo/dual-lite, there are two video mux entities. One sits
+in front of IPU1-CSI0 to select between a parallel sensor and any of
+the four MIPI CSI-2 virtual channels (a total of five sink pads). The
+other mux sits in front of IPU1-CSI1, and again has five sink pads to
+select between a parallel sensor and any of the four MIPI CSI-2 virtual
+channels.
+
+On i.MX6 Quad, there are two video mux entities. One sits in front of
+IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
+channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
+select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
+sink pads).
+
+ipuX_csiY
+---------
+
+These are the CSI entities. They have a single sink pad receiving from
+either a video mux or from a MIPI CSI-2 virtual channel as described
+above.
+
+This entity has two source pads. The first source pad can link directly
+to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
+that require no IDMAC memory buffer transfer.
+
+When the direct source pad is routed to the ipuX_ic_prp entity, frames
+from the CSI will be processed by one of the IC pre-processing tasks.
+
+When the direct source pad is routed to the ipuX_vdic entity, the VDIC
+will carry out motion-compensated de-interlace using "high motion" mode
+(see description of ipuX_vdic entity).
+
+The second source pad sends video frames to memory buffers via the SMFC
+and an IDMAC channel. This source pad is routed to a capture device
+node.
+
+Note that since the IDMAC source pad makes use of an IDMAC channel, it
+can do pixel reordering within the same colorspace. For example, the
+sink pad can take UYVY2X8, but the IDMAC source pad can output YUYV2X8.
+If the sink pad is receiving YUV, the output at the capture device can
+also be converted to a planar YUV format such as YUV420.
+
+It will also perform simple de-interlace without motion compensation,
+which is activated if the sink pad's field type is an interlaced type,
+and the IDMAC source pad field type is set to none.
+
+ipuX_vdic
+---------
+
+The VDIC carries out motion compensated de-interlacing, with three
+motion compensation modes: low, medium, and high motion. The mode is
+specified with a custom v4l2 control. It has two sink pads and a
+single source pad.
+
+The direct sink pad receives from an ipuX_csiY direct pad. With this
+link the VDIC can only operate in high motion mode.
+
+When the IDMAC sink pad is activated, it receives from an output
+or mem2mem device node. With this pipeline, it can also operate
+in low and medium modes, because these modes require receiving
+frames from memory buffers. Note that an output or mem2mem device
+is not implemented yet, so this sink pad currently has no links.
+
+The source pad routes to the IC pre-processing entity ipuX_ic_prp.
+
+ipuX_ic_prp
+-----------
+
+This is the IC pre-processing entity. It acts as a router, routing
+data from its sink pad to one or both of its source pads.
+
+It has a single sink pad. The sink pad can receive from the ipuX_csiY
+direct pad, or from ipuX_vdic.
+
+This entity has two source pads. One source pad routes to the
+pre-process encode task entity (ipuX_ic_prpenc), the other to the
+pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
+can be activated at the same time if the sink pad is receiving from
+ipuX_csiY. Only the source pad to the pre-process viewfinder task entity
+can be activated if the sink pad is receiving from ipuX_vdic (frames
+from the VDIC can only be processed by the pre-process viewfinder task).
+
+ipuX_ic_prpenc
+--------------
+
+This is the IC pre-processing encode entity. It has a single sink pad
+from ipuX_ic_prp, and a single source pad. The source pad is routed
+to a capture device node.
+
+This entity performs the IC pre-process encode task operations:
+color-space conversion, resizing (downscaling and upscaling), horizontal
+and vertical flip, and 90/270 degree rotation.
+
+Like the ipuX_csiY IDMAC source, it can also perform simple de-interlace
+without motion compensation, and pixel reordering.
+
+ipuX_ic_prpvf
+-------------
+
+This is the IC pre-processing viewfinder entity. It has a single sink pad
+from ipuX_ic_prp, and a single source pad. The source pad is routed to
+a capture device node.
+
+It is identical in operation to ipuX_ic_prpenc. It will receive and
+process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
+receiving from ipuX_vdic.
+
+Like the ipuX_csiY IDMAC source, it can perform simple de-interlace
+without motion compensation. However, note that if the ipuX_vdic is
+included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic),
+it's not possible to use simple de-interlace in ipuX_ic_prpvf, since
+the ipuX_vdic has already carried out de-interlacing (with motion
+compensation) and therefore the field type output from ipuX_ic_prp can
+only be none.
+
+Capture Pipelines
+-----------------
+
+The following describe the various use-cases supported by the pipelines.
+
+The links shown do not include the backend sensor, video mux, or mipi
+csi-2 receiver links. This depends on the type of sensor interface
+(parallel or mipi csi-2). So in all cases, these pipelines begin with:
+
+sensor -> ipuX_csiY_mux -> ...
+
+for parallel sensors, or:
+
+sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...
+
+for mipi csi-2 sensors. The imx6-mipi-csi2 receiver may need to route
+to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
+on the mipi csi-2 virtual channel, hence ipuX_csiY_mux is shown in
+parenthesis.
+
+Unprocessed Video Capture:
+--------------------------
+
+Send frames directly from the sensor to a capture device node, with
+no conversions:
+
+-> ipuX_csiY IDMAC pad -> capture node
+
+IC Direct Conversions:
+----------------------
+
+This pipeline uses the preprocess encode entity to route frames directly
+from the CSI to the IC, to carry out scaling up to 1024x1024 resolution,
+CSC, flipping, and image rotation:
+
+-> ipuX_csiY direct pad -> ipuX_ic_prp -> ipuX_ic_prpenc -> capture node
+
+Motion Compensated De-interlace:
+--------------------------------
+
+This pipeline routes frames from the CSI direct pad to the VDIC entity to
+support motion-compensated de-interlacing (high motion mode only),
+scaling up to 1024x1024, CSC, flip, and rotation:
+
+-> ipuX_csiY direct pad -> ipuX_vdic direct pad -> ipuX_ic_prp ->
+   ipuX_ic_prpvf -> capture node
+
+
+Usage Notes
+-----------
+
+Many of the subdevs require information from the active sensor in the
+current pipeline when configuring pad formats. Therefore the media links
+should be established before configuring the media pad formats.
+
+Similarly, the capture device interfaces inherit controls from the
+active entities in the current pipeline at link-setup time. Therefore
+the capture device node links should be the last links established in
+order for the capture interfaces to "see" and inherit all possible
+controls.
+
+The following are usage notes for the i.MX6 Sabre reference platforms:
+
+
+SabreLite with OV5642 and OV5640
+--------------------------------
+
+This platform requires the OmniVision OV5642 module with a parallel
+camera interface, and the OV5640 module with a MIPI CSI-2
+interface. Both modules are available from Boundary Devices:
+
+https://boundarydevices.com/products/nit6x_5mp
+https://boundarydevices.com/product/nit6x_5mp_mipi
+
+Note that if only one camera module is available, the other sensor
+node can be disabled in the device tree.
+
+The OV5642 module is connected to the parallel bus input on the i.MX
+internal video mux to IPU1 CSI0. Its i2c interface connects to i2c bus 2.
+
+The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI CSI-2
+receiver, and the four virtual channel outputs from the receiver are
+routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1 CSI1,
+vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The OV5640 is
+also connected to i2c bus 2 on the SabreLite, therefore the OV5642 and
+OV5640 must not share the same i2c slave address.
+
+The following basic example configures unprocessed video capture
+pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
+the OV5640 (transmitting on mipi csi-2 virtual channel 1) is routed
+to ipu1_csi1. Both sensors are configured to output 640x480, the
+OV5642 outputs YUYV2X8, the OV5640 UYVY2X8:
+
+.. code-block:: none
+
+   # Setup links for OV5642
+   media-ctl -l '"ov5642 1-0042":0 -> "ipu1_csi0_mux":1[1]'
+   media-ctl -l '"ipu1_csi0_mux":2 -> "ipu1_csi0":0[1]'
+   media-ctl -l '"ipu1_csi0":2 -> "ipu1_csi0 capture":0[1]'
+   # Setup links for OV5640
+   media-ctl -l '"ov5640_mipi 1-0040":0 -> "imx6-mipi-csi2":0[1]'
+   media-ctl -l '"imx6-mipi-csi2":2 -> "ipu1_csi1":0[1]'
+   media-ctl -l '"ipu1_csi1":2 -> "ipu1_csi1 capture":0[1]'
+   # Configure pads for OV5642 pipeline
+   media-ctl -V "\"ov5642 1-0042\":0 [fmt:YUYV2X8/640x480 field:none]"
+   media-ctl -V "\"ipu1_csi0_mux\":2 [fmt:YUYV2X8/640x480 field:none]"
+   media-ctl -V "\"ipu1_csi0\":2 [fmt:YUYV2X8/640x480 field:none]"
+   # Configure pads for OV5640 pipeline
+   media-ctl -V "\"ov5640_mipi 1-0040\":0 [fmt:UYVY2X8/640x480 field:none]"
+   media-ctl -V "\"imx6-mipi-csi2\":2 [fmt:UYVY2X8/640x480 field:none]"
+   media-ctl -V "\"ipu1_csi1\":2 [fmt:UYVY2X8/640x480 field:none]"
+
+Streaming can then begin independently on the capture device nodes
+"ipu1_csi0 capture" and "ipu1_csi1 capture".
+
+SabreAuto with ADV7180 decoder
+------------------------------
+
+On the SabreAuto, an on-board ADV7180 SD decoder is connected to the
+parallel bus input on the internal video mux to IPU1 CSI0.
+
+The following example configures a pipeline to capture from the ADV7180
+video decoder, assuming NTSC 720x480 input signals, with Motion
+Compensated de-interlacing. Pad field types assume the adv7180 outputs
+"alternate", which the ipu1_csi0 entity converts to "seq-tb" at its
+source pad. $outputfmt can be any format supported by the ipu1_ic_prpvf
+entity at its output pad:
+
+.. code-block:: none
+
+   # Setup links
+   media-ctl -l '"adv7180 4-0021":0 -> "ipu1_csi0_mux":1[1]'
+   media-ctl -l '"ipu1_csi0_mux":2 -> "ipu1_csi0":0[1]'
+   media-ctl -l '"ipu1_csi0":2 -> "ipu1_vdic":0[1]'
+   media-ctl -l '"ipu1_vdic":2 -> "ipu1_ic_prp":0[1]'
+   media-ctl -l '"ipu1_ic_prp":2 -> "ipu1_ic_prpvf":0[1]'
+   media-ctl -l '"ipu1_ic_prpvf":1 -> "ipu1_ic_prpvf capture":0[1]'
+   # Configure pads
+   media-ctl -V "\"adv7180 4-0021\":0 [fmt:UYVY2X8/720x480]"
+   media-ctl -V "\"ipu1_csi0_mux\":2 [fmt:UYVY2X8/720x480 field:alternate]"
+   media-ctl -V "\"ipu1_csi0\":1 [fmt:AYUV32/720x480 field:seq-tb]"
+   media-ctl -V "\"ipu1_vdic\":2 [fmt:AYUV32/720x480 field:none]"
+   media-ctl -V "\"ipu1_ic_prp\":2 [fmt:AYUV32/720x480 field:none]"
+   media-ctl -V "\"ipu1_ic_prpvf\":1 [fmt:$outputfmt field:none]"
+
+Streaming can then begin on the capture device node at
+"ipu1_ic_prpvf capture".
+
+This platform accepts Composite Video analog inputs to the ADV7180 on
+Ain1 (connector J42).
+
+Frame Interval Monitor
+----------------------
+
+The adv718x decoders can occasionally send corrupt fields during
+NTSC/PAL signal re-sync (too few or too many video lines). When
+this happens, the IPU triggers a mechanism to re-establish vertical
+sync by adding 1 dummy line every frame. This causes a rolling effect
+from image to image that can last a long time before a stable image
+is recovered, and sometimes the mechanism doesn't work at all, leaving
+a permanent split image (one frame contains lines from two consecutive
+captured images).
+
+From experiment it was found that during image rolling, the frame
+intervals (the elapsed time between two EOFs) drop below the nominal
+value for the current standard by about one video line time (60 usec),
+and remain at that value until rolling stops.
+
+While the reason for this observation isn't known (the IPU dummy
+line mechanism should show an increase in the intervals by 1 line
+time every frame, not a fixed value), we can use it to detect the
+corrupt fields using a Frame Interval Monitor (FIM). If the FIM detects a
+bad frame interval, a subdev event is sent. In response, userland can
+issue a streaming restart to correct the rolling/split image.
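+
+As a rough worked example: the NTSC nominal frame interval is
+1001/30000 s, about 33367 usec. With the default FIM settings (average
+8 intervals, 50 usec minimum tolerance), a sustained 60 usec drop in
+the measured intervals produces an average error of about 60 usec,
+which exceeds the 50 usec tolerance, so the event is sent within one
+averaging window (8 frames, roughly a quarter second of video).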
+
+The FIM is implemented in the ipuX_csiY entity, and the entities that
+generate End-Of-Frame interrupts call into the FIM to monitor the frame
+intervals: ipuX_ic_prpenc and ipuX_ic_prpvf. Userland can register for
+the FIM event notifications on the ipuX_csiY subdev device node
+(V4L2_EVENT_IMX_FRAME_INTERVAL).
+
+The ipuX_csiY entity includes custom controls to tune the FIM. If one
+of these controls is changed during streaming, the FIM will be reset
+and will continue with the new settings. An example of adjusting these
+controls from userspace follows the list below.
+
+- V4L2_CID_IMX_FIM_ENABLE
+
+Enable/disable the FIM.
+
+- V4L2_CID_IMX_FIM_NUM
+
+How many frame interval errors (deviations from the nominal frame
+interval reported by the sensor) to average before the comparison
+against the tolerance values below. Averaging reduces noise from
+interrupt latency.
+
+- V4L2_CID_IMX_FIM_TOLERANCE_MIN
+
+If the averaged intervals deviate from nominal by more than this
+amount, in microseconds, the frame interval error event is sent (in
+response, userland can restart streaming as described above).
+
+- V4L2_CID_IMX_FIM_TOLERANCE_MAX
+
+If any interval errors are higher than this value, those error samples
+are discarded and do not enter into the average. This can be used to
+discard exceptionally large interval errors that are likely caused by
+very high system load and excessive interrupt latency. Setting this
+control to 0 (the default) disables the upper bound.
+
+- V4L2_CID_IMX_FIM_NUM_SKIP
+
+How many frames to skip after a FIM reset or stream restart before
+FIM begins to average intervals. It has been found that there can
+be a few bad frame intervals after stream restart which are not
+attributed to adv718x sending a corrupt field, so this is used to
+skip those frames to prevent unnecessary restarts.
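+
+As a sketch of how these controls might be adjusted from userspace with
+v4l2-ctl (the subdev node number and the lower-case control names shown
+here are only illustrative; list the actual names and node with
+"media-ctl -p" and "v4l2-ctl -d /dev/v4l-subdevN -l"):
+
+.. code-block:: none
+
+   # enable the FIM and average 16 intervals per check (names may differ)
+   v4l2-ctl -d /dev/v4l-subdev0 --set-ctrl=fim_enable=1
+   v4l2-ctl -d /dev/v4l-subdev0 --set-ctrl=fim_num_average=16
+   # require a larger average error (in usec) before the event is sent
+   v4l2-ctl -d /dev/v4l-subdev0 --set-ctrl=fim_tolerance_min=100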
+
+
+SabreSD with MIPI CSI-2 OV5640
+------------------------------
+
+Like the SabreLite, the SabreSD supports a parallel interface
+OV5642 module on IPU1 CSI0 and a MIPI CSI-2 OV5640 module. The OV5642
+connects to i2c bus 1 and the OV5640 to i2c bus 2.
+
+The device tree for SabreSD includes OF graphs for both the parallel
+OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI
+CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled.
+The OV5640 module connects to MIPI connector J5 (the compatible module
+part number and URL are not known at this time).
+
+The following example configures a direct conversion pipeline to capture
+from the OV5640. $sensorfmt can be any format supported by the OV5640.
+$sensordim is the frame dimension part of $sensorfmt (minus the mbus
+pixel code). $outputfmt can be any format supported by the
+ipu1_ic_prpenc entity at its output pad:
+
+.. code-block:: none
+
+   # Setup links
+   media-ctl -l '"ov5640_mipi 1-003c":0 -> "imx6-mipi-csi2":0[1]'
+   media-ctl -l '"imx6-mipi-csi2":2 -> "ipu1_csi1":0[1]'
+   media-ctl -l '"ipu1_csi1":1 -> "ipu1_ic_prp":0[1]'
+   media-ctl -l '"ipu1_ic_prp":1 -> "ipu1_ic_prpenc":0[1]'
+   media-ctl -l '"ipu1_ic_prpenc":1 -> "ipu1_ic_prpenc capture":0[1]'
+   # Configure pads
+   media-ctl -V "\"ov5640_mipi 1-003c\":0 [fmt:$sensorfmt field:none]"
+   media-ctl -V "\"imx6-mipi-csi2\":2 [fmt:$sensorfmt field:none]"
+   media-ctl -V "\"ipu1_csi1\":1 [fmt:AYUV32/$sensordim field:none]"
+   media-ctl -V "\"ipu1_ic_prp\":1 [fmt:AYUV32/$sensordim field:none]"
+   media-ctl -V "\"ipu1_ic_prpenc\":1 [fmt:$outputfmt field:none]"
+
+Streaming can then begin on the "ipu1_ic_prpenc capture" node.
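+
+For reference, one concrete set of substitutions for the example above
+(any other mode supported by the OV5640, and any other format reported
+at the ipu1_ic_prpenc output pad, works the same way) is:
+
+.. code-block:: none
+
+   # set as shell variables before running the media-ctl commands above
+   sensorfmt=UYVY2X8/640x480
+   sensordim=640x480
+   outputfmt=UYVY2X8/640x480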
+
+
+
+Known Issues
+------------
+
+1. When using 90 or 270 degree rotation control at capture resolutions
+   near the IC resizer limit of 1024x1024, and combined with planar
+   pixel formats (YUV420, YUV422p), frame capture will often fail with
+   no end-of-frame interrupts from the IDMAC channel. To work around
+   this, use lower resolution and/or packed formats (YUYV, RGB3, etc.)
+   when 90 or 270 rotations are needed (see the example below).
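+
+A hedged sketch of the workaround (the video and subdev node numbers,
+and the rotation control name as reported by "v4l2-ctl -l", are
+assumptions that may differ on a given system):
+
+.. code-block:: none
+
+   # select a packed capture format and a modest size ...
+   v4l2-ctl -d /dev/video2 --set-fmt-video=width=640,height=480,pixelformat=YUYV
+   # ... before enabling 90 degree rotation on the ipu1_ic_prpvf subdev
+   v4l2-ctl -d /dev/v4l-subdev5 --set-ctrl=rotate=90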
+
+
+File list
+---------
+
+drivers/staging/media/imx/
+include/media/imx.h
+include/uapi/media/imx.h
+
+References
+----------
+
+[1] "i.MX 6Dual/6Quad Applications Processor Reference Manual"
+[2] "i.MX 6Solo/6DualLite Applications Processor Reference Manual"
+
+
+Authors
+-------
+Steve Longerbeam <steve_longerbeam@mentor.com>
+Philipp Zabel <kernel@pengutronix.de>
+Russell King - ARM Linux <linux@armlinux.org.uk>
+
+Copyright (C) 2012-2017 Mentor Graphics Inc.
diff --git a/drivers/staging/media/Kconfig b/drivers/staging/media/Kconfig
index ffb8fa7..05b55a8 100644
--- a/drivers/staging/media/Kconfig
+++ b/drivers/staging/media/Kconfig
@@ -25,6 +25,8 @@ source "drivers/staging/media/cxd2099/Kconfig"
 
 source "drivers/staging/media/davinci_vpfe/Kconfig"
 
+source "drivers/staging/media/imx/Kconfig"
+
 source "drivers/staging/media/omap4iss/Kconfig"
 
 source "drivers/staging/media/s5p-cec/Kconfig"
diff --git a/drivers/staging/media/Makefile b/drivers/staging/media/Makefile
index a28e82c..6f50ddd 100644
--- a/drivers/staging/media/Makefile
+++ b/drivers/staging/media/Makefile
@@ -1,6 +1,7 @@
 obj-$(CONFIG_I2C_BCM2048)	+= bcm2048/
 obj-$(CONFIG_VIDEO_SAMSUNG_S5P_CEC) += s5p-cec/
 obj-$(CONFIG_DVB_CXD2099)	+= cxd2099/
+obj-$(CONFIG_VIDEO_IMX_MEDIA)	+= imx/
 obj-$(CONFIG_LIRC_STAGING)	+= lirc/
 obj-$(CONFIG_VIDEO_DM365_VPFE)	+= davinci_vpfe/
 obj-$(CONFIG_VIDEO_OMAP4)	+= omap4iss/
diff --git a/drivers/staging/media/imx/Kconfig b/drivers/staging/media/imx/Kconfig
new file mode 100644
index 0000000..722ed55
--- /dev/null
+++ b/drivers/staging/media/imx/Kconfig
@@ -0,0 +1,7 @@
+config VIDEO_IMX_MEDIA
+	tristate "i.MX5/6 V4L2 media core driver"
+	depends on MEDIA_CONTROLLER && VIDEO_V4L2 && ARCH_MXC && IMX_IPUV3_CORE
+	---help---
+	  Say yes here to enable support for the video4linux media
+	  controller driver for the i.MX5/6 SOC.
+
diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
new file mode 100644
index 0000000..ba8e4fb
--- /dev/null
+++ b/drivers/staging/media/imx/Makefile
@@ -0,0 +1,6 @@
+imx-media-objs := imx-media-dev.o imx-media-internal-sd.o imx-media-of.o
+imx-media-common-objs := imx-media-utils.o imx-media-fim.o
+
+obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
+obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
+
diff --git a/drivers/staging/media/imx/TODO b/drivers/staging/media/imx/TODO
new file mode 100644
index 0000000..f6d2bac
--- /dev/null
+++ b/drivers/staging/media/imx/TODO
@@ -0,0 +1,36 @@
+
+- Finish v4l2-compliance
+
+- imx-csi subdev is not being autoloaded as a kernel module, probably
+  because ipu_add_client_devices() does not register the IPU client
+  platform devices, but only allocates those devices.
+
+- Currently, registering for notifications from subdevs is only
+  possible through the subdev device nodes and not through the main
+  capture device node. A way is needed to find the capture device in
+  the current pipeline that owns the subdev that sent the
+  notification.
+
+- Clean up and move the ov5642 subdev driver to drivers/media/i2c, and
+  create the binding docs for it.
+
+- The Frame Interval Monitor could be exported to v4l2-core for
+  general use.
+
+- The subdev that is the original source of video data (referred to as
+  the "sensor" in the code) is called from various subdevs in the
+  pipeline in order to set/query the video standard ({g|s|enum}_std)
+  and to get/set the original frame interval from the capture interface
+  ([gs]_parm). Instead, the entities that need this info should call
+  their direct neighbor, and the neighbor should propagate the call to
+  its own neighbor in turn if necessary.
+
+- At driver load time, the device-tree node that is the original source
+  (the "sensor") is parsed to record its media bus configuration, and
+  this info is required in various subdevs to set up the pipeline.
+  Laurent Pinchart argues that instead the subdev should call its
+  neighbor's g_mbus_config op (which should be propagated if necessary)
+  to get this info. However, Hans Verkuil is planning to remove the
+  g_mbus_config op. For now, this driver uses the parsed DT mbus config
+  method until this issue is resolved.
+
diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
new file mode 100644
index 0000000..e2041ad
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-dev.c
@@ -0,0 +1,487 @@
+/*
+ * V4L2 Media Controller Driver for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/module.h>
+#include <linux/of_platform.h>
+#include <linux/pinctrl/consumer.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mc.h>
+#include <video/imx-ipu-v3.h>
+#include <media/imx.h>
+#include "imx-media.h"
+
+static inline struct imx_media_dev *notifier2dev(struct v4l2_async_notifier *n)
+{
+	return container_of(n, struct imx_media_dev, subdev_notifier);
+}
+
+/*
+ * Find a subdev by device node or device name. This is called during
+ * driver load to form the async subdev list and bind them.
+ */
+struct imx_media_subdev *
+imx_media_find_async_subdev(struct imx_media_dev *imxmd,
+			    struct device_node *np,
+			    const char *devname)
+{
+	struct imx_media_subdev *imxsd;
+	int i;
+
+	for (i = 0; i < imxmd->subdev_notifier.num_subdevs; i++) {
+		imxsd = &imxmd->subdev[i];
+		switch (imxsd->asd.match_type) {
+		case V4L2_ASYNC_MATCH_OF:
+			if (np && imxsd->asd.match.of.node == np)
+				return imxsd;
+			break;
+		case V4L2_ASYNC_MATCH_DEVNAME:
+			if (devname &&
+			    !strcmp(imxsd->asd.match.device_name.name, devname))
+				return imxsd;
+			break;
+		default:
+			break;
+		}
+	}
+
+	return NULL;
+}
+
+/*
+ * Adds a subdev to the async subdev list. If np is non-NULL, adds
+ * the async as a V4L2_ASYNC_MATCH_OF match type, otherwise as a
+ * V4L2_ASYNC_MATCH_DEVNAME match type using the dev_name of the
+ * given platform_device. This is called during driver load when
+ * forming the async subdev list.
+ */
+struct imx_media_subdev *
+imx_media_add_async_subdev(struct imx_media_dev *imxmd,
+			   struct device_node *np,
+			   struct platform_device *pdev)
+{
+	struct imx_media_subdev *imxsd;
+	struct v4l2_async_subdev *asd;
+	const char *devname = NULL;
+	int sd_idx;
+
+	if (pdev)
+		devname = dev_name(&pdev->dev);
+
+	/* return NULL if this subdev already added */
+	if (imx_media_find_async_subdev(imxmd, np, devname)) {
+		dev_dbg(imxmd->md.dev, "%s: already added %s\n",
+			__func__, np ? np->name : devname);
+		return NULL;
+	}
+
+	sd_idx = imxmd->subdev_notifier.num_subdevs;
+	if (sd_idx >= IMX_MEDIA_MAX_SUBDEVS) {
+		dev_err(imxmd->md.dev, "%s: too many subdevs! can't add %s\n",
+			__func__, np ? np->name : devname);
+		return ERR_PTR(-ENOSPC);
+	}
+
+	imxsd = &imxmd->subdev[sd_idx];
+
+	asd = &imxsd->asd;
+	if (np) {
+		asd->match_type = V4L2_ASYNC_MATCH_OF;
+		asd->match.of.node = np;
+	} else {
+		asd->match_type = V4L2_ASYNC_MATCH_DEVNAME;
+		strncpy(imxsd->devname, devname, sizeof(imxsd->devname));
+		asd->match.device_name.name = imxsd->devname;
+		imxsd->pdev = pdev;
+	}
+
+	imxmd->async_ptrs[sd_idx] = asd;
+	imxmd->subdev_notifier.num_subdevs++;
+
+	dev_dbg(imxmd->md.dev, "%s: added %s, match type %s\n",
+		__func__, np ? np->name : devname, np ? "OF" : "DEVNAME");
+
+	return imxsd;
+}
+
+/*
+ * Adds an imx-media link to a subdev pad's link list. This is called
+ * during driver load when forming the links between subdevs.
+ *
+ * @pad: the local pad
+ * @remote_node: the device node of the remote subdev
+ * @remote_devname: the device name of the remote subdev
+ * @local_pad: local pad index
+ * @remote_pad: remote pad index
+ */
+int imx_media_add_pad_link(struct imx_media_dev *imxmd,
+			   struct imx_media_pad *pad,
+			   struct device_node *remote_node,
+			   const char *remote_devname,
+			   int local_pad, int remote_pad)
+{
+	struct imx_media_link *link;
+	int link_idx;
+
+	link_idx = pad->num_links;
+	if (link_idx >= IMX_MEDIA_MAX_LINKS) {
+		dev_err(imxmd->md.dev, "%s: too many links!\n", __func__);
+		return -ENOSPC;
+	}
+
+	link = &pad->link[link_idx];
+
+	link->remote_sd_node = remote_node;
+	if (remote_devname)
+		strncpy(link->remote_devname, remote_devname,
+			sizeof(link->remote_devname));
+
+	link->local_pad = local_pad;
+	link->remote_pad = remote_pad;
+
+	pad->num_links++;
+
+	return 0;
+}
+
+/*
+ * get IPU from this CSI and add it to the list of IPUs
+ * the media driver will control.
+ */
+static int imx_media_get_ipu(struct imx_media_dev *imxmd,
+			     struct v4l2_subdev *csi_sd)
+{
+	struct ipu_soc *ipu;
+	int ipu_id;
+
+	ipu = dev_get_drvdata(csi_sd->dev->parent);
+	if (!ipu) {
+		v4l2_err(&imxmd->v4l2_dev,
+			 "CSI %s has no parent IPU!\n", csi_sd->name);
+		return -ENODEV;
+	}
+
+	ipu_id = ipu_get_num(ipu);
+	if (ipu_id > 1) {
+		v4l2_err(&imxmd->v4l2_dev, "invalid IPU id %d!\n", ipu_id);
+		return -ENODEV;
+	}
+
+	if (!imxmd->ipu[ipu_id])
+		imxmd->ipu[ipu_id] = ipu;
+
+	return 0;
+}
+
+/* async subdev bound notifier */
+static int imx_media_subdev_bound(struct v4l2_async_notifier *notifier,
+				  struct v4l2_subdev *sd,
+				  struct v4l2_async_subdev *asd)
+{
+	struct imx_media_dev *imxmd = notifier2dev(notifier);
+	struct imx_media_subdev *imxsd;
+	int ret = -EINVAL;
+
+	imxsd = imx_media_find_async_subdev(imxmd, sd->dev->of_node,
+					    dev_name(sd->dev));
+	if (!imxsd)
+		goto out;
+
+	imxsd->sd = sd;
+
+	if (sd->grp_id & IMX_MEDIA_GRP_ID_CSI) {
+		ret = imx_media_get_ipu(imxmd, sd);
+		if (ret)
+			return ret;
+	} else if (sd->entity.function == MEDIA_ENT_F_VID_MUX) {
+		/* this is a video mux */
+		sd->grp_id = IMX_MEDIA_GRP_ID_VIDMUX;
+	} else if (imxsd->num_sink_pads == 0) {
+		/*
+		 * this is an original source of video frames; it
+		 * could be a camera sensor, an analog decoder, or
+		 * a bridge device (HDMI -> MIPI CSI-2 for example).
+		 * This group ID is used to locate the entity that
+		 * is the original source of video in a pipeline.
+		 */
+		sd->grp_id = IMX_MEDIA_GRP_ID_SENSOR;
+	}
+
+	ret = 0;
+out:
+	if (ret)
+		v4l2_warn(&imxmd->v4l2_dev,
+			  "Received unknown subdev %s\n", sd->name);
+	else
+		v4l2_info(&imxmd->v4l2_dev,
+			  "Registered subdev %s\n", sd->name);
+
+	return ret;
+}
+
+/*
+ * create a single media link given a local subdev, a single pad from that
+ * subdev, and a single link from that pad. Called after all subdevs have
+ * registered.
+ */
+static int imx_media_create_link(struct imx_media_dev *imxmd,
+				 struct imx_media_subdev *local_sd,
+				 struct imx_media_pad *pad,
+				 struct imx_media_link *link)
+{
+	struct imx_media_subdev *remote_sd;
+	struct v4l2_subdev *source, *sink;
+	u16 source_pad, sink_pad;
+	int ret;
+
+	/* only create the source->sink links */
+	if (pad->pad.flags & MEDIA_PAD_FL_SINK)
+		return 0;
+
+	remote_sd = imx_media_find_async_subdev(imxmd, link->remote_sd_node,
+						link->remote_devname);
+	if (!remote_sd) {
+		v4l2_warn(&imxmd->v4l2_dev, "%s: no remote for %s:%d\n",
+			  __func__, local_sd->sd->name, link->local_pad);
+		return 0;
+	}
+
+	source = local_sd->sd;
+	sink = remote_sd->sd;
+	source_pad = link->local_pad;
+	sink_pad = link->remote_pad;
+
+	v4l2_info(&imxmd->v4l2_dev, "%s: %s:%d -> %s:%d\n", __func__,
+		  source->name, source_pad, sink->name, sink_pad);
+
+	ret = media_create_pad_link(&source->entity, source_pad,
+				    &sink->entity, sink_pad, 0);
+	if (ret)
+		v4l2_err(&imxmd->v4l2_dev,
+			 "create_pad_link failed: %d\n", ret);
+
+	return ret;
+}
+
+/*
+ * create the media links from all imx-media pads and their links.
+ * Called after all subdevs have registered.
+ */
+static int imx_media_create_links(struct imx_media_dev *imxmd)
+{
+	struct imx_media_subdev *local_sd;
+	struct imx_media_link *link;
+	struct imx_media_pad *pad;
+	int num_pads, i, j, k;
+	int ret = 0;
+
+	for (i = 0; i < imxmd->num_subdevs; i++) {
+		local_sd = &imxmd->subdev[i];
+		num_pads = local_sd->num_sink_pads + local_sd->num_src_pads;
+
+		for (j = 0; j < num_pads; j++) {
+			pad = &local_sd->pad[j];
+
+			for (k = 0; k < pad->num_links; k++) {
+				link = &pad->link[k];
+
+				ret = imx_media_create_link(imxmd, local_sd,
+							    pad, link);
+				if (ret)
+					goto out;
+			}
+		}
+	}
+
+out:
+	return ret;
+}
+
+/* async subdev complete notifier */
+static int imx_media_probe_complete(struct v4l2_async_notifier *notifier)
+{
+	struct imx_media_dev *imxmd = notifier2dev(notifier);
+	int ret;
+
+	mutex_lock(&imxmd->md.graph_mutex);
+
+	ret = imx_media_create_links(imxmd);
+	if (ret)
+		goto unlock;
+
+	ret = v4l2_device_register_subdev_nodes(&imxmd->v4l2_dev);
+unlock:
+	mutex_unlock(&imxmd->md.graph_mutex);
+	if (ret)
+		return ret;
+
+	return media_device_register(&imxmd->md);
+}
+
+static int imx_media_link_notify(struct media_link *link, u32 flags,
+				 unsigned int notification)
+{
+	struct media_entity *sink = link->sink->entity;
+	struct v4l2_subdev *sink_sd;
+	struct imx_media_dev *imxmd;
+	struct media_graph *graph;
+	int ret = 0;
+
+	if (is_media_entity_v4l2_video_device(sink))
+		return 0;
+	sink_sd = media_entity_to_v4l2_subdev(sink);
+	imxmd = dev_get_drvdata(sink_sd->v4l2_dev->dev);
+	graph = &imxmd->link_notify_graph;
+
+	if (notification == MEDIA_DEV_NOTIFY_PRE_LINK_CH) {
+		ret = media_graph_walk_init(graph, &imxmd->md);
+		if (ret)
+			return ret;
+
+		if (!(flags & MEDIA_LNK_FL_ENABLED)) {
+			/* Before link disconnection */
+			ret = imx_media_pipeline_set_power(imxmd, graph,
+							   sink, false);
+		}
+	} else if (notification == MEDIA_DEV_NOTIFY_POST_LINK_CH) {
+		if (link->flags & MEDIA_LNK_FL_ENABLED) {
+			/* After link activation */
+			ret = imx_media_pipeline_set_power(imxmd, graph,
+							   sink, true);
+		}
+
+		media_graph_walk_cleanup(graph);
+	}
+
+	return ret ? -EPIPE : 0;
+}
+
+static const struct media_device_ops imx_media_md_ops = {
+	.link_notify = imx_media_link_notify,
+};
+
+static int imx_media_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *node = dev->of_node;
+	struct imx_media_subdev *csi[4] = {0};
+	struct imx_media_dev *imxmd;
+	int ret;
+
+	imxmd = devm_kzalloc(dev, sizeof(*imxmd), GFP_KERNEL);
+	if (!imxmd)
+		return -ENOMEM;
+
+	dev_set_drvdata(dev, imxmd);
+
+	strlcpy(imxmd->md.model, "imx-media", sizeof(imxmd->md.model));
+	imxmd->md.ops = &imx_media_md_ops;
+	imxmd->md.dev = dev;
+
+	imxmd->v4l2_dev.mdev = &imxmd->md;
+	strlcpy(imxmd->v4l2_dev.name, "imx-media",
+		sizeof(imxmd->v4l2_dev.name));
+
+	media_device_init(&imxmd->md);
+
+	ret = v4l2_device_register(dev, &imxmd->v4l2_dev);
+	if (ret < 0) {
+		v4l2_err(&imxmd->v4l2_dev,
+			 "Failed to register v4l2_device: %d\n", ret);
+		return ret;
+	}
+
+	dev_set_drvdata(imxmd->v4l2_dev.dev, imxmd);
+
+	ret = imx_media_of_parse(imxmd, &csi, node);
+	if (ret) {
+		v4l2_err(&imxmd->v4l2_dev,
+			 "imx_media_of_parse failed with %d\n", ret);
+		goto unreg_dev;
+	}
+
+	ret = imx_media_add_internal_subdevs(imxmd, csi);
+	if (ret) {
+		v4l2_err(&imxmd->v4l2_dev,
+			 "add_internal_subdevs failed with %d\n", ret);
+		goto unreg_dev;
+	}
+
+	/* no subdevs? just bail for this media device */
+	imxmd->num_subdevs = imxmd->subdev_notifier.num_subdevs;
+	if (imxmd->num_subdevs == 0) {
+		ret = -ENODEV;
+		goto unreg_dev;
+	}
+
+	/* prepare the async subdev notifier and register it */
+	imxmd->subdev_notifier.subdevs = imxmd->async_ptrs;
+	imxmd->subdev_notifier.bound = imx_media_subdev_bound;
+	imxmd->subdev_notifier.complete = imx_media_probe_complete;
+	ret = v4l2_async_notifier_register(&imxmd->v4l2_dev,
+					   &imxmd->subdev_notifier);
+	if (ret) {
+		v4l2_err(&imxmd->v4l2_dev,
+			 "v4l2_async_notifier_register failed with %d\n", ret);
+		goto unreg_dev;
+	}
+
+	return 0;
+
+unreg_dev:
+	v4l2_device_unregister(&imxmd->v4l2_dev);
+	return ret;
+}
+
+static int imx_media_remove(struct platform_device *pdev)
+{
+	struct imx_media_dev *imxmd =
+		(struct imx_media_dev *)platform_get_drvdata(pdev);
+
+	v4l2_info(&imxmd->v4l2_dev, "Removing imx-media\n");
+
+	v4l2_async_notifier_unregister(&imxmd->subdev_notifier);
+	v4l2_device_unregister(&imxmd->v4l2_dev);
+	media_device_unregister(&imxmd->md);
+	media_device_cleanup(&imxmd->md);
+
+	imx_media_remove_internal_subdevs(imxmd);
+
+	return 0;
+}
+
+static const struct of_device_id imx_media_dt_ids[] = {
+	{ .compatible = "fsl,imx-capture-subsystem" },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, imx_media_dt_ids);
+
+static struct platform_driver imx_media_pdrv = {
+	.probe		= imx_media_probe,
+	.remove		= imx_media_remove,
+	.driver		= {
+		.name	= "imx-media",
+		.of_match_table	= imx_media_dt_ids,
+	},
+};
+
+module_platform_driver(imx_media_pdrv);
+
+MODULE_DESCRIPTION("i.MX5/6 v4l2 media controller driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/staging/media/imx/imx-media-fim.c b/drivers/staging/media/imx/imx-media-fim.c
new file mode 100644
index 0000000..acc7e39
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-fim.c
@@ -0,0 +1,471 @@
+/*
+ * Frame Interval Monitor.
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+#include <media/imx.h>
+#include "imx-media.h"
+
+enum {
+	FIM_CL_ENABLE = 0,
+	FIM_CL_NUM,
+	FIM_CL_TOLERANCE_MIN,
+	FIM_CL_TOLERANCE_MAX,
+	FIM_CL_NUM_SKIP,
+	FIM_NUM_CONTROLS,
+};
+
+#define FIM_CL_ENABLE_DEF          1 /* FIM enabled by default */
+#define FIM_CL_NUM_DEF             8 /* average 8 frames */
+#define FIM_CL_NUM_SKIP_DEF        2 /* skip 2 frames after restart */
+#define FIM_CL_TOLERANCE_MIN_DEF  50 /* usec */
+#define FIM_CL_TOLERANCE_MAX_DEF   0 /* no max tolerance (unbounded) */
+
+struct imx_media_fim {
+	struct imx_media_dev *md;
+
+	/* the owning subdev of this fim instance */
+	struct v4l2_subdev *sd;
+
+	/* FIM's control handler */
+	struct v4l2_ctrl_handler ctrl_handler;
+
+	/* control cluster */
+	struct v4l2_ctrl  *ctrl[FIM_NUM_CONTROLS];
+
+	/* current control values */
+	bool              enabled;
+	int               num_avg;
+	int               num_skip;
+	unsigned long     tolerance_min; /* usec */
+	unsigned long     tolerance_max; /* usec */
+
+	int               counter;
+	struct timespec   last_ts;
+	unsigned long     sum;       /* usec */
+	unsigned long     nominal;   /* usec */
+
+	/*
+	 * input capture method of measuring FI (channel and flags
+	 * from device tree)
+	 */
+	int               icap_channel;
+	int               icap_flags;
+	struct completion icap_first_event;
+};
+
+static void update_fim_nominal(struct imx_media_fim *fim,
+			       struct imx_media_subdev *sensor)
+{
+	struct v4l2_streamparm parm;
+	struct v4l2_fract tpf;
+	int ret;
+
+	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	ret = v4l2_subdev_call(sensor->sd, video, g_parm, &parm);
+	tpf = parm.parm.capture.timeperframe;
+
+	if (ret || tpf.denominator == 0) {
+		dev_dbg(fim->sd->dev, "no tpf from sensor, FIM disabled\n");
+		fim->enabled = false;
+		return;
+	}
+
+	fim->nominal = DIV_ROUND_CLOSEST(1000 * 1000 * tpf.numerator,
+					 tpf.denominator);
+
+	dev_dbg(fim->sd->dev, "sensor FI=%lu usec\n", fim->nominal);
+}
+
+static void reset_fim(struct imx_media_fim *fim, bool curval)
+{
+	struct v4l2_ctrl *en = fim->ctrl[FIM_CL_ENABLE];
+	struct v4l2_ctrl *num = fim->ctrl[FIM_CL_NUM];
+	struct v4l2_ctrl *skip = fim->ctrl[FIM_CL_NUM_SKIP];
+	struct v4l2_ctrl *tol_min = fim->ctrl[FIM_CL_TOLERANCE_MIN];
+	struct v4l2_ctrl *tol_max = fim->ctrl[FIM_CL_TOLERANCE_MAX];
+
+	if (curval) {
+		fim->enabled = en->cur.val;
+		fim->num_avg = num->cur.val;
+		fim->num_skip = skip->cur.val;
+		fim->tolerance_min = tol_min->cur.val;
+		fim->tolerance_max = tol_max->cur.val;
+	} else {
+		fim->enabled = en->val;
+		fim->num_avg = num->val;
+		fim->num_skip = skip->val;
+		fim->tolerance_min = tol_min->val;
+		fim->tolerance_max = tol_max->val;
+	}
+
+	/* disable tolerance range if max <= min */
+	if (fim->tolerance_max <= fim->tolerance_min)
+		fim->tolerance_max = 0;
+
+	fim->counter = -fim->num_skip;
+	fim->sum = 0;
+}
+
+static void send_fim_event(struct imx_media_fim *fim, unsigned long error)
+{
+	static const struct v4l2_event ev = {
+		.type = V4L2_EVENT_IMX_FRAME_INTERVAL,
+	};
+
+	v4l2_subdev_notify_event(fim->sd, &ev);
+}
+
+/*
+ * Monitor an averaged frame interval. If the average deviates too much
+ * from the sensor's nominal frame rate, send the frame interval error
+ * event. The frame intervals are averaged in order to quiet noise from
+ * (presumably random) interrupt latency.
+ */
+static void frame_interval_monitor(struct imx_media_fim *fim,
+				   struct timespec *ts)
+{
+	unsigned long interval, error, error_avg;
+	struct timespec diff;
+	bool send_event = false;
+
+	if (!fim->enabled || ++fim->counter <= 0)
+		goto out_update_ts;
+
+	diff = timespec_sub(*ts, fim->last_ts);
+	interval = diff.tv_sec * 1000 * 1000 + diff.tv_nsec / 1000;
+	error = abs(interval - fim->nominal);
+
+	if (fim->tolerance_max && error >= fim->tolerance_max) {
+		dev_dbg(fim->sd->dev,
+			"FIM: %lu ignored, out of tolerance bounds\n",
+			error);
+		fim->counter--;
+		goto out_update_ts;
+	}
+
+	fim->sum += error;
+
+	if (fim->counter == fim->num_avg) {
+		error_avg = DIV_ROUND_CLOSEST(fim->sum, fim->num_avg);
+
+		if (error_avg > fim->tolerance_min)
+			send_event = true;
+
+		dev_dbg(fim->sd->dev, "FIM: error: %lu usec%s\n",
+			error_avg, send_event ? " (!!!)" : "");
+
+		fim->counter = 0;
+		fim->sum = 0;
+	}
+
+out_update_ts:
+	fim->last_ts = *ts;
+	if (send_event)
+		send_fim_event(fim, error_avg);
+}
+
+#ifdef CONFIG_IMX_GPT_ICAP
+/*
+ * Input Capture method of measuring frame intervals. Not subject
+ * to interrupt latency.
+ */
+static void fim_input_capture_handler(int channel, void *dev_id,
+				      struct timespec *ts)
+{
+	struct imx_media_fim *fim = dev_id;
+
+	frame_interval_monitor(fim, ts);
+
+	if (!completion_done(&fim->icap_first_event))
+		complete(&fim->icap_first_event);
+}
+
+static int fim_request_input_capture(struct imx_media_fim *fim)
+{
+	init_completion(&fim->icap_first_event);
+
+	return mxc_request_input_capture(fim->icap_channel,
+					 fim_input_capture_handler,
+					 fim->icap_flags, fim);
+}
+
+static void fim_free_input_capture(struct imx_media_fim *fim)
+{
+	mxc_free_input_capture(fim->icap_channel, fim);
+}
+
+#else /* CONFIG_IMX_GPT_ICAP */
+
+static int fim_request_input_capture(struct imx_media_fim *fim)
+{
+	return 0;
+}
+
+static void fim_free_input_capture(struct imx_media_fim *fim)
+{
+}
+
+#endif /* CONFIG_IMX_GPT_ICAP */
+
+/*
+ * In case we are monitoring the first frame interval after streamon
+ * (when fim->num_skip = 0), we need a valid fim->last_ts before we
+ * can begin. This only applies to the input capture method. It is not
+ * possible to accurately measure the first FI after streamon using the
+ * EOF method, so the fim->num_skip minimum is set to 1 in that case,
+ * making this function a noop when the EOF method is used.
+ */
+static void fim_acquire_first_ts(struct imx_media_fim *fim)
+{
+	unsigned long ret;
+
+	if (!fim->enabled || fim->num_skip > 0)
+		return;
+
+	ret = wait_for_completion_timeout(
+		&fim->icap_first_event,
+		msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+	if (ret == 0)
+		v4l2_warn(fim->sd, "wait first icap event timeout\n");
+}
+
+/* FIM Controls */
+static int fim_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct imx_media_fim *fim = container_of(ctrl->handler,
+						 struct imx_media_fim,
+						 ctrl_handler);
+
+	switch (ctrl->id) {
+	case V4L2_CID_IMX_FIM_ENABLE:
+		reset_fim(fim, false);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops fim_ctrl_ops = {
+	.s_ctrl = fim_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config imx_media_fim_ctrl[] = {
+	[FIM_CL_ENABLE] = {
+		.ops = &fim_ctrl_ops,
+		.id = V4L2_CID_IMX_FIM_ENABLE,
+		.name = "FIM Enable",
+		.type = V4L2_CTRL_TYPE_BOOLEAN,
+		.def = FIM_CL_ENABLE_DEF,
+		.min = 0,
+		.max = 1,
+		.step = 1,
+	},
+	[FIM_CL_NUM] = {
+		.ops = &fim_ctrl_ops,
+		.id = V4L2_CID_IMX_FIM_NUM,
+		.name = "FIM Num Average",
+		.type = V4L2_CTRL_TYPE_INTEGER,
+		.def = FIM_CL_NUM_DEF,
+		.min =  1, /* no averaging */
+		.max = 64, /* average 64 frames */
+		.step = 1,
+	},
+	[FIM_CL_TOLERANCE_MIN] = {
+		.ops = &fim_ctrl_ops,
+		.id = V4L2_CID_IMX_FIM_TOLERANCE_MIN,
+		.name = "FIM Tolerance Min",
+		.type = V4L2_CTRL_TYPE_INTEGER,
+		.def = FIM_CL_TOLERANCE_MIN_DEF,
+		.min =    2,
+		.max =  200,
+		.step =   1,
+	},
+	[FIM_CL_TOLERANCE_MAX] = {
+		.ops = &fim_ctrl_ops,
+		.id = V4L2_CID_IMX_FIM_TOLERANCE_MAX,
+		.name = "FIM Tolerance Max",
+		.type = V4L2_CTRL_TYPE_INTEGER,
+		.def = FIM_CL_TOLERANCE_MAX_DEF,
+		.min =    0,
+		.max =  500,
+		.step =   1,
+	},
+	[FIM_CL_NUM_SKIP] = {
+		.ops = &fim_ctrl_ops,
+		.id = V4L2_CID_IMX_FIM_NUM_SKIP,
+		.name = "FIM Num Skip",
+		.type = V4L2_CTRL_TYPE_INTEGER,
+		.def = FIM_CL_NUM_SKIP_DEF,
+		.min =   0, /* skip no frames */
+		.max = 256, /* skip 256 frames */
+		.step =  1,
+	},
+};
+
+static int init_fim_controls(struct imx_media_fim *fim)
+{
+	struct v4l2_ctrl_handler *hdlr = &fim->ctrl_handler;
+	struct v4l2_ctrl_config fim_c;
+	int i, ret;
+
+	v4l2_ctrl_handler_init(hdlr, FIM_NUM_CONTROLS);
+
+	for (i = 0; i < FIM_NUM_CONTROLS; i++) {
+		fim_c = imx_media_fim_ctrl[i];
+
+		/*
+		 * it's not possible to accurately measure the first
+		 * FI after streamon using the EOF method, so force
+		 * num_skip minimum to 1 in that case.
+		 */
+		if (i == FIM_CL_NUM_SKIP && fim->icap_channel < 0)
+			fim_c.min = 1;
+
+		fim->ctrl[i] = v4l2_ctrl_new_custom(hdlr, &fim_c, NULL);
+	}
+
+	if (hdlr->error) {
+		ret = hdlr->error;
+		goto err_free;
+	}
+
+	v4l2_ctrl_cluster(FIM_NUM_CONTROLS, fim->ctrl);
+
+	/* add the FIM controls to the calling subdev ctrl handler */
+	ret = v4l2_ctrl_add_handler(fim->sd->ctrl_handler,
+				    &fim->ctrl_handler, NULL);
+	if (ret)
+		goto err_free;
+
+	return 0;
+err_free:
+	v4l2_ctrl_handler_free(hdlr);
+	return ret;
+}
+
+static int of_parse_fim(struct imx_media_fim *fim, struct device_node *np)
+{
+	struct device_node *fim_np;
+	u32 icap[2];
+	int ret;
+
+	/* by default EOF method is used */
+	fim->icap_channel = -1;
+
+	fim_np = of_get_child_by_name(np, "fim");
+	if (!fim_np || !of_device_is_available(fim_np)) {
+		of_node_put(fim_np);
+		return -ENODEV;
+	}
+
+	if (IS_ENABLED(CONFIG_IMX_GPT_ICAP)) {
+		ret = of_property_read_u32_array(fim_np,
+						 "fsl,input-capture-channel",
+						 icap, 2);
+		if (!ret) {
+			fim->icap_channel = icap[0];
+			fim->icap_flags = icap[1];
+		}
+	}
+
+	of_node_put(fim_np);
+	return 0;
+}
+
+/*
+ * Monitor frame intervals via EOF interrupt. This method is
+ * subject to uncertainty errors introduced by interrupt latency.
+ *
+ * This is a noop if the Input Capture method is being used, since
+ * the frame_interval_monitor() is called by the input capture event
+ * callback handler in that case.
+ */
+void imx_media_fim_eof_monitor(struct imx_media_fim *fim, struct timespec *ts)
+{
+	if (fim->icap_channel >= 0)
+		return;
+
+	frame_interval_monitor(fim, ts);
+}
+EXPORT_SYMBOL_GPL(imx_media_fim_eof_monitor);
+
+/* Called by the subdev in its s_power callback */
+int imx_media_fim_set_power(struct imx_media_fim *fim, bool on)
+{
+	int ret = 0;
+
+	if (fim->icap_channel >= 0) {
+		if (on)
+			ret = fim_request_input_capture(fim);
+		else
+			fim_free_input_capture(fim);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(imx_media_fim_set_power);
+
+/* Called by the subdev in its s_stream callback */
+int imx_media_fim_set_stream(struct imx_media_fim *fim,
+			     struct imx_media_subdev *sensor,
+			     bool on)
+{
+	if (on) {
+		reset_fim(fim, true);
+		update_fim_nominal(fim, sensor);
+
+		if (fim->icap_channel >= 0)
+			fim_acquire_first_ts(fim);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_fim_set_stream);
+
+/* Called by the subdev in its subdev registered callback */
+struct imx_media_fim *imx_media_fim_init(struct v4l2_subdev *sd)
+{
+	struct device_node *node = sd->of_node;
+	struct imx_media_fim *fim;
+	int ret;
+
+	fim = devm_kzalloc(sd->dev, sizeof(*fim), GFP_KERNEL);
+	if (!fim)
+		return ERR_PTR(-ENOMEM);
+
+	/* get media device */
+	fim->md = dev_get_drvdata(sd->v4l2_dev->dev);
+	fim->sd = sd;
+
+	ret = of_parse_fim(fim, node);
+	if (ret)
+		return (ret == -ENODEV) ? NULL : ERR_PTR(ret);
+
+	ret = init_fim_controls(fim);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return fim;
+}
+EXPORT_SYMBOL_GPL(imx_media_fim_init);
+
+void imx_media_fim_free(struct imx_media_fim *fim)
+{
+	v4l2_ctrl_handler_free(&fim->ctrl_handler);
+}
+EXPORT_SYMBOL_GPL(imx_media_fim_free);
diff --git a/drivers/staging/media/imx/imx-media-internal-sd.c b/drivers/staging/media/imx/imx-media-internal-sd.c
new file mode 100644
index 0000000..cdfbf40
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-internal-sd.c
@@ -0,0 +1,349 @@
+/*
+ * Media driver for Freescale i.MX5/6 SOC
+ *
+ * Adds the internal subdevices and the media links between them.
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/platform_device.h>
+#include "imx-media.h"
+
+enum isd_enum {
+	isd_csi0 = 0,
+	isd_csi1,
+	isd_vdic,
+	isd_ic_prp,
+	isd_ic_prpenc,
+	isd_ic_prpvf,
+	num_isd,
+};
+
+static const struct internal_subdev_id {
+	enum isd_enum index;
+	const char *name;
+	u32 grp_id;
+} isd_id[num_isd] = {
+	[isd_csi0] = {
+		.index = isd_csi0,
+		.grp_id = IMX_MEDIA_GRP_ID_CSI0,
+		.name = "imx-ipuv3-csi",
+	},
+	[isd_csi1] = {
+		.index = isd_csi1,
+		.grp_id = IMX_MEDIA_GRP_ID_CSI1,
+		.name = "imx-ipuv3-csi",
+	},
+	[isd_vdic] = {
+		.index = isd_vdic,
+		.grp_id = IMX_MEDIA_GRP_ID_VDIC,
+		.name = "imx-ipuv3-vdic",
+	},
+	[isd_ic_prp] = {
+		.index = isd_ic_prp,
+		.grp_id = IMX_MEDIA_GRP_ID_IC_PRP,
+		.name = "imx-ipuv3-ic",
+	},
+	[isd_ic_prpenc] = {
+		.index = isd_ic_prpenc,
+		.grp_id = IMX_MEDIA_GRP_ID_IC_PRPENC,
+		.name = "imx-ipuv3-ic",
+	},
+	[isd_ic_prpvf] = {
+		.index = isd_ic_prpvf,
+		.grp_id = IMX_MEDIA_GRP_ID_IC_PRPVF,
+		.name = "imx-ipuv3-ic",
+	},
+};
+
+struct internal_link {
+	const struct internal_subdev_id *remote_id;
+	int remote_pad;
+};
+
+struct internal_pad {
+	bool devnode; /* does this pad link to a device node */
+	struct internal_link link[IMX_MEDIA_MAX_LINKS];
+};
+
+static const struct internal_subdev {
+	const struct internal_subdev_id *id;
+	struct internal_pad pad[IMX_MEDIA_MAX_PADS];
+	int num_sink_pads;
+	int num_src_pads;
+} internal_subdev[num_isd] = {
+	[isd_csi0] = {
+		.id = &isd_id[isd_csi0],
+		.num_sink_pads = CSI_NUM_SINK_PADS,
+		.num_src_pads = CSI_NUM_SRC_PADS,
+		.pad[CSI_SRC_PAD_DIRECT] = {
+			.link = {
+				{
+					.remote_id = &isd_id[isd_ic_prp],
+					.remote_pad = PRP_SINK_PAD,
+				}, {
+					.remote_id =  &isd_id[isd_vdic],
+					.remote_pad = VDIC_SINK_PAD_DIRECT,
+				},
+			},
+		},
+		.pad[CSI_SRC_PAD_IDMAC] = {
+			.devnode = true,
+		},
+	},
+
+	[isd_csi1] = {
+		.id = &isd_id[isd_csi1],
+		.num_sink_pads = CSI_NUM_SINK_PADS,
+		.num_src_pads = CSI_NUM_SRC_PADS,
+		.pad[CSI_SRC_PAD_DIRECT] = {
+			.link = {
+				{
+					.remote_id = &isd_id[isd_ic_prp],
+					.remote_pad = PRP_SINK_PAD,
+				}, {
+					.remote_id =  &isd_id[isd_vdic],
+					.remote_pad = VDIC_SINK_PAD_DIRECT,
+				},
+			},
+		},
+		.pad[CSI_SRC_PAD_IDMAC] = {
+			.devnode = true,
+		},
+	},
+
+	[isd_vdic] = {
+		.id = &isd_id[isd_vdic],
+		.num_sink_pads = VDIC_NUM_SINK_PADS,
+		.num_src_pads = VDIC_NUM_SRC_PADS,
+		.pad[VDIC_SINK_PAD_IDMAC] = {
+			.devnode = true,
+		},
+		.pad[VDIC_SRC_PAD_DIRECT] = {
+			.link = {
+				{
+					.remote_id =  &isd_id[isd_ic_prp],
+					.remote_pad = PRP_SINK_PAD,
+				},
+			},
+		},
+	},
+
+	[isd_ic_prp] = {
+		.id = &isd_id[isd_ic_prp],
+		.num_sink_pads = PRP_NUM_SINK_PADS,
+		.num_src_pads = PRP_NUM_SRC_PADS,
+		.pad[PRP_SRC_PAD_PRPENC] = {
+			.link = {
+				{
+					.remote_id = &isd_id[isd_ic_prpenc],
+					.remote_pad = 0,
+				},
+			},
+		},
+		.pad[PRP_SRC_PAD_PRPVF] = {
+			.link = {
+				{
+					.remote_id = &isd_id[isd_ic_prpvf],
+					.remote_pad = 0,
+				},
+			},
+		},
+	},
+
+	[isd_ic_prpenc] = {
+		.id = &isd_id[isd_ic_prpenc],
+		.num_sink_pads = PRPENCVF_NUM_SINK_PADS,
+		.num_src_pads = PRPENCVF_NUM_SRC_PADS,
+		.pad[PRPENCVF_SRC_PAD] = {
+			.devnode = true,
+		},
+	},
+
+	[isd_ic_prpvf] = {
+		.id = &isd_id[isd_ic_prpvf],
+		.num_sink_pads = PRPENCVF_NUM_SINK_PADS,
+		.num_src_pads = PRPENCVF_NUM_SRC_PADS,
+		.pad[PRPENCVF_SRC_PAD] = {
+			.devnode = true,
+		},
+	},
+};
+
+/* form a device name given a group id and ipu id */
+static inline void isd_id_to_devname(char *devname, int sz,
+				     const struct internal_subdev_id *id,
+				     int ipu_id)
+{
+	int pdev_id = ipu_id * num_isd + id->index;
+
+	snprintf(devname, sz, "%s.%d", id->name, pdev_id);
+}
+
+/* adds the links from given internal subdev */
+static int add_internal_links(struct imx_media_dev *imxmd,
+			      const struct internal_subdev *isd,
+			      struct imx_media_subdev *imxsd,
+			      int ipu_id)
+{
+	int i, num_pads, ret;
+
+	num_pads = isd->num_sink_pads + isd->num_src_pads;
+
+	for (i = 0; i < num_pads; i++) {
+		const struct internal_pad *intpad = &isd->pad[i];
+		struct imx_media_pad *pad = &imxsd->pad[i];
+		int j;
+
+		/* init the pad flags for this internal subdev */
+		pad->pad.flags = (i < isd->num_sink_pads) ?
+			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+		/* export devnode pad flag to the subdevs */
+		pad->devnode = intpad->devnode;
+
+		for (j = 0; ; j++) {
+			const struct internal_link *link;
+			char remote_devname[32];
+
+			link = &intpad->link[j];
+
+			if (!link->remote_id)
+				break;
+
+			isd_id_to_devname(remote_devname,
+					  sizeof(remote_devname),
+					  link->remote_id, ipu_id);
+
+			ret = imx_media_add_pad_link(imxmd, pad,
+						     NULL, remote_devname,
+						     i, link->remote_pad);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/* register an internal subdev as a platform device */
+static struct imx_media_subdev *
+add_internal_subdev(struct imx_media_dev *imxmd,
+		    const struct internal_subdev *isd,
+		    int ipu_id)
+{
+	struct imx_media_internal_sd_platformdata pdata;
+	struct platform_device_info pdevinfo = {0};
+	struct imx_media_subdev *imxsd;
+	struct platform_device *pdev;
+
+	pdata.grp_id = isd->id->grp_id;
+
+	/* the id of IPU this subdev will control */
+	pdata.ipu_id = ipu_id;
+
+	/* create subdev name */
+	imx_media_grp_id_to_sd_name(pdata.sd_name, sizeof(pdata.sd_name),
+				    pdata.grp_id, ipu_id);
+
+	pdevinfo.name = isd->id->name;
+	pdevinfo.id = ipu_id * num_isd + isd->id->index;
+	pdevinfo.parent = imxmd->md.dev;
+	pdevinfo.data = &pdata;
+	pdevinfo.size_data = sizeof(pdata);
+	pdevinfo.dma_mask = DMA_BIT_MASK(32);
+
+	pdev = platform_device_register_full(&pdevinfo);
+	if (IS_ERR(pdev))
+		return ERR_CAST(pdev);
+
+	imxsd = imx_media_add_async_subdev(imxmd, NULL, pdev);
+	if (IS_ERR(imxsd))
+		return imxsd;
+
+	imxsd->num_sink_pads = isd->num_sink_pads;
+	imxsd->num_src_pads = isd->num_src_pads;
+
+	return imxsd;
+}
+
+/* adds the internal subdevs in one ipu */
+static int add_ipu_internal_subdevs(struct imx_media_dev *imxmd,
+				    struct imx_media_subdev *csi0,
+				    struct imx_media_subdev *csi1,
+				    int ipu_id)
+{
+	enum isd_enum i;
+	int ret;
+
+	for (i = 0; i < num_isd; i++) {
+		const struct internal_subdev *isd = &internal_subdev[i];
+		struct imx_media_subdev *imxsd;
+
+		/*
+		 * the CSIs are represented in the device tree, so those
+		 * devices are already created, and they are added to the
+		 * async subdev list by of_parse_subdev(); we are given
+		 * those subdevs here as csi0 and csi1.
+		 */
+		switch (isd->id->grp_id) {
+		case IMX_MEDIA_GRP_ID_CSI0:
+			imxsd = csi0;
+			break;
+		case IMX_MEDIA_GRP_ID_CSI1:
+			imxsd = csi1;
+			break;
+		default:
+			imxsd = add_internal_subdev(imxmd, isd, ipu_id);
+			break;
+		}
+
+		if (IS_ERR(imxsd))
+			return PTR_ERR(imxsd);
+
+		/* add the links from this subdev */
+		if (imxsd) {
+			ret = add_internal_links(imxmd, isd, imxsd, ipu_id);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+int imx_media_add_internal_subdevs(struct imx_media_dev *imxmd,
+				   struct imx_media_subdev *csi[4])
+{
+	int ret;
+
+	ret = add_ipu_internal_subdevs(imxmd, csi[0], csi[1], 0);
+	if (ret)
+		goto remove;
+
+	ret = add_ipu_internal_subdevs(imxmd, csi[2], csi[3], 1);
+	if (ret)
+		goto remove;
+
+	return 0;
+
+remove:
+	imx_media_remove_internal_subdevs(imxmd);
+	return ret;
+}
+
+void imx_media_remove_internal_subdevs(struct imx_media_dev *imxmd)
+{
+	struct imx_media_subdev *imxsd;
+	int i;
+
+	for (i = 0; i < imxmd->subdev_notifier.num_subdevs; i++) {
+		imxsd = &imxmd->subdev[i];
+		if (!imxsd->pdev)
+			continue;
+		platform_device_unregister(imxsd->pdev);
+	}
+}
diff --git a/drivers/staging/media/imx/imx-media-of.c b/drivers/staging/media/imx/imx-media-of.c
new file mode 100644
index 0000000..b383be4
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-of.c
@@ -0,0 +1,267 @@
+/*
+ * Media driver for Freescale i.MX5/6 SOC
+ *
+ * Open Firmware parsing.
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/of_platform.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+#include <media/videobuf2-dma-contig.h>
+#include <video/imx-ipu-v3.h>
+#include "imx-media.h"
+
+static int of_add_pad_link(struct imx_media_dev *imxmd,
+			   struct imx_media_pad *pad,
+			   struct device_node *local_sd_node,
+			   struct device_node *remote_sd_node,
+			   int local_pad, int remote_pad)
+{
+	dev_dbg(imxmd->md.dev, "%s: adding %s:%d -> %s:%d\n", __func__,
+		local_sd_node->name, local_pad,
+		remote_sd_node->name, remote_pad);
+
+	return imx_media_add_pad_link(imxmd, pad, remote_sd_node, NULL,
+				      local_pad, remote_pad);
+}
+
+static void of_parse_sensor(struct imx_media_dev *imxmd,
+			    struct imx_media_subdev *sensor,
+			    struct device_node *sensor_np)
+{
+	struct device_node *endpoint;
+
+	endpoint = of_graph_get_next_endpoint(sensor_np, NULL);
+	if (endpoint) {
+		v4l2_of_parse_endpoint(endpoint, &sensor->sensor_ep);
+		of_node_put(endpoint);
+	}
+}
+
+static int of_get_port_count(const struct device_node *np)
+{
+	struct device_node *ports, *child;
+	int num = 0;
+
+	/* check if this node has a ports subnode */
+	ports = of_get_child_by_name(np, "ports");
+	if (ports)
+		np = ports;
+
+	for_each_child_of_node(np, child)
+		if (of_node_cmp(child->name, "port") == 0)
+			num++;
+
+	of_node_put(ports);
+	return num;
+}
+
+/*
+ * find the remote device node and remote port id (remote pad #)
+ * given local endpoint node
+ */
+static void of_get_remote_pad(struct device_node *epnode,
+			      struct device_node **remote_node,
+			      int *remote_pad)
+{
+	struct device_node *rp, *rpp;
+	struct device_node *remote;
+
+	rp = of_graph_get_remote_port(epnode);
+	rpp = of_graph_get_remote_port_parent(epnode);
+
+	if (of_device_is_compatible(rpp, "fsl,imx6q-ipu")) {
+		/* the remote is one of the CSI ports */
+		remote = rp;
+		*remote_pad = 0;
+		of_node_put(rpp);
+	} else {
+		remote = rpp;
+		of_property_read_u32(rp, "reg", remote_pad);
+		of_node_put(rp);
+	}
+
+	if (!remote || !of_device_is_available(remote)) {
+		of_node_put(remote);
+		*remote_node = NULL;
+	} else {
+		*remote_node = remote;
+	}
+}
+
+static struct imx_media_subdev *
+of_parse_subdev(struct imx_media_dev *imxmd, struct device_node *sd_np,
+		bool is_csi_port)
+{
+	struct imx_media_subdev *imxsd;
+	int i, num_pads, ret;
+
+	if (!of_device_is_available(sd_np)) {
+		dev_dbg(imxmd->md.dev, "%s: %s not enabled\n", __func__,
+			sd_np->name);
+		return NULL;
+	}
+
+	/* register this subdev with async notifier */
+	imxsd = imx_media_add_async_subdev(imxmd, sd_np, NULL);
+	if (IS_ERR_OR_NULL(imxsd))
+		return imxsd;
+
+	if (is_csi_port) {
+		/*
+		 * the ipu-csi has one sink port and two source ports.
+		 * The source ports are not represented in the device tree,
+		 * but are described by the internal pads and links later.
+		 */
+		num_pads = CSI_NUM_PADS;
+		imxsd->num_sink_pads = CSI_NUM_SINK_PADS;
+	} else if (of_device_is_compatible(sd_np, "fsl,imx6-mipi-csi2")) {
+		num_pads = of_get_port_count(sd_np);
+		/* the mipi csi2 receiver has only one sink port */
+		imxsd->num_sink_pads = 1;
+	} else if (of_device_is_compatible(sd_np, "video-multiplexer")) {
+		num_pads = of_get_port_count(sd_np);
+		/* for the video mux, all but the last port are sinks */
+		imxsd->num_sink_pads = num_pads - 1;
+	} else {
+		num_pads = of_get_port_count(sd_np);
+		if (num_pads != 1) {
+			dev_warn(imxmd->md.dev,
+				 "%s: unknown device %s with %d ports\n",
+				 __func__, sd_np->name, num_pads);
+			return NULL;
+		}
+
+		/*
+		 * we got to this node from this single source port,
+		 * there are no sink pads.
+		 */
+		imxsd->num_sink_pads = 0;
+	}
+
+	if (imxsd->num_sink_pads >= num_pads)
+		return ERR_PTR(-EINVAL);
+
+	imxsd->num_src_pads = num_pads - imxsd->num_sink_pads;
+
+	dev_dbg(imxmd->md.dev, "%s: %s has %d pads (%d sink, %d src)\n",
+		__func__, sd_np->name, num_pads,
+		imxsd->num_sink_pads, imxsd->num_src_pads);
+
+	/*
+	 * With no sink, this subdev node is the original source
+	 * of video; parse its media bus config for use by the pipeline.
+	 */
+	if (imxsd->num_sink_pads == 0)
+		of_parse_sensor(imxmd, imxsd, sd_np);
+
+	for (i = 0; i < num_pads; i++) {
+		struct device_node *epnode = NULL, *port, *remote_np;
+		struct imx_media_subdev *remote_imxsd;
+		struct imx_media_pad *pad;
+		int remote_pad;
+
+		/* init this pad */
+		pad = &imxsd->pad[i];
+		pad->pad.flags = (i < imxsd->num_sink_pads) ?
+			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+
+		if (is_csi_port)
+			port = (i < imxsd->num_sink_pads) ? sd_np : NULL;
+		else
+			port = of_graph_get_port_by_id(sd_np, i);
+		if (!port)
+			continue;
+
+		for_each_child_of_node(port, epnode) {
+			of_get_remote_pad(epnode, &remote_np, &remote_pad);
+			if (!remote_np)
+				continue;
+
+			ret = of_add_pad_link(imxmd, pad, sd_np, remote_np,
+					      i, remote_pad);
+			if (ret) {
+				imxsd = ERR_PTR(ret);
+				break;
+			}
+
+			if (i < imxsd->num_sink_pads) {
+				/* follow sink endpoints upstream */
+				remote_imxsd = of_parse_subdev(imxmd,
+							       remote_np,
+							       false);
+				if (IS_ERR(remote_imxsd)) {
+					imxsd = remote_imxsd;
+					break;
+				}
+			}
+
+			of_node_put(remote_np);
+		}
+
+		if (port != sd_np)
+			of_node_put(port);
+		if (IS_ERR(imxsd)) {
+			of_node_put(remote_np);
+			of_node_put(epnode);
+			break;
+		}
+	}
+
+	return imxsd;
+}
+
+int imx_media_of_parse(struct imx_media_dev *imxmd,
+		       struct imx_media_subdev *(*csi)[4],
+		       struct device_node *np)
+{
+	struct imx_media_subdev *lcsi;
+	struct device_node *csi_np;
+	u32 ipu_id, csi_id;
+	int i, ret;
+
+	for (i = 0; ; i++) {
+		csi_np = of_parse_phandle(np, "ports", i);
+		if (!csi_np)
+			break;
+
+		lcsi = of_parse_subdev(imxmd, csi_np, true);
+		if (IS_ERR(lcsi)) {
+			ret = PTR_ERR(lcsi);
+			goto err_put;
+		}
+
+		ret = of_property_read_u32(csi_np, "reg", &csi_id);
+		if (ret) {
+			dev_err(imxmd->md.dev,
+				"%s: csi port missing reg property!\n",
+				__func__);
+			goto err_put;
+		}
+
+		ipu_id = of_alias_get_id(csi_np->parent, "ipu");
+		of_node_put(csi_np);
+
+		if (ipu_id > 1 || csi_id > 1) {
+			dev_err(imxmd->md.dev,
+				"%s: invalid ipu/csi id (%u/%u)\n",
+				__func__, ipu_id, csi_id);
+			return -EINVAL;
+		}
+
+		(*csi)[ipu_id * 2 + csi_id] = lcsi;
+	}
+
+	return 0;
+err_put:
+	of_node_put(csi_np);
+	return ret;
+}
diff --git a/drivers/staging/media/imx/imx-media-utils.c b/drivers/staging/media/imx/imx-media-utils.c
new file mode 100644
index 0000000..55603d9
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-utils.c
@@ -0,0 +1,701 @@
+/*
+ * V4L2 Media Controller Driver for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/module.h>
+#include "imx-media.h"
+
+/*
+ * List of pixel formats for the subdevs. This must be a super-set of
+ * the formats supported by the ipu image converter.
+ *
+ * The non-mbus formats (planar and BGR) must all fall at the end of
+ * this table, otherwise enum_fmt() at media pads will stop before
+ * seeing all the supported mbus formats.
+ */
+static const struct imx_media_pixfmt imx_media_formats[] = {
+	{
+		.fourcc	= V4L2_PIX_FMT_UYVY,
+		.codes  = {
+			MEDIA_BUS_FMT_UYVY8_2X8,
+			MEDIA_BUS_FMT_UYVY8_1X16
+		},
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 16,
+	}, {
+		.fourcc	= V4L2_PIX_FMT_YUYV,
+		.codes  = {
+			MEDIA_BUS_FMT_YUYV8_2X8,
+			MEDIA_BUS_FMT_YUYV8_1X16
+		},
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 16,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YUV32,
+		.codes  = {MEDIA_BUS_FMT_AYUV8_1X32},
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 32,
+		.ipufmt = true,
+	}, {
+		.fourcc	= V4L2_PIX_FMT_RGB565,
+		.codes  = {MEDIA_BUS_FMT_RGB565_2X8_LE},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 16,
+	}, {
+		.fourcc	= V4L2_PIX_FMT_RGB24,
+		.codes  = {
+			MEDIA_BUS_FMT_RGB888_1X24,
+			MEDIA_BUS_FMT_RGB888_2X12_LE
+		},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 24,
+	}, {
+		.fourcc	= V4L2_PIX_FMT_RGB32,
+		.codes  = {MEDIA_BUS_FMT_ARGB8888_1X32},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 32,
+		.ipufmt = true,
+	},
+	/*** non-mbus formats start here ***/
+	{
+		.fourcc	= V4L2_PIX_FMT_BGR24,
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 24,
+	}, {
+		.fourcc	= V4L2_PIX_FMT_BGR32,
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 32,
+	}, {
+		.fourcc	= V4L2_PIX_FMT_YUV420,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 12,
+		.planar = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YVU420,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 12,
+		.planar = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YUV422P,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 16,
+		.planar = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_NV12,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 12,
+		.planar = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_NV16,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 16,
+		.planar = true,
+	},
+};
+
+static const u32 imx_media_ipu_internal_codes[] = {
+	MEDIA_BUS_FMT_AYUV8_1X32, MEDIA_BUS_FMT_ARGB8888_1X32,
+};
+
+static inline u32 pixfmt_to_colorspace(const struct imx_media_pixfmt *fmt)
+{
+	return (fmt->cs == IPUV3_COLORSPACE_RGB) ?
+		V4L2_COLORSPACE_SRGB : V4L2_COLORSPACE_SMPTE170M;
+}
+
+static const struct imx_media_pixfmt *find_format(u32 fourcc, u32 code,
+						  bool allow_rgb,
+						  bool allow_planar,
+						  bool ipu_fmt_only)
+{
+	const struct imx_media_pixfmt *fmt, *ret = NULL;
+	int i, j;
+
+	for (i = 0; i < ARRAY_SIZE(imx_media_formats); i++) {
+		fmt = &imx_media_formats[i];
+
+		if (ipu_fmt_only && !fmt->ipufmt)
+			continue;
+
+		if (fourcc && fmt->fourcc == fourcc &&
+		    (fmt->cs != IPUV3_COLORSPACE_RGB || allow_rgb) &&
+		    (!fmt->planar || allow_planar)) {
+			ret = fmt;
+			goto out;
+		}
+
+		for (j = 0; code && fmt->codes[j]; j++) {
+			if (fmt->codes[j] == code && !fmt->planar &&
+			    (fmt->cs != IPUV3_COLORSPACE_RGB || allow_rgb)) {
+				ret = fmt;
+				goto out;
+			}
+		}
+	}
+out:
+	return ret;
+}
+
+const struct imx_media_pixfmt *imx_media_find_format(u32 fourcc, u32 code,
+						     bool allow_rgb,
+						     bool allow_planar)
+{
+	return find_format(fourcc, code, allow_rgb, allow_planar, false);
+}
+EXPORT_SYMBOL_GPL(imx_media_find_format);
+
+const struct imx_media_pixfmt *imx_media_find_ipu_format(u32 fourcc,
+							 u32 code,
+							 bool allow_rgb)
+{
+	return find_format(fourcc, code, allow_rgb, false, true);
+}
+EXPORT_SYMBOL_GPL(imx_media_find_ipu_format);
+
+int imx_media_enum_format(u32 *fourcc, u32 *code, u32 index,
+			  bool allow_rgb, bool allow_planar)
+{
+	const struct imx_media_pixfmt *fmt;
+
+	if (index >= ARRAY_SIZE(imx_media_formats))
+		return -EINVAL;
+
+	fmt = &imx_media_formats[index];
+
+	if ((fmt->cs == IPUV3_COLORSPACE_RGB && !allow_rgb) ||
+	    (fmt->planar && !allow_planar))
+		return -EINVAL;
+
+	if (code)
+		*code = fmt->codes[0];
+	if (fourcc)
+		*fourcc = fmt->fourcc;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_enum_format);
+
+int imx_media_enum_ipu_format(u32 *fourcc, u32 *code, u32 index,
+			      bool allow_rgb)
+{
+	const struct imx_media_pixfmt *fmt;
+	u32 lcode;
+
+	if (index >= ARRAY_SIZE(imx_media_ipu_internal_codes))
+		return -EINVAL;
+
+	lcode = imx_media_ipu_internal_codes[index];
+
+	fmt = find_format(0, lcode, allow_rgb, false, true);
+	if (!fmt)
+		return -EINVAL;
+
+	if (code)
+		*code = fmt->codes[0];
+	if (fourcc)
+		*fourcc = fmt->fourcc;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_enum_ipu_format);
+
+int imx_media_init_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
+			    u32 width, u32 height, u32 code, u32 field,
+			    const struct imx_media_pixfmt **cc)
+{
+	const struct imx_media_pixfmt *lcc;
+
+	mbus->width = width;
+	mbus->height = height;
+	mbus->field = field;
+	if (code == 0)
+		imx_media_enum_format(NULL, &code, 0, true, false);
+	lcc = imx_media_find_format(0, code, true, false);
+	if (!lcc)
+		return -EINVAL;
+	mbus->code = code;
+	mbus->colorspace = pixfmt_to_colorspace(lcc);
+
+	if (cc)
+		*cc = lcc;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_init_mbus_fmt);
+
+int imx_media_mbus_fmt_to_pix_fmt(struct v4l2_pix_format *pix,
+				  struct v4l2_mbus_framefmt *mbus,
+				  const struct imx_media_pixfmt *cc)
+{
+	u32 stride;
+
+	if (!cc) {
+		cc = imx_media_find_format(0, mbus->code, true, false);
+		if (!cc)
+			return -EINVAL;
+	}
+
+	stride = cc->planar ? mbus->width : (mbus->width * cc->bpp) >> 3;
+
+	pix->width = mbus->width;
+	pix->height = mbus->height;
+	pix->pixelformat = cc->fourcc;
+	pix->colorspace = pixfmt_to_colorspace(cc);
+	pix->field = mbus->field;
+	pix->bytesperline = stride;
+	pix->sizeimage = (pix->width * pix->height * cc->bpp) >> 3;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_mbus_fmt_to_pix_fmt);
+
+int imx_media_mbus_fmt_to_ipu_image(struct ipu_image *image,
+				    struct v4l2_mbus_framefmt *mbus)
+{
+	int ret;
+
+	memset(image, 0, sizeof(*image));
+
+	ret = imx_media_mbus_fmt_to_pix_fmt(&image->pix, mbus, NULL);
+	if (ret)
+		return ret;
+
+	image->rect.width = mbus->width;
+	image->rect.height = mbus->height;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_mbus_fmt_to_ipu_image);
+
+int imx_media_ipu_image_to_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
+				    struct ipu_image *image)
+{
+	const struct imx_media_pixfmt *fmt;
+
+	fmt = imx_media_find_format(image->pix.pixelformat, 0, true, false);
+	if (!fmt)
+		return -EINVAL;
+
+	memset(mbus, 0, sizeof(*mbus));
+	mbus->width = image->pix.width;
+	mbus->height = image->pix.height;
+	mbus->code = fmt->codes[0];
+	mbus->colorspace = pixfmt_to_colorspace(fmt);
+	mbus->field = image->pix.field;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_ipu_image_to_mbus_fmt);
+
+void imx_media_free_dma_buf(struct imx_media_dev *imxmd,
+			    struct imx_media_dma_buf *buf)
+{
+	if (buf->virt)
+		dma_free_coherent(imxmd->md.dev, buf->len,
+				  buf->virt, buf->phys);
+
+	buf->virt = NULL;
+	buf->phys = 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_free_dma_buf);
+
+int imx_media_alloc_dma_buf(struct imx_media_dev *imxmd,
+			    struct imx_media_dma_buf *buf,
+			    int size)
+{
+	imx_media_free_dma_buf(imxmd, buf);
+
+	buf->len = PAGE_ALIGN(size);
+	buf->virt = dma_alloc_coherent(imxmd->md.dev, buf->len, &buf->phys,
+				       GFP_DMA | GFP_KERNEL);
+	if (!buf->virt) {
+		dev_err(imxmd->md.dev, "failed to alloc dma buffer\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(imx_media_alloc_dma_buf);
+
+/* form a subdev name given a group id and ipu id */
+void imx_media_grp_id_to_sd_name(char *sd_name, int sz, u32 grp_id, int ipu_id)
+{
+	int id;
+
+	switch (grp_id) {
+	case IMX_MEDIA_GRP_ID_CSI0 ... IMX_MEDIA_GRP_ID_CSI1:
+		id = (grp_id >> IMX_MEDIA_GRP_ID_CSI_BIT) - 1;
+		snprintf(sd_name, sz, "ipu%d_csi%d", ipu_id + 1, id);
+		break;
+	case IMX_MEDIA_GRP_ID_VDIC:
+		snprintf(sd_name, sz, "ipu%d_vdic", ipu_id + 1);
+		break;
+	case IMX_MEDIA_GRP_ID_IC_PRP:
+		snprintf(sd_name, sz, "ipu%d_ic_prp", ipu_id + 1);
+		break;
+	case IMX_MEDIA_GRP_ID_IC_PRPENC:
+		snprintf(sd_name, sz, "ipu%d_ic_prpenc", ipu_id + 1);
+		break;
+	case IMX_MEDIA_GRP_ID_IC_PRPVF:
+		snprintf(sd_name, sz, "ipu%d_ic_prpvf", ipu_id + 1);
+		break;
+	default:
+		break;
+	}
+}
+EXPORT_SYMBOL_GPL(imx_media_grp_id_to_sd_name);
+
+struct imx_media_subdev *
+imx_media_find_subdev_by_sd(struct imx_media_dev *imxmd,
+			    struct v4l2_subdev *sd)
+{
+	struct imx_media_subdev *imxsd;
+	int i;
+
+	for (i = 0; i < imxmd->num_subdevs; i++) {
+		imxsd = &imxmd->subdev[i];
+		if (sd == imxsd->sd)
+			return imxsd;
+	}
+
+	return ERR_PTR(-ENODEV);
+}
+EXPORT_SYMBOL_GPL(imx_media_find_subdev_by_sd);
+
+struct imx_media_subdev *
+imx_media_find_subdev_by_id(struct imx_media_dev *imxmd, u32 grp_id)
+{
+	struct imx_media_subdev *imxsd;
+	int i;
+
+	for (i = 0; i < imxmd->num_subdevs; i++) {
+		imxsd = &imxmd->subdev[i];
+		if (imxsd->sd && imxsd->sd->grp_id == grp_id)
+			return imxsd;
+	}
+
+	return ERR_PTR(-ENODEV);
+}
+EXPORT_SYMBOL_GPL(imx_media_find_subdev_by_id);
+
+/*
+ * Search for a subdev in the current pipeline with given grp_id.
+ * Called with mdev->graph_mutex held.
+ */
+static struct v4l2_subdev *
+find_pipeline_subdev(struct imx_media_dev *imxmd,
+		     struct media_graph *graph,
+		     struct media_entity *start_entity,
+		     u32 grp_id)
+{
+	struct media_entity *entity;
+	struct v4l2_subdev *sd;
+
+	media_graph_walk_start(graph, start_entity);
+
+	while ((entity = media_graph_walk_next(graph))) {
+		if (!is_media_entity_v4l2_subdev(entity))
+			continue;
+
+		sd = media_entity_to_v4l2_subdev(entity);
+		if (sd->grp_id & grp_id)
+			return sd;
+	}
+
+	return NULL;
+}
+
+/*
+ * Search for an entity in the current pipeline with given grp_id,
+ * then locate the remote enabled source pad from that entity.
+ * Called with mdev->graph_mutex held.
+ */
+static struct media_pad *
+find_pipeline_remote_source_pad(struct imx_media_dev *imxmd,
+				struct media_graph *graph,
+				struct media_entity *start_entity,
+				u32 grp_id)
+{
+	struct media_pad *pad = NULL;
+	struct media_entity *me;
+	struct v4l2_subdev *sd;
+	int i;
+
+	sd = find_pipeline_subdev(imxmd, graph, start_entity, grp_id);
+	if (!sd)
+		return NULL;
+	me = &sd->entity;
+
+	/* Find remote source pad */
+	for (i = 0; i < me->num_pads; i++) {
+		struct media_pad *spad = &me->pads[i];
+
+		if (!(spad->flags & MEDIA_PAD_FL_SINK))
+			continue;
+		pad = media_entity_remote_pad(spad);
+		if (pad)
+			return pad;
+	}
+
+	return NULL;
+}
+
+/*
+ * Find the mipi-csi2 virtual channel reached from the given
+ * start entity in the current pipeline.
+ * Must be called with mdev->graph_mutex held.
+ */
+int imx_media_find_mipi_csi2_channel(struct imx_media_dev *imxmd,
+				     struct media_entity *start_entity)
+{
+	struct media_graph graph;
+	struct v4l2_subdev *sd;
+	struct media_pad *pad;
+	int ret;
+
+	ret = media_graph_walk_init(&graph, &imxmd->md);
+	if (ret)
+		return ret;
+
+	/* first try to locate the mipi-csi2 from the video mux */
+	pad = find_pipeline_remote_source_pad(imxmd, &graph, start_entity,
+					      IMX_MEDIA_GRP_ID_VIDMUX);
+	/* if couldn't reach it from there, try from a CSI */
+	if (!pad)
+		pad = find_pipeline_remote_source_pad(imxmd, &graph,
+						      start_entity,
+						      IMX_MEDIA_GRP_ID_CSI);
+	if (pad) {
+		sd = media_entity_to_v4l2_subdev(pad->entity);
+		if (sd->grp_id & IMX_MEDIA_GRP_ID_CSI2) {
+			ret = pad->index - 1; /* found it! */
+			dev_dbg(imxmd->md.dev, "found vc%d from %s\n",
+				ret, start_entity->name);
+			goto cleanup;
+		}
+	}
+
+	ret = -EPIPE;
+
+cleanup:
+	media_graph_walk_cleanup(&graph);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(imx_media_find_mipi_csi2_channel);
+
+/*
+ * Find a subdev reached from the given start entity in the
+ * current pipeline.
+ * Must be called with mdev->graph_mutex held.
+ */
+struct imx_media_subdev *
+imx_media_find_pipeline_subdev(struct imx_media_dev *imxmd,
+			       struct media_entity *start_entity,
+			       u32 grp_id)
+{
+	struct imx_media_subdev *imxsd;
+	struct media_graph graph;
+	struct v4l2_subdev *sd;
+	int ret;
+
+	ret = media_graph_walk_init(&graph, &imxmd->md);
+	if (ret)
+		return ERR_PTR(ret);
+
+	sd = find_pipeline_subdev(imxmd, &graph, start_entity, grp_id);
+	if (!sd) {
+		imxsd = ERR_PTR(-ENODEV);
+		goto cleanup;
+	}
+
+	imxsd = imx_media_find_subdev_by_sd(imxmd, sd);
+cleanup:
+	media_graph_walk_cleanup(&graph);
+	return imxsd;
+}
+EXPORT_SYMBOL_GPL(imx_media_find_pipeline_subdev);
+
+struct imx_media_subdev *
+__imx_media_find_sensor(struct imx_media_dev *imxmd,
+			struct media_entity *start_entity)
+{
+	return imx_media_find_pipeline_subdev(imxmd, start_entity,
+					      IMX_MEDIA_GRP_ID_SENSOR);
+}
+EXPORT_SYMBOL_GPL(__imx_media_find_sensor);
+
+struct imx_media_subdev *
+imx_media_find_sensor(struct imx_media_dev *imxmd,
+		      struct media_entity *start_entity)
+{
+	struct imx_media_subdev *sensor;
+
+	mutex_lock(&imxmd->md.graph_mutex);
+	sensor = __imx_media_find_sensor(imxmd, start_entity);
+	mutex_unlock(&imxmd->md.graph_mutex);
+
+	return sensor;
+}
+EXPORT_SYMBOL_GPL(imx_media_find_sensor);
+
+/*
+ * The subdevs have to be powered on/off, and streaming
+ * enabled/disabled, in a specific sequence.
+ */
+static const u32 stream_on_seq[] = {
+	IMX_MEDIA_GRP_ID_IC_PRPVF,
+	IMX_MEDIA_GRP_ID_IC_PRPENC,
+	IMX_MEDIA_GRP_ID_IC_PRP,
+	IMX_MEDIA_GRP_ID_VDIC,
+	IMX_MEDIA_GRP_ID_CSI2,
+	IMX_MEDIA_GRP_ID_SENSOR,
+	IMX_MEDIA_GRP_ID_VIDMUX,
+	IMX_MEDIA_GRP_ID_CSI,
+};
+
+static const u32 stream_off_seq[] = {
+	IMX_MEDIA_GRP_ID_IC_PRPVF,
+	IMX_MEDIA_GRP_ID_IC_PRPENC,
+	IMX_MEDIA_GRP_ID_IC_PRP,
+	IMX_MEDIA_GRP_ID_VDIC,
+	IMX_MEDIA_GRP_ID_CSI,
+	IMX_MEDIA_GRP_ID_VIDMUX,
+	IMX_MEDIA_GRP_ID_SENSOR,
+	IMX_MEDIA_GRP_ID_CSI2,
+};
+
+#define NUM_STREAM_ENTITIES ARRAY_SIZE(stream_on_seq)
+
+static const u32 power_on_seq[] = {
+	IMX_MEDIA_GRP_ID_CSI2,
+	IMX_MEDIA_GRP_ID_SENSOR,
+	IMX_MEDIA_GRP_ID_VIDMUX,
+	IMX_MEDIA_GRP_ID_CSI,
+	IMX_MEDIA_GRP_ID_VDIC,
+	IMX_MEDIA_GRP_ID_IC_PRPENC,
+	IMX_MEDIA_GRP_ID_IC_PRPVF,
+};
+
+static const u32 power_off_seq[] = {
+	IMX_MEDIA_GRP_ID_IC_PRPVF,
+	IMX_MEDIA_GRP_ID_IC_PRPENC,
+	IMX_MEDIA_GRP_ID_VDIC,
+	IMX_MEDIA_GRP_ID_CSI,
+	IMX_MEDIA_GRP_ID_VIDMUX,
+	IMX_MEDIA_GRP_ID_SENSOR,
+	IMX_MEDIA_GRP_ID_CSI2,
+};
+
+#define NUM_POWER_ENTITIES ARRAY_SIZE(power_on_seq)
+
+static int imx_media_set_stream(struct imx_media_dev *imxmd,
+				struct media_entity *start_entity,
+				bool on)
+{
+	struct media_graph graph;
+	struct v4l2_subdev *sd;
+	int i, ret;
+	u32 id;
+
+	mutex_lock(&imxmd->md.graph_mutex);
+
+	ret = media_graph_walk_init(&graph, &imxmd->md);
+	if (ret)
+		goto unlock;
+
+	for (i = 0; i < NUM_STREAM_ENTITIES; i++) {
+		id = on ? stream_on_seq[i] : stream_off_seq[i];
+		sd = find_pipeline_subdev(imxmd, &graph,
+					  start_entity, id);
+		if (!sd)
+			continue;
+
+		ret = v4l2_subdev_call(sd, video, s_stream, on);
+		if (on && ret && ret != -ENOIOCTLCMD)
+			break;
+	}
+
+	media_graph_walk_cleanup(&graph);
+unlock:
+	mutex_unlock(&imxmd->md.graph_mutex);
+
+	return (on && ret && ret != -ENOIOCTLCMD) ? ret : 0;
+}
+
+/*
+ * Turn current pipeline streaming on/off starting from entity.
+ */
+int imx_media_pipeline_set_stream(struct imx_media_dev *imxmd,
+				  struct media_entity *entity,
+				  struct media_pipeline *pipe,
+				  bool on)
+{
+	int ret = 0;
+
+	if (on) {
+		ret = media_pipeline_start(entity, pipe);
+		if (ret)
+			return ret;
+		ret = imx_media_set_stream(imxmd, entity, true);
+		if (!ret)
+			return 0;
+		/* fall through */
+	}
+
+	imx_media_set_stream(imxmd, entity, false);
+	if (entity->pipe)
+		media_pipeline_stop(entity);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(imx_media_pipeline_set_stream);
+
+static int imx_media_set_power(struct imx_media_dev *imxmd,
+			       struct media_graph *graph,
+			       struct media_entity *start_entity, bool on)
+{
+	struct v4l2_subdev *sd;
+	int i, ret = 0;
+	u32 id;
+
+	for (i = 0; i < NUM_POWER_ENTITIES; i++) {
+		id = on ? power_on_seq[i] : power_off_seq[i];
+		sd = find_pipeline_subdev(imxmd, graph, start_entity, id);
+		if (!sd)
+			continue;
+
+		ret = v4l2_subdev_call(sd, core, s_power, on);
+		if (on && ret && ret != -ENOIOCTLCMD)
+			break;
+	}
+
+	return (on && ret && ret != -ENOIOCTLCMD) ? ret : 0;
+}
+
+/*
+ * Turn current pipeline power on/off starting from start_entity.
+ * Must be called with mdev->graph_mutex held.
+ */
+int imx_media_pipeline_set_power(struct imx_media_dev *imxmd,
+				 struct media_graph *graph,
+				 struct media_entity *start_entity, bool on)
+{
+	int ret;
+
+	ret = imx_media_set_power(imxmd, graph, start_entity, on);
+	if (ret)
+		imx_media_set_power(imxmd, graph, start_entity, false);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(imx_media_pipeline_set_power);
+
+MODULE_DESCRIPTION("i.MX5/6 v4l2 media controller driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/staging/media/imx/imx-media.h b/drivers/staging/media/imx/imx-media.h
new file mode 100644
index 0000000..3d4f3c7
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media.h
@@ -0,0 +1,297 @@
+/*
+ * V4L2 Media Controller Driver for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#ifndef _IMX_MEDIA_H
+#define _IMX_MEDIA_H
+
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+#include <media/videobuf2-dma-contig.h>
+#include <video/imx-ipu-v3.h>
+
+/*
+ * This is somewhat arbitrary, but we need at least:
+ * - 2 camera interface subdevs
+ * - 3 IC subdevs
+ * - 2 CSI subdevs
+ * - 1 mipi-csi2 receiver subdev
+ * - 2 video-mux subdevs
+ * - 3 camera sensor subdevs (2 parallel, 1 mipi-csi2)
+ *
+ * And double the above numbers for quad i.mx!
+ */
+#define IMX_MEDIA_MAX_SUBDEVS       48
+/* max pads per subdev */
+#define IMX_MEDIA_MAX_PADS          16
+/* max links per pad */
+#define IMX_MEDIA_MAX_LINKS          8
+
+/*
+ * Pad definitions for the subdevs with multiple source or
+ * sink pads
+ */
+/* ipu_csi */
+enum {
+	CSI_SINK_PAD = 0,
+	CSI_SRC_PAD_DIRECT,
+	CSI_SRC_PAD_IDMAC,
+	CSI_NUM_PADS,
+};
+
+#define CSI_NUM_SINK_PADS 1
+#define CSI_NUM_SRC_PADS  2
+
+/* ipu_vdic */
+enum {
+	VDIC_SINK_PAD_DIRECT = 0,
+	VDIC_SINK_PAD_IDMAC,
+	VDIC_SRC_PAD_DIRECT,
+	VDIC_NUM_PADS,
+};
+
+#define VDIC_NUM_SINK_PADS 2
+#define VDIC_NUM_SRC_PADS  1
+
+/* ipu_ic_prp */
+enum {
+	PRP_SINK_PAD = 0,
+	PRP_SRC_PAD_PRPENC,
+	PRP_SRC_PAD_PRPVF,
+	PRP_NUM_PADS,
+};
+
+#define PRP_NUM_SINK_PADS 1
+#define PRP_NUM_SRC_PADS  2
+
+/* ipu_ic_prpencvf */
+enum {
+	PRPENCVF_SINK_PAD = 0,
+	PRPENCVF_SRC_PAD,
+	PRPENCVF_NUM_PADS,
+};
+
+#define PRPENCVF_NUM_SINK_PADS 1
+#define PRPENCVF_NUM_SRC_PADS  1
+
+/* How long to wait for EOF interrupts in the buffer-capture subdevs */
+#define IMX_MEDIA_EOF_TIMEOUT       1000
+
+struct imx_media_pixfmt {
+	u32     fourcc;
+	u32     codes[4];
+	int     bpp;     /* total bpp */
+	enum ipu_color_space cs;
+	bool    planar;  /* is a planar format */
+	bool    ipufmt;  /* is one of the IPU internal formats */
+};
+
+struct imx_media_buffer {
+	struct vb2_v4l2_buffer vbuf; /* v4l buffer must be first */
+	struct list_head  list;
+};
+
+struct imx_media_video_dev {
+	struct video_device *vfd;
+
+	/* the user format */
+	struct v4l2_format fmt;
+	const struct imx_media_pixfmt *cc;
+};
+
+static inline struct imx_media_buffer *to_imx_media_vb(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+
+	return container_of(vbuf, struct imx_media_buffer, vbuf);
+}
+
+struct imx_media_link {
+	struct device_node *remote_sd_node;
+	char               remote_devname[32];
+	int                local_pad;
+	int                remote_pad;
+};
+
+struct imx_media_pad {
+	struct media_pad  pad;
+	struct imx_media_link link[IMX_MEDIA_MAX_LINKS];
+	bool devnode; /* does this pad link to a device node */
+	int num_links;
+};
+
+struct imx_media_internal_sd_platformdata {
+	char sd_name[V4L2_SUBDEV_NAME_SIZE];
+	u32 grp_id;
+	int ipu_id;
+};
+
+struct imx_media_subdev {
+	struct v4l2_async_subdev asd;
+	struct v4l2_subdev       *sd; /* set when bound */
+
+	struct imx_media_pad     pad[IMX_MEDIA_MAX_PADS];
+	int num_sink_pads;
+	int num_src_pads;
+
+	/* the platform device if this is an internal subdev */
+	struct platform_device *pdev;
+	/* the devname is needed for async devname match */
+	char devname[32];
+
+	/* if this is a sensor */
+	struct v4l2_of_endpoint sensor_ep;
+};
+
+struct imx_media_dev {
+	struct media_device md;
+	struct v4l2_device  v4l2_dev;
+
+	/* master subdev list */
+	struct imx_media_subdev subdev[IMX_MEDIA_MAX_SUBDEVS];
+	int num_subdevs;
+
+	/* IPUs this media driver controls, valid once subdevs are bound */
+	struct ipu_soc *ipu[2];
+
+	/* used during link_notify */
+	struct media_graph link_notify_graph;
+
+	/* for async subdev registration */
+	struct v4l2_async_subdev *async_ptrs[IMX_MEDIA_MAX_SUBDEVS];
+	struct v4l2_async_notifier subdev_notifier;
+};
+
+const struct imx_media_pixfmt *imx_media_find_format(u32 fourcc, u32 code,
+						     bool allow_rgb,
+						     bool allow_planar);
+const struct imx_media_pixfmt *imx_media_find_ipu_format(u32 fourcc, u32 code,
+							 bool allow_rgb);
+
+int imx_media_enum_format(u32 *fourcc, u32 *code, u32 index,
+			  bool allow_rgb, bool allow_planar);
+int imx_media_enum_ipu_format(u32 *fourcc, u32 *code, u32 index,
+			      bool allow_rgb);
+
+int imx_media_init_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
+			    u32 width, u32 height, u32 code, u32 field,
+			    const struct imx_media_pixfmt **cc);
+
+int imx_media_mbus_fmt_to_pix_fmt(struct v4l2_pix_format *pix,
+				  struct v4l2_mbus_framefmt *mbus,
+				  const struct imx_media_pixfmt *cc);
+int imx_media_mbus_fmt_to_ipu_image(struct ipu_image *image,
+				    struct v4l2_mbus_framefmt *mbus);
+int imx_media_ipu_image_to_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
+				    struct ipu_image *image);
+
+struct imx_media_subdev *
+imx_media_find_async_subdev(struct imx_media_dev *imxmd,
+			    struct device_node *np,
+			    const char *devname);
+struct imx_media_subdev *
+imx_media_add_async_subdev(struct imx_media_dev *imxmd,
+			   struct device_node *np,
+			   struct platform_device *pdev);
+int imx_media_add_pad_link(struct imx_media_dev *imxmd,
+			   struct imx_media_pad *pad,
+			   struct device_node *remote_node,
+			   const char *remote_devname,
+			   int local_pad, int remote_pad);
+
+void imx_media_grp_id_to_sd_name(char *sd_name, int sz,
+				 u32 grp_id, int ipu_id);
+
+int imx_media_add_internal_subdevs(struct imx_media_dev *imxmd,
+				   struct imx_media_subdev *csi[4]);
+void imx_media_remove_internal_subdevs(struct imx_media_dev *imxmd);
+
+struct imx_media_subdev *
+imx_media_find_subdev_by_sd(struct imx_media_dev *imxmd,
+			    struct v4l2_subdev *sd);
+struct imx_media_subdev *
+imx_media_find_subdev_by_id(struct imx_media_dev *imxmd,
+			    u32 grp_id);
+int imx_media_find_mipi_csi2_channel(struct imx_media_dev *imxmd,
+				     struct media_entity *start_entity);
+struct imx_media_subdev *
+imx_media_find_pipeline_subdev(struct imx_media_dev *imxmd,
+			       struct media_entity *start_entity,
+			       u32 grp_id);
+struct imx_media_subdev *
+__imx_media_find_sensor(struct imx_media_dev *imxmd,
+			struct media_entity *start_entity);
+struct imx_media_subdev *
+imx_media_find_sensor(struct imx_media_dev *imxmd,
+		      struct media_entity *start_entity);
+
+struct imx_media_dma_buf {
+	void          *virt;
+	dma_addr_t     phys;
+	unsigned long  len;
+};
+
+void imx_media_free_dma_buf(struct imx_media_dev *imxmd,
+			    struct imx_media_dma_buf *buf);
+int imx_media_alloc_dma_buf(struct imx_media_dev *imxmd,
+			    struct imx_media_dma_buf *buf,
+			    int size);
+
+int imx_media_pipeline_set_power(struct imx_media_dev *imxmd,
+				 struct media_graph *graph,
+				 struct media_entity *entity, bool on);
+int imx_media_pipeline_set_stream(struct imx_media_dev *imxmd,
+				  struct media_entity *entity,
+				  struct media_pipeline *pipe,
+				  bool on);
+
+/* imx-media-fim.c */
+struct imx_media_fim;
+void imx_media_fim_eof_monitor(struct imx_media_fim *fim, struct timespec *ts);
+int imx_media_fim_set_power(struct imx_media_fim *fim, bool on);
+int imx_media_fim_set_stream(struct imx_media_fim *fim,
+			     struct imx_media_subdev *sensor,
+			     bool on);
+struct imx_media_fim *imx_media_fim_init(struct v4l2_subdev *sd);
+void imx_media_fim_free(struct imx_media_fim *fim);
+
+/* imx-media-of.c */
+struct imx_media_subdev *
+imx_media_of_find_subdev(struct imx_media_dev *imxmd,
+			 struct device_node *np,
+			 const char *name);
+int imx_media_of_parse(struct imx_media_dev *dev,
+		       struct imx_media_subdev *(*csi)[4],
+		       struct device_node *np);
+
+/* imx-media-capture.c */
+struct imx_media_video_dev *
+imx_media_capture_device_init(struct v4l2_subdev *src_sd, int pad);
+void imx_media_capture_device_remove(struct imx_media_video_dev *vdev);
+int imx_media_capture_device_register(struct imx_media_video_dev *vdev);
+void imx_media_capture_device_unregister(struct imx_media_video_dev *vdev);
+struct imx_media_buffer *
+imx_media_capture_device_next_buf(struct imx_media_video_dev *vdev);
+
+/* subdev group ids */
+#define IMX_MEDIA_GRP_ID_SENSOR    (1 << 8)
+#define IMX_MEDIA_GRP_ID_VIDMUX    (1 << 9)
+#define IMX_MEDIA_GRP_ID_CSI2      (1 << 10)
+#define IMX_MEDIA_GRP_ID_CSI_BIT   11
+#define IMX_MEDIA_GRP_ID_CSI       (0x3 << IMX_MEDIA_GRP_ID_CSI_BIT)
+#define IMX_MEDIA_GRP_ID_CSI0      (1 << IMX_MEDIA_GRP_ID_CSI_BIT)
+#define IMX_MEDIA_GRP_ID_CSI1      (2 << IMX_MEDIA_GRP_ID_CSI_BIT)
+#define IMX_MEDIA_GRP_ID_VDIC      (1 << 13)
+#define IMX_MEDIA_GRP_ID_IC_PRP    (1 << 14)
+#define IMX_MEDIA_GRP_ID_IC_PRPENC (1 << 15)
+#define IMX_MEDIA_GRP_ID_IC_PRPVF  (1 << 16)
+
+#endif
diff --git a/include/media/imx.h b/include/media/imx.h
new file mode 100644
index 0000000..5025a72
--- /dev/null
+++ b/include/media/imx.h
@@ -0,0 +1,15 @@
+/*
+ * Copyright (c) 2014-2015 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#ifndef __MEDIA_IMX_H__
+#define __MEDIA_IMX_H__
+
+#include <uapi/media/imx.h>
+
+#endif
diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
index 0d2e1e0..6c29f42 100644
--- a/include/uapi/linux/v4l2-controls.h
+++ b/include/uapi/linux/v4l2-controls.h
@@ -180,6 +180,10 @@ enum v4l2_colorfx {
  * We reserve 16 controls for this driver. */
 #define V4L2_CID_USER_TC358743_BASE		(V4L2_CID_USER_BASE + 0x1080)
 
+/* The base for the imx driver controls.
+ * We reserve 16 controls for this driver. */
+#define V4L2_CID_USER_IMX_BASE			(V4L2_CID_USER_BASE + 0x1090)
+
 /* MPEG-class control IDs */
 /* The MPEG controls are applicable to all codec controls
  * and the 'MPEG' part of the define is historical */
-- 
2.7.4


* [PATCH v4 19/36] media: imx: Add Capture Device Interface
@ 2017-02-16  2:19 ` Steve Longerbeam
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

This is the capture device interface driver that provides the v4l2
user interface. Frames can be received from various sources (a minimal
user-space capture sketch follows the list):

- directly from the CSI, for capturing unconverted images from
  camera sensors.

- from the IC pre-process encode task.

- from the IC pre-process viewfinder task.
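
For reference, a minimal user-space capture loop against the video
device node registered by this driver might look roughly like the
sketch below, using the MMAP streaming I/O that the vb2 queue
advertises. This is only an illustration, not part of the patch: the
/dev/video0 path, the 640x480 UYVY format and the buffer count are
assumptions that depend on the board and on how the media pipeline has
been configured, and most error handling is omitted.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
	struct v4l2_requestbuffers req = {
		.count = 4,
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
	};
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	void *bufs[4];
	size_t lengths[4];
	unsigned int i;
	int fd;

	fd = open("/dev/video0", O_RDWR);	/* node name is an assumption */
	if (fd < 0)
		return 1;

	/* negotiate a frame format; the driver may adjust it to match
	 * the upstream subdev pad format */
	fmt.fmt.pix.width = 640;
	fmt.fmt.pix.height = 480;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		return 1;

	/* allocate, mmap and queue the capture buffers */
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return 1;
	for (i = 0; i < req.count && i < 4; i++) {
		struct v4l2_buffer buf = {
			.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
			.memory = V4L2_MEMORY_MMAP,
			.index = i,
		};

		if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0)
			return 1;
		lengths[i] = buf.length;
		bufs[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, buf.m.offset);
		if (bufs[i] == MAP_FAILED || ioctl(fd, VIDIOC_QBUF, &buf) < 0)
			return 1;
	}

	if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
		return 1;

	/* dequeue a few frames and immediately requeue the buffers */
	for (i = 0; i < 10; i++) {
		struct v4l2_buffer buf = {
			.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
			.memory = V4L2_MEMORY_MMAP,
		};

		if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
			break;
		printf("frame %u: %u bytes\n", i, buf.bytesused);
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	ioctl(fd, VIDIOC_STREAMOFF, &type);
	for (i = 0; i < req.count && i < 4; i++)
		munmap(bufs[i], lengths[i]);
	close(fd);
	return 0;
}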

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/Makefile            |   2 +-
 drivers/staging/media/imx/imx-media-capture.c | 654 ++++++++++++++++++++++++++
 2 files changed, 655 insertions(+), 1 deletion(-)
 create mode 100644 drivers/staging/media/imx/imx-media-capture.c

diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
index ba8e4fb..4606a3a 100644
--- a/drivers/staging/media/imx/Makefile
+++ b/drivers/staging/media/imx/Makefile
@@ -3,4 +3,4 @@ imx-media-common-objs := imx-media-utils.o imx-media-fim.o
 
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
-
+obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-capture.o
diff --git a/drivers/staging/media/imx/imx-media-capture.c b/drivers/staging/media/imx/imx-media-capture.c
new file mode 100644
index 0000000..fbf6067
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-capture.c
@@ -0,0 +1,654 @@
+/*
+ * Video Capture Subdev for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2012-2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/module.h>
+#include <linux/of_platform.h>
+#include <linux/pinctrl/consumer.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-mc.h>
+#include <media/v4l2-subdev.h>
+#include <media/videobuf2-dma-contig.h>
+#include <video/imx-ipu-v3.h>
+#include <media/imx.h>
+#include "imx-media.h"
+
+struct capture_priv {
+	struct imx_media_video_dev vdev;
+
+	struct v4l2_subdev    *src_sd;
+	int                   src_sd_pad;
+	struct device         *dev;
+
+	struct media_pipeline mp;
+	struct imx_media_dev  *md;
+
+	struct media_pad      vdev_pad;
+
+	struct mutex          mutex;       /* capture device mutex */
+
+	/* the videobuf2 queue */
+	struct vb2_queue       q;
+	/* list of ready imx_media_buffer's from q */
+	struct list_head       ready_q;
+	/* protect ready_q */
+	spinlock_t             q_lock;
+
+	/* controls inherited from subdevs */
+	struct v4l2_ctrl_handler ctrl_hdlr;
+
+	/* misc status */
+	bool                  stop;          /* streaming is stopping */
+};
+
+#define to_capture_priv(v) container_of(v, struct capture_priv, vdev)
+
+/* In bytes, per queue */
+#define VID_MEM_LIMIT	SZ_64M
+
+static struct vb2_ops capture_qops;
+
+/*
+ * Video ioctls follow
+ */
+
+static int vidioc_querycap(struct file *file, void *fh,
+			   struct v4l2_capability *cap)
+{
+	struct capture_priv *priv = video_drvdata(file);
+
+	strncpy(cap->driver, "imx-media-capture", sizeof(cap->driver) - 1);
+	strncpy(cap->card, "imx-media-capture", sizeof(cap->card) - 1);
+	snprintf(cap->bus_info, sizeof(cap->bus_info),
+		 "platform:%s", dev_name(priv->dev));
+
+	return 0;
+}
+
+static int capture_enum_fmt_vid_cap(struct file *file, void *fh,
+				    struct v4l2_fmtdesc *f)
+{
+	u32 fourcc;
+	int ret;
+
+	ret = imx_media_enum_format(&fourcc, NULL, f->index, true, true);
+	if (ret)
+		return ret;
+
+	f->pixelformat = fourcc;
+
+	return 0;
+}
+
+static int capture_g_fmt_vid_cap(struct file *file, void *fh,
+				 struct v4l2_format *f)
+{
+	struct capture_priv *priv = video_drvdata(file);
+
+	*f = priv->vdev.fmt;
+
+	return 0;
+}
+
+static int capture_try_fmt_vid_cap(struct file *file, void *fh,
+				   struct v4l2_format *f)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct v4l2_subdev_format fmt_src;
+	const struct imx_media_pixfmt *cc, *src_cc;
+	u32 fourcc;
+	int ret;
+
+	fourcc = f->fmt.pix.pixelformat;
+	cc = imx_media_find_format(fourcc, 0, true, true);
+	if (!cc) {
+		imx_media_enum_format(&fourcc, NULL, 0, true, true);
+		cc = imx_media_find_format(fourcc, 0, true, true);
+	}
+
+	/*
+	 * The user frame dimensions are taken from the src_sd pad format.
+	 */
+	fmt_src.pad = priv->src_sd_pad;
+	fmt_src.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	ret = v4l2_subdev_call(priv->src_sd, pad, get_fmt, NULL, &fmt_src);
+	if (ret)
+		return ret;
+
+	/*
+	 * but we can allow planar pixel formats if the src_sd's
+	 * pad configured a YUV format
+	 */
+	src_cc = imx_media_find_format(0, fmt_src.format.code, true, false);
+	if (src_cc->cs == IPUV3_COLORSPACE_YUV &&
+	    cc->cs == IPUV3_COLORSPACE_YUV) {
+		imx_media_mbus_fmt_to_pix_fmt(&f->fmt.pix,
+					      &fmt_src.format, cc);
+	} else {
+		imx_media_mbus_fmt_to_pix_fmt(&f->fmt.pix,
+					      &fmt_src.format, src_cc);
+	}
+
+	return 0;
+}
+
+static int capture_s_fmt_vid_cap(struct file *file, void *fh,
+				 struct v4l2_format *f)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	int ret;
+
+	if (vb2_is_busy(&priv->q)) {
+		v4l2_err(priv->src_sd, "%s queue busy\n", __func__);
+		return -EBUSY;
+	}
+
+	ret = capture_try_fmt_vid_cap(file, priv, f);
+	if (ret)
+		return ret;
+
+	priv->vdev.fmt.fmt.pix = f->fmt.pix;
+	priv->vdev.cc = imx_media_find_format(f->fmt.pix.pixelformat, 0,
+					      true, true);
+
+	return 0;
+}
+
+static int capture_querystd(struct file *file, void *fh, v4l2_std_id *std)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct imx_media_subdev *sensor;
+
+	sensor = imx_media_find_sensor(priv->md, &priv->src_sd->entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(priv->src_sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	return v4l2_subdev_call(sensor->sd, video, querystd, std);
+}
+
+static int capture_g_std(struct file *file, void *fh, v4l2_std_id *std)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct imx_media_subdev *sensor;
+
+	sensor = imx_media_find_sensor(priv->md, &priv->src_sd->entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(priv->src_sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	return v4l2_subdev_call(sensor->sd, video, g_std, std);
+}
+
+static int capture_s_std(struct file *file, void *fh, v4l2_std_id std)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct imx_media_subdev *sensor;
+
+	if (vb2_is_busy(&priv->q))
+		return -EBUSY;
+
+	sensor = imx_media_find_sensor(priv->md, &priv->src_sd->entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(priv->src_sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	return v4l2_subdev_call(sensor->sd, video, s_std, std);
+}
+
+static int capture_g_parm(struct file *file, void *fh,
+			  struct v4l2_streamparm *a)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct imx_media_subdev *sensor;
+
+	sensor = imx_media_find_sensor(priv->md, &priv->src_sd->entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(priv->src_sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	return v4l2_subdev_call(sensor->sd, video, g_parm, a);
+}
+
+static int capture_s_parm(struct file *file, void *fh,
+			  struct v4l2_streamparm *a)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct imx_media_subdev *sensor;
+
+	sensor = imx_media_find_sensor(priv->md, &priv->src_sd->entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(priv->src_sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	return v4l2_subdev_call(sensor->sd, video, s_parm, a);
+}
+
+static const struct v4l2_ioctl_ops capture_ioctl_ops = {
+	.vidioc_querycap	= vidioc_querycap,
+
+	.vidioc_enum_fmt_vid_cap        = capture_enum_fmt_vid_cap,
+	.vidioc_g_fmt_vid_cap           = capture_g_fmt_vid_cap,
+	.vidioc_try_fmt_vid_cap         = capture_try_fmt_vid_cap,
+	.vidioc_s_fmt_vid_cap           = capture_s_fmt_vid_cap,
+
+	.vidioc_querystd        = capture_querystd,
+	.vidioc_g_std           = capture_g_std,
+	.vidioc_s_std           = capture_s_std,
+
+	.vidioc_g_parm          = capture_g_parm,
+	.vidioc_s_parm          = capture_s_parm,
+
+	.vidioc_reqbufs		= vb2_ioctl_reqbufs,
+	.vidioc_create_bufs     = vb2_ioctl_create_bufs,
+	.vidioc_prepare_buf     = vb2_ioctl_prepare_buf,
+	.vidioc_querybuf	= vb2_ioctl_querybuf,
+	.vidioc_qbuf		= vb2_ioctl_qbuf,
+	.vidioc_dqbuf		= vb2_ioctl_dqbuf,
+	.vidioc_expbuf		= vb2_ioctl_expbuf,
+	.vidioc_streamon	= vb2_ioctl_streamon,
+	.vidioc_streamoff	= vb2_ioctl_streamoff,
+};
+
+/*
+ * Queue operations
+ */
+
+static int capture_queue_setup(struct vb2_queue *vq,
+			       unsigned int *nbuffers,
+			       unsigned int *nplanes,
+			       unsigned int sizes[],
+			       struct device *alloc_devs[])
+{
+	struct capture_priv *priv = vb2_get_drv_priv(vq);
+	struct v4l2_pix_format *pix = &priv->vdev.fmt.fmt.pix;
+	unsigned int count = *nbuffers;
+
+	if (vq->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	if (*nplanes) {
+		if (*nplanes != 1 || sizes[0] < pix->sizeimage)
+			return -EINVAL;
+		count += vq->num_buffers;
+	}
+
+	count = min_t(__u32, VID_MEM_LIMIT / pix->sizeimage, count);
+
+	if (*nplanes)
+		*nbuffers = (count < vq->num_buffers) ? 0 :
+			count - vq->num_buffers;
+	else
+		*nbuffers = count;
+
+	*nplanes = 1;
+	sizes[0] = pix->sizeimage;
+
+	return 0;
+}
+
+static int capture_buf_init(struct vb2_buffer *vb)
+{
+	struct imx_media_buffer *buf = to_imx_media_vb(vb);
+
+	INIT_LIST_HEAD(&buf->list);
+
+	return 0;
+}
+
+static int capture_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vb2_queue *vq = vb->vb2_queue;
+	struct capture_priv *priv = vb2_get_drv_priv(vq);
+	struct v4l2_pix_format *pix = &priv->vdev.fmt.fmt.pix;
+
+	if (vb2_plane_size(vb, 0) < pix->sizeimage) {
+		v4l2_err(priv->src_sd,
+			 "data will not fit into plane (%lu < %lu)\n",
+			 vb2_plane_size(vb, 0), (long)pix->sizeimage);
+		return -EINVAL;
+	}
+
+	vb2_set_plane_payload(vb, 0, pix->sizeimage);
+
+	return 0;
+}
+
+static void capture_buf_queue(struct vb2_buffer *vb)
+{
+	struct capture_priv *priv = vb2_get_drv_priv(vb->vb2_queue);
+	struct imx_media_buffer *buf = to_imx_media_vb(vb);
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->q_lock, flags);
+
+	list_add_tail(&buf->list, &priv->ready_q);
+
+	spin_unlock_irqrestore(&priv->q_lock, flags);
+}
+
+static int capture_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+	struct capture_priv *priv = vb2_get_drv_priv(vq);
+	struct imx_media_buffer *buf, *tmp;
+	unsigned long flags;
+	int ret;
+
+	if (vb2_is_streaming(vq))
+		return 0;
+
+	ret = imx_media_pipeline_set_stream(priv->md, &priv->src_sd->entity,
+					    &priv->mp, true);
+	if (ret) {
+		v4l2_err(priv->src_sd, "pipeline_set_stream failed with %d\n",
+			 ret);
+		goto return_bufs;
+	}
+
+	priv->stop = false;
+
+	return 0;
+
+return_bufs:
+	spin_lock_irqsave(&priv->q_lock, flags);
+	list_for_each_entry_safe(buf, tmp, &priv->ready_q, list) {
+		list_del(&buf->list);
+		vb2_buffer_done(&buf->vbuf.vb2_buf, VB2_BUF_STATE_QUEUED);
+	}
+	spin_unlock_irqrestore(&priv->q_lock, flags);
+	return ret;
+}
+
+static void capture_stop_streaming(struct vb2_queue *vq)
+{
+	struct capture_priv *priv = vb2_get_drv_priv(vq);
+	struct imx_media_buffer *frame;
+	unsigned long flags;
+	int ret;
+
+	if (!vb2_is_streaming(vq))
+		return;
+
+	spin_lock_irqsave(&priv->q_lock, flags);
+	priv->stop = true;
+	spin_unlock_irqrestore(&priv->q_lock, flags);
+
+	ret = imx_media_pipeline_set_stream(priv->md, &priv->src_sd->entity,
+					    &priv->mp, false);
+	if (ret)
+		v4l2_warn(priv->src_sd, "pipeline_set_stream failed with %d\n",
+			  ret);
+
+	/* release all active buffers */
+	spin_lock_irqsave(&priv->q_lock, flags);
+	while (!list_empty(&priv->ready_q)) {
+		frame = list_entry(priv->ready_q.next,
+				   struct imx_media_buffer, list);
+		list_del(&frame->list);
+		vb2_buffer_done(&frame->vbuf.vb2_buf, VB2_BUF_STATE_ERROR);
+	}
+	spin_unlock_irqrestore(&priv->q_lock, flags);
+}
+
+static struct vb2_ops capture_qops = {
+	.queue_setup	 = capture_queue_setup,
+	.buf_init        = capture_buf_init,
+	.buf_prepare	 = capture_buf_prepare,
+	.buf_queue	 = capture_buf_queue,
+	.wait_prepare	 = vb2_ops_wait_prepare,
+	.wait_finish	 = vb2_ops_wait_finish,
+	.start_streaming = capture_start_streaming,
+	.stop_streaming  = capture_stop_streaming,
+};
+
+/*
+ * File operations
+ */
+static int capture_open(struct file *file)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	int ret;
+
+	if (mutex_lock_interruptible(&priv->mutex))
+		return -ERESTARTSYS;
+
+	ret = v4l2_fh_open(file);
+	if (ret)
+		v4l2_err(priv->src_sd, "v4l2_fh_open failed\n");
+
+	mutex_unlock(&priv->mutex);
+	return ret;
+}
+
+static int capture_release(struct file *file)
+{
+	struct capture_priv *priv = video_drvdata(file);
+	struct vb2_queue *vq = &priv->q;
+	int ret = 0;
+
+	mutex_lock(&priv->mutex);
+
+	if (file->private_data == vq->owner) {
+		vb2_queue_release(vq);
+		vq->owner = NULL;
+	}
+
+	v4l2_fh_release(file);
+	mutex_unlock(&priv->mutex);
+	return ret;
+}
+
+static const struct v4l2_file_operations capture_fops = {
+	.owner		= THIS_MODULE,
+	.open		= capture_open,
+	.release	= capture_release,
+	.poll		= vb2_fop_poll,
+	.unlocked_ioctl	= video_ioctl2,
+	.mmap		= vb2_fop_mmap,
+};
+
+static struct video_device capture_videodev = {
+	.fops		= &capture_fops,
+	.ioctl_ops	= &capture_ioctl_ops,
+	.minor		= -1,
+	.release	= video_device_release,
+	.vfl_dir	= VFL_DIR_RX,
+	.tvnorms	= V4L2_STD_NTSC | V4L2_STD_PAL | V4L2_STD_SECAM,
+	.device_caps	= V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING,
+};
+
+struct imx_media_buffer *
+imx_media_capture_device_next_buf(struct imx_media_video_dev *vdev)
+{
+	struct capture_priv *priv = to_capture_priv(vdev);
+	struct imx_media_buffer *buf = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->q_lock, flags);
+
+	/* get next queued buffer */
+	if (!list_empty(&priv->ready_q)) {
+		buf = list_entry(priv->ready_q.next, struct imx_media_buffer,
+				 list);
+		list_del(&buf->list);
+	}
+
+	spin_unlock_irqrestore(&priv->q_lock, flags);
+
+	return buf;
+}
+EXPORT_SYMBOL_GPL(imx_media_capture_device_next_buf);
+
+int imx_media_capture_device_register(struct imx_media_video_dev *vdev)
+{
+	struct capture_priv *priv = to_capture_priv(vdev);
+	struct v4l2_subdev *sd = priv->src_sd;
+	struct video_device *vfd = vdev->vfd;
+	struct vb2_queue *vq = &priv->q;
+	struct v4l2_subdev_format fmt_src;
+	int ret;
+
+	/* get media device */
+	priv->md = dev_get_drvdata(sd->v4l2_dev->dev);
+
+	vfd->v4l2_dev = sd->v4l2_dev;
+
+	ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
+	if (ret) {
+		v4l2_err(sd, "Failed to register video device\n");
+		return ret;
+	}
+
+	vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	vq->io_modes = VB2_MMAP | VB2_DMABUF;
+	vq->drv_priv = priv;
+	vq->buf_struct_size = sizeof(struct imx_media_buffer);
+	vq->ops = &capture_qops;
+	vq->mem_ops = &vb2_dma_contig_memops;
+	vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+	vq->lock = &priv->mutex;
+	vq->min_buffers_needed = 2;
+	vq->dev = priv->dev;
+
+	ret = vb2_queue_init(vq);
+	if (ret) {
+		v4l2_err(sd, "vb2_queue_init failed\n");
+		goto unreg;
+	}
+
+	INIT_LIST_HEAD(&priv->ready_q);
+
+	priv->vdev_pad.flags = MEDIA_PAD_FL_SINK;
+	ret = media_entity_pads_init(&vfd->entity, 1, &priv->vdev_pad);
+	if (ret) {
+		v4l2_err(sd, "failed to init dev pad\n");
+		goto unreg;
+	}
+
+	/* create the link from the src_sd devnode pad to device node */
+	ret = media_create_pad_link(&sd->entity, priv->src_sd_pad,
+				    &vfd->entity, 0, 0);
+	if (ret) {
+		v4l2_err(sd, "failed to create link to device node\n");
+		goto unreg;
+	}
+
+	/* setup default format */
+	fmt_src.pad = priv->src_sd_pad;
+	fmt_src.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &fmt_src);
+	if (ret) {
+		v4l2_err(sd, "failed to get src_sd format\n");
+		goto unreg;
+	}
+
+	vdev->fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
+				      &fmt_src.format, NULL);
+	vdev->cc = imx_media_find_format(0, fmt_src.format.code,
+					 true, false);
+
+	v4l2_info(sd, "Registered %s as /dev/%s\n", vfd->name,
+		  video_device_node_name(vfd));
+
+	vfd->ctrl_handler = &priv->ctrl_hdlr;
+
+	return 0;
+unreg:
+	video_unregister_device(vfd);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(imx_media_capture_device_register);
+
+void imx_media_capture_device_unregister(struct imx_media_video_dev *vdev)
+{
+	struct capture_priv *priv = to_capture_priv(vdev);
+	struct video_device *vfd = priv->vdev.vfd;
+
+	mutex_lock(&priv->mutex);
+
+	if (video_is_registered(vfd)) {
+		video_unregister_device(vfd);
+		media_entity_cleanup(&vfd->entity);
+	}
+
+	mutex_unlock(&priv->mutex);
+}
+EXPORT_SYMBOL_GPL(imx_media_capture_device_unregister);
+
+struct imx_media_video_dev *
+imx_media_capture_device_init(struct v4l2_subdev *src_sd, int pad)
+{
+	struct capture_priv *priv;
+	struct video_device *vfd;
+
+	priv = devm_kzalloc(src_sd->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return ERR_PTR(-ENOMEM);
+
+	priv->src_sd = src_sd;
+	priv->src_sd_pad = pad;
+	priv->dev = src_sd->dev;
+
+	mutex_init(&priv->mutex);
+	spin_lock_init(&priv->q_lock);
+
+	snprintf(capture_videodev.name, sizeof(capture_videodev.name),
+		 "%s capture", src_sd->name);
+
+	vfd = video_device_alloc();
+	if (!vfd)
+		return ERR_PTR(-ENOMEM);
+
+	*vfd = capture_videodev;
+	vfd->lock = &priv->mutex;
+	vfd->queue = &priv->q;
+	priv->vdev.vfd = vfd;
+
+	video_set_drvdata(vfd, priv);
+
+	v4l2_ctrl_handler_init(&priv->ctrl_hdlr, 0);
+
+	return &priv->vdev;
+}
+EXPORT_SYMBOL_GPL(imx_media_capture_device_init);
+
+void imx_media_capture_device_remove(struct imx_media_video_dev *vdev)
+{
+	struct capture_priv *priv = to_capture_priv(vdev);
+
+	v4l2_ctrl_handler_free(&priv->ctrl_hdlr);
+}
+EXPORT_SYMBOL_GPL(imx_media_capture_device_remove);
+
+MODULE_DESCRIPTION("i.MX5/6 v4l2 video capture interface driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
-- 
2.7.4


* [PATCH v4 20/36] media: imx: Add CSI subdev driver
@ 2017-02-16  2:19 ` Steve Longerbeam
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

This is a media entity subdevice for the i.MX Camera
Sensor Interface module.
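
For context, once the media driver has created device nodes for the
internal subdevs, the CSI pad formats can be inspected from user space
with the standard subdev ioctls. The sketch below is only an
illustration, not part of the patch: the /dev/v4l-subdev2 node name is
an assumption (it depends on probe order and on subdev device nodes
being registered by the media core driver), and pad 2 refers to
CSI_SRC_PAD_IDMAC as defined in imx-media.h.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/v4l2-subdev.h>

int main(void)
{
	struct v4l2_subdev_format sdfmt = {
		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
		.pad = 2,	/* CSI_SRC_PAD_IDMAC */
	};
	int fd = open("/dev/v4l-subdev2", O_RDWR);

	if (fd < 0)
		return 1;
	if (ioctl(fd, VIDIOC_SUBDEV_G_FMT, &sdfmt) < 0) {
		close(fd);
		return 1;
	}
	printf("pad %u: %ux%u, media bus code 0x%04x\n", sdfmt.pad,
	       sdfmt.format.width, sdfmt.format.height, sdfmt.format.code);
	close(fd);
	return 0;
}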

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/Kconfig         |   13 +
 drivers/staging/media/imx/Makefile        |    2 +
 drivers/staging/media/imx/imx-media-csi.c | 1220 +++++++++++++++++++++++++++++
 3 files changed, 1235 insertions(+)
 create mode 100644 drivers/staging/media/imx/imx-media-csi.c

diff --git a/drivers/staging/media/imx/Kconfig b/drivers/staging/media/imx/Kconfig
index 722ed55..e27ad6d 100644
--- a/drivers/staging/media/imx/Kconfig
+++ b/drivers/staging/media/imx/Kconfig
@@ -5,3 +5,16 @@ config VIDEO_IMX_MEDIA
 	  Say yes here to enable support for video4linux media controller
 	  driver for the i.MX5/6 SOC.
 
+if VIDEO_IMX_MEDIA
+menu "i.MX5/6 Media Sub devices"
+
+config VIDEO_IMX_CSI
+	tristate "i.MX5/6 Camera Sensor Interface driver"
+	depends on VIDEO_IMX_MEDIA && VIDEO_DEV && I2C
+	select VIDEOBUF2_DMA_CONTIG
+	default y
+	---help---
+	  A video4linux camera sensor interface driver for i.MX5/6.
+
+endmenu
+endif
diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
index 4606a3a..c054490 100644
--- a/drivers/staging/media/imx/Makefile
+++ b/drivers/staging/media/imx/Makefile
@@ -4,3 +4,5 @@ imx-media-common-objs := imx-media-utils.o imx-media-fim.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-capture.o
+
+obj-$(CONFIG_VIDEO_IMX_CSI) += imx-media-csi.o
diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
new file mode 100644
index 0000000..0343fc3
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -0,0 +1,1220 @@
+/*
+ * V4L2 Capture CSI Subdev for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2014-2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mc.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+#include <media/videobuf2-dma-contig.h>
+#include <video/imx-ipu-v3.h>
+#include <media/imx.h>
+#include "imx-media.h"
+
+/*
+ * Min/Max supported width and heights.
+ *
+ * We allow planar output, so we have to align width by 16 pixels
+ * to meet IDMAC alignment requirements.
+ *
+ * TODO: move this into pad format negotiation, if capture device
+ * has not requested planar formats, we should allow 8 pixel
+ * alignment.
+ */
+#define MIN_W       176
+#define MIN_H       144
+#define MAX_W      4096
+#define MAX_H      4096
+#define W_ALIGN    4 /* multiple of 16 pixels */
+#define H_ALIGN    1 /* multiple of 2 lines */
+#define S_ALIGN    1 /* multiple of 2 */
+
+struct csi_priv {
+	struct device *dev;
+	struct ipu_soc *ipu;
+	struct imx_media_dev *md;
+	struct v4l2_subdev sd;
+	struct media_pad pad[CSI_NUM_PADS];
+	int active_output_pad;
+	int csi_id;
+	int smfc_id;
+
+	struct ipuv3_channel *idmac_ch;
+	struct ipu_smfc *smfc;
+	struct ipu_csi *csi;
+
+	struct v4l2_mbus_framefmt format_mbus[CSI_NUM_PADS];
+	const struct imx_media_pixfmt *cc[CSI_NUM_PADS];
+	struct v4l2_rect crop;
+
+	/* the video device at IDMAC output pad */
+	struct imx_media_video_dev *vdev;
+
+	/* active vb2 buffers to send to video dev sink */
+	struct imx_media_buffer *active_vb2_buf[2];
+	struct imx_media_dma_buf underrun_buf;
+
+	int ipu_buf_num;  /* ipu double buffer index: 0-1 */
+
+	/* the sink for the captured frames */
+	struct media_entity *sink;
+	enum ipu_csi_dest dest;
+	/* the source subdev */
+	struct v4l2_subdev *src_sd;
+
+	/* the mipi virtual channel number at link validate */
+	int vc_num;
+
+	/* the attached sensor at stream on */
+	struct imx_media_subdev *sensor;
+
+	spinlock_t irqlock; /* protect eof_irq handler */
+	struct timer_list eof_timeout_timer;
+	int eof_irq;
+	int nfb4eof_irq;
+
+	struct v4l2_ctrl_handler ctrl_hdlr;
+	struct imx_media_fim *fim;
+
+	bool power_on;  /* power is on */
+	bool stream_on; /* streaming is on */
+	bool last_eof;  /* waiting for last EOF at stream off */
+	struct completion last_eof_comp;
+};
+
+static inline struct csi_priv *sd_to_dev(struct v4l2_subdev *sdev)
+{
+	return container_of(sdev, struct csi_priv, sd);
+}
+
+static void csi_idmac_put_ipu_resources(struct csi_priv *priv)
+{
+	if (!IS_ERR_OR_NULL(priv->idmac_ch))
+		ipu_idmac_put(priv->idmac_ch);
+	priv->idmac_ch = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->smfc))
+		ipu_smfc_put(priv->smfc);
+	priv->smfc = NULL;
+}
+
+static int csi_idmac_get_ipu_resources(struct csi_priv *priv)
+{
+	int ch_num, ret;
+
+	ch_num = IPUV3_CHANNEL_CSI0 + priv->smfc_id;
+
+	priv->smfc = ipu_smfc_get(priv->ipu, ch_num);
+	if (IS_ERR(priv->smfc)) {
+		v4l2_err(&priv->sd, "failed to get SMFC\n");
+		ret = PTR_ERR(priv->smfc);
+		goto out;
+	}
+
+	priv->idmac_ch = ipu_idmac_get(priv->ipu, ch_num);
+	if (IS_ERR(priv->idmac_ch)) {
+		v4l2_err(&priv->sd, "could not get IDMAC channel %u\n",
+			 ch_num);
+		ret = PTR_ERR(priv->idmac_ch);
+		goto out;
+	}
+
+	return 0;
+out:
+	csi_idmac_put_ipu_resources(priv);
+	return ret;
+}
+
+static void csi_vb2_buf_done(struct csi_priv *priv)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct imx_media_buffer *done, *next;
+	struct vb2_buffer *vb;
+	dma_addr_t phys;
+
+	done = priv->active_vb2_buf[priv->ipu_buf_num];
+	if (done) {
+		vb = &done->vbuf.vb2_buf;
+		vb->timestamp = ktime_get_ns();
+		vb2_buffer_done(vb, VB2_BUF_STATE_DONE);
+	}
+
+	/* get next queued buffer */
+	next = imx_media_capture_device_next_buf(vdev);
+	if (next) {
+		phys = vb2_dma_contig_plane_dma_addr(&next->vbuf.vb2_buf, 0);
+		priv->active_vb2_buf[priv->ipu_buf_num] = next;
+	} else {
+		phys = priv->underrun_buf.phys;
+		priv->active_vb2_buf[priv->ipu_buf_num] = NULL;
+	}
+
+	if (ipu_idmac_buffer_is_ready(priv->idmac_ch, priv->ipu_buf_num))
+		ipu_idmac_clear_buffer(priv->idmac_ch, priv->ipu_buf_num);
+
+	ipu_cpmem_set_buffer(priv->idmac_ch, priv->ipu_buf_num, phys);
+}
+
+static void csi_call_fim(struct csi_priv *priv)
+{
+	if (priv->fim) {
+		struct timespec cur_ts;
+
+		ktime_get_ts(&cur_ts);
+		/* call frame interval monitor */
+		imx_media_fim_eof_monitor(priv->fim, &cur_ts);
+	}
+}
+
+static irqreturn_t csi_idmac_eof_interrupt(int irq, void *dev_id)
+{
+	struct csi_priv *priv = dev_id;
+
+	spin_lock(&priv->irqlock);
+
+	if (priv->last_eof) {
+		complete(&priv->last_eof_comp);
+		priv->last_eof = false;
+		goto unlock;
+	}
+
+	csi_call_fim(priv);
+
+	csi_vb2_buf_done(priv);
+
+	/* select new IPU buf */
+	ipu_idmac_select_buffer(priv->idmac_ch, priv->ipu_buf_num);
+	/* toggle IPU double-buffer index */
+	priv->ipu_buf_num ^= 1;
+
+	/* bump the EOF timeout timer */
+	mod_timer(&priv->eof_timeout_timer,
+		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+
+unlock:
+	spin_unlock(&priv->irqlock);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t csi_idmac_nfb4eof_interrupt(int irq, void *dev_id)
+{
+	struct csi_priv *priv = dev_id;
+	static const struct v4l2_event ev = {
+		.type = V4L2_EVENT_IMX_NFB4EOF,
+	};
+
+	v4l2_err(&priv->sd, "NFB4EOF\n");
+
+	v4l2_subdev_notify_event(&priv->sd, &ev);
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * EOF timeout timer function.
+ */
+static void csi_idmac_eof_timeout(unsigned long data)
+{
+	struct csi_priv *priv = (struct csi_priv *)data;
+	static const struct v4l2_event ev = {
+		.type = V4L2_EVENT_FRAME_TIMEOUT,
+	};
+
+	v4l2_err(&priv->sd, "EOF timeout\n");
+
+	v4l2_subdev_notify_event(&priv->sd, &ev);
+}
+
+static void csi_idmac_setup_vb2_buf(struct csi_priv *priv, dma_addr_t *phys)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct imx_media_buffer *buf;
+	int i;
+
+	for (i = 0; i < 2; i++) {
+		buf = imx_media_capture_device_next_buf(vdev);
+		priv->active_vb2_buf[i] = buf;
+		phys[i] = vb2_dma_contig_plane_dma_addr(&buf->vbuf.vb2_buf, 0);
+	}
+}
+
+static void csi_idmac_unsetup_vb2_buf(struct csi_priv *priv)
+{
+	struct imx_media_buffer *buf;
+	int i;
+
+	/* return any remaining active frames with error */
+	for (i = 0; i < 2; i++) {
+		buf = priv->active_vb2_buf[i];
+		if (buf) {
+			struct vb2_buffer *vb = &buf->vbuf.vb2_buf;
+
+			vb->timestamp = ktime_get_ns();
+			vb2_buffer_done(vb, VB2_BUF_STATE_ERROR);
+		}
+	}
+}
+
+/* init the SMFC IDMAC channel */
+static int csi_idmac_setup_channel(struct csi_priv *priv)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct v4l2_of_endpoint *sensor_ep;
+	struct v4l2_mbus_framefmt *infmt;
+	unsigned int burst_size;
+	struct ipu_image image;
+	dma_addr_t phys[2];
+	bool passthrough;
+	int ret;
+
+	infmt = &priv->format_mbus[CSI_SINK_PAD];
+	sensor_ep = &priv->sensor->sensor_ep;
+
+	ipu_cpmem_zero(priv->idmac_ch);
+
+	memset(&image, 0, sizeof(image));
+	image.pix = vdev->fmt.fmt.pix;
+	image.rect.width = image.pix.width;
+	image.rect.height = image.pix.height;
+
+	csi_idmac_setup_vb2_buf(priv, phys);
+
+	image.phys0 = phys[0];
+	image.phys1 = phys[1];
+
+	ret = ipu_cpmem_set_image(priv->idmac_ch, &image);
+	if (ret)
+		return ret;
+
+	burst_size = (image.pix.width & 0xf) ? 8 : 16;
+
+	ipu_cpmem_set_burstsize(priv->idmac_ch, burst_size);
+
+	/*
+	 * If the sensor uses 16-bit parallel CSI bus, we must handle
+	 * the data internally in the IPU as 16-bit generic, aka
+	 * passthrough mode.
+	 */
+	passthrough = (sensor_ep->bus_type != V4L2_MBUS_CSI2 &&
+		       sensor_ep->bus.parallel.bus_width >= 16);
+
+	if (passthrough)
+		ipu_cpmem_set_format_passthrough(priv->idmac_ch, 16);
+
+	/*
+	 * Set the channel for the direct CSI-->memory via SMFC
+	 * use-case to very high priority, by enabling the watermark
+	 * signal in the SMFC, enabling WM in the channel, and setting
+	 * the channel priority to high.
+	 *
+	 * Refer to the i.mx6 rev. D TRM Table 36-8: Calculated priority
+	 * value.
+	 *
+	 * The watermarks are intentionally set very low here to ensure that
+	 * the SMFC FIFOs do not overflow.
+	 */
+	ipu_smfc_set_watermark(priv->smfc, 0x02, 0x01);
+	ipu_cpmem_set_high_priority(priv->idmac_ch);
+	ipu_idmac_enable_watermark(priv->idmac_ch, true);
+	ipu_cpmem_set_axi_id(priv->idmac_ch, 0);
+	ipu_idmac_lock_enable(priv->idmac_ch, 8);
+
+	burst_size = ipu_cpmem_get_burstsize(priv->idmac_ch);
+	burst_size = passthrough ?
+		(burst_size >> 3) - 1 : (burst_size >> 2) - 1;
+
+	ipu_smfc_set_burstsize(priv->smfc, burst_size);
+
+	if (image.pix.field == V4L2_FIELD_NONE &&
+	    V4L2_FIELD_HAS_BOTH(infmt->field))
+		ipu_cpmem_interlaced_scan(priv->idmac_ch,
+					  image.pix.bytesperline);
+
+	ipu_idmac_set_double_buffer(priv->idmac_ch, true);
+
+	return 0;
+}
+
+static void csi_idmac_unsetup(struct csi_priv *priv)
+{
+	ipu_idmac_disable_channel(priv->idmac_ch);
+	ipu_smfc_disable(priv->smfc);
+
+	csi_idmac_unsetup_vb2_buf(priv);
+}
+
+static int csi_idmac_setup(struct csi_priv *priv)
+{
+	int ret;
+
+	ret = csi_idmac_setup_channel(priv);
+	if (ret)
+		return ret;
+
+	ipu_cpmem_dump(priv->idmac_ch);
+	ipu_dump(priv->ipu);
+
+	ipu_smfc_enable(priv->smfc);
+
+	/* set buffers ready */
+	ipu_idmac_select_buffer(priv->idmac_ch, 0);
+	ipu_idmac_select_buffer(priv->idmac_ch, 1);
+
+	/* enable the channels */
+	ipu_idmac_enable_channel(priv->idmac_ch);
+
+	return 0;
+}
+
+static int csi_idmac_start(struct csi_priv *priv)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct v4l2_pix_format *outfmt;
+	int ret;
+
+	ret = csi_idmac_get_ipu_resources(priv);
+	if (ret)
+		return ret;
+
+	ipu_smfc_map_channel(priv->smfc, priv->csi_id, priv->vc_num);
+
+	outfmt = &vdev->fmt.fmt.pix;
+
+	ret = imx_media_alloc_dma_buf(priv->md, &priv->underrun_buf,
+				      outfmt->sizeimage);
+	if (ret)
+		goto out_put_ipu;
+
+	priv->ipu_buf_num = 0;
+
+	/* init EOF completion waitq */
+	init_completion(&priv->last_eof_comp);
+	priv->last_eof = false;
+
+	ret = csi_idmac_setup(priv);
+	if (ret) {
+		v4l2_err(&priv->sd, "csi_idmac_setup failed: %d\n", ret);
+		goto out_free_dma_buf;
+	}
+
+	priv->nfb4eof_irq = ipu_idmac_channel_irq(priv->ipu,
+						 priv->idmac_ch,
+						 IPU_IRQ_NFB4EOF);
+	ret = devm_request_irq(priv->dev, priv->nfb4eof_irq,
+			       csi_idmac_nfb4eof_interrupt, 0,
+			       "imx-smfc-nfb4eof", priv);
+	if (ret) {
+		v4l2_err(&priv->sd,
+			 "Error registering NFB4EOF irq: %d\n", ret);
+		goto out_unsetup;
+	}
+
+	priv->eof_irq = ipu_idmac_channel_irq(priv->ipu, priv->idmac_ch,
+					      IPU_IRQ_EOF);
+
+	ret = devm_request_irq(priv->dev, priv->eof_irq,
+			       csi_idmac_eof_interrupt, 0,
+			       "imx-smfc-eof", priv);
+	if (ret) {
+		v4l2_err(&priv->sd,
+			 "Error registering eof irq: %d\n", ret);
+		goto out_free_nfb4eof_irq;
+	}
+
+	/* start the EOF timeout timer */
+	mod_timer(&priv->eof_timeout_timer,
+		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+
+	return 0;
+
+out_free_nfb4eof_irq:
+	devm_free_irq(priv->dev, priv->nfb4eof_irq, priv);
+out_unsetup:
+	csi_idmac_unsetup(priv);
+out_free_dma_buf:
+	imx_media_free_dma_buf(priv->md, &priv->underrun_buf);
+out_put_ipu:
+	csi_idmac_put_ipu_resources(priv);
+	return ret;
+}
+
+static void csi_idmac_stop(struct csi_priv *priv)
+{
+	unsigned long flags;
+	int ret;
+
+	/* mark next EOF interrupt as the last before stream off */
+	spin_lock_irqsave(&priv->irqlock, flags);
+	priv->last_eof = true;
+	spin_unlock_irqrestore(&priv->irqlock, flags);
+
+	/*
+	 * and then wait for interrupt handler to mark completion.
+	 */
+	ret = wait_for_completion_timeout(
+		&priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+	if (ret == 0)
+		v4l2_warn(&priv->sd, "wait last EOF timeout\n");
+
+	devm_free_irq(priv->dev, priv->eof_irq, priv);
+	devm_free_irq(priv->dev, priv->nfb4eof_irq, priv);
+
+	csi_idmac_unsetup(priv);
+
+	imx_media_free_dma_buf(priv->md, &priv->underrun_buf);
+
+	/* cancel the EOF timeout timer */
+	del_timer_sync(&priv->eof_timeout_timer);
+
+	csi_idmac_put_ipu_resources(priv);
+}
+
+/* Update the CSI full-frame (sensor) and active (crop) windows */
+static int csi_setup(struct csi_priv *priv)
+{
+	struct v4l2_mbus_framefmt *infmt, *outfmt;
+	struct v4l2_mbus_config sensor_mbus_cfg;
+	struct v4l2_of_endpoint *sensor_ep;
+	struct v4l2_mbus_framefmt if_fmt;
+
+	infmt = &priv->format_mbus[CSI_SINK_PAD];
+	outfmt = &priv->format_mbus[priv->active_output_pad];
+	sensor_ep = &priv->sensor->sensor_ep;
+
+	/* compose mbus_config from sensor endpoint */
+	sensor_mbus_cfg.type = sensor_ep->bus_type;
+	sensor_mbus_cfg.flags = (sensor_ep->bus_type == V4L2_MBUS_CSI2) ?
+		sensor_ep->bus.mipi_csi2.flags :
+		sensor_ep->bus.parallel.flags;
+
+	/*
+	 * Pass the input sensor frame to the CSI interface, but with
+	 * the field type translated from the output format.
+	 */
+	if_fmt = *infmt;
+	if_fmt.field = outfmt->field;
+
+	ipu_csi_set_window(priv->csi, &priv->crop);
+
+	ipu_csi_init_interface(priv->csi, &sensor_mbus_cfg, &if_fmt);
+
+	ipu_csi_set_dest(priv->csi, priv->dest);
+
+	ipu_csi_dump(priv->csi);
+
+	return 0;
+}
+
+static int csi_start(struct csi_priv *priv)
+{
+	int ret;
+
+	if (!priv->sensor) {
+		v4l2_err(&priv->sd, "no sensor attached\n");
+		return -EINVAL;
+	}
+
+	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+		ret = csi_idmac_start(priv);
+		if (ret)
+			return ret;
+	}
+
+	ret = csi_setup(priv);
+	if (ret)
+		goto idmac_stop;
+
+	/* start the frame interval monitor */
+	if (priv->fim) {
+		ret = imx_media_fim_set_stream(priv->fim, priv->sensor, true);
+		if (ret)
+			goto idmac_stop;
+	}
+
+	ret = ipu_csi_enable(priv->csi);
+	if (ret) {
+		v4l2_err(&priv->sd, "CSI enable error: %d\n", ret);
+		goto fim_off;
+	}
+
+	return 0;
+
+fim_off:
+	if (priv->fim)
+		imx_media_fim_set_stream(priv->fim, priv->sensor, false);
+idmac_stop:
+	if (priv->dest == IPU_CSI_DEST_IDMAC)
+		csi_idmac_stop(priv);
+	return ret;
+}
+
+static void csi_stop(struct csi_priv *priv)
+{
+	if (priv->dest == IPU_CSI_DEST_IDMAC)
+		csi_idmac_stop(priv);
+
+	/* stop the frame interval monitor */
+	if (priv->fim)
+		imx_media_fim_set_stream(priv->fim, priv->sensor, false);
+
+	ipu_csi_disable(priv->csi);
+}
+
+static int csi_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	int ret = 0;
+
+	if (!priv->src_sd || !priv->sink)
+		return -EPIPE;
+
+	dev_dbg(priv->dev, "stream %s\n", enable ? "ON" : "OFF");
+
+	if (enable && !priv->stream_on)
+		ret = csi_start(priv);
+	else if (!enable && priv->stream_on)
+		csi_stop(priv);
+
+	if (!ret)
+		priv->stream_on = enable;
+	return ret;
+}
+
+static int csi_s_power(struct v4l2_subdev *sd, int on)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	int ret = 0;
+
+	dev_dbg(priv->dev, "power %s\n", on ? "ON" : "OFF");
+
+	if (priv->fim && on != priv->power_on)
+		ret = imx_media_fim_set_power(priv->fim, on);
+
+	if (!ret)
+		priv->power_on = on;
+	return ret;
+}
+
+static int csi_link_setup(struct media_entity *entity,
+			  const struct media_pad *local,
+			  const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct v4l2_subdev *remote_sd;
+	int ret;
+
+	dev_dbg(priv->dev, "link setup %s -> %s", remote->entity->name,
+		local->entity->name);
+
+	if (local->flags & MEDIA_PAD_FL_SINK) {
+		if (!is_media_entity_v4l2_subdev(remote->entity))
+			return -EINVAL;
+
+		remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+
+		if (flags & MEDIA_LNK_FL_ENABLED) {
+			if (priv->src_sd)
+				return -EBUSY;
+			priv->src_sd = remote_sd;
+		} else {
+			priv->src_sd = NULL;
+		}
+
+		return 0;
+	}
+
+	/* this is a source pad */
+
+	if (flags & MEDIA_LNK_FL_ENABLED) {
+		if (priv->sink)
+			return -EBUSY;
+	} else {
+		/* reset video device controls */
+		v4l2_ctrl_handler_free(vdev->vfd->ctrl_handler);
+		v4l2_ctrl_handler_init(vdev->vfd->ctrl_handler, 0);
+
+		priv->sink = NULL;
+		return 0;
+	}
+
+	/* record which output pad is now active */
+	priv->active_output_pad = local->index;
+
+	/* set CSI destination */
+	if (local->index == CSI_SRC_PAD_IDMAC) {
+		if (!is_media_entity_v4l2_video_device(remote->entity))
+			return -EINVAL;
+
+		/* reset video device controls to refresh from subdevs */
+		v4l2_ctrl_handler_free(vdev->vfd->ctrl_handler);
+		v4l2_ctrl_handler_init(vdev->vfd->ctrl_handler, 0);
+
+		ret = __v4l2_pipeline_inherit_controls(vdev->vfd,
+						       &priv->sd.entity);
+		if (ret)
+			return ret;
+
+		priv->dest = IPU_CSI_DEST_IDMAC;
+	} else {
+		if (!is_media_entity_v4l2_subdev(remote->entity))
+			return -EINVAL;
+
+		remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+		switch (remote_sd->grp_id) {
+		case IMX_MEDIA_GRP_ID_VDIC:
+			priv->dest = IPU_CSI_DEST_VDIC;
+			break;
+		case IMX_MEDIA_GRP_ID_IC_PRP:
+			priv->dest = IPU_CSI_DEST_IC;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+
+	priv->sink = remote->entity;
+
+	return 0;
+}
+
+static int csi_link_validate(struct v4l2_subdev *sd,
+			     struct media_link *link,
+			     struct v4l2_subdev_format *source_fmt,
+			     struct v4l2_subdev_format *sink_fmt)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_of_endpoint *sensor_ep;
+	bool is_csi2;
+	int ret;
+
+	ret = v4l2_subdev_link_validate_default(sd, link,
+						source_fmt, sink_fmt);
+	if (ret)
+		return ret;
+
+	priv->sensor = __imx_media_find_sensor(priv->md, &priv->sd.entity);
+	if (IS_ERR(priv->sensor)) {
+		v4l2_err(&priv->sd, "no sensor attached\n");
+		ret = PTR_ERR(priv->sensor);
+		priv->sensor = NULL;
+		return ret;
+	}
+
+	sensor_ep = &priv->sensor->sensor_ep;
+
+	is_csi2 = (sensor_ep->bus_type == V4L2_MBUS_CSI2);
+
+	if (is_csi2) {
+		int vc_num = 0;
+		/*
+		 * NOTE! It seems the virtual channels from the MIPI CSI-2
+		 * receiver are used only for routing by the video muxes,
+		 * or for hard-wired routing to the CSIs. Once a stream
+		 * enters a CSI, however, it is treated internally in the
+		 * IPU as virtual channel 0.
+		 */
+#if 0
+		vc_num = imx_media_find_mipi_csi2_channel(priv->md,
+							  &priv->sd.entity);
+		if (vc_num < 0)
+			return vc_num;
+#endif
+		ipu_csi_set_mipi_datatype(priv->csi, vc_num,
+					  &priv->format_mbus[CSI_SINK_PAD]);
+	}
+
+	/* select either parallel or MIPI-CSI2 as input to CSI */
+	ipu_set_csi_src_mux(priv->ipu, priv->csi_id, is_csi2);
+
+	return 0;
+}
+
+static int csi_eof_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+
+	csi_call_fim(priv);
+
+	return 0;
+}
+
+static int csi_try_crop(struct csi_priv *priv, struct v4l2_rect *crop,
+			struct imx_media_subdev *sensor)
+{
+	struct v4l2_of_endpoint *sensor_ep;
+	struct v4l2_mbus_framefmt *infmt;
+	v4l2_std_id std;
+	int ret;
+
+	infmt = &priv->format_mbus[CSI_SINK_PAD];
+	sensor_ep = &sensor->sensor_ep;
+
+	crop->width = min_t(__u32, infmt->width, crop->width);
+	if (crop->left + crop->width > infmt->width)
+		crop->left = infmt->width - crop->width;
+	/* adjust crop left/width to h/w alignment restrictions */
+	crop->left &= ~0x3;
+	crop->width &= ~0x7;
+
+	/*
+	 * FIXME: not sure why yet, but on interlaced bt.656,
+	 * changing the vertical cropping causes loss of vertical
+	 * sync, so fix it to NTSC/PAL active lines. NTSC contains
+	 * 2 extra lines of active video that need to be cropped.
+	 */
+	if (sensor_ep->bus_type == V4L2_MBUS_BT656) {
+		ret = v4l2_subdev_call(sensor->sd, video, g_std, &std);
+		if (ret)
+			return ret;
+		if (std & V4L2_STD_525_60) {
+			crop->top = 2;
+			crop->height = 480;
+		} else {
+			crop->top = 0;
+			crop->height = 576;
+		}
+	} else {
+		crop->height = min_t(__u32, infmt->height, crop->height);
+		if (crop->top + crop->height > infmt->height)
+			crop->top = infmt->height - crop->height;
+	}
+
+	return 0;
+}
+
+static int csi_enum_mbus_code(struct v4l2_subdev *sd,
+			      struct v4l2_subdev_pad_config *cfg,
+			      struct v4l2_subdev_mbus_code_enum *code)
+{
+	if (code->pad >= CSI_NUM_PADS)
+		return -EINVAL;
+
+	if (code->pad == CSI_SRC_PAD_DIRECT)
+		return imx_media_enum_ipu_format(NULL, &code->code,
+						 code->index, true);
+
+	return imx_media_enum_format(NULL, &code->code, code->index,
+				     true, false);
+}
+
+static int csi_get_fmt(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       struct v4l2_subdev_format *sdformat)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+
+	if (sdformat->pad >= CSI_NUM_PADS)
+		return -EINVAL;
+
+	sdformat->format = priv->format_mbus[sdformat->pad];
+
+	return 0;
+}
+
+static int csi_set_fmt(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       struct v4l2_subdev_format *sdformat)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	const struct imx_media_pixfmt *cc, *incc;
+	struct v4l2_mbus_framefmt *infmt;
+	struct imx_media_subdev *sensor;
+	struct v4l2_rect crop;
+	u32 code;
+	int ret;
+
+	if (sdformat->pad >= CSI_NUM_PADS)
+		return -EINVAL;
+
+	if (priv->stream_on)
+		return -EBUSY;
+
+	infmt = &priv->format_mbus[CSI_SINK_PAD];
+
+	sensor = imx_media_find_sensor(priv->md, &priv->sd.entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(&priv->sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W,
+			      W_ALIGN, &sdformat->format.height,
+			      MIN_H, MAX_H, H_ALIGN, S_ALIGN);
+
+	switch (sdformat->pad) {
+	case CSI_SRC_PAD_DIRECT:
+	case CSI_SRC_PAD_IDMAC:
+		crop.left = priv->crop.left;
+		crop.top = priv->crop.top;
+		crop.width = sdformat->format.width;
+		crop.height = sdformat->format.height;
+		ret = csi_try_crop(priv, &crop, sensor);
+		if (ret)
+			return ret;
+		sdformat->format.width = crop.width;
+		sdformat->format.height = crop.height;
+
+		if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
+			cc = imx_media_find_format(0, sdformat->format.code,
+						   true, false);
+			if (!cc) {
+				imx_media_enum_format(NULL, &code, 0,
+						      true, false);
+				cc = imx_media_find_format(0, code,
+							   true, false);
+				sdformat->format.code = cc->codes[0];
+			}
+
+			incc = priv->cc[CSI_SINK_PAD];
+			if (cc->cs != incc->cs) {
+				sdformat->format.code = infmt->code;
+				cc = imx_media_find_format(
+					0, sdformat->format.code,
+					true, false);
+			}
+
+			if (sdformat->format.field != V4L2_FIELD_NONE)
+				sdformat->format.field = infmt->field;
+		} else {
+			cc = imx_media_find_ipu_format(0, sdformat->format.code,
+						       true);
+			if (!cc) {
+				imx_media_enum_ipu_format(NULL, &code, 0, true);
+				cc = imx_media_find_ipu_format(0, code, true);
+				sdformat->format.code = cc->codes[0];
+			}
+
+			sdformat->format.field = infmt->field;
+		}
+
+		/*
+		 * translate V4L2_FIELD_ALTERNATE to SEQ_TB or SEQ_BT
+		 * depending on video standard from sensor
+		 */
+		if (sdformat->format.field == V4L2_FIELD_ALTERNATE) {
+			v4l2_std_id std;
+
+			ret = v4l2_subdev_call(sensor->sd, video, g_std, &std);
+			if (ret)
+				return ret;
+			sdformat->format.field = (std & V4L2_STD_525_60) ?
+				V4L2_FIELD_SEQ_TB : V4L2_FIELD_SEQ_BT;
+		}
+		break;
+	case CSI_SINK_PAD:
+		cc = imx_media_find_format(0, sdformat->format.code,
+					   true, false);
+		if (!cc) {
+			imx_media_enum_format(NULL, &code, 0, true, false);
+			cc = imx_media_find_format(0, code, true, false);
+			sdformat->format.code = cc->codes[0];
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
+		cfg->try_fmt = sdformat->format;
+	} else {
+		priv->format_mbus[sdformat->pad] = sdformat->format;
+		priv->cc[sdformat->pad] = cc;
+		/* Update the crop window if this is an output pad  */
+		if (sdformat->pad == CSI_SRC_PAD_DIRECT ||
+		    sdformat->pad == CSI_SRC_PAD_IDMAC)
+			priv->crop = crop;
+	}
+
+	return 0;
+}
+
+static int csi_get_selection(struct v4l2_subdev *sd,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_selection *sel)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_mbus_framefmt *infmt;
+
+	if (sel->pad >= CSI_NUM_PADS || sel->pad == CSI_SINK_PAD)
+		return -EINVAL;
+
+	infmt = &priv->format_mbus[CSI_SINK_PAD];
+
+	switch (sel->target) {
+	case V4L2_SEL_TGT_CROP_BOUNDS:
+		sel->r.left = 0;
+		sel->r.top = 0;
+		sel->r.width = infmt->width;
+		sel->r.height = infmt->height;
+		break;
+	case V4L2_SEL_TGT_CROP:
+		sel->r = priv->crop;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int csi_set_selection(struct v4l2_subdev *sd,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_selection *sel)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_mbus_framefmt *outfmt;
+	struct imx_media_subdev *sensor;
+	int ret;
+
+	if (sel->pad >= CSI_NUM_PADS ||
+	    sel->pad == CSI_SINK_PAD ||
+	    sel->target != V4L2_SEL_TGT_CROP)
+		return -EINVAL;
+
+	if (priv->stream_on)
+		return -EBUSY;
+
+	sensor = imx_media_find_sensor(priv->md, &priv->sd.entity);
+	if (IS_ERR(sensor)) {
+		v4l2_err(&priv->sd, "no sensor attached\n");
+		return PTR_ERR(sensor);
+	}
+
+	/*
+	 * Modifying the crop rectangle always changes the format on the source
+	 * pad. If the KEEP_CONFIG flag is set, just return the current crop
+	 * rectangle.
+	 */
+	if (sel->flags & V4L2_SEL_FLAG_KEEP_CONFIG) {
+		sel->r = priv->crop;
+		if (sel->which == V4L2_SUBDEV_FORMAT_TRY)
+			cfg->try_crop = sel->r;
+		return 0;
+	}
+
+	outfmt = &priv->format_mbus[sel->pad];
+
+	ret = csi_try_crop(priv, &sel->r, sensor);
+	if (ret)
+		return ret;
+
+	if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
+		cfg->try_crop = sel->r;
+	} else {
+		priv->crop = sel->r;
+		/* Update the source format */
+		outfmt->width = sel->r.width;
+		outfmt->height = sel->r.height;
+	}
+
+	return 0;
+}
+
+/*
+ * retrieve our pads parsed from the OF graph by the media device
+ */
+static int csi_registered(struct v4l2_subdev *sd)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	int i, ret;
+	u32 code;
+
+	/* get media device */
+	priv->md = dev_get_drvdata(sd->v4l2_dev->dev);
+
+	/* get handle to IPU CSI */
+	priv->csi = ipu_csi_get(priv->ipu, priv->csi_id);
+	if (IS_ERR(priv->csi)) {
+		v4l2_err(&priv->sd, "failed to get CSI%d\n", priv->csi_id);
+		return PTR_ERR(priv->csi);
+	}
+
+	for (i = 0; i < CSI_NUM_PADS; i++) {
+		priv->pad[i].flags = (i == CSI_SINK_PAD) ?
+			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+
+		code = 0;
+		if (i == CSI_SRC_PAD_DIRECT)
+			imx_media_enum_ipu_format(NULL, &code, 0, true);
+
+		/* set a default mbus format  */
+		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
+					      640, 480, code, V4L2_FIELD_NONE,
+					      &priv->cc[i]);
+		if (ret)
+			goto put_csi;
+	}
+
+	priv->fim = imx_media_fim_init(&priv->sd);
+	if (IS_ERR(priv->fim)) {
+		ret = PTR_ERR(priv->fim);
+		goto put_csi;
+	}
+
+	ret = media_entity_pads_init(&sd->entity, CSI_NUM_PADS, priv->pad);
+	if (ret)
+		goto free_fim;
+
+	ret = imx_media_capture_device_register(priv->vdev);
+	if (ret)
+		goto free_fim;
+
+	return 0;
+
+free_fim:
+	if (priv->fim)
+		imx_media_fim_free(priv->fim);
+put_csi:
+	ipu_csi_put(priv->csi);
+	return ret;
+}
+
+static void csi_unregistered(struct v4l2_subdev *sd)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+
+	imx_media_capture_device_unregister(priv->vdev);
+
+	if (priv->fim)
+		imx_media_fim_free(priv->fim);
+
+	if (!IS_ERR_OR_NULL(priv->csi))
+		ipu_csi_put(priv->csi);
+}
+
+static struct media_entity_operations csi_entity_ops = {
+	.link_setup = csi_link_setup,
+	.link_validate = v4l2_subdev_link_validate,
+};
+
+static struct v4l2_subdev_core_ops csi_core_ops = {
+	.s_power = csi_s_power,
+	.interrupt_service_routine = csi_eof_isr,
+};
+
+static struct v4l2_subdev_video_ops csi_video_ops = {
+	.s_stream = csi_s_stream,
+};
+
+static struct v4l2_subdev_pad_ops csi_pad_ops = {
+	.enum_mbus_code = csi_enum_mbus_code,
+	.get_fmt = csi_get_fmt,
+	.set_fmt = csi_set_fmt,
+	.get_selection = csi_get_selection,
+	.set_selection = csi_set_selection,
+	.link_validate = csi_link_validate,
+};
+
+static struct v4l2_subdev_ops csi_subdev_ops = {
+	.core = &csi_core_ops,
+	.video = &csi_video_ops,
+	.pad = &csi_pad_ops,
+};
+
+static struct v4l2_subdev_internal_ops csi_internal_ops = {
+	.registered = csi_registered,
+	.unregistered = csi_unregistered,
+};
+
+static int imx_csi_probe(struct platform_device *pdev)
+{
+	struct ipu_client_platformdata *pdata;
+	struct csi_priv *priv;
+	int ret;
+
+	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, &priv->sd);
+	priv->dev = &pdev->dev;
+
+	ret = dma_set_coherent_mask(priv->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
+
+	/* get parent IPU */
+	priv->ipu = dev_get_drvdata(priv->dev->parent);
+
+	/* get our CSI id */
+	pdata = priv->dev->platform_data;
+	priv->csi_id = pdata->csi;
+	priv->smfc_id = (priv->csi_id == 0) ? 0 : 2;
+
+	init_timer(&priv->eof_timeout_timer);
+	priv->eof_timeout_timer.data = (unsigned long)priv;
+	priv->eof_timeout_timer.function = csi_idmac_eof_timeout;
+	spin_lock_init(&priv->irqlock);
+
+	v4l2_subdev_init(&priv->sd, &csi_subdev_ops);
+	v4l2_set_subdevdata(&priv->sd, priv);
+	priv->sd.internal_ops = &csi_internal_ops;
+	priv->sd.entity.ops = &csi_entity_ops;
+	priv->sd.entity.function = MEDIA_ENT_F_PROC_VIDEO_PIXEL_FORMATTER;
+	priv->sd.dev = &pdev->dev;
+	priv->sd.owner = THIS_MODULE;
+	priv->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+	priv->sd.grp_id = priv->csi_id ?
+		IMX_MEDIA_GRP_ID_CSI1 : IMX_MEDIA_GRP_ID_CSI0;
+	imx_media_grp_id_to_sd_name(priv->sd.name, sizeof(priv->sd.name),
+				    priv->sd.grp_id, ipu_get_num(priv->ipu));
+
+	priv->vdev = imx_media_capture_device_init(&priv->sd,
+						   CSI_SRC_PAD_IDMAC);
+	if (IS_ERR(priv->vdev))
+		return PTR_ERR(priv->vdev);
+
+	v4l2_ctrl_handler_init(&priv->ctrl_hdlr, 0);
+	priv->sd.ctrl_handler = &priv->ctrl_hdlr;
+
+	ret = v4l2_async_register_subdev(&priv->sd);
+	if (ret)
+		v4l2_ctrl_handler_free(&priv->ctrl_hdlr);
+
+	return ret;
+}
+
+static int imx_csi_remove(struct platform_device *pdev)
+{
+	struct v4l2_subdev *sd = platform_get_drvdata(pdev);
+	struct csi_priv *priv = sd_to_dev(sd);
+
+	imx_media_capture_device_remove(priv->vdev);
+	v4l2_async_unregister_subdev(sd);
+	media_entity_cleanup(&sd->entity);
+
+	return 0;
+}
+
+static const struct platform_device_id imx_csi_ids[] = {
+	{ .name = "imx-ipuv3-csi" },
+	{ },
+};
+MODULE_DEVICE_TABLE(platform, imx_csi_ids);
+
+static struct platform_driver imx_csi_driver = {
+	.probe = imx_csi_probe,
+	.remove = imx_csi_remove,
+	.id_table = imx_csi_ids,
+	.driver = {
+		.name = "imx-ipuv3-csi",
+	},
+};
+module_platform_driver(imx_csi_driver);
+
+MODULE_DESCRIPTION("i.MX CSI subdev driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:imx-ipuv3-csi");
-- 
2.7.4


* [PATCH v4 21/36] media: imx: Add VDIC subdev driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (19 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 20/36] media: imx: Add CSI subdev driver Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 22/36] media: imx: Add IC subdev drivers Steve Longerbeam
                   ` (16 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

This is a media entity subdevice driver for the i.MX Video De-Interlacing
or Combining Block. So far this entity implements only motion-compensated
de-interlacing; the Combining function is not yet supported. Video frames
are received from the CSI and are routed to the IC PRPVF entity.
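
For illustration only (not part of this patch): the motion mode is
selected through the custom V4L2_CID_IMX_MOTION control added by this
series, which the downstream capture video node can inherit from this
subdev (see __v4l2_pipeline_inherit_controls() in the link setup code).
A minimal userspace sketch, assuming the control id macro (taken here
from this series' media/imx.h) is reachable from userspace and that
/dev/video0 is the capture node at the end of the pipeline:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <media/imx.h>	/* V4L2_CID_IMX_MOTION; header path assumed */

/* select a VDIC motion mode via the inherited control */
int vdic_set_motion(const char *video_node, int motion)
{
	/* motion is 0..3; 3 is assumed to correspond to HIGH_MOTION */
	struct v4l2_control ctrl = {
		.id	= V4L2_CID_IMX_MOTION,
		.value	= motion,
	};
	int fd, ret;

	fd = open(video_node, O_RDWR);
	if (fd < 0)
		return -1;
	ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);
	close(fd);
	return ret;
}

Note that when the direct CSI->VDIC path is in use, the driver accepts
only the high motion setting (see vdic_link_validate() below).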

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/Makefile         |   1 +
 drivers/staging/media/imx/imx-media-vdic.c | 886 +++++++++++++++++++++++++++++
 2 files changed, 887 insertions(+)
 create mode 100644 drivers/staging/media/imx/imx-media-vdic.c

diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
index c054490..1f01520 100644
--- a/drivers/staging/media/imx/Makefile
+++ b/drivers/staging/media/imx/Makefile
@@ -4,5 +4,6 @@ imx-media-common-objs := imx-media-utils.o imx-media-fim.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-capture.o
+obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
 
 obj-$(CONFIG_VIDEO_IMX_CSI) += imx-media-csi.o
diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
new file mode 100644
index 0000000..5bb21e9
--- /dev/null
+++ b/drivers/staging/media/imx/imx-media-vdic.c
@@ -0,0 +1,886 @@
+/*
+ * V4L2 Deinterlacer Subdev for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2017 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/timer.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mc.h>
+#include <media/v4l2-subdev.h>
+#include <media/imx.h>
+#include "imx-media.h"
+
+/*
+ * This subdev implements two different video pipelines:
+ *
+ * CSI -> VDIC
+ *
+ * In this pipeline, the CSI sends a single interlaced field F(n-1)
+ * directly to the VDIC (and optionally the following field F(n)
+ * can be sent to memory via IDMAC channel 13). This pipeline only works
+ * in VDIC's high motion mode, which only requires a single field for
+ * processing. The other motion modes (low and medium) require three
+ * fields, so this pipeline does not work in those modes. Also, it is
+ * not clear how this pipeline can deal with the various field orders
+ * (sequential BT/TB, interlaced BT/TB).
+ *
+ * MEM -> CH8,9,10 -> VDIC
+ *
+ * In this pipeline, previous field F(n-1), current field F(n), and next
+ * field F(n+1) are transferred to the VDIC via IDMAC channels 8,9,10.
+ * These memory buffers can come from a video output or mem2mem device.
+ * All motion modes are supported by this pipeline.
+ *
+ * The "direct" CSI->VDIC pipeline requires no DMA, but it can only be
+ * used in high motion mode.
+ */
+
+struct vdic_priv;
+
+struct vdic_pipeline_ops {
+	int (*setup)(struct vdic_priv *priv);
+	void (*start)(struct vdic_priv *priv);
+	void (*stop)(struct vdic_priv *priv);
+	void (*disable)(struct vdic_priv *priv);
+};
+
+/*
+ * Min/Max supported widths and heights.
+ */
+#define MIN_W       176
+#define MIN_H       144
+#define MAX_W_VDIC  968
+#define MAX_H_VDIC 2048
+#define W_ALIGN    4 /* multiple of 16 pixels */
+#define H_ALIGN    1 /* multiple of 2 lines */
+#define S_ALIGN    1 /* multiple of 2 */
+
+struct vdic_priv {
+	struct device        *dev;
+	struct ipu_soc       *ipu;
+	struct imx_media_dev *md;
+	struct v4l2_subdev   sd;
+	int ipu_id;
+
+	/* IPU units we require */
+	struct ipu_vdi *vdi;
+
+	struct media_pad pad[VDIC_NUM_PADS];
+	int active_input_pad;
+
+	struct ipuv3_channel *vdi_in_ch_p; /* F(n-1) transfer channel */
+	struct ipuv3_channel *vdi_in_ch;   /* F(n) transfer channel */
+	struct ipuv3_channel *vdi_in_ch_n; /* F(n+1) transfer channel */
+
+	/* pipeline operations */
+	struct vdic_pipeline_ops *ops;
+
+	/* current and previous input buffers for the indirect path */
+	struct imx_media_buffer *curr_in_buf;
+	struct imx_media_buffer *prev_in_buf;
+
+	/*
+	 * translated field type, input line stride, and field size
+	 * for indirect path
+	 */
+	u32 fieldtype;
+	u32 in_stride;
+	u32 field_size;
+
+	/* the source (a video device or subdev) */
+	struct media_entity *src;
+	/* the sink that will receive the progressive out buffers */
+	struct v4l2_subdev *sink_sd;
+
+	/* the attached sensor at stream on */
+	struct imx_media_subdev *sensor;
+
+	/* the video standard from sensor at time of streamon */
+	v4l2_std_id std;
+
+	struct v4l2_mbus_framefmt format_mbus[VDIC_NUM_PADS];
+	const struct imx_media_pixfmt *cc[VDIC_NUM_PADS];
+
+	/* the video device at IDMAC input pad */
+	struct imx_media_video_dev *vdev;
+
+	bool csi_direct;  /* using direct CSI->VDIC->IC pipeline */
+
+	/* motion select control */
+	struct v4l2_ctrl_handler ctrl_hdlr;
+	enum ipu_motion_sel motion;
+
+	bool stream_on; /* streaming is on */
+};
+
+static void vdic_put_ipu_resources(struct vdic_priv *priv)
+{
+	if (!IS_ERR_OR_NULL(priv->vdi_in_ch_p))
+		ipu_idmac_put(priv->vdi_in_ch_p);
+	priv->vdi_in_ch_p = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->vdi_in_ch))
+		ipu_idmac_put(priv->vdi_in_ch);
+	priv->vdi_in_ch = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->vdi_in_ch_n))
+		ipu_idmac_put(priv->vdi_in_ch_n);
+	priv->vdi_in_ch_n = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->vdi))
+		ipu_vdi_put(priv->vdi);
+	priv->vdi = NULL;
+}
+
+static int vdic_get_ipu_resources(struct vdic_priv *priv)
+{
+	int ret, err_chan;
+
+	priv->ipu = priv->md->ipu[priv->ipu_id];
+
+	priv->vdi = ipu_vdi_get(priv->ipu);
+	if (IS_ERR(priv->vdi)) {
+		v4l2_err(&priv->sd, "failed to get VDIC\n");
+		ret = PTR_ERR(priv->vdi);
+		goto out;
+	}
+
+	if (!priv->csi_direct) {
+		priv->vdi_in_ch_p = ipu_idmac_get(priv->ipu,
+						  IPUV3_CHANNEL_MEM_VDI_PREV);
+		if (IS_ERR(priv->vdi_in_ch_p)) {
+			err_chan = IPUV3_CHANNEL_MEM_VDI_PREV;
+			ret = PTR_ERR(priv->vdi_in_ch_p);
+			goto out_err_chan;
+		}
+
+		priv->vdi_in_ch = ipu_idmac_get(priv->ipu,
+						IPUV3_CHANNEL_MEM_VDI_CUR);
+		if (IS_ERR(priv->vdi_in_ch)) {
+			err_chan = IPUV3_CHANNEL_MEM_VDI_CUR;
+			ret = PTR_ERR(priv->vdi_in_ch);
+			goto out_err_chan;
+		}
+
+		priv->vdi_in_ch_n = ipu_idmac_get(priv->ipu,
+						  IPUV3_CHANNEL_MEM_VDI_NEXT);
+		if (IS_ERR(priv->vdi_in_ch_n)) {
+			err_chan = IPUV3_CHANNEL_MEM_VDI_NEXT;
+			ret = PTR_ERR(priv->vdi_in_ch_n);
+			goto out_err_chan;
+		}
+	}
+
+	return 0;
+
+out_err_chan:
+	v4l2_err(&priv->sd, "could not get IDMAC channel %u\n", err_chan);
+out:
+	vdic_put_ipu_resources(priv);
+	return ret;
+}
+
+/*
+ * This function is currently unused, but will be called when the
+ * output/mem2mem device at the IDMAC input pad sends us a new
+ * buffer. It kicks off the IDMAC read channels to bring in the
+ * buffer fields from memory and begin the conversions.
+ */
+static void __maybe_unused prepare_vdi_in_buffers(struct vdic_priv *priv,
+						  struct imx_media_buffer *curr)
+{
+	dma_addr_t prev_phys, curr_phys, next_phys;
+	struct imx_media_buffer *prev;
+	struct vb2_buffer *curr_vb, *prev_vb;
+	u32 fs = priv->field_size;
+	u32 is = priv->in_stride;
+
+	/* current input buffer is now previous */
+	priv->prev_in_buf = priv->curr_in_buf;
+	priv->curr_in_buf = curr;
+	prev = priv->prev_in_buf ? priv->prev_in_buf : curr;
+
+	prev_vb = &prev->vbuf.vb2_buf;
+	curr_vb = &curr->vbuf.vb2_buf;
+
+	switch (priv->fieldtype) {
+	case V4L2_FIELD_SEQ_TB:
+		prev_phys = vb2_dma_contig_plane_dma_addr(prev_vb, 0);
+		curr_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0) + fs;
+		next_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0);
+		break;
+	case V4L2_FIELD_SEQ_BT:
+		prev_phys = vb2_dma_contig_plane_dma_addr(prev_vb, 0) + fs;
+		curr_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0);
+		next_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0) + fs;
+		break;
+	case V4L2_FIELD_INTERLACED_BT:
+		prev_phys = vb2_dma_contig_plane_dma_addr(prev_vb, 0) + is;
+		curr_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0);
+		next_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0) + is;
+		break;
+	default:
+		/* assume V4L2_FIELD_INTERLACED_TB */
+		prev_phys = vb2_dma_contig_plane_dma_addr(prev_vb, 0);
+		curr_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0) + is;
+		next_phys = vb2_dma_contig_plane_dma_addr(curr_vb, 0);
+		break;
+	}
+
+	ipu_cpmem_set_buffer(priv->vdi_in_ch_p, 0, prev_phys);
+	ipu_cpmem_set_buffer(priv->vdi_in_ch,   0, curr_phys);
+	ipu_cpmem_set_buffer(priv->vdi_in_ch_n, 0, next_phys);
+
+	ipu_idmac_select_buffer(priv->vdi_in_ch_p, 0);
+	ipu_idmac_select_buffer(priv->vdi_in_ch, 0);
+	ipu_idmac_select_buffer(priv->vdi_in_ch_n, 0);
+}
+
+static int setup_vdi_channel(struct vdic_priv *priv,
+			     struct ipuv3_channel *channel,
+			     dma_addr_t phys0, dma_addr_t phys1)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	unsigned int burst_size;
+	struct ipu_image image;
+	int ret;
+
+	ipu_cpmem_zero(channel);
+
+	memset(&image, 0, sizeof(image));
+	image.pix = vdev->fmt.fmt.pix;
+	/* one field to VDIC channels */
+	image.pix.height /= 2;
+	image.rect.width = image.pix.width;
+	image.rect.height = image.pix.height;
+	image.phys0 = phys0;
+	image.phys1 = phys1;
+
+	ret = ipu_cpmem_set_image(channel, &image);
+	if (ret)
+		return ret;
+
+	burst_size = (image.pix.width & 0xf) ? 8 : 16;
+	ipu_cpmem_set_burstsize(channel, burst_size);
+
+	ipu_cpmem_set_axi_id(channel, 1);
+
+	ipu_idmac_set_double_buffer(channel, false);
+
+	return 0;
+}
+
+static int vdic_setup_direct(struct vdic_priv *priv)
+{
+	/* set VDIC to receive from CSI for direct path */
+	ipu_fsu_link(priv->ipu, IPUV3_CHANNEL_CSI_DIRECT,
+		     IPUV3_CHANNEL_CSI_VDI_PREV);
+
+	return 0;
+}
+
+static void vdic_start_direct(struct vdic_priv *priv)
+{
+}
+
+static void vdic_stop_direct(struct vdic_priv *priv)
+{
+}
+
+static void vdic_disable_direct(struct vdic_priv *priv)
+{
+	ipu_fsu_unlink(priv->ipu, IPUV3_CHANNEL_CSI_DIRECT,
+		       IPUV3_CHANNEL_CSI_VDI_PREV);
+}
+
+static int vdic_setup_indirect(struct vdic_priv *priv)
+{
+	struct v4l2_mbus_framefmt *infmt;
+	const struct imx_media_pixfmt *incc;
+	int in_size, ret;
+
+	infmt = &priv->format_mbus[VDIC_SINK_PAD_IDMAC];
+	incc = priv->cc[VDIC_SINK_PAD_IDMAC];
+
+	in_size = (infmt->width * incc->bpp * infmt->height) >> 3;
+
+	/* 1/2 full image size */
+	priv->field_size = in_size / 2;
+	priv->in_stride = incc->planar ?
+		infmt->width : (infmt->width * incc->bpp) >> 3;
+
+	priv->prev_in_buf = NULL;
+	priv->curr_in_buf = NULL;
+
+	priv->fieldtype = infmt->field;
+
+	/* init the vdi-in channels */
+	ret = setup_vdi_channel(priv, priv->vdi_in_ch_p, 0, 0);
+	if (ret)
+		return ret;
+	ret = setup_vdi_channel(priv, priv->vdi_in_ch, 0, 0);
+	if (ret)
+		return ret;
+	return setup_vdi_channel(priv, priv->vdi_in_ch_n, 0, 0);
+}
+
+static void vdic_start_indirect(struct vdic_priv *priv)
+{
+	/* enable the channels */
+	ipu_idmac_enable_channel(priv->vdi_in_ch_p);
+	ipu_idmac_enable_channel(priv->vdi_in_ch);
+	ipu_idmac_enable_channel(priv->vdi_in_ch_n);
+}
+
+static void vdic_stop_indirect(struct vdic_priv *priv)
+{
+	/* disable channels */
+	ipu_idmac_disable_channel(priv->vdi_in_ch_p);
+	ipu_idmac_disable_channel(priv->vdi_in_ch);
+	ipu_idmac_disable_channel(priv->vdi_in_ch_n);
+}
+
+static void vdic_disable_indirect(struct vdic_priv *priv)
+{
+}
+
+static struct vdic_pipeline_ops direct_ops = {
+	.setup = vdic_setup_direct,
+	.start = vdic_start_direct,
+	.stop = vdic_stop_direct,
+	.disable = vdic_disable_direct,
+};
+
+static struct vdic_pipeline_ops indirect_ops = {
+	.setup = vdic_setup_indirect,
+	.start = vdic_start_indirect,
+	.stop = vdic_stop_indirect,
+	.disable = vdic_disable_indirect,
+};
+
+static int vdic_start(struct vdic_priv *priv)
+{
+	struct v4l2_mbus_framefmt *infmt;
+	int ret;
+
+	if (!priv->sensor) {
+		v4l2_err(&priv->sd, "no sensor attached\n");
+		return -EINVAL;
+	}
+
+	infmt = &priv->format_mbus[priv->active_input_pad];
+
+	priv->ops = priv->csi_direct ? &direct_ops : &indirect_ops;
+
+	ret = vdic_get_ipu_resources(priv);
+	if (ret)
+		return ret;
+
+	ret = v4l2_subdev_call(priv->sensor->sd, video, g_std, &priv->std);
+	if (ret)
+		goto out_put_ipu;
+
+	/*
+	 * init the VDIC.
+	 *
+	 * note we don't give infmt->code to ipu_vdi_setup(). The VDIC
+	 * only supports 4:2:2 or 4:2:0, and this subdev will only
+	 * negotiate 4:2:2 at its sink pads.
+	 */
+	ipu_vdi_setup(priv->vdi, MEDIA_BUS_FMT_UYVY8_2X8,
+		      infmt->width, infmt->height);
+	ipu_vdi_set_field_order(priv->vdi, priv->std, infmt->field);
+	ipu_vdi_set_motion(priv->vdi, priv->motion);
+
+	ret = priv->ops->setup(priv);
+	if (ret)
+		goto out_put_ipu;
+
+	ipu_vdi_enable(priv->vdi);
+
+	priv->ops->start(priv);
+
+	return 0;
+
+out_put_ipu:
+	vdic_put_ipu_resources(priv);
+	return ret;
+}
+
+static void vdic_stop(struct vdic_priv *priv)
+{
+	priv->ops->stop(priv);
+	ipu_vdi_disable(priv->vdi);
+	priv->ops->disable(priv);
+
+	vdic_put_ipu_resources(priv);
+}
+
+static int vdic_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vdic_priv *priv = container_of(ctrl->handler,
+					      struct vdic_priv, ctrl_hdlr);
+	enum ipu_motion_sel motion;
+
+	switch (ctrl->id) {
+	case V4L2_CID_IMX_MOTION:
+		motion = ctrl->val;
+		if (motion != priv->motion) {
+			/* can't change motion control mid-streaming */
+			if (priv->stream_on)
+				return -EBUSY;
+			priv->motion = motion;
+		}
+		break;
+	default:
+		v4l2_err(&priv->sd, "Invalid control\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vdic_ctrl_ops = {
+	.s_ctrl = vdic_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vdic_custom_ctrl[] = {
+	{
+		.ops = &vdic_ctrl_ops,
+		.id = V4L2_CID_IMX_MOTION,
+		.name = "Motion Compensation",
+		.type = V4L2_CTRL_TYPE_INTEGER,
+		.def = HIGH_MOTION,
+		.min = MOTION_NONE,
+		.max = HIGH_MOTION,
+		.step = 1,
+	},
+};
+
+#define VDIC_NUM_CONTROLS ARRAY_SIZE(vdic_custom_ctrl)
+
+static int vdic_init_controls(struct vdic_priv *priv)
+{
+	struct v4l2_ctrl_handler *hdlr = &priv->ctrl_hdlr;
+	const struct v4l2_ctrl_config *c;
+	int i, ret;
+
+	v4l2_ctrl_handler_init(hdlr, VDIC_NUM_CONTROLS);
+
+	for (i = 0; i < VDIC_NUM_CONTROLS; i++) {
+		c = &vdic_custom_ctrl[i];
+		v4l2_ctrl_new_custom(hdlr, c, NULL);
+	}
+
+	priv->sd.ctrl_handler = hdlr;
+
+	if (hdlr->error) {
+		ret = hdlr->error;
+		goto out_free;
+	}
+
+	v4l2_ctrl_handler_setup(hdlr);
+	return 0;
+
+out_free:
+	v4l2_ctrl_handler_free(hdlr);
+	return ret;
+}
+
+static int vdic_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+	int ret = 0;
+
+	if (!priv->src || !priv->sink_sd)
+		return -EPIPE;
+
+	dev_dbg(priv->dev, "stream %s\n", enable ? "ON" : "OFF");
+
+	if (enable && !priv->stream_on)
+		ret = vdic_start(priv);
+	else if (!enable && priv->stream_on)
+		vdic_stop(priv);
+
+	if (!ret)
+		priv->stream_on = enable;
+	return ret;
+}
+
+static int vdic_enum_mbus_code(struct v4l2_subdev *sd,
+			       struct v4l2_subdev_pad_config *cfg,
+			       struct v4l2_subdev_mbus_code_enum *code)
+{
+	if (code->pad >= VDIC_NUM_PADS)
+		return -EINVAL;
+
+	if (code->pad == VDIC_SINK_PAD_IDMAC)
+		return imx_media_enum_format(NULL, &code->code, code->index,
+					     false, false);
+
+	return imx_media_enum_ipu_format(NULL, &code->code, code->index, false);
+}
+
+static struct v4l2_mbus_framefmt *
+__vdic_get_fmt(struct vdic_priv *priv, struct v4l2_subdev_pad_config *cfg,
+	       unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+	if (which == V4L2_SUBDEV_FORMAT_TRY)
+		return v4l2_subdev_get_try_format(&priv->sd, cfg, pad);
+	else
+		return &priv->format_mbus[pad];
+}
+
+static int vdic_get_fmt(struct v4l2_subdev *sd,
+			struct v4l2_subdev_pad_config *cfg,
+			struct v4l2_subdev_format *sdformat)
+{
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_mbus_framefmt *fmt;
+
+	if (sdformat->pad >= VDIC_NUM_PADS)
+		return -EINVAL;
+
+	fmt = __vdic_get_fmt(priv, cfg, sdformat->pad, sdformat->which);
+	if (!fmt)
+		return -EINVAL;
+
+	sdformat->format = *fmt;
+
+	return 0;
+}
+
+static int vdic_set_fmt(struct v4l2_subdev *sd,
+			struct v4l2_subdev_pad_config *cfg,
+			struct v4l2_subdev_format *sdformat)
+{
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+	const struct imx_media_pixfmt *cc;
+	struct v4l2_mbus_framefmt *infmt;
+	u32 code;
+
+	if (sdformat->pad >= VDIC_NUM_PADS)
+		return -EINVAL;
+
+	if (priv->stream_on)
+		return -EBUSY;
+
+	v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W_VDIC,
+			      W_ALIGN, &sdformat->format.height,
+			      MIN_H, MAX_H_VDIC, H_ALIGN, S_ALIGN);
+
+	switch (sdformat->pad) {
+	case VDIC_SRC_PAD_DIRECT:
+		infmt = __vdic_get_fmt(priv, cfg, priv->active_input_pad,
+				       sdformat->which);
+
+		cc = imx_media_find_ipu_format(0, sdformat->format.code, false);
+		if (!cc) {
+			imx_media_enum_ipu_format(NULL, &code, 0, false);
+			cc = imx_media_find_ipu_format(0, code, false);
+			sdformat->format.code = cc->codes[0];
+		}
+
+		sdformat->format.width = infmt->width;
+		sdformat->format.height = infmt->height;
+		/* output is always progressive! */
+		sdformat->format.field = V4L2_FIELD_NONE;
+		break;
+	case VDIC_SINK_PAD_IDMAC:
+	case VDIC_SINK_PAD_DIRECT:
+		if (sdformat->pad == VDIC_SINK_PAD_DIRECT) {
+			cc = imx_media_find_ipu_format(0, sdformat->format.code,
+						       false);
+			if (!cc) {
+				imx_media_enum_ipu_format(NULL, &code, 0,
+							  false);
+				cc = imx_media_find_ipu_format(0, code, false);
+				sdformat->format.code = cc->codes[0];
+			}
+		} else {
+			cc = imx_media_find_format(0, sdformat->format.code,
+						   false, false);
+			if (!cc) {
+				imx_media_enum_format(NULL, &code, 0,
+						      false, false);
+				cc = imx_media_find_format(0, code,
+							   false, false);
+				sdformat->format.code = cc->codes[0];
+			}
+		}
+
+		/* input must be interlaced! Choose SEQ_TB if not */
+		if (!V4L2_FIELD_HAS_BOTH(sdformat->format.field))
+			sdformat->format.field = V4L2_FIELD_SEQ_TB;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
+		cfg->try_fmt = sdformat->format;
+	} else {
+		priv->format_mbus[sdformat->pad] = sdformat->format;
+		priv->cc[sdformat->pad] = cc;
+	}
+
+	return 0;
+}
+
+static int vdic_link_setup(struct media_entity *entity,
+			    const struct media_pad *local,
+			    const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_subdev *remote_sd;
+	int ret;
+
+	dev_dbg(priv->dev, "link setup %s -> %s", remote->entity->name,
+		local->entity->name);
+
+	if (local->flags & MEDIA_PAD_FL_SOURCE) {
+		if (!is_media_entity_v4l2_subdev(remote->entity))
+			return -EINVAL;
+
+		remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+
+		if (flags & MEDIA_LNK_FL_ENABLED) {
+			if (priv->sink_sd)
+				return -EBUSY;
+			priv->sink_sd = remote_sd;
+		} else {
+			priv->sink_sd = NULL;
+		}
+
+		return 0;
+	}
+
+	/* this is a sink pad */
+
+	if (flags & MEDIA_LNK_FL_ENABLED) {
+		if (priv->src)
+			return -EBUSY;
+	} else {
+		priv->src = NULL;
+		return 0;
+	}
+
+	if (local->index == VDIC_SINK_PAD_IDMAC) {
+		struct imx_media_video_dev *vdev = priv->vdev;
+
+		if (!is_media_entity_v4l2_video_device(remote->entity))
+			return -EINVAL;
+
+		if (!vdev)
+			return -ENOSYS;
+
+		/* reset video device controls to refresh from subdevs */
+		v4l2_ctrl_handler_free(vdev->vfd->ctrl_handler);
+		v4l2_ctrl_handler_init(vdev->vfd->ctrl_handler, 0);
+		ret = __v4l2_pipeline_inherit_controls(vdev->vfd,
+						       &priv->sd.entity);
+		if (ret)
+			return ret;
+
+		priv->csi_direct = false;
+	} else {
+		if (!is_media_entity_v4l2_subdev(remote->entity))
+			return -EINVAL;
+
+		remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+
+		/* direct pad must connect to a CSI */
+		if (!(remote_sd->grp_id & IMX_MEDIA_GRP_ID_CSI) ||
+		    remote->index != CSI_SRC_PAD_DIRECT)
+			return -EINVAL;
+
+		priv->csi_direct = true;
+	}
+
+	priv->src = remote->entity;
+	/* record which input pad is now active */
+	priv->active_input_pad = local->index;
+
+	return 0;
+}
+
+static int vdic_link_validate(struct v4l2_subdev *sd,
+			      struct media_link *link,
+			      struct v4l2_subdev_format *source_fmt,
+			      struct v4l2_subdev_format *sink_fmt)
+{
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+	int ret;
+
+	ret = v4l2_subdev_link_validate_default(sd, link,
+						source_fmt, sink_fmt);
+	if (ret)
+		return ret;
+
+	priv->sensor = __imx_media_find_sensor(priv->md, &priv->sd.entity);
+	if (IS_ERR(priv->sensor)) {
+		v4l2_err(&priv->sd, "no sensor attached\n");
+		ret = PTR_ERR(priv->sensor);
+		priv->sensor = NULL;
+		return ret;
+	}
+
+	if (priv->csi_direct && priv->motion != HIGH_MOTION) {
+		v4l2_err(&priv->sd,
+			 "direct CSI pipeline requires high motion\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/*
+ * retrieve our pads parsed from the OF graph by the media device
+ */
+static int vdic_registered(struct v4l2_subdev *sd)
+{
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+	int i, ret;
+	u32 code;
+
+	/* get media device */
+	priv->md = dev_get_drvdata(sd->v4l2_dev->dev);
+
+	for (i = 0; i < VDIC_NUM_PADS; i++) {
+		priv->pad[i].flags = (i == VDIC_SRC_PAD_DIRECT) ?
+			MEDIA_PAD_FL_SOURCE : MEDIA_PAD_FL_SINK;
+
+		code = 0;
+		if (i != VDIC_SINK_PAD_IDMAC)
+			imx_media_enum_ipu_format(NULL, &code, 0, true);
+
+		/* set a default mbus format  */
+		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
+					      640, 480, code, V4L2_FIELD_NONE,
+					      &priv->cc[i]);
+		if (ret)
+			return ret;
+	}
+
+	priv->active_input_pad = VDIC_SINK_PAD_DIRECT;
+
+	ret = vdic_init_controls(priv);
+	if (ret)
+		return ret;
+
+	ret = media_entity_pads_init(&sd->entity, VDIC_NUM_PADS, priv->pad);
+	if (ret)
+		v4l2_ctrl_handler_free(&priv->ctrl_hdlr);
+
+	return ret;
+}
+
+static void vdic_unregistered(struct v4l2_subdev *sd)
+{
+	struct vdic_priv *priv = v4l2_get_subdevdata(sd);
+
+	v4l2_ctrl_handler_free(&priv->ctrl_hdlr);
+}
+
+static struct v4l2_subdev_pad_ops vdic_pad_ops = {
+	.enum_mbus_code = vdic_enum_mbus_code,
+	.get_fmt = vdic_get_fmt,
+	.set_fmt = vdic_set_fmt,
+	.link_validate = vdic_link_validate,
+};
+
+static struct v4l2_subdev_video_ops vdic_video_ops = {
+	.s_stream = vdic_s_stream,
+};
+
+static struct media_entity_operations vdic_entity_ops = {
+	.link_setup = vdic_link_setup,
+	.link_validate = v4l2_subdev_link_validate,
+};
+
+static struct v4l2_subdev_ops vdic_subdev_ops = {
+	.video = &vdic_video_ops,
+	.pad = &vdic_pad_ops,
+};
+
+static struct v4l2_subdev_internal_ops vdic_internal_ops = {
+	.registered = vdic_registered,
+	.unregistered = vdic_unregistered,
+};
+
+static int imx_vdic_probe(struct platform_device *pdev)
+{
+	struct imx_media_internal_sd_platformdata *pdata;
+	struct vdic_priv *priv;
+
+	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, &priv->sd);
+	priv->dev = &pdev->dev;
+
+	pdata = priv->dev->platform_data;
+	priv->ipu_id = pdata->ipu_id;
+
+	v4l2_subdev_init(&priv->sd, &vdic_subdev_ops);
+	v4l2_set_subdevdata(&priv->sd, priv);
+	priv->sd.internal_ops = &vdic_internal_ops;
+	priv->sd.entity.ops = &vdic_entity_ops;
+	priv->sd.entity.function = MEDIA_ENT_F_PROC_VIDEO_PIXEL_FORMATTER;
+	priv->sd.dev = &pdev->dev;
+	priv->sd.owner = THIS_MODULE;
+	priv->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+	/* get our group id */
+	priv->sd.grp_id = pdata->grp_id;
+	strncpy(priv->sd.name, pdata->sd_name, sizeof(priv->sd.name));
+
+	return v4l2_async_register_subdev(&priv->sd);
+}
+
+static int imx_vdic_remove(struct platform_device *pdev)
+{
+	struct v4l2_subdev *sd = platform_get_drvdata(pdev);
+
+	v4l2_info(sd, "Removing\n");
+
+	v4l2_async_unregister_subdev(sd);
+	media_entity_cleanup(&sd->entity);
+
+	return 0;
+}
+
+static const struct platform_device_id imx_vdic_ids[] = {
+	{ .name = "imx-ipuv3-vdic" },
+	{ },
+};
+MODULE_DEVICE_TABLE(platform, imx_vdic_ids);
+
+static struct platform_driver imx_vdic_driver = {
+	.probe = imx_vdic_probe,
+	.remove = imx_vdic_remove,
+	.id_table = imx_vdic_ids,
+	.driver = {
+		.name = "imx-ipuv3-vdic",
+	},
+};
+module_platform_driver(imx_vdic_driver);
+
+MODULE_DESCRIPTION("i.MX VDIC subdev driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:imx-ipuv3-vdic");
-- 
2.7.4


* [PATCH v4 22/36] media: imx: Add IC subdev drivers
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (20 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 21/36] media: imx: Add VDIC " Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver Steve Longerbeam
                   ` (15 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

This is a set of four media entity subdevice drivers for the i.MX
Image Converter:

- Pre-process Router: Takes input frames from CSI0, CSI1, or VDIC.
  Two output pads enable either or both of the preprocess tasks
  below. If the input is from one of the CSIs, both preprocess task
  links can be enabled to process frames from that CSI simultaneously.
  If the input is the VDIC, only the Pre-processing Viewfinder task
  link can be enabled.

- Pre-processing Encode task: video frames are routed directly from
  the CSI and can be scaled, color-space converted, and rotated.
  Scaled output is limited to 1024x1024 resolution (see the usage
  sketch after this list). Output frames are routed to the capture
  device.

- Pre-processing Viewfinder task: this task can perform the same
  conversions as the pre-process encode task, but in addition can
  be used for hardware motion compensated deinterlacing. Frames can
  come either directly from the CSI or from the VDIC. Scaled output
  is limited to 1024x1024 resolution. Output frames are routed to
  the capture device.
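
As a usage illustration only (not part of this patch), here is a
minimal sketch of requesting a scaled output from the pre-process
encode subdev through the standard subdev pad API. The subdev node
path, the source pad index, and the media bus code are assumptions
for the sketch; real pad numbers come from the media graph, and the
driver's set_fmt handlers replace unsupported codes with a default
and align the requested window as needed.

#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/media-bus-format.h>
#include <linux/v4l2-subdev.h>

/* request a downscaled, progressive frame on the prpenc source pad */
int prpenc_set_scaled_output(const char *subdev_node)
{
	struct v4l2_subdev_format fmt;
	int fd, ret;

	fd = open(subdev_node, O_RDWR);
	if (fd < 0)
		return -1;

	memset(&fmt, 0, sizeof(fmt));
	fmt.pad = 1;			/* assumed source pad index */
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.format.width = 640;		/* scaled output stays <= 1024x1024 */
	fmt.format.height = 480;
	fmt.format.code = MEDIA_BUS_FMT_UYVY8_2X8;	/* assumed supported */
	fmt.format.field = V4L2_FIELD_NONE;

	ret = ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt);
	close(fd);
	return ret;
}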

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/Makefile          |    2 +
 drivers/staging/media/imx/imx-ic-common.c   |  113 +++
 drivers/staging/media/imx/imx-ic-prp.c      |  427 ++++++++++
 drivers/staging/media/imx/imx-ic-prpencvf.c | 1116 +++++++++++++++++++++++++++
 drivers/staging/media/imx/imx-ic.h          |   38 +
 5 files changed, 1696 insertions(+)
 create mode 100644 drivers/staging/media/imx/imx-ic-common.c
 create mode 100644 drivers/staging/media/imx/imx-ic-prp.c
 create mode 100644 drivers/staging/media/imx/imx-ic-prpencvf.c
 create mode 100644 drivers/staging/media/imx/imx-ic.h

diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
index 1f01520..878a126 100644
--- a/drivers/staging/media/imx/Makefile
+++ b/drivers/staging/media/imx/Makefile
@@ -1,9 +1,11 @@
 imx-media-objs := imx-media-dev.o imx-media-internal-sd.o imx-media-of.o
 imx-media-common-objs := imx-media-utils.o imx-media-fim.o
+imx-media-ic-objs := imx-ic-common.o imx-ic-prp.o imx-ic-prpencvf.o
 
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-capture.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
+obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-ic.o
 
 obj-$(CONFIG_VIDEO_IMX_CSI) += imx-media-csi.o
diff --git a/drivers/staging/media/imx/imx-ic-common.c b/drivers/staging/media/imx/imx-ic-common.c
new file mode 100644
index 0000000..cfdd490
--- /dev/null
+++ b/drivers/staging/media/imx/imx-ic-common.c
@@ -0,0 +1,113 @@
+/*
+ * V4L2 Image Converter Subdev for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2014-2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-subdev.h>
+#include "imx-media.h"
+#include "imx-ic.h"
+
+#define IC_TASK_PRP IC_NUM_TASKS
+#define IC_NUM_OPS  (IC_NUM_TASKS + 1)
+
+static struct imx_ic_ops *ic_ops[IC_NUM_OPS] = {
+	[IC_TASK_PRP]            = &imx_ic_prp_ops,
+	[IC_TASK_ENCODER]        = &imx_ic_prpencvf_ops,
+	[IC_TASK_VIEWFINDER]     = &imx_ic_prpencvf_ops,
+};
+
+static int imx_ic_probe(struct platform_device *pdev)
+{
+	struct imx_media_internal_sd_platformdata *pdata;
+	struct imx_ic_priv *priv;
+	int ret;
+
+	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, &priv->sd);
+	priv->dev = &pdev->dev;
+
+	/* get our ipu_id, grp_id and IC task id */
+	pdata = priv->dev->platform_data;
+	priv->ipu_id = pdata->ipu_id;
+	switch (pdata->grp_id) {
+	case IMX_MEDIA_GRP_ID_IC_PRP:
+		priv->task_id = IC_TASK_PRP;
+		break;
+	case IMX_MEDIA_GRP_ID_IC_PRPENC:
+		priv->task_id = IC_TASK_ENCODER;
+		break;
+	case IMX_MEDIA_GRP_ID_IC_PRPVF:
+		priv->task_id = IC_TASK_VIEWFINDER;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	v4l2_subdev_init(&priv->sd, ic_ops[priv->task_id]->subdev_ops);
+	v4l2_set_subdevdata(&priv->sd, priv);
+	priv->sd.internal_ops = ic_ops[priv->task_id]->internal_ops;
+	priv->sd.entity.ops = ic_ops[priv->task_id]->entity_ops;
+	priv->sd.entity.function = MEDIA_ENT_F_PROC_VIDEO_SCALER;
+	priv->sd.dev = &pdev->dev;
+	priv->sd.owner = THIS_MODULE;
+	priv->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
+	priv->sd.grp_id = pdata->grp_id;
+	strncpy(priv->sd.name, pdata->sd_name, sizeof(priv->sd.name));
+
+	ret = ic_ops[priv->task_id]->init(priv);
+	if (ret)
+		return ret;
+
+	ret = v4l2_async_register_subdev(&priv->sd);
+	if (ret)
+		ic_ops[priv->task_id]->remove(priv);
+
+	return ret;
+}
+
+static int imx_ic_remove(struct platform_device *pdev)
+{
+	struct v4l2_subdev *sd = platform_get_drvdata(pdev);
+	struct imx_ic_priv *priv = container_of(sd, struct imx_ic_priv, sd);
+
+	v4l2_info(sd, "Removing\n");
+
+	ic_ops[priv->task_id]->remove(priv);
+
+	v4l2_async_unregister_subdev(sd);
+	media_entity_cleanup(&sd->entity);
+
+	return 0;
+}
+
+static const struct platform_device_id imx_ic_ids[] = {
+	{ .name = "imx-ipuv3-ic" },
+	{ },
+};
+MODULE_DEVICE_TABLE(platform, imx_ic_ids);
+
+static struct platform_driver imx_ic_driver = {
+	.probe = imx_ic_probe,
+	.remove = imx_ic_remove,
+	.id_table = imx_ic_ids,
+	.driver = {
+		.name = "imx-ipuv3-ic",
+	},
+};
+module_platform_driver(imx_ic_driver);
+
+MODULE_DESCRIPTION("i.MX IC subdev driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:imx-ipuv3-ic");
diff --git a/drivers/staging/media/imx/imx-ic-prp.c b/drivers/staging/media/imx/imx-ic-prp.c
new file mode 100644
index 0000000..3683f7c
--- /dev/null
+++ b/drivers/staging/media/imx/imx-ic-prp.c
@@ -0,0 +1,427 @@
+/*
+ * V4L2 Capture IC Preprocess Subdev for Freescale i.MX5/6 SOC
+ *
+ * This subdevice handles capture of video frames from the CSI or VDIC,
+ * which are routed directly to the Image Converter preprocess tasks,
+ * for resizing, colorspace conversion, and rotation.
+ *
+ * Copyright (c) 2012-2017 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-subdev.h>
+#include <media/imx.h>
+#include "imx-media.h"
+#include "imx-ic.h"
+
+/*
+ * Min/Max supported widths and heights.
+ */
+#define MIN_W       176
+#define MIN_H       144
+#define MAX_W      4096
+#define MAX_H      4096
+#define W_ALIGN    4 /* multiple of 16 pixels */
+#define H_ALIGN    1 /* multiple of 2 lines */
+#define S_ALIGN    1 /* multiple of 2 */
+
+struct prp_priv {
+	struct imx_media_dev *md;
+	struct imx_ic_priv *ic_priv;
+
+	/* IPU units we require */
+	struct ipu_soc *ipu;
+
+	struct media_pad pad[PRP_NUM_PADS];
+
+	struct v4l2_subdev *src_sd;
+	struct v4l2_subdev *sink_sd_prpenc;
+	struct v4l2_subdev *sink_sd_prpvf;
+
+	/* the CSI id at link validate */
+	int csi_id;
+
+	/* the attached CSI at stream on */
+	struct v4l2_subdev *csi_sd;
+	/* the attached sensor at stream on */
+	struct imx_media_subdev *sensor;
+
+	struct v4l2_mbus_framefmt format_mbus[PRP_NUM_PADS];
+	const struct imx_media_pixfmt *cc[PRP_NUM_PADS];
+
+	bool stream_on; /* streaming is on */
+};
+
+static inline struct prp_priv *sd_to_priv(struct v4l2_subdev *sd)
+{
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+
+	return ic_priv->prp_priv;
+}
+
+static int prp_start(struct prp_priv *priv)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+
+	if (!priv->sensor) {
+		v4l2_err(&ic_priv->sd, "no sensor attached\n");
+		return -EINVAL;
+	}
+
+	priv->ipu = priv->md->ipu[ic_priv->ipu_id];
+
+	/* set IC to receive from CSI or VDI depending on source */
+	if (priv->src_sd->grp_id & IMX_MEDIA_GRP_ID_VDIC)
+		ipu_set_ic_src_mux(priv->ipu, 0, true);
+	else
+		ipu_set_ic_src_mux(priv->ipu, priv->csi_id, false);
+
+	return 0;
+}
+
+static void prp_stop(struct prp_priv *priv)
+{
+}
+
+static int prp_enum_mbus_code(struct v4l2_subdev *sd,
+			      struct v4l2_subdev_pad_config *cfg,
+			      struct v4l2_subdev_mbus_code_enum *code)
+{
+	if (code->pad >= PRP_NUM_PADS)
+		return -EINVAL;
+
+	return imx_media_enum_ipu_format(NULL, &code->code, code->index, true);
+}
+
+static struct v4l2_mbus_framefmt *
+__prp_get_fmt(struct prp_priv *priv, struct v4l2_subdev_pad_config *cfg,
+	      unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+
+	if (which == V4L2_SUBDEV_FORMAT_TRY)
+		return v4l2_subdev_get_try_format(&ic_priv->sd, cfg, pad);
+	else
+		return &priv->format_mbus[pad];
+}
+
+static int prp_get_fmt(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       struct v4l2_subdev_format *sdformat)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	struct v4l2_mbus_framefmt *fmt;
+
+	if (sdformat->pad >= PRP_NUM_PADS)
+		return -EINVAL;
+
+	fmt = __prp_get_fmt(priv, cfg, sdformat->pad, sdformat->which);
+	if (!fmt)
+		return -EINVAL;
+
+	sdformat->format = *fmt;
+
+	return 0;
+}
+
+static int prp_set_fmt(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       struct v4l2_subdev_format *sdformat)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	const struct imx_media_pixfmt *cc;
+	struct v4l2_mbus_framefmt *infmt;
+	u32 code;
+
+	if (sdformat->pad >= PRP_NUM_PADS)
+		return -EINVAL;
+
+	if (priv->stream_on)
+		return -EBUSY;
+
+	cc = imx_media_find_ipu_format(0, sdformat->format.code, true);
+	if (!cc) {
+		imx_media_enum_ipu_format(NULL, &code, 0, true);
+		cc = imx_media_find_ipu_format(0, code, true);
+		sdformat->format.code = cc->codes[0];
+	}
+
+	v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W,
+			      W_ALIGN, &sdformat->format.height,
+			      MIN_H, MAX_H, H_ALIGN, S_ALIGN);
+
+	/* Output pads mirror input pad */
+	if (sdformat->pad == PRP_SRC_PAD_PRPENC ||
+	    sdformat->pad == PRP_SRC_PAD_PRPVF) {
+		infmt = __prp_get_fmt(priv, cfg, PRP_SINK_PAD,
+				      sdformat->which);
+		sdformat->format = *infmt;
+	}
+
+	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
+		cfg->try_fmt = sdformat->format;
+	} else {
+		priv->format_mbus[sdformat->pad] = sdformat->format;
+		priv->cc[sdformat->pad] = cc;
+	}
+
+	return 0;
+}
+
+static int prp_link_setup(struct media_entity *entity,
+			  const struct media_pad *local,
+			  const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+	struct prp_priv *priv = ic_priv->prp_priv;
+	struct v4l2_subdev *remote_sd;
+
+	dev_dbg(ic_priv->dev, "link setup %s -> %s", remote->entity->name,
+		local->entity->name);
+
+	remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+
+	if (local->flags & MEDIA_PAD_FL_SINK) {
+		if (flags & MEDIA_LNK_FL_ENABLED) {
+			if (priv->src_sd)
+				return -EBUSY;
+			if (priv->sink_sd_prpenc && (remote_sd->grp_id &
+						     IMX_MEDIA_GRP_ID_VDIC))
+				return -EINVAL;
+			priv->src_sd = remote_sd;
+		} else {
+			priv->src_sd = NULL;
+		}
+
+		return 0;
+	}
+
+	/* this is a source pad */
+	if (flags & MEDIA_LNK_FL_ENABLED) {
+		switch (local->index) {
+		case PRP_SRC_PAD_PRPENC:
+			if (priv->sink_sd_prpenc)
+				return -EBUSY;
+			if (priv->src_sd && (priv->src_sd->grp_id &
+					     IMX_MEDIA_GRP_ID_VDIC))
+				return -EINVAL;
+			priv->sink_sd_prpenc = remote_sd;
+			break;
+		case PRP_SRC_PAD_PRPVF:
+			if (priv->sink_sd_prpvf)
+				return -EBUSY;
+			priv->sink_sd_prpvf = remote_sd;
+			break;
+		default:
+			return -EINVAL;
+		}
+	} else {
+		switch (local->index) {
+		case PRP_SRC_PAD_PRPENC:
+			priv->sink_sd_prpenc = NULL;
+			break;
+		case PRP_SRC_PAD_PRPVF:
+			priv->sink_sd_prpvf = NULL;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static int prp_link_validate(struct v4l2_subdev *sd,
+			     struct media_link *link,
+			     struct v4l2_subdev_format *source_fmt,
+			     struct v4l2_subdev_format *sink_fmt)
+{
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+	struct prp_priv *priv = ic_priv->prp_priv;
+	struct v4l2_of_endpoint *sensor_ep;
+	int ret;
+
+	ret = v4l2_subdev_link_validate_default(sd, link,
+						source_fmt, sink_fmt);
+	if (ret)
+		return ret;
+
+	/* the ->PRPENC link cannot be enabled if the source is the VDIC */
+	if (priv->sink_sd_prpenc && (priv->src_sd->grp_id &
+				     IMX_MEDIA_GRP_ID_VDIC))
+		return -EINVAL;
+
+	priv->sensor = __imx_media_find_sensor(priv->md, &ic_priv->sd.entity);
+	if (IS_ERR(priv->sensor)) {
+		v4l2_err(&ic_priv->sd, "no sensor attached\n");
+		ret = PTR_ERR(priv->sensor);
+		priv->sensor = NULL;
+		return ret;
+	}
+
+	sensor_ep = &priv->sensor->sensor_ep;
+
+	if (priv->src_sd->grp_id & IMX_MEDIA_GRP_ID_CSI) {
+		priv->csi_sd = priv->src_sd;
+	} else {
+		struct imx_media_subdev *csi =
+			imx_media_find_pipeline_subdev(
+				priv->md, &ic_priv->sd.entity,
+				IMX_MEDIA_GRP_ID_CSI);
+		if (IS_ERR(csi)) {
+			v4l2_err(&ic_priv->sd, "no CSI attached\n");
+			ret = PTR_ERR(csi);
+			return ret;
+		}
+
+		priv->csi_sd = csi->sd;
+	}
+
+	switch (priv->csi_sd->grp_id) {
+	case IMX_MEDIA_GRP_ID_CSI0:
+		priv->csi_id = 0;
+		break;
+	case IMX_MEDIA_GRP_ID_CSI1:
+		priv->csi_id = 1;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (sensor_ep->bus_type == V4L2_MBUS_CSI2) {
+		int vc_num = 0;
+		/* see NOTE in imx-csi.c */
+#if 0
+		vc_num = imx_media_find_mipi_csi2_channel(
+			priv->md, &ic_priv->sd.entity);
+		if (vc_num < 0)
+			return vc_num;
+#endif
+		/* only virtual channel 0 can be sent to IC */
+		if (vc_num != 0)
+			return -EINVAL;
+	} else {
+		/*
+		 * only 8-bit pixels can be sent to IC for parallel
+		 * busses
+		 */
+		if (sensor_ep->bus.parallel.bus_width >= 16)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int prp_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+	struct prp_priv *priv = ic_priv->prp_priv;
+	int ret = 0;
+
+	if (!priv->src_sd || (!priv->sink_sd_prpenc && !priv->sink_sd_prpvf))
+		return -EPIPE;
+
+	dev_dbg(ic_priv->dev, "stream %s\n", enable ? "ON" : "OFF");
+
+	if (enable && !priv->stream_on)
+		ret = prp_start(priv);
+	else if (!enable && priv->stream_on)
+		prp_stop(priv);
+
+	if (!ret)
+		priv->stream_on = enable;
+	return ret;
+}
+
+/*
+ * Initialize the pads and default mbus formats once this subdev is
+ * registered with the media device.
+ */
+static int prp_registered(struct v4l2_subdev *sd)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	int i, ret;
+	u32 code;
+
+	/* get media device */
+	priv->md = dev_get_drvdata(sd->v4l2_dev->dev);
+
+	for (i = 0; i < PRP_NUM_PADS; i++) {
+		priv->pad[i].flags = (i == PRP_SINK_PAD) ?
+			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+
+		/* set a default mbus format  */
+		imx_media_enum_ipu_format(NULL, &code, 0, true);
+		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
+					      640, 480, code, V4L2_FIELD_NONE,
+					      &priv->cc[i]);
+		if (ret)
+			return ret;
+	}
+
+	return media_entity_pads_init(&sd->entity, PRP_NUM_PADS, priv->pad);
+}
+
+static struct v4l2_subdev_pad_ops prp_pad_ops = {
+	.enum_mbus_code = prp_enum_mbus_code,
+	.get_fmt = prp_get_fmt,
+	.set_fmt = prp_set_fmt,
+	.link_validate = prp_link_validate,
+};
+
+static struct v4l2_subdev_video_ops prp_video_ops = {
+	.s_stream = prp_s_stream,
+};
+
+static struct media_entity_operations prp_entity_ops = {
+	.link_setup = prp_link_setup,
+	.link_validate = v4l2_subdev_link_validate,
+};
+
+static struct v4l2_subdev_ops prp_subdev_ops = {
+	.video = &prp_video_ops,
+	.pad = &prp_pad_ops,
+};
+
+static struct v4l2_subdev_internal_ops prp_internal_ops = {
+	.registered = prp_registered,
+};
+
+static int prp_init(struct imx_ic_priv *ic_priv)
+{
+	struct prp_priv *priv;
+
+	priv = devm_kzalloc(ic_priv->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	ic_priv->prp_priv = priv;
+	priv->ic_priv = ic_priv;
+
+	return 0;
+}
+
+static void prp_remove(struct imx_ic_priv *ic_priv)
+{
+}
+
+struct imx_ic_ops imx_ic_prp_ops = {
+	.subdev_ops = &prp_subdev_ops,
+	.internal_ops = &prp_internal_ops,
+	.entity_ops = &prp_entity_ops,
+	.init = prp_init,
+	.remove = prp_remove,
+};
diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
new file mode 100644
index 0000000..2be8845
--- /dev/null
+++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
@@ -0,0 +1,1116 @@
+/*
+ * V4L2 Capture IC Preprocess Subdev for Freescale i.MX5/6 SOC
+ *
+ * This subdevice handles capture of video frames from the CSI or VDIC,
+ * which are routed directly to the Image Converter preprocess tasks,
+ * for resizing, colorspace conversion, and rotation.
+ *
+ * Copyright (c) 2012-2017 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mc.h>
+#include <media/v4l2-subdev.h>
+#include <media/imx.h>
+#include "imx-media.h"
+#include "imx-ic.h"
+
+/*
+ * Min/Max supported widths and heights.
+ *
+ * We allow planar output, so the width at the source pad must be aligned
+ * to 16 pixels to meet the IDMAC alignment requirements for planar
+ * formats.
+ *
+ * TODO: move this into pad format negotiation: if the capture device has
+ * not requested a planar format, 8-pixel alignment at the source pad is
+ * sufficient.
+ */
+#define MIN_W_SINK  176
+#define MIN_H_SINK  144
+#define MAX_W_SINK 4096
+#define MAX_H_SINK 4096
+#define W_ALIGN_SINK  3 /* multiple of 8 pixels */
+#define H_ALIGN_SINK  1 /* multiple of 2 lines */
+
+#define MAX_W_SRC  1024
+#define MAX_H_SRC  1024
+#define W_ALIGN_SRC   4 /* multiple of 16 pixels */
+#define H_ALIGN_SRC   1 /* multiple of 2 lines */
+
+#define S_ALIGN       1 /* multiple of 2 */
+
+struct prp_priv {
+	struct imx_media_dev *md;
+	struct imx_ic_priv *ic_priv;
+
+	/* IPU units we require */
+	struct ipu_soc *ipu;
+	struct ipu_ic *ic;
+	struct ipuv3_channel *out_ch;
+	struct ipuv3_channel *rot_in_ch;
+	struct ipuv3_channel *rot_out_ch;
+
+	struct media_pad pad[PRPENCVF_NUM_PADS];
+
+	/* the video device at output pad */
+	struct imx_media_video_dev *vdev;
+
+	/* active vb2 buffers to send to video dev sink */
+	struct imx_media_buffer *active_vb2_buf[2];
+	struct imx_media_dma_buf underrun_buf;
+
+	int ipu_buf_num;  /* ipu double buffer index: 0-1 */
+
+	/* the sink for the captured frames */
+	struct media_entity *sink;
+	/* the source subdev */
+	struct v4l2_subdev *src_sd;
+
+	/* the attached CSI at stream on */
+	struct v4l2_subdev *csi_sd;
+
+	struct v4l2_mbus_framefmt format_mbus[PRPENCVF_NUM_PADS];
+	const struct imx_media_pixfmt *cc[PRPENCVF_NUM_PADS];
+
+	struct imx_media_dma_buf rot_buf[2];
+
+	/* controls */
+	struct v4l2_ctrl_handler ctrl_hdlr;
+	int  rotation; /* degrees */
+	bool hflip;
+	bool vflip;
+
+	/* derived from rotation, hflip, vflip controls */
+	enum ipu_rotate_mode rot_mode;
+
+	spinlock_t irqlock; /* protect eof_irq handler */
+
+	struct timer_list eof_timeout_timer;
+	int eof_irq;
+	int nfb4eof_irq;
+
+	bool stream_on; /* streaming is on */
+	bool last_eof;  /* waiting for last EOF at stream off */
+	struct completion last_eof_comp;
+};
+
+static const struct prp_channels {
+	u32 out_ch;
+	u32 rot_in_ch;
+	u32 rot_out_ch;
+} prp_channel[] = {
+	[IC_TASK_ENCODER] = {
+		.out_ch = IPUV3_CHANNEL_IC_PRP_ENC_MEM,
+		.rot_in_ch = IPUV3_CHANNEL_MEM_ROT_ENC,
+		.rot_out_ch = IPUV3_CHANNEL_ROT_ENC_MEM,
+	},
+	[IC_TASK_VIEWFINDER] = {
+		.out_ch = IPUV3_CHANNEL_IC_PRP_VF_MEM,
+		.rot_in_ch = IPUV3_CHANNEL_MEM_ROT_VF,
+		.rot_out_ch = IPUV3_CHANNEL_ROT_VF_MEM,
+	},
+};
+
+static inline struct prp_priv *sd_to_priv(struct v4l2_subdev *sd)
+{
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+
+	return ic_priv->task_priv;
+}
+
+static void prp_put_ipu_resources(struct prp_priv *priv)
+{
+	if (!IS_ERR_OR_NULL(priv->ic))
+		ipu_ic_put(priv->ic);
+	priv->ic = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->out_ch))
+		ipu_idmac_put(priv->out_ch);
+	priv->out_ch = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->rot_in_ch))
+		ipu_idmac_put(priv->rot_in_ch);
+	priv->rot_in_ch = NULL;
+
+	if (!IS_ERR_OR_NULL(priv->rot_out_ch))
+		ipu_idmac_put(priv->rot_out_ch);
+	priv->rot_out_ch = NULL;
+}
+
+static int prp_get_ipu_resources(struct prp_priv *priv)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	int ret, task = ic_priv->task_id;
+
+	priv->ipu = priv->md->ipu[ic_priv->ipu_id];
+
+	priv->ic = ipu_ic_get(priv->ipu, task);
+	if (IS_ERR(priv->ic)) {
+		v4l2_err(&ic_priv->sd, "failed to get IC\n");
+		ret = PTR_ERR(priv->ic);
+		goto out;
+	}
+
+	priv->out_ch = ipu_idmac_get(priv->ipu,
+				     prp_channel[task].out_ch);
+	if (IS_ERR(priv->out_ch)) {
+		v4l2_err(&ic_priv->sd, "could not get IDMAC channel %u\n",
+			 prp_channel[task].out_ch);
+		ret = PTR_ERR(priv->out_ch);
+		goto out;
+	}
+
+	priv->rot_in_ch = ipu_idmac_get(priv->ipu,
+					prp_channel[task].rot_in_ch);
+	if (IS_ERR(priv->rot_in_ch)) {
+		v4l2_err(&ic_priv->sd, "could not get IDMAC channel %u\n",
+			 prp_channel[task].rot_in_ch);
+		ret = PTR_ERR(priv->rot_in_ch);
+		goto out;
+	}
+
+	priv->rot_out_ch = ipu_idmac_get(priv->ipu,
+					 prp_channel[task].rot_out_ch);
+	if (IS_ERR(priv->rot_out_ch)) {
+		v4l2_err(&ic_priv->sd, "could not get IDMAC channel %u\n",
+			 prp_channel[task].rot_out_ch);
+		ret = PTR_ERR(priv->rot_out_ch);
+		goto out;
+	}
+
+	return 0;
+out:
+	prp_put_ipu_resources(priv);
+	return ret;
+}
+
+static void prp_vb2_buf_done(struct prp_priv *priv, struct ipuv3_channel *ch)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct imx_media_buffer *done, *next;
+	struct vb2_buffer *vb;
+	dma_addr_t phys;
+
+	done = priv->active_vb2_buf[priv->ipu_buf_num];
+	if (done) {
+		vb = &done->vbuf.vb2_buf;
+		vb->timestamp = ktime_get_ns();
+		vb2_buffer_done(vb, VB2_BUF_STATE_DONE);
+	}
+
+	/* get next queued buffer */
+	next = imx_media_capture_device_next_buf(vdev);
+	if (next) {
+		phys = vb2_dma_contig_plane_dma_addr(&next->vbuf.vb2_buf, 0);
+		priv->active_vb2_buf[priv->ipu_buf_num] = next;
+	} else {
+		phys = priv->underrun_buf.phys;
+		priv->active_vb2_buf[priv->ipu_buf_num] = NULL;
+	}
+
+	if (ipu_idmac_buffer_is_ready(ch, priv->ipu_buf_num))
+		ipu_idmac_clear_buffer(ch, priv->ipu_buf_num);
+
+	ipu_cpmem_set_buffer(ch, priv->ipu_buf_num, phys);
+}
+
+static irqreturn_t prp_eof_interrupt(int irq, void *dev_id)
+{
+	struct prp_priv *priv = dev_id;
+	struct ipuv3_channel *channel;
+
+	spin_lock(&priv->irqlock);
+
+	if (priv->last_eof) {
+		complete(&priv->last_eof_comp);
+		priv->last_eof = false;
+		goto unlock;
+	}
+
+	/* inform CSI of this EOF so it can monitor frame intervals */
+	v4l2_subdev_call(priv->csi_sd, core, interrupt_service_routine,
+			 0, NULL);
+
+	channel = (ipu_rot_mode_is_irt(priv->rot_mode)) ?
+		priv->rot_out_ch : priv->out_ch;
+
+	prp_vb2_buf_done(priv, channel);
+
+	/* select new IPU buf */
+	ipu_idmac_select_buffer(channel, priv->ipu_buf_num);
+	/* toggle IPU double-buffer index */
+	priv->ipu_buf_num ^= 1;
+
+	/* bump the EOF timeout timer */
+	mod_timer(&priv->eof_timeout_timer,
+		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+
+unlock:
+	spin_unlock(&priv->irqlock);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t prp_nfb4eof_interrupt(int irq, void *dev_id)
+{
+	struct prp_priv *priv = dev_id;
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	static const struct v4l2_event ev = {
+		.type = V4L2_EVENT_IMX_NFB4EOF,
+	};
+
+	v4l2_err(&ic_priv->sd, "NFB4EOF\n");
+
+	v4l2_subdev_notify_event(&ic_priv->sd, &ev);
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * EOF timeout timer function.
+ */
+static void prp_eof_timeout(unsigned long data)
+{
+	struct prp_priv *priv = (struct prp_priv *)data;
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	static const struct v4l2_event ev = {
+		.type = V4L2_EVENT_FRAME_TIMEOUT,
+	};
+
+	v4l2_err(&ic_priv->sd, "EOF timeout\n");
+
+	v4l2_subdev_notify_event(&ic_priv->sd, &ev);
+}
+
+static void prp_setup_vb2_buf(struct prp_priv *priv, dma_addr_t *phys)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct imx_media_buffer *buf;
+	int i;
+
+	for (i = 0; i < 2; i++) {
+		buf = imx_media_capture_device_next_buf(vdev);
+		priv->active_vb2_buf[i] = buf;
+		phys[i] = vb2_dma_contig_plane_dma_addr(&buf->vbuf.vb2_buf, 0);
+	}
+}
+
+static void prp_unsetup_vb2_buf(struct prp_priv *priv)
+{
+	struct imx_media_buffer *buf;
+	int i;
+
+	/* return any remaining active frames with error */
+	for (i = 0; i < 2; i++) {
+		buf = priv->active_vb2_buf[i];
+		if (buf) {
+			struct vb2_buffer *vb = &buf->vbuf.vb2_buf;
+
+			vb->timestamp = ktime_get_ns();
+			vb2_buffer_done(vb, VB2_BUF_STATE_ERROR);
+		}
+	}
+}
+
+static int prp_setup_channel(struct prp_priv *priv,
+			     struct ipuv3_channel *channel,
+			     enum ipu_rotate_mode rot_mode,
+			     dma_addr_t addr0, dma_addr_t addr1,
+			     bool rot_swap_width_height)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	const struct imx_media_pixfmt *outcc;
+	struct v4l2_mbus_framefmt *infmt;
+	unsigned int burst_size;
+	struct ipu_image image;
+	int ret;
+
+	infmt = &priv->format_mbus[PRPENCVF_SINK_PAD];
+	outcc = vdev->cc;
+
+	ipu_cpmem_zero(channel);
+
+	memset(&image, 0, sizeof(image));
+	image.pix = vdev->fmt.fmt.pix;
+	image.rect.width = image.pix.width;
+	image.rect.height = image.pix.height;
+
+	if (rot_swap_width_height) {
+		swap(image.pix.width, image.pix.height);
+		swap(image.rect.width, image.rect.height);
+		/* recalc stride using swapped width */
+		image.pix.bytesperline = outcc->planar ?
+			image.pix.width :
+			(image.pix.width * outcc->bpp) >> 3;
+	}
+
+	image.phys0 = addr0;
+	image.phys1 = addr1;
+
+	ret = ipu_cpmem_set_image(channel, &image);
+	if (ret)
+		return ret;
+
+	if (channel == priv->rot_in_ch ||
+	    channel == priv->rot_out_ch) {
+		burst_size = 8;
+		ipu_cpmem_set_block_mode(channel);
+	} else {
+		burst_size = (image.pix.width & 0xf) ? 8 : 16;
+	}
+
+	ipu_cpmem_set_burstsize(channel, burst_size);
+
+	if (rot_mode)
+		ipu_cpmem_set_rotation(channel, rot_mode);
+
+	if (image.pix.field == V4L2_FIELD_NONE &&
+	    V4L2_FIELD_HAS_BOTH(infmt->field) &&
+	    channel == priv->out_ch)
+		ipu_cpmem_interlaced_scan(channel, image.pix.bytesperline);
+
+	ret = ipu_ic_task_idma_init(priv->ic, channel,
+				    image.pix.width, image.pix.height,
+				    burst_size, rot_mode);
+	if (ret)
+		return ret;
+
+	ipu_cpmem_set_axi_id(channel, 1);
+
+	ipu_idmac_set_double_buffer(channel, true);
+
+	return 0;
+}
+
+static int prp_setup_rotation(struct prp_priv *priv)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	const struct imx_media_pixfmt *outcc, *incc;
+	struct v4l2_mbus_framefmt *infmt;
+	struct v4l2_pix_format *outfmt;
+	dma_addr_t phys[2];
+	int ret;
+
+	infmt = &priv->format_mbus[PRPENCVF_SINK_PAD];
+	outfmt = &vdev->fmt.fmt.pix;
+	incc = priv->cc[PRPENCVF_SINK_PAD];
+	outcc = vdev->cc;
+
+	ret = imx_media_alloc_dma_buf(priv->md, &priv->rot_buf[0],
+				      outfmt->sizeimage);
+	if (ret) {
+		v4l2_err(&ic_priv->sd, "failed to alloc rot_buf[0], %d\n", ret);
+		return ret;
+	}
+	ret = imx_media_alloc_dma_buf(priv->md, &priv->rot_buf[1],
+				      outfmt->sizeimage);
+	if (ret) {
+		v4l2_err(&ic_priv->sd, "failed to alloc rot_buf[1], %d\n", ret);
+		goto free_rot0;
+	}
+
+	ret = ipu_ic_task_init(priv->ic,
+			       infmt->width, infmt->height,
+			       outfmt->height, outfmt->width,
+			       incc->cs, outcc->cs);
+	if (ret) {
+		v4l2_err(&ic_priv->sd, "ipu_ic_task_init failed, %d\n", ret);
+		goto free_rot1;
+	}
+
+	/* init the IC-PRP-->MEM IDMAC channel */
+	ret = prp_setup_channel(priv, priv->out_ch, IPU_ROTATE_NONE,
+				priv->rot_buf[0].phys, priv->rot_buf[1].phys,
+				true);
+	if (ret) {
+		v4l2_err(&ic_priv->sd,
+			 "prp_setup_channel(out_ch) failed, %d\n", ret);
+		goto free_rot1;
+	}
+
+	/* init the MEM-->IC-PRP ROT IDMAC channel */
+	ret = prp_setup_channel(priv, priv->rot_in_ch, priv->rot_mode,
+				priv->rot_buf[0].phys, priv->rot_buf[1].phys,
+				true);
+	if (ret) {
+		v4l2_err(&ic_priv->sd,
+			 "prp_setup_channel(rot_in_ch) failed, %d\n", ret);
+		goto free_rot1;
+	}
+
+	prp_setup_vb2_buf(priv, phys);
+
+	/* init the destination IC-PRP ROT-->MEM IDMAC channel */
+	ret = prp_setup_channel(priv, priv->rot_out_ch, IPU_ROTATE_NONE,
+				phys[0], phys[1],
+				false);
+	if (ret) {
+		v4l2_err(&ic_priv->sd,
+			 "prp_setup_channel(rot_out_ch) failed, %d\n", ret);
+		goto free_rot1;
+	}
+
+	/* now link IC-PRP-->MEM to MEM-->IC-PRP ROT */
+	ipu_idmac_link(priv->out_ch, priv->rot_in_ch);
+
+	/* enable the IC */
+	ipu_ic_enable(priv->ic);
+
+	/* set buffers ready */
+	ipu_idmac_select_buffer(priv->out_ch, 0);
+	ipu_idmac_select_buffer(priv->out_ch, 1);
+	ipu_idmac_select_buffer(priv->rot_out_ch, 0);
+	ipu_idmac_select_buffer(priv->rot_out_ch, 1);
+
+	/* enable the channels */
+	ipu_idmac_enable_channel(priv->out_ch);
+	ipu_idmac_enable_channel(priv->rot_in_ch);
+	ipu_idmac_enable_channel(priv->rot_out_ch);
+
+	/* and finally enable the IC PRP task */
+	ipu_ic_task_enable(priv->ic);
+
+	return 0;
+
+free_rot1:
+	imx_media_free_dma_buf(priv->md, &priv->rot_buf[1]);
+free_rot0:
+	imx_media_free_dma_buf(priv->md, &priv->rot_buf[0]);
+	return ret;
+}
+
+static void prp_unsetup_rotation(struct prp_priv *priv)
+{
+	ipu_ic_task_disable(priv->ic);
+
+	ipu_idmac_disable_channel(priv->out_ch);
+	ipu_idmac_disable_channel(priv->rot_in_ch);
+	ipu_idmac_disable_channel(priv->rot_out_ch);
+
+	ipu_idmac_unlink(priv->out_ch, priv->rot_in_ch);
+
+	ipu_ic_disable(priv->ic);
+
+	imx_media_free_dma_buf(priv->md, &priv->rot_buf[0]);
+	imx_media_free_dma_buf(priv->md, &priv->rot_buf[1]);
+}
+
+static int prp_setup_norotation(struct prp_priv *priv)
+{
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	const struct imx_media_pixfmt *outcc, *incc;
+	struct v4l2_mbus_framefmt *infmt;
+	struct v4l2_pix_format *outfmt;
+	dma_addr_t phys[2];
+	int ret;
+
+	infmt = &priv->format_mbus[PRPENCVF_SINK_PAD];
+	outfmt = &vdev->fmt.fmt.pix;
+	incc = priv->cc[PRPENCVF_SINK_PAD];
+	outcc = vdev->cc;
+
+	ret = ipu_ic_task_init(priv->ic,
+			       infmt->width, infmt->height,
+			       outfmt->width, outfmt->height,
+			       incc->cs, outcc->cs);
+	if (ret) {
+		v4l2_err(&ic_priv->sd, "ipu_ic_task_init failed, %d\n", ret);
+		return ret;
+	}
+
+	prp_setup_vb2_buf(priv, phys);
+
+	/* init the IC PRP-->MEM IDMAC channel */
+	ret = prp_setup_channel(priv, priv->out_ch, priv->rot_mode,
+				phys[0], phys[1], false);
+	if (ret) {
+		v4l2_err(&ic_priv->sd,
+			 "prp_setup_channel(out_ch) failed, %d\n", ret);
+		return ret;
+	}
+
+	ipu_cpmem_dump(priv->out_ch);
+	ipu_ic_dump(priv->ic);
+	ipu_dump(priv->ipu);
+
+	ipu_ic_enable(priv->ic);
+
+	/* set buffers ready */
+	ipu_idmac_select_buffer(priv->out_ch, 0);
+	ipu_idmac_select_buffer(priv->out_ch, 1);
+
+	/* enable the channels */
+	ipu_idmac_enable_channel(priv->out_ch);
+
+	/* enable the IC task */
+	ipu_ic_task_enable(priv->ic);
+
+	return 0;
+}
+
+static void prp_unsetup_norotation(struct prp_priv *priv)
+{
+	ipu_ic_task_disable(priv->ic);
+	ipu_idmac_disable_channel(priv->out_ch);
+	ipu_ic_disable(priv->ic);
+}
+
+static void prp_unsetup(struct prp_priv *priv)
+{
+	if (ipu_rot_mode_is_irt(priv->rot_mode))
+		prp_unsetup_rotation(priv);
+	else
+		prp_unsetup_norotation(priv);
+
+	prp_unsetup_vb2_buf(priv);
+}
+
+static int prp_start(struct prp_priv *priv)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct v4l2_pix_format *outfmt;
+	int ret;
+
+	ret = prp_get_ipu_resources(priv);
+	if (ret)
+		return ret;
+
+	outfmt = &vdev->fmt.fmt.pix;
+
+	ret = imx_media_alloc_dma_buf(priv->md, &priv->underrun_buf,
+				      outfmt->sizeimage);
+	if (ret)
+		goto out_put_ipu;
+
+	priv->ipu_buf_num = 0;
+
+	/* init the last-EOF completion */
+	init_completion(&priv->last_eof_comp);
+	priv->last_eof = false;
+
+	if (ipu_rot_mode_is_irt(priv->rot_mode))
+		ret = prp_setup_rotation(priv);
+	else
+		ret = prp_setup_norotation(priv);
+	if (ret)
+		goto out_free_underrun;
+
+	priv->nfb4eof_irq = ipu_idmac_channel_irq(priv->ipu,
+						  priv->out_ch,
+						  IPU_IRQ_NFB4EOF);
+	ret = devm_request_irq(ic_priv->dev, priv->nfb4eof_irq,
+			       prp_nfb4eof_interrupt, 0,
+			       "imx-ic-prp-nfb4eof", priv);
+	if (ret) {
+		v4l2_err(&ic_priv->sd,
+			 "Error registering NFB4EOF irq: %d\n", ret);
+		goto out_unsetup;
+	}
+
+	if (ipu_rot_mode_is_irt(priv->rot_mode))
+		priv->eof_irq = ipu_idmac_channel_irq(
+			priv->ipu, priv->rot_out_ch, IPU_IRQ_EOF);
+	else
+		priv->eof_irq = ipu_idmac_channel_irq(
+			priv->ipu, priv->out_ch, IPU_IRQ_EOF);
+
+	ret = devm_request_irq(ic_priv->dev, priv->eof_irq,
+			       prp_eof_interrupt, 0,
+			       "imx-ic-prp-eof", priv);
+	if (ret) {
+		v4l2_err(&ic_priv->sd,
+			 "Error registering EOF irq: %d\n", ret);
+		goto out_free_nfb4eof_irq;
+	}
+
+	/* start the EOF timeout timer */
+	mod_timer(&priv->eof_timeout_timer,
+		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+
+	return 0;
+
+out_free_nfb4eof_irq:
+	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+out_unsetup:
+	prp_unsetup(priv);
+out_free_underrun:
+	imx_media_free_dma_buf(priv->md, &priv->underrun_buf);
+out_put_ipu:
+	prp_put_ipu_resources(priv);
+	return ret;
+}
+
+static void prp_stop(struct prp_priv *priv)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	unsigned long flags;
+	int ret;
+
+	/* mark next EOF interrupt as the last before stream off */
+	spin_lock_irqsave(&priv->irqlock, flags);
+	priv->last_eof = true;
+	spin_unlock_irqrestore(&priv->irqlock, flags);
+
+	/*
+	 * and then wait for interrupt handler to mark completion.
+	 */
+	ret = wait_for_completion_timeout(
+		&priv->last_eof_comp,
+		msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+	if (ret == 0)
+		v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n");
+
+	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+
+	prp_unsetup(priv);
+
+	imx_media_free_dma_buf(priv->md, &priv->underrun_buf);
+
+	/* cancel the EOF timeout timer */
+	del_timer_sync(&priv->eof_timeout_timer);
+
+	prp_put_ipu_resources(priv);
+}
+
+static int prp_enum_mbus_code(struct v4l2_subdev *sd,
+			      struct v4l2_subdev_pad_config *cfg,
+			      struct v4l2_subdev_mbus_code_enum *code)
+{
+	if (code->pad >= PRPENCVF_NUM_PADS)
+		return -EINVAL;
+
+	if (code->pad == PRPENCVF_SRC_PAD)
+		return imx_media_enum_format(NULL, &code->code, code->index,
+					     true, false);
+
+	return imx_media_enum_ipu_format(NULL, &code->code, code->index, true);
+}
+
+static struct v4l2_mbus_framefmt *
+__prp_get_fmt(struct prp_priv *priv, struct v4l2_subdev_pad_config *cfg,
+	      unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+
+	if (which == V4L2_SUBDEV_FORMAT_TRY)
+		return v4l2_subdev_get_try_format(&ic_priv->sd, cfg, pad);
+	else
+		return &priv->format_mbus[pad];
+}
+
+static int prp_get_fmt(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       struct v4l2_subdev_format *sdformat)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	struct v4l2_mbus_framefmt *fmt;
+
+	if (sdformat->pad >= PRPENCVF_NUM_PADS)
+		return -EINVAL;
+
+	fmt = __prp_get_fmt(priv, cfg, sdformat->pad, sdformat->which);
+	if (!fmt)
+		return -EINVAL;
+
+	sdformat->format = *fmt;
+
+	return 0;
+}
+
+static int prp_set_fmt(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       struct v4l2_subdev_format *sdformat)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	const struct imx_media_pixfmt *cc;
+	struct v4l2_mbus_framefmt *infmt;
+	u32 code;
+
+	if (sdformat->pad >= PRPENCVF_NUM_PADS)
+		return -EINVAL;
+
+	if (priv->stream_on)
+		return -EBUSY;
+
+	if (sdformat->pad == PRPENCVF_SRC_PAD) {
+		infmt = __prp_get_fmt(priv, cfg, PRPENCVF_SINK_PAD,
+				      sdformat->which);
+
+		cc = imx_media_find_format(0, sdformat->format.code,
+					   true, false);
+		if (!cc) {
+			imx_media_enum_format(NULL, &code, 0, true, false);
+			cc = imx_media_find_format(0, code, true, false);
+			sdformat->format.code = cc->codes[0];
+		}
+
+		if (sdformat->format.field != V4L2_FIELD_NONE)
+			sdformat->format.field = infmt->field;
+
+		/* IC resizer cannot downsize more than 4:1 */
+		if (ipu_rot_mode_is_irt(priv->rot_mode))
+			v4l_bound_align_image(&sdformat->format.width,
+					      infmt->height / 4, MAX_H_SRC,
+					      H_ALIGN_SRC,
+					      &sdformat->format.height,
+					      infmt->width / 4, MAX_W_SRC,
+					      W_ALIGN_SRC, S_ALIGN);
+		else
+			v4l_bound_align_image(&sdformat->format.width,
+					      infmt->width / 4, MAX_W_SRC,
+					      W_ALIGN_SRC,
+					      &sdformat->format.height,
+					      infmt->height / 4, MAX_H_SRC,
+					      H_ALIGN_SRC, S_ALIGN);
+	} else {
+		cc = imx_media_find_ipu_format(0, sdformat->format.code,
+					       true);
+		if (!cc) {
+			imx_media_enum_ipu_format(NULL, &code, 0, true);
+			cc = imx_media_find_ipu_format(0, code, true);
+			sdformat->format.code = cc->codes[0];
+		}
+
+		v4l_bound_align_image(&sdformat->format.width,
+				      MIN_W_SINK, MAX_W_SINK, W_ALIGN_SINK,
+				      &sdformat->format.height,
+				      MIN_H_SINK, MAX_H_SINK, H_ALIGN_SINK,
+				      S_ALIGN);
+	}
+
+	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
+		cfg->try_fmt = sdformat->format;
+	} else {
+		priv->format_mbus[sdformat->pad] = sdformat->format;
+		priv->cc[sdformat->pad] = cc;
+	}
+
+	return 0;
+}
+
+static int prp_link_setup(struct media_entity *entity,
+			  const struct media_pad *local,
+			  const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+	struct prp_priv *priv = ic_priv->task_priv;
+	struct imx_media_video_dev *vdev = priv->vdev;
+	struct v4l2_subdev *remote_sd;
+	int ret;
+
+	dev_dbg(ic_priv->dev, "link setup %s -> %s\n", remote->entity->name,
+		local->entity->name);
+
+	if (local->flags & MEDIA_PAD_FL_SINK) {
+		if (!is_media_entity_v4l2_subdev(remote->entity))
+			return -EINVAL;
+
+		remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+
+		if (flags & MEDIA_LNK_FL_ENABLED) {
+			if (priv->src_sd)
+				return -EBUSY;
+			priv->src_sd = remote_sd;
+		} else {
+			priv->src_sd = NULL;
+		}
+
+		return 0;
+	}
+
+	/* this is the source pad */
+
+	/* the remote must be the device node */
+	if (!is_media_entity_v4l2_video_device(remote->entity))
+		return -EINVAL;
+
+	if (flags & MEDIA_LNK_FL_ENABLED) {
+		if (priv->sink)
+			return -EBUSY;
+	} else {
+		/* reset video device controls */
+		v4l2_ctrl_handler_free(vdev->vfd->ctrl_handler);
+		v4l2_ctrl_handler_init(vdev->vfd->ctrl_handler, 0);
+
+		priv->sink = NULL;
+		return 0;
+	}
+
+	/* reset video device controls to refresh from subdevs */
+	v4l2_ctrl_handler_free(vdev->vfd->ctrl_handler);
+	v4l2_ctrl_handler_init(vdev->vfd->ctrl_handler, 0);
+
+	ret = __v4l2_pipeline_inherit_controls(vdev->vfd,
+					       &ic_priv->sd.entity);
+	if (ret)
+		return ret;
+
+	priv->sink = remote->entity;
+
+	return 0;
+}
+
+static int prp_link_validate(struct v4l2_subdev *sd,
+			     struct media_link *link,
+			     struct v4l2_subdev_format *source_fmt,
+			     struct v4l2_subdev_format *sink_fmt)
+{
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+	struct prp_priv *priv = ic_priv->task_priv;
+	struct imx_media_subdev *csi;
+	int ret;
+
+	ret = v4l2_subdev_link_validate_default(sd, link,
+						source_fmt, sink_fmt);
+	if (ret)
+		return ret;
+
+	csi = imx_media_find_pipeline_subdev(priv->md, &ic_priv->sd.entity,
+					     IMX_MEDIA_GRP_ID_CSI);
+	if (IS_ERR(csi)) {
+		v4l2_err(&ic_priv->sd, "no CSI attached\n");
+		ret = PTR_ERR(csi);
+		return ret;
+	}
+
+	priv->csi_sd = csi->sd;
+
+	return 0;
+}
+
+static int prp_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct prp_priv *priv = container_of(ctrl->handler,
+					       struct prp_priv, ctrl_hdlr);
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	enum ipu_rotate_mode rot_mode;
+	bool hflip, vflip;
+	int rotation, ret;
+
+	rotation = priv->rotation;
+	hflip = priv->hflip;
+	vflip = priv->vflip;
+
+	switch (ctrl->id) {
+	case V4L2_CID_HFLIP:
+		hflip = (ctrl->val == 1);
+		break;
+	case V4L2_CID_VFLIP:
+		vflip = (ctrl->val == 1);
+		break;
+	case V4L2_CID_ROTATE:
+		rotation = ctrl->val;
+		break;
+	default:
+		v4l2_err(&ic_priv->sd, "Invalid control\n");
+		return -EINVAL;
+	}
+
+	ret = ipu_degrees_to_rot_mode(&rot_mode, rotation, hflip, vflip);
+	if (ret)
+		return ret;
+
+	if (rot_mode != priv->rot_mode) {
+		/* can't change rotation mid-streaming */
+		if (priv->stream_on)
+			return -EBUSY;
+
+		priv->rot_mode = rot_mode;
+		priv->rotation = rotation;
+		priv->hflip = hflip;
+		priv->vflip = vflip;
+	}
+
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops prp_ctrl_ops = {
+	.s_ctrl = prp_s_ctrl,
+};
+
+static int prp_init_controls(struct prp_priv *priv)
+{
+	struct imx_ic_priv *ic_priv = priv->ic_priv;
+	struct v4l2_ctrl_handler *hdlr = &priv->ctrl_hdlr;
+	int ret;
+
+	v4l2_ctrl_handler_init(hdlr, 3);
+
+	v4l2_ctrl_new_std(hdlr, &prp_ctrl_ops, V4L2_CID_HFLIP,
+			  0, 1, 1, 0);
+	v4l2_ctrl_new_std(hdlr, &prp_ctrl_ops, V4L2_CID_VFLIP,
+			  0, 1, 1, 0);
+	v4l2_ctrl_new_std(hdlr, &prp_ctrl_ops, V4L2_CID_ROTATE,
+			  0, 270, 90, 0);
+
+	ic_priv->sd.ctrl_handler = hdlr;
+
+	if (hdlr->error) {
+		ret = hdlr->error;
+		goto out_free;
+	}
+
+	v4l2_ctrl_handler_setup(hdlr);
+	return 0;
+
+out_free:
+	v4l2_ctrl_handler_free(hdlr);
+	return ret;
+}
+
+static int prp_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct imx_ic_priv *ic_priv = v4l2_get_subdevdata(sd);
+	struct prp_priv *priv = ic_priv->task_priv;
+	int ret = 0;
+
+	if (!priv->src_sd || !priv->sink)
+		return -EPIPE;
+
+	dev_dbg(ic_priv->dev, "stream %s\n", enable ? "ON" : "OFF");
+
+	if (enable && !priv->stream_on)
+		ret = prp_start(priv);
+	else if (!enable && priv->stream_on)
+		prp_stop(priv);
+
+	if (!ret)
+		priv->stream_on = enable;
+	return ret;
+}
+
+/*
+ * Initialize the pads and default formats, and register the capture video
+ * device, once this subdev is registered with the media device.
+ */
+static int prp_registered(struct v4l2_subdev *sd)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	int i, ret;
+	u32 code;
+
+	/* get media device */
+	priv->md = dev_get_drvdata(sd->v4l2_dev->dev);
+
+	for (i = 0; i < PRPENCVF_NUM_PADS; i++) {
+		if (i == PRPENCVF_SINK_PAD) {
+			priv->pad[i].flags = MEDIA_PAD_FL_SINK;
+			imx_media_enum_ipu_format(NULL, &code, 0, true);
+		} else {
+			priv->pad[i].flags = MEDIA_PAD_FL_SOURCE;
+			code = 0;
+		}
+
+		/* set a default mbus format  */
+		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
+					      640, 480, code, V4L2_FIELD_NONE,
+					      &priv->cc[i]);
+		if (ret)
+			return ret;
+	}
+
+	ret = media_entity_pads_init(&sd->entity, PRPENCVF_NUM_PADS,
+				     priv->pad);
+	if (ret)
+		return ret;
+
+	ret = imx_media_capture_device_register(priv->vdev);
+	if (ret)
+		return ret;
+
+	ret = prp_init_controls(priv);
+	if (ret)
+		imx_media_capture_device_unregister(priv->vdev);
+
+	return ret;
+}
+
+static void prp_unregistered(struct v4l2_subdev *sd)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+
+	imx_media_capture_device_unregister(priv->vdev);
+	v4l2_ctrl_handler_free(&priv->ctrl_hdlr);
+}
+
+static struct v4l2_subdev_pad_ops prp_pad_ops = {
+	.enum_mbus_code = prp_enum_mbus_code,
+	.get_fmt = prp_get_fmt,
+	.set_fmt = prp_set_fmt,
+	.link_validate = prp_link_validate,
+};
+
+static struct v4l2_subdev_video_ops prp_video_ops = {
+	.s_stream = prp_s_stream,
+};
+
+static struct media_entity_operations prp_entity_ops = {
+	.link_setup = prp_link_setup,
+	.link_validate = v4l2_subdev_link_validate,
+};
+
+static struct v4l2_subdev_ops prp_subdev_ops = {
+	.video = &prp_video_ops,
+	.pad = &prp_pad_ops,
+};
+
+static struct v4l2_subdev_internal_ops prp_internal_ops = {
+	.registered = prp_registered,
+	.unregistered = prp_unregistered,
+};
+
+static int prp_init(struct imx_ic_priv *ic_priv)
+{
+	struct prp_priv *priv;
+
+	priv = devm_kzalloc(ic_priv->dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	ic_priv->task_priv = priv;
+	priv->ic_priv = ic_priv;
+
+	spin_lock_init(&priv->irqlock);
+	init_timer(&priv->eof_timeout_timer);
+	priv->eof_timeout_timer.data = (unsigned long)priv;
+	priv->eof_timeout_timer.function = prp_eof_timeout;
+
+	priv->vdev = imx_media_capture_device_init(&ic_priv->sd,
+						   PRPENCVF_SRC_PAD);
+	if (IS_ERR(priv->vdev))
+		return PTR_ERR(priv->vdev);
+
+	return 0;
+}
+
+static void prp_remove(struct imx_ic_priv *ic_priv)
+{
+	struct prp_priv *priv = ic_priv->task_priv;
+
+	imx_media_capture_device_remove(priv->vdev);
+}
+
+struct imx_ic_ops imx_ic_prpencvf_ops = {
+	.subdev_ops = &prp_subdev_ops,
+	.internal_ops = &prp_internal_ops,
+	.entity_ops = &prp_entity_ops,
+	.init = prp_init,
+	.remove = prp_remove,
+};
diff --git a/drivers/staging/media/imx/imx-ic.h b/drivers/staging/media/imx/imx-ic.h
new file mode 100644
index 0000000..5535111
--- /dev/null
+++ b/drivers/staging/media/imx/imx-ic.h
@@ -0,0 +1,38 @@
+/*
+ * V4L2 Image Converter Subdev for Freescale i.MX5/6 SOC
+ *
+ * Copyright (c) 2016 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#ifndef _IMX_IC_H
+#define _IMX_IC_H
+
+#include <media/v4l2-subdev.h>
+
+struct imx_ic_priv {
+	struct device *dev;
+	struct v4l2_subdev sd;
+	int    ipu_id;
+	int    task_id;
+	void   *prp_priv;
+	void   *task_priv;
+};
+
+struct imx_ic_ops {
+	struct v4l2_subdev_ops *subdev_ops;
+	struct v4l2_subdev_internal_ops *internal_ops;
+	struct media_entity_operations *entity_ops;
+
+	int (*init)(struct imx_ic_priv *ic_priv);
+	void (*remove)(struct imx_ic_priv *ic_priv);
+};
+
+extern struct imx_ic_ops imx_ic_prp_ops;
+extern struct imx_ic_ops imx_ic_prpencvf_ops;
+extern struct imx_ic_ops imx_ic_pp_ops;
+
+#endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (21 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 22/36] media: imx: Add IC subdev drivers Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 10:28   ` Russell King - ARM Linux
  2017-02-17 10:47   ` Philipp Zabel
  2017-02-16  2:19 ` [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver Steve Longerbeam
                   ` (14 subsequent siblings)
  37 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Adds a MIPI CSI-2 receiver subdev driver. This subdev is required for
sensors with a MIPI CSI-2 interface.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/Makefile         |   1 +
 drivers/staging/media/imx/imx6-mipi-csi2.c | 573 +++++++++++++++++++++++++++++
 2 files changed, 574 insertions(+)
 create mode 100644 drivers/staging/media/imx/imx6-mipi-csi2.c

diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
index 878a126..3569625 100644
--- a/drivers/staging/media/imx/Makefile
+++ b/drivers/staging/media/imx/Makefile
@@ -9,3 +9,4 @@ obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
 obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-ic.o
 
 obj-$(CONFIG_VIDEO_IMX_CSI) += imx-media-csi.o
+obj-$(CONFIG_VIDEO_IMX_CSI) += imx6-mipi-csi2.o
diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
new file mode 100644
index 0000000..23dca80
--- /dev/null
+++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
@@ -0,0 +1,573 @@
+/*
+ * MIPI CSI-2 Receiver Subdev for Freescale i.MX6 SOC.
+ *
+ * Copyright (c) 2012-2017 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+#include <linux/clk.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/irq.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+#include "imx-media.h"
+
+/*
+ * There must be 5 pads: 1 sink pad for the sensor input, and 4 source
+ * pads, one for each MIPI CSI-2 virtual channel.
+ */
+#define CSI2_SINK_PAD       0
+#define CSI2_NUM_SINK_PADS  1
+#define CSI2_NUM_SRC_PADS   4
+#define CSI2_NUM_PADS       5
+
+struct csi2_dev {
+	struct device          *dev;
+	struct v4l2_subdev      sd;
+	struct media_pad       pad[CSI2_NUM_PADS];
+	struct v4l2_mbus_framefmt format_mbus;
+	struct clk             *dphy_clk;
+	struct clk             *cfg_clk;
+	struct clk             *pix_clk; /* what is this? */
+	void __iomem           *base;
+	struct v4l2_of_bus_mipi_csi2 bus;
+	bool                    on;
+	bool                    stream_on;
+	bool                    src_linked;
+	bool                    sink_linked[CSI2_NUM_SRC_PADS];
+};
+
+#define DEVICE_NAME "imx6-mipi-csi2"
+
+/* Register offsets */
+#define CSI2_VERSION            0x000
+#define CSI2_N_LANES            0x004
+#define CSI2_PHY_SHUTDOWNZ      0x008
+#define CSI2_DPHY_RSTZ          0x00c
+#define CSI2_RESETN             0x010
+#define CSI2_PHY_STATE          0x014
+#define PHY_STOPSTATEDATA_BIT   4
+#define PHY_STOPSTATEDATA(n)    BIT(PHY_STOPSTATEDATA_BIT + (n))
+#define PHY_RXCLKACTIVEHS       BIT(8)
+#define PHY_RXULPSCLKNOT        BIT(9)
+#define PHY_STOPSTATECLK        BIT(10)
+#define CSI2_DATA_IDS_1         0x018
+#define CSI2_DATA_IDS_2         0x01c
+#define CSI2_ERR1               0x020
+#define CSI2_ERR2               0x024
+#define CSI2_MSK1               0x028
+#define CSI2_MSK2               0x02c
+#define CSI2_PHY_TST_CTRL0      0x030
+#define PHY_TESTCLR		BIT(0)
+#define PHY_TESTCLK		BIT(1)
+#define CSI2_PHY_TST_CTRL1      0x034
+#define PHY_TESTEN		BIT(16)
+#define CSI2_SFT_RESET          0xf00
+
+static inline struct csi2_dev *sd_to_dev(struct v4l2_subdev *sdev)
+{
+	return container_of(sdev, struct csi2_dev, sd);
+}
+
+static void csi2_enable(struct csi2_dev *csi2, bool enable)
+{
+	if (enable) {
+		writel(0x1, csi2->base + CSI2_PHY_SHUTDOWNZ);
+		writel(0x1, csi2->base + CSI2_DPHY_RSTZ);
+		writel(0x1, csi2->base + CSI2_RESETN);
+	} else {
+		writel(0x0, csi2->base + CSI2_PHY_SHUTDOWNZ);
+		writel(0x0, csi2->base + CSI2_DPHY_RSTZ);
+		writel(0x0, csi2->base + CSI2_RESETN);
+	}
+}
+
+static void csi2_set_lanes(struct csi2_dev *csi2)
+{
+	int lanes = csi2->bus.num_data_lanes;
+
+	writel(lanes - 1, csi2->base + CSI2_N_LANES);
+}
+
+static void dw_mipi_csi2_phy_write(struct csi2_dev *csi2,
+				   u32 test_code, u32 test_data)
+{
+	/* Clear PHY test interface */
+	writel(PHY_TESTCLR, csi2->base + CSI2_PHY_TST_CTRL0);
+	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL1);
+	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
+
+	/* Raise test interface strobe signal */
+	writel(PHY_TESTCLK, csi2->base + CSI2_PHY_TST_CTRL0);
+
+	/* Configure address write on falling edge and lower strobe signal */
+	writel(PHY_TESTEN | test_code, csi2->base + CSI2_PHY_TST_CTRL1);
+	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
+
+	/* Configure data write on rising edge and raise strobe signal */
+	writel(test_data, csi2->base + CSI2_PHY_TST_CTRL1);
+	writel(PHY_TESTCLK, csi2->base + CSI2_PHY_TST_CTRL0);
+
+	/* Clear strobe signal */
+	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
+}
+
+static void csi2_dphy_init(struct csi2_dev *csi2)
+{
+	/*
+	 * FIXME: 0x14 is derived from a fixed D-PHY reference
+	 * clock from the HSI_TX PLL, and a fixed target lane max
+	 * bandwidth of 300 Mbps. This value should be derived
+	 * from the dphy_clk rate and the desired max lane bandwidth.
+	 * See drivers/gpu/drm/rockchip/dw-mipi-dsi.c for more info
+	 * on how this value is derived.
+	 */
+	dw_mipi_csi2_phy_write(csi2, 0x44, 0x14);
+}
+
+/*
+ * Waits for the ultra-low-power state on the D-PHY clock lane. This is
+ * currently unused and may not be needed at all, but keep it around just
+ * in case.
+ */
+static int __maybe_unused csi2_dphy_wait_ulp(struct csi2_dev *csi2)
+{
+	u32 reg;
+	int ret;
+
+	/* wait for ULP on clock lane */
+	ret = readl_poll_timeout(csi2->base + CSI2_PHY_STATE, reg,
+				 !(reg & PHY_RXULPSCLKNOT), 0, 500000);
+	if (ret) {
+		v4l2_err(&csi2->sd, "ULP timeout, phy_state = 0x%08x\n", reg);
+		return ret;
+	}
+
+	/* wait until no errors on bus */
+	ret = readl_poll_timeout(csi2->base + CSI2_ERR1, reg,
+				 reg == 0x0, 0, 500000);
+	if (ret) {
+		v4l2_err(&csi2->sd, "stable bus timeout, err1 = 0x%08x\n", reg);
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Waits for low-power LP-11 state (aka STOPSTATE) on data and clock
+ * lanes.
+ */
+static int csi2_dphy_wait_stopstate(struct csi2_dev *csi2)
+{
+	u32 mask, reg;
+	int ret;
+
+	mask = PHY_STOPSTATECLK |
+		((csi2->bus.num_data_lanes - 1) << PHY_STOPSTATEDATA_BIT);
+
+	ret = readl_poll_timeout(csi2->base + CSI2_PHY_STATE, reg,
+				 (reg & mask) == mask, 0, 500000);
+	if (ret) {
+		v4l2_err(&csi2->sd, "LP-11 timeout, phy_state = 0x%08x\n", reg);
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * Wait for active clock on the clock lane.
+ *
+ * FIXME: Currently unused, but it should be! It should be called from
+ * csi2_s_stream() below at stream ON, but the required MIPI CSI-2 startup
+ * sequence leaves no opportunity to call it. The sequence specified in
+ * the i.MX6 reference manual is as follows:
+ *
+ * 1. Deassert presetn signal (global reset).
+ *        It's not clear what this "global reset" signal is (maybe APB
+ *        global reset), but in any case this step corresponds to
+ *        csi2_s_power(ON) here.
+ *
+ * 2. Configure the MIPI Camera Sensor to put all Tx lanes in LP-11 state.
+ *        This must be carried out by the MIPI sensor's s_power(ON) subdev
+ *        op.
+ *
+ * 3. D-PHY initialization.
+ * 4. CSI2 Controller programming (Set N_LANES, deassert PHY_SHUTDOWNZ,
+ *    deassert PHY_RSTZ, deassert CSI2_RESETN).
+ * 5. Read the PHY status register (PHY_STATE) to confirm that all data and
+ *    clock lanes of the D-PHY are in Stop State.
+ *        These steps (3,4,5) are carried out by csi2_s_stream(ON) here.
+ *
+ * 6. Configure the MIPI Camera Sensor to start transmitting a clock on the
+ *    D-PHY clock lane.
+ *        This must be carried out by the MIPI sensor's s_stream(ON) subdev
+ *        op.
+ *
+ * 7. CSI2 Controller programming - Read the PHY status register (PHY_STATE)
+ *    to confirm that the D-PHY is receiving a clock on the D-PHY clock lane.
+ *        This is implemented by this unused function, and _should_ be called
+ *        by csi2_s_stream(ON) here, but csi2_s_stream(ON) has been taken up
+ *        by steps 3,4,5 above already.
+ *
+ * In summary, a temporary solution would require a hard-coded delay in the
+ * MIPI sensor's s_stream(ON) op, to allow time for a stable clock lane.
+ *
+ * A longer term solution might be to create a new subdev op, perhaps
+ * called prepare_stream, that can be implemented here, and would be
+ * assigned steps 3,4,5. Then csi2_s_stream(ON) would become available
+ * as step 7.
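+ *
+ * For illustration only (not part of this patch): assuming such a
+ * prepare_stream op existed, its implementation here could be a thin
+ * wrapper around the helpers above, roughly:
+ *
+ *	static int csi2_prepare_stream(struct v4l2_subdev *sd)
+ *	{
+ *		struct csi2_dev *csi2 = sd_to_dev(sd);
+ *		int ret;
+ *
+ *		ret = clk_prepare_enable(csi2->pix_clk);
+ *		if (ret)
+ *			return ret;
+ *
+ *		csi2_dphy_init(csi2);
+ *		csi2_set_lanes(csi2);
+ *		csi2_enable(csi2, true);
+ *
+ *		ret = csi2_dphy_wait_stopstate(csi2);
+ *		if (ret) {
+ *			csi2_enable(csi2, false);
+ *			clk_disable_unprepare(csi2->pix_clk);
+ *		}
+ *		return ret;
+ *	}
+ *
+ * csi2_s_stream(ON) would then be free to perform step 7 by calling
+ * csi2_dphy_wait_clock_lane().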
+ */
+static int __maybe_unused csi2_dphy_wait_clock_lane(struct csi2_dev *csi2)
+{
+	u32 reg;
+	int ret;
+
+	ret = readl_poll_timeout(csi2->base + CSI2_PHY_STATE, reg,
+				 (reg & PHY_RXCLKACTIVEHS), 0, 500000);
+	if (ret) {
+		v4l2_err(&csi2->sd, "clock lane timeout, phy_state = 0x%08x\n",
+			 reg);
+		return ret;
+	}
+
+	return 0;
+}
+
+/*
+ * V4L2 subdev operations.
+ */
+
+/* Startup Sequence Step 1 */
+static int csi2_s_power(struct v4l2_subdev *sd, int on)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+	int ret;
+
+	if (on && !csi2->on) {
+		dev_dbg(csi2->dev, "power ON\n");
+		ret = clk_prepare_enable(csi2->cfg_clk);
+		if (ret)
+			return ret;
+		ret = clk_prepare_enable(csi2->dphy_clk);
+		if (ret)
+			goto cfg_clk_off;
+	} else if (!on && csi2->on) {
+		dev_dbg(csi2->dev, "power OFF\n");
+		clk_disable_unprepare(csi2->dphy_clk);
+		clk_disable_unprepare(csi2->cfg_clk);
+	}
+
+	csi2->on = on;
+
+	return 0;
+
+cfg_clk_off:
+	clk_disable_unprepare(csi2->cfg_clk);
+	return ret;
+}
+
+/* Startup Sequence Steps 3, 4, 5 */
+static int csi2_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+	int i, ret = 0;
+
+	if (!csi2->src_linked)
+		return -EPIPE;
+	for (i = 0; i < CSI2_NUM_SRC_PADS; i++) {
+		if (csi2->sink_linked[i])
+			break;
+	}
+	if (i >= CSI2_NUM_SRC_PADS)
+		return -EPIPE;
+
+	if (enable && !csi2->stream_on) {
+		dev_dbg(csi2->dev, "stream ON\n");
+
+		ret = clk_prepare_enable(csi2->pix_clk);
+		if (ret)
+			return ret;
+
+		/* Step 3 */
+		csi2_dphy_init(csi2);
+		/* Step 4 */
+		csi2_set_lanes(csi2);
+		csi2_enable(csi2, true);
+
+		/* Step 5 */
+		ret = csi2_dphy_wait_stopstate(csi2);
+		if (ret) {
+			csi2_enable(csi2, false);
+			clk_disable_unprepare(csi2->pix_clk);
+			return ret;
+		}
+	} else if (!enable && csi2->stream_on) {
+		dev_dbg(csi2->dev, "stream OFF\n");
+		csi2_enable(csi2, false);
+		clk_disable_unprepare(csi2->pix_clk);
+	}
+
+	csi2->stream_on = enable;
+	return 0;
+}
+
+static int csi2_link_setup(struct media_entity *entity,
+			   const struct media_pad *local,
+			   const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+	struct v4l2_subdev *remote_sd;
+
+	dev_dbg(csi2->dev, "link setup %s -> %s\n", remote->entity->name,
+		local->entity->name);
+
+	remote_sd = media_entity_to_v4l2_subdev(remote->entity);
+
+	if (local->flags & MEDIA_PAD_FL_SOURCE) {
+		if (flags & MEDIA_LNK_FL_ENABLED) {
+			if (csi2->sink_linked[local->index])
+				return -EBUSY;
+			csi2->sink_linked[local->index] = true;
+		} else {
+			csi2->sink_linked[local->index] = false;
+		}
+	} else {
+		if (flags & MEDIA_LNK_FL_ENABLED) {
+			if (csi2->src_linked)
+				return -EBUSY;
+			csi2->src_linked = true;
+		} else {
+			csi2->src_linked = false;
+		}
+	}
+
+	return 0;
+}
+
+static int csi2_get_fmt(struct v4l2_subdev *sd,
+			struct v4l2_subdev_pad_config *cfg,
+			struct v4l2_subdev_format *sdformat)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+	struct v4l2_mbus_framefmt *fmt;
+
+	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY)
+		fmt = v4l2_subdev_get_try_format(&csi2->sd, cfg,
+						 sdformat->pad);
+	else
+		fmt = &csi2->format_mbus;
+
+	sdformat->format = *fmt;
+
+	return 0;
+}
+
+static int csi2_set_fmt(struct v4l2_subdev *sd,
+			struct v4l2_subdev_pad_config *cfg,
+			struct v4l2_subdev_format *sdformat)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+
+	if (sdformat->pad >= CSI2_NUM_PADS)
+		return -EINVAL;
+
+	if (csi2->stream_on)
+		return -EBUSY;
+
+	/* The source pads mirror the sink pad format; no limits on the sink pad */
+	if (sdformat->pad != CSI2_SINK_PAD)
+		sdformat->format = csi2->format_mbus;
+
+	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY)
+		cfg->try_fmt = sdformat->format;
+	else
+		csi2->format_mbus = sdformat->format;
+
+	return 0;
+}
+
+/*
+ * Initialize the pads and a default mbus format once this subdev is
+ * registered with the media device.
+ */
+static int csi2_registered(struct v4l2_subdev *sd)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+	int i, ret;
+
+	for (i = 0; i < CSI2_NUM_PADS; i++) {
+		csi2->pad[i].flags = (i == CSI2_SINK_PAD) ?
+			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
+	}
+
+	/* set a default mbus format  */
+	ret = imx_media_init_mbus_fmt(&csi2->format_mbus,
+				      640, 480, 0, V4L2_FIELD_NONE, NULL);
+	if (ret)
+		return ret;
+
+	return media_entity_pads_init(&sd->entity, CSI2_NUM_PADS, csi2->pad);
+}
+
+static struct media_entity_operations csi2_entity_ops = {
+	.link_setup = csi2_link_setup,
+	.link_validate = v4l2_subdev_link_validate,
+};
+
+static struct v4l2_subdev_core_ops csi2_core_ops = {
+	.s_power = csi2_s_power,
+};
+
+static struct v4l2_subdev_video_ops csi2_video_ops = {
+	.s_stream = csi2_s_stream,
+};
+
+static struct v4l2_subdev_pad_ops csi2_pad_ops = {
+	.get_fmt = csi2_get_fmt,
+	.set_fmt = csi2_set_fmt,
+};
+
+static struct v4l2_subdev_ops csi2_subdev_ops = {
+	.core = &csi2_core_ops,
+	.video = &csi2_video_ops,
+	.pad = &csi2_pad_ops,
+};
+
+static struct v4l2_subdev_internal_ops csi2_internal_ops = {
+	.registered = csi2_registered,
+};
+
+static int csi2_parse_endpoints(struct csi2_dev *csi2)
+{
+	struct device_node *node = csi2->dev->of_node;
+	struct device_node *epnode;
+	struct v4l2_of_endpoint ep;
+
+	epnode = of_graph_get_endpoint_by_regs(node, 0, -1);
+	if (!epnode) {
+		v4l2_err(&csi2->sd, "failed to get sink endpoint node\n");
+		return -EINVAL;
+	}
+
+	v4l2_of_parse_endpoint(epnode, &ep);
+	of_node_put(epnode);
+
+	if (ep.bus_type != V4L2_MBUS_CSI2) {
+		v4l2_err(&csi2->sd, "invalid bus type, must be MIPI CSI2\n");
+		return -EINVAL;
+	}
+
+	csi2->bus = ep.bus.mipi_csi2;
+
+	dev_dbg(csi2->dev, "data lanes: %d\n", csi2->bus.num_data_lanes);
+	dev_dbg(csi2->dev, "flags: 0x%08x\n", csi2->bus.flags);
+	return 0;
+}
+
+static int csi2_probe(struct platform_device *pdev)
+{
+	struct csi2_dev *csi2;
+	struct resource *res;
+	int ret;
+
+	csi2 = devm_kzalloc(&pdev->dev, sizeof(*csi2), GFP_KERNEL);
+	if (!csi2)
+		return -ENOMEM;
+
+	csi2->dev = &pdev->dev;
+
+	v4l2_subdev_init(&csi2->sd, &csi2_subdev_ops);
+	v4l2_set_subdevdata(&csi2->sd, &pdev->dev);
+	csi2->sd.internal_ops = &csi2_internal_ops;
+	csi2->sd.entity.ops = &csi2_entity_ops;
+	csi2->sd.dev = &pdev->dev;
+	csi2->sd.owner = THIS_MODULE;
+	csi2->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+	strcpy(csi2->sd.name, DEVICE_NAME);
+	csi2->sd.entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
+	csi2->sd.grp_id = IMX_MEDIA_GRP_ID_CSI2;
+
+	ret = csi2_parse_endpoints(csi2);
+	if (ret)
+		return ret;
+
+	csi2->cfg_clk = devm_clk_get(&pdev->dev, "cfg");
+	if (IS_ERR(csi2->cfg_clk)) {
+		v4l2_err(&csi2->sd, "failed to get cfg clock\n");
+		ret = PTR_ERR(csi2->cfg_clk);
+		return ret;
+	}
+
+	csi2->dphy_clk = devm_clk_get(&pdev->dev, "dphy");
+	if (IS_ERR(csi2->dphy_clk)) {
+		v4l2_err(&csi2->sd, "failed to get dphy clock\n");
+		ret = PTR_ERR(csi2->dphy_clk);
+		return ret;
+	}
+
+	csi2->pix_clk = devm_clk_get(&pdev->dev, "pix");
+	if (IS_ERR(csi2->pix_clk)) {
+		v4l2_err(&csi2->sd, "failed to get pixel clock\n");
+		ret = PTR_ERR(csi2->pix_clk);
+		return ret;
+	}
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res) {
+		v4l2_err(&csi2->sd, "failed to get platform resources\n");
+		return -ENODEV;
+	}
+
+	csi2->base = devm_ioremap(&pdev->dev, res->start, PAGE_SIZE);
+	if (!csi2->base) {
+		v4l2_err(&csi2->sd, "failed to map CSI-2 registers\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, &csi2->sd);
+
+	return v4l2_async_register_subdev(&csi2->sd);
+}
+
+static int csi2_remove(struct platform_device *pdev)
+{
+	struct v4l2_subdev *sd = platform_get_drvdata(pdev);
+
+	csi2_s_power(sd, 0);
+
+	v4l2_async_unregister_subdev(sd);
+	media_entity_cleanup(&sd->entity);
+
+	return 0;
+}
+
+static const struct of_device_id csi2_dt_ids[] = {
+	{ .compatible = "fsl,imx6-mipi-csi2", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, csi2_dt_ids);
+
+static struct platform_driver csi2_driver = {
+	.driver = {
+		.name = DEVICE_NAME,
+		.of_match_table = csi2_dt_ids,
+	},
+	.probe = csi2_probe,
+	.remove = csi2_remove,
+};
+
+module_platform_driver(csi2_driver);
+
+MODULE_DESCRIPTION("i.MX5/6 MIPI CSI-2 Receiver driver");
+MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
+MODULE_LICENSE("GPL");
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (22 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-27 14:45   ` Rob Herring
  2017-02-16  2:19 ` [PATCH v4 25/36] ARM: imx_v6_v7_defconfig: Enable staging video4linux drivers Steve Longerbeam
                   ` (13 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

This driver is based on ov5640_mipi.c from the Freescale
imx_3.10.17_1.0.0_beta branch, heavily modified to bring it forward to the
latest interfaces and to clean up the code.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 .../devicetree/bindings/media/i2c/ov5640.txt       |   43 +
 drivers/media/i2c/Kconfig                          |    7 +
 drivers/media/i2c/Makefile                         |    1 +
 drivers/media/i2c/ov5640.c                         | 2109 ++++++++++++++++++++
 4 files changed, 2160 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/media/i2c/ov5640.txt
 create mode 100644 drivers/media/i2c/ov5640.c

diff --git a/Documentation/devicetree/bindings/media/i2c/ov5640.txt b/Documentation/devicetree/bindings/media/i2c/ov5640.txt
new file mode 100644
index 0000000..4607bbe
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/i2c/ov5640.txt
@@ -0,0 +1,43 @@
+* OmniVision OV5640 MIPI CSI-2 sensor
+
+Required Properties:
+- compatible: should be "ovti,ov5640"
+- clocks: reference to the xclk input clock.
+- clock-names: should be "xclk".
+- DOVDD-supply: Digital I/O voltage supply, 1.8 volts
+- AVDD-supply: Analog voltage supply, 2.8 volts
+- DVDD-supply: Digital core voltage supply, 1.5 volts
+
+Optional Properties:
+- reset-gpios: reference to the GPIO connected to the reset pin, if any.
+- pwdn-gpios: reference to the GPIO connected to the pwdn pin, if any.
+
+The device node must contain one 'port' child node for its digital output
+video port, in accordance with the video interface bindings defined in
+Documentation/devicetree/bindings/media/video-interfaces.txt.
+
+Example:
+
+&i2c1 {
+	ov5640: camera@3c {
+		compatible = "ovti,ov5640";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_ov5640>;
+		reg = <0x3c>;
+		clocks = <&clks IMX6QDL_CLK_CKO>;
+		clock-names = "xclk";
+		DOVDD-supply = <&vgen4_reg>; /* 1.8v */
+		AVDD-supply = <&vgen3_reg>;  /* 2.8v */
+		DVDD-supply = <&vgen2_reg>;  /* 1.5v */
+		pwdn-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>;
+		reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>;
+
+		port {
+			ov5640_to_mipi_csi2: endpoint {
+				remote-endpoint = <&mipi_csi2_from_ov5640>;
+				clock-lanes = <0>;
+				data-lanes = <1 2>;
+			};
+		};
+	};
+};
diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
index cee1dae..bf67661 100644
--- a/drivers/media/i2c/Kconfig
+++ b/drivers/media/i2c/Kconfig
@@ -531,6 +531,13 @@ config VIDEO_OV2659
 	  To compile this driver as a module, choose M here: the
 	  module will be called ov2659.
 
+config VIDEO_OV5640
+	tristate "OmniVision OV5640 sensor support"
+	depends on GPIOLIB && VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API
+	---help---
+	  This is a V4L2 sensor-level driver for the OmniVision
+	  OV5640 camera sensor with a MIPI CSI-2 interface.
+
 config VIDEO_OV7640
 	tristate "OmniVision OV7640 sensor support"
 	depends on I2C && VIDEO_V4L2
diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
index 5bc7bbe..3a9d73a 100644
--- a/drivers/media/i2c/Makefile
+++ b/drivers/media/i2c/Makefile
@@ -57,6 +57,7 @@ obj-$(CONFIG_VIDEO_VP27SMPX) += vp27smpx.o
 obj-$(CONFIG_VIDEO_SONY_BTF_MPX) += sony-btf-mpx.o
 obj-$(CONFIG_VIDEO_UPD64031A) += upd64031a.o
 obj-$(CONFIG_VIDEO_UPD64083) += upd64083.o
+obj-$(CONFIG_VIDEO_OV5640) += ov5640.o
 obj-$(CONFIG_VIDEO_OV7640) += ov7640.o
 obj-$(CONFIG_VIDEO_OV7670) += ov7670.o
 obj-$(CONFIG_VIDEO_OV9650) += ov9650.o
diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
new file mode 100644
index 0000000..b3535af
--- /dev/null
+++ b/drivers/media/i2c/ov5640.c
@@ -0,0 +1,2109 @@
+/*
+ * Copyright (C) 2011-2013 Freescale Semiconductor, Inc. All Rights Reserved.
+ * Copyright (C) 2014-2017 Mentor Graphics Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
+#include <linux/clkdev.h>
+#include <linux/ctype.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/gpio/consumer.h>
+#include <linux/regulator/consumer.h>
+#include <media/v4l2-async.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-subdev.h>
+
+/* min/typical/max system clock (xclk) frequencies */
+#define OV5640_XCLK_MIN  6000000
+#define OV5640_XCLK_MAX 24000000
+
+/*
+ * FIXME: there is no subdev API to set the MIPI CSI-2
+ * virtual channel yet, so this is hardcoded for now.
+ */
+#define OV5640_MIPI_VC	1
+
+#define OV5640_DEFAULT_SLAVE_ID 0x3c
+
+#define OV5640_REG_CHIP_ID		0x300a
+#define OV5640_REG_PAD_OUTPUT00		0x3019
+#define OV5640_REG_SC_PLL_CTRL0		0x3034
+#define OV5640_REG_SC_PLL_CTRL1		0x3035
+#define OV5640_REG_SC_PLL_CTRL2		0x3036
+#define OV5640_REG_SC_PLL_CTRL3		0x3037
+#define OV5640_REG_SLAVE_ID		0x3100
+#define OV5640_REG_SYS_ROOT_DIVIDER	0x3108
+#define OV5640_REG_AWB_R_GAIN		0x3400
+#define OV5640_REG_AWB_G_GAIN		0x3402
+#define OV5640_REG_AWB_B_GAIN		0x3404
+#define OV5640_REG_AWB_MANUAL_CTRL	0x3406
+#define OV5640_REG_AEC_PK_EXPOSURE_HI	0x3500
+#define OV5640_REG_AEC_PK_EXPOSURE_MED	0x3501
+#define OV5640_REG_AEC_PK_EXPOSURE_LO	0x3502
+#define OV5640_REG_AEC_PK_MANUAL	0x3503
+#define OV5640_REG_AEC_PK_REAL_GAIN	0x350a
+#define OV5640_REG_AEC_PK_VTS		0x350c
+#define OV5640_REG_TIMING_HTS		0x380c
+#define OV5640_REG_TIMING_VTS		0x380e
+#define OV5640_REG_TIMING_TC_REG21	0x3821
+#define OV5640_REG_AEC_CTRL00		0x3a00
+#define OV5640_REG_AEC_B50_STEP		0x3a08
+#define OV5640_REG_AEC_B60_STEP		0x3a0a
+#define OV5640_REG_AEC_CTRL0D		0x3a0d
+#define OV5640_REG_AEC_CTRL0E		0x3a0e
+#define OV5640_REG_AEC_CTRL0F		0x3a0f
+#define OV5640_REG_AEC_CTRL10		0x3a10
+#define OV5640_REG_AEC_CTRL11		0x3a11
+#define OV5640_REG_AEC_CTRL1B		0x3a1b
+#define OV5640_REG_AEC_CTRL1E		0x3a1e
+#define OV5640_REG_AEC_CTRL1F		0x3a1f
+#define OV5640_REG_HZ5060_CTRL00	0x3c00
+#define OV5640_REG_HZ5060_CTRL01	0x3c01
+#define OV5640_REG_SIGMADELTA_CTRL0C	0x3c0c
+#define OV5640_REG_FRAME_CTRL01		0x4202
+#define OV5640_REG_MIPI_CTRL00		0x4800
+#define OV5640_REG_DEBUG_MODE		0x4814
+#define OV5640_REG_PRE_ISP_TEST_SET1	0x503d
+#define OV5640_REG_SDE_CTRL0		0x5580
+#define OV5640_REG_SDE_CTRL1		0x5581
+#define OV5640_REG_SDE_CTRL3		0x5583
+#define OV5640_REG_SDE_CTRL4		0x5584
+#define OV5640_REG_SDE_CTRL5		0x5585
+#define OV5640_REG_AVG_READOUT		0x56a1
+
+enum ov5640_mode_id {
+	OV5640_MODE_QCIF_176_144 = 0,
+	OV5640_MODE_QVGA_320_240,
+	OV5640_MODE_VGA_640_480,
+	OV5640_MODE_NTSC_720_480,
+	OV5640_MODE_PAL_720_576,
+	OV5640_MODE_XGA_1024_768,
+	OV5640_MODE_720P_1280_720,
+	OV5640_MODE_1080P_1920_1080,
+	OV5640_MODE_QSXGA_2592_1944,
+	OV5640_NUM_MODES,
+};
+
+enum ov5640_frame_rate {
+	OV5640_15_FPS = 0,
+	OV5640_30_FPS,
+	OV5640_NUM_FRAMERATES,
+};
+
+static const int ov5640_framerates[] = {
+	[OV5640_15_FPS] = 15,
+	[OV5640_30_FPS] = 30,
+};
+
+/* regulator supplies */
+static const char * const ov5640_supply_name[] = {
+	"DOVDD", /* Digital I/O (1.8V) supply */
+	"DVDD",  /* Digital Core (1.5V) supply */
+	"AVDD",  /* Analog (2.8V) supply */
+};
+
+#define OV5640_NUM_SUPPLIES ARRAY_SIZE(ov5640_supply_name)
+
+/*
+ * image sizes below 1280 * 960 use SUBSAMPLING
+ * image sizes above 1280 * 960 use SCALING
+ */
+enum ov5640_downsize_mode {
+	SUBSAMPLING,
+	SCALING,
+};
+
+struct reg_value {
+	u16 reg_addr;
+	u8 val;
+	u8 mask;
+	u32 delay_ms;
+};
+
+struct ov5640_mode_info {
+	enum ov5640_mode_id id;
+	enum ov5640_downsize_mode dn_mode;
+	u32 width;
+	u32 height;
+	const struct reg_value *reg_data;
+	u32 reg_data_size;
+};
+
+struct ov5640_ctrls {
+	struct v4l2_ctrl_handler handler;
+	struct {
+		struct v4l2_ctrl *auto_exp;
+		struct v4l2_ctrl *exposure;
+	};
+	struct {
+		struct v4l2_ctrl *auto_wb;
+		struct v4l2_ctrl *blue_balance;
+		struct v4l2_ctrl *red_balance;
+	};
+	struct {
+		struct v4l2_ctrl *auto_gain;
+		struct v4l2_ctrl *gain;
+	};
+	struct v4l2_ctrl *brightness;
+	struct v4l2_ctrl *saturation;
+	struct v4l2_ctrl *contrast;
+	struct v4l2_ctrl *hue;
+	struct v4l2_ctrl *test_pattern;
+};
+
+struct ov5640_dev {
+	struct i2c_client *i2c_client;
+	struct v4l2_subdev sd;
+	struct media_pad pad;
+	struct v4l2_of_endpoint ep; /* the parsed DT endpoint info */
+	struct v4l2_mbus_framefmt fmt;
+	struct v4l2_captureparm streamcap;
+	struct clk *xclk; /* system clock to OV5640 */
+	u32 xclk_freq;
+
+	struct mutex power_lock; /* lock to protect power_count */
+	int power_count;
+
+	const struct ov5640_mode_info *current_mode;
+	enum ov5640_frame_rate current_fr;
+
+	struct ov5640_ctrls ctrls;
+
+	struct gpio_desc *reset_gpio;
+	struct gpio_desc *pwdn_gpio;
+
+	u32 prev_sysclk, prev_hts;
+	u32 ae_low, ae_high, ae_target;
+
+	struct regulator_bulk_data supplies[OV5640_NUM_SUPPLIES];
+};
+
+static inline struct ov5640_dev *to_ov5640_dev(struct v4l2_subdev *sd)
+{
+	return container_of(sd, struct ov5640_dev, sd);
+}
+
+static inline struct v4l2_subdev *ctrl_to_sd(struct v4l2_ctrl *ctrl)
+{
+	return &container_of(ctrl->handler, struct ov5640_dev,
+			     ctrls.handler)->sd;
+}
+
+/*
+ * FIXME: all of these register tables are likely filled with
+ * entries that set the register to their power-on default values,
+ * and which are otherwise not touched by this driver. Those entries
+ * should be identified and removed to speed register load time
+ * over i2c.
+ */
+
+static const struct reg_value ov5640_init_setting_30fps_VGA[] = {
+
+	{0x3103, 0x11, 0, 0}, {0x3008, 0x82, 0, 5}, {0x3008, 0x42, 0, 0},
+	{0x3103, 0x03, 0, 0}, {0x3017, 0x00, 0, 0}, {0x3018, 0x00, 0, 0},
+	{0x3034, 0x18, 0, 0}, {0x3035, 0x14, 0, 0}, {0x3036, 0x38, 0, 0},
+	{0x3037, 0x13, 0, 0}, {0x3108, 0x01, 0, 0}, {0x3630, 0x36, 0, 0},
+	{0x3631, 0x0e, 0, 0}, {0x3632, 0xe2, 0, 0}, {0x3633, 0x12, 0, 0},
+	{0x3621, 0xe0, 0, 0}, {0x3704, 0xa0, 0, 0}, {0x3703, 0x5a, 0, 0},
+	{0x3715, 0x78, 0, 0}, {0x3717, 0x01, 0, 0}, {0x370b, 0x60, 0, 0},
+	{0x3705, 0x1a, 0, 0}, {0x3905, 0x02, 0, 0}, {0x3906, 0x10, 0, 0},
+	{0x3901, 0x0a, 0, 0}, {0x3731, 0x12, 0, 0}, {0x3600, 0x08, 0, 0},
+	{0x3601, 0x33, 0, 0}, {0x302d, 0x60, 0, 0}, {0x3620, 0x52, 0, 0},
+	{0x371b, 0x20, 0, 0}, {0x471c, 0x50, 0, 0}, {0x3a13, 0x43, 0, 0},
+	{0x3a18, 0x00, 0, 0}, {0x3a19, 0xf8, 0, 0}, {0x3635, 0x13, 0, 0},
+	{0x3636, 0x03, 0, 0}, {0x3634, 0x40, 0, 0}, {0x3622, 0x01, 0, 0},
+	{0x3c01, 0xa4, 0, 0}, {0x3c04, 0x28, 0, 0}, {0x3c05, 0x98, 0, 0},
+	{0x3c06, 0x00, 0, 0}, {0x3c07, 0x08, 0, 0}, {0x3c08, 0x00, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0x80, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x3000, 0x00, 0, 0},
+	{0x3002, 0x1c, 0, 0}, {0x3004, 0xff, 0, 0}, {0x3006, 0xc3, 0, 0},
+	{0x300e, 0x45, 0, 0}, {0x302e, 0x08, 0, 0}, {0x4300, 0x3f, 0, 0},
+	{0x501f, 0x00, 0, 0}, {0x4713, 0x03, 0, 0}, {0x4407, 0x04, 0, 0},
+	{0x440e, 0x00, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x4837, 0x0a, 0, 0}, {0x4800, 0x04, 0, 0}, {0x3824, 0x02, 0, 0},
+	{0x5000, 0xa7, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x5180, 0xff, 0, 0},
+	{0x5181, 0xf2, 0, 0}, {0x5182, 0x00, 0, 0}, {0x5183, 0x14, 0, 0},
+	{0x5184, 0x25, 0, 0}, {0x5185, 0x24, 0, 0}, {0x5186, 0x09, 0, 0},
+	{0x5187, 0x09, 0, 0}, {0x5188, 0x09, 0, 0}, {0x5189, 0x88, 0, 0},
+	{0x518a, 0x54, 0, 0}, {0x518b, 0xee, 0, 0}, {0x518c, 0xb2, 0, 0},
+	{0x518d, 0x50, 0, 0}, {0x518e, 0x34, 0, 0}, {0x518f, 0x6b, 0, 0},
+	{0x5190, 0x46, 0, 0}, {0x5191, 0xf8, 0, 0}, {0x5192, 0x04, 0, 0},
+	{0x5193, 0x70, 0, 0}, {0x5194, 0xf0, 0, 0}, {0x5195, 0xf0, 0, 0},
+	{0x5196, 0x03, 0, 0}, {0x5197, 0x01, 0, 0}, {0x5198, 0x04, 0, 0},
+	{0x5199, 0x6c, 0, 0}, {0x519a, 0x04, 0, 0}, {0x519b, 0x00, 0, 0},
+	{0x519c, 0x09, 0, 0}, {0x519d, 0x2b, 0, 0}, {0x519e, 0x38, 0, 0},
+	{0x5381, 0x1e, 0, 0}, {0x5382, 0x5b, 0, 0}, {0x5383, 0x08, 0, 0},
+	{0x5384, 0x0a, 0, 0}, {0x5385, 0x7e, 0, 0}, {0x5386, 0x88, 0, 0},
+	{0x5387, 0x7c, 0, 0}, {0x5388, 0x6c, 0, 0}, {0x5389, 0x10, 0, 0},
+	{0x538a, 0x01, 0, 0}, {0x538b, 0x98, 0, 0}, {0x5300, 0x08, 0, 0},
+	{0x5301, 0x30, 0, 0}, {0x5302, 0x10, 0, 0}, {0x5303, 0x00, 0, 0},
+	{0x5304, 0x08, 0, 0}, {0x5305, 0x30, 0, 0}, {0x5306, 0x08, 0, 0},
+	{0x5307, 0x16, 0, 0}, {0x5309, 0x08, 0, 0}, {0x530a, 0x30, 0, 0},
+	{0x530b, 0x04, 0, 0}, {0x530c, 0x06, 0, 0}, {0x5480, 0x01, 0, 0},
+	{0x5481, 0x08, 0, 0}, {0x5482, 0x14, 0, 0}, {0x5483, 0x28, 0, 0},
+	{0x5484, 0x51, 0, 0}, {0x5485, 0x65, 0, 0}, {0x5486, 0x71, 0, 0},
+	{0x5487, 0x7d, 0, 0}, {0x5488, 0x87, 0, 0}, {0x5489, 0x91, 0, 0},
+	{0x548a, 0x9a, 0, 0}, {0x548b, 0xaa, 0, 0}, {0x548c, 0xb8, 0, 0},
+	{0x548d, 0xcd, 0, 0}, {0x548e, 0xdd, 0, 0}, {0x548f, 0xea, 0, 0},
+	{0x5490, 0x1d, 0, 0}, {0x5580, 0x02, 0, 0}, {0x5583, 0x40, 0, 0},
+	{0x5584, 0x10, 0, 0}, {0x5589, 0x10, 0, 0}, {0x558a, 0x00, 0, 0},
+	{0x558b, 0xf8, 0, 0}, {0x5800, 0x23, 0, 0}, {0x5801, 0x14, 0, 0},
+	{0x5802, 0x0f, 0, 0}, {0x5803, 0x0f, 0, 0}, {0x5804, 0x12, 0, 0},
+	{0x5805, 0x26, 0, 0}, {0x5806, 0x0c, 0, 0}, {0x5807, 0x08, 0, 0},
+	{0x5808, 0x05, 0, 0}, {0x5809, 0x05, 0, 0}, {0x580a, 0x08, 0, 0},
+	{0x580b, 0x0d, 0, 0}, {0x580c, 0x08, 0, 0}, {0x580d, 0x03, 0, 0},
+	{0x580e, 0x00, 0, 0}, {0x580f, 0x00, 0, 0}, {0x5810, 0x03, 0, 0},
+	{0x5811, 0x09, 0, 0}, {0x5812, 0x07, 0, 0}, {0x5813, 0x03, 0, 0},
+	{0x5814, 0x00, 0, 0}, {0x5815, 0x01, 0, 0}, {0x5816, 0x03, 0, 0},
+	{0x5817, 0x08, 0, 0}, {0x5818, 0x0d, 0, 0}, {0x5819, 0x08, 0, 0},
+	{0x581a, 0x05, 0, 0}, {0x581b, 0x06, 0, 0}, {0x581c, 0x08, 0, 0},
+	{0x581d, 0x0e, 0, 0}, {0x581e, 0x29, 0, 0}, {0x581f, 0x17, 0, 0},
+	{0x5820, 0x11, 0, 0}, {0x5821, 0x11, 0, 0}, {0x5822, 0x15, 0, 0},
+	{0x5823, 0x28, 0, 0}, {0x5824, 0x46, 0, 0}, {0x5825, 0x26, 0, 0},
+	{0x5826, 0x08, 0, 0}, {0x5827, 0x26, 0, 0}, {0x5828, 0x64, 0, 0},
+	{0x5829, 0x26, 0, 0}, {0x582a, 0x24, 0, 0}, {0x582b, 0x22, 0, 0},
+	{0x582c, 0x24, 0, 0}, {0x582d, 0x24, 0, 0}, {0x582e, 0x06, 0, 0},
+	{0x582f, 0x22, 0, 0}, {0x5830, 0x40, 0, 0}, {0x5831, 0x42, 0, 0},
+	{0x5832, 0x24, 0, 0}, {0x5833, 0x26, 0, 0}, {0x5834, 0x24, 0, 0},
+	{0x5835, 0x22, 0, 0}, {0x5836, 0x22, 0, 0}, {0x5837, 0x26, 0, 0},
+	{0x5838, 0x44, 0, 0}, {0x5839, 0x24, 0, 0}, {0x583a, 0x26, 0, 0},
+	{0x583b, 0x28, 0, 0}, {0x583c, 0x42, 0, 0}, {0x583d, 0xce, 0, 0},
+	{0x5025, 0x00, 0, 0}, {0x3a0f, 0x30, 0, 0}, {0x3a10, 0x28, 0, 0},
+	{0x3a1b, 0x30, 0, 0}, {0x3a1e, 0x26, 0, 0}, {0x3a11, 0x60, 0, 0},
+	{0x3a1f, 0x14, 0, 0}, {0x3008, 0x02, 0, 0}, {0x3c00, 0x04, 0, 300},
+};
+
+static const struct reg_value ov5640_setting_30fps_VGA_640_480[] = {
+
+	{0x3035, 0x14, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0x80, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x04, 0, 0}, {0x380f, 0x38, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x0e, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x3503, 0x00, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_VGA_640_480[] = {
+	{0x3035, 0x22, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0x80, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_XGA_1024_768[] = {
+
+	{0x3035, 0x14, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0x80, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x04, 0, 0}, {0x380f, 0x38, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x0e, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x3503, 0x00, 0, 0},
+	{0x3808, 0x04, 0, 0}, {0x3809, 0x00, 0, 0}, {0x380a, 0x03, 0, 0},
+	{0x380b, 0x00, 0, 0}, {0x3035, 0x12, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_XGA_1024_768[] = {
+	{0x3035, 0x22, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0x80, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x3808, 0x04, 0, 0},
+	{0x3809, 0x00, 0, 0}, {0x380a, 0x03, 0, 0}, {0x380b, 0x00, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_QVGA_320_240[] = {
+	{0x3035, 0x14, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x01, 0, 0}, {0x3809, 0x40, 0, 0}, {0x380a, 0x00, 0, 0},
+	{0x380b, 0xf0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_QVGA_320_240[] = {
+	{0x3035, 0x22, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x01, 0, 0}, {0x3809, 0x40, 0, 0}, {0x380a, 0x00, 0, 0},
+	{0x380b, 0xf0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_QCIF_176_144[] = {
+	{0x3035, 0x14, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x00, 0, 0}, {0x3809, 0xb0, 0, 0}, {0x380a, 0x00, 0, 0},
+	{0x380b, 0x90, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+static const struct reg_value ov5640_setting_15fps_QCIF_176_144[] = {
+	{0x3035, 0x22, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x00, 0, 0}, {0x3809, 0xb0, 0, 0}, {0x380a, 0x00, 0, 0},
+	{0x380b, 0x90, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_NTSC_720_480[] = {
+	{0x3035, 0x12, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0xd0, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x3c, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_NTSC_720_480[] = {
+	{0x3035, 0x22, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0xd0, 0, 0}, {0x380a, 0x01, 0, 0},
+	{0x380b, 0xe0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x3c, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_PAL_720_576[] = {
+	{0x3035, 0x12, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0xd0, 0, 0}, {0x380a, 0x02, 0, 0},
+	{0x380b, 0x40, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x38, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_PAL_720_576[] = {
+	{0x3035, 0x22, 0, 0}, {0x3036, 0x38, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x04, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9b, 0, 0},
+	{0x3808, 0x02, 0, 0}, {0x3809, 0xd0, 0, 0}, {0x380a, 0x02, 0, 0},
+	{0x380b, 0x40, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x68, 0, 0},
+	{0x380e, 0x03, 0, 0}, {0x380f, 0xd8, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x38, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x06, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_720P_1280_720[] = {
+	{0x3008, 0x42, 0, 0},
+	{0x3035, 0x21, 0, 0}, {0x3036, 0x54, 0, 0}, {0x3c07, 0x07, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0xfa, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x06, 0, 0}, {0x3807, 0xa9, 0, 0},
+	{0x3808, 0x05, 0, 0}, {0x3809, 0x00, 0, 0}, {0x380a, 0x02, 0, 0},
+	{0x380b, 0xd0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x64, 0, 0},
+	{0x380e, 0x02, 0, 0}, {0x380f, 0xe4, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x04, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x02, 0, 0},
+	{0x3a03, 0xe4, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0xbc, 0, 0},
+	{0x3a0a, 0x01, 0, 0}, {0x3a0b, 0x72, 0, 0}, {0x3a0e, 0x01, 0, 0},
+	{0x3a0d, 0x02, 0, 0}, {0x3a14, 0x02, 0, 0}, {0x3a15, 0xe4, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x02, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x37, 0, 0}, {0x460c, 0x20, 0, 0},
+	{0x3824, 0x04, 0, 0}, {0x5001, 0x83, 0, 0}, {0x4005, 0x1a, 0, 0},
+	{0x3008, 0x02, 0, 0}, {0x3503, 0,    0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_720P_1280_720[] = {
+	{0x3035, 0x41, 0, 0}, {0x3036, 0x54, 0, 0}, {0x3c07, 0x07, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x41, 0, 0}, {0x3821, 0x07, 0, 0}, {0x3814, 0x31, 0, 0},
+	{0x3815, 0x31, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0xfa, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x06, 0, 0}, {0x3807, 0xa9, 0, 0},
+	{0x3808, 0x05, 0, 0}, {0x3809, 0x00, 0, 0}, {0x380a, 0x02, 0, 0},
+	{0x380b, 0xd0, 0, 0}, {0x380c, 0x07, 0, 0}, {0x380d, 0x64, 0, 0},
+	{0x380e, 0x02, 0, 0}, {0x380f, 0xe4, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x04, 0, 0},
+	{0x3618, 0x00, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3709, 0x52, 0, 0}, {0x370c, 0x03, 0, 0}, {0x3a02, 0x02, 0, 0},
+	{0x3a03, 0xe4, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0xbc, 0, 0},
+	{0x3a0a, 0x01, 0, 0}, {0x3a0b, 0x72, 0, 0}, {0x3a0e, 0x01, 0, 0},
+	{0x3a0d, 0x02, 0, 0}, {0x3a14, 0x02, 0, 0}, {0x3a15, 0xe4, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x4713, 0x02, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x37, 0, 0}, {0x460c, 0x20, 0, 0},
+	{0x3824, 0x04, 0, 0}, {0x5001, 0x83, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_30fps_1080P_1920_1080[] = {
+	{0x3008, 0x42, 0, 0},
+	{0x3035, 0x21, 0, 0}, {0x3036, 0x54, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x40, 0, 0}, {0x3821, 0x06, 0, 0}, {0x3814, 0x11, 0, 0},
+	{0x3815, 0x11, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x00, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9f, 0, 0},
+	{0x3808, 0x0a, 0, 0}, {0x3809, 0x20, 0, 0}, {0x380a, 0x07, 0, 0},
+	{0x380b, 0x98, 0, 0}, {0x380c, 0x0b, 0, 0}, {0x380d, 0x1c, 0, 0},
+	{0x380e, 0x07, 0, 0}, {0x380f, 0xb0, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x04, 0, 0},
+	{0x3618, 0x04, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x21, 0, 0},
+	{0x3709, 0x12, 0, 0}, {0x370c, 0x00, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x06, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0x83, 0, 0}, {0x3035, 0x11, 0, 0},
+	{0x3036, 0x54, 0, 0}, {0x3c07, 0x07, 0, 0}, {0x3c08, 0x00, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3800, 0x01, 0, 0}, {0x3801, 0x50, 0, 0}, {0x3802, 0x01, 0, 0},
+	{0x3803, 0xb2, 0, 0}, {0x3804, 0x08, 0, 0}, {0x3805, 0xef, 0, 0},
+	{0x3806, 0x05, 0, 0}, {0x3807, 0xf1, 0, 0}, {0x3808, 0x07, 0, 0},
+	{0x3809, 0x80, 0, 0}, {0x380a, 0x04, 0, 0}, {0x380b, 0x38, 0, 0},
+	{0x380c, 0x09, 0, 0}, {0x380d, 0xc4, 0, 0}, {0x380e, 0x04, 0, 0},
+	{0x380f, 0x60, 0, 0}, {0x3612, 0x2b, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3a02, 0x04, 0, 0}, {0x3a03, 0x60, 0, 0}, {0x3a08, 0x01, 0, 0},
+	{0x3a09, 0x50, 0, 0}, {0x3a0a, 0x01, 0, 0}, {0x3a0b, 0x18, 0, 0},
+	{0x3a0e, 0x03, 0, 0}, {0x3a0d, 0x04, 0, 0}, {0x3a14, 0x04, 0, 0},
+	{0x3a15, 0x60, 0, 0}, {0x4713, 0x02, 0, 0}, {0x4407, 0x04, 0, 0},
+	{0x460b, 0x37, 0, 0}, {0x460c, 0x20, 0, 0}, {0x3824, 0x04, 0, 0},
+	{0x4005, 0x1a, 0, 0}, {0x3008, 0x02, 0, 0},
+	{0x3503, 0, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_1080P_1920_1080[] = {
+	{0x3008, 0x42, 0, 0},
+	{0x3035, 0x21, 0, 0}, {0x3036, 0x54, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x40, 0, 0}, {0x3821, 0x06, 0, 0}, {0x3814, 0x11, 0, 0},
+	{0x3815, 0x11, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x00, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9f, 0, 0},
+	{0x3808, 0x0a, 0, 0}, {0x3809, 0x20, 0, 0}, {0x380a, 0x07, 0, 0},
+	{0x380b, 0x98, 0, 0}, {0x380c, 0x0b, 0, 0}, {0x380d, 0x1c, 0, 0},
+	{0x380e, 0x07, 0, 0}, {0x380f, 0xb0, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x04, 0, 0},
+	{0x3618, 0x04, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x21, 0, 0},
+	{0x3709, 0x12, 0, 0}, {0x370c, 0x00, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x06, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0x83, 0, 0}, {0x3035, 0x21, 0, 0},
+	{0x3036, 0x54, 0, 1}, {0x3c07, 0x07, 0, 0}, {0x3c08, 0x00, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3800, 0x01, 0, 0}, {0x3801, 0x50, 0, 0}, {0x3802, 0x01, 0, 0},
+	{0x3803, 0xb2, 0, 0}, {0x3804, 0x08, 0, 0}, {0x3805, 0xef, 0, 0},
+	{0x3806, 0x05, 0, 0}, {0x3807, 0xf1, 0, 0}, {0x3808, 0x07, 0, 0},
+	{0x3809, 0x80, 0, 0}, {0x380a, 0x04, 0, 0}, {0x380b, 0x38, 0, 0},
+	{0x380c, 0x09, 0, 0}, {0x380d, 0xc4, 0, 0}, {0x380e, 0x04, 0, 0},
+	{0x380f, 0x60, 0, 0}, {0x3612, 0x2b, 0, 0}, {0x3708, 0x64, 0, 0},
+	{0x3a02, 0x04, 0, 0}, {0x3a03, 0x60, 0, 0}, {0x3a08, 0x01, 0, 0},
+	{0x3a09, 0x50, 0, 0}, {0x3a0a, 0x01, 0, 0}, {0x3a0b, 0x18, 0, 0},
+	{0x3a0e, 0x03, 0, 0}, {0x3a0d, 0x04, 0, 0}, {0x3a14, 0x04, 0, 0},
+	{0x3a15, 0x60, 0, 0}, {0x4713, 0x02, 0, 0}, {0x4407, 0x04, 0, 0},
+	{0x460b, 0x37, 0, 0}, {0x460c, 0x20, 0, 0}, {0x3824, 0x04, 0, 0},
+	{0x4005, 0x1a, 0, 0}, {0x3008, 0x02, 0, 0}, {0x3503, 0, 0, 0},
+};
+
+static const struct reg_value ov5640_setting_15fps_QSXGA_2592_1944[] = {
+	{0x3820, 0x40, 0, 0}, {0x3821, 0x06, 0, 0},
+	{0x3035, 0x21, 0, 0}, {0x3036, 0x54, 0, 0}, {0x3c07, 0x08, 0, 0},
+	{0x3c09, 0x1c, 0, 0}, {0x3c0a, 0x9c, 0, 0}, {0x3c0b, 0x40, 0, 0},
+	{0x3820, 0x40, 0, 0}, {0x3821, 0x06, 0, 0}, {0x3814, 0x11, 0, 0},
+	{0x3815, 0x11, 0, 0}, {0x3800, 0x00, 0, 0}, {0x3801, 0x00, 0, 0},
+	{0x3802, 0x00, 0, 0}, {0x3803, 0x00, 0, 0}, {0x3804, 0x0a, 0, 0},
+	{0x3805, 0x3f, 0, 0}, {0x3806, 0x07, 0, 0}, {0x3807, 0x9f, 0, 0},
+	{0x3808, 0x0a, 0, 0}, {0x3809, 0x20, 0, 0}, {0x380a, 0x07, 0, 0},
+	{0x380b, 0x98, 0, 0}, {0x380c, 0x0b, 0, 0}, {0x380d, 0x1c, 0, 0},
+	{0x380e, 0x07, 0, 0}, {0x380f, 0xb0, 0, 0}, {0x3810, 0x00, 0, 0},
+	{0x3811, 0x10, 0, 0}, {0x3812, 0x00, 0, 0}, {0x3813, 0x04, 0, 0},
+	{0x3618, 0x04, 0, 0}, {0x3612, 0x29, 0, 0}, {0x3708, 0x21, 0, 0},
+	{0x3709, 0x12, 0, 0}, {0x370c, 0x00, 0, 0}, {0x3a02, 0x03, 0, 0},
+	{0x3a03, 0xd8, 0, 0}, {0x3a08, 0x01, 0, 0}, {0x3a09, 0x27, 0, 0},
+	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+	{0x4001, 0x02, 0, 0}, {0x4004, 0x06, 0, 0}, {0x4713, 0x03, 0, 0},
+	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+	{0x3824, 0x02, 0, 0}, {0x5001, 0x83, 0, 70},
+};
+
+/* power-on sensor init reg table */
+static const struct ov5640_mode_info ov5640_mode_init_data = {
+	0, SUBSAMPLING, 640, 480, ov5640_init_setting_30fps_VGA,
+	ARRAY_SIZE(ov5640_init_setting_30fps_VGA),
+};
+
+static const struct ov5640_mode_info
+ov5640_mode_data[OV5640_NUM_FRAMERATES][OV5640_NUM_MODES] = {
+	{
+		{OV5640_MODE_QCIF_176_144, SUBSAMPLING, 176, 144,
+		 ov5640_setting_15fps_QCIF_176_144,
+		 ARRAY_SIZE(ov5640_setting_15fps_QCIF_176_144)},
+		{OV5640_MODE_QVGA_320_240, SUBSAMPLING, 320,  240,
+		 ov5640_setting_15fps_QVGA_320_240,
+		 ARRAY_SIZE(ov5640_setting_15fps_QVGA_320_240)},
+		{OV5640_MODE_VGA_640_480, SUBSAMPLING, 640,  480,
+		 ov5640_setting_15fps_VGA_640_480,
+		 ARRAY_SIZE(ov5640_setting_15fps_VGA_640_480)},
+		{OV5640_MODE_NTSC_720_480, SUBSAMPLING, 720, 480,
+		 ov5640_setting_15fps_NTSC_720_480,
+		 ARRAY_SIZE(ov5640_setting_15fps_NTSC_720_480)},
+		{OV5640_MODE_PAL_720_576, SUBSAMPLING, 720, 576,
+		 ov5640_setting_15fps_PAL_720_576,
+		 ARRAY_SIZE(ov5640_setting_15fps_PAL_720_576)},
+		{OV5640_MODE_XGA_1024_768, SUBSAMPLING, 1024, 768,
+		 ov5640_setting_15fps_XGA_1024_768,
+		 ARRAY_SIZE(ov5640_setting_15fps_XGA_1024_768)},
+		{OV5640_MODE_720P_1280_720, SUBSAMPLING, 1280, 720,
+		 ov5640_setting_15fps_720P_1280_720,
+		 ARRAY_SIZE(ov5640_setting_15fps_720P_1280_720)},
+		{OV5640_MODE_1080P_1920_1080, SCALING, 1920, 1080,
+		 ov5640_setting_15fps_1080P_1920_1080,
+		 ARRAY_SIZE(ov5640_setting_15fps_1080P_1920_1080)},
+		{OV5640_MODE_QSXGA_2592_1944, SCALING, 2592, 1944,
+		 ov5640_setting_15fps_QSXGA_2592_1944,
+		 ARRAY_SIZE(ov5640_setting_15fps_QSXGA_2592_1944)},
+	}, {
+		{OV5640_MODE_QCIF_176_144, SUBSAMPLING, 176, 144,
+		 ov5640_setting_30fps_QCIF_176_144,
+		 ARRAY_SIZE(ov5640_setting_30fps_QCIF_176_144)},
+		{OV5640_MODE_QVGA_320_240, SUBSAMPLING, 320,  240,
+		 ov5640_setting_30fps_QVGA_320_240,
+		 ARRAY_SIZE(ov5640_setting_30fps_QVGA_320_240)},
+		{OV5640_MODE_VGA_640_480, SUBSAMPLING, 640,  480,
+		 ov5640_setting_30fps_VGA_640_480,
+		 ARRAY_SIZE(ov5640_setting_30fps_VGA_640_480)},
+		{OV5640_MODE_NTSC_720_480, SUBSAMPLING, 720, 480,
+		 ov5640_setting_30fps_NTSC_720_480,
+		 ARRAY_SIZE(ov5640_setting_30fps_NTSC_720_480)},
+		{OV5640_MODE_PAL_720_576, SUBSAMPLING, 720, 576,
+		 ov5640_setting_30fps_PAL_720_576,
+		 ARRAY_SIZE(ov5640_setting_30fps_PAL_720_576)},
+		{OV5640_MODE_XGA_1024_768, SUBSAMPLING, 1024, 768,
+		 ov5640_setting_30fps_XGA_1024_768,
+		 ARRAY_SIZE(ov5640_setting_30fps_XGA_1024_768)},
+		{OV5640_MODE_720P_1280_720, SUBSAMPLING, 1280, 720,
+		 ov5640_setting_30fps_720P_1280_720,
+		 ARRAY_SIZE(ov5640_setting_30fps_720P_1280_720)},
+		{OV5640_MODE_1080P_1920_1080, SCALING, 1920, 1080,
+		 ov5640_setting_30fps_1080P_1920_1080,
+		 ARRAY_SIZE(ov5640_setting_30fps_1080P_1920_1080)},
+		{OV5640_MODE_QSXGA_2592_1944, -1, 0, 0, NULL, 0},
+	},
+};
+
+static int ov5640_init_slave_id(struct ov5640_dev *sensor)
+{
+	struct i2c_client *client = sensor->i2c_client;
+	struct i2c_msg msg;
+	u8 buf[3];
+	int ret;
+
+	if (client->addr == OV5640_DEFAULT_SLAVE_ID)
+		return 0;
+
+	buf[0] = OV5640_REG_SLAVE_ID >> 8;
+	buf[1] = OV5640_REG_SLAVE_ID & 0xff;
+	buf[2] = client->addr << 1;
+
+	msg.addr = OV5640_DEFAULT_SLAVE_ID;
+	msg.flags = 0;
+	msg.buf = buf;
+	msg.len = sizeof(buf);
+
+	ret = i2c_transfer(client->adapter, &msg, 1);
+	if (ret < 0) {
+		dev_err(&client->dev, "%s: failed with %d\n", __func__, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ov5640_write_reg(struct ov5640_dev *sensor, u16 reg, u8 val)
+{
+	struct i2c_client *client = sensor->i2c_client;
+	struct i2c_msg msg;
+	u8 buf[3];
+	int ret;
+
+	buf[0] = reg >> 8;
+	buf[1] = reg & 0xff;
+	buf[2] = val;
+
+	msg.addr = client->addr;
+	msg.flags = client->flags;
+	msg.buf = buf;
+	msg.len = sizeof(buf);
+
+	ret = i2c_transfer(client->adapter, &msg, 1);
+	if (ret < 0) {
+		v4l2_err(&sensor->sd, "%s: error: reg=%x, val=%x\n",
+			__func__, reg, val);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int ov5640_read_reg(struct ov5640_dev *sensor, u16 reg, u8 *val)
+{
+	struct i2c_client *client = sensor->i2c_client;
+	struct i2c_msg msg[2];
+	u8 buf[2];
+	int ret;
+
+	buf[0] = reg >> 8;
+	buf[1] = reg & 0xff;
+
+	msg[0].addr = client->addr;
+	msg[0].flags = client->flags;
+	msg[0].buf = buf;
+	msg[0].len = sizeof(buf);
+
+	msg[1].addr = client->addr;
+	msg[1].flags = client->flags | I2C_M_RD;
+	msg[1].buf = buf;
+	msg[1].len = 1;
+
+	ret = i2c_transfer(client->adapter, msg, 2);
+	if (ret < 0)
+		return ret;
+
+	*val = buf[0];
+	return 0;
+}
+
+static int ov5640_read_reg16(struct ov5640_dev *sensor, u16 reg, u16 *val)
+{
+	u8 hi, lo;
+
+	ov5640_read_reg(sensor, reg, &hi);
+	ov5640_read_reg(sensor, reg+1, &lo);
+
+	*val = ((u16)hi << 8) | (u16)lo;
+	return 0;
+}
+
+static int ov5640_write_reg16(struct ov5640_dev *sensor, u16 reg, u16 val)
+{
+	ov5640_write_reg(sensor, reg, val >> 8);
+	ov5640_write_reg(sensor, reg + 1, val & 0xff);
+	return 0;
+}
+
+static int ov5640_mod_reg(struct ov5640_dev *sensor, u16 reg,
+			  u8 mask, u8 val)
+{
+	u8 readval;
+
+	ov5640_read_reg(sensor, reg, &readval);
+
+	readval &= ~mask;
+	val &= mask;
+	val |= readval;
+
+	ov5640_write_reg(sensor, reg, val);
+	return 0;
+}
+
+/* download ov5640 settings to sensor through i2c */
+static int ov5640_load_regs(struct ov5640_dev *sensor,
+			    const struct ov5640_mode_info *mode)
+{
+	const struct reg_value *regs = mode->reg_data;
+	unsigned int i;
+	u32 delay_ms;
+	u16 reg_addr;
+	u8 mask, val;
+
+	for (i = 0; i < mode->reg_data_size; ++i, ++regs) {
+		delay_ms = regs->delay_ms;
+		reg_addr = regs->reg_addr;
+		val = regs->val;
+		mask = regs->mask;
+
+		if (mask)
+			ov5640_mod_reg(sensor, reg_addr, mask, val);
+		else
+			ov5640_write_reg(sensor, reg_addr, val);
+		if (delay_ms)
+			usleep_range(1000*delay_ms, 1000*delay_ms+100);
+	}
+
+	return 0;
+}
+
+static int ov5640_set_hue(struct ov5640_dev *sensor, int value)
+{
+	if (value) {
+		ov5640_mod_reg(sensor, OV5640_REG_SDE_CTRL0, BIT(0), BIT(0));
+		ov5640_write_reg16(sensor, OV5640_REG_SDE_CTRL1, value);
+	} else {
+		ov5640_mod_reg(sensor, OV5640_REG_SDE_CTRL0, BIT(0), 0);
+	}
+
+	return 0;
+}
+
+static int ov5640_set_contrast(struct ov5640_dev *sensor, int value)
+{
+	if (value) {
+		ov5640_mod_reg(sensor, OV5640_REG_SDE_CTRL0, BIT(2), BIT(2));
+		ov5640_write_reg(sensor, OV5640_REG_SDE_CTRL5, value & 0xff);
+	} else {
+		ov5640_mod_reg(sensor, OV5640_REG_SDE_CTRL0, BIT(2), 0);
+	}
+
+	return 0;
+}
+
+static int ov5640_set_saturation(struct ov5640_dev *sensor, int value)
+{
+	if (value) {
+		ov5640_mod_reg(sensor, OV5640_REG_SDE_CTRL0, BIT(1), BIT(1));
+		ov5640_write_reg(sensor, OV5640_REG_SDE_CTRL3, value & 0xff);
+		ov5640_write_reg(sensor, OV5640_REG_SDE_CTRL4, value & 0xff);
+	} else {
+		ov5640_mod_reg(sensor, OV5640_REG_SDE_CTRL0, BIT(1), 0);
+	}
+
+	return 0;
+}
+
+static int ov5640_set_white_balance(struct ov5640_dev *sensor, int awb)
+{
+	ov5640_mod_reg(sensor, OV5640_REG_AWB_MANUAL_CTRL,
+		       BIT(0), awb ? 0 : 1);
+
+	if (!awb) {
+		u16 red = (u16)sensor->ctrls.red_balance->val;
+		u16 blue = (u16)sensor->ctrls.blue_balance->val;
+
+		ov5640_write_reg16(sensor, OV5640_REG_AWB_R_GAIN, red);
+		ov5640_write_reg16(sensor, OV5640_REG_AWB_B_GAIN, blue);
+	}
+
+	return 0;
+}
+
+static int ov5640_set_exposure(struct ov5640_dev *sensor, int exp)
+{
+	struct ov5640_ctrls *ctrls = &sensor->ctrls;
+	bool auto_exposure = (exp == V4L2_EXPOSURE_AUTO);
+
+	if (ctrls->auto_exp->is_new) {
+		ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
+			       BIT(0), auto_exposure ? 0 : BIT(0));
+	}
+
+	if (!auto_exposure && ctrls->exposure->is_new) {
+		u16 max_exp;
+
+		ov5640_read_reg16(sensor, OV5640_REG_AEC_PK_VTS, &max_exp);
+		if (ctrls->exposure->val < max_exp) {
+			u32 exposure = ctrls->exposure->val << 4;
+
+			ov5640_write_reg(sensor, OV5640_REG_AEC_PK_EXPOSURE_LO,
+					 exposure & 0xff);
+			ov5640_write_reg(sensor, OV5640_REG_AEC_PK_EXPOSURE_MED,
+					 (exposure >> 8) & 0xff);
+			ov5640_write_reg(sensor, OV5640_REG_AEC_PK_EXPOSURE_HI,
+					 (exposure >> 16) & 0x0f);
+		}
+	}
+
+	return 0;
+}
+
+/* read exposure, in number of line periods */
+static int ov5640_get_exposure(struct ov5640_dev *sensor)
+{
+	u8 temp;
+	int exp;
+
+	ov5640_read_reg(sensor, OV5640_REG_AEC_PK_EXPOSURE_HI, &temp);
+	exp = ((int)temp & 0x0f) << 16;
+	ov5640_read_reg(sensor, OV5640_REG_AEC_PK_EXPOSURE_MED, &temp);
+	exp |= ((int)temp << 8);
+	ov5640_read_reg(sensor, OV5640_REG_AEC_PK_EXPOSURE_LO, &temp);
+	exp |= (int)temp;
+
+	return exp >> 4;
+}
+
+static int ov5640_set_gain(struct ov5640_dev *sensor, int auto_gain)
+{
+	struct ov5640_ctrls *ctrls = &sensor->ctrls;
+
+	if (ctrls->auto_gain->is_new) {
+		ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
+			       BIT(1), ctrls->auto_gain->val ? 0 : BIT(1));
+	}
+
+	if (!auto_gain && ctrls->gain->is_new) {
+		u16 gain = (u16)ctrls->gain->val;
+
+		ov5640_write_reg16(sensor, OV5640_REG_AEC_PK_REAL_GAIN,
+				   gain & 0x3ff);
+	}
+
+	return 0;
+}
+
+static int ov5640_get_gain(struct ov5640_dev *sensor)
+{
+	u16 gain;
+
+	ov5640_read_reg16(sensor, OV5640_REG_AEC_PK_REAL_GAIN, &gain);
+
+	return gain & 0x3ff;
+}
+
+static int ov5640_set_agc_aec(struct ov5640_dev *sensor, bool on)
+{
+	ov5640_mod_reg(sensor, OV5640_REG_AEC_PK_MANUAL,
+		       0x3, on ? 0 : 0x3);
+	return 0;
+}
+
+static int ov5640_set_test_pattern(struct ov5640_dev *sensor, int value)
+{
+	ov5640_mod_reg(sensor, OV5640_REG_PRE_ISP_TEST_SET1,
+		       0xa4, value ? 0xa4 : 0);
+	return 0;
+}
+
+static int ov5640_set_stream(struct ov5640_dev *sensor, bool on)
+{
+	ov5640_mod_reg(sensor, OV5640_REG_MIPI_CTRL00, BIT(5),
+		       on ? 0 : BIT(5));
+	ov5640_write_reg(sensor, OV5640_REG_PAD_OUTPUT00,
+			 on ? 0x00 : 0x70);
+	ov5640_write_reg(sensor, OV5640_REG_FRAME_CTRL01,
+			 on ? 0x00 : 0x0f);
+	return 0;
+}
+
+static int ov5640_get_sysclk(struct ov5640_dev *sensor)
+{
+	/* calculate sysclk */
+	u32 xvclk = sensor->xclk_freq / 10000;
+	u32 multiplier, prediv, VCO, sysdiv, pll_rdiv;
+	u32 sclk_rdiv_map[] = {1, 2, 4, 8};
+	u32 bit_div2x = 1, sclk_rdiv, sysclk;
+	u8 temp1, temp2;
+
+	ov5640_read_reg(sensor, OV5640_REG_SC_PLL_CTRL0, &temp1);
+	temp2 = temp1 & 0x0f;
+	if (temp2 == 8 || temp2 == 10)
+		bit_div2x = temp2 / 2;
+
+	ov5640_read_reg(sensor, OV5640_REG_SC_PLL_CTRL1, &temp1);
+	sysdiv = temp1 >> 4;
+	if (sysdiv == 0)
+		sysdiv = 16;
+
+	ov5640_read_reg(sensor, OV5640_REG_SC_PLL_CTRL2, &temp1);
+	multiplier = temp1;
+
+	ov5640_read_reg(sensor, OV5640_REG_SC_PLL_CTRL3, &temp1);
+	prediv = temp1 & 0x0f;
+	pll_rdiv = ((temp1 >> 4) & 0x01) + 1;
+
+	ov5640_read_reg(sensor, OV5640_REG_SYS_ROOT_DIVIDER, &temp1);
+	temp2 = temp1 & 0x03;
+	sclk_rdiv = sclk_rdiv_map[temp2];
+
+	VCO = xvclk * multiplier / prediv;
+
+	sysclk = VCO / sysdiv / pll_rdiv * 2 / bit_div2x / sclk_rdiv;
+
+	return sysclk;
+}
+
+static int ov5640_set_night_mode(struct ov5640_dev *sensor)
+{
+	/* disable night mode: clear bit 2 of AEC CTRL00 */
+	u8 mode;
+
+	ov5640_read_reg(sensor, OV5640_REG_AEC_CTRL00, &mode);
+	mode &= 0xfb;
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL00, mode);
+	return 0;
+}
+
+static int ov5640_get_hts(struct ov5640_dev *sensor)
+{
+	/* read HTS from register settings */
+	u16 hts;
+
+	ov5640_read_reg16(sensor, OV5640_REG_TIMING_HTS, &hts);
+	return hts;
+}
+
+static int ov5640_get_vts(struct ov5640_dev *sensor)
+{
+	u16 vts;
+
+	ov5640_read_reg16(sensor, OV5640_REG_TIMING_VTS, &vts);
+	return vts;
+}
+
+static int ov5640_set_vts(struct ov5640_dev *sensor, int vts)
+{
+	ov5640_write_reg16(sensor, OV5640_REG_TIMING_VTS, vts);
+	return 0;
+}
+
+static int ov5640_get_light_freq(struct ov5640_dev *sensor)
+{
+	/* get the light frequency (50/60 Hz) used by the banding filter */
+	u8 temp, temp1;
+	int light_freq = 0;
+
+	ov5640_read_reg(sensor, OV5640_REG_HZ5060_CTRL01, &temp);
+
+	if (temp & 0x80) {
+		/* manual */
+		ov5640_read_reg(sensor, OV5640_REG_HZ5060_CTRL00, &temp1);
+		if (temp1 & 0x04) {
+			/* 50Hz */
+			light_freq = 50;
+		} else {
+			/* 60Hz */
+			light_freq = 60;
+		}
+	} else {
+		/* auto */
+		ov5640_read_reg(sensor, OV5640_REG_SIGMADELTA_CTRL0C,
+				&temp1);
+		if (temp1 & 0x01) {
+			/* 50Hz */
+			light_freq = 50;
+		} else {
+			light_freq = 60; /* 60Hz */
+		}
+	}
+
+	return light_freq;
+}
+
+static int ov5640_set_bandingfilter(struct ov5640_dev *sensor)
+{
+	u32 band_step60, max_band60, band_step50, max_band50, prev_vts;
+	int ret;
+
+	/* read preview PCLK */
+	ret = ov5640_get_sysclk(sensor);
+	if (ret < 0)
+		return ret;
+	sensor->prev_sysclk = ret;
+	/* read preview HTS */
+	ret = ov5640_get_hts(sensor);
+	if (ret < 0)
+		return ret;
+	sensor->prev_hts = ret;
+
+	/* read preview VTS */
+	ret = ov5640_get_vts(sensor);
+	if (ret < 0)
+		return ret;
+	prev_vts = ret;
+
+	/* calculate banding filter */
+	/* 60Hz */
+	band_step60 = sensor->prev_sysclk * 100 / sensor->prev_hts * 100/120;
+	ov5640_write_reg16(sensor, OV5640_REG_AEC_B60_STEP, band_step60);
+
+	max_band60 = (int)((prev_vts-4)/band_step60);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL0D, max_band60);
+
+	/* 50Hz */
+	band_step50 = sensor->prev_sysclk * 100 / sensor->prev_hts;
+	ov5640_write_reg16(sensor, OV5640_REG_AEC_B50_STEP, band_step50);
+
+	max_band50 = (int)((prev_vts-4)/band_step50);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL0E, max_band50);
+
+	return 0;
+}
+
+static int ov5640_set_ae_target(struct ov5640_dev *sensor, int target)
+{
+	/* stable in high */
+	u32 fast_high, fast_low;
+
+	sensor->ae_low = target * 23 / 25;	/* 0.92 */
+	sensor->ae_high = target * 27 / 25;	/* 1.08 */
+
+	fast_high = sensor->ae_high << 1;
+	if (fast_high > 255)
+		fast_high = 255;
+
+	fast_low = sensor->ae_low >> 1;
+
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL0F, sensor->ae_high);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL10, sensor->ae_low);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL1B, sensor->ae_high);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL1E, sensor->ae_low);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL11, fast_high);
+	ov5640_write_reg(sensor, OV5640_REG_AEC_CTRL1F, fast_low);
+
+	return 0;
+}
+
+static int ov5640_binning_on(struct ov5640_dev *sensor)
+{
+	u8 temp;
+
+	ov5640_read_reg(sensor, OV5640_REG_TIMING_TC_REG21, &temp);
+	temp &= 0xfe;
+
+	return temp ? 1 : 0;
+}
+
+static int ov5640_set_virtual_channel(struct ov5640_dev *sensor)
+{
+	u8 temp, channel = OV5640_MIPI_VC;
+
+	ov5640_read_reg(sensor, OV5640_REG_DEBUG_MODE, &temp);
+	temp &= ~(3 << 6);
+	temp |= (channel << 6);
+	ov5640_write_reg(sensor, OV5640_REG_DEBUG_MODE, temp);
+
+	return 0;
+}
+
+static const struct ov5640_mode_info *
+ov5640_find_mode(struct ov5640_dev *sensor, enum ov5640_frame_rate fr,
+		 int width, int height, bool nearest)
+{
+	const struct ov5640_mode_info *mode = NULL;
+	int i;
+
+	for (i = OV5640_NUM_MODES - 1; i >= 0; i--) {
+		mode = &ov5640_mode_data[fr][i];
+
+		if (!mode->reg_data)
+			continue;
+
+		if ((nearest && mode->width <= width &&
+		     mode->height <= height) ||
+		    (!nearest && mode->width == width &&
+		     mode->height == height))
+			break;
+	}
+
+	if (nearest && i < 0)
+		mode = &ov5640_mode_data[fr][0];
+
+	return mode;
+}
+
+/*
+ * if the sensor changes between scaling and subsampling, go
+ * through the exposure calculation
+ */
+static int ov5640_change_mode_exposure_calc(
+	struct ov5640_dev *sensor, const struct ov5640_mode_info *mode)
+{
+	u32 prev_shutter, prev_gain16;
+	u32 cap_shutter, cap_gain16;
+	u32 cap_sysclk, cap_hts, cap_vts;
+	u32 light_freq, cap_bandfilt, cap_maxband;
+	u32 cap_gain16_shutter;
+	u8 average;
+	int ret;
+
+	if (mode->reg_data == NULL)
+		return -EINVAL;
+
+	/* turn off AE/AG */
+	ret = ov5640_set_agc_aec(sensor, false);
+	if (ret < 0)
+		return ret;
+
+	/* read preview shutter */
+	ret = ov5640_get_exposure(sensor);
+	if (ret < 0)
+		return ret;
+	prev_shutter = ret;
+	ret = ov5640_binning_on(sensor);
+	if (ret < 0)
+		return ret;
+	if (ret && mode->id != OV5640_MODE_720P_1280_720 &&
+	    mode->id != OV5640_MODE_1080P_1920_1080)
+		prev_shutter *= 2;
+
+	/* read preview gain */
+	ret = ov5640_get_gain(sensor);
+	if (ret < 0)
+		return ret;
+	prev_gain16 = ret;
+
+	/* get average */
+	ov5640_read_reg(sensor, OV5640_REG_AVG_READOUT, &average);
+
+	/* turn off night mode for capture */
+	ret = ov5640_set_night_mode(sensor);
+	if (ret < 0)
+		return ret;
+
+	/* Write capture setting */
+	ret = ov5640_load_regs(sensor, mode);
+	if (ret < 0)
+		return ret;
+
+	/* read capture VTS */
+	ret = ov5640_get_vts(sensor);
+	if (ret < 0)
+		return ret;
+	cap_vts = ret;
+	ret = ov5640_get_hts(sensor);
+	if (ret < 0)
+		return ret;
+	cap_hts = ret;
+	ret = ov5640_get_sysclk(sensor);
+	if (ret < 0)
+		return ret;
+	cap_sysclk = ret;
+
+	/* calculate capture banding filter */
+	ret = ov5640_get_light_freq(sensor);
+	if (ret < 0)
+		return ret;
+	light_freq = ret;
+
+	if (light_freq == 60) {
+		/* 60Hz */
+		cap_bandfilt = cap_sysclk * 100 / cap_hts * 100 / 120;
+	} else {
+		/* 50Hz */
+		cap_bandfilt = cap_sysclk * 100 / cap_hts;
+	}
+	cap_maxband = (int)((cap_vts - 4) / cap_bandfilt);
+
+	/* calculate capture shutter/gain16 */
+	if (average > sensor->ae_low && average < sensor->ae_high) {
+		/* in stable range */
+		cap_gain16_shutter =
+			prev_gain16 * prev_shutter *
+			cap_sysclk / sensor->prev_sysclk *
+			sensor->prev_hts / cap_hts *
+			sensor->ae_target / average;
+	} else {
+		cap_gain16_shutter =
+			prev_gain16 * prev_shutter *
+			cap_sysclk / sensor->prev_sysclk *
+			sensor->prev_hts / cap_hts;
+	}
+
+	/* gain to shutter */
+	if (cap_gain16_shutter < (cap_bandfilt * 16)) {
+		/* shutter < 1/100 */
+		cap_shutter = cap_gain16_shutter / 16;
+		if (cap_shutter < 1)
+			cap_shutter = 1;
+
+		cap_gain16 = cap_gain16_shutter / cap_shutter;
+		if (cap_gain16 < 16)
+			cap_gain16 = 16;
+	} else {
+		if (cap_gain16_shutter > (cap_bandfilt * cap_maxband * 16)) {
+			/* exposure reach max */
+			cap_shutter = cap_bandfilt * cap_maxband;
+			cap_gain16 = cap_gain16_shutter / cap_shutter;
+		} else {
+			/* 1/100 < (cap_shutter = n/100) <= max */
+			cap_shutter =
+				((int)(cap_gain16_shutter / 16 / cap_bandfilt))
+				* cap_bandfilt;
+			cap_gain16 = cap_gain16_shutter / cap_shutter;
+		}
+	}
+
+	/* write capture gain */
+	ret = ov5640_set_gain(sensor, cap_gain16);
+	if (ret < 0)
+		return ret;
+
+	/* write capture shutter */
+	if (cap_shutter > (cap_vts - 4)) {
+		cap_vts = cap_shutter + 4;
+		ret = ov5640_set_vts(sensor, cap_vts);
+		if (ret < 0)
+			return ret;
+	}
+
+	return ov5640_set_exposure(sensor, cap_shutter);
+}
+
+/*
+ * if the sensor changes within scaling or within subsampling,
+ * change the mode directly
+ */
+static int ov5640_change_mode_direct(struct ov5640_dev *sensor,
+				     const struct ov5640_mode_info *mode)
+{
+	int ret;
+
+	if (mode->reg_data == NULL)
+		return -EINVAL;
+
+	/* turn off AE/AG */
+	ret = ov5640_set_agc_aec(sensor, false);
+	if (ret < 0)
+		return ret;
+
+	/* Write capture setting */
+	ret = ov5640_load_regs(sensor, mode);
+	if (ret < 0)
+		return ret;
+
+	return ov5640_set_agc_aec(sensor, true);
+}
+
+static int ov5640_change_mode(struct ov5640_dev *sensor,
+			      enum ov5640_frame_rate frame_rate,
+			      const struct ov5640_mode_info *mode,
+			      const struct ov5640_mode_info *orig_mode)
+{
+	enum ov5640_downsize_mode dn_mode, orig_dn_mode;
+	int ret;
+
+	dn_mode = mode->dn_mode;
+	orig_dn_mode = orig_mode->dn_mode;
+
+	if ((dn_mode == SUBSAMPLING && orig_dn_mode == SCALING) ||
+	    (dn_mode == SCALING && orig_dn_mode == SUBSAMPLING)) {
+		/*
+		 * changing between subsampling and scaling:
+		 * go through the exposure calculation
+		 */
+		ret = ov5640_change_mode_exposure_calc(sensor, mode);
+	} else {
+		/*
+		 * changing within subsampling or within scaling:
+		 * write the mode register settings directly
+		 */
+		ret = ov5640_change_mode_direct(sensor, mode);
+	}
+
+	if (ret < 0)
+		return ret;
+
+	ret = ov5640_set_ae_target(sensor, sensor->ae_target);
+	if (ret < 0)
+		return ret;
+	ret = ov5640_get_light_freq(sensor);
+	if (ret < 0)
+		return ret;
+	ret = ov5640_set_bandingfilter(sensor);
+	if (ret < 0)
+		return ret;
+	ret = ov5640_set_virtual_channel(sensor);
+	if (ret < 0)
+		return ret;
+
+	/* restore controls */
+	v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+
+	sensor->current_mode = mode;
+	sensor->current_fr = frame_rate;
+
+	return 0;
+}
+
+/* restore the last set video mode after chip power-on */
+static int ov5640_restore_mode(struct ov5640_dev *sensor)
+{
+	int ret;
+
+	/* first load the initial register values */
+	ret = ov5640_load_regs(sensor, &ov5640_mode_init_data);
+	if (ret < 0)
+		return ret;
+
+	/* now restore the last capture mode */
+	return ov5640_change_mode(sensor, sensor->current_fr,
+				  sensor->current_mode,
+				  &ov5640_mode_init_data);
+}
+
+static void ov5640_power(struct ov5640_dev *sensor, bool enable)
+{
+	if (sensor->pwdn_gpio)
+		gpiod_set_value(sensor->pwdn_gpio, enable ? 0 : 1);
+}
+
+static void ov5640_reset(struct ov5640_dev *sensor)
+{
+	if (!sensor->reset_gpio)
+		return;
+
+	gpiod_set_value(sensor->reset_gpio, 0);
+
+	/* camera power cycle */
+	ov5640_power(sensor, false);
+	usleep_range(5000, 10000);
+	ov5640_power(sensor, true);
+	usleep_range(5000, 10000);
+
+	gpiod_set_value(sensor->reset_gpio, 1);
+	usleep_range(1000, 2000);
+
+	gpiod_set_value(sensor->reset_gpio, 0);
+	usleep_range(5000, 10000);
+}
+
+static int ov5640_set_power(struct ov5640_dev *sensor, bool on)
+{
+	int ret;
+
+	if (on) {
+		ret = clk_prepare_enable(sensor->xclk);
+		if (ret)
+			return ret;
+
+		ret = regulator_bulk_enable(OV5640_NUM_SUPPLIES,
+					    sensor->supplies);
+		if (ret)
+			goto xclk_off;
+
+		ov5640_reset(sensor);
+		ov5640_power(sensor, true);
+
+		ret = ov5640_init_slave_id(sensor);
+		if (ret)
+			goto power_off;
+
+		ret = ov5640_restore_mode(sensor);
+		if (ret)
+			goto power_off;
+
+		/*
+		 * start streaming briefly followed by stream off in
+		 * order to coax the clock lane into LP-11 state.
+		 */
+		ov5640_set_stream(sensor, true);
+		usleep_range(1000, 2000);
+		ov5640_set_stream(sensor, false);
+	} else {
+		ov5640_power(sensor, false);
+
+		regulator_bulk_disable(OV5640_NUM_SUPPLIES,
+				       sensor->supplies);
+
+		clk_disable_unprepare(sensor->xclk);
+	}
+
+	return 0;
+
+	/* don't leave the clock and regulators enabled after a failure */
+power_off:
+	ov5640_power(sensor, false);
+	regulator_bulk_disable(OV5640_NUM_SUPPLIES, sensor->supplies);
+xclk_off:
+	clk_disable_unprepare(sensor->xclk);
+	return ret;
+}
+
+/* --------------- Subdev Operations --------------- */
+
+static int ov5640_s_power(struct v4l2_subdev *sd, int on)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	int ret = 0;
+
+	mutex_lock(&sensor->power_lock);
+
+	/*
+	 * If the power count is modified from 0 to != 0 or from != 0 to 0,
+	 * update the power state.
+	 */
+	if (sensor->power_count == !on) {
+		ret = ov5640_set_power(sensor, !!on);
+		if (ret)
+			goto out;
+	}
+
+	/* Update the power count. */
+	sensor->power_count += on ? 1 : -1;
+	WARN_ON(sensor->power_count < 0);
+out:
+	mutex_unlock(&sensor->power_lock);
+	return ret;
+}
+
+static int ov5640_g_parm(struct v4l2_subdev *sd, struct v4l2_streamparm *a)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	struct v4l2_captureparm *cparm = &a->parm.capture;
+
+	if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	/* This is the only case currently handled. */
+	memset(a, 0, sizeof(*a));
+	a->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	cparm->capability = sensor->streamcap.capability;
+	cparm->timeperframe = sensor->streamcap.timeperframe;
+	cparm->capturemode = sensor->streamcap.capturemode;
+
+	return 0;
+}
+
+static int ov5640_s_parm(struct v4l2_subdev *sd, struct v4l2_streamparm *a)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	struct v4l2_fract *tpf = &a->parm.capture.timeperframe;
+	enum ov5640_frame_rate frame_rate;
+	u32 tgt_fps; /* target frames per second */
+	int ret;
+
+	if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	/* Check that the new frame rate is allowed. */
+	if (tpf->numerator == 0 || tpf->denominator == 0) {
+		tpf->denominator = ov5640_framerates[OV5640_30_FPS];
+		tpf->numerator = 1;
+	}
+
+	tgt_fps = DIV_ROUND_CLOSEST(tpf->denominator, tpf->numerator);
+
+	if (tgt_fps > ov5640_framerates[OV5640_30_FPS]) {
+		tpf->denominator = ov5640_framerates[OV5640_30_FPS];
+		tpf->numerator = 1;
+	} else if (tgt_fps < ov5640_framerates[OV5640_15_FPS]) {
+		tpf->denominator = ov5640_framerates[OV5640_15_FPS];
+		tpf->numerator = 1;
+	}
+
+	/* Actual frame rate we use */
+	tgt_fps = DIV_ROUND_CLOSEST(tpf->denominator, tpf->numerator);
+
+	if (tgt_fps == 15)
+		frame_rate = OV5640_15_FPS;
+	else if (tgt_fps == 30)
+		frame_rate = OV5640_30_FPS;
+	else
+		return -EINVAL;
+
+	ret = ov5640_change_mode(sensor, frame_rate,
+				 sensor->current_mode,
+				 sensor->current_mode);
+	if (ret < 0)
+		return ret;
+
+	sensor->streamcap.timeperframe = *tpf;
+
+	return 0;
+}
+
+static int ov5640_get_fmt(struct v4l2_subdev *sd,
+			  struct v4l2_subdev_pad_config *cfg,
+			  struct v4l2_subdev_format *format)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	struct v4l2_mbus_framefmt *fmt;
+
+	if (format->pad != 0)
+		return -EINVAL;
+
+	if (format->which == V4L2_SUBDEV_FORMAT_TRY)
+		fmt = v4l2_subdev_get_try_format(&sensor->sd, cfg,
+						 format->pad);
+	else
+		fmt = &sensor->fmt;
+
+	format->format = *fmt;
+
+	return 0;
+}
+
+static int ov5640_try_fmt_internal(struct v4l2_subdev *sd,
+				   struct v4l2_mbus_framefmt *fmt,
+				   enum ov5640_frame_rate fr,
+				   const struct ov5640_mode_info **new_mode)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	const struct ov5640_mode_info *mode;
+
+	mode = ov5640_find_mode(sensor, fr, fmt->width, fmt->height, true);
+	if (!mode)
+		return -EINVAL;
+
+	fmt->width = mode->width;
+	fmt->height = mode->height;
+	fmt->code = sensor->fmt.code;
+
+	if (new_mode)
+		*new_mode = mode;
+	return 0;
+}
+
+static int ov5640_set_fmt(struct v4l2_subdev *sd,
+			  struct v4l2_subdev_pad_config *cfg,
+			  struct v4l2_subdev_format *format)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	const struct ov5640_mode_info *new_mode;
+	int ret;
+
+	if (format->pad != 0)
+		return -EINVAL;
+
+	ret = ov5640_try_fmt_internal(sd, &format->format,
+				      sensor->current_fr, &new_mode);
+	if (ret)
+		return ret;
+
+	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+		struct v4l2_mbus_framefmt *fmt =
+			v4l2_subdev_get_try_format(sd, cfg, 0);
+
+		*fmt = format->format;
+		return 0;
+	}
+
+	ret = ov5640_change_mode(sensor, sensor->current_fr,
+				 new_mode, sensor->current_mode);
+	if (ret < 0)
+		return ret;
+
+	sensor->fmt = format->format;
+	return 0;
+}
+
+
+/*
+ * Sensor Controls.
+ */
+
+static int ov5640_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct v4l2_subdev *sd = ctrl_to_sd(ctrl);
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+
+	switch (ctrl->id) {
+	case V4L2_CID_AUTOGAIN:
+		if (!ctrl->val)
+			return 0;
+		sensor->ctrls.gain->val = ov5640_get_gain(sensor);
+		break;
+	case V4L2_CID_EXPOSURE_AUTO:
+		if (ctrl->val == V4L2_EXPOSURE_MANUAL)
+			return 0;
+		sensor->ctrls.exposure->val = ov5640_get_exposure(sensor);
+		break;
+	}
+
+	return 0;
+}
+
+static int ov5640_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct v4l2_subdev *sd = ctrl_to_sd(ctrl);
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	int ret = -EINVAL;
+
+	switch (ctrl->id) {
+	case V4L2_CID_AUTOGAIN:
+		ret = ov5640_set_gain(sensor, ctrl->val);
+		break;
+	case V4L2_CID_EXPOSURE_AUTO:
+		ret = ov5640_set_exposure(sensor, ctrl->val);
+		break;
+	case V4L2_CID_AUTO_WHITE_BALANCE:
+		ret = ov5640_set_white_balance(sensor, ctrl->val);
+		break;
+	case V4L2_CID_HUE:
+		ret = ov5640_set_hue(sensor, ctrl->val);
+		break;
+	case V4L2_CID_CONTRAST:
+		ret = ov5640_set_contrast(sensor, ctrl->val);
+		break;
+	case V4L2_CID_SATURATION:
+		ret = ov5640_set_saturation(sensor, ctrl->val);
+		break;
+	case V4L2_CID_TEST_PATTERN:
+		ret = ov5640_set_test_pattern(sensor, ctrl->val);
+		break;
+	}
+
+	return ret;
+}
+
+static const struct v4l2_ctrl_ops ov5640_ctrl_ops = {
+	.g_volatile_ctrl = ov5640_g_volatile_ctrl,
+	.s_ctrl = ov5640_s_ctrl,
+};
+
+static const char * const test_pattern_menu[] = {
+	"Disabled",
+	"Color bars",
+};
+
+static int ov5640_init_controls(struct ov5640_dev *sensor)
+{
+	const struct v4l2_ctrl_ops *ops = &ov5640_ctrl_ops;
+	struct ov5640_ctrls *ctrls = &sensor->ctrls;
+	struct v4l2_ctrl_handler *hdl = &ctrls->handler;
+	int ret;
+
+	v4l2_ctrl_handler_init(hdl, 32);
+
+	/* Auto/manual white balance */
+	ctrls->auto_wb = v4l2_ctrl_new_std(hdl, ops,
+					   V4L2_CID_AUTO_WHITE_BALANCE,
+					   0, 1, 1, 1);
+	ctrls->blue_balance = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_BLUE_BALANCE,
+						0, 4095, 1, 0);
+	ctrls->red_balance = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_RED_BALANCE,
+					       0, 4095, 1, 0);
+	/* Auto/manual exposure */
+	ctrls->auto_exp = v4l2_ctrl_new_std_menu(hdl, ops,
+						 V4L2_CID_EXPOSURE_AUTO,
+						 V4L2_EXPOSURE_MANUAL, 0,
+						 V4L2_EXPOSURE_AUTO);
+	ctrls->exposure = v4l2_ctrl_new_std(hdl, ops,
+					    V4L2_CID_EXPOSURE_ABSOLUTE,
+					    0, 65535, 1, 0);
+	/* Auto/manual gain */
+	ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN,
+					     0, 1, 1, 1);
+	ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN,
+					0, 1023, 1, 0);
+
+	ctrls->saturation = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_SATURATION,
+					      0, 255, 1, 64);
+	ctrls->hue = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_HUE,
+				       0, 359, 1, 0);
+	ctrls->contrast = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_CONTRAST,
+					    0, 255, 1, 0);
+	ctrls->test_pattern =
+		v4l2_ctrl_new_std_menu_items(hdl, ops, V4L2_CID_TEST_PATTERN,
+					     ARRAY_SIZE(test_pattern_menu) - 1,
+					     0, 0, test_pattern_menu);
+
+	if (hdl->error) {
+		ret = hdl->error;
+		goto free_ctrls;
+	}
+
+	ctrls->gain->flags |= V4L2_CTRL_FLAG_VOLATILE;
+	ctrls->exposure->flags |= V4L2_CTRL_FLAG_VOLATILE;
+
+	v4l2_ctrl_auto_cluster(3, &ctrls->auto_wb, 0, false);
+	v4l2_ctrl_auto_cluster(2, &ctrls->auto_gain, 0, true);
+	v4l2_ctrl_auto_cluster(2, &ctrls->auto_exp, 1, true);
+
+	sensor->sd.ctrl_handler = hdl;
+	return 0;
+
+free_ctrls:
+	v4l2_ctrl_handler_free(hdl);
+	return ret;
+}
+
+static int ov5640_enum_frame_size(struct v4l2_subdev *sd,
+				  struct v4l2_subdev_pad_config *cfg,
+				  struct v4l2_subdev_frame_size_enum *fse)
+{
+	if (fse->pad != 0)
+		return -EINVAL;
+	if (fse->index >= OV5640_NUM_MODES)
+		return -EINVAL;
+
+	fse->min_width = fse->max_width =
+		ov5640_mode_data[0][fse->index].width;
+	fse->min_height = fse->max_height =
+		ov5640_mode_data[0][fse->index].height;
+
+	return 0;
+}
+
+static int ov5640_enum_frame_interval(
+	struct v4l2_subdev *sd,
+	struct v4l2_subdev_pad_config *cfg,
+	struct v4l2_subdev_frame_interval_enum *fie)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+	const struct ov5640_mode_info *mode;
+
+	if (fie->pad != 0)
+		return -EINVAL;
+	if (fie->index >= OV5640_NUM_FRAMERATES)
+		return -EINVAL;
+
+	mode = ov5640_find_mode(sensor, fie->index,
+				fie->width, fie->height, false);
+	if (!mode)
+		return -EINVAL;
+
+	fie->interval.numerator = 1;
+	fie->interval.denominator = ov5640_framerates[fie->index];
+
+	return 0;
+}
+
+static int ov5640_enum_mbus_code(struct v4l2_subdev *sd,
+				  struct v4l2_subdev_pad_config *cfg,
+				  struct v4l2_subdev_mbus_code_enum *code)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+
+	if (code->pad != 0)
+		return -EINVAL;
+	if (code->index != 0)
+		return -EINVAL;
+
+	code->code = sensor->fmt.code;
+
+	return 0;
+}
+
+static int ov5640_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+
+	return ov5640_set_stream(sensor, enable);
+}
+
+static const struct v4l2_subdev_core_ops ov5640_core_ops = {
+	.s_power = ov5640_s_power,
+};
+
+static const struct v4l2_subdev_video_ops ov5640_video_ops = {
+	.s_parm = ov5640_s_parm,
+	.g_parm = ov5640_g_parm,
+	.s_stream = ov5640_s_stream,
+};
+
+static const struct v4l2_subdev_pad_ops ov5640_pad_ops = {
+	.enum_mbus_code = ov5640_enum_mbus_code,
+	.get_fmt = ov5640_get_fmt,
+	.set_fmt = ov5640_set_fmt,
+	.enum_frame_size = ov5640_enum_frame_size,
+	.enum_frame_interval = ov5640_enum_frame_interval,
+};
+
+static const struct v4l2_subdev_ops ov5640_subdev_ops = {
+	.core = &ov5640_core_ops,
+	.video = &ov5640_video_ops,
+	.pad = &ov5640_pad_ops,
+};
+
+static int ov5640_get_regulators(struct ov5640_dev *sensor)
+{
+	int i;
+
+	for (i = 0; i < OV5640_NUM_SUPPLIES; i++)
+		sensor->supplies[i].supply = ov5640_supply_name[i];
+
+	return devm_regulator_bulk_get(&sensor->i2c_client->dev,
+				       OV5640_NUM_SUPPLIES,
+				       sensor->supplies);
+}
+
+static int ov5640_probe(struct i2c_client *client,
+			const struct i2c_device_id *id)
+{
+	struct device *dev = &client->dev;
+	struct device_node *endpoint;
+	struct ov5640_dev *sensor;
+	int ret;
+
+	sensor = devm_kzalloc(dev, sizeof(*sensor), GFP_KERNEL);
+	if (!sensor)
+		return -ENOMEM;
+
+	sensor->i2c_client = client;
+	sensor->fmt.code = MEDIA_BUS_FMT_UYVY8_2X8;
+	sensor->fmt.width = 640;
+	sensor->fmt.height = 480;
+	sensor->fmt.field = V4L2_FIELD_NONE;
+	sensor->streamcap.capability = V4L2_CAP_TIMEPERFRAME;
+	sensor->streamcap.capturemode = 0;
+	sensor->streamcap.timeperframe.denominator =
+		ov5640_framerates[OV5640_30_FPS];
+	sensor->streamcap.timeperframe.numerator = 1;
+	sensor->current_fr = OV5640_30_FPS;
+
+	sensor->current_mode =
+		&ov5640_mode_data[OV5640_30_FPS][OV5640_MODE_VGA_640_480];
+
+	sensor->ae_target = 52;
+
+	endpoint = of_graph_get_next_endpoint(client->dev.of_node, NULL);
+	if (!endpoint) {
+		dev_err(dev, "endpoint node not found\n");
+		return -EINVAL;
+	}
+
+	v4l2_of_parse_endpoint(endpoint, &sensor->ep);
+	of_node_put(endpoint);
+
+	if (sensor->ep.bus_type != V4L2_MBUS_CSI2) {
+		dev_err(dev, "invalid bus type, must be MIPI CSI2\n");
+		return -EINVAL;
+	}
+
+	/* get system clock (xclk) */
+	sensor->xclk = devm_clk_get(dev, "xclk");
+	if (IS_ERR(sensor->xclk)) {
+		dev_err(dev, "failed to get xclk\n");
+		return PTR_ERR(sensor->xclk);
+	}
+
+	sensor->xclk_freq = clk_get_rate(sensor->xclk);
+	if (sensor->xclk_freq < OV5640_XCLK_MIN ||
+	    sensor->xclk_freq > OV5640_XCLK_MAX) {
+		dev_err(dev, "xclk frequency out of range: %d Hz\n",
+			sensor->xclk_freq);
+		return -EINVAL;
+	}
+
+	/* request optional power down pin */
+	sensor->pwdn_gpio = devm_gpiod_get_optional(dev, "pwdn",
+						    GPIOD_OUT_HIGH);
+	/* request optional reset pin */
+	sensor->reset_gpio = devm_gpiod_get_optional(dev, "reset",
+						     GPIOD_OUT_HIGH);
+
+	v4l2_i2c_subdev_init(&sensor->sd, client, &ov5640_subdev_ops);
+
+	sensor->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+	sensor->pad.flags = MEDIA_PAD_FL_SOURCE;
+	sensor->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+	ret = media_entity_pads_init(&sensor->sd.entity, 1, &sensor->pad);
+	if (ret)
+		return ret;
+
+	ret = ov5640_get_regulators(sensor);
+	if (ret)
+		return ret;
+
+	mutex_init(&sensor->power_lock);
+
+	ret = ov5640_init_controls(sensor);
+	if (ret)
+		goto entity_cleanup;
+
+	ret = v4l2_async_register_subdev(&sensor->sd);
+	if (ret)
+		goto free_ctrls;
+
+	return 0;
+
+free_ctrls:
+	v4l2_ctrl_handler_free(&sensor->ctrls.handler);
+entity_cleanup:
+	mutex_destroy(&sensor->power_lock);
+	media_entity_cleanup(&sensor->sd.entity);
+	regulator_bulk_disable(OV5640_NUM_SUPPLIES, sensor->supplies);
+	return ret;
+}
+
+static int ov5640_remove(struct i2c_client *client)
+{
+	struct v4l2_subdev *sd = i2c_get_clientdata(client);
+	struct ov5640_dev *sensor = to_ov5640_dev(sd);
+
+	regulator_bulk_disable(OV5640_NUM_SUPPLIES, sensor->supplies);
+
+	v4l2_async_unregister_subdev(&sensor->sd);
+	mutex_destroy(&sensor->power_lock);
+	media_entity_cleanup(&sensor->sd.entity);
+	v4l2_ctrl_handler_free(&sensor->ctrls.handler);
+
+	return 0;
+}
+
+static const struct i2c_device_id ov5640_id[] = {
+	{"ov5640", 0},
+	{},
+};
+MODULE_DEVICE_TABLE(i2c, ov5640_id);
+
+static const struct of_device_id ov5640_dt_ids[] = {
+	{ .compatible = "ovti,ov5640" },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, ov5640_dt_ids);
+
+static struct i2c_driver ov5640_i2c_driver = {
+	.driver = {
+		.name  = "ov5640",
+		.of_match_table	= ov5640_dt_ids,
+	},
+	.id_table = ov5640_id,
+	.probe    = ov5640_probe,
+	.remove   = ov5640_remove,
+};
+
+module_i2c_driver(ov5640_i2c_driver);
+
+MODULE_DESCRIPTION("OV5640 MIPI Camera Subdev Driver");
+MODULE_LICENSE("GPL");
-- 
2.7.4

* [PATCH v4 25/36] ARM: imx_v6_v7_defconfig: Enable staging video4linux drivers
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (23 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 26/36] media: imx: add support for bayer formats Steve Longerbeam
                   ` (12 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Enable the i.MX v4l2 media staging driver. For video capture on i.MX,
the video multiplexer subdev is required. On the SabreAuto, the ADV7180
video decoder is required along with i2c-mux-gpio. The SabreLite and
SabreSD require the OV5640, and the SabreLite additionally requires PWM
clocks for the OV5640.

Increase max zoneorder to allow larger video buffer allocations.
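
For scale, assuming 4 KiB pages: FORCE_MAX_ZONEORDER=14 allows single
contiguous allocations of up to 2^13 pages (32 MiB), compared with 4 MiB
at the usual ARM default order of 11.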

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 arch/arm/configs/imx_v6_v7_defconfig | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
index cbe7faf..daeb164 100644
--- a/arch/arm/configs/imx_v6_v7_defconfig
+++ b/arch/arm/configs/imx_v6_v7_defconfig
@@ -51,6 +51,7 @@ CONFIG_PREEMPT_VOLUNTARY=y
 CONFIG_AEABI=y
 CONFIG_HIGHMEM=y
 CONFIG_CMA=y
+CONFIG_FORCE_MAX_ZONEORDER=14
 CONFIG_CMDLINE="noinitrd console=ttymxc0,115200"
 CONFIG_KEXEC=y
 CONFIG_CPU_FREQ=y
@@ -174,13 +175,13 @@ CONFIG_INPUT_MISC=y
 CONFIG_INPUT_MMA8450=y
 CONFIG_SERIO_SERPORT=m
 # CONFIG_LEGACY_PTYS is not set
-# CONFIG_DEVKMEM is not set
 CONFIG_SERIAL_IMX=y
 CONFIG_SERIAL_IMX_CONSOLE=y
 CONFIG_SERIAL_FSL_LPUART=y
 CONFIG_SERIAL_FSL_LPUART_CONSOLE=y
 # CONFIG_I2C_COMPAT is not set
 CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=y
 CONFIG_I2C_MUX_GPIO=y
 # CONFIG_I2C_HELPER_AUTO is not set
 CONFIG_I2C_ALGOPCF=m
@@ -194,11 +195,11 @@ CONFIG_GPIO_SYSFS=y
 CONFIG_GPIO_MC9S08DZ60=y
 CONFIG_GPIO_PCA953X=y
 CONFIG_GPIO_STMPE=y
-CONFIG_POWER_SUPPLY=y
 CONFIG_POWER_RESET=y
 CONFIG_POWER_RESET_IMX=y
 CONFIG_POWER_RESET_SYSCON=y
 CONFIG_POWER_RESET_SYSCON_POWEROFF=y
+CONFIG_POWER_SUPPLY=y
 CONFIG_SENSORS_GPIO_FAN=y
 CONFIG_SENSORS_IIO_HWMON=y
 CONFIG_THERMAL=y
@@ -221,14 +222,20 @@ CONFIG_REGULATOR_PFUZE100=y
 CONFIG_MEDIA_SUPPORT=y
 CONFIG_MEDIA_CAMERA_SUPPORT=y
 CONFIG_MEDIA_RC_SUPPORT=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
 CONFIG_RC_DEVICES=y
 CONFIG_IR_GPIO_CIR=y
 CONFIG_MEDIA_USB_SUPPORT=y
 CONFIG_USB_VIDEO_CLASS=m
 CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_VIDEO_MULTIPLEXER=y
 CONFIG_SOC_CAMERA=y
 CONFIG_V4L_MEM2MEM_DRIVERS=y
 CONFIG_VIDEO_CODA=y
+# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
+CONFIG_VIDEO_ADV7180=m
+CONFIG_VIDEO_OV5640=m
 CONFIG_SOC_CAMERA_OV2640=y
 CONFIG_IMX_IPUV3_CORE=y
 CONFIG_DRM=y
@@ -338,6 +345,9 @@ CONFIG_FSL_EDMA=y
 CONFIG_IMX_SDMA=y
 CONFIG_MXS_DMA=y
 CONFIG_STAGING=y
+CONFIG_STAGING_MEDIA=y
+CONFIG_VIDEO_IMX_MEDIA=y
+CONFIG_COMMON_CLK_PWM=y
 CONFIG_IIO=y
 CONFIG_VF610_ADC=y
 CONFIG_MPL3115=y
-- 
2.7.4

* [PATCH v4 26/36] media: imx: add support for bayer formats
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (24 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 25/36] ARM: imx_v6_v7_defconfig: Enable staging video4linux drivers Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 27/36] media: imx: csi: " Steve Longerbeam
                   ` (11 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam

From: Russell King <rmk+kernel@armlinux.org.uk>

Add the bayer formats to imx-media's list of supported pixel and bus
formats.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

- added a bayer boolean to struct imx_media_pixfmt.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-media-utils.c | 68 +++++++++++++++++++++++++++++
 drivers/staging/media/imx/imx-media.h       |  1 +
 2 files changed, 69 insertions(+)

diff --git a/drivers/staging/media/imx/imx-media-utils.c b/drivers/staging/media/imx/imx-media-utils.c
index 55603d9..6855560 100644
--- a/drivers/staging/media/imx/imx-media-utils.c
+++ b/drivers/staging/media/imx/imx-media-utils.c
@@ -61,6 +61,74 @@ static const struct imx_media_pixfmt imx_media_formats[] = {
 		.cs     = IPUV3_COLORSPACE_RGB,
 		.bpp    = 32,
 		.ipufmt = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SBGGR8,
+		.codes  = {MEDIA_BUS_FMT_SBGGR8_1X8},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 8,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SGBRG8,
+		.codes  = {MEDIA_BUS_FMT_SGBRG8_1X8},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 8,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SGRBG8,
+		.codes  = {MEDIA_BUS_FMT_SGRBG8_1X8},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 8,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SRGGB8,
+		.codes  = {MEDIA_BUS_FMT_SRGGB8_1X8},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 8,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SBGGR16,
+		.codes  = {
+			MEDIA_BUS_FMT_SBGGR10_1X10,
+			MEDIA_BUS_FMT_SBGGR12_1X12,
+			MEDIA_BUS_FMT_SBGGR14_1X14,
+			MEDIA_BUS_FMT_SBGGR16_1X16
+		},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 16,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SGBRG16,
+		.codes  = {
+			MEDIA_BUS_FMT_SGBRG10_1X10,
+			MEDIA_BUS_FMT_SGBRG12_1X12,
+			MEDIA_BUS_FMT_SGBRG14_1X14,
+			MEDIA_BUS_FMT_SGBRG16_1X16,
+		},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 16,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SGRBG16,
+		.codes  = {
+			MEDIA_BUS_FMT_SGRBG10_1X10,
+			MEDIA_BUS_FMT_SGRBG12_1X12,
+			MEDIA_BUS_FMT_SGRBG14_1X14,
+			MEDIA_BUS_FMT_SGRBG16_1X16,
+		},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 16,
+		.bayer  = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_SRGGB16,
+		.codes  = {
+			MEDIA_BUS_FMT_SRGGB10_1X10,
+			MEDIA_BUS_FMT_SRGGB12_1X12,
+			MEDIA_BUS_FMT_SRGGB14_1X14,
+			MEDIA_BUS_FMT_SRGGB16_1X16,
+		},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 16,
+		.bayer  = true,
 	},
 	/*** non-mbus formats start here ***/
 	{
diff --git a/drivers/staging/media/imx/imx-media.h b/drivers/staging/media/imx/imx-media.h
index 3d4f3c7..ae3af0d 100644
--- a/drivers/staging/media/imx/imx-media.h
+++ b/drivers/staging/media/imx/imx-media.h
@@ -91,6 +91,7 @@ struct imx_media_pixfmt {
 	int     bpp;     /* total bpp */
 	enum ipu_color_space cs;
 	bool    planar;  /* is a planar format */
+	bool    bayer;   /* is a raw bayer format */
 	bool    ipufmt;  /* is one of the IPU internal formats */
 };
 
-- 
2.7.4

* [PATCH v4 27/36] media: imx: csi: add support for bayer formats
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (25 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 26/36] media: imx: add support for bayer formats Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt Steve Longerbeam
                   ` (10 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam

From: Russell King <rmk+kernel@armlinux.org.uk>

Bayer formats must be treated as generic data and passthrough mode must
be used.  Add the correct setup for these formats.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

- added check to csi_link_validate() to verify that destination is
  IDMAC output pad when passthrough conditions exist: bayer formats
  and 16-bit parallel buses.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-media-csi.c | 50 ++++++++++++++++++++++++++-----
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 0343fc3..ae24b42 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -2,6 +2,7 @@
  * V4L2 Capture CSI Subdev for Freescale i.MX5/6 SOC
  *
  * Copyright (c) 2014-2016 Mentor Graphics Inc.
+ * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -271,10 +272,11 @@ static int csi_idmac_setup_channel(struct csi_priv *priv)
 	struct imx_media_video_dev *vdev = priv->vdev;
 	struct v4l2_of_endpoint *sensor_ep;
 	struct v4l2_mbus_framefmt *infmt;
-	unsigned int burst_size;
 	struct ipu_image image;
+	u32 passthrough_bits;
 	dma_addr_t phys[2];
 	bool passthrough;
+	u32 burst_size;
 	int ret;
 
 	infmt = &priv->format_mbus[CSI_SINK_PAD];
@@ -301,15 +303,38 @@ static int csi_idmac_setup_channel(struct csi_priv *priv)
 	ipu_cpmem_set_burstsize(priv->idmac_ch, burst_size);
 
 	/*
-	 * If the sensor uses 16-bit parallel CSI bus, we must handle
-	 * the data internally in the IPU as 16-bit generic, aka
-	 * passthrough mode.
+	 * Check for conditions that require the IPU to handle the
+	 * data internally as generic data, aka passthrough mode:
+	 * - raw bayer formats
+	 * - the sensor bus is 16-bit parallel
 	 */
-	passthrough = (sensor_ep->bus_type != V4L2_MBUS_CSI2 &&
-		       sensor_ep->bus.parallel.bus_width >= 16);
+	switch (image.pix.pixelformat) {
+	case V4L2_PIX_FMT_SBGGR8:
+	case V4L2_PIX_FMT_SGBRG8:
+	case V4L2_PIX_FMT_SGRBG8:
+	case V4L2_PIX_FMT_SRGGB8:
+		burst_size = 8;
+		passthrough = true;
+		passthrough_bits = 8;
+		break;
+	case V4L2_PIX_FMT_SBGGR16:
+	case V4L2_PIX_FMT_SGBRG16:
+	case V4L2_PIX_FMT_SGRBG16:
+	case V4L2_PIX_FMT_SRGGB16:
+		burst_size = 4;
+		passthrough = true;
+		passthrough_bits = 16;
+		break;
+	default:
+		passthrough = (sensor_ep->bus_type != V4L2_MBUS_CSI2 &&
+			       sensor_ep->bus.parallel.bus_width >= 16);
+		passthrough_bits = 16;
+		break;
+	}
 
 	if (passthrough)
-		ipu_cpmem_set_format_passthrough(priv->idmac_ch, 16);
+		ipu_cpmem_set_format_passthrough(priv->idmac_ch,
+						 passthrough_bits);
 
 	/*
 	 * Set the channel for the direct CSI-->memory via SMFC
@@ -695,6 +720,7 @@ static int csi_link_validate(struct v4l2_subdev *sd,
 			     struct v4l2_subdev_format *sink_fmt)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	const struct imx_media_pixfmt *incc;
 	struct v4l2_of_endpoint *sensor_ep;
 	bool is_csi2;
 	int ret;
@@ -713,8 +739,16 @@ static int csi_link_validate(struct v4l2_subdev *sd,
 	}
 
 	sensor_ep = &priv->sensor->sensor_ep;
-
 	is_csi2 = (sensor_ep->bus_type == V4L2_MBUS_CSI2);
+	incc = priv->cc[CSI_SINK_PAD];
+
+	if (priv->dest != IPU_CSI_DEST_IDMAC &&
+	    (incc->bayer || (!is_csi2 &&
+			     sensor_ep->bus.parallel.bus_width >= 16))) {
+		v4l2_err(&priv->sd,
+			 "bayer/16-bit parallel buses must go to IDMAC pad\n");
+		return -EINVAL;
+	}
 
 	if (is_csi2) {
 		int vc_num = 0;
-- 
2.7.4

* [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (26 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 27/36] media: imx: csi: " Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 11:05   ` Russell King - ARM Linux
  2017-02-16  2:19 ` [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates Steve Longerbeam
                   ` (9 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

From: Philipp Zabel <p.zabel@pengutronix.de>

The crop rectangle was being modified by the output pad's set_fmt,
which is the wrong pad for that. It should instead be reset to the
full input frame when the input pad format is set.

The output pad set_fmt should set width/height to the current
crop dimensions, or 1/2 the crop width/height to enable
downscaling.

So the other part of this patch is to enable downscaling if
the output pad dimension(s) are 1/2 the crop dimension(s) at
csi_setup() time.
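
As an illustration of the rule implemented below: with a 1280x960 crop,
a requested output width below 960 (3/4 of the crop width) is snapped to
640 and enables /2 horizontal downscaling, while anything at or above 960
is snapped to 1280 with no downscaling. The height is handled the same
way, independently.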

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-media-csi.c | 35 ++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index ae24b42..3cb97e2 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -531,6 +531,10 @@ static int csi_setup(struct csi_priv *priv)
 
 	ipu_csi_set_window(priv->csi, &priv->crop);
 
+	ipu_csi_set_downsize(priv->csi,
+			     priv->crop.width == 2 * outfmt->width,
+			     priv->crop.height == 2 * outfmt->height);
+
 	ipu_csi_init_interface(priv->csi, &sensor_mbus_cfg, &if_fmt);
 
 	ipu_csi_set_dest(priv->csi, priv->dest);
@@ -890,15 +894,15 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 	switch (sdformat->pad) {
 	case CSI_SRC_PAD_DIRECT:
 	case CSI_SRC_PAD_IDMAC:
-		crop.left = priv->crop.left;
-		crop.top = priv->crop.top;
-		crop.width = sdformat->format.width;
-		crop.height = sdformat->format.height;
-		ret = csi_try_crop(priv, &crop, sensor);
-		if (ret)
-			return ret;
-		sdformat->format.width = crop.width;
-		sdformat->format.height = crop.height;
+		if (sdformat->format.width < priv->crop.width * 3 / 4)
+			sdformat->format.width = priv->crop.width / 2;
+		else
+			sdformat->format.width = priv->crop.width;
+
+		if (sdformat->format.height < priv->crop.height * 3 / 4)
+			sdformat->format.height = priv->crop.height / 2;
+		else
+			sdformat->format.height = priv->crop.height;
 
 		if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
 			cc = imx_media_find_format(0, sdformat->format.code,
@@ -948,6 +952,14 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		}
 		break;
 	case CSI_SINK_PAD:
+		crop.left = 0;
+		crop.top = 0;
+		crop.width = sdformat->format.width;
+		crop.height = sdformat->format.height;
+		ret = csi_try_crop(priv, &crop, sensor);
+		if (ret)
+			return ret;
+
 		cc = imx_media_find_format(0, sdformat->format.code,
 					   true, false);
 		if (!cc) {
@@ -965,9 +977,8 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 	} else {
 		priv->format_mbus[sdformat->pad] = sdformat->format;
 		priv->cc[sdformat->pad] = cc;
-		/* Update the crop window if this is an output pad  */
-		if (sdformat->pad == CSI_SRC_PAD_DIRECT ||
-		    sdformat->pad == CSI_SRC_PAD_IDMAC)
+		/* Reset the crop window if this is the input pad */
+		if (sdformat->pad == CSI_SINK_PAD)
 			priv->crop = crop;
 	}
 
-- 
2.7.4

* [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (27 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-18  1:11   ` Steve Longerbeam
                     ` (2 more replies)
  2017-02-16  2:19 ` [PATCH v4 30/36] media: imx: update capture dev format on IDMAC output pad set_fmt Steve Longerbeam
                   ` (8 subsequent siblings)
  37 siblings, 3 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam

From: Russell King <rmk+kernel@armlinux.org.uk>

Setting and getting frame rates is part of the negotiation mechanism
between subdevs.  The lack of support means that a frame rate at the
sensor can't be negotiated through the subdev path.

Add support at MIPI CSI2 level for handling this part of the
negotiation.
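
For illustration only (not part of this patch), the negotiated interval
can be inspected and changed from userspace through the subdev device
node using the VIDIOC_SUBDEV_G/S_FRAME_INTERVAL ioctls. The device path
and pad number below are assumptions for the example:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

int main(void)
{
	struct v4l2_subdev_frame_interval fi = { .pad = 0 };
	int fd = open("/dev/v4l-subdev0", O_RDWR);

	if (fd < 0)
		return 1;

	if (ioctl(fd, VIDIOC_SUBDEV_G_FRAME_INTERVAL, &fi) == 0)
		printf("current interval: %u/%u\n",
		       fi.interval.numerator, fi.interval.denominator);

	/* request 30 fps (1/30 s per frame) on this pad */
	fi.interval.numerator = 1;
	fi.interval.denominator = 30;
	ioctl(fd, VIDIOC_SUBDEV_S_FRAME_INTERVAL, &fi);

	close(fd);
	return 0;
}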

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx6-mipi-csi2.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
index 23dca80..c62f14e 100644
--- a/drivers/staging/media/imx/imx6-mipi-csi2.c
+++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
@@ -34,6 +34,7 @@ struct csi2_dev {
 	struct v4l2_subdev      sd;
 	struct media_pad       pad[CSI2_NUM_PADS];
 	struct v4l2_mbus_framefmt format_mbus;
+	struct v4l2_fract      frame_interval;
 	struct clk             *dphy_clk;
 	struct clk             *cfg_clk;
 	struct clk             *pix_clk; /* what is this? */
@@ -397,6 +398,30 @@ static int csi2_set_fmt(struct v4l2_subdev *sd,
 	return 0;
 }
 
+static int csi2_g_frame_interval(struct v4l2_subdev *sd,
+				 struct v4l2_subdev_frame_interval *fi)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+
+	fi->interval = csi2->frame_interval;
+
+	return 0;
+}
+
+static int csi2_s_frame_interval(struct v4l2_subdev *sd,
+				 struct v4l2_subdev_frame_interval *fi)
+{
+	struct csi2_dev *csi2 = sd_to_dev(sd);
+
+	/* Output pads mirror active input pad, no limits on input pads */
+	if (fi->pad != CSI2_SINK_PAD)
+		fi->interval = csi2->frame_interval;
+
+	csi2->frame_interval = fi->interval;
+
+	return 0;
+}
+
 /*
  * retrieve our pads parsed from the OF graph by the media device
  */
@@ -430,6 +455,8 @@ static struct v4l2_subdev_core_ops csi2_core_ops = {
 
 static struct v4l2_subdev_video_ops csi2_video_ops = {
 	.s_stream = csi2_s_stream,
+	.g_frame_interval = csi2_g_frame_interval,
+	.s_frame_interval = csi2_s_frame_interval,
 };
 
 static struct v4l2_subdev_pad_ops csi2_pad_ops = {
-- 
2.7.4

* [PATCH v4 30/36] media: imx: update capture dev format on IDMAC output pad set_fmt
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (28 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 11:29   ` Philipp Zabel
  2017-02-16  2:19 ` [PATCH v4 31/36] media: imx: csi: add __csi_get_fmt Steve Longerbeam
                   ` (7 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

When configuring the IDMAC output pad formats (in ipu_csi,
ipu_ic_prpenc, and ipu_ic_prpvf subdevs), the attached capture
device format must also be updated.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
Suggested-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/staging/media/imx/imx-ic-prpencvf.c | 9 +++++++++
 drivers/staging/media/imx/imx-media-csi.c   | 9 +++++++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
index 2be8845..6e45975 100644
--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
+++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
@@ -739,6 +739,7 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 		       struct v4l2_subdev_format *sdformat)
 {
 	struct prp_priv *priv = sd_to_priv(sd);
+	struct imx_media_video_dev *vdev = priv->vdev;
 	const struct imx_media_pixfmt *cc;
 	struct v4l2_mbus_framefmt *infmt;
 	u32 code;
@@ -800,6 +801,14 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 	} else {
 		priv->format_mbus[sdformat->pad] = sdformat->format;
 		priv->cc[sdformat->pad] = cc;
+		if (sdformat->pad == PRPENCVF_SRC_PAD) {
+			/*
+			 * update the capture device format if this is
+			 * the IDMAC output pad
+			 */
+			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
+						      &sdformat->format, cc);
+		}
 	}
 
 	return 0;
diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 3cb97e2..63555dc 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -866,6 +866,7 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		       struct v4l2_subdev_format *sdformat)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct imx_media_video_dev *vdev = priv->vdev;
 	const struct imx_media_pixfmt *cc, *incc;
 	struct v4l2_mbus_framefmt *infmt;
 	struct imx_media_subdev *sensor;
@@ -980,6 +981,14 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		/* Reset the crop window if this is the input pad */
 		if (sdformat->pad == CSI_SINK_PAD)
 			priv->crop = crop;
+		else if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
+			/*
+			 * update the capture device format if this is
+			 * the IDMAC output pad
+			 */
+			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
+						      &sdformat->format, cc);
+		}
 	}
 
 	return 0;
-- 
2.7.4

* [PATCH v4 31/36] media: imx: csi: add __csi_get_fmt
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (29 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 30/36] media: imx: update capture dev format on IDMAC output pad set_fmt Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 32/36] media: imx: csi/fim: add support for frame intervals Steve Longerbeam
                   ` (6 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Add __csi_get_fmt() and use it to return the correct mbus format
(active or try) in get_fmt. Use it in other places as well.

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
Suggested-by: Russell King <linux@armlinux.org.uk>
---
 drivers/staging/media/imx/imx-media-csi.c | 52 ++++++++++++++++++++++++-------
 1 file changed, 40 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 63555dc..b0aac82 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -788,7 +788,20 @@ static int csi_eof_isr(struct v4l2_subdev *sd, u32 status, bool *handled)
 	return 0;
 }
 
-static int csi_try_crop(struct csi_priv *priv, struct v4l2_rect *crop,
+static struct v4l2_mbus_framefmt *
+__csi_get_fmt(struct csi_priv *priv, struct v4l2_subdev_pad_config *cfg,
+	      unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+	if (which == V4L2_SUBDEV_FORMAT_TRY)
+		return v4l2_subdev_get_try_format(&priv->sd, cfg, pad);
+	else
+		return &priv->format_mbus[pad];
+}
+
+static int csi_try_crop(struct csi_priv *priv,
+			struct v4l2_rect *crop,
+			struct v4l2_subdev_pad_config *cfg,
+			enum v4l2_subdev_format_whence which,
 			struct imx_media_subdev *sensor)
 {
 	struct v4l2_of_endpoint *sensor_ep;
@@ -796,7 +809,7 @@ static int csi_try_crop(struct csi_priv *priv, struct v4l2_rect *crop,
 	v4l2_std_id std;
 	int ret;
 
-	infmt = &priv->format_mbus[CSI_SINK_PAD];
+	infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, which);
 	sensor_ep = &sensor->sensor_ep;
 
 	crop->width = min_t(__u32, infmt->width, crop->width);
@@ -852,11 +865,16 @@ static int csi_get_fmt(struct v4l2_subdev *sd,
 		       struct v4l2_subdev_format *sdformat)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_mbus_framefmt *fmt;
 
 	if (sdformat->pad >= CSI_NUM_PADS)
 		return -EINVAL;
 
-	sdformat->format = priv->format_mbus[sdformat->pad];
+	fmt = __csi_get_fmt(priv, cfg, sdformat->pad, sdformat->which);
+	if (!fmt)
+		return -EINVAL;
+
+	sdformat->format = *fmt;
 
 	return 0;
 }
@@ -880,8 +898,6 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 	if (priv->stream_on)
 		return -EBUSY;
 
-	infmt = &priv->format_mbus[CSI_SINK_PAD];
-
 	sensor = imx_media_find_sensor(priv->md, &priv->sd.entity);
 	if (IS_ERR(sensor)) {
 		v4l2_err(&priv->sd, "no sensor attached\n");
@@ -895,6 +911,8 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 	switch (sdformat->pad) {
 	case CSI_SRC_PAD_DIRECT:
 	case CSI_SRC_PAD_IDMAC:
+		infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, sdformat->which);
+
 		if (sdformat->format.width < priv->crop.width * 3 / 4)
 			sdformat->format.width = priv->crop.width / 2;
 		else
@@ -957,7 +975,8 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		crop.top = 0;
 		crop.width = sdformat->format.width;
 		crop.height = sdformat->format.height;
-		ret = csi_try_crop(priv, &crop, sensor);
+		ret = csi_try_crop(priv, &crop, cfg,
+				   sdformat->which, sensor);
 		if (ret)
 			return ret;
 
@@ -1004,7 +1023,9 @@ static int csi_get_selection(struct v4l2_subdev *sd,
 	if (sel->pad >= CSI_NUM_PADS || sel->pad == CSI_SINK_PAD)
 		return -EINVAL;
 
-	infmt = &priv->format_mbus[CSI_SINK_PAD];
+	infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, sel->which);
+	if (!infmt)
+		return -EINVAL;
 
 	switch (sel->target) {
 	case V4L2_SEL_TGT_CROP_BOUNDS:
@@ -1014,7 +1035,14 @@ static int csi_get_selection(struct v4l2_subdev *sd,
 		sel->r.height = infmt->height;
 		break;
 	case V4L2_SEL_TGT_CROP:
-		sel->r = priv->crop;
+		if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
+			struct v4l2_rect *try_crop =
+				v4l2_subdev_get_try_crop(&priv->sd,
+							 cfg, sel->pad);
+			sel->r = *try_crop;
+		} else {
+			sel->r = priv->crop;
+		}
 		break;
 	default:
 		return -EINVAL;
@@ -1028,7 +1056,6 @@ static int csi_set_selection(struct v4l2_subdev *sd,
 			     struct v4l2_subdev_selection *sel)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
-	struct v4l2_mbus_framefmt *outfmt;
 	struct imx_media_subdev *sensor;
 	int ret;
 
@@ -1058,15 +1085,16 @@ static int csi_set_selection(struct v4l2_subdev *sd,
 		return 0;
 	}
 
-	outfmt = &priv->format_mbus[sel->pad];
-
-	ret = csi_try_crop(priv, &sel->r, sensor);
+	ret = csi_try_crop(priv, &sel->r, cfg, sel->which, sensor);
 	if (ret)
 		return ret;
 
 	if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
 		cfg->try_crop = sel->r;
 	} else {
+		struct v4l2_mbus_framefmt *outfmt =
+			&priv->format_mbus[sel->pad];
+
 		priv->crop = sel->r;
 		/* Update the source format */
 		outfmt->width = sel->r.width;
-- 
2.7.4

* [PATCH v4 32/36] media: imx: csi/fim: add support for frame intervals
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (30 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 31/36] media: imx: csi: add __csi_get_fmt Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:38   ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation Steve Longerbeam
                   ` (5 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Russell King

Add support to CSI for negotiation of frame intervals, and use this
information to configure the frame interval monitor.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-media-csi.c | 36 ++++++++++++++++++++++++++++---
 drivers/staging/media/imx/imx-media-fim.c | 28 +++++++++---------------
 drivers/staging/media/imx/imx-media.h     |  2 +-
 3 files changed, 44 insertions(+), 22 deletions(-)

diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index b0aac82..040cca6 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -56,6 +56,7 @@ struct csi_priv {
 
 	struct v4l2_mbus_framefmt format_mbus[CSI_NUM_PADS];
 	const struct imx_media_pixfmt *cc[CSI_NUM_PADS];
+	struct v4l2_fract frame_interval;
 	struct v4l2_rect crop;
 
 	/* the video device at IDMAC output pad */
@@ -565,7 +566,8 @@ static int csi_start(struct csi_priv *priv)
 
 	/* start the frame interval monitor */
 	if (priv->fim) {
-		ret = imx_media_fim_set_stream(priv->fim, priv->sensor, true);
+		ret = imx_media_fim_set_stream(priv->fim,
+					       &priv->frame_interval, true);
 		if (ret)
 			goto idmac_stop;
 	}
@@ -580,7 +582,8 @@ static int csi_start(struct csi_priv *priv)
 
 fim_off:
 	if (priv->fim)
-		imx_media_fim_set_stream(priv->fim, priv->sensor, false);
+		imx_media_fim_set_stream(priv->fim,
+					 &priv->frame_interval, false);
 idmac_stop:
 	if (priv->dest == IPU_CSI_DEST_IDMAC)
 		csi_idmac_stop(priv);
@@ -594,11 +597,36 @@ static void csi_stop(struct csi_priv *priv)
 
 	/* stop the frame interval monitor */
 	if (priv->fim)
-		imx_media_fim_set_stream(priv->fim, priv->sensor, false);
+		imx_media_fim_set_stream(priv->fim,
+					 &priv->frame_interval, false);
 
 	ipu_csi_disable(priv->csi);
 }
 
+static int csi_g_frame_interval(struct v4l2_subdev *sd,
+				struct v4l2_subdev_frame_interval *fi)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+
+	fi->interval = priv->frame_interval;
+
+	return 0;
+}
+
+static int csi_s_frame_interval(struct v4l2_subdev *sd,
+				struct v4l2_subdev_frame_interval *fi)
+{
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+
+	/* Output pads mirror active input pad, no limits on input pads */
+	if (fi->pad == CSI_SRC_PAD_IDMAC || fi->pad == CSI_SRC_PAD_DIRECT)
+		fi->interval = priv->frame_interval;
+
+	priv->frame_interval = fi->interval;
+
+	return 0;
+}
+
 static int csi_s_stream(struct v4l2_subdev *sd, int enable)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
@@ -1187,6 +1215,8 @@ static struct v4l2_subdev_core_ops csi_core_ops = {
 };
 
 static struct v4l2_subdev_video_ops csi_video_ops = {
+	.g_frame_interval = csi_g_frame_interval,
+	.s_frame_interval = csi_s_frame_interval,
 	.s_stream = csi_s_stream,
 };
 
diff --git a/drivers/staging/media/imx/imx-media-fim.c b/drivers/staging/media/imx/imx-media-fim.c
index acc7e39..a6ed57e 100644
--- a/drivers/staging/media/imx/imx-media-fim.c
+++ b/drivers/staging/media/imx/imx-media-fim.c
@@ -67,26 +67,18 @@ struct imx_media_fim {
 };
 
 static void update_fim_nominal(struct imx_media_fim *fim,
-			       struct imx_media_subdev *sensor)
+			       const struct v4l2_fract *fi)
 {
-	struct v4l2_streamparm parm;
-	struct v4l2_fract tpf;
-	int ret;
-
-	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-	ret = v4l2_subdev_call(sensor->sd, video, g_parm, &parm);
-	tpf = parm.parm.capture.timeperframe;
-
-	if (ret || tpf.denominator == 0) {
-		dev_dbg(fim->sd->dev, "no tpf from sensor, FIM disabled\n");
+	if (fi->denominator == 0) {
+		dev_dbg(fim->sd->dev, "no frame interval, FIM disabled\n");
 		fim->enabled = false;
 		return;
 	}
 
-	fim->nominal = DIV_ROUND_CLOSEST(1000 * 1000 * tpf.numerator,
-					 tpf.denominator);
+	fim->nominal = DIV_ROUND_CLOSEST_ULL(1000000ULL * (u64)fi->numerator,
+					     fi->denominator);
 
-	dev_dbg(fim->sd->dev, "sensor FI=%lu usec\n", fim->nominal);
+	dev_dbg(fim->sd->dev, "FI=%lu usec\n", fim->nominal);
 }
 
 static void reset_fim(struct imx_media_fim *fim, bool curval)
@@ -130,8 +122,8 @@ static void send_fim_event(struct imx_media_fim *fim, unsigned long error)
 
 /*
  * Monitor an averaged frame interval. If the average deviates too much
- * from the sensor's nominal frame rate, send the frame interval error
- * event. The frame intervals are averaged in order to quiet noise from
+ * from the nominal frame rate, send the frame interval error event. The
+ * frame intervals are averaged in order to quiet noise from
  * (presumably random) interrupt latency.
  */
 static void frame_interval_monitor(struct imx_media_fim *fim,
@@ -422,12 +414,12 @@ EXPORT_SYMBOL_GPL(imx_media_fim_set_power);
 
 /* Called by the subdev in its s_stream callback */
 int imx_media_fim_set_stream(struct imx_media_fim *fim,
-			     struct imx_media_subdev *sensor,
+			     const struct v4l2_fract *fi,
 			     bool on)
 {
 	if (on) {
 		reset_fim(fim, true);
-		update_fim_nominal(fim, sensor);
+		update_fim_nominal(fim, fi);
 
 		if (fim->icap_channel >= 0)
 			fim_acquire_first_ts(fim);
diff --git a/drivers/staging/media/imx/imx-media.h b/drivers/staging/media/imx/imx-media.h
index ae3af0d..7f19739 100644
--- a/drivers/staging/media/imx/imx-media.h
+++ b/drivers/staging/media/imx/imx-media.h
@@ -259,7 +259,7 @@ struct imx_media_fim;
 void imx_media_fim_eof_monitor(struct imx_media_fim *fim, struct timespec *ts);
 int imx_media_fim_set_power(struct imx_media_fim *fim, bool on);
 int imx_media_fim_set_stream(struct imx_media_fim *fim,
-			     struct imx_media_subdev *sensor,
+			     const struct v4l2_fract *frame_interval,
 			     bool on);
 struct imx_media_fim *imx_media_fim_init(struct v4l2_subdev *sd);
 void imx_media_fim_free(struct imx_media_fim *fim);
-- 
2.7.4

* [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (31 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 32/36] media: imx: csi/fim: add support for frame intervals Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 11:32   ` Philipp Zabel
  2017-02-16  2:19 ` [PATCH v4 34/36] media: imx: csi: add frame skipping support Steve Longerbeam
                   ` (4 subsequent siblings)
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

The previous API for negotiating mbus codes and pixel formats was
broken, and has been completely redone.

The negotiation of media bus codes should be as follows:

CSI:

sink pad     direct src pad      IDMAC src pad
--------     ----------------    -------------
RGB (any)        IPU RGB           RGB (any)
YUV (any)        IPU YUV           YUV (any)
Bayer              N/A             must be same bayer code as sink

VDIC:

direct sink pad    IDMAC sink pad    direct src pad
---------------    --------------    --------------
IPU YUV only       YUV (any)         IPU YUV only

PRP:

direct sink pad    direct src pads
---------------    ---------------
IPU (any)          same as sink code

PRP ENC/VF:

direct sink pad    IDMAC src pads
---------------    --------------
IPU (any)          any RGB or YUV

Given the above, a new internal API is created:

enum codespace_sel {
       CS_SEL_YUV = 0, /* find or enumerate only YUV codes */
       CS_SEL_RGB,     /* find or enumerate only RGB codes */
       CS_SEL_ANY,     /* find or enumerate both YUV and RGB codes */
};

/* Find and enumerate fourcc pixel formats */
const struct imx_media_pixfmt *
imx_media_find_format(u32 fourcc, enum codespace_sel cs_sel);
int imx_media_enum_format(u32 *fourcc, u32 index, enum codespace_sel cs_sel);

/* Find and enumerate media bus codes */
const struct imx_media_pixfmt *
imx_media_find_mbus_format(u32 code, enum codespace_sel cs_sel,
                          bool allow_bayer);
int imx_media_enum_mbus_format(u32 *code, u32 index, enum codespace_sel cs_sel,
                              bool allow_bayer);

/* Find and enumerate IPU internal media bus codes */
const struct imx_media_pixfmt *
imx_media_find_ipu_format(u32 code, enum codespace_sel cs_sel);
int imx_media_enum_ipu_format(u32 *code, u32 index, enum codespace_sel cs_sel);

The tables have been split into separate tables for YUV and RGB formats
to support the implementation of the above.

The subdev's .enum_mbus_code() and .set_fmt() operations have
been rewritten using the above APIs.
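
For illustration only (not part of the patch), a subdev pad operation
built on these helpers might look like the sketch below. The function
names my_enum_mbus_code() and my_try_mbus_code() are hypothetical; the
imx_media_* calls follow the prototypes listed above:

static int my_enum_mbus_code(struct v4l2_subdev *sd,
			     struct v4l2_subdev_pad_config *cfg,
			     struct v4l2_subdev_mbus_code_enum *code)
{
	/* enumerate every non-bayer YUV and RGB bus code on this pad */
	return imx_media_enum_mbus_format(&code->code, code->index,
					  CS_SEL_ANY, false);
}

static void my_try_mbus_code(struct v4l2_mbus_framefmt *fmt)
{
	const struct imx_media_pixfmt *cc;
	u32 code;

	/* fall back to the first YUV bus code if the request is unknown */
	cc = imx_media_find_mbus_format(fmt->code, CS_SEL_ANY, false);
	if (!cc) {
		imx_media_enum_mbus_format(&code, 0, CS_SEL_YUV, false);
		cc = imx_media_find_mbus_format(code, CS_SEL_YUV, false);
		fmt->code = cc->codes[0];
	}
}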

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-ic-prp.c        |  72 ++++--
 drivers/staging/media/imx/imx-ic-prpencvf.c   |  53 ++--
 drivers/staging/media/imx/imx-media-capture.c |  77 +++---
 drivers/staging/media/imx/imx-media-csi.c     | 107 +++++---
 drivers/staging/media/imx/imx-media-utils.c   | 357 +++++++++++++++++++-------
 drivers/staging/media/imx/imx-media-vdic.c    | 100 +++++---
 drivers/staging/media/imx/imx-media.h         |  27 +-
 7 files changed, 528 insertions(+), 265 deletions(-)

diff --git a/drivers/staging/media/imx/imx-ic-prp.c b/drivers/staging/media/imx/imx-ic-prp.c
index 3683f7c..b9ee8fb 100644
--- a/drivers/staging/media/imx/imx-ic-prp.c
+++ b/drivers/staging/media/imx/imx-ic-prp.c
@@ -96,16 +96,6 @@ static void prp_stop(struct prp_priv *priv)
 {
 }
 
-static int prp_enum_mbus_code(struct v4l2_subdev *sd,
-			      struct v4l2_subdev_pad_config *cfg,
-			      struct v4l2_subdev_mbus_code_enum *code)
-{
-	if (code->pad >= PRP_NUM_PADS)
-		return -EINVAL;
-
-	return imx_media_enum_ipu_format(NULL, &code->code, code->index, true);
-}
-
 static struct v4l2_mbus_framefmt *
 __prp_get_fmt(struct prp_priv *priv, struct v4l2_subdev_pad_config *cfg,
 	      unsigned int pad, enum v4l2_subdev_format_whence which)
@@ -118,6 +108,33 @@ __prp_get_fmt(struct prp_priv *priv, struct v4l2_subdev_pad_config *cfg,
 		return &priv->format_mbus[pad];
 }
 
+static int prp_enum_mbus_code(struct v4l2_subdev *sd,
+			      struct v4l2_subdev_pad_config *cfg,
+			      struct v4l2_subdev_mbus_code_enum *code)
+{
+	struct prp_priv *priv = sd_to_priv(sd);
+	struct v4l2_mbus_framefmt *infmt;
+	int ret = 0;
+
+	switch (code->pad) {
+	case PRP_SINK_PAD:
+		ret = imx_media_enum_ipu_format(&code->code, code->index,
+						CS_SEL_ANY);
+		break;
+	case PRP_SRC_PAD_PRPENC:
+	case PRP_SRC_PAD_PRPVF:
+		if (code->index != 0)
+			return -EINVAL;
+		infmt = __prp_get_fmt(priv, cfg, PRP_SINK_PAD, code->which);
+		code->code = infmt->code;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
 static int prp_get_fmt(struct v4l2_subdev *sd,
 		       struct v4l2_subdev_pad_config *cfg,
 		       struct v4l2_subdev_format *sdformat)
@@ -152,23 +169,28 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 	if (priv->stream_on)
 		return -EBUSY;
 
-	cc = imx_media_find_ipu_format(0, sdformat->format.code, true);
-	if (!cc) {
-		imx_media_enum_ipu_format(NULL, &code, 0, true);
-		cc = imx_media_find_ipu_format(0, code, true);
-		sdformat->format.code = cc->codes[0];
-	}
-
-	v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W,
-			      W_ALIGN, &sdformat->format.height,
-			      MIN_H, MAX_H, H_ALIGN, S_ALIGN);
-
-	/* Output pads mirror input pad */
-	if (sdformat->pad == PRP_SRC_PAD_PRPENC ||
-	    sdformat->pad == PRP_SRC_PAD_PRPVF) {
+	switch (sdformat->pad) {
+	case PRP_SINK_PAD:
+		v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W,
+				      W_ALIGN, &sdformat->format.height,
+				      MIN_H, MAX_H, H_ALIGN, S_ALIGN);
+
+		cc = imx_media_find_ipu_format(sdformat->format.code,
+					       CS_SEL_ANY);
+		if (!cc) {
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_ANY);
+			cc = imx_media_find_ipu_format(code, CS_SEL_ANY);
+			sdformat->format.code = cc->codes[0];
+		}
+		break;
+	case PRP_SRC_PAD_PRPENC:
+	case PRP_SRC_PAD_PRPVF:
+		/* Output pads mirror input pad */
 		infmt = __prp_get_fmt(priv, cfg, PRP_SINK_PAD,
 				      sdformat->which);
+		cc = imx_media_find_ipu_format(infmt->code, CS_SEL_ANY);
 		sdformat->format = *infmt;
+		break;
 	}
 
 	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
@@ -364,7 +386,7 @@ static int prp_registered(struct v4l2_subdev *sd)
 			MEDIA_PAD_FL_SINK : MEDIA_PAD_FL_SOURCE;
 
 		/* set a default mbus format  */
-		imx_media_enum_ipu_format(NULL, &code, 0, true);
+		imx_media_enum_ipu_format(&code, 0, CS_SEL_YUV);
 		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
 					      640, 480, code, V4L2_FIELD_NONE,
 					      &priv->cc[i]);
diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
index 6e45975..dd9d499 100644
--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
+++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
@@ -689,20 +689,6 @@ static void prp_stop(struct prp_priv *priv)
 	prp_put_ipu_resources(priv);
 }
 
-static int prp_enum_mbus_code(struct v4l2_subdev *sd,
-			      struct v4l2_subdev_pad_config *cfg,
-			      struct v4l2_subdev_mbus_code_enum *code)
-{
-	if (code->pad >= PRPENCVF_NUM_PADS)
-		return -EINVAL;
-
-	if (code->pad == PRPENCVF_SRC_PAD)
-		return imx_media_enum_format(NULL, &code->code, code->index,
-					     true, false);
-
-	return imx_media_enum_ipu_format(NULL, &code->code, code->index, true);
-}
-
 static struct v4l2_mbus_framefmt *
 __prp_get_fmt(struct prp_priv *priv, struct v4l2_subdev_pad_config *cfg,
 	      unsigned int pad, enum v4l2_subdev_format_whence which)
@@ -715,6 +701,25 @@ __prp_get_fmt(struct prp_priv *priv, struct v4l2_subdev_pad_config *cfg,
 		return &priv->format_mbus[pad];
 }
 
+static int prp_enum_mbus_code(struct v4l2_subdev *sd,
+			      struct v4l2_subdev_pad_config *cfg,
+			      struct v4l2_subdev_mbus_code_enum *code)
+{
+	int ret;
+
+	if (code->pad >= PRPENCVF_NUM_PADS)
+		return -EINVAL;
+
+	if (code->pad == PRPENCVF_SINK_PAD)
+		ret = imx_media_enum_ipu_format(&code->code, code->index,
+						CS_SEL_ANY);
+	else
+		ret = imx_media_enum_mbus_format(&code->code, code->index,
+						 CS_SEL_ANY, false);
+
+	return ret;
+}
+
 static int prp_get_fmt(struct v4l2_subdev *sd,
 		       struct v4l2_subdev_pad_config *cfg,
 		       struct v4l2_subdev_format *sdformat)
@@ -754,11 +759,13 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 		infmt = __prp_get_fmt(priv, cfg, PRPENCVF_SINK_PAD,
 				      sdformat->which);
 
-		cc = imx_media_find_format(0, sdformat->format.code,
-					   true, false);
+		cc = imx_media_find_mbus_format(sdformat->format.code,
+						CS_SEL_ANY, false);
 		if (!cc) {
-			imx_media_enum_format(NULL, &code, 0, true, false);
-			cc = imx_media_find_format(0, code, true, false);
+			imx_media_enum_mbus_format(&code, 0,
+						   CS_SEL_ANY, false);
+			cc = imx_media_find_mbus_format(code,
+							CS_SEL_ANY, false);
 			sdformat->format.code = cc->codes[0];
 		}
 
@@ -781,11 +788,11 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 					      infmt->height / 4, MAX_H_SRC,
 					      H_ALIGN_SRC, S_ALIGN);
 	} else {
-		cc = imx_media_find_ipu_format(0, sdformat->format.code,
-					       true);
+		cc = imx_media_find_ipu_format(sdformat->format.code,
+					       CS_SEL_ANY);
 		if (!cc) {
-			imx_media_enum_ipu_format(NULL, &code, 0, true);
-			cc = imx_media_find_ipu_format(0, code, true);
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_ANY);
+			cc = imx_media_find_ipu_format(code, CS_SEL_ANY);
 			sdformat->format.code = cc->codes[0];
 		}
 
@@ -1021,7 +1028,7 @@ static int prp_registered(struct v4l2_subdev *sd)
 	for (i = 0; i < PRPENCVF_NUM_PADS; i++) {
 		if (i == PRPENCVF_SINK_PAD) {
 			priv->pad[i].flags = MEDIA_PAD_FL_SINK;
-			imx_media_enum_ipu_format(NULL, &code, 0, true);
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_YUV);
 		} else {
 			priv->pad[i].flags = MEDIA_PAD_FL_SOURCE;
 			code = 0;
diff --git a/drivers/staging/media/imx/imx-media-capture.c b/drivers/staging/media/imx/imx-media-capture.c
index fbf6067..9ef1cc2 100644
--- a/drivers/staging/media/imx/imx-media-capture.c
+++ b/drivers/staging/media/imx/imx-media-capture.c
@@ -85,12 +85,33 @@ static int vidioc_querycap(struct file *file, void *fh,
 static int capture_enum_fmt_vid_cap(struct file *file, void *fh,
 				    struct v4l2_fmtdesc *f)
 {
-	u32 fourcc;
+	struct capture_priv *priv = video_drvdata(file);
+	const struct imx_media_pixfmt *cc_src;
+	struct v4l2_subdev_format fmt_src;
+	u32 fourcc, cs_sel;
 	int ret;
 
-	ret = imx_media_enum_format(&fourcc, NULL, f->index, true, true);
-	if (ret)
+	fmt_src.pad = priv->src_sd_pad;
+	fmt_src.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	ret = v4l2_subdev_call(priv->src_sd, pad, get_fmt, NULL, &fmt_src);
+	if (ret) {
+		v4l2_err(priv->src_sd, "failed to get src_sd format\n");
 		return ret;
+	}
+	cc_src = imx_media_find_mbus_format(fmt_src.format.code,
+					    CS_SEL_ANY, true);
+	if (cc_src->bayer) {
+		if (f->index != 0)
+			return -EINVAL;
+		fourcc = cc_src->fourcc;
+	} else {
+		cs_sel = (cc_src->cs == IPUV3_COLORSPACE_YUV) ?
+			CS_SEL_YUV : CS_SEL_RGB;
+
+		ret = imx_media_enum_format(&fourcc, f->index, cs_sel);
+		if (ret)
+			return ret;
+	}
 
 	f->pixelformat = fourcc;
 
@@ -112,40 +133,36 @@ static int capture_try_fmt_vid_cap(struct file *file, void *fh,
 {
 	struct capture_priv *priv = video_drvdata(file);
 	struct v4l2_subdev_format fmt_src;
-	const struct imx_media_pixfmt *cc, *src_cc;
-	u32 fourcc;
+	const struct imx_media_pixfmt *cc, *cc_src;
 	int ret;
 
-	fourcc = f->fmt.pix.pixelformat;
-	cc = imx_media_find_format(fourcc, 0, true, true);
-	if (!cc) {
-		imx_media_enum_format(&fourcc, NULL, 0, true, true);
-		cc = imx_media_find_format(fourcc, 0, true, true);
-	}
-
-	/*
-	 * user frame dimensions are the same as src_sd's pad.
-	 */
 	fmt_src.pad = priv->src_sd_pad;
 	fmt_src.which = V4L2_SUBDEV_FORMAT_ACTIVE;
 	ret = v4l2_subdev_call(priv->src_sd, pad, get_fmt, NULL, &fmt_src);
 	if (ret)
 		return ret;
 
-	/*
-	 * but we can allow planar pixel formats if the src_sd's
-	 * pad configured a YUV format
-	 */
-	src_cc = imx_media_find_format(0, fmt_src.format.code, true, false);
-	if (src_cc->cs == IPUV3_COLORSPACE_YUV &&
-	    cc->cs == IPUV3_COLORSPACE_YUV) {
-		imx_media_mbus_fmt_to_pix_fmt(&f->fmt.pix,
-					      &fmt_src.format, cc);
+	cc_src = imx_media_find_mbus_format(fmt_src.format.code,
+					    CS_SEL_ANY, true);
+
+	if (cc_src->bayer) {
+		cc = cc_src;
 	} else {
-		imx_media_mbus_fmt_to_pix_fmt(&f->fmt.pix,
-					      &fmt_src.format, src_cc);
+		u32 fourcc, cs_sel;
+
+		cs_sel = (cc_src->cs == IPUV3_COLORSPACE_YUV) ?
+			CS_SEL_YUV : CS_SEL_RGB;
+		fourcc = f->fmt.pix.pixelformat;
+
+		cc = imx_media_find_format(fourcc, cs_sel);
+		if (!cc) {
+			imx_media_enum_format(&fourcc, 0, cs_sel);
+			cc = imx_media_find_format(fourcc, cs_sel);
+		}
 	}
 
+	imx_media_mbus_fmt_to_pix_fmt(&f->fmt.pix, &fmt_src.format, cc);
+
 	return 0;
 }
 
@@ -165,8 +182,8 @@ static int capture_s_fmt_vid_cap(struct file *file, void *fh,
 		return ret;
 
 	priv->vdev.fmt.fmt.pix = f->fmt.pix;
-	priv->vdev.cc = imx_media_find_format(f->fmt.pix.pixelformat, 0,
-					      true, true);
+	priv->vdev.cc = imx_media_find_format(f->fmt.pix.pixelformat,
+					      CS_SEL_ANY);
 
 	return 0;
 }
@@ -573,8 +590,8 @@ int imx_media_capture_device_register(struct imx_media_video_dev *vdev)
 	vdev->fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
 	imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
 				      &fmt_src.format, NULL);
-	vdev->cc = imx_media_find_format(0, fmt_src.format.code,
-					 true, false);
+	vdev->cc = imx_media_find_format(vdev->fmt.fmt.pix.pixelformat,
+					 CS_SEL_ANY);
 
 	v4l2_info(sd, "Registered %s as /dev/%s\n", vfd->name,
 		  video_device_node_name(vfd));
diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 040cca6..1d4e746 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -877,15 +877,44 @@ static int csi_enum_mbus_code(struct v4l2_subdev *sd,
 			      struct v4l2_subdev_pad_config *cfg,
 			      struct v4l2_subdev_mbus_code_enum *code)
 {
-	if (code->pad >= CSI_NUM_PADS)
-		return -EINVAL;
+	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	const struct imx_media_pixfmt *incc;
+	struct v4l2_mbus_framefmt *infmt;
+	int ret = 0;
+	u32 cs_sel;
 
-	if (code->pad == CSI_SRC_PAD_DIRECT)
-		return imx_media_enum_ipu_format(NULL, &code->code,
-						 code->index, true);
+	infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, code->which);
+	incc = imx_media_find_mbus_format(infmt->code, CS_SEL_ANY, true);
 
-	return imx_media_enum_format(NULL, &code->code, code->index,
-				     true, false);
+	switch (code->pad) {
+	case CSI_SINK_PAD:
+		ret = imx_media_enum_mbus_format(&code->code, code->index,
+						 CS_SEL_ANY, true);
+		break;
+	case CSI_SRC_PAD_DIRECT:
+		cs_sel = (incc->cs == IPUV3_COLORSPACE_YUV) ?
+			CS_SEL_YUV : CS_SEL_RGB;
+		ret = imx_media_enum_ipu_format(&code->code, code->index,
+						cs_sel);
+		break;
+	case CSI_SRC_PAD_IDMAC:
+		if (incc->bayer) {
+			if (code->index != 0)
+				return -EINVAL;
+			code->code = infmt->code;
+		} else {
+			cs_sel = (incc->cs == IPUV3_COLORSPACE_YUV) ?
+				CS_SEL_YUV : CS_SEL_RGB;
+			ret = imx_media_enum_mbus_format(&code->code,
+							 code->index,
+							 cs_sel, false);
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ret;
 }
 
 static int csi_get_fmt(struct v4l2_subdev *sd,
@@ -917,7 +946,7 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 	struct v4l2_mbus_framefmt *infmt;
 	struct imx_media_subdev *sensor;
 	struct v4l2_rect crop;
-	u32 code;
+	u32 code, cs_sel;
 	int ret;
 
 	if (sdformat->pad >= CSI_NUM_PADS)
@@ -932,14 +961,16 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		return PTR_ERR(sensor);
 	}
 
-	v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W,
-			      W_ALIGN, &sdformat->format.height,
-			      MIN_H, MAX_H, H_ALIGN, S_ALIGN);
-
 	switch (sdformat->pad) {
 	case CSI_SRC_PAD_DIRECT:
 	case CSI_SRC_PAD_IDMAC:
-		infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, sdformat->which);
+		infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD,
+				      sdformat->which);
+		incc = imx_media_find_mbus_format(infmt->code,
+						  CS_SEL_ANY, true);
+
+		cs_sel = (incc->cs == IPUV3_COLORSPACE_YUV) ?
+			CS_SEL_YUV : CS_SEL_RGB;
 
 		if (sdformat->format.width < priv->crop.width * 3 / 4)
 			sdformat->format.width = priv->crop.width / 2;
@@ -952,32 +983,29 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 			sdformat->format.height = priv->crop.height;
 
 		if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
-			cc = imx_media_find_format(0, sdformat->format.code,
-						   true, false);
-			if (!cc) {
-				imx_media_enum_format(NULL, &code, 0,
-						      true, false);
-				cc = imx_media_find_format(0, code,
-							   true, false);
-				sdformat->format.code = cc->codes[0];
-			}
-
-			incc = priv->cc[CSI_SINK_PAD];
-			if (cc->cs != incc->cs) {
+			if (incc->bayer) {
 				sdformat->format.code = infmt->code;
-				cc = imx_media_find_format(
-					0, sdformat->format.code,
-					true, false);
+				cc = incc;
+			} else {
+				cc = imx_media_find_mbus_format(
+					sdformat->format.code, cs_sel, false);
+				if (!cc) {
+					imx_media_enum_mbus_format(
+						&code, 0, cs_sel, false);
+					cc = imx_media_find_mbus_format(
+						code, cs_sel, false);
+					sdformat->format.code = cc->codes[0];
+				}
 			}
 
 			if (sdformat->format.field != V4L2_FIELD_NONE)
 				sdformat->format.field = infmt->field;
 		} else {
-			cc = imx_media_find_ipu_format(0, sdformat->format.code,
-						       true);
+			cc = imx_media_find_ipu_format(sdformat->format.code,
+						       cs_sel);
 			if (!cc) {
-				imx_media_enum_ipu_format(NULL, &code, 0, true);
-				cc = imx_media_find_ipu_format(0, code, true);
+				imx_media_enum_ipu_format(&code, 0, cs_sel);
+				cc = imx_media_find_ipu_format(code, cs_sel);
 				sdformat->format.code = cc->codes[0];
 			}
 
@@ -999,6 +1027,9 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		}
 		break;
 	case CSI_SINK_PAD:
+		v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W,
+				      W_ALIGN, &sdformat->format.height,
+				      MIN_H, MAX_H, H_ALIGN, S_ALIGN);
 		crop.left = 0;
 		crop.top = 0;
 		crop.width = sdformat->format.width;
@@ -1008,11 +1039,13 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		if (ret)
 			return ret;
 
-		cc = imx_media_find_format(0, sdformat->format.code,
-					   true, false);
+		cc = imx_media_find_mbus_format(sdformat->format.code,
+						CS_SEL_ANY, true);
 		if (!cc) {
-			imx_media_enum_format(NULL, &code, 0, true, false);
-			cc = imx_media_find_format(0, code, true, false);
+			imx_media_enum_mbus_format(&code, 0,
+						   CS_SEL_ANY, false);
+			cc = imx_media_find_mbus_format(code,
+							CS_SEL_ANY, false);
 			sdformat->format.code = cc->codes[0];
 		}
 		break;
@@ -1157,7 +1190,7 @@ static int csi_registered(struct v4l2_subdev *sd)
 
 		code = 0;
 		if (i == CSI_SRC_PAD_DIRECT)
-			imx_media_enum_ipu_format(NULL, &code, 0, true);
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_YUV);
 
 		/* set a default mbus format  */
 		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
diff --git a/drivers/staging/media/imx/imx-media-utils.c b/drivers/staging/media/imx/imx-media-utils.c
index 6855560..a7fa84a 100644
--- a/drivers/staging/media/imx/imx-media-utils.c
+++ b/drivers/staging/media/imx/imx-media-utils.c
@@ -12,14 +12,13 @@
 #include "imx-media.h"
 
 /*
- * List of pixel formats for the subdevs. This must be a super-set of
- * the formats supported by the ipu image converter.
+ * List of supported pixel formats for the subdevs.
  *
- * The non-mbus formats (planar and BGR) must all fall at the end of
- * this table, otherwise enum_fmt() at media pads will stop before
- * seeing all the supported mbus formats.
+ * In all of these tables, the non-mbus formats (with no
+ * mbus codes) must all fall at the end of the table.
  */
-static const struct imx_media_pixfmt imx_media_formats[] = {
+
+static const struct imx_media_pixfmt yuv_formats[] = {
 	{
 		.fourcc	= V4L2_PIX_FMT_UYVY,
 		.codes  = {
@@ -36,13 +35,45 @@ static const struct imx_media_pixfmt imx_media_formats[] = {
 		},
 		.cs     = IPUV3_COLORSPACE_YUV,
 		.bpp    = 16,
+	},
+	/***
+	 * non-mbus YUV formats start here. NOTE! when adding non-mbus
+	 * formats, NUM_NON_MBUS_YUV_FORMATS must be updated below.
+	 ***/
+	{
+		.fourcc	= V4L2_PIX_FMT_YUV420,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 12,
+		.planar = true,
 	}, {
-		.fourcc = V4L2_PIX_FMT_YUV32,
-		.codes  = {MEDIA_BUS_FMT_AYUV8_1X32},
+		.fourcc = V4L2_PIX_FMT_YVU420,
 		.cs     = IPUV3_COLORSPACE_YUV,
-		.bpp    = 32,
-		.ipufmt = true,
+		.bpp    = 12,
+		.planar = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YUV422P,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 16,
+		.planar = true,
+	}, {
+		.fourcc = V4L2_PIX_FMT_NV12,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 12,
+		.planar = true,
 	}, {
+		.fourcc = V4L2_PIX_FMT_NV16,
+		.cs     = IPUV3_COLORSPACE_YUV,
+		.bpp    = 16,
+		.planar = true,
+	},
+};
+
+#define NUM_NON_MBUS_YUV_FORMATS 5
+#define NUM_YUV_FORMATS ARRAY_SIZE(yuv_formats)
+#define NUM_MBUS_YUV_FORMATS (NUM_YUV_FORMATS - NUM_NON_MBUS_YUV_FORMATS)
+
+static const struct imx_media_pixfmt rgb_formats[] = {
+	{
 		.fourcc	= V4L2_PIX_FMT_RGB565,
 		.codes  = {MEDIA_BUS_FMT_RGB565_2X8_LE},
 		.cs     = IPUV3_COLORSPACE_RGB,
@@ -61,7 +92,9 @@ static const struct imx_media_pixfmt imx_media_formats[] = {
 		.cs     = IPUV3_COLORSPACE_RGB,
 		.bpp    = 32,
 		.ipufmt = true,
-	}, {
+	},
+	/*** raw bayer formats start here ***/
+	{
 		.fourcc = V4L2_PIX_FMT_SBGGR8,
 		.codes  = {MEDIA_BUS_FMT_SBGGR8_1X8},
 		.cs     = IPUV3_COLORSPACE_RGB,
@@ -130,7 +163,10 @@ static const struct imx_media_pixfmt imx_media_formats[] = {
 		.bpp    = 16,
 		.bayer  = true,
 	},
-	/*** non-mbus formats start here ***/
+	/***
+	 * non-mbus RGB formats start here. NOTE! when adding non-mbus
+	 * formats, NUM_NON_MBUS_RGB_FORMATS must be updated below.
+	 ***/
 	{
 		.fourcc	= V4L2_PIX_FMT_BGR24,
 		.cs     = IPUV3_COLORSPACE_RGB,
@@ -139,135 +175,256 @@ static const struct imx_media_pixfmt imx_media_formats[] = {
 		.fourcc	= V4L2_PIX_FMT_BGR32,
 		.cs     = IPUV3_COLORSPACE_RGB,
 		.bpp    = 32,
-	}, {
-		.fourcc	= V4L2_PIX_FMT_YUV420,
-		.cs     = IPUV3_COLORSPACE_YUV,
-		.bpp    = 12,
-		.planar = true,
-	}, {
-		.fourcc = V4L2_PIX_FMT_YVU420,
-		.cs     = IPUV3_COLORSPACE_YUV,
-		.bpp    = 12,
-		.planar = true,
-	}, {
-		.fourcc = V4L2_PIX_FMT_YUV422P,
-		.cs     = IPUV3_COLORSPACE_YUV,
-		.bpp    = 16,
-		.planar = true,
-	}, {
-		.fourcc = V4L2_PIX_FMT_NV12,
-		.cs     = IPUV3_COLORSPACE_YUV,
-		.bpp    = 12,
-		.planar = true,
-	}, {
-		.fourcc = V4L2_PIX_FMT_NV16,
+	},
+};
+
+#define NUM_NON_MBUS_RGB_FORMATS 2
+#define NUM_RGB_FORMATS ARRAY_SIZE(rgb_formats)
+#define NUM_MBUS_RGB_FORMATS (NUM_RGB_FORMATS - NUM_NON_MBUS_RGB_FORMATS)
+
+static const struct imx_media_pixfmt ipu_yuv_formats[] = {
+	{
+		.fourcc = V4L2_PIX_FMT_YUV32,
+		.codes  = {MEDIA_BUS_FMT_AYUV8_1X32},
 		.cs     = IPUV3_COLORSPACE_YUV,
-		.bpp    = 16,
-		.planar = true,
+		.bpp    = 32,
+		.ipufmt = true,
 	},
 };
 
-static const u32 imx_media_ipu_internal_codes[] = {
-	MEDIA_BUS_FMT_AYUV8_1X32, MEDIA_BUS_FMT_ARGB8888_1X32,
+#define NUM_IPU_YUV_FORMATS ARRAY_SIZE(ipu_yuv_formats)
+
+static const struct imx_media_pixfmt ipu_rgb_formats[] = {
+	{
+		.fourcc	= V4L2_PIX_FMT_RGB32,
+		.codes  = {MEDIA_BUS_FMT_ARGB8888_1X32},
+		.cs     = IPUV3_COLORSPACE_RGB,
+		.bpp    = 32,
+		.ipufmt = true,
+	},
 };
 
+#define NUM_IPU_RGB_FORMATS ARRAY_SIZE(ipu_rgb_formats)
+
 static inline u32 pixfmt_to_colorspace(const struct imx_media_pixfmt *fmt)
 {
 	return (fmt->cs == IPUV3_COLORSPACE_RGB) ?
 		V4L2_COLORSPACE_SRGB : V4L2_COLORSPACE_SMPTE170M;
 }
 
-static const struct imx_media_pixfmt *find_format(u32 fourcc, u32 code,
-						  bool allow_rgb,
-						  bool allow_planar,
-						  bool ipu_fmt_only)
+static const struct imx_media_pixfmt *find_format(u32 fourcc,
+						  u32 code,
+						  enum codespace_sel cs_sel,
+						  bool allow_non_mbus,
+						  bool allow_bayer)
 {
-	const struct imx_media_pixfmt *fmt, *ret = NULL;
+	const struct imx_media_pixfmt *array, *fmt, *ret = NULL;
+	u32 array_size;
 	int i, j;
 
-	for (i = 0; i < ARRAY_SIZE(imx_media_formats); i++) {
-		fmt = &imx_media_formats[i];
+	switch (cs_sel) {
+	case CS_SEL_YUV:
+		array_size = NUM_YUV_FORMATS;
+		array = yuv_formats;
+		break;
+	case CS_SEL_RGB:
+		array_size = NUM_RGB_FORMATS;
+		array = rgb_formats;
+		break;
+	case CS_SEL_ANY:
+		array_size = NUM_YUV_FORMATS + NUM_RGB_FORMATS;
+		array = yuv_formats;
+		break;
+	default:
+		return NULL;
+	}
+
+	for (i = 0; i < array_size; i++) {
+		if (cs_sel == CS_SEL_ANY && i >= NUM_YUV_FORMATS)
+			fmt = &rgb_formats[i - NUM_YUV_FORMATS];
+		else
+			fmt = &array[i];
 
-		if (ipu_fmt_only && !fmt->ipufmt)
+		if ((!allow_non_mbus && fmt->codes[0] == 0) ||
+		    (!allow_bayer && fmt->bayer))
 			continue;
 
-		if (fourcc && fmt->fourcc == fourcc &&
-		    (fmt->cs != IPUV3_COLORSPACE_RGB || allow_rgb) &&
-		    (!fmt->planar || allow_planar)) {
+		if (fourcc && fmt->fourcc == fourcc) {
 			ret = fmt;
 			goto out;
 		}
 
 		for (j = 0; code && fmt->codes[j]; j++) {
-			if (fmt->codes[j] == code && !fmt->planar &&
-			    (fmt->cs != IPUV3_COLORSPACE_RGB || allow_rgb)) {
+			if (code == fmt->codes[j]) {
 				ret = fmt;
 				goto out;
 			}
 		}
 	}
+
 out:
 	return ret;
 }
 
-const struct imx_media_pixfmt *imx_media_find_format(u32 fourcc, u32 code,
-						     bool allow_rgb,
-						     bool allow_planar)
+static int enum_format(u32 *fourcc, u32 *code, u32 index,
+		       enum codespace_sel cs_sel,
+		       bool allow_non_mbus,
+		       bool allow_bayer)
 {
-	return find_format(fourcc, code, allow_rgb, allow_planar, false);
+	const struct imx_media_pixfmt *fmt;
+	u32 mbus_yuv_sz = NUM_MBUS_YUV_FORMATS;
+	u32 mbus_rgb_sz = NUM_MBUS_RGB_FORMATS;
+	u32 yuv_sz = NUM_YUV_FORMATS;
+	u32 rgb_sz = NUM_RGB_FORMATS;
+
+	switch (cs_sel) {
+	case CS_SEL_YUV:
+		if (index >= yuv_sz ||
+		    (!allow_non_mbus && index >= mbus_yuv_sz))
+			return -EINVAL;
+		fmt = &yuv_formats[index];
+		break;
+	case CS_SEL_RGB:
+		if (index >= rgb_sz ||
+		    (!allow_non_mbus && index >= mbus_rgb_sz))
+			return -EINVAL;
+		fmt = &rgb_formats[index];
+		if (!allow_bayer && fmt->bayer)
+			return -EINVAL;
+		break;
+	case CS_SEL_ANY:
+		if (!allow_non_mbus) {
+			if (index >= mbus_yuv_sz) {
+				index -= mbus_yuv_sz;
+				if (index >= mbus_rgb_sz)
+					return -EINVAL;
+				fmt = &rgb_formats[index];
+				if (!allow_bayer && fmt->bayer)
+					return -EINVAL;
+			} else {
+				fmt = &yuv_formats[index];
+			}
+		} else {
+			if (index >= yuv_sz + rgb_sz)
+				return -EINVAL;
+			if (index >= yuv_sz) {
+				fmt = &rgb_formats[index - yuv_sz];
+				if (!allow_bayer && fmt->bayer)
+					return -EINVAL;
+			} else {
+				fmt = &yuv_formats[index];
+			}
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (fourcc)
+		*fourcc = fmt->fourcc;
+	if (code)
+		*code = fmt->codes[0];
+
+	return 0;
+}
+
+const struct imx_media_pixfmt *
+imx_media_find_format(u32 fourcc, enum codespace_sel cs_sel)
+{
+	return find_format(fourcc, 0, cs_sel, true, false);
 }
 EXPORT_SYMBOL_GPL(imx_media_find_format);
 
-const struct imx_media_pixfmt *imx_media_find_ipu_format(u32 fourcc,
-							 u32 code,
-							 bool allow_rgb)
+int imx_media_enum_format(u32 *fourcc, u32 index, enum codespace_sel cs_sel)
 {
-	return find_format(fourcc, code, allow_rgb, false, true);
+	return enum_format(fourcc, NULL, index, cs_sel, true, false);
 }
-EXPORT_SYMBOL_GPL(imx_media_find_ipu_format);
+EXPORT_SYMBOL_GPL(imx_media_enum_format);
 
-int imx_media_enum_format(u32 *fourcc, u32 *code, u32 index,
-			  bool allow_rgb, bool allow_planar)
+const struct imx_media_pixfmt *
+imx_media_find_mbus_format(u32 code, enum codespace_sel cs_sel,
+			   bool allow_bayer)
 {
-	const struct imx_media_pixfmt *fmt;
+	return find_format(0, code, cs_sel, false, allow_bayer);
+}
+EXPORT_SYMBOL_GPL(imx_media_find_mbus_format);
 
-	if (index >= ARRAY_SIZE(imx_media_formats))
-		return -EINVAL;
+int imx_media_enum_mbus_format(u32 *code, u32 index, enum codespace_sel cs_sel,
+			       bool allow_bayer)
+{
+	return enum_format(NULL, code, index, cs_sel, false, allow_bayer);
+}
+EXPORT_SYMBOL_GPL(imx_media_enum_mbus_format);
 
-	fmt = &imx_media_formats[index];
+const struct imx_media_pixfmt *
+imx_media_find_ipu_format(u32 code, enum codespace_sel cs_sel)
+{
+	const struct imx_media_pixfmt *array, *fmt, *ret = NULL;
+	u32 array_size;
+	int i, j;
 
-	if ((fmt->cs == IPUV3_COLORSPACE_RGB && !allow_rgb) ||
-	    (fmt->planar && !allow_planar))
-		return -EINVAL;
+	switch (cs_sel) {
+	case CS_SEL_YUV:
+		array_size = NUM_IPU_YUV_FORMATS;
+		array = ipu_yuv_formats;
+		break;
+	case CS_SEL_RGB:
+		array_size = NUM_IPU_RGB_FORMATS;
+		array = ipu_rgb_formats;
+		break;
+	case CS_SEL_ANY:
+		array_size = NUM_IPU_YUV_FORMATS + NUM_IPU_RGB_FORMATS;
+		array = ipu_yuv_formats;
+		break;
+	default:
+		return NULL;
+	}
 
-	if (code)
-		*code = fmt->codes[0];
-	if (fourcc)
-		*fourcc = fmt->fourcc;
+	for (i = 0; i < array_size; i++) {
+		if (cs_sel == CS_SEL_ANY && i >= NUM_IPU_YUV_FORMATS)
+			fmt = &ipu_rgb_formats[i - NUM_IPU_YUV_FORMATS];
+		else
+			fmt = &array[i];
 
-	return 0;
+		for (j = 0; code && fmt->codes[j]; j++) {
+			if (code == fmt->codes[j]) {
+				ret = fmt;
+				goto out;
+			}
+		}
+	}
+
+out:
+	return ret;
 }
-EXPORT_SYMBOL_GPL(imx_media_enum_format);
+EXPORT_SYMBOL_GPL(imx_media_find_ipu_format);
 
-int imx_media_enum_ipu_format(u32 *fourcc, u32 *code, u32 index,
-			      bool allow_rgb)
+int imx_media_enum_ipu_format(u32 *code, u32 index, enum codespace_sel cs_sel)
 {
-	const struct imx_media_pixfmt *fmt;
-	u32 lcode;
-
-	if (index >= ARRAY_SIZE(imx_media_ipu_internal_codes))
-		return -EINVAL;
-
-	lcode = imx_media_ipu_internal_codes[index];
-
-	fmt = find_format(0, lcode, allow_rgb, false, true);
-	if (!fmt)
+	switch (cs_sel) {
+	case CS_SEL_YUV:
+		if (index >= NUM_IPU_YUV_FORMATS)
+			return -EINVAL;
+		*code = ipu_yuv_formats[index].codes[0];
+		break;
+	case CS_SEL_RGB:
+		if (index >= NUM_IPU_RGB_FORMATS)
+			return -EINVAL;
+		*code = ipu_rgb_formats[index].codes[0];
+		break;
+	case CS_SEL_ANY:
+		if (index >= NUM_IPU_YUV_FORMATS + NUM_IPU_RGB_FORMATS)
+			return -EINVAL;
+		if (index >= NUM_IPU_YUV_FORMATS) {
+			index -= NUM_IPU_YUV_FORMATS;
+			*code = ipu_rgb_formats[index].codes[0];
+		} else {
+			*code = ipu_yuv_formats[index].codes[0];
+		}
+		break;
+	default:
 		return -EINVAL;
-
-	if (code)
-		*code = fmt->codes[0];
-	if (fourcc)
-		*fourcc = fmt->fourcc;
+	}
 
 	return 0;
 }
@@ -283,10 +440,14 @@ int imx_media_init_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
 	mbus->height = height;
 	mbus->field = field;
 	if (code == 0)
-		imx_media_enum_format(NULL, &code, 0, true, false);
-	lcc = imx_media_find_format(0, code, true, false);
-	if (!lcc)
-		return -EINVAL;
+		imx_media_enum_mbus_format(&code, 0, CS_SEL_YUV, false);
+	lcc = imx_media_find_mbus_format(code, CS_SEL_ANY, false);
+	if (!lcc) {
+		lcc = imx_media_find_ipu_format(code, CS_SEL_ANY);
+		if (!lcc)
+			return -EINVAL;
+	}
+
 	mbus->code = code;
 	mbus->colorspace = pixfmt_to_colorspace(lcc);
 
@@ -304,7 +465,7 @@ int imx_media_mbus_fmt_to_pix_fmt(struct v4l2_pix_format *pix,
 	u32 stride;
 
 	if (!cc) {
-		cc = imx_media_find_format(0, mbus->code, true, false);
+		cc = imx_media_find_mbus_format(mbus->code, CS_SEL_ANY, true);
 		if (!cc)
 			return -EINVAL;
 	}
@@ -346,7 +507,7 @@ int imx_media_ipu_image_to_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
 {
 	const struct imx_media_pixfmt *fmt;
 
-	fmt = imx_media_find_format(image->pix.pixelformat, 0, true, false);
+	fmt = imx_media_find_format(image->pix.pixelformat, CS_SEL_ANY);
 	if (!fmt)
 		return -EINVAL;
 
diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
index 5bb21e9..61e6017 100644
--- a/drivers/staging/media/imx/imx-media-vdic.c
+++ b/drivers/staging/media/imx/imx-media-vdic.c
@@ -520,20 +520,6 @@ static int vdic_s_stream(struct v4l2_subdev *sd, int enable)
 	return ret;
 }
 
-static int vdic_enum_mbus_code(struct v4l2_subdev *sd,
-			       struct v4l2_subdev_pad_config *cfg,
-			       struct v4l2_subdev_mbus_code_enum *code)
-{
-	if (code->pad >= VDIC_NUM_PADS)
-		return -EINVAL;
-
-	if (code->pad == VDIC_SINK_PAD_IDMAC)
-		return imx_media_enum_format(NULL, &code->code, code->index,
-					     false, false);
-
-	return imx_media_enum_ipu_format(NULL, &code->code, code->index, false);
-}
-
 static struct v4l2_mbus_framefmt *
 __vdic_get_fmt(struct vdic_priv *priv, struct v4l2_subdev_pad_config *cfg,
 	       unsigned int pad, enum v4l2_subdev_format_whence which)
@@ -544,6 +530,32 @@ __vdic_get_fmt(struct vdic_priv *priv, struct v4l2_subdev_pad_config *cfg,
 		return &priv->format_mbus[pad];
 }
 
+static int vdic_enum_mbus_code(struct v4l2_subdev *sd,
+			       struct v4l2_subdev_pad_config *cfg,
+			       struct v4l2_subdev_mbus_code_enum *code)
+{
+	int ret;
+
+	switch (code->pad) {
+	case VDIC_SINK_PAD_IDMAC:
+		ret = imx_media_enum_mbus_format(&code->code, code->index,
+						 CS_SEL_YUV, false);
+		break;
+	case VDIC_SINK_PAD_DIRECT:
+		ret = imx_media_enum_ipu_format(&code->code, code->index,
+						CS_SEL_YUV);
+		break;
+	case VDIC_SRC_PAD_DIRECT:
+		ret = imx_media_enum_ipu_format(&code->code, code->index,
+						CS_SEL_YUV);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
 static int vdic_get_fmt(struct v4l2_subdev *sd,
 			struct v4l2_subdev_pad_config *cfg,
 			struct v4l2_subdev_format *sdformat)
@@ -578,19 +590,16 @@ static int vdic_set_fmt(struct v4l2_subdev *sd,
 	if (priv->stream_on)
 		return -EBUSY;
 
-	v4l_bound_align_image(&sdformat->format.width, MIN_W, MAX_W_VDIC,
-			      W_ALIGN, &sdformat->format.height,
-			      MIN_H, MAX_H_VDIC, H_ALIGN, S_ALIGN);
-
 	switch (sdformat->pad) {
 	case VDIC_SRC_PAD_DIRECT:
 		infmt = __vdic_get_fmt(priv, cfg, priv->active_input_pad,
 				       sdformat->which);
 
-		cc = imx_media_find_ipu_format(0, sdformat->format.code, false);
+		cc = imx_media_find_ipu_format(sdformat->format.code,
+					       CS_SEL_YUV);
 		if (!cc) {
-			imx_media_enum_ipu_format(NULL, &code, 0, false);
-			cc = imx_media_find_ipu_format(0, code, false);
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_YUV);
+			cc = imx_media_find_ipu_format(code, CS_SEL_YUV);
 			sdformat->format.code = cc->codes[0];
 		}
 
@@ -599,29 +608,36 @@ static int vdic_set_fmt(struct v4l2_subdev *sd,
 		/* output is always progressive! */
 		sdformat->format.field = V4L2_FIELD_NONE;
 		break;
-	case VDIC_SINK_PAD_IDMAC:
 	case VDIC_SINK_PAD_DIRECT:
-		if (sdformat->pad == VDIC_SINK_PAD_DIRECT) {
-			cc = imx_media_find_ipu_format(0, sdformat->format.code,
-						       false);
-			if (!cc) {
-				imx_media_enum_ipu_format(NULL, &code, 0,
-							  false);
-				cc = imx_media_find_ipu_format(0, code, false);
-				sdformat->format.code = cc->codes[0];
-			}
-		} else {
-			cc = imx_media_find_format(0, sdformat->format.code,
-						   false, false);
-			if (!cc) {
-				imx_media_enum_format(NULL, &code, 0,
-						      false, false);
-				cc = imx_media_find_format(0, code,
-							   false, false);
-				sdformat->format.code = cc->codes[0];
-			}
+		v4l_bound_align_image(&sdformat->format.width,
+				      MIN_W, MAX_W_VDIC, W_ALIGN,
+				      &sdformat->format.height,
+				      MIN_H, MAX_H_VDIC, H_ALIGN, S_ALIGN);
+
+		cc = imx_media_find_ipu_format(sdformat->format.code,
+					       CS_SEL_YUV);
+		if (!cc) {
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_YUV);
+			cc = imx_media_find_ipu_format(code, CS_SEL_YUV);
+			sdformat->format.code = cc->codes[0];
 		}
+		/* input must be interlaced! Choose SEQ_TB if not */
+		if (!V4L2_FIELD_HAS_BOTH(sdformat->format.field))
+			sdformat->format.field = V4L2_FIELD_SEQ_TB;
+		break;
+	case VDIC_SINK_PAD_IDMAC:
+		v4l_bound_align_image(&sdformat->format.width,
+				      MIN_W, MAX_W_VDIC, W_ALIGN,
+				      &sdformat->format.height,
+				      MIN_H, MAX_H_VDIC, H_ALIGN, S_ALIGN);
 
+		cc = imx_media_find_mbus_format(sdformat->format.code,
+						CS_SEL_YUV, false);
+		if (!cc) {
+			imx_media_enum_mbus_format(&code, 0, CS_SEL_YUV, false);
+			cc = imx_media_find_mbus_format(code, CS_SEL_YUV, false);
+			sdformat->format.code = cc->codes[0];
+		}
 		/* input must be interlaced! Choose SEQ_TB if not */
 		if (!V4L2_FIELD_HAS_BOTH(sdformat->format.field))
 			sdformat->format.field = V4L2_FIELD_SEQ_TB;
@@ -766,7 +782,7 @@ static int vdic_registered(struct v4l2_subdev *sd)
 
 		code = 0;
 		if (i != VDIC_SINK_PAD_IDMAC)
-			imx_media_enum_ipu_format(NULL, &code, 0, true);
+			imx_media_enum_ipu_format(&code, 0, CS_SEL_YUV);
 
 		/* set a default mbus format  */
 		ret = imx_media_init_mbus_fmt(&priv->format_mbus[i],
diff --git a/drivers/staging/media/imx/imx-media.h b/drivers/staging/media/imx/imx-media.h
index 7f19739..4ac2bb6 100644
--- a/drivers/staging/media/imx/imx-media.h
+++ b/drivers/staging/media/imx/imx-media.h
@@ -171,16 +171,23 @@ struct imx_media_dev {
 	struct v4l2_async_notifier subdev_notifier;
 };
 
-const struct imx_media_pixfmt *imx_media_find_format(u32 fourcc, u32 code,
-						     bool allow_rgb,
-						     bool allow_planar);
-const struct imx_media_pixfmt *imx_media_find_ipu_format(u32 fourcc, u32 code,
-							 bool allow_rgb);
-
-int imx_media_enum_format(u32 *fourcc, u32 *code, u32 index,
-			  bool allow_rgb, bool allow_planar);
-int imx_media_enum_ipu_format(u32 *fourcc, u32 *code, u32 index,
-			      bool allow_rgb);
+enum codespace_sel {
+	CS_SEL_YUV = 0,
+	CS_SEL_RGB,
+	CS_SEL_ANY,
+};
+
+const struct imx_media_pixfmt *
+imx_media_find_format(u32 fourcc, enum codespace_sel cs_sel);
+int imx_media_enum_format(u32 *fourcc, u32 index, enum codespace_sel cs_sel);
+const struct imx_media_pixfmt *
+imx_media_find_mbus_format(u32 code, enum codespace_sel cs_sel,
+			   bool allow_bayer);
+int imx_media_enum_mbus_format(u32 *code, u32 index, enum codespace_sel cs_sel,
+			       bool allow_bayer);
+const struct imx_media_pixfmt *
+imx_media_find_ipu_format(u32 code, enum codespace_sel cs_sel);
+int imx_media_enum_ipu_format(u32 *code, u32 index, enum codespace_sel cs_sel);
 
 int imx_media_init_mbus_fmt(struct v4l2_mbus_framefmt *mbus,
 			    u32 width, u32 height, u32 code, u32 field,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 34/36] media: imx: csi: add frame skipping support
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (32 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 35/36] media: imx: csi: fix crop rectangle reset in sink set_fmt Steve Longerbeam
                   ` (3 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

From: Philipp Zabel <p.zabel@pengutronix.de>

The CSI can skip any of up to 6 input frames, allowing the frame rate
at the output pads to be reduced by small fractions.
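
As a worked example (illustrative only, not part of the patch): with a
30 fps source and the { .keep = 2, .max_ratio = 3 } entry ("skip every
third frame"), the output frame interval becomes 1/30 s * 3/2 = 1/20 s,
i.e. 20 fps:

	/* the same arithmetic csi_apply_skip_interval() performs below */
	struct v4l2_fract fi = { .numerator = 1, .denominator = 30 };

	fi.numerator *= 3;	/* skip->max_ratio */
	fi.denominator *= 2;	/* skip->keep */
	/* 3/60 reduces to 1/20 s per frame at the output pad */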

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-media-csi.c | 119 +++++++++++++++++++++++++++++-
 1 file changed, 115 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 1d4e746..6284f99 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -9,6 +9,7 @@
  * the Free Software Foundation; either version 2 of the License, or
  * (at your option) any later version.
  */
+#include <linux/gcd.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
@@ -40,6 +41,18 @@
 #define H_ALIGN    1 /* multiple of 2 lines */
 #define S_ALIGN    1 /* multiple of 2 */
 
+/*
+ * struct csi_skip_desc - CSI frame skipping descriptor
+ * @keep - number of frames kept per max_ratio frames
+ * @max_ratio - width of skip_smfc, written to MAX_RATIO bitfield
+ * @skip_smfc - skip pattern written to the SKIP_SMFC bitfield
+ */
+struct csi_skip_desc {
+	u8 keep;
+	u8 max_ratio;
+	u8 skip_smfc;
+};
+
 struct csi_priv {
 	struct device *dev;
 	struct ipu_soc *ipu;
@@ -58,6 +71,7 @@ struct csi_priv {
 	const struct imx_media_pixfmt *cc[CSI_NUM_PADS];
 	struct v4l2_fract frame_interval;
 	struct v4l2_rect crop;
+	const struct csi_skip_desc *skip[CSI_NUM_PADS - 1];
 
 	/* the video device at IDMAC output pad */
 	struct imx_media_video_dev *vdev;
@@ -512,10 +526,12 @@ static int csi_setup(struct csi_priv *priv)
 	struct v4l2_mbus_config sensor_mbus_cfg;
 	struct v4l2_of_endpoint *sensor_ep;
 	struct v4l2_mbus_framefmt if_fmt;
+	const struct csi_skip_desc *skip;
 
 	infmt = &priv->format_mbus[CSI_SINK_PAD];
 	outfmt = &priv->format_mbus[priv->active_output_pad];
 	sensor_ep = &priv->sensor->sensor_ep;
+	skip = priv->skip[priv->active_output_pad - 1];
 
 	/* compose mbus_config from sensor endpoint */
 	sensor_mbus_cfg.type = sensor_ep->bus_type;
@@ -540,6 +556,9 @@ static int csi_setup(struct csi_priv *priv)
 
 	ipu_csi_set_dest(priv->csi, priv->dest);
 
+	ipu_csi_set_skip_smfc(priv->csi, skip->skip_smfc, skip->max_ratio - 1,
+			      0);
+
 	ipu_csi_dump(priv->csi);
 
 	return 0;
@@ -603,26 +622,115 @@ static void csi_stop(struct csi_priv *priv)
 	ipu_csi_disable(priv->csi);
 }
 
+static const struct csi_skip_desc csi_skip[12] = {
+	{ 1, 1, 0x00 }, /* Keep all frames */
+	{ 5, 6, 0x10 }, /* Skip every sixth frame */
+	{ 4, 5, 0x08 }, /* Skip every fifth frame */
+	{ 3, 4, 0x04 }, /* Skip every fourth frame */
+	{ 2, 3, 0x02 }, /* Skip every third frame */
+	{ 3, 5, 0x0a }, /* Skip frames 1 and 3 of every 5 */
+	{ 1, 2, 0x01 }, /* Skip every second frame */
+	{ 2, 5, 0x0b }, /* Keep frames 1 and 4 of every 5 */
+	{ 1, 3, 0x03 }, /* Keep one in three frames */
+	{ 1, 4, 0x07 }, /* Keep one in four frames */
+	{ 1, 5, 0x0f }, /* Keep one in five frames */
+	{ 1, 6, 0x1f }, /* Keep one in six frames */
+};
+
+static void csi_apply_skip_interval(const struct csi_skip_desc *skip,
+				    struct v4l2_fract *interval)
+{
+	unsigned int div;
+
+	interval->numerator *= skip->max_ratio;
+	interval->denominator *= skip->keep;
+
+	/* Reduce fraction to lowest terms */
+	div = gcd(interval->numerator, interval->denominator);
+	if (div > 1) {
+		interval->numerator /= div;
+		interval->denominator /= div;
+	}
+}
+
 static int csi_g_frame_interval(struct v4l2_subdev *sd,
 				struct v4l2_subdev_frame_interval *fi)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
 
+	if (fi->pad >= CSI_NUM_PADS)
+		return -EINVAL;
+
 	fi->interval = priv->frame_interval;
 
+	if (fi->pad != CSI_SINK_PAD)
+		csi_apply_skip_interval(priv->skip[fi->pad - 1], &fi->interval);
+
 	return 0;
 }
 
+/*
+ * Find the skip pattern to produce the output frame interval closest to the
+ * requested one, for the given input frame interval. Updates the output frame
+ * interval to the exact value.
+ */
+static const struct csi_skip_desc *csi_find_best_skip(struct v4l2_fract *in,
+						      struct v4l2_fract *out)
+{
+	const struct csi_skip_desc *skip = &csi_skip[0], *best_skip = skip;
+	u32 min_err = UINT_MAX;
+	u64 want_us;
+	int i;
+
+	/* Default to 1:1 ratio */
+	if (out->numerator == 0 || out->denominator == 0 ||
+	    in->numerator == 0 || in->denominator == 0)
+		return best_skip;
+
+	want_us = div_u64((u64)USEC_PER_SEC * out->numerator, out->denominator);
+
+	/* Find the reduction closest to the requested time per frame */
+	for (i = 0; i < ARRAY_SIZE(csi_skip); i++, skip++) {
+		u64 tmp, err;
+
+		tmp = div_u64((u64)USEC_PER_SEC * in->numerator *
+			      skip->max_ratio, in->denominator * skip->keep);
+
+		err = abs((s64)tmp - want_us);
+		if (err < min_err) {
+			min_err = err;
+			best_skip = skip;
+		}
+	}
+
+	*out = *in;
+	csi_apply_skip_interval(best_skip, out);
+
+	return best_skip;
+}
+
 static int csi_s_frame_interval(struct v4l2_subdev *sd,
 				struct v4l2_subdev_frame_interval *fi)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
 
-	/* Output pads mirror active input pad, no limits on input pads */
-	if (fi->pad == CSI_SRC_PAD_IDMAC || fi->pad == CSI_SRC_PAD_DIRECT)
-		fi->interval = priv->frame_interval;
+	if (fi->pad >= CSI_NUM_PADS)
+		return -EINVAL;
+
+	/* No limits on input pad */
+	if (fi->pad == CSI_SINK_PAD) {
+		priv->frame_interval = fi->interval;
+
+		/* Reset frame skipping ratio to 1:1 */
+		priv->skip[0] = &csi_skip[0];
+		priv->skip[1] = &csi_skip[0];
+
+		return 0;
+	}
 
-	priv->frame_interval = fi->interval;
+	/* Output pads depend on input interval, modified by frame skipping */
+	priv->skip[fi->pad - 1] = csi_find_best_skip(&priv->frame_interval,
+						     &fi->interval);
 
 	return 0;
 }
@@ -1198,6 +1306,9 @@ static int csi_registered(struct v4l2_subdev *sd)
 					      &priv->cc[i]);
 		if (ret)
 			goto put_csi;
+		/* disable frame skipping */
+		if (i != CSI_SINK_PAD)
+			priv->skip[i - 1] = &csi_skip[0];
 	}
 
 	priv->fim = imx_media_fim_init(&priv->sd);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 35/36] media: imx: csi: fix crop rectangle reset in sink set_fmt
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (33 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 34/36] media: imx: csi: add frame skipping support Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16  2:19 ` [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads Steve Longerbeam
                   ` (2 subsequent siblings)
  37 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

From: Philipp Zabel <p.zabel@pengutronix.de>

The csi_try_crop call in set_fmt should compare the cropping rectangle
to the currently set input format, not to the previous input format.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-media-csi.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 6284f99..3e6b607 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -937,15 +937,13 @@ __csi_get_fmt(struct csi_priv *priv, struct v4l2_subdev_pad_config *cfg,
 static int csi_try_crop(struct csi_priv *priv,
 			struct v4l2_rect *crop,
 			struct v4l2_subdev_pad_config *cfg,
-			enum v4l2_subdev_format_whence which,
+			struct v4l2_mbus_framefmt *infmt,
 			struct imx_media_subdev *sensor)
 {
 	struct v4l2_of_endpoint *sensor_ep;
-	struct v4l2_mbus_framefmt *infmt;
 	v4l2_std_id std;
 	int ret;
 
-	infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, which);
 	sensor_ep = &sensor->sensor_ep;
 
 	crop->width = min_t(__u32, infmt->width, crop->width);
@@ -1142,8 +1140,7 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 		crop.top = 0;
 		crop.width = sdformat->format.width;
 		crop.height = sdformat->format.height;
-		ret = csi_try_crop(priv, &crop, cfg,
-				   sdformat->which, sensor);
+		ret = csi_try_crop(priv, &crop, cfg, &sdformat->format, sensor);
 		if (ret)
 			return ret;
 
@@ -1225,6 +1222,7 @@ static int csi_set_selection(struct v4l2_subdev *sd,
 			     struct v4l2_subdev_selection *sel)
 {
 	struct csi_priv *priv = v4l2_get_subdevdata(sd);
+	struct v4l2_mbus_framefmt *infmt;
 	struct imx_media_subdev *sensor;
 	int ret;
 
@@ -1254,7 +1252,8 @@ static int csi_set_selection(struct v4l2_subdev *sd,
 		return 0;
 	}
 
-	ret = csi_try_crop(priv, &sel->r, cfg, sel->which, sensor);
+	infmt = __csi_get_fmt(priv, cfg, CSI_SINK_PAD, sel->which);
+	ret = csi_try_crop(priv, &sel->r, cfg, infmt, sensor);
 	if (ret)
 		return ret;
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (34 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 35/36] media: imx: csi: fix crop rectangle reset in sink set_fmt Steve Longerbeam
@ 2017-02-16  2:19 ` Steve Longerbeam
  2017-02-16 11:29   ` Philipp Zabel
  2017-02-16 11:37 ` [PATCH v4 00/36] i.MX Media Driver Russell King - ARM Linux
  2017-02-16 22:20 ` Russell King - ARM Linux
  37 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:19 UTC (permalink / raw)
  To: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 drivers/staging/media/imx/imx-ic-prp.c      | 11 ++++++++++-
 drivers/staging/media/imx/imx-ic-prpencvf.c | 22 ++++++++++++++--------
 drivers/staging/media/imx/imx-media-csi.c   | 26 +++++++++++++++++---------
 drivers/staging/media/imx/imx-media-vdic.c  | 15 ++++++++++++++-
 4 files changed, 55 insertions(+), 19 deletions(-)

diff --git a/drivers/staging/media/imx/imx-ic-prp.c b/drivers/staging/media/imx/imx-ic-prp.c
index b9ee8fb..5c57d2b 100644
--- a/drivers/staging/media/imx/imx-ic-prp.c
+++ b/drivers/staging/media/imx/imx-ic-prp.c
@@ -196,8 +196,17 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
 		cfg->try_fmt = sdformat->format;
 	} else {
-		priv->format_mbus[sdformat->pad] = sdformat->format;
+		struct v4l2_mbus_framefmt *f =
+			&priv->format_mbus[sdformat->pad];
+
+		*f = sdformat->format;
 		priv->cc[sdformat->pad] = cc;
+
+		/* propagate format to source pads */
+		if (sdformat->pad == PRP_SINK_PAD) {
+			priv->format_mbus[PRP_SRC_PAD_PRPENC] = *f;
+			priv->format_mbus[PRP_SRC_PAD_PRPVF] = *f;
+		}
 	}
 
 	return 0;
diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
index dd9d499..c43f85f 100644
--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
+++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
@@ -806,16 +806,22 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
 	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
 		cfg->try_fmt = sdformat->format;
 	} else {
-		priv->format_mbus[sdformat->pad] = sdformat->format;
+		struct v4l2_mbus_framefmt *f =
+			&priv->format_mbus[sdformat->pad];
+		struct v4l2_mbus_framefmt *outf =
+			&priv->format_mbus[PRPENCVF_SRC_PAD];
+
+		*f = sdformat->format;
 		priv->cc[sdformat->pad] = cc;
-		if (sdformat->pad == PRPENCVF_SRC_PAD) {
-			/*
-			 * update the capture device format if this is
-			 * the IDMAC output pad
-			 */
-			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
-						      &sdformat->format, cc);
+
+		/* propagate format to source pad */
+		if (sdformat->pad == PRPENCVF_SINK_PAD) {
+			outf->width = f->width;
+			outf->height = f->height;
 		}
+
+		/* update the capture device format from output pad */
+		imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix, outf, cc);
 	}
 
 	return 0;
diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 3e6b607..9d9ec03 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -1161,19 +1161,27 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
 	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
 		cfg->try_fmt = sdformat->format;
 	} else {
+		struct v4l2_mbus_framefmt *f_direct, *f_idmac;
+
 		priv->format_mbus[sdformat->pad] = sdformat->format;
 		priv->cc[sdformat->pad] = cc;
-		/* Reset the crop window if this is the input pad */
-		if (sdformat->pad == CSI_SINK_PAD)
+
+		f_direct = &priv->format_mbus[CSI_SRC_PAD_DIRECT];
+		f_idmac = &priv->format_mbus[CSI_SRC_PAD_IDMAC];
+
+		if (sdformat->pad == CSI_SINK_PAD) {
+			/* reset the crop window */
 			priv->crop = crop;
-		else if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
-			/*
-			 * update the capture device format if this is
-			 * the IDMAC output pad
-			 */
-			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
-						      &sdformat->format, cc);
+
+			/* propagate format to source pads */
+			f_direct->width = crop.width;
+			f_direct->height = crop.height;
+			f_idmac->width = crop.width;
+			f_idmac->height = crop.height;
 		}
+
+		/* update the capture device format from IDMAC output pad */
+		imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix, f_idmac, cc);
 	}
 
 	return 0;
diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
index 61e6017..55fb522 100644
--- a/drivers/staging/media/imx/imx-media-vdic.c
+++ b/drivers/staging/media/imx/imx-media-vdic.c
@@ -649,8 +649,21 @@ static int vdic_set_fmt(struct v4l2_subdev *sd,
 	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
 		cfg->try_fmt = sdformat->format;
 	} else {
-		priv->format_mbus[sdformat->pad] = sdformat->format;
+		struct v4l2_mbus_framefmt *f =
+			&priv->format_mbus[sdformat->pad];
+		struct v4l2_mbus_framefmt *outf =
+			&priv->format_mbus[VDIC_SRC_PAD_DIRECT];
+
+		*f = sdformat->format;
 		priv->cc[sdformat->pad] = cc;
+
+		/* propagate format to source pad */
+		if (sdformat->pad == VDIC_SINK_PAD_DIRECT ||
+		    sdformat->pad == VDIC_SINK_PAD_IDMAC) {
+			outf->width = f->width;
+			outf->height = f->height;
+			outf->field = V4L2_FIELD_NONE;
+		}
 	}
 
 	return 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 32/36] media: imx: csi/fim: add support for frame intervals
  2017-02-16  2:19 ` [PATCH v4 32/36] media: imx: csi/fim: add support for frame intervals Steve Longerbeam
@ 2017-02-16  2:38   ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16  2:38 UTC (permalink / raw)
  To: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, linux, mchehab, hverkuil, nick, markus.heiser,
	p.zabel, laurent.pinchart+renesas, bparrot, geert, arnd,
	sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King

Sorry, I forgot to change authorship on this patch. It should
be authored by Russell King <rmk+kernel@armlinux.org.uk>.

Steve

On 02/15/2017 06:19 PM, Steve Longerbeam wrote:
> Add support to CSI for negotiation of frame intervals, and use this
> information to configure the frame interval monitor.
>
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>   drivers/staging/media/imx/imx-media-csi.c | 36 ++++++++++++++++++++++++++++---
>   drivers/staging/media/imx/imx-media-fim.c | 28 +++++++++---------------
>   drivers/staging/media/imx/imx-media.h     |  2 +-
>   3 files changed, 44 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
> index b0aac82..040cca6 100644
> --- a/drivers/staging/media/imx/imx-media-csi.c
> +++ b/drivers/staging/media/imx/imx-media-csi.c
> @@ -56,6 +56,7 @@ struct csi_priv {
>   
>   	struct v4l2_mbus_framefmt format_mbus[CSI_NUM_PADS];
>   	const struct imx_media_pixfmt *cc[CSI_NUM_PADS];
> +	struct v4l2_fract frame_interval;
>   	struct v4l2_rect crop;
>   
>   	/* the video device at IDMAC output pad */
> @@ -565,7 +566,8 @@ static int csi_start(struct csi_priv *priv)
>   
>   	/* start the frame interval monitor */
>   	if (priv->fim) {
> -		ret = imx_media_fim_set_stream(priv->fim, priv->sensor, true);
> +		ret = imx_media_fim_set_stream(priv->fim,
> +					       &priv->frame_interval, true);
>   		if (ret)
>   			goto idmac_stop;
>   	}
> @@ -580,7 +582,8 @@ static int csi_start(struct csi_priv *priv)
>   
>   fim_off:
>   	if (priv->fim)
> -		imx_media_fim_set_stream(priv->fim, priv->sensor, false);
> +		imx_media_fim_set_stream(priv->fim,
> +					 &priv->frame_interval, false);
>   idmac_stop:
>   	if (priv->dest == IPU_CSI_DEST_IDMAC)
>   		csi_idmac_stop(priv);
> @@ -594,11 +597,36 @@ static void csi_stop(struct csi_priv *priv)
>   
>   	/* stop the frame interval monitor */
>   	if (priv->fim)
> -		imx_media_fim_set_stream(priv->fim, priv->sensor, false);
> +		imx_media_fim_set_stream(priv->fim,
> +					 &priv->frame_interval, false);
>   
>   	ipu_csi_disable(priv->csi);
>   }
>   
> +static int csi_g_frame_interval(struct v4l2_subdev *sd,
> +				struct v4l2_subdev_frame_interval *fi)
> +{
> +	struct csi_priv *priv = v4l2_get_subdevdata(sd);
> +
> +	fi->interval = priv->frame_interval;
> +
> +	return 0;
> +}
> +
> +static int csi_s_frame_interval(struct v4l2_subdev *sd,
> +				struct v4l2_subdev_frame_interval *fi)
> +{
> +	struct csi_priv *priv = v4l2_get_subdevdata(sd);
> +
> +	/* Output pads mirror active input pad, no limits on input pads */
> +	if (fi->pad == CSI_SRC_PAD_IDMAC || fi->pad == CSI_SRC_PAD_DIRECT)
> +		fi->interval = priv->frame_interval;
> +
> +	priv->frame_interval = fi->interval;
> +
> +	return 0;
> +}
> +
>   static int csi_s_stream(struct v4l2_subdev *sd, int enable)
>   {
>   	struct csi_priv *priv = v4l2_get_subdevdata(sd);
> @@ -1187,6 +1215,8 @@ static struct v4l2_subdev_core_ops csi_core_ops = {
>   };
>   
>   static struct v4l2_subdev_video_ops csi_video_ops = {
> +	.g_frame_interval = csi_g_frame_interval,
> +	.s_frame_interval = csi_s_frame_interval,
>   	.s_stream = csi_s_stream,
>   };
>   
> diff --git a/drivers/staging/media/imx/imx-media-fim.c b/drivers/staging/media/imx/imx-media-fim.c
> index acc7e39..a6ed57e 100644
> --- a/drivers/staging/media/imx/imx-media-fim.c
> +++ b/drivers/staging/media/imx/imx-media-fim.c
> @@ -67,26 +67,18 @@ struct imx_media_fim {
>   };
>   
>   static void update_fim_nominal(struct imx_media_fim *fim,
> -			       struct imx_media_subdev *sensor)
> +			       const struct v4l2_fract *fi)
>   {
> -	struct v4l2_streamparm parm;
> -	struct v4l2_fract tpf;
> -	int ret;
> -
> -	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
> -	ret = v4l2_subdev_call(sensor->sd, video, g_parm, &parm);
> -	tpf = parm.parm.capture.timeperframe;
> -
> -	if (ret || tpf.denominator == 0) {
> -		dev_dbg(fim->sd->dev, "no tpf from sensor, FIM disabled\n");
> +	if (fi->denominator == 0) {
> +		dev_dbg(fim->sd->dev, "no frame interval, FIM disabled\n");
>   		fim->enabled = false;
>   		return;
>   	}
>   
> -	fim->nominal = DIV_ROUND_CLOSEST(1000 * 1000 * tpf.numerator,
> -					 tpf.denominator);
> +	fim->nominal = DIV_ROUND_CLOSEST_ULL(1000000ULL * (u64)fi->numerator,
> +					     fi->denominator);
>   
> -	dev_dbg(fim->sd->dev, "sensor FI=%lu usec\n", fim->nominal);
> +	dev_dbg(fim->sd->dev, "FI=%lu usec\n", fim->nominal);
>   }
>   
>   static void reset_fim(struct imx_media_fim *fim, bool curval)
> @@ -130,8 +122,8 @@ static void send_fim_event(struct imx_media_fim *fim, unsigned long error)
>   
>   /*
>    * Monitor an averaged frame interval. If the average deviates too much
> - * from the sensor's nominal frame rate, send the frame interval error
> - * event. The frame intervals are averaged in order to quiet noise from
> + * from the nominal frame rate, send the frame interval error event. The
> + * frame intervals are averaged in order to quiet noise from
>    * (presumably random) interrupt latency.
>    */
>   static void frame_interval_monitor(struct imx_media_fim *fim,
> @@ -422,12 +414,12 @@ EXPORT_SYMBOL_GPL(imx_media_fim_set_power);
>   
>   /* Called by the subdev in its s_stream callback */
>   int imx_media_fim_set_stream(struct imx_media_fim *fim,
> -			     struct imx_media_subdev *sensor,
> +			     const struct v4l2_fract *fi,
>   			     bool on)
>   {
>   	if (on) {
>   		reset_fim(fim, true);
> -		update_fim_nominal(fim, sensor);
> +		update_fim_nominal(fim, fi);
>   
>   		if (fim->icap_channel >= 0)
>   			fim_acquire_first_ts(fim);
> diff --git a/drivers/staging/media/imx/imx-media.h b/drivers/staging/media/imx/imx-media.h
> index ae3af0d..7f19739 100644
> --- a/drivers/staging/media/imx/imx-media.h
> +++ b/drivers/staging/media/imx/imx-media.h
> @@ -259,7 +259,7 @@ struct imx_media_fim;
>   void imx_media_fim_eof_monitor(struct imx_media_fim *fim, struct timespec *ts);
>   int imx_media_fim_set_power(struct imx_media_fim *fim, bool on);
>   int imx_media_fim_set_stream(struct imx_media_fim *fim,
> -			     struct imx_media_subdev *sensor,
> +			     const struct v4l2_fract *frame_interval,
>   			     bool on);
>   struct imx_media_fim *imx_media_fim_init(struct v4l2_subdev *sd);
>   void imx_media_fim_free(struct imx_media_fim *fim);
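
(Purely for illustration, not part of the patch: the nominal value that
update_fim_nominal() derives above is just the frame interval expressed
in microseconds.  A minimal standalone sketch of the same conversion,
assuming only struct v4l2_fract and DIV_ROUND_CLOSEST_ULL from the
kernel headers:

#include <linux/kernel.h>     /* DIV_ROUND_CLOSEST_ULL() */
#include <linux/videodev2.h>  /* struct v4l2_fract */

/*
 * Convert a time-per-frame fraction to microseconds.  A 30 fps sensor
 * reports 1/30 s and this returns 33333; NTSC-style 1001/30000 s gives
 * 33367.  A zero denominator means "unknown", which the FIM code above
 * uses to disable itself.
 */
static unsigned long frame_interval_usec(const struct v4l2_fract *fi)
{
	if (fi->denominator == 0)
		return 0;

	return DIV_ROUND_CLOSEST_ULL(1000000ULL * fi->numerator,
				     fi->denominator);
}

With the v4 rework this fraction comes from the CSI subdev's own
g/s_frame_interval ops rather than from a g_parm call on the sensor.)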

-- 
Steve Longerbeam | Senior Embedded Engineer, ESD Services
Mentor Embedded(tm) | 46871 Bayside Parkway, Fremont, CA 94538
P 510.354.5838 | M 408.410.2735

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-16  2:19 ` [PATCH v4 18/36] media: Add i.MX media core driver Steve Longerbeam
@ 2017-02-16 10:27   ` Russell King - ARM Linux
  2017-02-16 17:53     ` Steve Longerbeam
  2017-02-16 13:02   ` Philipp Zabel
  1 sibling, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 10:27 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:20PM -0800, Steve Longerbeam wrote:
> Add the core media driver for i.MX SOC.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>

Just as I reported on the 30th January:

Applying: media: Add i.MX media core driver
.git/rebase-apply/patch:614: new blank line at EOF.
+
.git/rebase-apply/patch:626: new blank line at EOF.
+
.git/rebase-apply/patch:668: new blank line at EOF.
+

These need fixing.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-16  2:19 ` [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver Steve Longerbeam
@ 2017-02-16 10:28   ` Russell King - ARM Linux
  2017-02-16 17:54     ` Steve Longerbeam
  2017-02-17 10:47   ` Philipp Zabel
  1 sibling, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 10:28 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:25PM -0800, Steve Longerbeam wrote:
> Adds MIPI CSI-2 Receiver subdev driver. This subdev is required
> for sensors with a MIPI CSI2 interface.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>

Just like I reported on the 30th January:

.git/rebase-apply/patch:236: trailing whitespace.
 *
warning: 1 line adds whitespace errors.

This needs fixing.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt
  2017-02-16  2:19 ` [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt Steve Longerbeam
@ 2017-02-16 11:05   ` Russell King - ARM Linux
  2017-02-16 18:16     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 11:05 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:30PM -0800, Steve Longerbeam wrote:
> diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
> index ae24b42..3cb97e2 100644
> --- a/drivers/staging/media/imx/imx-media-csi.c
> +++ b/drivers/staging/media/imx/imx-media-csi.c
> @@ -531,6 +531,10 @@ static int csi_setup(struct csi_priv *priv)
>  
>  	ipu_csi_set_window(priv->csi, &priv->crop);
>  
> +	ipu_csi_set_downsize(priv->csi,
> +			     priv->crop.width == 2 * outfmt->width,
> +			     priv->crop.height == 2 * outfmt->height);
> +

This fails to build:

ERROR: "ipu_csi_set_downsize" [drivers/staging/media/imx/imx-media-csi.ko] undefined!

ipu_csi_set_downsize needs to be exported if we're going to use it in
a module:
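
Something along the lines of a one-line export next to the function
definition in the IPUv3 CSI code would presumably do -- illustrative
only, the exact placement is for the IPU maintainers to decide:

	/* drivers/gpu/ipu-v3/ipu-csi.c (path assumed), after the function body */
	EXPORT_SYMBOL_GPL(ipu_csi_set_downsize);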

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 30/36] media: imx: update capture dev format on IDMAC output pad set_fmt
  2017-02-16  2:19 ` [PATCH v4 30/36] media: imx: update capture dev format on IDMAC output pad set_fmt Steve Longerbeam
@ 2017-02-16 11:29   ` Philipp Zabel
  0 siblings, 0 replies; 228+ messages in thread
From: Philipp Zabel @ 2017-02-16 11:29 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> When configuring the IDMAC output pad formats (in ipu_csi,
> ipu_ic_prpenc, and ipu_ic_prpvf subdevs), the attached capture
> device format must also be updated.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> Suggested-by: Philipp Zabel <p.zabel@pengutronix.de>
> ---
>  drivers/staging/media/imx/imx-ic-prpencvf.c | 9 +++++++++
>  drivers/staging/media/imx/imx-media-csi.c   | 9 +++++++++
>  2 files changed, 18 insertions(+)
> 
> diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
> index 2be8845..6e45975 100644
> --- a/drivers/staging/media/imx/imx-ic-prpencvf.c
> +++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
> @@ -739,6 +739,7 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
>  		       struct v4l2_subdev_format *sdformat)
>  {
>  	struct prp_priv *priv = sd_to_priv(sd);
> +	struct imx_media_video_dev *vdev = priv->vdev;
>  	const struct imx_media_pixfmt *cc;
>  	struct v4l2_mbus_framefmt *infmt;
>  	u32 code;
> @@ -800,6 +801,14 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
>  	} else {
>  		priv->format_mbus[sdformat->pad] = sdformat->format;
>  		priv->cc[sdformat->pad] = cc;
> +		if (sdformat->pad == PRPENCVF_SRC_PAD) {
> +			/*
> +			 * update the capture device format if this is
> +			 * the IDMAC output pad
> +			 */
> +			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
> +						      &sdformat->format, cc);
> +		}

This is replaced again by patch 36. These should probably be squashed
together.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads
  2017-02-16  2:19 ` [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads Steve Longerbeam
@ 2017-02-16 11:29   ` Philipp Zabel
  2017-02-16 18:19     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-16 11:29 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
[...]
> diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
> index dd9d499..c43f85f 100644
> --- a/drivers/staging/media/imx/imx-ic-prpencvf.c
> +++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
> @@ -806,16 +806,22 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
>  	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
>  		cfg->try_fmt = sdformat->format;
>  	} else {
> -		priv->format_mbus[sdformat->pad] = sdformat->format;
> +		struct v4l2_mbus_framefmt *f =
> +			&priv->format_mbus[sdformat->pad];
> +		struct v4l2_mbus_framefmt *outf =
> +			&priv->format_mbus[PRPENCVF_SRC_PAD];
> +
> +		*f = sdformat->format;
>  		priv->cc[sdformat->pad] = cc;
> -		if (sdformat->pad == PRPENCVF_SRC_PAD) {
> -			/*
> -			 * update the capture device format if this is
> -			 * the IDMAC output pad
> -			 */
> -			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
> -						      &sdformat->format, cc);
> +
> +		/* propagate format to source pad */
> +		if (sdformat->pad == PRPENCVF_SINK_PAD) {
> +			outf->width = f->width;
> +			outf->height = f->height;

What about media bus format, field, and colorimetry?

>  		}
> +
> +		/* update the capture device format from output pad */
> +		imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix, outf, cc);
>  	}
>  
>  	return 0;
> diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
> index 3e6b607..9d9ec03 100644
> --- a/drivers/staging/media/imx/imx-media-csi.c
> +++ b/drivers/staging/media/imx/imx-media-csi.c
> @@ -1161,19 +1161,27 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
>  	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
>  		cfg->try_fmt = sdformat->format;
>  	} else {
> +		struct v4l2_mbus_framefmt *f_direct, *f_idmac;
> +
>  		priv->format_mbus[sdformat->pad] = sdformat->format;
>  		priv->cc[sdformat->pad] = cc;
> -		/* Reset the crop window if this is the input pad */
> -		if (sdformat->pad == CSI_SINK_PAD)
> +
> +		f_direct = &priv->format_mbus[CSI_SRC_PAD_DIRECT];
> +		f_idmac = &priv->format_mbus[CSI_SRC_PAD_IDMAC];
> +
> +		if (sdformat->pad == CSI_SINK_PAD) {
> +			/* reset the crop window */
>  			priv->crop = crop;
> -		else if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
> -			/*
> -			 * update the capture device format if this is
> -			 * the IDMAC output pad
> -			 */
> -			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
> -						      &sdformat->format, cc);
> +
> +			/* propagate format to source pads */
> +			f_direct->width = crop.width;
> +			f_direct->height = crop.height;
> +			f_idmac->width = crop.width;
> +			f_idmac->height = crop.height;

This is missing also media bus format, field and colorimetry
propagation.

>  		}
> +
> +		/* update the capture device format from IDMAC output pad */
> +		imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix, f_idmac, cc);
>  	}
>  
>  	return 0;
> diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
> index 61e6017..55fb522 100644
> --- a/drivers/staging/media/imx/imx-media-vdic.c
> +++ b/drivers/staging/media/imx/imx-media-vdic.c
> @@ -649,8 +649,21 @@ static int vdic_set_fmt(struct v4l2_subdev *sd,
>  	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
>  		cfg->try_fmt = sdformat->format;
>  	} else {
> -		priv->format_mbus[sdformat->pad] = sdformat->format;
> +		struct v4l2_mbus_framefmt *f =
> +			&priv->format_mbus[sdformat->pad];
> +		struct v4l2_mbus_framefmt *outf =
> +			&priv->format_mbus[VDIC_SRC_PAD_DIRECT];
> +
> +		*f = sdformat->format;
>  		priv->cc[sdformat->pad] = cc;
> +
> +		/* propagate format to source pad */
> +		if (sdformat->pad == VDIC_SINK_PAD_DIRECT ||
> +		    sdformat->pad == VDIC_SINK_PAD_IDMAC) {
> +			outf->width = f->width;
> +			outf->height = f->height;
> +			outf->field = V4L2_FIELD_NONE;

This is missing colorimetry, too.
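
For illustration only, a sketch of what propagating the full media bus
format (rather than just the size) could look like; for the CSI and
VDIC direct pads the code and field would presumably still be forced to
the IPU-internal values instead of being copied verbatim:

	/* propagate everything the sink pad accepted, not just width/height */
	outf->code = f->code;
	outf->field = f->field;
	outf->colorspace = f->colorspace;
	outf->ycbcr_enc = f->ycbcr_enc;
	outf->quantization = f->quantization;
	outf->xfer_func = f->xfer_func;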

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation
  2017-02-16  2:19 ` [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation Steve Longerbeam
@ 2017-02-16 11:32   ` Philipp Zabel
  2017-02-22 23:52     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-16 11:32 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> The previous API and negotiation of mbus codes and pixel formats
> was broken, and has been completely redone.
> 
> The negotiation of media bus codes should be as follows:
> 
> CSI:
> 
> sink pad     direct src pad      IDMAC src pad
> --------     ----------------    -------------
> RGB (any)        IPU RGB           RGB (any)
> YUV (any)        IPU YUV           YUV (any)
> Bayer              N/A             must be same bayer code as sink

The IDMAC src pad should also use the internal 32-bit RGB / YUV format,
except if bayer/raw mode is selected, in which case the attached capture
video device should only allow a single mode corresponding to the output
pad media bus format.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 17/36] media: Add userspace header file for i.MX
  2017-02-16  2:19 ` [PATCH v4 17/36] media: Add userspace header file for i.MX Steve Longerbeam
@ 2017-02-16 11:33   ` Philipp Zabel
  2017-02-22 23:54     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-16 11:33 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> This adds a header file for use by userspace programs wanting to interact
> with the i.MX media driver. It defines custom v4l2 controls and events
> generated by the i.MX v4l2 subdevices.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  include/uapi/media/Kbuild |  1 +
>  include/uapi/media/imx.h  | 29 +++++++++++++++++++++++++++++
>  2 files changed, 30 insertions(+)
>  create mode 100644 include/uapi/media/imx.h
> 
> diff --git a/include/uapi/media/Kbuild b/include/uapi/media/Kbuild
> index aafaa5a..fa78958 100644
> --- a/include/uapi/media/Kbuild
> +++ b/include/uapi/media/Kbuild
> @@ -1 +1,2 @@
>  # UAPI Header export list
> +header-y += imx.h
> diff --git a/include/uapi/media/imx.h b/include/uapi/media/imx.h
> new file mode 100644
> index 0000000..1fdd1c1
> --- /dev/null
> +++ b/include/uapi/media/imx.h
> @@ -0,0 +1,29 @@
> +/*
> + * Copyright (c) 2014-2015 Mentor Graphics Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#ifndef __UAPI_MEDIA_IMX_H__
> +#define __UAPI_MEDIA_IMX_H__
> +
> +/*
> + * events from the subdevs
> + */
> +#define V4L2_EVENT_IMX_CLASS          V4L2_EVENT_PRIVATE_START
> +#define V4L2_EVENT_IMX_NFB4EOF        (V4L2_EVENT_IMX_CLASS + 1)
> +#define V4L2_EVENT_IMX_FRAME_INTERVAL (V4L2_EVENT_IMX_CLASS + 2)

These events are still i.MX specific. I think they shouldn't be.

> +enum imx_ctrl_id {
> +	V4L2_CID_IMX_MOTION = (V4L2_CID_USER_IMX_BASE + 0),
> +	V4L2_CID_IMX_FIM_ENABLE,
> +	V4L2_CID_IMX_FIM_NUM,
> +	V4L2_CID_IMX_FIM_TOLERANCE_MIN,
> +	V4L2_CID_IMX_FIM_TOLERANCE_MAX,
> +	V4L2_CID_IMX_FIM_NUM_SKIP,
> +};
> +
> +#endif

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (35 preceding siblings ...)
  2017-02-16  2:19 ` [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads Steve Longerbeam
@ 2017-02-16 11:37 ` Russell King - ARM Linux
  2017-02-16 18:30   ` Steve Longerbeam
  2017-02-16 22:20 ` Russell King - ARM Linux
  37 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 11:37 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Two problems.

On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
>   media: imx: propagate sink pad formats to source pads

1) It looks like all cases aren't being caught:

- entity 74: ipu1_csi0 (3 pads, 4 links)
             type V4L2 subdev subtype Unknown flags 0
             device node name /dev/v4l-subdev13
        pad0: Sink
                [fmt:SRGGB8/816x616 field:none]
                <- "ipu1_csi0_mux":2 [ENABLED]
        pad1: Source
                [fmt:AYUV32/816x616 field:none
                 crop.bounds:(0,0)/816x616
                 crop:(0,0)/816x616]
                -> "ipu1_ic_prp":0 []
                -> "ipu1_vdic":0 []
        pad2: Source
                [fmt:SRGGB8/816x616 field:none
                 crop.bounds:(0,0)/816x616
                 crop:(0,0)/816x616]
                -> "ipu1_csi0 capture":0 [ENABLED]

While the size has been propagated to pad1, the format has not.

2) /dev/video* device node assignment

I've no idea at the moment how the correct /dev/video* node should be
chosen - initially with Philipp and your previous code, it was
/dev/video3 after initial boot.  Philipp's was consistently /dev/video3.
Yours changed to /dev/video7 when removing and re-inserting the modules
(having fixed that locally.)  This version makes CSI0 be /dev/video7,
but after a remove+reinsert, it becomes (eg) /dev/video8.

/dev/v4l/by-path/platform-capture-subsystem-video-index4 also is not a
stable path - the digit changes (it's supposed to be a stable path.)
After a remove+reinsert, it becomes (eg)
/dev/v4l/by-path/platform-capture-subsystem-video-index5.
/dev/v4l/by-id doesn't contain a symlink for this either.

What this means is that it's very hard to script the setup, because
there's no easy way to know what device is the capture device.  While
it may be possible to do:

	media-ctl -d /dev/media1 -p | \
		grep -A2 ': ipu1_csi0 capture' | \
			sed -n 's|.*\(/dev/video[0-9]*\).*|\1|p'

that's hardly a nice solution - while it fixes the setup script, it
doesn't stop the pain of having to delve around to find the correct
device to use for gstreamer to test with.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16  2:19 ` [PATCH v4 20/36] media: imx: Add CSI subdev driver Steve Longerbeam
@ 2017-02-16 11:52   ` Russell King - ARM Linux
  2017-02-16 12:40     ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 11:52 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:22PM -0800, Steve Longerbeam wrote:
> +static const struct platform_device_id imx_csi_ids[] = {
> +	{ .name = "imx-ipuv3-csi" },
> +	{ },
> +};
> +MODULE_DEVICE_TABLE(platform, imx_csi_ids);
> +
> +static struct platform_driver imx_csi_driver = {
> +	.probe = imx_csi_probe,
> +	.remove = imx_csi_remove,
> +	.id_table = imx_csi_ids,
> +	.driver = {
> +		.name = "imx-ipuv3-csi",
> +	},
> +};
> +module_platform_driver(imx_csi_driver);
> +
> +MODULE_DESCRIPTION("i.MX CSI subdev driver");
> +MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
> +MODULE_LICENSE("GPL");
> +MODULE_ALIAS("platform:imx-ipuv3-csi");

Just a reminder that automatic module loading of this is completely
broken right now (not your problem) due to this stupid idea in the
IPUv3 code:

		if (!ret)
			ret = platform_device_add(pdev);
		if (ret) {
			platform_device_put(pdev);
			goto err_register;
		}

		/*
		 * Set of_node only after calling platform_device_add. Otherwise
		 * the platform:imx-ipuv3-crtc modalias won't be used.
		 */
		pdev->dev.of_node = of_node;

setting pdev->dev.of_node changes the modalias exported to userspace,
so udev sees a DT based modalias, which causes it to totally miss any
driver using a non-DT based modalias.

The IPUv3 code needs fixing, not only for imx-media-csi, but also for
imx-ipuv3-crtc too, because that module will also suffer the same
issue.

The only solution is... don't fsck with dev->of_node assignment.  In
this case, it's probably much better to pass it in via platform data.
If you then absolutely must have dev->of_node, doing it in the driver
means that you avoid the modalias mess before the appropriate driver
is loaded.  However, that's still not a nice solution because the
modalias file still ends up randomly changing its contents.
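
A rough sketch of the platform-data route (the of_node member and the
exact registration call are hypothetical here, this is not a tested
patch):

	/* IPUv3 core: hand the OF node over as platform data instead of
	 * assigning pdev->dev.of_node behind the driver core's back.
	 */
	struct ipu_client_platformdata pdata = {
		.of_node = of_node,	/* hypothetical member */
	};
	pdev = platform_device_register_data(dev, "imx-ipuv3-csi", id,
					     &pdata, sizeof(pdata));

	/* client driver probe(): fetch the node back without ever
	 * touching dev->of_node, so the modalias stays platform-based.
	 */
	const struct ipu_client_platformdata *pdata =
			dev_get_platdata(&pdev->dev);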

As I say, not _your_ problem, but it's still a problem that needs
solving, and I don't want it forgotten about.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver
  2017-02-16  2:19 ` [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver Steve Longerbeam
@ 2017-02-16 11:54   ` Philipp Zabel
  2017-02-16 19:20     ` Steve Longerbeam
  2017-02-27 14:38   ` Rob Herring
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-16 11:54 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> Add bindings documentation for the i.MX media driver.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  Documentation/devicetree/bindings/media/imx.txt | 66 +++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/media/imx.txt
> 
> diff --git a/Documentation/devicetree/bindings/media/imx.txt b/Documentation/devicetree/bindings/media/imx.txt
> new file mode 100644
> index 0000000..fd5af50
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/media/imx.txt
> @@ -0,0 +1,66 @@
> +Freescale i.MX Media Video Device
> +=================================
> +
> +Video Media Controller node
> +---------------------------
> +
> +This is the media controller node for video capture support. It is a
> +virtual device that lists the camera serial interface nodes that the
> +media device will control.
> +
> +Required properties:
> +- compatible : "fsl,imx-capture-subsystem";
> +- ports      : Should contain a list of phandles pointing to camera
> +		sensor interface ports of IPU devices
> +
> +example:
> +
> +capture-subsystem {
> +	compatible = "fsl,capture-subsystem";

"fsl,imx-capture-subsystem"

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16 11:52   ` Russell King - ARM Linux
@ 2017-02-16 12:40     ` Russell King - ARM Linux
  2017-02-16 13:09       ` Russell King - ARM Linux
  2017-02-16 18:44       ` Steve Longerbeam
  0 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 12:40 UTC (permalink / raw)
  To: Steve Longerbeam, Laurent Pinchart, Hans Verkuil, Mauro Carvalho Chehab
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick,
	songjun.wu, hverkuil, Steve Longerbeam, pavel, robert.jarzmik,
	devel, markus.heiser, laurent.pinchart+renesas, shuah, geert,
	linux-media, devicetree, kernel, arnd, mchehab, bparrot, robh+dt,
	horms+renesas, tiffany.lin, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel,
	jean-christophe.trotin, p.zabel, fabio.estevam, shawnguo,
	sudipm.mukherjee

On Thu, Feb 16, 2017 at 11:52:06AM +0000, Russell King - ARM Linux wrote:
> On Wed, Feb 15, 2017 at 06:19:22PM -0800, Steve Longerbeam wrote:
> > +static const struct platform_device_id imx_csi_ids[] = {
> > +	{ .name = "imx-ipuv3-csi" },
> > +	{ },
> > +};
> > +MODULE_DEVICE_TABLE(platform, imx_csi_ids);
> > +
> > +static struct platform_driver imx_csi_driver = {
> > +	.probe = imx_csi_probe,
> > +	.remove = imx_csi_remove,
> > +	.id_table = imx_csi_ids,
> > +	.driver = {
> > +		.name = "imx-ipuv3-csi",
> > +	},
> > +};
> > +module_platform_driver(imx_csi_driver);
> > +
> > +MODULE_DESCRIPTION("i.MX CSI subdev driver");
> > +MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
> > +MODULE_LICENSE("GPL");
> > +MODULE_ALIAS("platform:imx-ipuv3-csi");
> 
> Just a reminder that automatic module loading of this is completely
> broken right now (not your problem) due to this stupid idea in the
> IPUv3 code:
> 
> 		if (!ret)
> 			ret = platform_device_add(pdev);
> 		if (ret) {
> 			platform_device_put(pdev);
> 			goto err_register;
> 		}
> 
> 		/*
> 		 * Set of_node only after calling platform_device_add. Otherwise
> 		 * the platform:imx-ipuv3-crtc modalias won't be used.
> 		 */
> 		pdev->dev.of_node = of_node;
> 
> setting pdev->dev.of_node changes the modalias exported to userspace,
> so udev sees a DT based modalias, which causes it to totally miss any
> driver using a non-DT based modalias.
> 
> The IPUv3 code needs fixing, not only for imx-media-csi, but also for
> imx-ipuv3-crtc too, because that module will also suffer the same
> issue.
> 
> The only solution is... don't fsck with dev->of_node assignment.  In
> this case, it's probably much better to pass it in via platform data.
> If you then absolutely must have dev->of_node, doing it in the driver
> means that you avoid the modalias mess before the appropriate driver
> is loaded.  However, that's still not a nice solution because the
> modalias file still ends up randomly changing its contents.
> 
> As I say, not _your_ problem, but it's still a problem that needs
> solving, and I don't want it forgotten about.

I've just hacked up a solution to this, and unfortunately it reveals a
problem with Steve's code.  Picking out the imx & media-related messages:

[    8.012191] imx_media_common: module is from the staging directory, the quality is unknown, you have been warned.
[    8.018175] imx_media: module is from the staging directory, the quality is unknown, you have been warned.
[    8.748345] imx-media: Registered subdev ipu1_csi0_mux
[    8.753451] imx-media: Registered subdev ipu2_csi1_mux
[    9.055196] imx219 0-0010: detected IMX219 sensor
[    9.090733] imx6_mipi_csi2: module is from the staging directory, the quality is unknown, you have been warned.
[    9.092247] imx-media: Registered subdev imx219 0-0010
[    9.334338] imx-media: Registered subdev imx6-mipi-csi2
[    9.372452] imx_media_capture: module is from the staging directory, the quality is unknown, you have been warned.
[    9.378163] imx_media_capture: module is from the staging directory, the quality is unknown, you have been warned.
[    9.390033] imx_media_csi: module is from the staging directory, the quality is unknown, you have been warned.
[    9.394362] imx-media: Received unknown subdev ipu1_csi0
[    9.394699] imx-ipuv3-csi: probe of imx-ipuv3-csi.0 failed with error -22
[    9.394840] imx-media: Received unknown subdev ipu1_csi1
[    9.394887] imx-ipuv3-csi: probe of imx-ipuv3-csi.1 failed with error -22
[    9.394992] imx-media: Received unknown subdev ipu2_csi0
[    9.395026] imx-ipuv3-csi: probe of imx-ipuv3-csi.4 failed with error -22
[    9.395119] imx-media: Received unknown subdev ipu2_csi1
[    9.395159] imx-ipuv3-csi: probe of imx-ipuv3-csi.5 failed with error -22
[    9.411722] imx_media_vdic: module is from the staging directory, the quality is unknown, you have been warned.
[    9.412820] imx-media: Registered subdev ipu1_vdic
[    9.424687] imx-media: Registered subdev ipu2_vdic
[    9.436074] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
[    9.437455] imx-media: Registered subdev ipu1_ic_prp
[    9.437788] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
[    9.447542] imx-media: Registered subdev ipu1_ic_prpenc
[    9.455225] ipu1_ic_prpenc: Registered ipu1_ic_prpenc capture as /dev/video3
[    9.459203] imx-media: Registered subdev ipu1_ic_prpvf
[    9.460484] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
[    9.460726] ipu1_ic_prpvf: Registered ipu1_ic_prpvf capture as /dev/video4
[    9.460983] imx-media: Registered subdev ipu2_ic_prp
[    9.461161] imx-media: Registered subdev ipu2_ic_prpenc
[    9.461737] ipu2_ic_prpenc: Registered ipu2_ic_prpenc capture as /dev/video5
[    9.463767] imx-media: Registered subdev ipu2_ic_prpvf
[    9.464294] ipu2_ic_prpvf: Registered ipu2_ic_prpvf capture as /dev/video6
[    9.464345] imx-media: imx_media_create_link: (null):1 -> ipu1_ic_prp:0
[    9.464413] ------------[ cut here ]------------
[    9.469134] kernel BUG at /home/rmk/git/linux-rmk/drivers/media/media-entity.c:628!
[    9.476924] Internal error: Oops - BUG: 0 [#1] SMP ARM
[    9.482246] Modules linked in: imx_media_ic(C+) imx_media_vdic(C) imx_media_csi(C) imx_media_capture(C) uvcvideo imx6_mipi_csi2(C) snd_soc_imx_audmux imx219 snd_soc_sgtl5000 video_multiplexer caam imx_sdma imx2_wdt snd_soc_fsl_ssi snd_soc_fsl_spdif imx_pcm_dma coda imx_thermal v4l2_mem2mem videobuf2_v4l2 videobuf2_dma_contig videobuf2_core videobuf2_vmalloc videobuf2_memops imx_media(C) imx_media_common(C) rc_pinnacle_pctv_hd nfsd dw_hdmi_cec dw_hdmi_ahb_audio etnaviv
[    9.524500] CPU: 1 PID: 263 Comm: systemd-udevd Tainted: G         C      4.10.0-rc7+ #2112
[    9.532995] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
[    9.539619] task: edef1880 task.stack: d03ca000
[    9.544313] PC is at media_create_pad_link+0x134/0x140
[    9.549541] LR is at imx_media_probe_complete+0x164/0x24c [imx_media]
[    9.556080] pc : [<c04f0eb0>]    lr : [<bf052524>]    psr: 60070013
               sp : d03cbbc8  ip : d03cbbf8  fp : d03cbbf4
[    9.567712] r10: 00000001  r9 : 00000000  r8 : d0170d14
[    9.573007] r7 : 00000000  r6 : 00000001  r5 : 00000000  r4 : d0170d14
[    9.579612] r3 : 00000000  r2 : d0170d14  r1 : 00000001  r0 : 00000000
[    9.586256] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
[    9.593486] Control: 10c5387d  Table: 3e77c04a  DAC: 00000051
[    9.599317] Process systemd-udevd (pid: 263, stack limit = 0xd03ca210)
[    9.605950] Stack: (0xd03cbbc8 to 0xd03cc000)
[    9.610368] bbc0:                   00000000 00000000 ee980410 00000000 00000000 d0170d14
[    9.618658] bbe0: 00000000 00000001 d03cbc54 d03cbbf8 bf052524 c04f0d88 00000000 d0170d88
[    9.626961] bc00: 00000000 c0a57dc4 0004a364 ee98011c 00000000 00000003 00000001 ee980230
[    9.635267] bc20: ee980274 ee980010 d03cbc54 d0170f14 ee9ca4cc ee9974c4 bf0523c0 c0a57dc4
[    9.643539] bc40: f184bb30 00000026 d03cbc74 d03cbc58 c0502f50 bf0523cc ee9ca4cc d0170f14
[    9.651824] bc60: c0a57e08 d0170fc0 d03cbc9c d03cbc78 c0502fdc c0502e70 00000000 d0170f10
[    9.660132] bc80: 00000000 d02c0c10 bf122cd0 d0170f14 d03cbcc4 d03cbca0 bf121154 c0502f68
[    9.668423] bca0: bf12104c ffffffed d02c0c10 fffffdfb bf123248 00000000 d03cbce4 d03cbcc8
[    9.676713] bcc0: c041aeb4 bf121058 d02c0c10 c1419d70 00000000 bf123248 d03cbd0c d03cbce8
[    9.684992] bce0: c0418ec4 c041ae68 d02c0c10 bf123248 d02c0c44 00000000 00000001 00000124
[    9.693282] bd00: d03cbd2c d03cbd10 c0419044 c0418ccc 00000000 00000000 bf123248 c0418f88
[    9.701618] bd20: d03cbd54 d03cbd30 c04172e4 c0418f94 ef0f64a4 d01b8cd0 d03d9858 bf123248
[    9.709900] bd40: d03d9c00 c0a45e10 d03cbd64 d03cbd58 c0418728 c0417294 d03cbd8c d03cbd68
[    9.718203] bd60: c0418428 c0418710 bf122e48 d03cbd78 bf123248 c0a704a8 bf126000 00000000
[    9.729180] bd80: d03cbda4 d03cbd90 c0419ec4 c0418340 bf123480 c0a704a8 d03cbdb4 d03cbda8
[    9.739950] bda0: c041ad88 c0419e50 d03cbdc4 d03cbdb8 bf126018 c041ad4c d03cbe34 d03cbdc8
[    9.751089] bdc0: c00098ac bf12600c d03cbdec d03cbdd8 c00a8888 c0087240 00000000 ed4a9440
[    9.761941] bde0: d03cbe34 d03cbdf0 c016c690 c00a8814 c016b554 c016aa60 00000001 c015f3f8
[    9.772940] be00: 00000005 0000000c edef1880 bf123480 c0a704a8 bf123480 c0a704a8 ed4a9440
[    9.784008] be20: bf123480 00000001 d03cbe5c d03cbe38 c011b1e4 c0009874 d03cbe5c d03cbe48
[    9.795017] be40: c09f5ea7 c0a704a8 c09e04ec bf123480 d03cbf14 d03cbe60 c00d2dd0 c011b188
[    9.806069] be60: bf12348c 00007fff bf123480 c00d09f0 f1847000 bf12792c f18495c0 bf123680
[    9.817194] be80: bf12348c bf1236f0 00000000 bf1234c8 c017c1d0 c017bfac f1847000 00004f68
[    9.828347] bea0: c017c2e8 00000000 edef1880 00000000 00000000 00000000 00000000 00000000
[    9.839542] bec0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    9.850754] bee0: 00000000 00000000 00000003 7fffffff 00000000 00000000 00000007 b6c9e63c
[    9.861991] bf00: d03ca000 00000000 d03cbfa4 d03cbf18 c00d36cc c00d1480 7fffffff 00000000
[    9.873266] bf20: 00000003 ee0384d4 d03cbf74 f1847000 00004f68 00000000 00000002 f1847000
[    9.884593] bf40: 00004f68 f184bb30 f184989b f184a7d8 000026f0 00002dd0 00000000 00000000
[    9.895998] bf60: 00000000 0000192c 00000019 0000001a 00000011 00000000 0000000a 00000000
[    9.907408] bf80: c008b848 80c36630 00000000 2529fc00 0000017b c000ff04 00000000 d03cbfa8
[    9.918858] bfa0: c000fd60 c00d3644 80c36630 00000000 00000007 b6c9e63c 00000000 80c38178
[    9.930354] bfc0: 80c36630 00000000 2529fc00 0000017b 00020000 7f96eb0c 80c37848 00000000
[    9.941900] bfe0: bed55928 bed55918 b6c988ff b6bea572 600f0030 00000007 3fffd861 3fffdc61
[    9.953532] Backtrace:
[    9.959442] [<c04f0d7c>] (media_create_pad_link) from [<bf052524>] (imx_media_probe_complete+0x164/0x24c [imx_media])
[    9.973644]  r10:00000001 r9:00000000 r8:d0170d14 r7:00000000 r6:00000000 r5:ee980410
[    9.985112]  r4:00000000 r3:00000000
[    9.992696] [<bf0523c0>] (imx_media_probe_complete [imx_media]) from [<c0502f50>] (v4l2_async_test_notify+0xec/0xf8)
[   10.007413]  r10:00000026 r9:f184bb30 r8:c0a57dc4 r7:bf0523c0 r6:ee9974c4 r5:ee9ca4cc
[   10.019212]  r4:d0170f14
[   10.025650] [<c0502e64>] (v4l2_async_test_notify) from [<c0502fdc>] (v4l2_async_register_subdev+0x80/0xdc)
[   10.039736]  r7:d0170fc0 r6:c0a57e08 r5:d0170f14 r4:ee9ca4cc
[   10.049613] [<c0502f5c>] (v4l2_async_register_subdev) from [<bf121154>] (imx_ic_probe+0x108/0x144 [imx_media_ic])
[   10.063953]  r8:d0170f14 r7:bf122cd0 r6:d02c0c10 r5:00000000 r4:d0170f10 r3:00000000
[   10.075786] [<bf12104c>] (imx_ic_probe [imx_media_ic]) from [<c041aeb4>] (platform_drv_probe+0x58/0xb8)
[   10.089118]  r8:00000000 r7:bf123248 r6:fffffdfb r5:d02c0c10 r4:ffffffed r3:bf12104c
[   10.100683] [<c041ae5c>] (platform_drv_probe) from [<c0418ec4>] (driver_probe_device+0x204/0x2c8)
[   10.113279]  r7:bf123248 r6:00000000 r5:c1419d70 r4:d02c0c10
[   10.122765] [<c0418cc0>] (driver_probe_device) from [<c0419044>] (__driver_attach+0xbc/0xc0)
[   10.135098]  r10:00000124 r8:00000001 r7:00000000 r6:d02c0c44 r5:bf123248 r4:d02c0c10
[   10.146785] [<c0418f88>] (__driver_attach) from [<c04172e4>] (bus_for_each_dev+0x5c/0x90)
[   10.158811]  r6:c0418f88 r5:bf123248 r4:00000000 r3:00000000
[   10.168375] [<c0417288>] (bus_for_each_dev) from [<c0418728>] (driver_attach+0x24/0x28)
[   10.180413]  r6:c0a45e10 r5:d03d9c00 r4:bf123248
[   10.188959] [<c0418704>] (driver_attach) from [<c0418428>] (bus_add_driver+0xf4/0x200)
[   10.200775] [<c0418334>] (bus_add_driver) from [<c0419ec4>] (driver_register+0x80/0xfc)
[   10.212707]  r7:00000000 r6:bf126000 r5:c0a704a8 r4:bf123248
[   10.222254] [<c0419e44>] (driver_register) from [<c041ad88>] (__platform_driver_register+0x48/0x4c)
[   10.235212]  r5:c0a704a8 r4:bf123480
[   10.242694] [<c041ad40>] (__platform_driver_register) from [<bf126018>] (imx_ic_driver_init+0x18/0x24 [imx_media_ic])
[   10.257308] [<bf126000>] (imx_ic_driver_init [imx_media_ic]) from [<c00098ac>] (do_one_initcall+0x44/0x170)
[   10.271043] [<c0009868>] (do_one_initcall) from [<c011b1e4>] (do_init_module+0x68/0x1d8)
[   10.283139]  r8:00000001 r7:bf123480 r6:ed4a9440 r5:c0a704a8 r4:bf123480
[   10.293849] [<c011b17c>] (do_init_module) from [<c00d2dd0>] (load_module+0x195c/0x2080)
[   10.305867]  r7:bf123480 r6:c09e04ec r5:c0a704a8 r4:c09f5ea7
[   10.315523] [<c00d1474>] (load_module) from [<c00d36cc>] (SyS_finit_module+0x94/0xa0)
[   10.327382]  r10:00000000 r9:d03ca000 r8:b6c9e63c r7:00000007 r6:00000000 r5:00000000
[   10.339238]  r4:7fffffff
[   10.345766] [<c00d3638>] (SyS_finit_module) from [<c000fd60>] (ret_fast_syscall+0x0/0x1c)
[   10.357994]  r8:c000ff04 r7:0000017b r6:2529fc00 r5:00000000 r4:80c36630
[   10.368747] Code: e1a01007 ebfffce1 e3e0000b e89daff8 (e7f001f2)
[   10.378883] ---[ end trace 2051fac455b36c5a ]---
[   11.228961] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
[   11.247536] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
[   11.301366] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.

So there's probably some sort of race going on.

However, the following is primarily directed at Laurent as the one who
introduced the BUG_ON() in question...

NEVER EVER USE BUG_ON() IN A PATH THAT CAN RETURN AN ERROR.

It's possible to find Linus rants about this, eg,
https://www.spinics.net/lists/stable/msg146439.html

 I should have reacted to the damn added BUG_ON() lines. I suspect I
 will have to finally just remove the idiotic BUG_ON() concept once and
 for all, because there is NO F*CKING EXCUSE to knowingly kill the
 kernel.

Also: http://yarchive.net/comp/linux/BUG.html

 Rule of thumb: BUG() is only good for something that never happens and
 that we really have no other option for (ie state is so corrupt that
 continuing is deadly).

So, _unless_ people want to see BUG_ON() removed from the kernel, I
strongly suggest to _STOP_ using it as "we didn't like the function
arguments, let's use it as an assert() statement instead of returning
an error."

There's no excuse what so ever to be killing the machine in
media_create_pad_link().  If it doesn't like a NULL pointer, it's damn
well got an error path to report that fact.  Use that mechanism and
stop needlessly killing the kernel.
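
In concrete terms, something along these lines would do (illustrative
only, the exact checks are for the media core maintainers to pick):

	/* media_create_pad_link(): validate arguments and report the
	 * failure instead of BUG_ON()'ing on a NULL entity
	 */
	if (WARN_ON(!source || !sink))
		return -EINVAL;
	if (WARN_ON(source_pad >= source->num_pads ||
		    sink_pad >= sink->num_pads))
		return -EINVAL;

A caller passing a NULL entity then gets a stack trace plus -EINVAL
back instead of a dead machine.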

BUG_ON() IS NOT ASSERT().  DO NOT USE IT AS SUCH.

Linus is absolutely right about BUG_ON() - it hurts debuggability,
because now the only way to do further tests is to reboot the damned
machine after removing those fscking BUG_ON()s that should *never*
have been there in the first place.

As Linus went on to say:

 And dammit, if anybody else feels that they had done "debugging
 messages with BUG_ON()", I would suggest you

  (a) rethink your approach to programming

  (b) send me patches to remove the crap entirely, or make them real
 *DEBUGGING* messages, not "kill the whole machine" messages.

 I've ranted against people using BUG_ON() for debugging in the past.
 Why the f*ck does this still happen? And Andrew - please stop taking
 those kinds of patches! Lookie here:

     https://lwn.net/Articles/13183/

 so excuse me for being upset that people still do this shit almost 15
 years later.

So I suggest people heed that advice and start fixing these stupid
BUG_ON()s that they've created.

Thanks.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-16  2:19 ` [PATCH v4 18/36] media: Add i.MX media core driver Steve Longerbeam
  2017-02-16 10:27   ` Russell King - ARM Linux
@ 2017-02-16 13:02   ` Philipp Zabel
  2017-02-16 13:44     ` Russell King - ARM Linux
  2017-02-17  1:33     ` Steve Longerbeam
  1 sibling, 2 replies; 228+ messages in thread
From: Philipp Zabel @ 2017-02-16 13:02 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> Add the core media driver for i.MX SOC.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  Documentation/media/v4l-drivers/imx.rst           | 542 +++++++++++++++++
>  drivers/staging/media/Kconfig                     |   2 +
>  drivers/staging/media/Makefile                    |   1 +
>  drivers/staging/media/imx/Kconfig                 |   7 +
>  drivers/staging/media/imx/Makefile                |   6 +
>  drivers/staging/media/imx/TODO                    |  36 ++
>  drivers/staging/media/imx/imx-media-dev.c         | 487 +++++++++++++++
>  drivers/staging/media/imx/imx-media-fim.c         | 471 +++++++++++++++
>  drivers/staging/media/imx/imx-media-internal-sd.c | 349 +++++++++++
>  drivers/staging/media/imx/imx-media-of.c          | 267 ++++++++
>  drivers/staging/media/imx/imx-media-utils.c       | 701 ++++++++++++++++++++++
>  drivers/staging/media/imx/imx-media.h             | 297 +++++++++
>  include/media/imx.h                               |  15 +
>  include/uapi/linux/v4l2-controls.h                |   4 +
>  14 files changed, 3185 insertions(+)
>  create mode 100644 Documentation/media/v4l-drivers/imx.rst
>  create mode 100644 drivers/staging/media/imx/Kconfig
>  create mode 100644 drivers/staging/media/imx/Makefile
>  create mode 100644 drivers/staging/media/imx/TODO
>  create mode 100644 drivers/staging/media/imx/imx-media-dev.c
>  create mode 100644 drivers/staging/media/imx/imx-media-fim.c
>  create mode 100644 drivers/staging/media/imx/imx-media-internal-sd.c
>  create mode 100644 drivers/staging/media/imx/imx-media-of.c
>  create mode 100644 drivers/staging/media/imx/imx-media-utils.c
>  create mode 100644 drivers/staging/media/imx/imx-media.h
>  create mode 100644 include/media/imx.h
> 
> diff --git a/Documentation/media/v4l-drivers/imx.rst b/Documentation/media/v4l-drivers/imx.rst
> new file mode 100644
> index 0000000..f085e43
> --- /dev/null
> +++ b/Documentation/media/v4l-drivers/imx.rst
> @@ -0,0 +1,542 @@
> +i.MX Video Capture Driver
> +=========================
> +
> +Introduction
> +------------
> +
> +The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
> +handles the flow of image frames to and from capture devices and
> +display devices.
> +
> +For image capture, the IPU contains the following internal subunits:
> +
> +- Image DMA Controller (IDMAC)
> +- Camera Serial Interface (CSI)
> +- Image Converter (IC)
> +- Sensor Multi-FIFO Controller (SMFC)
> +- Image Rotator (IRT)
> +- Video De-Interlacing or Combining Block (VDIC)
> +
> +The IDMAC is the DMA controller for transfer of image frames to and from
> +memory. Various dedicated DMA channels exist for both video capture and
> +display paths. During transfer, the IDMAC is also capable of vertical
> +image flip, 8x8 block transfer (see IRT description), pixel component
> +re-ordering (for example UYVY to YUYV) within the same colorspace, and
> +even packed <--> planar conversion. It can also perform a simple
> +de-interlacing by interleaving even and odd lines during transfer
> +(without motion compensation which requires the VDIC).
> +
> +The CSI is the backend capture unit that interfaces directly with
> +camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 busses.
> +
> +The IC handles color-space conversion, resizing (downscaling and
> +upscaling), horizontal flip, and 90/270 degree rotation operations.
> +
> +There are three independent "tasks" within the IC that can carry out
> +conversions concurrently: pre-process encoding, pre-process viewfinder,
> +and post-processing. Within each task, conversions are split into three
> +sections: downsizing section, main section (upsizing, flip, colorspace
> +conversion, and graphics plane combining), and rotation section.
> +
> +The IPU time-shares the IC task operations. The time-slice granularity
> +is one burst of eight pixels in the downsizing section, one image line
> +in the main processing section, one image frame in the rotation section.
> +
> +The SMFC is composed of four independent FIFOs that each can transfer
> +captured frames from sensors directly to memory concurrently via four
> +IDMAC channels.
> +
> +The IRT carries out 90 and 270 degree image rotation operations. The
> +rotation operation is carried out on 8x8 pixel blocks at a time. This
> +operation is supported by the IDMAC which handles the 8x8 block transfer
> +along with block reordering, in coordination with vertical flip.
> +
> +The VDIC handles the conversion of interlaced video to progressive, with
> +support for different motion compensation modes (low, medium, and high
> +motion). The deinterlaced output frames from the VDIC can be sent to the
> +IC pre-process viewfinder task for further conversions. The VDIC also
> +contains a Combiner that combines two image planes, with alpha blending
> +and color keying.
> +
> +In addition to the IPU internal subunits, there are also two units
> +outside the IPU that are also involved in video capture on i.MX:
> +
> +- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
> +  interface. This is a Synopsys DesignWare core.
> +- Two video multiplexers for selecting among multiple sensor inputs
> +  to send to a CSI.
> +
> +For more info, refer to the latest versions of the i.MX5/6 reference
> +manuals listed under References.
> +
> +
> +Features
> +--------
> +
> +Some of the features of this driver include:
> +
> +- Many different pipelines can be configured via media controller API,
> +  that correspond to the hardware video capture pipelines supported in
> +  the i.MX.
> +
> +- Supports parallel, BT.656, and MIPI CSI-2 interfaces.
> +
> +- Up to four concurrent sensor acquisitions, by configuring each
> +  sensor's pipeline using independent entities. This is currently
> +  demonstrated with the SabreSD and SabreLite reference boards with
> +  independent OV5642 and MIPI CSI-2 OV5640 sensor modules.
> +
> +- Scaling, color-space conversion, horizontal and vertical flip, and
> +  image rotation via IC task subdevs.
> +
> +- Many pixel formats supported (RGB, packed and planar YUV, partial
> +  planar YUV).
> +
> +- The VDIC subdev supports motion compensated de-interlacing, with three
> +  motion compensation modes: low, medium, and high motion. The mode is
> +  specified with a custom control. Pipelines are defined that allow
> +  sending frames to the VDIC subdev directly from the CSI or from
> +  memory buffers via an output/mem2mem device node. For low and medium
> +  motion modes, the VDIC must receive from memory buffers via a device
> +  node.
> +
> +- Includes a Frame Interval Monitor (FIM) that can correct vertical sync
> +  problems with the ADV718x video decoders. See below for a description
> +  of the FIM.
> +
> +
> +Entities
> +--------
> +
> +imx6-mipi-csi2
> +--------------
> +
> +This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
> +the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
> +four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
> +channel outputs.
> +
> +This entity actually consists of two sub-blocks. One is the MIPI CSI-2
> +core. This is a Synopsys Designware MIPI CSI-2 core. The other sub-block
> +is a "CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the
> +four virtual channels streams, providing four separate parallel buses
> +containing each virtual channel that are routed to CSIs or video
> +multiplexers as described below.
> +
> +On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
> +two video multiplexers. Both CSI0 and CSI1 can receive any virtual
> +channel, as selected by the video multiplexers.
> +
> +On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (as selected
> +by a video mux), virtual channels 1 and 2 are hard-wired to IPU1-CSI1
> +and IPU2-CSI0, respectively, and virtual channel 3 is routed to
> +IPU2-CSI1 (again selected by a video mux).
> +
> +ipuX_csiY_mux
> +-------------
> +
> +These are the video multiplexers. They have two or more sink pads to
> +select from either camera sensors with a parallel interface, or from
> +MIPI CSI-2 virtual channels from imx6-mipi-csi2 entity. They have a
> +single source pad that routes to a CSI (ipuX_csiY entities).
> +
> +On i.MX6 solo/dual-lite, there are two video mux entities. One sits
> +in front of IPU1-CSI0 to select between a parallel sensor and any of
> +the four MIPI CSI-2 virtual channels (a total of five sink pads). The
> +other mux sits in front of IPU1-CSI1, and again has five sink pads to
> +select between a parallel sensor and any of the four MIPI CSI-2 virtual
> +channels.
> +
> +On i.MX6 Quad, there are two video mux entities. One sits in front of
> +IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
> +channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
> +select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
> +sink pads).
> +
> +ipuX_csiY
> +---------
> +
> +These are the CSI entities. They have a single sink pad receiving from
> +either a video mux or from a MIPI CSI-2 virtual channel as described
> +above.
> +
> +This entity has two source pads. The first source pad can link directly
> +to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
> +that require no IDMAC memory buffer transfer.
> +
> +When the direct source pad is routed to the ipuX_ic_prp entity, frames
> +from the CSI will be processed by one of the IC pre-processing tasks.
> +
> +When the direct source pad is routed to the ipuX_vdic entity, the VDIC
> +will carry out motion-compensated de-interlace using "high motion" mode
> +(see description of ipuX_vdic entity).
> +
> +The second source pad sends video frames to memory buffers via the SMFC
> +and an IDMAC channel. This source pad is routed to a capture device
> +node.
> +
> +Note that since the IDMAC source pad makes use of an IDMAC channel, it
> +can do pixel reordering within the same colorspace. For example, the
> +sink pad can take UYVY2X8, but the IDMAC source pad can output YUYV2X8.
> +If the sink pad is receiving YUV, the output at the capture device can
> +also be converted to a planar YUV format such as YUV420.
> +
> +It will also perform simple de-interlace without motion compensation,
> +which is activated if the sink pad's field type is an interlaced type,
> +and the IDMAC source pad field type is set to none.
> +
> +ipuX_vdic
> +---------
> +
> +The VDIC carries out motion compensated de-interlacing, with three
> +motion compensation modes: low, medium, and high motion. The mode is
> +specified with a custom v4l2 control. It has two sink pads and a
> +single source pad.
> +
> +The direct sink pad receives from an ipuX_csiY direct pad. With this
> +link the VDIC can only operate in high motion mode.
> +
> +When the IDMAC sink pad is activated, it receives from an output
> +or mem2mem device node. With this pipeline, it can also operate
> +in low and medium modes, because these modes require receiving
> +frames from memory buffers. Note that an output or mem2mem device
> +is not implemented yet, so this sink pad currently has no links.
> +
> +The source pad routes to the IC pre-processing entity ipuX_ic_prp.
> +
> +ipuX_ic_prp
> +-----------
> +
> +This is the IC pre-processing entity. It acts as a router, routing
> +data from its sink pad to one or both of its source pads.
> +
> +It has a single sink pad. The sink pad can receive from the ipuX_csiY
> +direct pad, or from ipuX_vdic.
> +
> +This entity has two source pads. One source pad routes to the
> +pre-process encode task entity (ipuX_ic_prpenc), the other to the
> +pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
> +can be activated at the same time if the sink pad is receiving from
> +ipuX_csiY. Only the source pad to the pre-process viewfinder task entity
> +can be activated if the sink pad is receiving from ipuX_vdic (frames
> +from the VDIC can only be processed by the pre-process viewfinder task).
> +
> +ipuX_ic_prpenc
> +--------------
> +
> +This is the IC pre-processing encode entity. It has a single sink pad
> +from ipuX_ic_prp, and a single source pad. The source pad is routed
> +to a capture device node.
> +
> +This entity performs the IC pre-process encode task operations:
> +color-space conversion, resizing (downscaling and upscaling), horizontal
> +and vertical flip, and 90/270 degree rotation.
> +
> +Like the ipuX_csiY IDMAC source, it can also perform simple de-interlace
> +without motion compensation, and pixel reordering.
> +
> +ipuX_ic_prpvf
> +-------------
> +
> +This is the IC pre-processing viewfinder entity. It has a single sink pad
> +from ipuX_ic_prp, and a single source pad. The source pad is routed to
> +a capture device node.
> +
> +It is identical in operation to ipuX_ic_prpenc. It will receive and
> +process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
> +receiving from ipuX_vdic.
> +
> +Like the ipuX_csiY IDMAC source, it can perform simple de-interlace
> +without motion compensation. However, note that if the ipuX_vdic is
> +included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic),
> +it's not possible to use simple de-interlace in ipuX_ic_prpvf, since
> +the ipuX_vdic has already carried out de-interlacing (with motion
> +compensation) and therefore the field type output from ipuX_ic_prp can
> +only be none.
> +
> +Capture Pipelines
> +-----------------
> +
> +The following describes the various use-cases supported by the pipelines.
> +
> +The links shown do not include the backend sensor, video mux, or mipi
> +csi-2 receiver links, since those depend on the type of sensor
> +interface (parallel or mipi csi-2). In all cases, these pipelines
> +begin with:
> +
> +sensor -> ipuX_csiY_mux -> ...
> +
> +for parallel sensors, or:
> +
> +sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...
> +
> +for mipi csi-2 sensors. The imx6-mipi-csi2 receiver may need to route
> +to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
> +on the mipi csi-2 virtual channel, hence ipuX_csiY_mux is shown in
> +parentheses.
> +
> +Unprocessed Video Capture:
> +--------------------------
> +
> +Send frames directly from the sensor to the capture device node, with
> +no conversions:
> +
> +-> ipuX_csiY IDMAC pad -> capture node
> +
> +IC Direct Conversions:
> +----------------------
> +
> +This pipeline uses the preprocess encode entity to route frames directly
> +from the CSI to the IC, to carry out scaling up to 1024x1024 resolution,
> +CSC, flipping, and image rotation:
> +
> +-> ipuX_csiY direct pad -> ipuX_ic_prp -> ipuX_ic_prpenc -> capture node
> +
> +Motion Compensated De-interlace:
> +--------------------------------
> +
> +This pipeline routes frames from the CSI direct pad to the VDIC entity to
> +support motion-compensated de-interlacing (high motion mode only),
> +scaling up to 1024x1024, CSC, flip, and rotation:
> +
> +-> ipuX_csiY direct pad -> ipuX_vdic direct pad -> ipuX_ic_prp ->
> +   ipuX_ic_prpvf -> capture node
> +
> +
> +Usage Notes
> +-----------
> +
> +Many of the subdevs require information from the active sensor in the
> +current pipeline when configuring pad formats. Therefore the media links
> +should be established before configuring the media pad formats.
> +
> +Similarly, the capture device interfaces inherit controls from the
> +active entities in the current pipeline at link-setup time. Therefore
> +the capture device node links should be the last links established in
> +order for the capture interfaces to "see" and inherit all possible
> +controls.
> +
> +The following are usage notes for the i.MX6 Sabre-series reference
> +platforms:
> +
> +
> +SabreLite with OV5642 and OV5640
> +--------------------------------
> +
> +This platform requires the OmniVision OV5642 module with a parallel
> +camera interface, and the OV5640 module with a MIPI CSI-2
> +interface. Both modules are available from Boundary Devices:
> +
> +https://boundarydevices.com/products/nit6x_5mp
> +https://boundarydevices.com/product/nit6x_5mp_mipi
> +
> +Note that if only one camera module is available, the other sensor
> +node can be disabled in the device tree.
> +
> +The OV5642 module is connected to the parallel bus input on the i.MX
> +internal video mux to IPU1 CSI0. Its i2c interface connects to i2c bus 2.
> +
> +The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI CSI-2
> +receiver, and the four virtual channel outputs from the receiver are
> +routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1 CSI1,
> +vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The OV5640 is
> +also connected to i2c bus 2 on the SabreLite, therefore the OV5642 and
> +OV5640 must not share the same i2c slave address.
> +
> +The following basic example configures unprocessed video capture
> +pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
> +the OV5640 (transmitting on mipi csi-2 virtual channel 1) is routed
> +to ipu1_csi1. Both sensors are configured to output 640x480, the
> +OV5642 outputs YUYV2X8, the OV5640 UYVY2X8:
> +
> +.. code-block:: none
> +
> +   # Setup links for OV5642
> +   media-ctl -l '"ov5642 1-0042":0 -> "ipu1_csi0_mux":1[1]'
> +   media-ctl -l '"ipu1_csi0_mux":2 -> "ipu1_csi0":0[1]'
> +   media-ctl -l '"ipu1_csi0":2 -> "ipu1_csi0 capture":0[1]'
> +   # Setup links for OV5640
> +   media-ctl -l '"ov5640_mipi 1-0040":0 -> "imx6-mipi-csi2":0[1]'
> +   media-ctl -l '"imx6-mipi-csi2":2 -> "ipu1_csi1":0[1]'
> +   media-ctl -l '"ipu1_csi1":2 -> "ipu1_csi1 capture":0[1]'
> +   # Configure pads for OV5642 pipeline
> +   media-ctl -V "\"ov5642 1-0042\":0 [fmt:YUYV2X8/640x480 field:none]"
> +   media-ctl -V "\"ipu1_csi0_mux\":2 [fmt:YUYV2X8/640x480 field:none]"
> +   media-ctl -V "\"ipu1_csi0\":2 [fmt:YUYV2X8/640x480 field:none]"
> +   # Configure pads for OV5640 pipeline
> +   media-ctl -V "\"ov5640_mipi 1-0040\":0 [fmt:UYVY2X8/640x480 field:none]"
> +   media-ctl -V "\"imx6-mipi-csi2\":2 [fmt:UYVY2X8/640x480 field:none]"
> +   media-ctl -V "\"ipu1_csi1\":2 [fmt:UYVY2X8/640x480 field:none]"
> +
> +Streaming can then begin independently on the capture device nodes
> +"ipu1_csi0 capture" and "ipu1_csi1 capture".
> +
> +SabreAuto with ADV7180 decoder
> +------------------------------
> +
> +On the SabreAuto, an on-board ADV7180 SD decoder is connected to the
> +parallel bus input on the internal video mux to IPU1 CSI0.
> +
> +The following example configures a pipeline to capture from the ADV7180
> +video decoder, assuming NTSC 720x480 input signals, with Motion
> +Compensated de-interlacing. Pad field types assume the adv7180 outputs
> +"alternate", which the ipu1_csi0 entity converts to "seq-tb" at its
> +source pad. $outputfmt can be any format supported by the ipu1_ic_prpvf
> +entity at its output pad:
> +
> +.. code-block:: none
> +
> +   # Setup links
> +   media-ctl -l '"adv7180 4-0021":0 -> "ipu1_csi0_mux":1[1]'
> +   media-ctl -l '"ipu1_csi0_mux":2 -> "ipu1_csi0":0[1]'
> +   media-ctl -l '"ipu1_csi0":2 -> "ipu1_vdic":0[1]'
> +   media-ctl -l '"ipu1_vdic":2 -> "ipu1_ic_prp":0[1]'
> +   media-ctl -l '"ipu1_ic_prp":2 -> "ipu1_ic_prpvf":0[1]'
> +   media-ctl -l '"ipu1_ic_prpvf":1 -> "ipu1_ic_prpvf capture":0[1]'
> +   # Configure pads
> +   media-ctl -V "\"adv7180 4-0021\":0 [fmt:UYVY2X8/720x480]"
> +   media-ctl -V "\"ipu1_csi0_mux\":2 [fmt:UYVY2X8/720x480 field:alternate]"
> +   media-ctl -V "\"ipu1_csi0\":1 [fmt:AYUV32/720x480 field:seq-tb]"
> +   media-ctl -V "\"ipu1_vdic\":2 [fmt:AYUV32/720x480 field:none]"
> +   media-ctl -V "\"ipu1_ic_prp\":2 [fmt:AYUV32/720x480 field:none]"
> +   media-ctl -V "\"ipu1_ic_prpvf\":1 [fmt:$outputfmt field:none]"
> +
> +Streaming can then begin on the capture device node at
> +"ipu1_ic_prpvf capture".
> +
> +This platform accepts Composite Video analog inputs to the ADV7180 on
> +Ain1 (connector J42).
> +
> +Frame Interval Monitor
> +----------------------
> +
> +The adv718x decoders can occasionally send corrupt fields during
> +NTSC/PAL signal re-sync (too few or too many video lines). When
> +this happens, the IPU triggers a mechanism to re-establish vertical
> +sync by adding 1 dummy line every frame, which causes a rolling effect
> +from image to image, and can last a long time before a stable image is
> +recovered. Or sometimes the mechanism doesn't work at all, causing a
> +permanent split image (one frame contains lines from two consecutive
> +captured images).
> +
> +From experiment it was found that during image rolling, the frame
> +intervals (elapsed time between two EOFs) drop below the nominal
> +value for the current standard by about one video line time (roughly
> +60 usec), and remain at that value until rolling stops.
> +
> +While the reason for this observation isn't known (the IPU dummy
> +line mechanism should show an increase in the intervals by 1 line
> +time every frame, not a fixed value), we can use it to detect the
> +corrupt fields using a frame interval monitor. If the FIM detects a
> +bad frame interval, a subdev event is sent. In response, userland can
> +issue a streaming restart to correct the rolling/split image.
> +
> +The FIM is implemented in the ipuX_csiY entity, and the entities that
> +generate End-Of-Frame interrupts call into the FIM to monitor the frame
> +intervals: ipuX_ic_prpenc and ipuX_ic_prpvf. Userland can register for
> +the FIM event notifications on the ipuX_csiY subdev device node
> +(V4L2_EVENT_IMX_FRAME_INTERVAL).
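As a userspace illustration, subscribing to that event on the CSI subdev node
could look roughly like the sketch below (error handling omitted; the event ID
comes from the uapi header added by this series, and the installed header path
is an assumption):

	#include <sys/ioctl.h>
	#include <linux/videodev2.h>
	#include <media/imx.h>	/* assumed install path of include/uapi/media/imx.h */

	static int wait_for_fim_error(int csi_subdev_fd)
	{
		struct v4l2_event_subscription sub = {
			.type = V4L2_EVENT_IMX_FRAME_INTERVAL,
		};
		struct v4l2_event ev;

		if (ioctl(csi_subdev_fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
			return -1;

		/* on a blocking fd this waits until the FIM flags a bad interval;
		 * the caller can then stop and restart streaming on the capture node */
		return ioctl(csi_subdev_fd, VIDIOC_DQEVENT, &ev);
	}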
> +
> +The ipuX_csiY entity includes custom controls to tune the FIM. If one
> +of these controls is changed during streaming, the FIM is reset and
> +continues with the new settings.
> +
> +- V4L2_CID_IMX_FIM_ENABLE
> +
> +Enable/disable the FIM.
> +
> +- V4L2_CID_IMX_FIM_NUM
> +
> +How many frame interval errors to average before comparing against the
> +nominal frame interval reported by the sensor. This can reduce noise
> +from interrupt latency.
> +
> +- V4L2_CID_IMX_FIM_TOLERANCE_MIN
> +
> +If the averaged intervals fall outside the nominal frame interval by
> +more than this amount, in microseconds, a frame interval error is
> +reported (see above) so that userland can restart streaming.
> +
> +- V4L2_CID_IMX_FIM_TOLERANCE_MAX
> +
> +If any interval errors are higher than this value, those error samples
> +are discarded and do not enter into the average. This can be used to
> +discard really high interval errors that might be due to very high
> +system load, causing excessive interrupt latencies.
> +
> +- V4L2_CID_IMX_FIM_NUM_SKIP
> +
> +How many frames to skip after a FIM reset or stream restart before
> +FIM begins to average intervals. It has been found that there can
> +be a few bad frame intervals after stream restart which are not
> +attributed to adv718x sending a corrupt field, so this is used to
> +skip those frames to prevent unnecessary restarts.
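Putting the controls above together, a purely illustrative userspace sketch for
tuning the FIM on the CSI subdev node (the values are made up, not recommended
defaults):

	struct v4l2_control ctrl = { .id = V4L2_CID_IMX_FIM_ENABLE, .value = 1 };

	ioctl(csi_subdev_fd, VIDIOC_S_CTRL, &ctrl);

	ctrl.id = V4L2_CID_IMX_FIM_NUM;		/* average 8 intervals */
	ctrl.value = 8;
	ioctl(csi_subdev_fd, VIDIOC_S_CTRL, &ctrl);

	ctrl.id = V4L2_CID_IMX_FIM_TOLERANCE_MIN;	/* allowed error in usec */
	ctrl.value = 50;
	ioctl(csi_subdev_fd, VIDIOC_S_CTRL, &ctrl);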
> +
> +
> +SabreSD with MIPI CSI-2 OV5640
> +------------------------------
> +
> +Similarly to SabreLite, the SabreSD supports a parallel interface
> +OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642
> +connects to i2c bus 1 and the OV5640 to i2c bus 2.
> +
> +The device tree for SabreSD includes OF graphs for both the parallel
> +OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI
> +CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled.
> +The OV5640 module connects to MIPI connector J5 (sorry I don't have the
> +compatible module part number or URL).
> +
> +The following example configures a direct conversion pipeline to capture
> +from the OV5640. $sensorfmt can be any format supported by the OV5640.
> +$sensordim is the frame dimension part of $sensorfmt (minus the mbus
> +pixel code). $outputfmt can be any format supported by the
> +ipu1_ic_prpenc entity at its output pad:
> +
> +.. code-block:: none
> +
> +   # Setup links
> +   media-ctl -l '"ov5640_mipi 1-003c":0 -> "imx6-mipi-csi2":0[1]'
> +   media-ctl -l '"imx6-mipi-csi2":2 -> "ipu1_csi1":0[1]'
> +   media-ctl -l '"ipu1_csi1":1 -> "ipu1_ic_prp":0[1]'
> +   media-ctl -l '"ipu1_ic_prp":1 -> "ipu1_ic_prpenc":0[1]'
> +   media-ctl -l '"ipu1_ic_prpenc":1 -> "ipu1_ic_prpenc capture":0[1]'
> +   # Configure pads
> +   media-ctl -V "\"ov5640_mipi 1-003c\":0 [fmt:$sensorfmt field:none]"
> +   media-ctl -V "\"imx6-mipi-csi2\":2 [fmt:$sensorfmt field:none]"
> +   media-ctl -V "\"ipu1_csi1\":1 [fmt:AYUV32/$sensordim field:none]"
> +   media-ctl -V "\"ipu1_ic_prp\":1 [fmt:AYUV32/$sensordim field:none]"
> +   media-ctl -V "\"ipu1_ic_prpenc\":1 [fmt:$outputfmt field:none]"
> +
> +Streaming can then begin on "ipu1_ic_prpenc capture" node.
> +
> +
> +
> +Known Issues
> +------------
> +
> +1. When using 90 or 270 degree rotation control at capture resolutions
> +   near the IC resizer limit of 1024x1024, and combined with planar
> +   pixel formats (YUV420, YUV422p), frame capture will often fail with
> +   no end-of-frame interrupts from the IDMAC channel. To work around
> +   this, use lower resolution and/or packed formats (YUYV, RGB3, etc.)
> +   when 90 or 270 rotations are needed.
> +
> +
> +File list
> +---------
> +
> +drivers/staging/media/imx/
> +include/media/imx.h
> +include/uapi/media/imx.h
> +
> +References
> +----------
> +
> +[1] "i.MX 6Dual/6Quad Applications Processor Reference Manual"
> +[2] "i.MX 6Solo/6DualLite Applications Processor Reference Manual"
> +
> +
> +Authors
> +-------
> +Steve Longerbeam <steve_longerbeam@mentor.com>
> +Philipp Zabel <kernel@pengutronix.de>
> +Russell King - ARM Linux <linux@armlinux.org.uk>
> +
> +Copyright (C) 2012-2017 Mentor Graphics Inc.
> diff --git a/drivers/staging/media/Kconfig b/drivers/staging/media/Kconfig
> index ffb8fa7..05b55a8 100644
> --- a/drivers/staging/media/Kconfig
> +++ b/drivers/staging/media/Kconfig
> @@ -25,6 +25,8 @@ source "drivers/staging/media/cxd2099/Kconfig"
>  
>  source "drivers/staging/media/davinci_vpfe/Kconfig"
>  
> +source "drivers/staging/media/imx/Kconfig"
> +
>  source "drivers/staging/media/omap4iss/Kconfig"
>  
>  source "drivers/staging/media/s5p-cec/Kconfig"
> diff --git a/drivers/staging/media/Makefile b/drivers/staging/media/Makefile
> index a28e82c..6f50ddd 100644
> --- a/drivers/staging/media/Makefile
> +++ b/drivers/staging/media/Makefile
> @@ -1,6 +1,7 @@
>  obj-$(CONFIG_I2C_BCM2048)	+= bcm2048/
>  obj-$(CONFIG_VIDEO_SAMSUNG_S5P_CEC) += s5p-cec/
>  obj-$(CONFIG_DVB_CXD2099)	+= cxd2099/
> +obj-$(CONFIG_VIDEO_IMX_MEDIA)	+= imx/
>  obj-$(CONFIG_LIRC_STAGING)	+= lirc/
>  obj-$(CONFIG_VIDEO_DM365_VPFE)	+= davinci_vpfe/
>  obj-$(CONFIG_VIDEO_OMAP4)	+= omap4iss/
> diff --git a/drivers/staging/media/imx/Kconfig b/drivers/staging/media/imx/Kconfig
> new file mode 100644
> index 0000000..722ed55
> --- /dev/null
> +++ b/drivers/staging/media/imx/Kconfig
> @@ -0,0 +1,7 @@
> +config VIDEO_IMX_MEDIA
> +	tristate "i.MX5/6 V4L2 media core driver"
> +	depends on MEDIA_CONTROLLER && VIDEO_V4L2 && ARCH_MXC && IMX_IPUV3_CORE
> +	---help---
> +	  Say yes here to enable support for video4linux media controller
> +	  driver for the i.MX5/6 SOC.
> +
> diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
> new file mode 100644
> index 0000000..ba8e4fb
> --- /dev/null
> +++ b/drivers/staging/media/imx/Makefile
> @@ -0,0 +1,6 @@
> +imx-media-objs := imx-media-dev.o imx-media-internal-sd.o imx-media-of.o
> +imx-media-common-objs := imx-media-utils.o imx-media-fim.o
> +
> +obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
> +obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
> +
> diff --git a/drivers/staging/media/imx/TODO b/drivers/staging/media/imx/TODO
> new file mode 100644
> index 0000000..f6d2bac
> --- /dev/null
> +++ b/drivers/staging/media/imx/TODO
> @@ -0,0 +1,36 @@
> +
> +- Finish v4l2-compliance
> +
> +- imx-csi subdev is not being autoloaded as a kernel module, probably
> +  because ipu_add_client_devices() does not register the IPU client
> +  platform devices, but only allocates those devices.

As Russell points out, this is an issue with the ipu-v3 driver, which
needs to be fixed to stop setting the ipu client devices' dev->of_node
field.

> +- Currently, registering for notifications from subdevs is only
> +  possible through the subdev device nodes and not through the main
> +  capture device node. Need to come up with a way to find the capture
> +  device in the current pipeline that owns the subdev that sent the
> +  notify.
> +
> +- Clean up and move the ov5642 subdev driver to drivers/media/i2c, and
> +  create the binding docs for it.

This is done already, right?

> +- The Frame Interval Monitor could be exported to v4l2-core for
> +  general use.
> +
> +- The subdev that is the original source of video data (referred to as
> +  the "sensor" in the code) is called from various subdevs in the
> +  pipeline in order to set/query the video standard ({g|s|enum}_std)
> +  and to get/set the original frame interval from the capture interface
> +  ([gs]_parm). Instead, the entities that need this info should call
> +  their direct neighbor, and the neighbor should propagate the call to
> +  its own neighbor in turn if necessary.

Especially the [gs]_parm fix is necessary to present userspace with the
correct frame interval in case of frame skipping in the CSI.
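For what it's worth, the "ask the direct neighbour" approach could be sketched
with the standard media-controller helpers roughly as follows (priv, SINK_PAD
and fi are placeholders, not the driver's actual names):

	struct media_pad *remote = media_entity_remote_pad(&priv->pads[SINK_PAD]);
	struct v4l2_subdev *upstream;

	if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
		return -ENODEV;

	upstream = media_entity_to_v4l2_subdev(remote->entity);
	/* forward the frame-interval query to the immediate upstream subdev,
	 * which can adjust for its own frame skipping before answering */
	return v4l2_subdev_call(upstream, video, g_frame_interval, fi);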

> +- At driver load time, the device-tree node that is the original source
> +  (the "sensor") is parsed to record its media bus configuration, and
> +  this info is required in various subdevs to set up the pipeline.
> +  Laurent Pinchart argues that instead the subdev should call its
> +  neighbor's g_mbus_config op (which should be propagated if necessary)
> +  to get this info. However, Hans Verkuil is planning to remove the
> +  g_mbus_config op. For now this driver uses the parsed DT mbus config
> +  method until this issue is resolved.
> +
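For reference, the "parsed DT mbus config" method mentioned above amounts to
something like the following at probe time (a sketch only; sensor_np is assumed
to be the sensor's DT node, and the helpers are those from <media/v4l2-of.h>
and <linux/of_graph.h> in this kernel):

	struct v4l2_of_endpoint ep;
	struct device_node *endpoint;
	int ret;

	endpoint = of_graph_get_next_endpoint(sensor_np, NULL);
	if (!endpoint)
		return -EINVAL;

	ret = v4l2_of_parse_endpoint(endpoint, &ep);
	of_node_put(endpoint);
	if (ret)
		return ret;

	/* ep.bus_type and ep.bus.parallel / ep.bus.mipi_csi2 now describe
	 * the sensor's bus, in place of calling the neighbour's g_mbus_config */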
> diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
> new file mode 100644
> index 0000000..e2041ad
> --- /dev/null
> +++ b/drivers/staging/media/imx/imx-media-dev.c
[...]
> +static inline u32 pixfmt_to_colorspace(const struct imx_media_pixfmt *fmt)
> +{
> +	return (fmt->cs == IPUV3_COLORSPACE_RGB) ?
> +		V4L2_COLORSPACE_SRGB : V4L2_COLORSPACE_SMPTE170M;
> +}

This ...

[...]
> +int imx_media_mbus_fmt_to_pix_fmt(struct v4l2_pix_format *pix,
> +				  struct v4l2_mbus_framefmt *mbus,
> +				  const struct imx_media_pixfmt *cc)
> +{
> +	u32 stride;
> +
> +	if (!cc) {
> +		cc = imx_media_find_format(0, mbus->code, true, false);
> +		if (!cc)
> +			return -EINVAL;
> +	}
> +
> +	stride = cc->planar ? mbus->width : (mbus->width * cc->bpp) >> 3;
> +
> +	pix->width = mbus->width;
> +	pix->height = mbus->height;
> +	pix->pixelformat = cc->fourcc;
> +	pix->colorspace = pixfmt_to_colorspace(cc);

... is not right. The colorspace should be taken from the input pad
colorspace everywhere (except for the IC output pad in the future, once
that supports changing YCbCr encoding and quantization), not guessed
based on the media bus format.
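Concretely, that would mean imx_media_mbus_fmt_to_pix_fmt() copying the
colorimetry negotiated on the pad instead of deriving it, something along
these lines (a sketch only; field names are those of struct
v4l2_mbus_framefmt and struct v4l2_pix_format):

	pix->colorspace   = mbus->colorspace;
	pix->ycbcr_enc    = mbus->ycbcr_enc;
	pix->quantization = mbus->quantization;
	pix->xfer_func    = mbus->xfer_func;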

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16 12:40     ` Russell King - ARM Linux
@ 2017-02-16 13:09       ` Russell King - ARM Linux
  2017-02-16 14:20         ` Russell King - ARM Linux
  2017-02-16 18:44       ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 13:09 UTC (permalink / raw)
  To: Steve Longerbeam, Laurent Pinchart, Hans Verkuil, Mauro Carvalho Chehab
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	pavel, shuah, devel, markus.heiser, Steve Longerbeam,
	robert.jarzmik, geert, p.zabel, linux-media, devicetree, kernel,
	arnd, tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	laurent.pinchart+renesas, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel,
	jean-christophe.trotin, sakari.ailus, fabio.estevam, shawnguo,
	sudipm.mukherjee

On Thu, Feb 16, 2017 at 12:40:27PM +0000, Russell King - ARM Linux wrote:
> However, the following is primarily directed at Laurent as the one who
> introduced the BUG_ON() in question...
> 
> NEVER EVER USE BUG_ON() IN A PATH THAT CAN RETURN AN ERROR.
> 
> It's possible to find Linus rants about this, eg,
> https://www.spinics.net/lists/stable/msg146439.html
> 
>  I should have reacted to the damn added BUG_ON() lines. I suspect I
>  will have to finally just remove the idiotic BUG_ON() concept once and
>  for all, because there is NO F*CKING EXCUSE to knowingly kill the
>  kernel.
> 
> Also: http://yarchive.net/comp/linux/BUG.html
> 
>  Rule of thumb: BUG() is only good for something that never happens and
>  that we really have no other option for (ie state is so corrupt that
>  continuing is deadly).
> 
> So, _unless_ people want to see BUG_ON() removed from the kernel, I
> strongly suggest to _STOP_ using it as "we didn't like the function
> arguments, let's use it as an assert() statement instead of returning
> an error."
> 
> There's no excuse what so ever to be killing the machine in
> media_create_pad_link().  If it doesn't like a NULL pointer, it's damn
> well got an error path to report that fact.  Use that mechanism and
> stop needlessly killing the kernel.
> 
> BUG_ON() IS NOT ASSERT().  DO NOT USE IT AS SUCH.
> 
> Linus is absolutely right about BUG_ON() - it hurts debuggability,
> because now the only way to do further tests is to reboot the damned
> machine after removing those fscking BUG_ON()s that should *never*
> have been there in the first place.
> 
> As Linus went on to say:
> 
>  And dammit, if anybody else feels that they had done "debugging
>  messages with BUG_ON()", I would suggest you
> 
>   (a) rethink your approach to programming
> 
>   (b) send me patches to remove the crap entirely, or make them real
>  *DEBUGGING* messages, not "kill the whole machine" messages.
> 
>  I've ranted against people using BUG_ON() for debugging in the past.
>  Why the f*ck does this still happen? And Andrew - please stop taking
>  those kinds of patches! Lookie here:
> 
>      https://lwn.net/Articles/13183/
> 
>  so excuse me for being upset that people still do this shit almost 15
>  years later.
> 
> So I suggest people heed that advice and start fixing these stupid
> BUG_ON()s that they've created.

More crap.

If the "complete" method fails (or, in fact, anything in
v4l2_async_test_notify() fails) then all hell breaks loose, because
of the total lack of clean up (and no, this isn't anything to do with
some stupid justification of those BUG_ON()s above.)

v4l2_async_notifier_register() gets called, it adds the notifier to
the global notifier list.  v4l2_async_test_notify() gets called.  It
returns an error, which is propagated out of
v4l2_async_notifier_register().

So the caller thinks that v4l2_async_notifier_register() failed, which
will cause imx_media_probe() to fail, causing imxmd->subdev_notifier
to be kfree()'d.  We now have a use-after free bug.

Second case.  v4l2_async_register_subdev().  Almost exactly the same,
except in this case adding sd->async_list to the notifier->done list
may have succeeded, and failure after that, again, results in an
in-use list_head being kfree()'d.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-16 13:02   ` Philipp Zabel
@ 2017-02-16 13:44     ` Russell King - ARM Linux
  2017-02-17  1:33     ` Steve Longerbeam
  1 sibling, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 13:44 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, Feb 16, 2017 at 02:02:03PM +0100, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> > +- imx-csi subdev is not being autoloaded as a kernel module, probably
> > +  because ipu_add_client_devices() does not register the IPU client
> > +  platform devices, but only allocates those devices.
> 
> As Russell points out, this is an issue with the ipu-v3 driver, which
> needs to be fixed to stop setting the ipu client devices' dev->of_node
> field.

From my local testing (albeit the shambles that is bits of v4l2), setting
dev->of_node is not necessary for imx-drm - imx-drm comes up fine without.

Fixing _this_ code for that is not too difficult - it's a matter of:

	priv->sd.of_node = pdata->of_node;

in imx_csi_probe().  However, the difficult bit is the poor state of
code in v4l2, particularly the v4l2-async crap.  Right now, fixing the
module autoloading will oops the kernel, so it's best that module
autoloading remains broken for the time being.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16 13:09       ` Russell King - ARM Linux
@ 2017-02-16 14:20         ` Russell King - ARM Linux
  2017-02-16 19:07           ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 14:20 UTC (permalink / raw)
  To: Steve Longerbeam, Laurent Pinchart, Hans Verkuil, Mauro Carvalho Chehab
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick,
	songjun.wu, pavel, robert.jarzmik, devel, markus.heiser,
	Steve Longerbeam, shuah, geert, linux-media, devicetree, kernel,
	arnd, tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	laurent.pinchart+renesas, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel,
	jean-christophe.trotin, p.zabel, fabio.estevam, shawnguo,
	sudipm.mukherjee

On Thu, Feb 16, 2017 at 01:09:35PM +0000, Russell King - ARM Linux wrote:
> On Thu, Feb 16, 2017 at 12:40:27PM +0000, Russell King - ARM Linux wrote:
> > However, the following is primarily directed at Laurent as the one who
> > introduced the BUG_ON() in question...
> > 
> > NEVER EVER USE BUG_ON() IN A PATH THAT CAN RETURN AN ERROR.
> > 
> > It's possible to find Linus rants about this, eg,
> > https://www.spinics.net/lists/stable/msg146439.html
> > 
> >  I should have reacted to the damn added BUG_ON() lines. I suspect I
> >  will have to finally just remove the idiotic BUG_ON() concept once and
> >  for all, because there is NO F*CKING EXCUSE to knowingly kill the
> >  kernel.
> > 
> > Also: http://yarchive.net/comp/linux/BUG.html
> > 
> >  Rule of thumb: BUG() is only good for something that never happens and
> >  that we really have no other option for (ie state is so corrupt that
> >  continuing is deadly).
> > 
> > So, _unless_ people want to see BUG_ON() removed from the kernel, I
> > strongly suggest to _STOP_ using it as "we didn't like the function
> > arguments, let's use it as an assert() statement instead of returning
> > an error."
> > 
> > There's no excuse what so ever to be killing the machine in
> > media_create_pad_link().  If it doesn't like a NULL pointer, it's damn
> > well got an error path to report that fact.  Use that mechanism and
> > stop needlessly killing the kernel.
> > 
> > BUG_ON() IS NOT ASSERT().  DO NOT USE IT AS SUCH.
> > 
> > Linus is absolutely right about BUG_ON() - it hurts debuggability,
> > because now the only way to do further tests is to reboot the damned
> > machine after removing those fscking BUG_ON()s that should *never*
> > have been there in the first place.
> > 
> > As Linus went on to say:
> > 
> >  And dammit, if anybody else feels that they had done "debugging
> >  messages with BUG_ON()", I would suggest you
> > 
> >   (a) rethink your approach to programming
> > 
> >   (b) send me patches to remove the crap entirely, or make them real
> >  *DEBUGGING* messages, not "kill the whole machine" messages.
> > 
> >  I've ranted against people using BUG_ON() for debugging in the past.
> >  Why the f*ck does this still happen? And Andrew - please stop taking
> >  those kinds of patches! Lookie here:
> > 
> >      https://lwn.net/Articles/13183/
> > 
> >  so excuse me for being upset that people still do this shit almost 15
> >  years later.
> > 
> > So I suggest people heed that advice and start fixing these stupid
> > BUG_ON()s that they've created.
> 
> More crap.
> 
> If the "complete" method fails (or, in fact, anything in
> v4l2_async_test_notify() fails) then all hell breaks loose, because
> of the total lack of clean up (and no, this isn't anything to do with
> some stupid justification of those BUG_ON()s above.)
> 
> v4l2_async_notifier_register() gets called, it adds the notifier to
> the global notifier list.  v4l2_async_test_notify() gets called.  It
> returns an error, which is propagated out of
> v4l2_async_notifier_register().
> 
> So the caller thinks that v4l2_async_notifier_register() failed, which
> will cause imx_media_probe() to fail, causing imxmd->subdev_notifier
> to be kfree()'d.  We now have a use-after free bug.
> 
> Second case.  v4l2_async_register_subdev().  Almost exactly the same,
> except in this case adding sd->async_list to the notifier->done list
> may have succeeded, and failure after that, again, results in an
> in-use list_head being kfree()'d.

And here's a patch which, combined with the fixes for ipuv3, results in
everything appearing to work properly.  Feel free to tear out the bits
for your area and turn them into proper patches.

 drivers/gpu/ipu-v3/ipu-common.c           |  6 ---
 drivers/media/media-entity.c              |  7 +--
 drivers/media/v4l2-core/v4l2-async.c      | 71 +++++++++++++++++++++++--------
 drivers/staging/media/imx/imx-media-csi.c |  1 +
 drivers/staging/media/imx/imx-media-dev.c |  2 +-
 5 files changed, 59 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
index 97218af4fe75..8368e6f766ee 100644
--- a/drivers/gpu/ipu-v3/ipu-common.c
+++ b/drivers/gpu/ipu-v3/ipu-common.c
@@ -1238,12 +1238,6 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
 			platform_device_put(pdev);
 			goto err_register;
 		}
-
-		/*
-		 * Set of_node only after calling platform_device_add. Otherwise
-		 * the platform:imx-ipuv3-crtc modalias won't be used.
-		 */
-		pdev->dev.of_node = of_node;
 	}
 
 	return 0;
diff --git a/drivers/media/media-entity.c b/drivers/media/media-entity.c
index f9f723f5e4f0..154593a168df 100644
--- a/drivers/media/media-entity.c
+++ b/drivers/media/media-entity.c
@@ -625,9 +625,10 @@ media_create_pad_link(struct media_entity *source, u16 source_pad,
 	struct media_link *link;
 	struct media_link *backlink;
 
-	BUG_ON(source == NULL || sink == NULL);
-	BUG_ON(source_pad >= source->num_pads);
-	BUG_ON(sink_pad >= sink->num_pads);
+	if (WARN_ON(source == NULL || sink == NULL) ||
+	    WARN_ON(source_pad >= source->num_pads) ||
+	    WARN_ON(sink_pad >= sink->num_pads))
+		return -EINVAL;
 
 	link = media_add_link(&source->links);
 	if (link == NULL)
diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
index 5bada202b2d3..09934fb96a8d 100644
--- a/drivers/media/v4l2-core/v4l2-async.c
+++ b/drivers/media/v4l2-core/v4l2-async.c
@@ -94,7 +94,7 @@ static struct v4l2_async_subdev *v4l2_async_belongs(struct v4l2_async_notifier *
 }
 
 static int v4l2_async_test_notify(struct v4l2_async_notifier *notifier,
-				  struct v4l2_subdev *sd,
+				  struct list_head *new, struct v4l2_subdev *sd,
 				  struct v4l2_async_subdev *asd)
 {
 	int ret;
@@ -107,22 +107,36 @@ static int v4l2_async_test_notify(struct v4l2_async_notifier *notifier,
 	if (notifier->bound) {
 		ret = notifier->bound(notifier, sd, asd);
 		if (ret < 0)
-			return ret;
+			goto err_bind;
 	}
+
 	/* Move from the global subdevice list to notifier's done */
-	list_move(&sd->async_list, &notifier->done);
+	list_move(&sd->async_list, new);
 
 	ret = v4l2_device_register_subdev(notifier->v4l2_dev, sd);
-	if (ret < 0) {
-		if (notifier->unbind)
-			notifier->unbind(notifier, sd, asd);
-		return ret;
-	}
+	if (ret < 0)
+		goto err_register;
 
-	if (list_empty(&notifier->waiting) && notifier->complete)
-		return notifier->complete(notifier);
+	if (list_empty(&notifier->waiting) && notifier->complete) {
+		ret = notifier->complete(notifier);
+		if (ret < 0)
+			goto err_complete;
+	}
 
 	return 0;
+
+err_complete:
+	v4l2_device_unregister_subdev(sd);
+err_register:
+	if (notifier->unbind)
+		notifier->unbind(notifier, sd, asd);
+err_bind:
+	sd->notifier = NULL;
+	sd->asd = NULL;
+	list_add(&asd->list, &notifier->waiting);
+	/* always take this off the list on error */
+	list_del(&sd->async_list);
+	return ret;
 }
 
 static void v4l2_async_cleanup(struct v4l2_subdev *sd)
@@ -139,7 +153,8 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 {
 	struct v4l2_subdev *sd, *tmp;
 	struct v4l2_async_subdev *asd;
-	int i;
+	LIST_HEAD(new);
+	int ret, i;
 
 	if (!notifier->num_subdevs || notifier->num_subdevs > V4L2_MAX_SUBDEVS)
 		return -EINVAL;
@@ -172,22 +187,39 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 	list_add(&notifier->list, &notifier_list);
 
 	list_for_each_entry_safe(sd, tmp, &subdev_list, async_list) {
-		int ret;
-
 		asd = v4l2_async_belongs(notifier, sd);
 		if (!asd)
 			continue;
 
-		ret = v4l2_async_test_notify(notifier, sd, asd);
+		ret = v4l2_async_test_notify(notifier, &new, sd, asd);
 		if (ret < 0) {
-			mutex_unlock(&list_lock);
-			return ret;
+			/*
+			 * On failure, v4l2_async_test_notify() takes the
+			 * sd off the subdev list.  Add it back.
+			 */
+			list_add(&sd->async_list, &subdev_list);
+			goto err_notify;
 		}
 	}
 
+	list_splice(&new, &notifier->done);
+
 	mutex_unlock(&list_lock);
 
 	return 0;
+
+err_notify:
+	list_del(&notifier->list);
+	list_for_each_entry_safe(sd, tmp, &new, async_list) {
+		v4l2_device_unregister_subdev(sd);
+		list_move(&sd->async_list, &subdev_list);
+		if (notifier->unbind)
+			notifier->unbind(notifier, sd, sd->asd);
+		sd->notifier = NULL;
+		sd->asd = NULL;
+	}
+	mutex_unlock(&list_lock);
+	return ret;
 }
 EXPORT_SYMBOL(v4l2_async_notifier_register);
 
@@ -213,6 +245,7 @@ void v4l2_async_notifier_unregister(struct v4l2_async_notifier *notifier)
 	list_del(&notifier->list);
 
 	list_for_each_entry_safe(sd, tmp, &notifier->done, async_list) {
+		struct v4l2_async_subdev *asd = sd->asd;
 		struct device *d;
 
 		d = get_device(sd->dev);
@@ -223,7 +256,7 @@ void v4l2_async_notifier_unregister(struct v4l2_async_notifier *notifier)
 		device_release_driver(d);
 
 		if (notifier->unbind)
-			notifier->unbind(notifier, sd, sd->asd);
+			notifier->unbind(notifier, sd, asd);
 
 		/*
 		 * Store device at the device cache, in order to call
@@ -288,7 +321,9 @@ int v4l2_async_register_subdev(struct v4l2_subdev *sd)
 	list_for_each_entry(notifier, &notifier_list, list) {
 		struct v4l2_async_subdev *asd = v4l2_async_belongs(notifier, sd);
 		if (asd) {
-			int ret = v4l2_async_test_notify(notifier, sd, asd);
+			int ret = v4l2_async_test_notify(notifier,
+							 &notifier->done,
+							 sd, asd);
 			mutex_unlock(&list_lock);
 			return ret;
 		}
diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
index 9d9ec03436e4..507026feee91 100644
--- a/drivers/staging/media/imx/imx-media-csi.c
+++ b/drivers/staging/media/imx/imx-media-csi.c
@@ -1427,6 +1427,7 @@ static int imx_csi_probe(struct platform_device *pdev)
 	priv->sd.entity.ops = &csi_entity_ops;
 	priv->sd.entity.function = MEDIA_ENT_F_PROC_VIDEO_PIXEL_FORMATTER;
 	priv->sd.dev = &pdev->dev;
+	priv->sd.of_node = pdata->of_node;
 	priv->sd.owner = THIS_MODULE;
 	priv->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
 	priv->sd.grp_id = priv->csi_id ?
diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
index 60f45fe4b506..5b4dfc1fb6ab 100644
--- a/drivers/staging/media/imx/imx-media-dev.c
+++ b/drivers/staging/media/imx/imx-media-dev.c
@@ -197,7 +197,7 @@ static int imx_media_subdev_bound(struct v4l2_async_notifier *notifier,
 	struct imx_media_subdev *imxsd;
 	int ret = -EINVAL;
 
-	imxsd = imx_media_find_async_subdev(imxmd, sd->dev->of_node,
+	imxsd = imx_media_find_async_subdev(imxmd, sd->of_node,
 					    dev_name(sd->dev));
 	if (!imxsd)
 		goto out;


-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-16 10:27   ` Russell King - ARM Linux
@ 2017-02-16 17:53     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 17:53 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 02:27 AM, Russell King - ARM Linux wrote:
> On Wed, Feb 15, 2017 at 06:19:20PM -0800, Steve Longerbeam wrote:
>> Add the core media driver for i.MX SOC.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> Just as I reported on the 30th January:
>
> Applying: media: Add i.MX media core driver
> .git/rebase-apply/patch:614: new blank line at EOF.
> +
> .git/rebase-apply/patch:626: new blank line at EOF.
> +
> .git/rebase-apply/patch:668: new blank line at EOF.
> +
>
> These need fixing.

Missed these obviously, fixed.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-16 10:28   ` Russell King - ARM Linux
@ 2017-02-16 17:54     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 17:54 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 02:28 AM, Russell King - ARM Linux wrote:
> On Wed, Feb 15, 2017 at 06:19:25PM -0800, Steve Longerbeam wrote:
>> Adds MIPI CSI-2 Receiver subdev driver. This subdev is required
>> for sensors with a MIPI CSI2 interface.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>
> Just like I reported on the 30th January:
>
> .git/rebase-apply/patch:236: trailing whitespace.
>  *
> warning: 1 line adds whitespace errors.
>
> This needs fixing.
>

Fixed.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt
  2017-02-16 11:05   ` Russell King - ARM Linux
@ 2017-02-16 18:16     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 18:16 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 03:05 AM, Russell King - ARM Linux wrote:
> On Wed, Feb 15, 2017 at 06:19:30PM -0800, Steve Longerbeam wrote:
>> diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
>> index ae24b42..3cb97e2 100644
>> --- a/drivers/staging/media/imx/imx-media-csi.c
>> +++ b/drivers/staging/media/imx/imx-media-csi.c
>> @@ -531,6 +531,10 @@ static int csi_setup(struct csi_priv *priv)
>>
>>  	ipu_csi_set_window(priv->csi, &priv->crop);
>>
>> +	ipu_csi_set_downsize(priv->csi,
>> +			     priv->crop.width == 2 * outfmt->width,
>> +			     priv->crop.height == 2 * outfmt->height);
>> +
>
> This fails to build:
>
> ERROR: "ipu_csi_set_downsize" [drivers/staging/media/imx/imx-media-csi.ko] undefined!
>
> ipu_csi_set_downsize needs to be exported if we're going to use it in
> a module:
>

Yes I encountered the missing export too, forgot to mention it.
Philipp submitted a patch to dri-devel separately.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads
  2017-02-16 11:29   ` Philipp Zabel
@ 2017-02-16 18:19     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 18:19 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 03:29 AM, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> [...]
>> diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
>> index dd9d499..c43f85f 100644
>> --- a/drivers/staging/media/imx/imx-ic-prpencvf.c
>> +++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
>> @@ -806,16 +806,22 @@ static int prp_set_fmt(struct v4l2_subdev *sd,
>>  	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
>>  		cfg->try_fmt = sdformat->format;
>>  	} else {
>> -		priv->format_mbus[sdformat->pad] = sdformat->format;
>> +		struct v4l2_mbus_framefmt *f =
>> +			&priv->format_mbus[sdformat->pad];
>> +		struct v4l2_mbus_framefmt *outf =
>> +			&priv->format_mbus[PRPENCVF_SRC_PAD];
>> +
>> +		*f = sdformat->format;
>>  		priv->cc[sdformat->pad] = cc;
>> -		if (sdformat->pad == PRPENCVF_SRC_PAD) {
>> -			/*
>> -			 * update the capture device format if this is
>> -			 * the IDMAC output pad
>> -			 */
>> -			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
>> -						      &sdformat->format, cc);
>> +
>> +		/* propagate format to source pad */
>> +		if (sdformat->pad == PRPENCVF_SINK_PAD) {
>> +			outf->width = f->width;
>> +			outf->height = f->height;
>
> What about media bus format, field, and colorimetry?

Right, I need to propagate a default media bus format and field
that works.

As for colorimetry, I did see the work you are doing in your
branch, but was not sure if you were finished with that support.
Can you send me a patch?
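Roughly, the full sink-to-source propagation in prp_set_fmt() could copy the
remaining fields as well, e.g. (a sketch using the f/outf pointers from the
hunk quoted above; whether some fields should instead be forced to IPU-internal
defaults is the open question):

	/* propagate everything the IC source pad can pass through unchanged */
	outf->code         = f->code;
	outf->field        = f->field;
	outf->colorspace   = f->colorspace;
	outf->ycbcr_enc    = f->ycbcr_enc;
	outf->quantization = f->quantization;
	outf->xfer_func    = f->xfer_func;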

>
>>  		}
>> +
>> +		/* update the capture device format from output pad */
>> +		imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix, outf, cc);
>>  	}
>>
>>  	return 0;
>> diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
>> index 3e6b607..9d9ec03 100644
>> --- a/drivers/staging/media/imx/imx-media-csi.c
>> +++ b/drivers/staging/media/imx/imx-media-csi.c
>> @@ -1161,19 +1161,27 @@ static int csi_set_fmt(struct v4l2_subdev *sd,
>>  	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
>>  		cfg->try_fmt = sdformat->format;
>>  	} else {
>> +		struct v4l2_mbus_framefmt *f_direct, *f_idmac;
>> +
>>  		priv->format_mbus[sdformat->pad] = sdformat->format;
>>  		priv->cc[sdformat->pad] = cc;
>> -		/* Reset the crop window if this is the input pad */
>> -		if (sdformat->pad == CSI_SINK_PAD)
>> +
>> +		f_direct = &priv->format_mbus[CSI_SRC_PAD_DIRECT];
>> +		f_idmac = &priv->format_mbus[CSI_SRC_PAD_IDMAC];
>> +
>> +		if (sdformat->pad == CSI_SINK_PAD) {
>> +			/* reset the crop window */
>>  			priv->crop = crop;
>> -		else if (sdformat->pad == CSI_SRC_PAD_IDMAC) {
>> -			/*
>> -			 * update the capture device format if this is
>> -			 * the IDMAC output pad
>> -			 */
>> -			imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix,
>> -						      &sdformat->format, cc);
>> +
>> +			/* propagate format to source pads */
>> +			f_direct->width = crop.width;
>> +			f_direct->height = crop.height;
>> +			f_idmac->width = crop.width;
>> +			f_idmac->height = crop.height;
>
> This is missing also media bus format, field and colorimetry
> propagation.

Yep, will add that.

Steve

>
>>  		}
>> +
>> +		/* update the capture device format from IDMAC output pad */
>> +		imx_media_mbus_fmt_to_pix_fmt(&vdev->fmt.fmt.pix, f_idmac, cc);
>>  	}
>>
>>  	return 0;
>> diff --git a/drivers/staging/media/imx/imx-media-vdic.c b/drivers/staging/media/imx/imx-media-vdic.c
>> index 61e6017..55fb522 100644
>> --- a/drivers/staging/media/imx/imx-media-vdic.c
>> +++ b/drivers/staging/media/imx/imx-media-vdic.c
>> @@ -649,8 +649,21 @@ static int vdic_set_fmt(struct v4l2_subdev *sd,
>>  	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
>>  		cfg->try_fmt = sdformat->format;
>>  	} else {
>> -		priv->format_mbus[sdformat->pad] = sdformat->format;
>> +		struct v4l2_mbus_framefmt *f =
>> +			&priv->format_mbus[sdformat->pad];
>> +		struct v4l2_mbus_framefmt *outf =
>> +			&priv->format_mbus[VDIC_SRC_PAD_DIRECT];
>> +
>> +		*f = sdformat->format;
>>  		priv->cc[sdformat->pad] = cc;
>> +
>> +		/* propagate format to source pad */
>> +		if (sdformat->pad == VDIC_SINK_PAD_DIRECT ||
>> +		    sdformat->pad == VDIC_SINK_PAD_IDMAC) {
>> +			outf->width = f->width;
>> +			outf->height = f->height;
>> +			outf->field = V4L2_FIELD_NONE;
>
> This is missing colorimetry, too.
>
> regards
> Philipp
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16 11:37 ` [PATCH v4 00/36] i.MX Media Driver Russell King - ARM Linux
@ 2017-02-16 18:30   ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 18:30 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 03:37 AM, Russell King - ARM Linux wrote:
> Two problems.
>
> On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
>>   media: imx: propagate sink pad formats to source pads
>
> 1) It looks like all cases aren't being caught:
>
> - entity 74: ipu1_csi0 (3 pads, 4 links)
>              type V4L2 subdev subtype Unknown flags 0
>              device node name /dev/v4l-subdev13
>         pad0: Sink
>                 [fmt:SRGGB8/816x616 field:none]
>                 <- "ipu1_csi0_mux":2 [ENABLED]
>         pad1: Source
>                 [fmt:AYUV32/816x616 field:none
>                  crop.bounds:(0,0)/816x616
>                  crop:(0,0)/816x616]
>                 -> "ipu1_ic_prp":0 []
>                 -> "ipu1_vdic":0 []
>         pad2: Source
>                 [fmt:SRGGB8/816x616 field:none
>                  crop.bounds:(0,0)/816x616
>                  crop:(0,0)/816x616]
>                 -> "ipu1_csi0 capture":0 [ENABLED]
>
> While the size has been propagated to pad1, the format has not.

Right, Philipp also caught this. I need to finish propagating all
params from sink to source pads (mbus code and field, and colorimetry
eventually).

>
> 2) /dev/video* device node assignment
>
> I've no idea at the moment how the correct /dev/video* node should be
> chosen - initially with Philipp and your previous code, it was
> /dev/video3 after initial boot.  Philipp's was consistently /dev/video3.
> Yours changed to /dev/video7 when removing and re-inserting the modules
> (having fixed that locally.)  This version makes CSI0 be /dev/video7,
> but after a remove+reinsert, it becomes (eg) /dev/video8.
>
> /dev/v4l/by-path/platform-capture-subsystem-video-index4 also is not a
> stable path - the digit changes (it's supposed to be a stable path.)
> After a remove+reinsert, it becomes (eg)
> /dev/v4l/by-path/platform-capture-subsystem-video-index5.
> /dev/v4l/by-id doesn't contain a symlink for this either.
>
> What this means is that it's very hard to script the setup, because
> there's no easy way to know what device is the capture device.  While
> it may be possible to do:
>
> 	media-ctl -d /dev/media1 -p | \
> 		grep -A2 ': ipu1_csi0 capture' | \
> 			sed -n 's|.*\(/dev/video[0-9]*\).*|\1|p'
>
> that's hardly a nice solution - while it fixes the setup script, it
> doesn't stop the pain of having to delve around to find the correct
> device to use for gstreamer to test with.
>

I'll try to nail down the main capture node numbers, even after module
reload.
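One possible way to do that (an assumption on my side, not what the driver does
today) is to request a fixed node number per capture device at registration
time, with 'nr' being a stable per-pipeline index rather than -1 (first free):

	/* video_register_device() tries to reserve the requested /dev/videoN
	 * index when nr >= 0 and falls back to the first free one otherwise */
	ret = video_register_device(vfd, VFL_TYPE_GRABBER, nr);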

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16 12:40     ` Russell King - ARM Linux
  2017-02-16 13:09       ` Russell King - ARM Linux
@ 2017-02-16 18:44       ` Steve Longerbeam
  2017-02-16 19:09         ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 18:44 UTC (permalink / raw)
  To: Russell King - ARM Linux, Laurent Pinchart, Hans Verkuil,
	Mauro Carvalho Chehab
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick,
	songjun.wu, Steve Longerbeam, pavel, robert.jarzmik, devel,
	markus.heiser, laurent.pinchart+renesas, shuah, geert,
	linux-media, devicetree, kernel, arnd, mchehab, bparrot, robh+dt,
	horms+renesas, tiffany.lin, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel,
	jean-christophe.trotin, p.zabel, fabio.estevam, shawnguo,
	sudipm.mukherjee



On 02/16/2017 04:40 AM, Russell King - ARM Linux wrote:
> On Thu, Feb 16, 2017 at 11:52:06AM +0000, Russell King - ARM Linux wrote:
>> On Wed, Feb 15, 2017 at 06:19:22PM -0800, Steve Longerbeam wrote:
>>> +static const struct platform_device_id imx_csi_ids[] = {
>>> +	{ .name = "imx-ipuv3-csi" },
>>> +	{ },
>>> +};
>>> +MODULE_DEVICE_TABLE(platform, imx_csi_ids);
>>> +
>>> +static struct platform_driver imx_csi_driver = {
>>> +	.probe = imx_csi_probe,
>>> +	.remove = imx_csi_remove,
>>> +	.id_table = imx_csi_ids,
>>> +	.driver = {
>>> +		.name = "imx-ipuv3-csi",
>>> +	},
>>> +};
>>> +module_platform_driver(imx_csi_driver);
>>> +
>>> +MODULE_DESCRIPTION("i.MX CSI subdev driver");
>>> +MODULE_AUTHOR("Steve Longerbeam <steve_longerbeam@mentor.com>");
>>> +MODULE_LICENSE("GPL");
>>> +MODULE_ALIAS("platform:imx-ipuv3-csi");
>>
>> Just a reminder that automatic module loading of this is completely
>> broken right now (not your problem) due to this stupid idea in the
>> IPUv3 code:
>>
>> 		if (!ret)
>> 			ret = platform_device_add(pdev);
>> 		if (ret) {
>> 			platform_device_put(pdev);
>> 			goto err_register;
>> 		}
>>
>> 		/*
>> 		 * Set of_node only after calling platform_device_add. Otherwise
>> 		 * the platform:imx-ipuv3-crtc modalias won't be used.
>> 		 */
>> 		pdev->dev.of_node = of_node;
>>
>> setting pdev->dev.of_node changes the modalias exported to userspace,
>> so udev sees a DT based modalias, which causes it to totally miss any
>> driver using a non-DT based modalias.
>>
>> The IPUv3 code needs fixing, not only for imx-media-csi, but also for
>> imx-ipuv3-crtc too, because that module will also suffer the same
>> issue.
>>
>> The only solution is... don't fsck with dev->of_node assignment.  In
>> this case, it's probably much better to pass it in via platform data.
>> If you then absolutely must have dev->of_node, doing it in the driver
>> means that you avoid the modalias mess before the appropriate driver
>> is loaded.  However, that's still not a nice solution because the
>> modalias file still ends up randomly changing its contents.
>>
>> As I say, not _your_ problem, but it's still a problem that needs
>> solving, and I don't want it forgotten about.
>
> I've just hacked up a solution to this, and unfortunately it reveals a
> problem with Steve's code.  Picking out the imx & media-related messages:
>
> [    8.012191] imx_media_common: module is from the staging directory, the quality is unknown, you have been warned.
> [    8.018175] imx_media: module is from the staging directory, the quality is unknown, you have been warned.
> [    8.748345] imx-media: Registered subdev ipu1_csi0_mux
> [    8.753451] imx-media: Registered subdev ipu2_csi1_mux
> [    9.055196] imx219 0-0010: detected IMX219 sensor
> [    9.090733] imx6_mipi_csi2: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.092247] imx-media: Registered subdev imx219 0-0010
> [    9.334338] imx-media: Registered subdev imx6-mipi-csi2
> [    9.372452] imx_media_capture: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.378163] imx_media_capture: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.390033] imx_media_csi: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.394362] imx-media: Received unknown subdev ipu1_csi0

The root problem is here. I don't know why the CSI entities are not
being recognized. Can you share the changes you made?

So imx_media_subdev_bound() returns error because it didn't recognize
the subdev that was bound.

And for some reason, even though some of the subdev bound ops return
error, v4l2-core still calls the async completion notifier
(imx_media_probe_complete()).

I'll add some checks to imx_media_probe_complete() to try and detect
when not all subdevs were bound correctly to get around this issue.
That should prevent the kernel BUG() below.
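A rough sketch of such a guard (the counters and the notifier-to-imxmd helper
are hypothetical, not existing fields or functions of the driver):

	static int imx_media_probe_complete(struct v4l2_async_notifier *notifier)
	{
		struct imx_media_dev *imxmd = notifier2dev(notifier);

		/* bail out instead of creating links if any expected subdev
		 * failed to bind, so a NULL entity is never handed to
		 * media_create_pad_link() */
		if (imxmd->num_bound < imxmd->num_expected) {
			v4l2_err(&imxmd->v4l2_dev,
				 "only %d of %d subdevs bound, aborting\n",
				 imxmd->num_bound, imxmd->num_expected);
			return -ENODEV;
		}

		/* ... proceed with link creation and video device registration ... */
		return 0;
	}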

Steve


> [    9.394699] imx-ipuv3-csi: probe of imx-ipuv3-csi.0 failed with error -22
> [    9.394840] imx-media: Received unknown subdev ipu1_csi1
> [    9.394887] imx-ipuv3-csi: probe of imx-ipuv3-csi.1 failed with error -22
> [    9.394992] imx-media: Received unknown subdev ipu2_csi0
> [    9.395026] imx-ipuv3-csi: probe of imx-ipuv3-csi.4 failed with error -22
> [    9.395119] imx-media: Received unknown subdev ipu2_csi1
> [    9.395159] imx-ipuv3-csi: probe of imx-ipuv3-csi.5 failed with error -22
> [    9.411722] imx_media_vdic: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.412820] imx-media: Registered subdev ipu1_vdic
> [    9.424687] imx-media: Registered subdev ipu2_vdic
> [    9.436074] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.437455] imx-media: Registered subdev ipu1_ic_prp
> [    9.437788] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.447542] imx-media: Registered subdev ipu1_ic_prpenc
> [    9.455225] ipu1_ic_prpenc: Registered ipu1_ic_prpenc capture as /dev/video3
> [    9.459203] imx-media: Registered subdev ipu1_ic_prpvf
> [    9.460484] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
> [    9.460726] ipu1_ic_prpvf: Registered ipu1_ic_prpvf capture as /dev/video4
> [    9.460983] imx-media: Registered subdev ipu2_ic_prp
> [    9.461161] imx-media: Registered subdev ipu2_ic_prpenc
> [    9.461737] ipu2_ic_prpenc: Registered ipu2_ic_prpenc capture as /dev/video5
> [    9.463767] imx-media: Registered subdev ipu2_ic_prpvf
> [    9.464294] ipu2_ic_prpvf: Registered ipu2_ic_prpvf capture as /dev/video6
> [    9.464345] imx-media: imx_media_create_link: (null):1 -> ipu1_ic_prp:0
> [    9.464413] ------------[ cut here ]------------
> [    9.469134] kernel BUG at /home/rmk/git/linux-rmk/drivers/media/media-entity.c:628!
> [    9.476924] Internal error: Oops - BUG: 0 [#1] SMP ARM
> [    9.482246] Modules linked in: imx_media_ic(C+) imx_media_vdic(C) imx_media_csi(C) imx_media_capture(C) uvcvideo imx6_mipi_csi2(C) snd_soc_imx_audmux imx219 snd_soc_sgtl5000 video_multiplexer caam imx_sdma imx2_wdt snd_soc_fsl_ssi snd_soc_fsl_spdif imx_pcm_dma coda imx_thermal v4l2_mem2mem videobuf2_v4l2 videobuf2_dma_contig videobuf2_core videobuf2_vmalloc videobuf2_memops imx_media(C) imx_media_common(C) rc_pinnacle_pctv_hd nfsd dw_hdmi_cec dw_hdmi_ahb_audio etnaviv
> [    9.524500] CPU: 1 PID: 263 Comm: systemd-udevd Tainted: G         C      4.10.0-rc7+ #2112
> [    9.532995] Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
> [    9.539619] task: edef1880 task.stack: d03ca000
> [    9.544313] PC is at media_create_pad_link+0x134/0x140
> [    9.549541] LR is at imx_media_probe_complete+0x164/0x24c [imx_media]
> [    9.556080] pc : [<c04f0eb0>]    lr : [<bf052524>]    psr: 60070013
>                sp : d03cbbc8  ip : d03cbbf8  fp : d03cbbf4
> [    9.567712] r10: 00000001  r9 : 00000000  r8 : d0170d14
> [    9.573007] r7 : 00000000  r6 : 00000001  r5 : 00000000  r4 : d0170d14
> [    9.579612] r3 : 00000000  r2 : d0170d14  r1 : 00000001  r0 : 00000000
> [    9.586256] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none[    9.593486] Control: 10c5387d  Table: 3e77c04a  DAC: 00000051
> [    9.599317] Process systemd-udevd (pid: 263, stack limit = 0xd03ca210)
> [    9.605950] Stack: (0xd03cbbc8 to 0xd03cc000)
> [    9.610368] bbc0:                   00000000 00000000 ee980410 00000000 00000000 d0170d14
> [    9.618658] bbe0: 00000000 00000001 d03cbc54 d03cbbf8 bf052524 c04f0d88 00000000 d0170d88
> [    9.626961] bc00: 00000000 c0a57dc4 0004a364 ee98011c 00000000 00000003 00000001 ee980230
> [    9.635267] bc20: ee980274 ee980010 d03cbc54 d0170f14 ee9ca4cc ee9974c4 bf0523c0 c0a57dc4
> [    9.643539] bc40: f184bb30 00000026 d03cbc74 d03cbc58 c0502f50 bf0523cc ee9ca4cc d0170f14
> [    9.651824] bc60: c0a57e08 d0170fc0 d03cbc9c d03cbc78 c0502fdc c0502e70 00000000 d0170f10
> [    9.660132] bc80: 00000000 d02c0c10 bf122cd0 d0170f14 d03cbcc4 d03cbca0 bf121154 c0502f68
> [    9.668423] bca0: bf12104c ffffffed d02c0c10 fffffdfb bf123248 00000000 d03cbce4 d03cbcc8
> [    9.676713] bcc0: c041aeb4 bf121058 d02c0c10 c1419d70 00000000 bf123248 d03cbd0c d03cbce8
> [    9.684992] bce0: c0418ec4 c041ae68 d02c0c10 bf123248 d02c0c44 00000000 00000001 00000124
> [    9.693282] bd00: d03cbd2c d03cbd10 c0419044 c0418ccc 00000000 00000000 bf123248 c0418f88
> [    9.701618] bd20: d03cbd54 d03cbd30 c04172e4 c0418f94 ef0f64a4 d01b8cd0 d03d9858 bf123248
> [    9.709900] bd40: d03d9c00 c0a45e10 d03cbd64 d03cbd58 c0418728 c0417294 d03cbd8c d03cbd68
> [    9.718203] bd60: c0418428 c0418710 bf122e48 d03cbd78 bf123248 c0a704a8 bf126000 00000000
> [    9.729180] bd80: d03cbda4 d03cbd90 c0419ec4 c0418340 bf123480 c0a704a8 d03cbdb4 d03cbda8
> [    9.739950] bda0: c041ad88 c0419e50 d03cbdc4 d03cbdb8 bf126018 c041ad4c d03cbe34 d03cbdc8
> [    9.751089] bdc0: c00098ac bf12600c d03cbdec d03cbdd8 c00a8888 c0087240 00000000 ed4a9440
> [    9.761941] bde0: d03cbe34 d03cbdf0 c016c690 c00a8814 c016b554 c016aa60 00000001 c015f3f8
> [    9.772940] be00: 00000005 0000000c edef1880 bf123480 c0a704a8 bf123480 c0a704a8 ed4a9440
> [    9.784008] be20: bf123480 00000001 d03cbe5c d03cbe38 c011b1e4 c0009874 d03cbe5c d03cbe48
> [    9.795017] be40: c09f5ea7 c0a704a8 c09e04ec bf123480 d03cbf14 d03cbe60 c00d2dd0 c011b188
> [    9.806069] be60: bf12348c 00007fff bf123480 c00d09f0 f1847000 bf12792c f18495c0 bf123680
> [    9.817194] be80: bf12348c bf1236f0 00000000 bf1234c8 c017c1d0 c017bfac f1847000 00004f68
> [    9.828347] bea0: c017c2e8 00000000 edef1880 00000000 00000000 00000000 00000000 00000000
> [    9.839542] bec0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [    9.850754] bee0: 00000000 00000000 00000003 7fffffff 00000000 00000000 00000007 b6c9e63c
> [    9.861991] bf00: d03ca000 00000000 d03cbfa4 d03cbf18 c00d36cc c00d1480 7fffffff 00000000
> [    9.873266] bf20: 00000003 ee0384d4 d03cbf74 f1847000 00004f68 00000000 00000002 f1847000
> [    9.884593] bf40: 00004f68 f184bb30 f184989b f184a7d8 000026f0 00002dd0 00000000 00000000
> [    9.895998] bf60: 00000000 0000192c 00000019 0000001a 00000011 00000000 0000000a 00000000
> [    9.907408] bf80: c008b848 80c36630 00000000 2529fc00 0000017b c000ff04 00000000 d03cbfa8
> [    9.918858] bfa0: c000fd60 c00d3644 80c36630 00000000 00000007 b6c9e63c 00000000 80c38178
> [    9.930354] bfc0: 80c36630 00000000 2529fc00 0000017b 00020000 7f96eb0c 80c37848 00000000
> [    9.941900] bfe0: bed55928 bed55918 b6c988ff b6bea572 600f0030 00000007 3fffd861 3fffdc61
> [    9.953532] Backtrace:
> [    9.959442] [<c04f0d7c>] (media_create_pad_link) from [<bf052524>] (imx_media_probe_complete+0x164/0x24c [imx_media])
> [    9.973644]  r10:00000001 r9:00000000 r8:d0170d14 r7:00000000 r6:00000000 r5:ee980410
> [    9.985112]  r4:00000000 r3:00000000
> [    9.992696] [<bf0523c0>] (imx_media_probe_complete [imx_media]) from [<c0502f50>] (v4l2_async_test_notify+0xec/0xf8)
> [   10.007413]  r10:00000026 r9:f184bb30 r8:c0a57dc4 r7:bf0523c0 r6:ee9974c4 r5:ee9ca4cc
> [   10.019212]  r4:d0170f14
> [   10.025650] [<c0502e64>] (v4l2_async_test_notify) from [<c0502fdc>] (v4l2_async_register_subdev+0x80/0xdc)
> [   10.039736]  r7:d0170fc0 r6:c0a57e08 r5:d0170f14 r4:ee9ca4cc
> [   10.049613] [<c0502f5c>] (v4l2_async_register_subdev) from [<bf121154>] (imx_ic_probe+0x108/0x144 [imx_media_ic])
> [   10.063953]  r8:d0170f14 r7:bf122cd0 r6:d02c0c10 r5:00000000 r4:d0170f10 r3:00000000
> [   10.075786] [<bf12104c>] (imx_ic_probe [imx_media_ic]) from [<c041aeb4>] (platform_drv_probe+0x58/0xb8)
> [   10.089118]  r8:00000000 r7:bf123248 r6:fffffdfb r5:d02c0c10 r4:ffffffed r3:bf12104c
> [   10.100683] [<c041ae5c>] (platform_drv_probe) from [<c0418ec4>] (driver_probe_device+0x204/0x2c8)
> [   10.113279]  r7:bf123248 r6:00000000 r5:c1419d70 r4:d02c0c10
> [   10.122765] [<c0418cc0>] (driver_probe_device) from [<c0419044>] (__driver_attach+0xbc/0xc0)
> [   10.135098]  r10:00000124 r8:00000001 r7:00000000 r6:d02c0c44 r5:bf123248 r4:d02c0c10
> [   10.146785] [<c0418f88>] (__driver_attach) from [<c04172e4>] (bus_for_each_dev+0x5c/0x90)
> [   10.158811]  r6:c0418f88 r5:bf123248 r4:00000000 r3:00000000
> [   10.168375] [<c0417288>] (bus_for_each_dev) from [<c0418728>] (driver_attach+0x24/0x28)
> [   10.180413]  r6:c0a45e10 r5:d03d9c00 r4:bf123248
> [   10.188959] [<c0418704>] (driver_attach) from [<c0418428>] (bus_add_driver+0xf4/0x200)
> [   10.200775] [<c0418334>] (bus_add_driver) from [<c0419ec4>] (driver_register+0x80/0xfc)
> [   10.212707]  r7:00000000 r6:bf126000 r5:c0a704a8 r4:bf123248
> [   10.222254] [<c0419e44>] (driver_register) from [<c041ad88>] (__platform_driver_register+0x48/0x4c)
> [   10.235212]  r5:c0a704a8 r4:bf123480
> [   10.242694] [<c041ad40>] (__platform_driver_register) from [<bf126018>] (imx_ic_driver_init+0x18/0x24 [imx_media_ic])
> [   10.257308] [<bf126000>] (imx_ic_driver_init [imx_media_ic]) from [<c00098ac>] (do_one_initcall+0x44/0x170)
> [   10.271043] [<c0009868>] (do_one_initcall) from [<c011b1e4>] (do_init_module+0x68/0x1d8)
> [   10.283139]  r8:00000001 r7:bf123480 r6:ed4a9440 r5:c0a704a8 r4:bf123480
> [   10.293849] [<c011b17c>] (do_init_module) from [<c00d2dd0>] (load_module+0x195c/0x2080)
> [   10.305867]  r7:bf123480 r6:c09e04ec r5:c0a704a8 r4:c09f5ea7
> [   10.315523] [<c00d1474>] (load_module) from [<c00d36cc>] (SyS_finit_module+0x94/0xa0)
> [   10.327382]  r10:00000000 r9:d03ca000 r8:b6c9e63c r7:00000007 r6:00000000 r5:00000000
> [   10.339238]  r4:7fffffff
> [   10.345766] [<c00d3638>] (SyS_finit_module) from [<c000fd60>] (ret_fast_syscall+0x0/0x1c)
> [   10.357994]  r8:c000ff04 r7:0000017b r6:2529fc00 r5:00000000 r4:80c36630
> [   10.368747] Code: e1a01007 ebfffce1 e3e0000b e89daff8 (e7f001f2)
> [   10.378883] ---[ end trace 2051fac455b36c5a ]---
> [   11.228961] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
> [   11.247536] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
> [   11.301366] imx_media_ic: module is from the staging directory, the quality is unknown, you have been warned.
>
> So there's probably some sort of race going on.
>
> However, the following is primarily directed at Laurent as the one who
> introduced the BUG_ON() in question...
>
> NEVER EVER USE BUG_ON() IN A PATH THAT CAN RETURN AN ERROR.
>
> It's possible to find Linus' rants about this, e.g.,
> https://www.spinics.net/lists/stable/msg146439.html
>
>  I should have reacted to the damn added BUG_ON() lines. I suspect I
>  will have to finally just remove the idiotic BUG_ON() concept once and
>  for all, because there is NO F*CKING EXCUSE to knowingly kill the
>  kernel.
>
> Also: http://yarchive.net/comp/linux/BUG.html
>
>  Rule of thumb: BUG() is only good for something that never happens and
>  that we really have no other option for (ie state is so corrupt that
>  continuing is deadly).
>
> So, _unless_ people want to see BUG_ON() removed from the kernel, I
> strongly suggest to _STOP_ using it as "we didn't like the function
> arguments, let's use it as an assert() statement instead of returning
> an error."
>
> There's no excuse whatsoever to be killing the machine in
> media_create_pad_link().  If it doesn't like a NULL pointer, it's damn
> well got an error path to report that fact.  Use that mechanism and
> stop needlessly killing the kernel.
>
> BUG_ON() IS NOT ASSERT().  DO NOT USE IT AS SUCH.
>
> Linus is absolutely right about BUG_ON() - it hurts debuggability,
> because now the only way to do further tests is to reboot the damned
> machine after removing those fscking BUG_ON()s that should *never*
> have been there in the first place.
>
> As Linus went on to say:
>
>  And dammit, if anybody else feels that they had done "debugging
>  messages with BUG_ON()", I would suggest you
>
>   (a) rethink your approach to programming
>
>   (b) send me patches to remove the crap entirely, or make them real
>  *DEBUGGING* messages, not "kill the whole machine" messages.
>
>  I've ranted against people using BUG_ON() for debugging in the past.
>  Why the f*ck does this still happen? And Andrew - please stop taking
>  those kinds of patches! Lookie here:
>
>      https://lwn.net/Articles/13183/
>
>  so excuse me for being upset that people still do this shit almost 15
>  years later.
>
> So I suggest people heed that advice and start fixing these stupid
> BUG_ON()s that they've created.
>
> Thanks.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16 14:20         ` Russell King - ARM Linux
@ 2017-02-16 19:07           ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 19:07 UTC (permalink / raw)
  To: Russell King - ARM Linux, Laurent Pinchart, Hans Verkuil,
	Mauro Carvalho Chehab
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick,
	songjun.wu, pavel, robert.jarzmik, devel, markus.heiser,
	Steve Longerbeam, shuah, geert, linux-media, devicetree, kernel,
	arnd, tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	laurent.pinchart+renesas, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel,
	jean-christophe.trotin, p.zabel, fabio.estevam, shawnguo,
	sudipm.mukherjee



On 02/16/2017 06:20 AM, Russell King - ARM Linux wrote:
> On Thu, Feb 16, 2017 at 01:09:35PM +0000, Russell King - ARM Linux wrote:
>>
>> <snip>
>> More crap.
>>
>> If the "complete" method fails (or, in fact, anything in
>> v4l2_async_test_notify() fails) then all hell breaks loose, because
>> of the total lack of clean up (and no, this isn't anything to do with
>> some stupid justification of those BUG_ON()s above.)
>>
>> v4l2_async_notifier_register() gets called, it adds the notifier to
>> the global notifier list.  v4l2_async_test_notify() gets called.  It
>> returns an error, which is propagated out of
>> v4l2_async_notifier_register().
>>
>> So the caller thinks that v4l2_async_notifier_register() failed, which
>> will cause imx_media_probe() to fail, causing imxmd->subdev_notifier
>> to be kfree()'d.  We now have a use-after-free bug.
>>
>> Second case.  v4l2_async_register_subdev().  Almost exactly the same,
>> except in this case adding sd->async_list to the notifier->done list
>> may have succeeded, and failure after that, again, results in an
>> in-use list_head being kfree()'d.
>
> And here's a patch which, combined with the fixes for ipuv3, results in
> everything appearing to work properly.  Feel free to tear out the bits
> for your area and turn them into proper patches.
>
>  drivers/gpu/ipu-v3/ipu-common.c           |  6 ---
>  drivers/media/media-entity.c              |  7 +--
>  drivers/media/v4l2-core/v4l2-async.c      | 71 +++++++++++++++++++++++--------
>  drivers/staging/media/imx/imx-media-csi.c |  1 +
>  drivers/staging/media/imx/imx-media-dev.c |  2 +-
>  5 files changed, 59 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
> index 97218af4fe75..8368e6f766ee 100644
> --- a/drivers/gpu/ipu-v3/ipu-common.c
> +++ b/drivers/gpu/ipu-v3/ipu-common.c
> @@ -1238,12 +1238,6 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
>  			platform_device_put(pdev);
>  			goto err_register;
>  		}
> -
> -		/*
> -		 * Set of_node only after calling platform_device_add. Otherwise
> -		 * the platform:imx-ipuv3-crtc modalias won't be used.
> -		 */
> -		pdev->dev.of_node = of_node;
>  	}


Ah, never mind my earlier question. I see now why the CSIs were likely
not recognized: probably because of this change. Anyway, I agree with it,
and I made the accompanying changes to imx-media-csi.c and
imx-media-dev.c below.

Steve




>
>  	return 0;
> diff --git a/drivers/media/media-entity.c b/drivers/media/media-entity.c
> index f9f723f5e4f0..154593a168df 100644
> --- a/drivers/media/media-entity.c
> +++ b/drivers/media/media-entity.c
> @@ -625,9 +625,10 @@ media_create_pad_link(struct media_entity *source, u16 source_pad,
>  	struct media_link *link;
>  	struct media_link *backlink;
>
> -	BUG_ON(source == NULL || sink == NULL);
> -	BUG_ON(source_pad >= source->num_pads);
> -	BUG_ON(sink_pad >= sink->num_pads);
> +	if (WARN_ON(source == NULL || sink == NULL) ||
> +	    WARN_ON(source_pad >= source->num_pads) ||
> +	    WARN_ON(sink_pad >= sink->num_pads))
> +		return -EINVAL;
>
>  	link = media_add_link(&source->links);
>  	if (link == NULL)
> diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
> index 5bada202b2d3..09934fb96a8d 100644
> --- a/drivers/media/v4l2-core/v4l2-async.c
> +++ b/drivers/media/v4l2-core/v4l2-async.c
> @@ -94,7 +94,7 @@ static struct v4l2_async_subdev *v4l2_async_belongs(struct v4l2_async_notifier *
>  }
>
>  static int v4l2_async_test_notify(struct v4l2_async_notifier *notifier,
> -				  struct v4l2_subdev *sd,
> +				  struct list_head *new, struct v4l2_subdev *sd,
>  				  struct v4l2_async_subdev *asd)
>  {
>  	int ret;
> @@ -107,22 +107,36 @@ static int v4l2_async_test_notify(struct v4l2_async_notifier *notifier,
>  	if (notifier->bound) {
>  		ret = notifier->bound(notifier, sd, asd);
>  		if (ret < 0)
> -			return ret;
> +			goto err_bind;
>  	}
> +
>  	/* Move from the global subdevice list to notifier's done */
> -	list_move(&sd->async_list, &notifier->done);
> +	list_move(&sd->async_list, new);
>
>  	ret = v4l2_device_register_subdev(notifier->v4l2_dev, sd);
> -	if (ret < 0) {
> -		if (notifier->unbind)
> -			notifier->unbind(notifier, sd, asd);
> -		return ret;
> -	}
> +	if (ret < 0)
> +		goto err_register;
>
> -	if (list_empty(&notifier->waiting) && notifier->complete)
> -		return notifier->complete(notifier);
> +	if (list_empty(&notifier->waiting) && notifier->complete) {
> +		ret = notifier->complete(notifier);
> +		if (ret < 0)
> +			goto err_complete;
> +	}
>
>  	return 0;
> +
> +err_complete:
> +	v4l2_device_unregister_subdev(sd);
> +err_register:
> +	if (notifier->unbind)
> +		notifier->unbind(notifier, sd, asd);
> +err_bind:
> +	sd->notifier = NULL;
> +	sd->asd = NULL;
> +	list_add(&asd->list, &notifier->waiting);
> +	/* always take this off the list on error */
> +	list_del(&sd->async_list);
> +	return ret;
>  }
>
>  static void v4l2_async_cleanup(struct v4l2_subdev *sd)
> @@ -139,7 +153,8 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
>  {
>  	struct v4l2_subdev *sd, *tmp;
>  	struct v4l2_async_subdev *asd;
> -	int i;
> +	LIST_HEAD(new);
> +	int ret, i;
>
>  	if (!notifier->num_subdevs || notifier->num_subdevs > V4L2_MAX_SUBDEVS)
>  		return -EINVAL;
> @@ -172,22 +187,39 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
>  	list_add(&notifier->list, &notifier_list);
>
>  	list_for_each_entry_safe(sd, tmp, &subdev_list, async_list) {
> -		int ret;
> -
>  		asd = v4l2_async_belongs(notifier, sd);
>  		if (!asd)
>  			continue;
>
> -		ret = v4l2_async_test_notify(notifier, sd, asd);
> +		ret = v4l2_async_test_notify(notifier, &new, sd, asd);
>  		if (ret < 0) {
> -			mutex_unlock(&list_lock);
> -			return ret;
> +			/*
> +			 * On failure, v4l2_async_test_notify() takes the
> +			 * sd off the subdev list.  Add it back.
> +			 */
> +			list_add(&sd->async_list, &subdev_list);
> +			goto err_notify;
>  		}
>  	}
>
> +	list_splice(&new, &notifier->done);
> +
>  	mutex_unlock(&list_lock);
>
>  	return 0;
> +
> +err_notify:
> +	list_del(&notifier->list);
> +	list_for_each_entry_safe(sd, tmp, &new, async_list) {
> +		v4l2_device_unregister_subdev(sd);
> +		list_move(&sd->async_list, &subdev_list);
> +		if (notifier->unbind)
> +			notifier->unbind(notifier, sd, sd->asd);
> +		sd->notifier = NULL;
> +		sd->asd = NULL;
> +	}
> +	mutex_unlock(&list_lock);
> +	return ret;
>  }
>  EXPORT_SYMBOL(v4l2_async_notifier_register);
>
> @@ -213,6 +245,7 @@ void v4l2_async_notifier_unregister(struct v4l2_async_notifier *notifier)
>  	list_del(&notifier->list);
>
>  	list_for_each_entry_safe(sd, tmp, &notifier->done, async_list) {
> +		struct v4l2_async_subdev *asd = sd->asd;
>  		struct device *d;
>
>  		d = get_device(sd->dev);
> @@ -223,7 +256,7 @@ void v4l2_async_notifier_unregister(struct v4l2_async_notifier *notifier)
>  		device_release_driver(d);
>
>  		if (notifier->unbind)
> -			notifier->unbind(notifier, sd, sd->asd);
> +			notifier->unbind(notifier, sd, asd);
>
>  		/*
>  		 * Store device at the device cache, in order to call
> @@ -288,7 +321,9 @@ int v4l2_async_register_subdev(struct v4l2_subdev *sd)
>  	list_for_each_entry(notifier, &notifier_list, list) {
>  		struct v4l2_async_subdev *asd = v4l2_async_belongs(notifier, sd);
>  		if (asd) {
> -			int ret = v4l2_async_test_notify(notifier, sd, asd);
> +			int ret = v4l2_async_test_notify(notifier,
> +							 &notifier->done,
> +							 sd, asd);
>  			mutex_unlock(&list_lock);
>  			return ret;
>  		}
> diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
> index 9d9ec03436e4..507026feee91 100644
> --- a/drivers/staging/media/imx/imx-media-csi.c
> +++ b/drivers/staging/media/imx/imx-media-csi.c
> @@ -1427,6 +1427,7 @@ static int imx_csi_probe(struct platform_device *pdev)
>  	priv->sd.entity.ops = &csi_entity_ops;
>  	priv->sd.entity.function = MEDIA_ENT_F_PROC_VIDEO_PIXEL_FORMATTER;
>  	priv->sd.dev = &pdev->dev;
> +	priv->sd.of_node = pdata->of_node;
>  	priv->sd.owner = THIS_MODULE;
>  	priv->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
>  	priv->sd.grp_id = priv->csi_id ?
> diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
> index 60f45fe4b506..5b4dfc1fb6ab 100644
> --- a/drivers/staging/media/imx/imx-media-dev.c
> +++ b/drivers/staging/media/imx/imx-media-dev.c
> @@ -197,7 +197,7 @@ static int imx_media_subdev_bound(struct v4l2_async_notifier *notifier,
>  	struct imx_media_subdev *imxsd;
>  	int ret = -EINVAL;
>
> -	imxsd = imx_media_find_async_subdev(imxmd, sd->dev->of_node,
> +	imxsd = imx_media_find_async_subdev(imxmd, sd->of_node,
>  					    dev_name(sd->dev));
>  	if (!imxsd)
>  		goto out;
>
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 20/36] media: imx: Add CSI subdev driver
  2017-02-16 18:44       ` Steve Longerbeam
@ 2017-02-16 19:09         ` Russell King - ARM Linux
  0 siblings, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 19:09 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Laurent Pinchart, Hans Verkuil, Mauro Carvalho Chehab,
	mark.rutland, andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick,
	songjun.wu, Steve Longerbeam, pavel, robert.jarzmik, devel,
	markus.heiser, laurent.pinchart+renesas, shuah, geert,
	linux-media, devicetree, kernel, arnd, mchehab, bparrot, robh+dt,
	horms+renesas, tiffany.lin, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel,
	jean-christophe.trotin, p.zabel, fabio.estevam, shawnguo,
	sudipm.mukherjee

On Thu, Feb 16, 2017 at 10:44:16AM -0800, Steve Longerbeam wrote:
> On 02/16/2017 04:40 AM, Russell King - ARM Linux wrote:
> >[    8.012191] imx_media_common: module is from the staging directory, the quality is unknown, you have been warned.
> >[    8.018175] imx_media: module is from the staging directory, the quality is unknown, you have been warned.
> >[    8.748345] imx-media: Registered subdev ipu1_csi0_mux
> >[    8.753451] imx-media: Registered subdev ipu2_csi1_mux
> >[    9.055196] imx219 0-0010: detected IMX219 sensor
> >[    9.090733] imx6_mipi_csi2: module is from the staging directory, the quality is unknown, you have been warned.
> >[    9.092247] imx-media: Registered subdev imx219 0-0010
> >[    9.334338] imx-media: Registered subdev imx6-mipi-csi2
> >[    9.372452] imx_media_capture: module is from the staging directory, the quality is unknown, you have been warned.
> >[    9.378163] imx_media_capture: module is from the staging directory, the quality is unknown, you have been warned.
> >[    9.390033] imx_media_csi: module is from the staging directory, the quality is unknown, you have been warned.
> >[    9.394362] imx-media: Received unknown subdev ipu1_csi0
> 
> The root problem is here. I don't know why the CSI entities are not
> being recognized. Can you share the changes you made?

No, it's not the root problem that's causing the BUG/etc, but it is
_a_ problem.  Nevertheless, it's something I fixed - disconnecting
the of_node from the struct device needed one other change in the
imx-media code that was missing at this time.

However, that's no excuse whatsoever for the BUG_ON() and lack of
error cleanup (causing a use-after-free, which is just another way of
saying "data corruption waiting to happen") that I identified.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver
  2017-02-16 11:54   ` Philipp Zabel
@ 2017-02-16 19:20     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 19:20 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 03:54 AM, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>> Add bindings documentation for the i.MX media driver.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>> ---
>>  Documentation/devicetree/bindings/media/imx.txt | 66 +++++++++++++++++++++++++
>>  1 file changed, 66 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/media/imx.txt
>>
>> diff --git a/Documentation/devicetree/bindings/media/imx.txt b/Documentation/devicetree/bindings/media/imx.txt
>> new file mode 100644
>> index 0000000..fd5af50
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/media/imx.txt
>> @@ -0,0 +1,66 @@
>> +Freescale i.MX Media Video Device
>> +=================================
>> +
>> +Video Media Controller node
>> +---------------------------
>> +
>> +This is the media controller node for video capture support. It is a
>> +virtual device that lists the camera serial interface nodes that the
>> +media device will control.
>> +
>> +Required properties:
>> +- compatible : "fsl,imx-capture-subsystem";
>> +- ports      : Should contain a list of phandles pointing to camera
>> +		sensor interface ports of IPU devices
>> +
>> +example:
>> +
>> +capture-subsystem {
>> +	compatible = "fsl,capture-subsystem";
>
> "fsl,imx-capture-subsystem"


Fixed.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
                   ` (36 preceding siblings ...)
  2017-02-16 11:37 ` [PATCH v4 00/36] i.MX Media Driver Russell King - ARM Linux
@ 2017-02-16 22:20 ` Russell King - ARM Linux
  2017-02-16 22:27   ` Steve Longerbeam
  37 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 22:20 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> In version 4:

With this version, I get:

[28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
[28762.899409] ipu1_csi0: pipeline_set_stream failed with -110

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16 22:20 ` Russell King - ARM Linux
@ 2017-02-16 22:27   ` Steve Longerbeam
  2017-02-16 22:57     ` Russell King - ARM Linux
  2017-02-17 11:43     ` Philipp Zabel
  0 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-16 22:27 UTC (permalink / raw)
  To: Russell King - ARM Linux, p.zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam



On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
>> In version 4:
>
> With this version, I get:
>
> [28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> [28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
>

Right, in the imx219, on exit from s_power(), the clock and data lanes
must be placed in the LP-11 state. This has been done in the ov5640 and
tc358743 subdevs.

If we want to bring in the patch that adds a .prepare_stream() op,
the csi-2 bus would need to be placed in LP-11 in that op instead.

Philipp, should I go ahead and add your .prepare_stream() patch?

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16 22:27   ` Steve Longerbeam
@ 2017-02-16 22:57     ` Russell King - ARM Linux
  2017-02-17 10:39       ` Philipp Zabel
  2017-02-18 17:21       ` Steve Longerbeam
  2017-02-17 11:43     ` Philipp Zabel
  1 sibling, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-16 22:57 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: p.zabel, robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, Feb 16, 2017 at 02:27:41PM -0800, Steve Longerbeam wrote:
> 
> 
> On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> >On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> >>In version 4:
> >
> >With this version, I get:
> >
> >[28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> >[28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> >
> 
> Right, in the imx219, on exit from s_power(), the clock and data lanes
> must be placed in the LP-11 state. This has been done in the ov5640 and
> tc358743 subdevs.

The only way to do that is to enable streaming from the sensor, wait
an initialisation time, and then disable streaming, and wait for the
current line to finish.  There is _no_ other way to get the sensor to
place its clock and data lines into LP-11 state.

For that to happen, we need to program the sensor a bit more than we
currently do at power on (to a minimal resolution, and setting up the
PLLs), and introduce another 4ms on top of the 8ms or so that the
runtime resume function already takes.
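
To make that concrete, the sequence amounts to something like the
sketch below (purely illustrative: the helper is hypothetical and the
delays are placeholders, not datasheet figures):

/*
 * Force a sensor's CSI-2 lanes through LP-11 by briefly enabling and
 * then disabling streaming, as described above.
 */
static int sensor_force_lp11(struct v4l2_subdev *sensor)
{
        int ret;

        ret = v4l2_subdev_call(sensor, video, s_stream, 1);
        if (ret && ret != -ENOIOCTLCMD)
                return ret;

        /* placeholder settle time for transmitter initialisation */
        usleep_range(5000, 10000);

        ret = v4l2_subdev_call(sensor, video, s_stream, 0);
        if (ret && ret != -ENOIOCTLCMD)
                return ret;

        /* placeholder wait for the current line to finish */
        usleep_range(5000, 10000);

        return 0;
}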

Looking at the SMIA driver, things are worse, and I suspect that it also
will not work with the current setup - the SMIA spec shows that the CSI
clock and data lines are tristated while the sensor is not streaming,
which means they aren't held at a guaranteed LP-11 state, even if that
driver momentarily enabled streaming.  Hence, Freescale's (or is it
Synopsys') requirement may actually be difficult to satisfy.

However, I regard runtime PM broken with the current imx-capture setup.
At the moment, power is controlled at the sensor by whether the media
links are enabled.  So, if you have an enabled link coming off the
sensor, the sensor will be powered up, whether you're using it or not.

Given that the number of applications out there that know about the
media subdevs is really quite small, this combination makes having
runtime PM in sensor devices completely pointless - they can't sleep
as long as they have an enabled link, which could be persistent over
many days or weeks.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 07/36] ARM: dts: imx6-sabresd: add OV5642 and OV5640 camera sensors
  2017-02-16  2:19 ` [PATCH v4 07/36] ARM: dts: imx6-sabresd: " Steve Longerbeam
@ 2017-02-17  0:51   ` Fabio Estevam
  2017-02-17  0:56     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Fabio Estevam @ 2017-02-17  0:51 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, Mark Rutland, Shawn Guo, Sascha Hauer, Fabio Estevam,
	Russell King - ARM Linux, mchehab, Hans Verkuil, nick,
	markus.heiser, Philipp Zabel, Laurent Pinchart, bparrot,
	Geert Uytterhoeven, Arnd Bergmann, Sudip Mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, Robert Jarzmik,
	songjun.wu, andrew-ct.chen, Greg Kroah-Hartman, shuah,
	sakari.ailus, Pavel Machek, devel, devicetree, Steve Longerbeam,
	linux-kernel, linux-arm-kernel, linux-media

Hi Steve,

On Thu, Feb 16, 2017 at 12:19 AM, Steve Longerbeam
<slongerbeam@gmail.com> wrote:
> Enables the OV5642 parallel-bus sensor, and the OV5640 MIPI CSI-2 sensor.
>
> The OV5642 connects to the parallel-bus mux input port on ipu1_csi0_mux.
>
> The OV5640 connects to the input port on the MIPI CSI-2 receiver on
> mipi_csi.
>
> Until the OV5642 sensor module compatible with the SabreSD becomes
> available for testing, the ov5642 node is currently disabled.

You missed your Signed-off-by.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 07/36] ARM: dts: imx6-sabresd: add OV5642 and OV5640 camera sensors
  2017-02-17  0:51   ` Fabio Estevam
@ 2017-02-17  0:56     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-17  0:56 UTC (permalink / raw)
  To: Fabio Estevam
  Cc: robh+dt, Mark Rutland, Shawn Guo, Sascha Hauer, Fabio Estevam,
	Russell King - ARM Linux, mchehab, Hans Verkuil, nick,
	markus.heiser, Philipp Zabel, Laurent Pinchart, bparrot,
	Geert Uytterhoeven, Arnd Bergmann, Sudip Mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, Robert Jarzmik,
	songjun.wu, andrew-ct.chen, Greg Kroah-Hartman, shuah,
	sakari.ailus, Pavel Machek, devel, devicetree, Steve Longerbeam,
	linux-kernel, linux-arm-kernel, linux-media



On 02/16/2017 04:51 PM, Fabio Estevam wrote:
> Hi Steve,
>
> On Thu, Feb 16, 2017 at 12:19 AM, Steve Longerbeam
> <slongerbeam@gmail.com> wrote:
>> Enables the OV5642 parallel-bus sensor, and the OV5640 MIPI CSI-2 sensor.
>>
>> The OV5642 connects to the parallel-bus mux input port on ipu1_csi0_mux.
>>
>> The OV5640 connects to the input port on the MIPI CSI-2 receiver on
>> mipi_csi.
>>
>> Until the OV5642 sensor module compatible with the SabreSD becomes
>> available for testing, the ov5642 node is currently disabled.
>
> You missed your Signed-off-by.

Thanks, fixed.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-16 13:02   ` Philipp Zabel
  2017-02-16 13:44     ` Russell King - ARM Linux
@ 2017-02-17  1:33     ` Steve Longerbeam
  2017-02-17  8:34       ` Philipp Zabel
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-17  1:33 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 05:02 AM, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
<snip>
>> +
>> +- Clean up and move the ov5642 subdev driver to drivers/media/i2c, and
>> +  create the binding docs for it.
>
> This is done already, right?


I cleaned up ov5640 and moved it to drivers/media/i2c with binding docs,
but not the ov5642 yet.


>
>> +- The Frame Interval Monitor could be exported to v4l2-core for
>> +  general use.
>> +
>> +- The subdev that is the original source of video data (referred to as
>> +  the "sensor" in the code), is called from various subdevs in the
>> +  pipeline in order to set/query the video standard ({g|s|enum}_std)
>> +  and to get/set the original frame interval from the capture interface
>> +  ([gs]_parm). Instead, the entities that need this info should call its
>> +  direct neighbor, and the neighbor should propagate the call to its
>> +  neighbor in turn if necessary.
>
> Especially the [gs]_parm fix is necessary to present userspace with the
> correct frame interval in case of frame skipping in the CSI.


Right, understood. I've added this to the list of fixes for version 5.

What a pain though! It means propagating every call to g_frame_interval
upstream until a subdev "that cares" returns ret == 0 or
ret != -ENOIOCTLCMD. And that goes for any other chained subdev call
as well.

I've thought of writing something like a v4l2_chained_subdev_call()
macro to do this, but it would be a big macro.
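
Roughly, I'm picturing something like the sketch below (illustrative
only: the helper is hypothetical, and treating pad 0 as the upstream
sink pad is an oversimplification):

/*
 * Walk upstream from a subdev, asking each neighbor for the frame
 * interval until one of them answers, i.e. returns something other
 * than -ENOIOCTLCMD.
 */
static int chained_g_frame_interval(struct v4l2_subdev *sd,
                                    struct v4l2_subdev_frame_interval *fi)
{
        while (sd) {
                struct media_pad *pad, *remote;
                int ret;

                ret = v4l2_subdev_call(sd, video, g_frame_interval, fi);
                if (ret != -ENOIOCTLCMD)
                        return ret;

                /* follow the link on this entity's sink pad upstream */
                pad = &sd->entity.pads[0];
                if (!(pad->flags & MEDIA_PAD_FL_SINK))
                        break;

                remote = media_entity_remote_pad(pad);
                if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
                        break;

                sd = media_entity_to_v4l2_subdev(remote->entity);
        }

        return -ENOIOCTLCMD;
}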





>
>> +- At driver load time, the device-tree node that is the original source
>> +  (the "sensor"), is parsed to record its media bus configuration, and
>> +  this info is required in various subdevs to setup the pipeline.
>> +  Laurent Pinchart argues that instead the subdev should call its
>> +  neighbor's g_mbus_config op (which should be propagated if necessary)
>> +  to get this info. However Hans Verkuil is planning to remove the
>> +  g_mbus_config op. For now this driver uses the parsed DT mbus config
>> +  method until this issue is resolved.
>> +
>> diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c
>> new file mode 100644
>> index 0000000..e2041ad
>> --- /dev/null
>> +++ b/drivers/staging/media/imx/imx-media-dev.c
> [...]
>> +static inline u32 pixfmt_to_colorspace(const struct imx_media_pixfmt *fmt)
>> +{
>> +	return (fmt->cs == IPUV3_COLORSPACE_RGB) ?
>> +		V4L2_COLORSPACE_SRGB : V4L2_COLORSPACE_SMPTE170M;
>> +}
>
> This ...
>
> [...]
>> +int imx_media_mbus_fmt_to_pix_fmt(struct v4l2_pix_format *pix,
>> +				  struct v4l2_mbus_framefmt *mbus,
>> +				  const struct imx_media_pixfmt *cc)
>> +{
>> +	u32 stride;
>> +
>> +	if (!cc) {
>> +		cc = imx_media_find_format(0, mbus->code, true, false);
>> +		if (!cc)
>> +			return -EINVAL;
>> +	}
>> +
>> +	stride = cc->planar ? mbus->width : (mbus->width * cc->bpp) >> 3;
>> +
>> +	pix->width = mbus->width;
>> +	pix->height = mbus->height;
>> +	pix->pixelformat = cc->fourcc;
>> +	pix->colorspace = pixfmt_to_colorspace(cc);
>
> ... is not right. The colorspace should be taken from the input pad
> colorspace everywhere (except for the IC output pad in the future, once
> that supports changing YCbCr encoding and quantization), not guessed
> based on the media bus format.

OK, I will fix this to assign pix->colorspace from mbus->colorspace, once
all the subdevs assign colorspace values to their pads.
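
Roughly (sketch only):

        /* take colorimetry from the media bus format instead of guessing */
        pix->colorspace = mbus->colorspace;
        pix->ycbcr_enc = mbus->ycbcr_enc;
        pix->quantization = mbus->quantization;
        pix->xfer_func = mbus->xfer_func;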


Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 18/36] media: Add i.MX media core driver
  2017-02-17  1:33     ` Steve Longerbeam
@ 2017-02-17  8:34       ` Philipp Zabel
  0 siblings, 0 replies; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17  8:34 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, 2017-02-16 at 17:33 -0800, Steve Longerbeam wrote:
> 
> On 02/16/2017 05:02 AM, Philipp Zabel wrote:
> > On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> <snip>
> >> +
> >> +- Clean up and move the ov5642 subdev driver to drivers/media/i2c, and
> >> +  create the binding docs for it.
> >
> > This is done already, right?
> 
> 
> I cleaned up ov5640 and moved it to drivers/media/i2c with binding docs,
> but not the ov5642 yet.

Ok, thanks.

> >> +- The Frame Interval Monitor could be exported to v4l2-core for
> >> +  general use.
> >> +
> >> +- The subdev that is the original source of video data (referred to as
> >> +  the "sensor" in the code), is called from various subdevs in the
> >> +  pipeline in order to set/query the video standard ({g|s|enum}_std)
> >> +  and to get/set the original frame interval from the capture interface
> >> +  ([gs]_parm). Instead, the entities that need this info should call its
> >> +  direct neighbor, and the neighbor should propagate the call to its
> >> +  neighbor in turn if necessary.
> >
> > Especially the [gs]_parm fix is necessary to present userspace with the
> > correct frame interval in case of frame skipping in the CSI.
> 
> 
> Right, understood. I've added this to the list of fixes for version 5.
> 
> What a pain though! It means propagating every call to g_frame_interval
> upstream until a subdev "that cares" returns ret == 0 or
> ret != -ENOIOCTLCMD. And that goes for any other chained subdev call
> as well.

Not at all. Since the frame interval is a property of the pad, it already
has to be propagated downstream by media-ctl along with the media bus
format, frame size, and colorimetry.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16 22:57     ` Russell King - ARM Linux
@ 2017-02-17 10:39       ` Philipp Zabel
  2017-02-17 10:56         ` Russell King - ARM Linux
  2017-02-18 17:21       ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 10:39 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, 2017-02-16 at 22:57 +0000, Russell King - ARM Linux wrote:
> On Thu, Feb 16, 2017 at 02:27:41PM -0800, Steve Longerbeam wrote:
> > 
> > 
> > On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > >On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> > >>In version 4:
> > >
> > >With this version, I get:
> > >
> > >[28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > >[28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> > >
> > 
> > Right, in the imx219, on exit from s_power(), the clock and data lanes
> > must be placed in the LP-11 state. This has been done in the ov5640 and
> > tc358743 subdevs.
> 
> The only way to do that is to enable streaming from the sensor, wait
> an initialisation time, and then disable streaming, and wait for the
> current line to finish.  There is _no_ other way to get the sensor to
> place its clock and data lines into LP-11 state.

I thought going through LP-11 is part of the D-PHY transmitter
initialization, during the LP->HS wakeup sequence. But then I have no
access to MIPI specs.
It is unfortunate that the i.MX6 MIPI CSI-2 core needs software
assistance here, but would it be possible to trigger that sequence in
the sensor and then, without waiting, switch to polling for the LP-11
state in the i.MX6 MIPI CSI-2 receiver?

> For that to happen, we need to program the sensor a bit more than we
> currently do at power on (to a minimal resolution, and setting up the
> PLLs), and introduce another 4ms on top of the 8ms or so that the
> runtime resume function already takes.
> 
> Looking at the SMIA driver, things are worse, and I suspect that it also
> will not work with the current setup - the SMIA spec shows that the CSI
> clock and data lines are tristated while the sensor is not streaming,
> which means they aren't held at a guaranteed LP-11 state, even if that
> driver momentarily enabled streaming.  Hence, Freescale's (or is it
> Synopsys') requirement may actually be difficult to satisfy.
> 
> However, I regard runtime PM broken with the current imx-capture setup.
> At the moment, power is controlled at the sensor by whether the media
> links are enabled.  So, if you have an enabled link coming off the
> sensor, the sensor will be powered up, whether you're using it or not.
> 
> Given that the number of applications out there that know about the
> media subdevs is really quite small, this combination makes having
> runtime PM in sensor devices completely pointless - they can't sleep
> as long as they have an enabled link, which could be persistent over
> many days or weeks.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-16  2:19 ` [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver Steve Longerbeam
  2017-02-16 10:28   ` Russell King - ARM Linux
@ 2017-02-17 10:47   ` Philipp Zabel
  2017-02-17 11:06     ` Russell King - ARM Linux
  2017-02-17 14:16     ` Philipp Zabel
  1 sibling, 2 replies; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 10:47 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> Adds MIPI CSI-2 Receiver subdev driver. This subdev is required
> for sensors with a MIPI CSI2 interface.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  drivers/staging/media/imx/Makefile         |   1 +
>  drivers/staging/media/imx/imx6-mipi-csi2.c | 573 +++++++++++++++++++++++++++++
>  2 files changed, 574 insertions(+)
>  create mode 100644 drivers/staging/media/imx/imx6-mipi-csi2.c
> 
> diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
> index 878a126..3569625 100644
> --- a/drivers/staging/media/imx/Makefile
> +++ b/drivers/staging/media/imx/Makefile
> @@ -9,3 +9,4 @@ obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
>  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-ic.o
>  
>  obj-$(CONFIG_VIDEO_IMX_CSI) += imx-media-csi.o
> +obj-$(CONFIG_VIDEO_IMX_CSI) += imx6-mipi-csi2.o
> diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
> new file mode 100644
> index 0000000..23dca80
> --- /dev/null
> +++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
> @@ -0,0 +1,573 @@
> +/*
> + * MIPI CSI-2 Receiver Subdev for Freescale i.MX6 SOC.
> + *
> + * Copyright (c) 2012-2017 Mentor Graphics Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + */
> +#include <linux/clk.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/iopoll.h>
> +#include <linux/irq.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-of.h>
> +#include <media/v4l2-subdev.h>
> +#include "imx-media.h"
> +
> +/*
> + * there must be 5 pads: 1 input pad from sensor, and
> + * the 4 virtual channel output pads
> + */
> +#define CSI2_SINK_PAD       0
> +#define CSI2_NUM_SINK_PADS  1
> +#define CSI2_NUM_SRC_PADS   4
> +#define CSI2_NUM_PADS       5
> +
> +struct csi2_dev {
> +	struct device          *dev;
> +	struct v4l2_subdev      sd;
> +	struct media_pad       pad[CSI2_NUM_PADS];
> +	struct v4l2_mbus_framefmt format_mbus;
> +	struct clk             *dphy_clk;
> +	struct clk             *cfg_clk;
> +	struct clk             *pix_clk; /* what is this? */
> +	void __iomem           *base;
> +	struct v4l2_of_bus_mipi_csi2 bus;
> +	bool                    on;
> +	bool                    stream_on;
> +	bool                    src_linked;
> +	bool                    sink_linked[CSI2_NUM_SRC_PADS];
> +};
> +
> +#define DEVICE_NAME "imx6-mipi-csi2"
> +
> +/* Register offsets */
> +#define CSI2_VERSION            0x000
> +#define CSI2_N_LANES            0x004
> +#define CSI2_PHY_SHUTDOWNZ      0x008
> +#define CSI2_DPHY_RSTZ          0x00c
> +#define CSI2_RESETN             0x010
> +#define CSI2_PHY_STATE          0x014
> +#define PHY_STOPSTATEDATA_BIT   4
> +#define PHY_STOPSTATEDATA(n)    BIT(PHY_STOPSTATEDATA_BIT + (n))
> +#define PHY_RXCLKACTIVEHS       BIT(8)
> +#define PHY_RXULPSCLKNOT        BIT(9)
> +#define PHY_STOPSTATECLK        BIT(10)
> +#define CSI2_DATA_IDS_1         0x018
> +#define CSI2_DATA_IDS_2         0x01c
> +#define CSI2_ERR1               0x020
> +#define CSI2_ERR2               0x024
> +#define CSI2_MSK1               0x028
> +#define CSI2_MSK2               0x02c
> +#define CSI2_PHY_TST_CTRL0      0x030
> +#define PHY_TESTCLR		BIT(0)
> +#define PHY_TESTCLK		BIT(1)
> +#define CSI2_PHY_TST_CTRL1      0x034
> +#define PHY_TESTEN		BIT(16)
> +#define CSI2_SFT_RESET          0xf00
> +
> +static inline struct csi2_dev *sd_to_dev(struct v4l2_subdev *sdev)
> +{
> +	return container_of(sdev, struct csi2_dev, sd);
> +}
> +
> +static void csi2_enable(struct csi2_dev *csi2, bool enable)
> +{
> +	if (enable) {
> +		writel(0x1, csi2->base + CSI2_PHY_SHUTDOWNZ);
> +		writel(0x1, csi2->base + CSI2_DPHY_RSTZ);
> +		writel(0x1, csi2->base + CSI2_RESETN);
> +	} else {
> +		writel(0x0, csi2->base + CSI2_PHY_SHUTDOWNZ);
> +		writel(0x0, csi2->base + CSI2_DPHY_RSTZ);
> +		writel(0x0, csi2->base + CSI2_RESETN);
> +	}
> +}
> +
> +static void csi2_set_lanes(struct csi2_dev *csi2)
> +{
> +	int lanes = csi2->bus.num_data_lanes;
> +
> +	writel(lanes - 1, csi2->base + CSI2_N_LANES);
> +}
> +
> +static void dw_mipi_csi2_phy_write(struct csi2_dev *csi2,
> +				   u32 test_code, u32 test_data)
> +{
> +	/* Clear PHY test interface */
> +	writel(PHY_TESTCLR, csi2->base + CSI2_PHY_TST_CTRL0);
> +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL1);
> +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
> +
> +	/* Raise test interface strobe signal */
> +	writel(PHY_TESTCLK, csi2->base + CSI2_PHY_TST_CTRL0);
> +
> +	/* Configure address write on falling edge and lower strobe signal */
> +	writel(PHY_TESTEN | test_code, csi2->base + CSI2_PHY_TST_CTRL1);
> +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
> +
> +	/* Configure data write on rising edge and raise strobe signal */
> +	writel(test_data, csi2->base + CSI2_PHY_TST_CTRL1);
> +	writel(PHY_TESTCLK, csi2->base + CSI2_PHY_TST_CTRL0);
> +
> +	/* Clear strobe signal */
> +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
> +}
> +
> +static void csi2_dphy_init(struct csi2_dev *csi2)
> +{
> +	/*
> +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
> +	 * clock from the HSI_TX PLL, and a fixed target lane max
> +	 * bandwidth of 300 Mbps. This value should be derived

If the table in https://community.nxp.com/docs/DOC-94312 is correct,
this should be 850 Mbps. Where does this 300 Mbps value come from?
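
For illustration, the derivation could look something like the sketch
below; the table entries are placeholders only, and the real range/code
pairs would have to come from the D-PHY databook (or the NXP table
above):

struct hsfreq_range {
        u32 max_mbps;   /* upper bound of the range, in Mbps per lane */
        u32 code;       /* value to program via dw_mipi_csi2_phy_write() */
};

static const struct hsfreq_range hsfreq_ranges[] = {
        {  300, 0x14 }, /* placeholder: the currently hardcoded value */
        {  850, 0x00 }, /* placeholder */
        /* ... */
};

static u32 csi2_get_hsfreqrange(u32 link_mbps)
{
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(hsfreq_ranges); i++)
                if (link_mbps <= hsfreq_ranges[i].max_mbps)
                        return hsfreq_ranges[i].code;

        /* out of range: fall back to the highest supported setting */
        return hsfreq_ranges[ARRAY_SIZE(hsfreq_ranges) - 1].code;
}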

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-17 10:39       ` Philipp Zabel
@ 2017-02-17 10:56         ` Russell King - ARM Linux
  2017-02-17 11:21           ` Philipp Zabel
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-17 10:56 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, Feb 17, 2017 at 11:39:11AM +0100, Philipp Zabel wrote:
> On Thu, 2017-02-16 at 22:57 +0000, Russell King - ARM Linux wrote:
> > On Thu, Feb 16, 2017 at 02:27:41PM -0800, Steve Longerbeam wrote:
> > > 
> > > 
> > > On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > > >On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> > > >>In version 4:
> > > >
> > > >With this version, I get:
> > > >
> > > >[28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > > >[28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> > > >
> > > 
> > > Right, in the imx219, on exit from s_power(), the clock and data lanes
> > > must be placed in the LP-11 state. This has been done in the ov5640 and
> > > tc358743 subdevs.
> > 
> > The only way to do that is to enable streaming from the sensor, wait
> > an initialisation time, and then disable streaming, and wait for the
> > current line to finish.  There is _no_ other way to get the sensor to
> > place its clock and data lines into LP-11 state.
> 
> I thought going through LP-11 is part of the D-PHY transmitter
> initialization, during the LP->HS wakeup sequence. But then I have no
> access to MIPI specs.

The D-PHY transmitter initialisation *only* happens as part of the
wake-up from standby to streaming mode.  That is because Sony expect
that you program the sensor, and then when you switch it to streaming
mode, it computes the D-PHY parameters from the PLL, input clock rate
(you have to tell it the clock rate in 1/256 MHz units), number of
lanes, and other parameters.

It is possible to program the D-PHY parameters manually, but that
doesn't change the above sequence in any way (it just avoids the
chip computing the values, it doesn't result in any change of
behaviour on the bus.)

The IMX219 specifications are clear: the clock and data lines are
held low (LP-00 state) after releasing the hardware enable signal.
There's a period of chip initialisation, and then you can access the
I2C bus and configure it.  There's a further period of initialisation
where charge pumps are getting to their operating state.  Then, you
set the streaming bit, and a load more initialisation happens before
the CSI bus enters LP-11 state and the first frame pops out.  When
entering standby, the last frame is completed, and then the CSI bus
enters LP-11 state.

SMIA are slightly different - mostly following what I've said above,
but the clock and data lines are tristated after releasing the
xshutdown signal, and they remain tristated until the clock line
starts toggling before the first frame appears.  There appears to
be no point that the clock line enters LP-11 state before it starts
toggling.  When entering standby, the last frame is completed, and
the CSI bus enters tristate mode (so floating.)  There is no way to
get these sensors into LP-11 state.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-17 10:47   ` Philipp Zabel
@ 2017-02-17 11:06     ` Russell King - ARM Linux
  2017-02-17 11:38       ` Philipp Zabel
  2017-02-23  0:06       ` Steve Longerbeam
  2017-02-17 14:16     ` Philipp Zabel
  1 sibling, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-17 11:06 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, Feb 17, 2017 at 11:47:59AM +0100, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> > +static void csi2_dphy_init(struct csi2_dev *csi2)
> > +{
> > +	/*
> > +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
> > +	 * clock from the HSI_TX PLL, and a fixed target lane max
> > +	 * bandwidth of 300 Mbps. This value should be derived
> 
> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
> this should be 850 Mbps. Where does this 300 Mbps value come from?

I thought you had some code to compute the correct value, although
I guess we've lost the ability to know how fast the sensor is going
to drive the link.

Note that the IMX219 currently drives the data lanes at 912Mbps almost
exclusively, as I've yet to finish working out how to derive the PLL
parameters.  (I have something that works, but it currently takes on
the order of 100k iterations to derive the parameters.  gcd() doesn't
help you in this instance.)
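
For reference, that kind of exhaustive search amounts to something like
the generic sketch below (the divider names and ranges are hypothetical,
not the IMX219's actual PLL topology):

struct pll_cfg {
        unsigned int prediv;
        unsigned int mult;
};

/* try every combination, keep the one whose rate is closest to the target */
static struct pll_cfg pll_search(unsigned long xclk_khz,
                                 unsigned long target_khz)
{
        struct pll_cfg best = { 1, 1 };
        unsigned long best_err = ULONG_MAX;
        unsigned int prediv, mult;

        for (prediv = 1; prediv <= 16; prediv++) {
                for (mult = 1; mult <= 1024; mult++) {
                        unsigned long rate = xclk_khz * mult / prediv;
                        unsigned long err = rate > target_khz ?
                                            rate - target_khz :
                                            target_khz - rate;

                        if (err < best_err) {
                                best_err = err;
                                best.prediv = prediv;
                                best.mult = mult;
                        }
                }
        }

        return best;
}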

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-17 10:56         ` Russell King - ARM Linux
@ 2017-02-17 11:21           ` Philipp Zabel
  0 siblings, 0 replies; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 11:21 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, 2017-02-17 at 10:56 +0000, Russell King - ARM Linux wrote:
> On Fri, Feb 17, 2017 at 11:39:11AM +0100, Philipp Zabel wrote:
> > On Thu, 2017-02-16 at 22:57 +0000, Russell King - ARM Linux wrote:
> > > On Thu, Feb 16, 2017 at 02:27:41PM -0800, Steve Longerbeam wrote:
> > > > 
> > > > 
> > > > On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > > > >On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> > > > >>In version 4:
> > > > >
> > > > >With this version, I get:
> > > > >
> > > > >[28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > > > >[28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> > > > >
> > > > 
> > > > Right, in the imx219, on exit from s_power(), the clock and data lanes
> > > > must be placed in the LP-11 state. This has been done in the ov5640 and
> > > > tc358743 subdevs.
> > > 
> > > The only way to do that is to enable streaming from the sensor, wait
> > > an initialisation time, and then disable streaming, and wait for the
> > > current line to finish.  There is _no_ other way to get the sensor to
> > > place its clock and data lines into LP-11 state.
> > 
> > I thought going through LP-11 is part of the D-PHY transmitter
> > initialization, during the LP->HS wakeup sequence. But then I have no
> > access to MIPI specs.
> 
> The D-PHY transmitter initialisation *only* happens as part of the
> wake-up from standby to streaming mode.  That is because Sony expect
> that you program the sensor, and then when you switch it to streaming
> mode, it computes the D-PHY parameters from the PLL, input clock rate
> (you have to tell it the clock rate in 1/256 MHz units), number of
> lanes, and other parameters.
> 
> It is possible to program the D-PHY parameters manually, but that
> doesn't change the above sequence in any way (it just avoids the
> chip computing the values, it doesn't result in any change of
> behaviour on the bus.)
>
> The IMX219 specifications are clear: the clock and data lines are
> held low (LP-00 state) after releasing the hardware enable signal.
> There's a period of chip initialisation, and then you can access the
> I2C bus and configure it.  There's a further period of initialisation
> where charge pumps are getting to their operating state.  Then, you
> set the streaming bit, and a load more initialisation happens before
> the CSI bus enters LP-11 state and the first frame pops out.  When
> entering standby, the last frame is completed, and then the CSI bus
> enters LP-11 state.

How about firing off a thread in imx6-mipi-csi2 prepare_stream that
spins on the LP-11 check and then continues with the receiver D-PHY
initialization once the condition is met? I think we should have at
least 100 us to do this, but maybe the IMX219 can be programmed to stay
in LP-11 for a longer time.
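
Something along these lines, whether from such a thread or just blocking
in prepare_stream (untested sketch; csi2_dphy_wait_stopstate() is a
made-up helper, the bits are the ones already defined in the patch):

static int csi2_dphy_wait_stopstate(struct csi2_dev *csi2)
{
	u32 mask = PHY_STOPSTATECLK;
	u32 reg;
	int i;

	for (i = 0; i < csi2->bus.num_data_lanes; i++)
		mask |= PHY_STOPSTATEDATA(i);

	/* poll PHY_STATE until clock and data lanes report stop state (LP-11) */
	return readl_poll_timeout(csi2->base + CSI2_PHY_STATE, reg,
				  (reg & mask) == mask, 10, 500000);
}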

> SMIA are slightly different - mostly following what I've said above,
> but the clock and data lines are tristated after releasing the
> xshutdown signal, and they remain tristated until the clock line
> starts toggling before the first frame appears.  There appears to
> be no point that the clock line enters LP-11 state before it starts
> toggling.  When entering standby, the last frame is completed, and
> the CSI bus enters tristate mode (so floating.)  There is no way to
> get these sensors into LP-11 state.

I have no idea what to do about those.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-17 11:06     ` Russell King - ARM Linux
@ 2017-02-17 11:38       ` Philipp Zabel
  2017-02-22 23:38         ` Steve Longerbeam
  2017-02-23  0:06       ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 11:38 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, 2017-02-17 at 11:06 +0000, Russell King - ARM Linux wrote:
> On Fri, Feb 17, 2017 at 11:47:59AM +0100, Philipp Zabel wrote:
> > On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> > > +static void csi2_dphy_init(struct csi2_dev *csi2)
> > > +{
> > > +	/*
> > > +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
> > > +	 * clock from the HSI_TX PLL, and a fixed target lane max
> > > +	 * bandwidth of 300 Mbps. This value should be derived
> > 
> > If the table in https://community.nxp.com/docs/DOC-94312 is correct,
> > this should be 850 Mbps. Where does this 300 Mbps value come from?
> 
> I thought you had some code to compute the correct value, although
> I guess we've lost the ability to know how fast the sensor is going
> to drive the link.

I had code to calculate the number of needed lanes from the bit rate and
link frequency. I did not actually change the D-PHY register value.
And as you pointed out, calculating the number of lanes is not useful
without input from the sensor driver, as some lane configurations might
not be supported.
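
The calculation boils down to something like this (an illustrative sketch,
not the actual code I had; it assumes the usual convention that link_freq
is the D-PHY clock, i.e. each lane carries two bits per clock cycle):

/* Minimum number of lanes needed to carry the pixel stream, assuming
 * DDR signalling: 2 bits per lane per link_freq cycle. */
static unsigned int csi2_min_lanes(u64 pixel_rate, u32 bits_per_sample,
				   u64 link_freq)
{
	return DIV_ROUND_UP_ULL(pixel_rate * bits_per_sample, 2 * link_freq);
}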

> Note that the IMX219 currently drives the data lanes at 912Mbps almost
> exclusively, as I've yet to finish working out how to derive the PLL
> parameters.  (I have something that works, but it currently takes on
> the order of 100k iterations to derive the parameters.  gcd() doesn't
> help you in this instance.)

The tc358743 also currently only implements a fixed rate (of 594 Mbps).

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16 22:27   ` Steve Longerbeam
  2017-02-16 22:57     ` Russell King - ARM Linux
@ 2017-02-17 11:43     ` Philipp Zabel
  2017-02-17 12:22       ` Sakari Ailus
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 11:43 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Russell King - ARM Linux, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, 2017-02-16 at 14:27 -0800, Steve Longerbeam wrote:
> 
> On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> >> In version 4:
> >
> > With this version, I get:
> >
> > [28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > [28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> >
> 
> Right, in the imx219, on exit from s_power(), the clock and data lanes
> must be placed in the LP-11 state. This has been done in the ov5640 and
> tc358743 subdevs.
> 
> If we want to bring in the patch that adds a .prepare_stream() op,
> the csi-2 bus would need to be placed in LP-11 in that op instead.
> 
> Philipp, should I go ahead and add your .prepare_stream() patch?

I think with Russell's explanation of how the imx219 sensor operates,
we'll have to do something before calling the sensor s_stream, but right
now I'm still unsure what exactly.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-17 11:43     ` Philipp Zabel
@ 2017-02-17 12:22       ` Sakari Ailus
  2017-02-17 12:31         ` Russell King - ARM Linux
  2017-02-17 15:04         ` Philipp Zabel
  0 siblings, 2 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-02-17 12:22 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, Russell King - ARM Linux, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, hverkuil,
	nick, markus.heiser, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Philipp, Steve and Russell,

On Fri, Feb 17, 2017 at 12:43:38PM +0100, Philipp Zabel wrote:
> On Thu, 2017-02-16 at 14:27 -0800, Steve Longerbeam wrote:
> > 
> > On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > > On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> > >> In version 4:
> > >
> > > With this version, I get:
> > >
> > > [28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > > [28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> > >
> > 
> > Right, in the imx219, on exit from s_power(), the clock and data lanes
> > must be placed in the LP-11 state. This has been done in the ov5640 and
> > tc358743 subdevs.
> > 
> > If we want to bring in the patch that adds a .prepare_stream() op,
> > the csi-2 bus would need to be placed in LP-11 in that op instead.
> > 
> > Philipp, should I go ahead and add your .prepare_stream() patch?
> 
> I think with Russell's explanation of how the imx219 sensor operates,
> we'll have to do something before calling the sensor s_stream, but right
> now I'm still unsure what exactly.

Indeed there appears to be no other way to achieve the LP-11 state than
going through the streaming state for this particular sensor, apart from
starting streaming.

Is there a particular reason why you're waiting for the transmitter to
transfer to LP-11 state? That appears to be the last step which is done in
the csi2_s_stream() callback.

What the sensor does next is to start streaming, and the first thing it does
in that process is to switch to LP-11 state.

Have you tried what happens if you simply drop the LP-11 check? To me that
would seem the right thing to do.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-17 12:22       ` Sakari Ailus
@ 2017-02-17 12:31         ` Russell King - ARM Linux
  2017-02-17 15:04         ` Philipp Zabel
  1 sibling, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-17 12:31 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Philipp Zabel, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, Feb 17, 2017 at 02:22:14PM +0200, Sakari Ailus wrote:
> Hi Philipp, Steve and Russell,
> 
> On Fri, Feb 17, 2017 at 12:43:38PM +0100, Philipp Zabel wrote:
> > I think with Russell's explanation of how the imx219 sensor operates,
> > we'll have to do something before calling the sensor s_stream, but right
> > now I'm still unsure what exactly.
> 
> Indeed there appears to be no other way to achieve the LP-11 state than
> going through the streaming state for this particular sensor, apart from
> starting streaming.
> 
> Is there a particular reason why you're waiting for the transmitter to
> transfer to LP-11 state? That appears to be the last step which is done in
> the csi2_s_stream() callback.
> 
> What the sensor does next is to start streaming, and the first thing it does
> in that process is to switch to LP-11 state.
> 
> Have you tried what happens if you simply drop the LP-11 check? To me that
> would seem the right thing to do.

The Freescale documentation for the i.MX6's CSI2 receiver (chapter 40.3.1)
specifies a precise sequence to be followed to safely bring up the
CSI2 receiver.  Bold text is used to emphasise certain points, which
suggests that it's important to follow the sequence.

Presumably, the reason for this is to ensure that a state machine within
the CSI2 receiver is properly synchronised to the incoming data stream,
and while avoiding the sequence may work, it may not be guaranteed to
work every time.

I guess we need someone from NXP to comment.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-17 10:47   ` Philipp Zabel
  2017-02-17 11:06     ` Russell King - ARM Linux
@ 2017-02-17 14:16     ` Philipp Zabel
  2017-02-17 18:27       ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 14:16 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, 2017-02-17 at 11:47 +0100, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> > Adds MIPI CSI-2 Receiver subdev driver. This subdev is required
> > for sensors with a MIPI CSI2 interface.
> > 
> > Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> > ---
> >  drivers/staging/media/imx/Makefile         |   1 +
> >  drivers/staging/media/imx/imx6-mipi-csi2.c | 573 +++++++++++++++++++++++++++++
> >  2 files changed, 574 insertions(+)
> >  create mode 100644 drivers/staging/media/imx/imx6-mipi-csi2.c
> > 
> > diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile
> > index 878a126..3569625 100644
> > --- a/drivers/staging/media/imx/Makefile
> > +++ b/drivers/staging/media/imx/Makefile
> > @@ -9,3 +9,4 @@ obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
> >  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-ic.o
> >  
> >  obj-$(CONFIG_VIDEO_IMX_CSI) += imx-media-csi.o
> > +obj-$(CONFIG_VIDEO_IMX_CSI) += imx6-mipi-csi2.o
> > diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
> > new file mode 100644
> > index 0000000..23dca80
> > --- /dev/null
> > +++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
> > @@ -0,0 +1,573 @@
> > +/*
> > + * MIPI CSI-2 Receiver Subdev for Freescale i.MX6 SOC.
> > + *
> > + * Copyright (c) 2012-2017 Mentor Graphics Inc.
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + */
> > +#include <linux/clk.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io.h>
> > +#include <linux/iopoll.h>
> > +#include <linux/irq.h>
> > +#include <linux/module.h>
> > +#include <linux/platform_device.h>
> > +#include <media/v4l2-device.h>
> > +#include <media/v4l2-of.h>
> > +#include <media/v4l2-subdev.h>
> > +#include "imx-media.h"
> > +
> > +/*
> > + * there must be 5 pads: 1 input pad from sensor, and
> > + * the 4 virtual channel output pads
> > + */
> > +#define CSI2_SINK_PAD       0
> > +#define CSI2_NUM_SINK_PADS  1
> > +#define CSI2_NUM_SRC_PADS   4
> > +#define CSI2_NUM_PADS       5
> > +
> > +struct csi2_dev {
> > +	struct device          *dev;
> > +	struct v4l2_subdev      sd;
> > +	struct media_pad       pad[CSI2_NUM_PADS];
> > +	struct v4l2_mbus_framefmt format_mbus;
> > +	struct clk             *dphy_clk;
> > +	struct clk             *cfg_clk;
> > +	struct clk             *pix_clk; /* what is this? */
> > +	void __iomem           *base;
> > +	struct v4l2_of_bus_mipi_csi2 bus;
> > +	bool                    on;
> > +	bool                    stream_on;
> > +	bool                    src_linked;
> > +	bool                    sink_linked[CSI2_NUM_SRC_PADS];
> > +};
> > +
> > +#define DEVICE_NAME "imx6-mipi-csi2"
> > +
> > +/* Register offsets */
> > +#define CSI2_VERSION            0x000
> > +#define CSI2_N_LANES            0x004
> > +#define CSI2_PHY_SHUTDOWNZ      0x008
> > +#define CSI2_DPHY_RSTZ          0x00c
> > +#define CSI2_RESETN             0x010
> > +#define CSI2_PHY_STATE          0x014
> > +#define PHY_STOPSTATEDATA_BIT   4
> > +#define PHY_STOPSTATEDATA(n)    BIT(PHY_STOPSTATEDATA_BIT + (n))
> > +#define PHY_RXCLKACTIVEHS       BIT(8)
> > +#define PHY_RXULPSCLKNOT        BIT(9)
> > +#define PHY_STOPSTATECLK        BIT(10)
> > +#define CSI2_DATA_IDS_1         0x018
> > +#define CSI2_DATA_IDS_2         0x01c
> > +#define CSI2_ERR1               0x020
> > +#define CSI2_ERR2               0x024
> > +#define CSI2_MSK1               0x028
> > +#define CSI2_MSK2               0x02c
> > +#define CSI2_PHY_TST_CTRL0      0x030
> > +#define PHY_TESTCLR		BIT(0)
> > +#define PHY_TESTCLK		BIT(1)
> > +#define CSI2_PHY_TST_CTRL1      0x034
> > +#define PHY_TESTEN		BIT(16)
> > +#define CSI2_SFT_RESET          0xf00
> > +
> > +static inline struct csi2_dev *sd_to_dev(struct v4l2_subdev *sdev)
> > +{
> > +	return container_of(sdev, struct csi2_dev, sd);
> > +}
> > +
> > +static void csi2_enable(struct csi2_dev *csi2, bool enable)
> > +{
> > +	if (enable) {
> > +		writel(0x1, csi2->base + CSI2_PHY_SHUTDOWNZ);
> > +		writel(0x1, csi2->base + CSI2_DPHY_RSTZ);
> > +		writel(0x1, csi2->base + CSI2_RESETN);
> > +	} else {
> > +		writel(0x0, csi2->base + CSI2_PHY_SHUTDOWNZ);
> > +		writel(0x0, csi2->base + CSI2_DPHY_RSTZ);
> > +		writel(0x0, csi2->base + CSI2_RESETN);
> > +	}
> > +}
> > +
> > +static void csi2_set_lanes(struct csi2_dev *csi2)
> > +{
> > +	int lanes = csi2->bus.num_data_lanes;
> > +
> > +	writel(lanes - 1, csi2->base + CSI2_N_LANES);
> > +}
> > +
> > +static void dw_mipi_csi2_phy_write(struct csi2_dev *csi2,
> > +				   u32 test_code, u32 test_data)
> > +{
> > +	/* Clear PHY test interface */
> > +	writel(PHY_TESTCLR, csi2->base + CSI2_PHY_TST_CTRL0);
> > +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL1);
> > +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
> > +
> > +	/* Raise test interface strobe signal */
> > +	writel(PHY_TESTCLK, csi2->base + CSI2_PHY_TST_CTRL0);
> > +
> > +	/* Configure address write on falling edge and lower strobe signal */
> > +	writel(PHY_TESTEN | test_code, csi2->base + CSI2_PHY_TST_CTRL1);
> > +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
> > +
> > +	/* Configure data write on rising edge and raise strobe signal */
> > +	writel(test_data, csi2->base + CSI2_PHY_TST_CTRL1);
> > +	writel(PHY_TESTCLK, csi2->base + CSI2_PHY_TST_CTRL0);
> > +
> > +	/* Clear strobe signal */
> > +	writel(0x0, csi2->base + CSI2_PHY_TST_CTRL0);
> > +}
> > +
> > +static void csi2_dphy_init(struct csi2_dev *csi2)
> > +{
> > +	/*
> > +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
> > +	 * clock from the HSI_TX PLL, and a fixed target lane max
> > +	 * bandwidth of 300 Mbps. This value should be derived
> 
> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
> this should be 850 Mbps. Where does this 300 Mbps value come from?

I got it, the dptdin_map value for 300 Mbps is 0x14 in the Rockchip DSI
driver. But that value is written to the register as HSFREQRANGE_SEL(x):

#define HSFREQRANGE_SEL(val)    (((val) & 0x3f) << 1) 

which is 0x28. Further, the Rockchip D-PHY probably is another version,
as its max_mbps goes up to 1500.
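
Spelled out, just the arithmetic from the macro above:

	HSFREQRANGE_SEL(0x14) == ((0x14 & 0x3f) << 1) == 0x28
	/* while a raw register value of 0x14 decodes back to a testdin
	 * value of 0x14 >> 1 == 0x0a */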

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-17 12:22       ` Sakari Ailus
  2017-02-17 12:31         ` Russell King - ARM Linux
@ 2017-02-17 15:04         ` Philipp Zabel
  2017-02-18 11:58           ` Sakari Ailus
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-17 15:04 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Steve Longerbeam, Russell King - ARM Linux, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, hverkuil,
	nick, markus.heiser, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

On Fri, 2017-02-17 at 14:22 +0200, Sakari Ailus wrote:
> Hi Philipp, Steve and Russell,
> 
> On Fri, Feb 17, 2017 at 12:43:38PM +0100, Philipp Zabel wrote:
> > On Thu, 2017-02-16 at 14:27 -0800, Steve Longerbeam wrote:
> > > 
> > > On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > > > On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> > > >> In version 4:
> > > >
> > > > With this version, I get:
> > > >
> > > > [28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > > > [28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> > > >
> > > 
> > > Right, in the imx219, on exit from s_power(), the clock and data lanes
> > > must be placed in the LP-11 state. This has been done in the ov5640 and
> > > tc358743 subdevs.
> > > 
> > > If we want to bring in the patch that adds a .prepare_stream() op,
> > > the csi-2 bus would need to be placed in LP-11 in that op instead.
> > > 
> > > Philipp, should I go ahead and add your .prepare_stream() patch?
> > 
> > I think with Russell's explanation of how the imx219 sensor operates,
> > we'll have to do something before calling the sensor s_stream, but right
> > now I'm still unsure what exactly.
> 
> Indeed there appears to be no other way to achieve the LP-11 state than
> going through the streaming state for this particular sensor, apart from
> starting streaming.
> 
> Is there a particular reason why you're waiting for the transmitter to
> transfer to LP-11 state? That appears to be the last step which is done in
> the csi2_s_stream() callback.
> 
> What the sensor does next is to start streaming, and the first thing it does
> in that process is to switch to LP-11 state.
> 
> Have you tried what happens if you simply drop the LP-11 check? To me that
> would seem the right thing to do.

Removing the wait for LP-11 alone might not be an issue in my case, as
the TC358743 is known to be in stop state all along. So I just have to
make sure that the time between s_stream(csi2) starting the receiver and
s_stream(tc358743) causing LP-11 to be changed to the next state is long
enough for the receiver to detect LP-11 (which I really can't guarantee; I just
have to pray I2C transmissions are slow enough).

The problems start if we have to enable the D-PHY and deassert resets
either before the sensor enters LP-11 state or after it already started
streaming, because we don't know when the sensor drives that state on
the bus.

The latter case is easily simulated by again changing the order so
that the "sensor" (tc358743) is enabled before the CSI-2 receiver D-PHY
initialization. The result is that captures time out, presumably because
the receiver never entered HS mode, since it didn't see LP-11. The
PHY_STATE register contains 0x200, meaning RXCLKACTIVEHS (which we
should wait for in step 7) is never set.

I tried to test the former by instead modifying the tc358743 driver a
bit:

----------8<----------
diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 39d4cdd328c0f..43df80903215b 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -1378,8 +1378,6 @@ static int tc358743_s_dv_timings(struct v4l2_subdev *sd,
        state->timings = *timings;
 
        enable_stream(sd, false);
-       tc358743_set_pll(sd);
-       tc358743_set_csi(sd);
 
        return 0;
 }
@@ -1469,6 +1467,11 @@ static int tc358743_g_mbus_config(struct v4l2_subdev *sd,
 
 static int tc358743_s_stream(struct v4l2_subdev *sd, int enable)
 {
+       if (enable) {
+               tc358743_set_pll(sd);
+               tc358743_set_csi(sd);
+               tc358743_set_csi_color_space(sd);
+       }
        enable_stream(sd, enable);
        if (!enable) {
                /* Put all lanes in PL-11 state (STOPSTATE) */
@@ -1657,9 +1660,6 @@ static int tc358743_set_fmt(struct v4l2_subdev *sd,
        state->vout_color_sel = vout_color_sel;
 
        enable_stream(sd, false);
-       tc358743_set_pll(sd);
-       tc358743_set_csi(sd);
-       tc358743_set_csi_color_space(sd);
 
        return 0;
 }
---------->8----------

That should enable the CSI-2 Tx and put it in LP-11 only after the CSI-2
receiver is enabled, right before starting streaming.

That did seem to work the few times I tested, but I have no idea how
this will behave with other chips that do something else to the bus
while not streaming, and whether it is ok to enable the CSI right after
the sensor without waiting for the CSI-2 bus to settle.

regards
Philipp

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-17 14:16     ` Philipp Zabel
@ 2017-02-17 18:27       ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-17 18:27 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/17/2017 06:16 AM, Philipp Zabel wrote:
> On Fri, 2017-02-17 at 11:47 +0100, Philipp Zabel wrote:
>> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>> +static void csi2_dphy_init(struct csi2_dev *csi2)
>>> +{
>>> +	/*
>>> +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
>>> +	 * clock from the HSI_TX PLL, and a fixed target lane max
>>> +	 * bandwidth of 300 Mbps. This value should be derived
>>
>> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
>> this should be 850 Mbps. Where does this 300 Mbps value come from?
>
> I got it, the dptdin_map value for 300 Mbps is 0x14 in the Rockchip DSI
> driver. But that value is written to the register as HSFREQRANGE_SEL(x):
>
> #define HSFREQRANGE_SEL(val)    (((val) & 0x3f) << 1)

Ah you are right, 0x14 would be a "testdin" value of 0x0a, which from
the Rockchip table would be 950 MHz per lane.

But thanks for pointing out the table at
https://community.nxp.com/docs/DOC-94312. That table is what
should be referenced in the above comment (850 MHz per lane
for a 27MHz reference clock). I will update the comment based
on that table.

Steve



>
> which is 0x28. Further, the Rockchip D-PHY probably is another version,
> as its max_mbps goes up to 1500.
>
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-16  2:19 ` [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates Steve Longerbeam
@ 2017-02-18  1:11   ` Steve Longerbeam
  2017-02-18  1:12   ` Steve Longerbeam
  2017-02-20 22:04   ` Sakari Ailus
  2 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-18  1:11 UTC (permalink / raw)
  To: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, linux, mchehab, hverkuil, nick, markus.heiser,
	p.zabel, laurent.pinchart+renesas, bparrot, geert, arnd,
	sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King



On 02/15/2017 06:19 PM, Steve Longerbeam wrote:
> From: Russell King <rmk+kernel@armlinux.org.uk>
>
> Setting and getting frame rates is part of the negotiation mechanism
> between subdevs.  The lack of support means that a frame rate at the
> sensor can't be negotiated through the subdev path.
>
> Add support at MIPI CSI2 level for handling this part of the
> negotiation.
>
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>


Hi Russell,

I signed-off on this but after more review I'm not sure this is right.

The CSI-2 receiver really has no control over frame rate. Its output
frame rate is the same as the rate that is delivered to it.

So this subdev should either not implement these ops, or it should
refer them to the attached source subdev.
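
Referring them on would look something like this (untested sketch, and it
assumes the entity feeding our sink pad is a v4l2 subdev):

static int csi2_g_frame_interval(struct v4l2_subdev *sd,
				 struct v4l2_subdev_frame_interval *fi)
{
	struct csi2_dev *csi2 = sd_to_dev(sd);
	struct media_pad *remote;

	/* forward the request to whatever subdev feeds our sink pad */
	remote = media_entity_remote_pad(&csi2->pad[CSI2_SINK_PAD]);
	if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
		return -ENODEV;

	return v4l2_subdev_call(media_entity_to_v4l2_subdev(remote->entity),
				video, g_frame_interval, fi);
}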

Steve

> ---
>  drivers/staging/media/imx/imx6-mipi-csi2.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
> index 23dca80..c62f14e 100644
> --- a/drivers/staging/media/imx/imx6-mipi-csi2.c
> +++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
> @@ -34,6 +34,7 @@ struct csi2_dev {
>  	struct v4l2_subdev      sd;
>  	struct media_pad       pad[CSI2_NUM_PADS];
>  	struct v4l2_mbus_framefmt format_mbus;
> +	struct v4l2_fract      frame_interval;
>  	struct clk             *dphy_clk;
>  	struct clk             *cfg_clk;
>  	struct clk             *pix_clk; /* what is this? */
> @@ -397,6 +398,30 @@ static int csi2_set_fmt(struct v4l2_subdev *sd,
>  	return 0;
>  }
>
> +static int csi2_g_frame_interval(struct v4l2_subdev *sd,
> +				 struct v4l2_subdev_frame_interval *fi)
> +{
> +	struct csi2_dev *csi2 = sd_to_dev(sd);
> +
> +	fi->interval = csi2->frame_interval;
> +
> +	return 0;
> +}
> +
> +static int csi2_s_frame_interval(struct v4l2_subdev *sd,
> +				 struct v4l2_subdev_frame_interval *fi)
> +{
> +	struct csi2_dev *csi2 = sd_to_dev(sd);
> +
> +	/* Output pads mirror active input pad, no limits on input pads */
> +	if (fi->pad != CSI2_SINK_PAD)
> +		fi->interval = csi2->frame_interval;
> +
> +	csi2->frame_interval = fi->interval;
> +
> +	return 0;
> +}
> +
>  /*
>   * retrieve our pads parsed from the OF graph by the media device
>   */
> @@ -430,6 +455,8 @@ static struct v4l2_subdev_core_ops csi2_core_ops = {
>
>  static struct v4l2_subdev_video_ops csi2_video_ops = {
>  	.s_stream = csi2_s_stream,
> +	.g_frame_interval = csi2_g_frame_interval,
> +	.s_frame_interval = csi2_s_frame_interval,
>  };
>
>  static struct v4l2_subdev_pad_ops csi2_pad_ops = {
>

-- 
Steve Longerbeam | Senior Embedded Engineer, ESD Services
Mentor Embedded(tm) | 46871 Bayside Parkway, Fremont, CA 94538
P 510.354.5838 | M 408.410.2735

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-16  2:19 ` [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates Steve Longerbeam
  2017-02-18  1:11   ` Steve Longerbeam
@ 2017-02-18  1:12   ` Steve Longerbeam
  2017-02-18  9:23     ` Russell King - ARM Linux
  2017-02-20 22:04   ` Sakari Ailus
  2 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-18  1:12 UTC (permalink / raw)
  To: Russell King, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, linux, mchehab, hverkuil, nick, markus.heiser,
	p.zabel, laurent.pinchart+renesas, bparrot, geert, arnd,
	sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel
  Cc: devicetree, linux-kernel, linux-arm-kernel, linux-media, devel



On 02/15/2017 06:19 PM, Steve Longerbeam wrote:
> From: Russell King <rmk+kernel@armlinux.org.uk>
>
> Setting and getting frame rates is part of the negotiation mechanism
> between subdevs.  The lack of support means that a frame rate at the
> sensor can't be negotiated through the subdev path.
>
> Add support at MIPI CSI2 level for handling this part of the
> negotiation.
>
> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>


Hi Russell,

I signed-off on this but after more review I'm not sure this is right.

The CSI-2 receiver really has no control over frame rate. Its output
frame rate is the same as the rate that is delivered to it.

So this subdev should either not implement these ops, or it should
refer them to the attached source subdev.

Steve

> ---
>  drivers/staging/media/imx/imx6-mipi-csi2.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/drivers/staging/media/imx/imx6-mipi-csi2.c b/drivers/staging/media/imx/imx6-mipi-csi2.c
> index 23dca80..c62f14e 100644
> --- a/drivers/staging/media/imx/imx6-mipi-csi2.c
> +++ b/drivers/staging/media/imx/imx6-mipi-csi2.c
> @@ -34,6 +34,7 @@ struct csi2_dev {
>  	struct v4l2_subdev      sd;
>  	struct media_pad       pad[CSI2_NUM_PADS];
>  	struct v4l2_mbus_framefmt format_mbus;
> +	struct v4l2_fract      frame_interval;
>  	struct clk             *dphy_clk;
>  	struct clk             *cfg_clk;
>  	struct clk             *pix_clk; /* what is this? */
> @@ -397,6 +398,30 @@ static int csi2_set_fmt(struct v4l2_subdev *sd,
>  	return 0;
>  }
>
> +static int csi2_g_frame_interval(struct v4l2_subdev *sd,
> +				 struct v4l2_subdev_frame_interval *fi)
> +{
> +	struct csi2_dev *csi2 = sd_to_dev(sd);
> +
> +	fi->interval = csi2->frame_interval;
> +
> +	return 0;
> +}
> +
> +static int csi2_s_frame_interval(struct v4l2_subdev *sd,
> +				 struct v4l2_subdev_frame_interval *fi)
> +{
> +	struct csi2_dev *csi2 = sd_to_dev(sd);
> +
> +	/* Output pads mirror active input pad, no limits on input pads */
> +	if (fi->pad != CSI2_SINK_PAD)
> +		fi->interval = csi2->frame_interval;
> +
> +	csi2->frame_interval = fi->interval;
> +
> +	return 0;
> +}
> +
>  /*
>   * retrieve our pads parsed from the OF graph by the media device
>   */
> @@ -430,6 +455,8 @@ static struct v4l2_subdev_core_ops csi2_core_ops = {
>
>  static struct v4l2_subdev_video_ops csi2_video_ops = {
>  	.s_stream = csi2_s_stream,
> +	.g_frame_interval = csi2_g_frame_interval,
> +	.s_frame_interval = csi2_s_frame_interval,
>  };
>
>  static struct v4l2_subdev_pad_ops csi2_pad_ops = {
>

-- 
Steve Longerbeam | Senior Embedded Engineer, ESD Services
Mentor Embedded(tm) | 46871 Bayside Parkway, Fremont, CA 94538
P 510.354.5838 | M 408.410.2735

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-18  1:12   ` Steve Longerbeam
@ 2017-02-18  9:23     ` Russell King - ARM Linux
  2017-02-18 17:29       ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-18  9:23 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel

On Fri, Feb 17, 2017 at 05:12:44PM -0800, Steve Longerbeam wrote:
> Hi Russell,
> 
> I signed-off on this but after more review I'm not sure this is right.
> 
> The CSI-2 receiver really has no control over frame rate. Its output
> frame rate is the same as the rate that is delivered to it.
> 
> So this subdev should either not implement these ops, or it should
> refer them to the attached source subdev.

Where in the V4L2 documentation does it say that is permissible?

If you don't implement these, media-ctl fails to propagate _anything_
to the next sink pad if you specify a frame rate, because media-ctl
throws an error and exits immediately.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-17 15:04         ` Philipp Zabel
@ 2017-02-18 11:58           ` Sakari Ailus
  0 siblings, 0 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-02-18 11:58 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, Russell King - ARM Linux, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, hverkuil,
	nick, markus.heiser, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Philipp and Russell,

On Fri, Feb 17, 2017 at 04:04:30PM +0100, Philipp Zabel wrote:
> On Fri, 2017-02-17 at 14:22 +0200, Sakari Ailus wrote:
> > Hi Philipp, Steve and Russell,
> > 
> > On Fri, Feb 17, 2017 at 12:43:38PM +0100, Philipp Zabel wrote:
> > > On Thu, 2017-02-16 at 14:27 -0800, Steve Longerbeam wrote:
> > > > 
> > > > On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
> > > > > On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
> > > > >> In version 4:
> > > > >
> > > > > With this version, I get:
> > > > >
> > > > > [28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
> > > > > [28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
> > > > >
> > > > 
> > > > Right, in the imx219, on exit from s_power(), the clock and data lanes
> > > > must be placed in the LP-11 state. This has been done in the ov5640 and
> > > > tc358743 subdevs.
> > > > 
> > > > If we want to bring in the patch that adds a .prepare_stream() op,
> > > > the csi-2 bus would need to be placed in LP-11 in that op instead.
> > > > 
> > > > Philipp, should I go ahead and add your .prepare_stream() patch?
> > > 
> > > I think with Russell's explanation of how the imx219 sensor operates,
> > > we'll have to do something before calling the sensor s_stream, but right
> > > now I'm still unsure what exactly.
> > 
> > Indeed there appears to be no other way to achieve the LP-11 state than
> > going through the streaming state for this particular sensor, apart from
> > starting streaming.
> > 
> > Is there a particular reason why you're waiting for the transmitter to
> > transfer to LP-11 state? That appears to be the last step which is done in
> > the csi2_s_stream() callback.
> > 
> > What the sensor does next is to start streaming, and the first thing it does
> > in that process is to switch to LP-11 state.
> > 
> > Have you tried what happens if you simply drop the LP-11 check? To me that
> > would seem the right thing to do.
> 
> Removing the wait for LP-11 alone might not be an issue in my case, as
> the TC358743 is known to be in stop state all along. So I just have to
> make sure that the time between s_stream(csi2) starting the receiver and
> s_stream(tc358743) causing LP-11 to be changed to the next state is long
> enough for the receiver to detect LP-11 (which I really can't, I just
> have to pray I2C transmissions are slow enough).

Fair enough; it appears that the timing of the bus setup is indeed ill
defined between the transmitter and the receiver. So there can be hardware
specific matters in stream starting that have to be taken into account. :-(

This is quite annoying, as there does not appear to be a good way to tell
the sensor to set its transmitter to LP-11 state without going through the
streaming state. If there was, just doing that in the s_power(, 1) callback
would be quite practical.

I guess then there's really no way to avoid having an extra callback that
would explicitly tell the sensor to go to LP-11 state. It should be no issue
if the transmitter is already in that state from power-on, but the new
callback should guarantee that.

Another question is how far you need to proceed with streaming in a
case where you want to go to LP-11 through streaming. Is simply starting
streaming and stopping it right after enough? On some devices it might be
but not on others. As the receiver is not started yet, you can't wait for
the first frame to start either. And how long it would take for the first
frame to start is not defined either in the general case: for a driver such as
SMIA that's not exactly aware of the underlying hardware but is relying on a
standard device interface and behaviour, such an approach could be best effort
only. Of course it's possible to make changes to the driver if you encounter
a combination of a sensor and a receiver that doesn't seem to work, but
still it's hardly an ideal solution.

How about calling the new callback phy_prepare(), for instance? We could
document that it must explicitly set up the transmitter PHY in LP-11 state
for CSI-2. The current documentation states that the device should already
be in LP-11 after power-on, but that apparently is not the case in
general.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 00/36] i.MX Media Driver
  2017-02-16 22:57     ` Russell King - ARM Linux
  2017-02-17 10:39       ` Philipp Zabel
@ 2017-02-18 17:21       ` Steve Longerbeam
  1 sibling, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-18 17:21 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: p.zabel, robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 02:57 PM, Russell King - ARM Linux wrote:
> On Thu, Feb 16, 2017 at 02:27:41PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 02/16/2017 02:20 PM, Russell King - ARM Linux wrote:
>>> On Wed, Feb 15, 2017 at 06:19:02PM -0800, Steve Longerbeam wrote:
>>>> In version 4:
>>>
>>> With this version, I get:
>>>
>>> [28762.892053] imx6-mipi-csi2: LP-11 timeout, phy_state = 0x00000000
>>> [28762.899409] ipu1_csi0: pipeline_set_stream failed with -110
>>>
>>
>> Right, in the imx219, on exit from s_power(), the clock and data lanes
>> must be placed in the LP-11 state. This has been done in the ov5640 and
>> tc358743 subdevs.
>
> The only way to do that is to enable streaming from the sensor, wait
> an initialisation time, and then disable streaming, and wait for the
> current line to finish.  There is _no_ other way to get the sensor to
> place its clock and data lines into LP-11 state.
>
> For that to happen, we need to program the sensor a bit more than we
> currently do at power on (to a minimal resolution, and setting up the
> PLLs), and introduce another 4ms on top of the 8ms or so that the
> runtime resume function already takes.

This is basically the same procedure that was necessary to get the
OV5640 to enter LP-11 on all its lanes. The power-on procedure writes
an initial register set that gets the sensor to a default resolution,
turns on streaming briefly (I wait 1 msec, which is probably too long,
but it's not clear to me how to determine that wait time), and then
disables streaming. All lanes are then in LP-11 state.
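
In rough code form, the power-on path ends up doing something like the
following (a sketch only; the helper names are illustrative, not the
actual ov5640 driver functions):

static int ov5640_enter_lp11(struct ov5640_dev *sensor)
{
	int ret;

	/* load a minimal register set for a default resolution */
	ret = ov5640_load_initial_regs(sensor);
	if (ret)
		return ret;

	/* briefly enable streaming so the PHY runs through its init... */
	ret = ov5640_set_stream(sensor, true);
	if (ret)
		return ret;
	usleep_range(1000, 2000);	/* the ~1 msec wait mentioned above */

	/* ...then stop again: the lanes are left in LP-11 (stop state) */
	return ov5640_set_stream(sensor, false);
}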


Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-18  9:23     ` Russell King - ARM Linux
@ 2017-02-18 17:29       ` Steve Longerbeam
  2017-02-18 18:08         ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-18 17:29 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel



On 02/18/2017 01:23 AM, Russell King - ARM Linux wrote:
> On Fri, Feb 17, 2017 at 05:12:44PM -0800, Steve Longerbeam wrote:
>> Hi Russell,
>>
>> I signed-off on this but after more review I'm not sure this is right.
>>
>> The CSI-2 receiver really has no control over frame rate. Its output
>> frame rate is the same as the rate that is delivered to it.
>>
>> So this subdev should either not implement these ops, or it should
>> refer them to the attached source subdev.
>
> Where in the V4L2 documentation does it say that is permissible?
>

https://www.linuxtv.org/downloads/v4l-dvb-apis-old/vidioc-subdev-g-frame-interval.html

"The frame interval only makes sense for sub-devices that can control 
the frame period on their own. This includes, for instance, image 
sensors and TV tuners. Sub-devices that don't support frame intervals 
must not implement these ioctls."


> If you don't implement these, media-ctl fails to propagate _anything_
> to the next sink pad if you specify a frame rate, because media-ctl
> throws an error and exits immediately.
>

But I agree with you here. I think our only option is to ignore that
quoted requirement above and propagate [gs]_frame_interval all the way
to the CSI (which can control the frame rate via frame skipping).
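
The frame-skipping end of that would be roughly the following (illustrative
only; the real CSI supports a limited set of skip patterns, so the result
would still have to be rounded to one of those):

/* Keep 1 of every (skip + 1) input frames, choosing the largest skip
 * that still gives at least the requested output rate. */
static void csi_frame_skip(const struct v4l2_fract *in, struct v4l2_fract *out)
{
	u32 in_rate, want_rate, skip;

	if (!in->numerator || !in->denominator ||
	    !out->numerator || !out->denominator) {
		*out = *in;
		return;
	}

	/* rates in frames per 1000 seconds, to stay in integer math */
	in_rate = in->denominator * 1000 / in->numerator;
	want_rate = out->denominator * 1000 / out->numerator;
	if (!want_rate || want_rate >= in_rate) {
		*out = *in;
		return;
	}

	skip = in_rate / want_rate - 1;
	out->numerator = in->numerator * (skip + 1);
	out->denominator = in->denominator;
}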

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-18 17:29       ` Steve Longerbeam
@ 2017-02-18 18:08         ` Russell King - ARM Linux
  0 siblings, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-18 18:08 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel

On Sat, Feb 18, 2017 at 09:29:17AM -0800, Steve Longerbeam wrote:
> On 02/18/2017 01:23 AM, Russell King - ARM Linux wrote:
> >On Fri, Feb 17, 2017 at 05:12:44PM -0800, Steve Longerbeam wrote:
> >>Hi Russell,
> >>
> >>I signed-off on this but after more review I'm not sure this is right.
> >>
> >>The CSI-2 receiver really has no control over frame rate. Its output
> >>frame rate is the same as the rate that is delivered to it.
> >>
> >>So this subdev should either not implement these ops, or it should
> >>refer them to the attached source subdev.
> >
> >Where in the V4L2 documentation does it say that is permissible?
> >
> 
> https://www.linuxtv.org/downloads/v4l-dvb-apis-old/vidioc-subdev-g-frame-interval.html
> 
> "The frame interval only makes sense for sub-devices that can control the
> frame period on their own. This includes, for instance, image sensors and TV
> tuners. Sub-devices that don't support frame intervals must not implement
> these ioctls."

That sounds clear - but the TV tuner example seems odd - the frame rate
is determined at transmission time, not reception time.  Yes, it's
possible to skip frames (which would be scaling) but you can't
_control_ the frame rate per se.

> >If you don't implement these, media-ctl fails to propagate _anything_
> >to the next sink pad if you specify a frame rate, because media-ctl
> >throws an error and exits immediately.
> >
> 
> But I agree with you here. I think our only option is to ignore that
> quoted requirement above and propagate [gs]_frame_interval all the way
> to the CSI (which can control the frame rate via frame skipping).

Sounds like something to tackle the media maintainers over - the
documentation vs media-ctl seem to have different ideas on this
point.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 12/36] add mux and video interface bridge entity functions
  2017-02-16  2:19 ` [PATCH v4 12/36] add mux and video interface bridge entity functions Steve Longerbeam
@ 2017-02-19 21:28   ` Pavel Machek
  2017-02-22 17:19     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Pavel Machek @ 2017-02-19 21:28 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 1171 bytes --]

On Wed 2017-02-15 18:19:14, Steve Longerbeam wrote:
> From: Philipp Zabel <p.zabel@pengutronix.de>
> 
> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> 
> - renamed MEDIA_ENT_F_MUX to MEDIA_ENT_F_VID_MUX
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>

This is a slightly "interesting" format for a changelog. Normally sign-offs
go below.

> diff --git a/Documentation/media/uapi/mediactl/media-types.rst b/Documentation/media/uapi/mediactl/media-types.rst
> index 3e03dc2..023be29 100644
> --- a/Documentation/media/uapi/mediactl/media-types.rst
> +++ b/Documentation/media/uapi/mediactl/media-types.rst
> @@ -298,6 +298,28 @@ Types and flags used to represent the media graph elements
>  	  received on its sink pad and outputs the statistics data on
>  	  its source pad.
>  
> +    -  ..  row 29
> +
> +       ..  _MEDIA-ENT-F-MUX:
> +
> +       -  ``MEDIA_ENT_F_MUX``

And you probably want to rename it here, too.

With that fixed:

Reviewed-by: Pavel Machek <pavel@ucw.cz>
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-02-16  2:19 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Steve Longerbeam
@ 2017-02-19 21:44   ` Pavel Machek
  2017-03-02 16:02   ` Sakari Ailus
  1 sibling, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-02-19 21:44 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 437 bytes --]

On Wed 2017-02-15 18:19:16, Steve Longerbeam wrote:
> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> all subdev entities in a pipeline to a given video device.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>

Reviewed-by: Pavel Machek <pavel@ucw.cz>

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 15/36] platform: add video-multiplexer subdevice driver
  2017-02-16  2:19 ` [PATCH v4 15/36] platform: add video-multiplexer subdevice driver Steve Longerbeam
@ 2017-02-19 22:02   ` Pavel Machek
  2017-02-21  9:11     ` Philipp Zabel
  2017-02-27 14:41   ` Rob Herring
  1 sibling, 1 reply; 228+ messages in thread
From: Pavel Machek @ 2017-02-19 22:02 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Sascha Hauer, Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 3556 bytes --]

Hi!

> From: Philipp Zabel <p.zabel@pengutronix.de>
> 
> This driver can handle SoC internal and external video bus multiplexers,
> controlled either by register bit fields or by a GPIO. The subdevice
> passes through frame interval and mbus configuration of the active input
> to the output side.
> 
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> --
>

Again, this is a slightly non-standard format. Normally changes from v1
go below the "---" line, but in your case that would cut off the sign-off...

> diff --git a/Documentation/devicetree/bindings/media/video-multiplexer.txt b/Documentation/devicetree/bindings/media/video-multiplexer.txt
> new file mode 100644
> index 0000000..9d133d9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/media/video-multiplexer.txt
> @@ -0,0 +1,59 @@
> +Video Multiplexer
> +=================
> +
> +Video multiplexers allow to select between multiple input ports. Video received
> +on the active input port is passed through to the output port. Muxes described
> +by this binding may be controlled by a syscon register bitfield or by a GPIO.
> +
> +Required properties:
> +- compatible : should be "video-multiplexer"
> +- reg: should be register base of the register containing the control bitfield
> +- bit-mask: bitmask of the control bitfield in the control register
> +- bit-shift: bit offset of the control bitfield in the control register
> +- gpios: alternatively to reg, bit-mask, and bit-shift, a single GPIO phandle
> +  may be given to switch between two inputs
> +- #address-cells: should be <1>
> +- #size-cells: should be <0>
> +- port@*: at least three port nodes containing endpoints connecting to the
> +  source and sink devices according to of_graph bindings. The last port is
> +  the output port, all others are inputs.

At least three? I guess it is exactly three with the gpio?

Plus you might want to describe which port corresponds to which gpio
state/bitfield value...

> +struct vidsw {

I knew it: it is secretly a switch! :-).

> +static void vidsw_set_active(struct vidsw *vidsw, int active)
> +{
> +	vidsw->active = active;
> +	if (active < 0)
> +		return;
> +
> +	dev_dbg(vidsw->subdev.dev, "setting %d active\n", active);
> +
> +	if (vidsw->field)
> +		regmap_field_write(vidsw->field, active);
> +	else if (vidsw->gpio)
> +		gpiod_set_value(vidsw->gpio, active);

         else dev_err()...?
	 
> +static int vidsw_async_init(struct vidsw *vidsw, struct device_node *node)
> +{
> +	struct device_node *ep;
> +	u32 portno;
> +	int numports;

numports is int, so I guess portno should be, too?

> +		portno = endpoint.base.port;
> +		if (portno >= numports - 1)
> +			continue;


> +	if (!pad) {
> +		/* Mirror the input side on the output side */
> +		cfg->type = vidsw->endpoint[vidsw->active].bus_type;
> +		if (cfg->type == V4L2_MBUS_PARALLEL ||
> +		    cfg->type == V4L2_MBUS_BT656)
> +			cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
> +	}

Will this need support for other V4L2_MBUS_ values?

> +MODULE_AUTHOR("Sascha Hauer, Pengutronix");
> +MODULE_AUTHOR("Philipp Zabel, Pengutronix");

Normally, MODULE_AUTHOR contains comma separated names of authors,
perhaps with <email@addresses>. Not sure two MODULE_AUTHORs per file
will work.

Thanks,
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-16  2:19 ` [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates Steve Longerbeam
  2017-02-18  1:11   ` Steve Longerbeam
  2017-02-18  1:12   ` Steve Longerbeam
@ 2017-02-20 22:04   ` Sakari Ailus
  2017-02-20 22:56     ` Steve Longerbeam
  2017-02-21  0:13     ` Russell King - ARM Linux
  2 siblings, 2 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-02-20 22:04 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam

Hi Steve,

On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> From: Russell King <rmk+kernel@armlinux.org.uk>
> 
> Setting and getting frame rates is part of the negotiation mechanism
> between subdevs.  The lack of support means that a frame rate at the
> sensor can't be negotiated through the subdev path.

Just wondering --- what do you need this for?

-- 
Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-20 22:04   ` Sakari Ailus
@ 2017-02-20 22:56     ` Steve Longerbeam
  2017-02-20 23:47       ` Steve Longerbeam
  2017-02-21 12:15       ` Sakari Ailus
  2017-02-21  0:13     ` Russell King - ARM Linux
  1 sibling, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-20 22:56 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam



On 02/20/2017 02:04 PM, Sakari Ailus wrote:
> Hi Steve,
>
> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
>> From: Russell King <rmk+kernel@armlinux.org.uk>
>>
>> Setting and getting frame rates is part of the negotiation mechanism
>> between subdevs.  The lack of support means that a frame rate at the
>> sensor can't be negotiated through the subdev path.
>
> Just wondering --- what do you need this for?


Hi Sakari,

i.MX does need the ability to negotiate the frame rates in the
pipelines. The CSI has the ability to skip frames at the output,
which is something Philipp added to the CSI subdev. That affects
frame interval at the CSI output.

But as Russell pointed out, the lack of [gs]_frame_interval op
causes media-ctl to fail:

media-ctl -v -d /dev/media1 --set-v4l2 
'"imx6-mipi-csi2":1[fmt:SGBRG8/512x512@1/30]'

Opening media device /dev/media1
Enumerating entities
Found 29 entities
Enumerating pads and links
Setting up format SGBRG8 512x512 on pad imx6-mipi-csi2/1
Format set: SGBRG8 512x512
Setting up frame interval 1/30 on entity imx6-mipi-csi2
Unable to set frame interval: Inappropriate ioctl for device (-25)Unable 
to setup formats: Inappropriate ioctl for device (25)


So i.MX needs to implement this op in every subdev in the
pipeline, otherwise it's not possible to configure the
pipeline with media-ctl.
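
For a subdev that doesn't generate or control the rate itself, the ops
pretty much boil down to storing and reporting one interval. Just as a
sketch (the struct and names below are made up, they are not code from
this series):

struct my_sd_priv {
	struct v4l2_subdev sd;
	struct v4l2_fract frame_interval;	/* negotiated interval */
};

static int my_sd_g_frame_interval(struct v4l2_subdev *sd,
				  struct v4l2_subdev_frame_interval *fi)
{
	struct my_sd_priv *priv = container_of(sd, struct my_sd_priv, sd);

	/* report whatever was last negotiated on this subdev */
	fi->interval = priv->frame_interval;
	return 0;
}

static int my_sd_s_frame_interval(struct v4l2_subdev *sd,
				  struct v4l2_subdev_frame_interval *fi)
{
	struct my_sd_priv *priv = container_of(sd, struct my_sd_priv, sd);

	/* no rate control here, just remember the requested interval */
	priv->frame_interval = fi->interval;
	return 0;
}

static const struct v4l2_subdev_video_ops my_sd_video_ops = {
	.g_frame_interval = my_sd_g_frame_interval,
	.s_frame_interval = my_sd_s_frame_interval,
};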


Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-20 22:56     ` Steve Longerbeam
@ 2017-02-20 23:47       ` Steve Longerbeam
  2017-02-21 12:15       ` Sakari Ailus
  1 sibling, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-20 23:47 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam



On 02/20/2017 02:56 PM, Steve Longerbeam wrote:
>
>
> On 02/20/2017 02:04 PM, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
>>> From: Russell King <rmk+kernel@armlinux.org.uk>
>>>
>>> Setting and getting frame rates is part of the negotiation mechanism
>>> between subdevs.  The lack of support means that a frame rate at the
>>> sensor can't be negotiated through the subdev path.
>>
>> Just wondering --- what do you need this for?
>
>
> Hi Sakari,
>
> i.MX does need the ability to negotiate the frame rates in the
> pipelines. The CSI has the ability to skip frames at the output,
> which is something Philipp added to the CSI subdev. That affects
> frame interval at the CSI output.
>
> But as Russell pointed out, the lack of [gs]_frame_interval op
> causes media-ctl to fail:
>
> media-ctl -v -d /dev/media1 --set-v4l2
> '"imx6-mipi-csi2":1[fmt:SGBRG8/512x512@1/30]'
>
> Opening media device /dev/media1
> Enumerating entities
> Found 29 entities
> Enumerating pads and links
> Setting up format SGBRG8 512x512 on pad imx6-mipi-csi2/1
> Format set: SGBRG8 512x512
> Setting up frame interval 1/30 on entity imx6-mipi-csi2
> Unable to set frame interval: Inappropriate ioctl for device (-25)Unable
> to setup formats: Inappropriate ioctl for device (25)
>
>
> So i.MX needs to implement this op in every subdev in the
> pipeline, otherwise it's not possible to configure the
> pipeline with media-ctl.
>


Hi Russell,

But Sakari brings up a good point. The mipi csi-2 receiver doesn't
have any control over frame rate, so why do you even need to
give it this information via media-ctl?

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-20 22:04   ` Sakari Ailus
  2017-02-20 22:56     ` Steve Longerbeam
@ 2017-02-21  0:13     ` Russell King - ARM Linux
  2017-02-21  0:18       ` Steve Longerbeam
  2017-02-21 12:37       ` Sakari Ailus
  1 sibling, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-21  0:13 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> > From: Russell King <rmk+kernel@armlinux.org.uk>
> > 
> > Setting and getting frame rates is part of the negotiation mechanism
> > between subdevs.  The lack of support means that a frame rate at the
> > sensor can't be negotiated through the subdev path.
> 
> Just wondering --- what do you need this for?

The v4l2 documentation contradicts the media-ctl implementation.

While v4l2 documentation says:

  These ioctls are used to get and set the frame interval at specific
  subdev pads in the image pipeline. The frame interval only makes sense
  for sub-devices that can control the frame period on their own. This
  includes, for instance, image sensors and TV tuners. Sub-devices that
  don't support frame intervals must not implement these ioctls.

However, when trying to configure the pipeline using media-ctl, eg:

media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
Unable to setup formats: Inappropriate ioctl for device (25)
media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'

The problem there is that the format setting for the csi2 does not get
propagated forward:

$ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
...
open("/dev/v4l-subdev16", O_RDWR)       = 3
ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbec16244) = 0
ioctl(3, VIDIOC_SUBDEV_S_FRAME_INTERVAL, 0xbec162a4) = -1 ENOTTY (Inappropriate
ioctl for device)
fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0
write(1, "Unable to setup formats: Inappro"..., 61) = 61
Unable to setup formats: Inappropriate ioctl for device (25)
close(3)                                = 0
exit_group(1)                           = ?
+++ exited with 1 +++

because media-ctl exits as soon as it encounters the error while trying
to set the frame rate.

This makes implementing setup of the media pipeline in shell scripts
unnecessarily difficult - as you need to then know whether an entity
is likely not to support the VIDIOC_SUBDEV_S_FRAME_INTERVAL call,
and either avoid specifying a frame rate:

$ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616]'
...
open("/dev/v4l-subdev16", O_RDWR)       = 3
ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
open("/dev/v4l-subdev0", O_RDWR)        = 4
ioctl(4, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
close(4)                                = 0
close(3)                                = 0
exit_group(0)                           = ?
+++ exited with 0 +++

or manually set the format on the sink.

Allowing the S_FRAME_INTERVAL call seems to me to be more in keeping
with the negotiation mechanism that is implemented in subdevs, and
IMHO should be implemented inside the kernel as a pad operation along
with the format negotiation, especially so as frame skipping is
defined as scaling, in just the same way as the frame size is also
scaling:

       -  ``MEDIA_ENT_F_PROC_VIDEO_SCALER``

       -  Video scaler. An entity capable of video scaling must have
          at least one sink pad and one source pad, and scale the
          video frame(s) received on its sink pad(s) to a different
          resolution output on its source pad(s). The range of
          supported scaling ratios is entity-specific and can differ
          between the horizontal and vertical directions (in particular
          scaling can be supported in one direction only). Binning and
          skipping are considered as scaling.

Although, this is vague, as it doesn't define what it means by "skipping",
whether that's skipping pixels (iow, sub-sampling) or whether that's
frame skipping.

Then there's the issue where, if you have this setup:

 camera --> csi2 receiver --> csi --> capture

and the "csi" subdev can skip frames, you need to know (a) at the CSI
sink pad what the frame rate is of the source (b) what the desired
source pad frame rate is, so you can configure the frame skipping.
So, does the csi subdev have to walk back through the media graph
looking for the frame rate?  Does the capture device have to walk back
through the media graph looking for some subdev to tell it what the
frame rate is - the capture device certainly can't go straight to the
sensor to get an answer to that question, because that bypasses the
effect of the CSI frame skipping (which will lower the frame rate.)
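
To put that in concrete terms, this is roughly the calculation the CSI
subdev would have to do once it knows both intervals (sketch only, the
helper name and placement are made up):

static u32 csi_calc_frame_skip(const struct v4l2_fract *sink_fi,
			       const struct v4l2_fract *src_fi)
{
	u64 sink_usec, src_usec;

	if (!sink_fi->denominator || !src_fi->denominator)
		return 0;

	sink_usec = DIV_ROUND_CLOSEST_ULL(
		(u64)sink_fi->numerator * USEC_PER_SEC,
		sink_fi->denominator);
	src_usec = DIV_ROUND_CLOSEST_ULL(
		(u64)src_fi->numerator * USEC_PER_SEC,
		src_fi->denominator);

	/* output not slower than input: nothing to skip */
	if (!sink_usec || src_usec <= sink_usec)
		return 0;

	/* e.g. 1/30 in and 1/15 out gives 1: drop every other frame */
	return div64_u64(src_usec, sink_usec) - 1;
}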

IMHO, frame rate is just another format property, just like the
resolution and data format itself, and v4l2 should be treating it no
differently.

In any case, the documentation vs media-ctl create something of a very
obscure situation, one that probably needs solving one way or another.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21  0:13     ` Russell King - ARM Linux
@ 2017-02-21  0:18       ` Steve Longerbeam
  2017-02-21  8:50         ` Philipp Zabel
  2017-02-21 12:37       ` Sakari Ailus
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-21  0:18 UTC (permalink / raw)
  To: Russell King - ARM Linux, Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 5381 bytes --]



On 02/20/2017 04:13 PM, Russell King - ARM Linux wrote:
> On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
>> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
>>> From: Russell King <rmk+kernel@armlinux.org.uk>
>>>
>>> Setting and getting frame rates is part of the negotiation mechanism
>>> between subdevs.  The lack of support means that a frame rate at the
>>> sensor can't be negotiated through the subdev path.
>>
>> Just wondering --- what do you need this for?
>
> The v4l2 documentation contradicts the media-ctl implementation.
>
> While v4l2 documentation says:
>
>   These ioctls are used to get and set the frame interval at specific
>   subdev pads in the image pipeline. The frame interval only makes sense
>   for sub-devices that can control the frame period on their own. This
>   includes, for instance, image sensors and TV tuners. Sub-devices that
>   don't support frame intervals must not implement these ioctls.
>
> However, when trying to configure the pipeline using media-ctl, eg:
>
> media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> Unable to setup formats: Inappropriate ioctl for device (25)
> media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
>
> The problem there is that the format setting for the csi2 does not get
> propagated forward:
>
> $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> ...
> open("/dev/v4l-subdev16", O_RDWR)       = 3
> ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbec16244) = 0
> ioctl(3, VIDIOC_SUBDEV_S_FRAME_INTERVAL, 0xbec162a4) = -1 ENOTTY (Inappropriate
> ioctl for device)
> fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0
> write(1, "Unable to setup formats: Inappro"..., 61) = 61
> Unable to setup formats: Inappropriate ioctl for device (25)
> close(3)                                = 0
> exit_group(1)                           = ?
> +++ exited with 1 +++
>
> because media-ctl exits as soon as it encounters the error while trying
> to set the frame rate.
>
> This makes implementing setup of the media pipeline in shell scripts
> unnecessarily difficult - as you need to then know whether an entity
> is likely not to support the VIDIOC_SUBDEV_S_FRAME_INTERVAL call,
> and either avoid specifying a frame rate:
>
> $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616]'
> ...
> open("/dev/v4l-subdev16", O_RDWR)       = 3
> ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> open("/dev/v4l-subdev0", O_RDWR)        = 4
> ioctl(4, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> close(4)                                = 0
> close(3)                                = 0
> exit_group(0)                           = ?
> +++ exited with 0 +++
>
> or manually setting the format on the sink.
>
> Allowing the S_FRAME_INTERVAL call seems to me to be more in keeping
> with the negotiation mechanism that is implemented in subdevs, and
> IMHO should be implemented inside the kernel as a pad operation along
> with the format negotiation, especially so as frame skipping is
> defined as scaling, in just the same way as the frame size is also
> scaling:
>
>        -  ``MEDIA_ENT_F_PROC_VIDEO_SCALER``
>
>        -  Video scaler. An entity capable of video scaling must have
>           at least one sink pad and one source pad, and scale the
>           video frame(s) received on its sink pad(s) to a different
>           resolution output on its source pad(s). The range of
>           supported scaling ratios is entity-specific and can differ
>           between the horizontal and vertical directions (in particular
>           scaling can be supported in one direction only). Binning and
>           skipping are considered as scaling.
>
> Although, this is vague, as it doesn't define what it means by "skipping",
> whether that's skipping pixels (iow, sub-sampling) or whether that's
> frame skipping.
>
> Then there's the issue where, if you have this setup:
>
>  camera --> csi2 receiver --> csi --> capture
>
> and the "csi" subdev can skip frames, you need to know (a) at the CSI
> sink pad what the frame rate is of the source (b) what the desired
> source pad frame rate is, so you can configure the frame skipping.
> So, does the csi subdev have to walk back through the media graph
> looking for the frame rate?  Does the capture device have to walk back
> through the media graph looking for some subdev to tell it what the
> frame rate is - the capture device certainly can't go straight to the
> sensor to get an answer to that question, because that bypasses the
> effect of the CSI frame skipping (which will lower the frame rate.)
>
> IMHO, frame rate is just another format property, just like the
> resolution and data format itself, and v4l2 should be treating it no
> differently.
>

I agree, frame rate, if indicated/specified by both sides of a link,
should match. So maybe this should be part of v4l2 link validation.

This might be a good time to propose the following patch.

Steve


[-- Attachment #2: 0015-media-v4l-subdev-Add-function-to-validate-frame-inte.patch --]
[-- Type: text/x-patch, Size: 4620 bytes --]

>From 82fbf487ba9ca0dfd2d624c73a78f3741c974d3e Mon Sep 17 00:00:00 2001
From: Steve Longerbeam <steve_longerbeam@mentor.com>
Date: Mon, 20 Feb 2017 15:13:15 -0800
Subject: [PATCH 15/37] [media] v4l: subdev: Add function to validate frame
 interval

If the pads on both sides of a link specify a frame interval, then
those frame intervals should match. Create the exported function
v4l2_subdev_link_validate_frame_interval() for this purpose. This
function is also added to v4l2_subdev_link_validate_default().

Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
---
 Documentation/media/kapi/v4l2-subdev.rst |  5 ++--
 drivers/media/v4l2-core/v4l2-subdev.c    | 50 +++++++++++++++++++++++++++++++-
 include/media/v4l2-subdev.h              | 10 +++++++
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/Documentation/media/kapi/v4l2-subdev.rst b/Documentation/media/kapi/v4l2-subdev.rst
index e1f0b72..5e424f6 100644
--- a/Documentation/media/kapi/v4l2-subdev.rst
+++ b/Documentation/media/kapi/v4l2-subdev.rst
@@ -132,8 +132,9 @@ of the format configuration between sub-devices and video nodes.
 
 If link_validate op is not set, the default function
 :c:func:`v4l2_subdev_link_validate_default` is used instead. This function
-ensures that width, height and the media bus pixel code are equal on both source
-and sink of the link. Subdev drivers are also free to use this function to
+ensures that width, height, the media bus pixel code, and the frame
+interval (if indicated by both sides), are equal on both source and
+sink of the link. Subdev drivers are also free to use this function to
 perform the checks mentioned above in addition to their own checks.
 
 There are currently two ways to register subdevices with the V4L2 core. The
diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
index da78497..23a3e74 100644
--- a/drivers/media/v4l2-core/v4l2-subdev.c
+++ b/drivers/media/v4l2-core/v4l2-subdev.c
@@ -497,6 +497,50 @@ const struct v4l2_file_operations v4l2_subdev_fops = {
 };
 
 #ifdef CONFIG_MEDIA_CONTROLLER
+static int
+v4l2_subdev_link_validate_get_fi(struct media_pad *pad,
+				 struct v4l2_subdev_frame_interval *fi)
+{
+	if (is_media_entity_v4l2_subdev(pad->entity)) {
+		struct v4l2_subdev *sd =
+			media_entity_to_v4l2_subdev(pad->entity);
+
+		fi->pad = pad->index;
+		return v4l2_subdev_call(sd, video, g_frame_interval, fi);
+	}
+
+	WARN(pad->entity->function != MEDIA_ENT_F_IO_V4L,
+	     "Driver bug! Wrong media entity type 0x%08x, entity %s\n",
+	     pad->entity->function, pad->entity->name);
+
+	return -EINVAL;
+}
+
+int v4l2_subdev_link_validate_frame_interval(struct media_link *link)
+{
+	struct v4l2_subdev_frame_interval src_fi, sink_fi;
+	unsigned long src_usec, sink_usec;
+	int rval;
+
+	rval = v4l2_subdev_link_validate_get_fi(link->source, &src_fi);
+	if (rval < 0)
+		return 0;
+
+	rval = v4l2_subdev_link_validate_get_fi(link->sink, &sink_fi);
+	if (rval < 0)
+		return 0;
+
+	src_usec = DIV_ROUND_CLOSEST_ULL(
+		(u64)src_fi.interval.numerator * USEC_PER_SEC,
+		src_fi.interval.denominator);
+	sink_usec = DIV_ROUND_CLOSEST_ULL(
+		(u64)sink_fi.interval.numerator * USEC_PER_SEC,
+		sink_fi.interval.denominator);
+
+	return (src_usec != sink_usec) ? -EPIPE : 0;
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_link_validate_frame_interval);
+
 int v4l2_subdev_link_validate_default(struct v4l2_subdev *sd,
 				      struct media_link *link,
 				      struct v4l2_subdev_format *source_fmt,
@@ -516,7 +560,11 @@ int v4l2_subdev_link_validate_default(struct v4l2_subdev *sd,
 	    sink_fmt->format.field != V4L2_FIELD_NONE)
 		return -EPIPE;
 
-	return 0;
+	/*
+	 * The frame interval must match if specified on both ends
+	 * of the link.
+	 */
+	return v4l2_subdev_link_validate_frame_interval(link);
 }
 EXPORT_SYMBOL_GPL(v4l2_subdev_link_validate_default);
 
diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 0ab1c5d..60c941d 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -929,6 +929,16 @@ int v4l2_subdev_link_validate_default(struct v4l2_subdev *sd,
 				      struct v4l2_subdev_format *sink_fmt);
 
 /**
+ * v4l2_subdev_link_validate_frame_interval - validates a media link
+ *
+ * @link: pointer to &struct media_link
+ *
+ * This function ensures that the frame intervals, if specified by
+ * both the source and sink subdevs of the link, are equal.
+ */
+int v4l2_subdev_link_validate_frame_interval(struct media_link *link);
+
+/**
  * v4l2_subdev_link_validate - validates a media link
  *
  * @link: pointer to &struct media_link
-- 
2.7.4
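
(Not part of the patch above, just an illustration: drivers that stick
with v4l2_subdev_link_validate_default() would get the interval check
for free, and a driver with its own link_validate could call the
exported helper directly, roughly like this; the subdev names are made
up.)

static int my_sd_link_validate(struct v4l2_subdev *sd,
			       struct media_link *link,
			       struct v4l2_subdev_format *source_fmt,
			       struct v4l2_subdev_format *sink_fmt)
{
	/* driver-specific width/height/code checks would go here */

	/* then reuse the exported check for matching frame intervals */
	return v4l2_subdev_link_validate_frame_interval(link);
}

static const struct v4l2_subdev_pad_ops my_sd_pad_ops = {
	.link_validate = my_sd_link_validate,
};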


^ permalink raw reply related	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21  0:18       ` Steve Longerbeam
@ 2017-02-21  8:50         ` Philipp Zabel
  2017-03-13 13:16           ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-21  8:50 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Russell King - ARM Linux, Sakari Ailus, robh+dt, mark.rutland,
	shawnguo, kernel, fabio.estevam, mchehab, hverkuil, nick,
	markus.heiser, laurent.pinchart+renesas, bparrot, geert, arnd,
	sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

On Mon, 2017-02-20 at 16:18 -0800, Steve Longerbeam wrote:
> 
> On 02/20/2017 04:13 PM, Russell King - ARM Linux wrote:
> > On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> >> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> >>> From: Russell King <rmk+kernel@armlinux.org.uk>
> >>>
> >>> Setting and getting frame rates is part of the negotiation mechanism
> >>> between subdevs.  The lack of support means that a frame rate at the
> >>> sensor can't be negotiated through the subdev path.
> >>
> >> Just wondering --- what do you need this for?
> >
> > The v4l2 documentation contradicts the media-ctl implementation.
> >
> > While v4l2 documentation says:
> >
> >   These ioctls are used to get and set the frame interval at specific
> >   subdev pads in the image pipeline. The frame interval only makes sense
> >   for sub-devices that can control the frame period on their own. This
> >   includes, for instance, image sensors and TV tuners. Sub-devices that
> >   don't support frame intervals must not implement these ioctls.
> >
> > However, when trying to configure the pipeline using media-ctl, eg:
> >
> > media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> > media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > Unable to setup formats: Inappropriate ioctl for device (25)
> > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
> >
> > The problem there is that the format setting for the csi2 does not get
> > propagated forward:
> >
> > $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > ...
> > open("/dev/v4l-subdev16", O_RDWR)       = 3
> > ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbec16244) = 0
> > ioctl(3, VIDIOC_SUBDEV_S_FRAME_INTERVAL, 0xbec162a4) = -1 ENOTTY (Inappropriate
> > ioctl for device)
> > fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0
> > write(1, "Unable to setup formats: Inappro"..., 61) = 61
> > Unable to setup formats: Inappropriate ioctl for device (25)
> > close(3)                                = 0
> > exit_group(1)                           = ?
> > +++ exited with 1 +++
> >
> > because media-ctl exits as soon as it encounters the error while trying
> > to set the frame rate.
> >
> > This makes implementing setup of the media pipeline in shell scripts
> > unnecessarily difficult - as you need to then know whether an entity
> > is likely not to support the VIDIOC_SUBDEV_S_FRAME_INTERVAL call,
> > and either avoid specifying a frame rate:
> >
> > $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616]'
> > ...
> > open("/dev/v4l-subdev16", O_RDWR)       = 3
> > ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> > open("/dev/v4l-subdev0", O_RDWR)        = 4
> > ioctl(4, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> > close(4)                                = 0
> > close(3)                                = 0
> > exit_group(0)                           = ?
> > +++ exited with 0 +++
> >
> > or manually setting the format on the sink.
> >
> > Allowing the S_FRAME_INTERVAL call seems to me to be more in keeping
> > with the negotiation mechanism that is implemented in subdevs, and
> > IMHO should be implemented inside the kernel as a pad operation along
> > with the format negotiation, especially so as frame skipping is
> > defined as scaling, in just the same way as the frame size is also
> > scaling:
> >
> >        -  ``MEDIA_ENT_F_PROC_VIDEO_SCALER``
> >
> >        -  Video scaler. An entity capable of video scaling must have
> >           at least one sink pad and one source pad, and scale the
> >           video frame(s) received on its sink pad(s) to a different
> >           resolution output on its source pad(s). The range of
> >           supported scaling ratios is entity-specific and can differ
> >           between the horizontal and vertical directions (in particular
> >           scaling can be supported in one direction only). Binning and
> >           skipping are considered as scaling.
> >
> > Although, this is vague, as it doesn't define what it means by "skipping",
> > whether that's skipping pixels (iow, sub-sampling) or whether that's
> > frame skipping.

I'd interpret this as meaning pixel skipping, not frame skipping.

> > Then there's the issue where, if you have this setup:
> >
> >  camera --> csi2 receiver --> csi --> capture
> >
> > and the "csi" subdev can skip frames, you need to know (a) at the CSI
> > sink pad what the frame rate is of the source (b) what the desired
> > source pad frame rate is, so you can configure the frame skipping.
> > So, does the csi subdev have to walk back through the media graph
> > looking for the frame rate?  Does the capture device have to walk back
> > through the media graph looking for some subdev to tell it what the
> > frame rate is - the capture device certainly can't go straight to the
> > sensor to get an answer to that question, because that bypasses the
> > effect of the CSI frame skipping (which will lower the frame rate.)
> >
> > IMHO, frame rate is just another format property, just like the
> > resolution and data format itself, and v4l2 should be treating it no
> > differently.
> >
> 
> I agree, frame rate, if indicated/specified by both sides of a link,
> should match. So maybe this should be part of v4l2 link validation.
> 
> This might be a good time to propose the following patch.

I agree with Steve and Russell. I don't see why the (nominal) frame
interval should be handled differently than resolution, data format, and
colorspace information. I think it should just be propagated in the same
way, and there is no reason to have two connected pads set to a
different interval. That would make implementing the g/s_frame_interval
subdev calls mandatory.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 15/36] platform: add video-multiplexer subdevice driver
  2017-02-19 22:02   ` Pavel Machek
@ 2017-02-21  9:11     ` Philipp Zabel
  2017-02-24 20:09       ` Pavel Machek
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-21  9:11 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, linux, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Sascha Hauer, Steve Longerbeam

On Sun, 2017-02-19 at 23:02 +0100, Pavel Machek wrote:
> Hi!
> 
> > From: Philipp Zabel <p.zabel@pengutronix.de>
> > 
> > This driver can handle SoC internal and external video bus multiplexers,
> > controlled either by register bit fields or by a GPIO. The subdevice
> > passes through frame interval and mbus configuration of the active input
> > to the output side.
> > 
> > Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> > Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> > --
> >
> 
> Again, this is slightly non-standard format. Normally changes from v1
> go below ---, but in your case it would cut off the signoff...
> 
> > diff --git a/Documentation/devicetree/bindings/media/video-multiplexer.txt b/Documentation/devicetree/bindings/media/video-multiplexer.txt
> > new file mode 100644
> > index 0000000..9d133d9
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/media/video-multiplexer.txt
> > @@ -0,0 +1,59 @@
> > +Video Multiplexer
> > +=================
> > +
> > +Video multiplexers allow to select between multiple input ports. Video received
> > +on the active input port is passed through to the output port. Muxes described
> > +by this binding may be controlled by a syscon register bitfield or by a GPIO.
> > +
> > +Required properties:
> > +- compatible : should be "video-multiplexer"
> > +- reg: should be register base of the register containing the control bitfield
> > +- bit-mask: bitmask of the control bitfield in the control register
> > +- bit-shift: bit offset of the control bitfield in the control register
> > +- gpios: alternatively to reg, bit-mask, and bit-shift, a single GPIO phandle
> > +  may be given to switch between two inputs
> > +- #address-cells: should be <1>
> > +- #size-cells: should be <0>
> > +- port@*: at least three port nodes containing endpoints connecting to the
> > +  source and sink devices according to of_graph bindings. The last port is
> > +  the output port, all others are inputs.
> 
> At least three? I guess it is exactly three with the gpio?

Yes. With the mmio bitfield muxes there can be more.

> Plus you might want to describe which port corresponds to which gpio
> state/bitfield value...
> 
> > +struct vidsw {
> 
> I knew it: it is secretly a switch! :-).

This driver started as a two-input gpio controlled bus switch.
I changed the name when adding support for bitfield controlled
multiplexers with more than two inputs.

> > +static void vidsw_set_active(struct vidsw *vidsw, int active)
> > +{
> > +	vidsw->active = active;
> > +	if (active < 0)
> > +		return;
> > +
> > +	dev_dbg(vidsw->subdev.dev, "setting %d active\n", active);
> > +
> > +	if (vidsw->field)
> > +		regmap_field_write(vidsw->field, active);
> > +	else if (vidsw->gpio)
> > +		gpiod_set_value(vidsw->gpio, active);
> 
>          else dev_err()...?

If neither field nor gpio are set, probe will have failed and this will
never be called.
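
I.e. probe contains a guard along these lines (sketch of the idea only,
the exact code may differ):

	/* sketch: bail out of probe unless one control method was found,
	 * so vidsw_set_active() can rely on field || gpio being set
	 */
	if (!vidsw->field && !vidsw->gpio) {
		dev_err(dev, "no control register bitfield or GPIO found\n");
		return -EINVAL;
	}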

> > +static int vidsw_async_init(struct vidsw *vidsw, struct device_node *node)
> > +{
> > +	struct device_node *ep;
> > +	u32 portno;
> > +	int numports;
> 
> numports is int, so I guess portno should be, too?

We could change both to unsigned int, as both vidsw->num_pads and
endpoint.base.port are unsigned int, and they are only compared/assigned
to those and each other.

> > +		portno = endpoint.base.port;
> > +		if (portno >= numports - 1)
> > +			continue;
> 
> > +	if (!pad) {
> > +		/* Mirror the input side on the output side */
> > +		cfg->type = vidsw->endpoint[vidsw->active].bus_type;
> > +		if (cfg->type == V4L2_MBUS_PARALLEL ||
> > +		    cfg->type == V4L2_MBUS_BT656)
> > +			cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
> > +	}
> 
> Will this need support for other V4L2_MBUS_ values?

To support CSI-2 multiplexers, yes.
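
Extending the mirroring would look roughly like this (sketch only, not
part of this patch):

	/* Mirror the input side on the output side */
	cfg->type = vidsw->endpoint[vidsw->active].bus_type;
	if (cfg->type == V4L2_MBUS_PARALLEL ||
	    cfg->type == V4L2_MBUS_BT656)
		cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
	else if (cfg->type == V4L2_MBUS_CSI2)
		/* sketch: pass the CSI-2 bus flags through as well */
		cfg->flags = vidsw->endpoint[vidsw->active].bus.mipi_csi2.flags;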

> > +MODULE_AUTHOR("Sascha Hauer, Pengutronix");
> > +MODULE_AUTHOR("Philipp Zabel, Pengutronix");
> 
> Normally, MODULE_AUTHOR contains comma separated names of authors,
> perhaps with <email@addresses>. Not sure two MODULE_AUTHORs per file
> will work.
> 
> Thanks,
> 								Pavel

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-20 22:56     ` Steve Longerbeam
  2017-02-20 23:47       ` Steve Longerbeam
@ 2017-02-21 12:15       ` Sakari Ailus
  2017-02-21 22:21         ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-02-21 12:15 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam

Hi Steve,

On Mon, Feb 20, 2017 at 02:56:15PM -0800, Steve Longerbeam wrote:
> 
> 
> On 02/20/2017 02:04 PM, Sakari Ailus wrote:
> >Hi Steve,
> >
> >On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> >>From: Russell King <rmk+kernel@armlinux.org.uk>
> >>
> >>Setting and getting frame rates is part of the negotiation mechanism
> >>between subdevs.  The lack of support means that a frame rate at the
> >>sensor can't be negotiated through the subdev path.
> >
> >Just wondering --- what do you need this for?
> 
> 
> Hi Sakari,
> 
> i.MX does need the ability to negotiate the frame rates in the
> pipelines. The CSI has the ability to skip frames at the output,
> which is something Philipp added to the CSI subdev. That affects
> frame interval at the CSI output.
> 
> But as Russell pointed out, the lack of [gs]_frame_interval op
> causes media-ctl to fail:
> 
> media-ctl -v -d /dev/media1 --set-v4l2
> '"imx6-mipi-csi2":1[fmt:SGBRG8/512x512@1/30]'
> 
> Opening media device /dev/media1
> Enumerating entities
> Found 29 entities
> Enumerating pads and links
> Setting up format SGBRG8 512x512 on pad imx6-mipi-csi2/1
> Format set: SGBRG8 512x512
> Setting up frame interval 1/30 on entity imx6-mipi-csi2
> Unable to set frame interval: Inappropriate ioctl for device (-25)Unable to
> setup formats: Inappropriate ioctl for device (25)
> 
> 
> So i.MX needs to implement this op in every subdev in the
> pipeline, otherwise it's not possible to configure the
> pipeline with media-ctl.

The frame rate is only set on the sub-device on which you explicitly set it.
I.e. setting the frame rate fails if it's not supported on a pad.

Philipp recently posted patches that add frame rate propagation to
media-ctl.

Frame rate is typically settable (and gettable) only on a sensor sub-device's
source pad, which means it normally would not be propagated by the kernel;
with Philipp's patches it also gets set on the sink pad of the bus receiver.
Receivers don't have a way to control it, nor do they implement the IOCTLs,
so that would indeed result in an error.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21  0:13     ` Russell King - ARM Linux
  2017-02-21  0:18       ` Steve Longerbeam
@ 2017-02-21 12:37       ` Sakari Ailus
  2017-02-21 13:21         ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-02-21 12:37 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Russell,

On Tue, Feb 21, 2017 at 12:13:32AM +0000, Russell King - ARM Linux wrote:
> On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> > On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> > > From: Russell King <rmk+kernel@armlinux.org.uk>
> > > 
> > > Setting and getting frame rates is part of the negotiation mechanism
> > > between subdevs.  The lack of support means that a frame rate at the
> > > sensor can't be negotiated through the subdev path.
> > 
> > Just wondering --- what do you need this for?
> 
> The v4l2 documentation contradicts the media-ctl implementation.
> 
> While v4l2 documentation says:
> 
>   These ioctls are used to get and set the frame interval at specific
>   subdev pads in the image pipeline. The frame interval only makes sense
>   for sub-devices that can control the frame period on their own. This
>   includes, for instance, image sensors and TV tuners. Sub-devices that
>   don't support frame intervals must not implement these ioctls.
> 
> However, when trying to configure the pipeline using media-ctl, eg:
> 
> media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> Unable to setup formats: Inappropriate ioctl for device (25)
> media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
> 
> The problem there is that the format setting for the csi2 does not get
> propagated forward:

The CSI-2 receivers typically do not implement frame interval IOCTLs as they
do not control the frame interval. Some sensors or TV tuners typically do,
so they implement these IOCTLs.

There are alternative ways to specify the frame rate.

> 
> $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> ...
> open("/dev/v4l-subdev16", O_RDWR)       = 3
> ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbec16244) = 0
> ioctl(3, VIDIOC_SUBDEV_S_FRAME_INTERVAL, 0xbec162a4) = -1 ENOTTY (Inappropriate
> ioctl for device)
> fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0
> write(1, "Unable to setup formats: Inappro"..., 61) = 61
> Unable to setup formats: Inappropriate ioctl for device (25)
> close(3)                                = 0
> exit_group(1)                           = ?
> +++ exited with 1 +++
> 
> because media-ctl exits as soon as it encounters the error while trying
> to set the frame rate.
> 
> This makes implementing setup of the media pipeline in shell scripts
> unnecessarily difficult - as you need to then know whether an entity
> is likely not to support the VIDIOC_SUBDEV_S_FRAME_INTERVAL call,
> and either avoid specifying a frame rate:

You should remove the frame interval setting from sub-devices that do not
support it.

> 
> $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616]'
> ...
> open("/dev/v4l-subdev16", O_RDWR)       = 3
> ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> open("/dev/v4l-subdev0", O_RDWR)        = 4
> ioctl(4, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> close(4)                                = 0
> close(3)                                = 0
> exit_group(0)                           = ?
> +++ exited with 0 +++
> 
> or manually setting the format on the sink.
> 
> Allowing the S_FRAME_INTERVAL call seems to me to be more in keeping
> with the negotiation mechanism that is implemented in subdevs, and
> IMHO should be implemented inside the kernel as a pad operation along
> with the format negotiation, especially so as frame skipping is
> defined as scaling, in just the same way as the frame size is also
> scaling:

The origins of the S_FRAME_INTERVAL IOCTL for sub-devices are the S_PARM
IOCTL for video nodes. It is used to control the frame rate for more simple
devices that do not expose the Media controller interface. The similar
S_FRAME_INTERVAL was added for sub-devices as well, and it has been so far
used to control the frame interval for sensors (and G_FRAME_INTERVAL to
obtain the frame interval for TV tuners, for instance).

The pad argument was added there but media-ctl only supported setting the
frame interval on pad 0, which, coincidentally, worked well for sensor
devices.

The link validation is primarily done in order to ensure the validity of the
hardware configuration: streaming may not be started if the hardware
configuration is not valid.

Also, frame interval is not a static property during streaming: it may be
changed without the knowledge of the other sub-device drivers downstream. It
neither is a property of hardware receiving or processing images: if there
are limitations in processing pixels, then they in practice are related to
pixel rates or image sizes (i.e. not frame rates).
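
As a rough illustration with the numbers from your example: the 3264x2464
mode at 30 fps is about 3264 * 2464 * 30 = ~241 Mpixels/s before blanking,
and it is that figure together with the frame size, not the 30 fps as such,
that constrains a receiver or processing block.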

> 
>        -  ``MEDIA_ENT_F_PROC_VIDEO_SCALER``
> 
>        -  Video scaler. An entity capable of video scaling must have
>           at least one sink pad and one source pad, and scale the
>           video frame(s) received on its sink pad(s) to a different
>           resolution output on its source pad(s). The range of
>           supported scaling ratios is entity-specific and can differ
>           between the horizontal and vertical directions (in particular
>           scaling can be supported in one direction only). Binning and
>           skipping are considered as scaling.
> 
> Although, this is vague, as it doesn't define what it means by "skipping",
> whether that's skipping pixels (iow, sub-sampling) or whether that's
> frame skipping.

Skipping in this context refers to sub-sampling. The term is often
used in conjunction with sensors. The documentation could certainly be
clarified here.

> 
> Then there's the issue where, if you have this setup:
> 
>  camera --> csi2 receiver --> csi --> capture
> 
> and the "csi" subdev can skip frames, you need to know (a) at the CSI
> sink pad what the frame rate is of the source (b) what the desired
> source pad frame rate is, so you can configure the frame skipping.
> So, does the csi subdev have to walk back through the media graph
> looking for the frame rate?  Does the capture device have to walk back
> through the media graph looking for some subdev to tell it what the
> frame rate is - the capture device certainly can't go straight to the
> sensor to get an answer to that question, because that bypasses the
> effect of the CSI frame skipping (which will lower the frame rate.)
> 
> IMHO, frame rate is just another format property, just like the
> resolution and data format itself, and v4l2 should be treating it no
> differently.
> 
> In any case, the documentation vs media-ctl create something of a very
> obscure situation, one that probably needs solving one way or another.

Before going to solutions I need to ask: what do you want to achieve?

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21 12:37       ` Sakari Ailus
@ 2017-02-21 13:21         ` Russell King - ARM Linux
  2017-02-21 15:38           ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-21 13:21 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Tue, Feb 21, 2017 at 02:37:57PM +0200, Sakari Ailus wrote:
> Hi Russell,
> 
> On Tue, Feb 21, 2017 at 12:13:32AM +0000, Russell King - ARM Linux wrote:
> > On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> > > On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> > > > From: Russell King <rmk+kernel@armlinux.org.uk>
> > > > 
> > > > Setting and getting frame rates is part of the negotiation mechanism
> > > > between subdevs.  The lack of support means that a frame rate at the
> > > > sensor can't be negotiated through the subdev path.
> > > 
> > > Just wondering --- what do you need this for?
> > 
> > The v4l2 documentation contradicts the media-ctl implementation.
> > 
> > While v4l2 documentation says:
> > 
> >   These ioctls are used to get and set the frame interval at specific
> >   subdev pads in the image pipeline. The frame interval only makes sense
> >   for sub-devices that can control the frame period on their own. This
> >   includes, for instance, image sensors and TV tuners. Sub-devices that
> >   don't support frame intervals must not implement these ioctls.
> > 
> > However, when trying to configure the pipeline using media-ctl, eg:
> > 
> > media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> > media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > Unable to setup formats: Inappropriate ioctl for device (25)
> > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
> > 
> > The problem there is that the format setting for the csi2 does not get
> > propagated forward:
> 
> The CSI-2 receivers typically do not implement frame interval IOCTLs as they
> do not control the frame interval. Some sensors or TV tuners typically do,
> so they implement these IOCTLs.

No, TV tuners do not.  The frame rate for a TV tuner is set by the
broadcaster, not by the tuner.  The tuner can't change that frame rate.
The tuner may opt to "skip" fields or frames.  That's no different from
what the CSI block in my example below is capable of doing.

Treating a tuner differently from the CSI block is inconsistent and
completely wrong.

> There are alternative ways to specify the frame rate.

Empty statements (or hand-waving type statements) I'm afraid don't
contribute to the discussion, because they mean nothing to me.  Please
give an example, or flesh out what you mean.

> > $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > ...
> > open("/dev/v4l-subdev16", O_RDWR)       = 3
> > ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbec16244) = 0
> > ioctl(3, VIDIOC_SUBDEV_S_FRAME_INTERVAL, 0xbec162a4) = -1 ENOTTY (Inappropriate
> > ioctl for device)
> > fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0
> > write(1, "Unable to setup formats: Inappro"..., 61) = 61
> > Unable to setup formats: Inappropriate ioctl for device (25)
> > close(3)                                = 0
> > exit_group(1)                           = ?
> > +++ exited with 1 +++
> > 
> > because media-ctl exits as soon as it encouters the error while trying
> > to set the frame rate.
> > 
> > This makes implementing setup of the media pipeline in shell scripts
> > unnecessarily difficult - as you need to then know whether an entity
> > is likely not to support the VIDIOC_SUBDEV_S_FRAME_INTERVAL call,
> > and either avoid specifying a frame rate:
> 
> You should remove the frame interval setting from sub-devices that do not
> support it.

That means we end up with horribly complex scripts.  This "solution" does
not scale.  Therefore, it is not a "solution".

It's fine if you want to write a script to setup the media pipeline using
media-ctl, listing _each_ media-ctl command individually, with arguments
specific to each step, but as I've already said, that does not scale.

I don't want to end up writing separate scripts to configure the pipeline
for different parameters or setups.  I don't want to teach users how to
do that either.

How are users supposed to cope with this craziness?  Are they expected to
write their own scripts and understand this stuff?

As far as I can see, there are no applications out there at the moment that
come close to understanding how to configure a media pipeline, so users
have to understand how to use media-ctl to configure the pipeline manually.
Are we really expecting users to write scripts to do this, and understand
all these nuances?

IMHO, this is completely crazy, and hasn't been fully thought out.

> > $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616]'
> > ...
> > open("/dev/v4l-subdev16", O_RDWR)       = 3
> > ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> > open("/dev/v4l-subdev0", O_RDWR)        = 4
> > ioctl(4, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> > close(4)                                = 0
> > close(3)                                = 0
> > exit_group(0)                           = ?
> > +++ exited with 0 +++
> > 
> > or manually setting the format on the sink.
> > 
> > Allowing the S_FRAME_INTERVAL call seems to me to be more in keeping
> > with the negotiation mechanism that is implemented in subdevs, and
> > IMHO should be implemented inside the kernel as a pad operation along
> > with the format negotiation, especially so as frame skipping is
> > defined as scaling, in just the same way as the frame size is also
> > scaling:
> 
> The origins of the S_FRAME_INTERVAL IOCTL for sub-devices are the S_PARM
> IOCTL for video nodes. It is used to control the frame rate for more simple
> devices that do not expose the Media controller interface. The similar
> S_FRAME_INTERVAL was added for sub-devices as well, and it has been so far
> used to control the frame interval for sensors (and G_FRAME_INTERVAL to
> obtain the frame interval for TV tuners, for instance).
> 
> The pad argument was added there but media-ctl only supported setting the
> frame interval on pad 0, which, coincidentally, worked well for sensor
> devices.
> 
> The link validation is primarily done in order to ensure the validity of the
> hardware configuration: streaming may not be started if the hardware
> configuration is not valid.
> 
> Also, frame interval is not a static property during streaming: it may be
> changed without the knowledge of the other sub-device drivers downstream. Nor
> is it a property of hardware receiving or processing images: if there
> are limitations in processing pixels, then they in practice are related to
> pixel rates or image sizes (i.e. not frame rates).

So what about the case where we have a subdev (CSI) that is capable of
frame rate reduction and needs to know the input frame rate and the
desired output frame rate?  It seems to me that this has not been
thought through...

> >        -  ``MEDIA_ENT_F_PROC_VIDEO_SCALER``
> > 
> >        -  Video scaler. An entity capable of video scaling must have
> >           at least one sink pad and one source pad, and scale the
> >           video frame(s) received on its sink pad(s) to a different
> >           resolution output on its source pad(s). The range of
> >           supported scaling ratios is entity-specific and can differ
> >           between the horizontal and vertical directions (in particular
> >           scaling can be supported in one direction only). Binning and
> >           skipping are considered as scaling.
> > 
> > Although, this is vague, as it doesn't define what it means by "skipping",
> > whether that's skipping pixels (iow, sub-sampling) or whether that's
> > frame skipping.
> 
> Skipping in this context refers to sub-sampling. The term is often
> used in conjunction with sensors. The documentation could certainly be
> clarified here.

It definitely needs to be; it's currently misleading.

> > Then there's the issue where, if you have this setup:
> > 
> >  camera --> csi2 receiver --> csi --> capture
> > 
> > and the "csi" subdev can skip frames, you need to know (a) at the CSI
> > sink pad what the frame rate is of the source (b) what the desired
> > source pad frame rate is, so you can configure the frame skipping.
> > So, does the csi subdev have to walk back through the media graph
> > looking for the frame rate?  Does the capture device have to walk back
> > through the media graph looking for some subdev to tell it what the
> > frame rate is - the capture device certainly can't go straight to the
> > sensor to get an answer to that question, because that bypasses the
> > effect of the CSI frame skipping (which will lower the frame rate.)
> > 
> > IMHO, frame rate is just another format property, just like the
> > resolution and data format itself, and v4l2 should be treating it no
> > differently.
> > 
> > In any case, the documentation vs media-ctl create something of a very
> > obscure situation, one that probably needs solving one way or another.
> 
> Before going to solutions I need to ask: what do you want to achieve?

Full and consistent support for the hardware, and a sane and consistent
way to setup a media pipeline that is easy for everyone to understand.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21 13:21         ` Russell King - ARM Linux
@ 2017-02-21 15:38           ` Sakari Ailus
  2017-02-21 16:03             ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-02-21 15:38 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Russell,

On Tue, Feb 21, 2017 at 01:21:32PM +0000, Russell King - ARM Linux wrote:
> On Tue, Feb 21, 2017 at 02:37:57PM +0200, Sakari Ailus wrote:
> > Hi Russell,
> > 
> > On Tue, Feb 21, 2017 at 12:13:32AM +0000, Russell King - ARM Linux wrote:
> > > On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> > > > On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> > > > > From: Russell King <rmk+kernel@armlinux.org.uk>
> > > > > 
> > > > > Setting and getting frame rates is part of the negotiation mechanism
> > > > > between subdevs.  The lack of support means that a frame rate at the
> > > > > sensor can't be negotiated through the subdev path.
> > > > 
> > > > Just wondering --- what do you need this for?
> > > 
> > > The v4l2 documentation contradicts the media-ctl implementation.
> > > 
> > > While v4l2 documentation says:
> > > 
> > >   These ioctls are used to get and set the frame interval at specific
> > >   subdev pads in the image pipeline. The frame interval only makes sense
> > >   for sub-devices that can control the frame period on their own. This
> > >   includes, for instance, image sensors and TV tuners. Sub-devices that
> > >   don't support frame intervals must not implement these ioctls.
> > > 
> > > However, when trying to configure the pipeline using media-ctl, eg:
> > > 
> > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> > > media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > > Unable to setup formats: Inappropriate ioctl for device (25)
> > > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> > > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
> > > 
> > > The problem there is that the format setting for the csi2 does not get
> > > propagated forward:
> > 
> > The CSI-2 receivers typically do not implement frame interval IOCTLs as they
> > do not control the frame interval. Some sensors or TV tuners typically do,
> > so they implement these IOCTLs.
> 
> No, TV tuners do not.  The frame rate for a TV tuner is set by the
> broadcaster, not by the tuner.  The tuner can't change that frame rate.
> The tuner may opt to "skip" fields or frames.  That's no different from
> what the CSI block in my example below is capable of doing.
> 
> Treating a tuner differently from the CSI block is inconsistent and
> completely wrong.

I agree tuners in that sense are somewhat similar, and they are not treated
differently because they are tuners (and not CSI-2 receivers). Neither can
control the frame rate of the incoming video stream.

Conceivably a tuner could implement G_FRAME_INTERVAL IOCTL, but based on a
quick glance none appears to. Neither do CSI-2 receivers. Only sensor
drivers do currently.

> 
> > There are alternative ways to specify the frame rate.
> 
> Empty statements (or hand-waving type statements) I'm afraid don't
> contribute to the discussion, because they mean nothing to me.  Please
> give an example, or flesh out what you mean.

Images are transmitted as a series of lines, with each line ending in a
horizontal blanking period, and each frame ending with a similar period of
vertical blanking. The blanking configuration, in units of pixels and lines
at the sensor's pixel clock, is the native unit sensors typically use, and
some drivers expose the blanking controls directly to the user.

<URL:http://hverkuil.home.xs4all.nl/spec/uapi/v4l/extended-controls.html#image-source-control-ids>
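
As a rough illustration (a sketch only, not code from any in-tree driver),
the frame interval follows directly from the total line length (active width
plus horizontal blanking), the total frame length (active height plus
vertical blanking) and the pixel rate, i.e. exactly the quantities those
controls describe:

#include <linux/types.h>
#include <linux/videodev2.h>

/*
 * Sketch only: derive the frame interval from blanking and pixel rate.
 * E.g. 2500 pixels/line * 1333 lines at 100 MHz -> 3332500/100000000 s,
 * which is roughly 1/30 s.
 */
static void frame_interval_from_blanking(u32 total_line_length,
					 u32 total_frame_length,
					 u32 pixel_rate,
					 struct v4l2_fract *fi)
{
	/* one frame takes total_line_length * total_frame_length pixels */
	fi->numerator = total_line_length * total_frame_length;
	fi->denominator = pixel_rate;
}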

> 
> > > $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > > ...
> > > open("/dev/v4l-subdev16", O_RDWR)       = 3
> > > ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbec16244) = 0
> > > ioctl(3, VIDIOC_SUBDEV_S_FRAME_INTERVAL, 0xbec162a4) = -1 ENOTTY (Inappropriate
> > > ioctl for device)
> > > fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0
> > > write(1, "Unable to setup formats: Inappro"..., 61) = 61
> > > Unable to setup formats: Inappropriate ioctl for device (25)
> > > close(3)                                = 0
> > > exit_group(1)                           = ?
> > > +++ exited with 1 +++
> > > 
> > > because media-ctl exits as soon as it encounters the error while trying
> > > to set the frame rate.
> > > 
> > > This makes implementing setup of the media pipeline in shell scripts
> > > unnecessarily difficult - as you need to then know whether an entity
> > > is likely not to support the VIDIOC_SUBDEV_S_FRAME_INTERVAL call,
> > > and either avoid specifying a frame rate:
> > 
> > You should remove the frame interval setting from sub-devices that do not
> > support it.
> 
> That means we end up with horribly complex scripts.  This "solution" does
> not scale.  Therefore, it is not a "solution".

I have to disagree with that: if a piece of hardware does not offer such
control, or if a concept is not even relevant for a piece of hardware, then
a driver for that piece of hardware should not expose an interface to
control such a feature. Doing so would provide no value and at the same time
would simply confuse user space.

> 
> It's fine if you want to write a script to setup the media pipeline using
> media-ctl, listing _each_ media-ctl command individually, with arguments
> specific to each step, but as I've already said, that does not scale.

Pipeline configuration as such is highly hardware-specific. There are rules,
but there are hardware details that have to be taken into account, such as
mandated cropping in certain situations. You simply have to accept that when
it comes to camera Image Signal Processors, there are no standard pipelines.
Each ISP differs from the rest, more or less, and often more rather than
less.

As the interface is generic, you can write generic programs that use that
interface, but you need to be able to adapt to the differences in the
functionality of the hardware.

Frankly, I think this is just needless noise stemming from a problem that's
not really difficult to solve --- if that technical problem really even
exists. But let's not debate that; I accept that dropping frames is what
you're willing to do.

> 
> I don't want to end up writing separate scripts to configure the pipeline
> for different parameters or setups.  I don't want to teach users how to
> do that either.
> 
> How are users supposed to cope with this craziness?  Are they expected to
> write their own scripts and understand this stuff?
> 
> As far as I can see, there are no applications out there at the moment that
> come close to understanding how to configure a media pipeline, so users
> have to understand how to use media-ctl to configure the pipeline manually.
> Are we really expecting users to write scripts to do this, and understand
> all these nuances?
> 
> IMHO, this is completely crazy, and hasn't been fully thought out.
> 
> > > $ strace media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616]'
> > > ...
> > > open("/dev/v4l-subdev16", O_RDWR)       = 3
> > > ioctl(3, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> > > open("/dev/v4l-subdev0", O_RDWR)        = 4
> > > ioctl(4, VIDIOC_SUBDEV_S_FMT, 0xbeb1a254) = 0
> > > close(4)                                = 0
> > > close(3)                                = 0
> > > exit_group(0)                           = ?
> > > +++ exited with 0 +++
> > > 
> > > or manually setting the format on the sink.
> > > 
> > > Allowing the S_FRAME_INTERVAL call seems to me to be more in keeping
> > > with the negotiation mechanism that is implemented in subdevs, and
> > > IMHO should be implemented inside the kernel as a pad operation along
> > > with the format negotiation, especially so as frame skipping is
> > > defined as scaling, in just the same way as the frame size is also
> > > scaling:
> > 
> > The origin of the S_FRAME_INTERVAL IOCTL for sub-devices is the S_PARM
> > IOCTL for video nodes. It is used to control the frame rate for simpler
> > devices that do not expose the Media controller interface. A similar
> > S_FRAME_INTERVAL was added for sub-devices as well, and it has so far been
> > used to control the frame interval for sensors (and G_FRAME_INTERVAL to
> > obtain the frame interval for TV tuners, for instance).
> > 
> > The pad argument was added there but media-ctl only supported setting the
> > frame interval on pad 0, which, coincidentally, worked well for sensor
> > devices.
> > 
> > The link validation is primarily done in order to ensure the validity of the
> > hardware configuration: streaming may not be started if the hardware
> > configuration is not valid.
> > 
> > Also, the frame interval is not a static property during streaming: it may
> > be changed without the knowledge of the other sub-device drivers downstream.
> > Nor is it a property of hardware that receives or processes images: if there
> > are limitations in processing pixels, they in practice relate to pixel rates
> > or image sizes (i.e. not frame rates).
> 
> So what about the case where we have a subdev (CSI) that is capable of
> frame rate reduction, that needs to know the input frame rate and the
> desired output frame rate?  It seems to me that this has not been
> thought through...

That's because I believe you're the first one wanting to willfully throw
away perfectly good frames. :-)

If you want to do that, a simple option could be to just support
[GS]_FRAME_INTERVAL on all pads of a sub-device that can drop frames. But it
should not be included in pipeline validation; it simply does not belong
there for the reasons stated previously.

The user would be responsible for configuring the frame rates correctly.
That information would simply be used to configure the frame dropping
frequency.
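
A sketch of what that could look like (illustration only, not the i.MX
driver's code): the sub-device derives its frame skip ratio from the
intervals configured on its sink and source pads.

#include <linux/kernel.h>
#include <linux/math64.h>
#include <linux/videodev2.h>

/*
 * Sketch only: given the frame interval configured on the sink pad
 * (incoming stream) and on the source pad (desired output), pick the
 * nearest integer frame skip ratio.
 */
static u32 frame_skip_ratio(const struct v4l2_fract *sink,
			    const struct v4l2_fract *src)
{
	/* input rate divided by output rate, as a cross product */
	u64 num = (u64)sink->denominator * src->numerator;
	u64 den = (u64)sink->numerator * src->denominator;

	if (!num || !den)
		return 1;

	/* e.g. 1/60 s in, 1/30 s out -> ratio 2: keep every other frame */
	return max_t(u32, 1, div64_u64(num + den / 2, den));
}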

I'd like to have a comment from Laurent and Hans on this.

> 
> > >        -  ``MEDIA_ENT_F_PROC_VIDEO_SCALER``
> > > 
> > >        -  Video scaler. An entity capable of video scaling must have
> > >           at least one sink pad and one source pad, and scale the
> > >           video frame(s) received on its sink pad(s) to a different
> > >           resolution output on its source pad(s). The range of
> > >           supported scaling ratios is entity-specific and can differ
> > >           between the horizontal and vertical directions (in particular
> > >           scaling can be supported in one direction only). Binning and
> > >           skipping are considered as scaling.
> > > 
> > > Although, this is vague, as it doesn't define what it means by "skipping",
> > > whether that's skipping pixels (iow, sub-sampling) or whether that's
> > > frame skipping.
> > 
> > Skipping in this context refers to sub-sampling. The term is often used in
> > conjunction with sensors. The documentation could certainly be clarified
> > here.
> 
> It definitely needs to be; it's currently misleading.

If you're not familiar with the terminology typically used with many camera
sensors, perhaps so. The documentation should indeed not assume that; I'll
submit a patch to fix it.

> 
> > > Then there's the issue where, if you have this setup:
> > > 
> > >  camera --> csi2 receiver --> csi --> capture
> > > 
> > > and the "csi" subdev can skip frames, you need to know (a) at the CSI
> > > sink pad what the frame rate is of the source (b) what the desired
> > > source pad frame rate is, so you can configure the frame skipping.
> > > So, does the csi subdev have to walk back through the media graph
> > > looking for the frame rate?  Does the capture device have to walk back
> > > through the media graph looking for some subdev to tell it what the
> > > frame rate is - the capture device certainly can't go straight to the
> > > sensor to get an answer to that question, because that bypasses the
> > > effect of the CSI frame skipping (which will lower the frame rate.)
> > > 
> > > IMHO, frame rate is just another format property, just like the
> > > resolution and data format itself, and v4l2 should be treating it no
> > > differently.
> > > 
> > > In any case, the documentation vs media-ctl create something of a very
> > > obscure situation, one that probably needs solving one way or another.
> > 
> > Before going to solutions I need to ask: what do you want to achieve?
> 
> Full and consistent support for the hardware, and a sane and consistent
> way to setup a media pipeline that is easy for everyone to understand.

That's essentially what we do have: the same interface is supported on a
large variety of different hardware devices. However, not all IOCTLs are
supported by all device drivers.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21 15:38           ` Sakari Ailus
@ 2017-02-21 16:03             ` Russell King - ARM Linux
  2017-02-21 16:15               ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-02-21 16:03 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Tue, Feb 21, 2017 at 05:38:34PM +0200, Sakari Ailus wrote:
> Hi Russell,
> 
> On Tue, Feb 21, 2017 at 01:21:32PM +0000, Russell King - ARM Linux wrote:
> > On Tue, Feb 21, 2017 at 02:37:57PM +0200, Sakari Ailus wrote:
> > > Hi Russell,
> > > 
> > > On Tue, Feb 21, 2017 at 12:13:32AM +0000, Russell King - ARM Linux wrote:
> > > > On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> > > > > On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> > > > > > From: Russell King <rmk+kernel@armlinux.org.uk>
> > > > > > 
> > > > > > Setting and getting frame rates is part of the negotiation mechanism
> > > > > > between subdevs.  The lack of support means that a frame rate at the
> > > > > > sensor can't be negotiated through the subdev path.
> > > > > 
> > > > > Just wondering --- what do you need this for?
> > > > 
> > > > The v4l2 documentation contradicts the media-ctl implementation.
> > > > 
> > > > While v4l2 documentation says:
> > > > 
> > > >   These ioctls are used to get and set the frame interval at specific
> > > >   subdev pads in the image pipeline. The frame interval only makes sense
> > > >   for sub-devices that can control the frame period on their own. This
> > > >   includes, for instance, image sensors and TV tuners. Sub-devices that
> > > >   don't support frame intervals must not implement these ioctls.
> > > > 
> > > > However, when trying to configure the pipeline using media-ctl, eg:
> > > > 
> > > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> > > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> > > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> > > > media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > > > Unable to setup formats: Inappropriate ioctl for device (25)
> > > > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> > > > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
> > > > 
> > > > The problem there is that the format setting for the csi2 does not get
> > > > propagated forward:
> > > 
> > > The CSI-2 receivers typically do not implement frame interval IOCTLs as they
> > > do not control the frame interval. Some sensors or TV tuners typically do,
> > > so they implement these IOCTLs.
> > 
> > No, TV tuners do not.  The frame rate for a TV tuner is set by the
> > broadcaster, not by the tuner.  The tuner can't change that frame rate.
> > The tuner may opt to "skip" fields or frames.  That's no different from
> > what the CSI block in my example below is capable of doing.
> > 
> > Treating a tuner differently from the CSI block is inconsistent and
> > completely wrong.
> 
> I agree tuners in that sense are somewhat similar, and they are not treated
> differently because they are tuners (and not CSI-2 receivers). Neither can
> control the frame rate of the incoming video stream.
> 
> Conceivably a tuner could implement G_FRAME_INTERVAL IOCTL, but based on a
> quick glance none appears to. Neither do CSI-2 receivers. Only sensor
> drivers do currently.

Please look again.  I am being very careful with "CSI" vs "CSI-2" in my
emails, you are conflating the two.

In all my emails so far, "CSI" refers to a block of hardware that is
responsible for receiving an image stream from some kind of source.  It
contains hardware that supports frame skipping.

"CSI-2" refers to a different block of hardware that is responsible for
receiving a serially encoded stream from a MIPI-CSI-2 compliant source
and providing it to the "CSI" block.

I would have thought my diagram that I drew would have made it clear that
they were different blocks of hardware, but I guess in this case, the old
saying "a picture is worth 1000 words" is simply not true.

> Images are transmitted as a series of lines, with each line ending in a
> horizontal blanking period, and each frame ending with a similar period of

I'm sorry, are you seriously teaching me to suck rocks?  I am insulted.

I've been involved in TV and video for many years, I don't need you to
tell me how video is transmitted.

Sorry, you've just lost my interest in further discussion.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21 16:03             ` Russell King - ARM Linux
@ 2017-02-21 16:15               ` Sakari Ailus
  0 siblings, 0 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-02-21 16:15 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Tue, Feb 21, 2017 at 04:03:32PM +0000, Russell King - ARM Linux wrote:
> On Tue, Feb 21, 2017 at 05:38:34PM +0200, Sakari Ailus wrote:
> > Hi Russell,
> > 
> > On Tue, Feb 21, 2017 at 01:21:32PM +0000, Russell King - ARM Linux wrote:
> > > On Tue, Feb 21, 2017 at 02:37:57PM +0200, Sakari Ailus wrote:
> > > > Hi Russell,
> > > > 
> > > > On Tue, Feb 21, 2017 at 12:13:32AM +0000, Russell King - ARM Linux wrote:
> > > > > On Tue, Feb 21, 2017 at 12:04:10AM +0200, Sakari Ailus wrote:
> > > > > > On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
> > > > > > > From: Russell King <rmk+kernel@armlinux.org.uk>
> > > > > > > 
> > > > > > > Setting and getting frame rates is part of the negotiation mechanism
> > > > > > > between subdevs.  The lack of support means that a frame rate at the
> > > > > > > sensor can't be negotiated through the subdev path.
> > > > > > 
> > > > > > Just wondering --- what do you need this for?
> > > > > 
> > > > > The v4l2 documentation contradicts the media-ctl implementation.
> > > > > 
> > > > > While v4l2 documentation says:
> > > > > 
> > > > >   These ioctls are used to get and set the frame interval at specific
> > > > >   subdev pads in the image pipeline. The frame interval only makes sense
> > > > >   for sub-devices that can control the frame period on their own. This
> > > > >   includes, for instance, image sensors and TV tuners. Sub-devices that
> > > > >   don't support frame intervals must not implement these ioctls.
> > > > > 
> > > > > However, when trying to configure the pipeline using media-ctl, eg:
> > > > > 
> > > > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 pixel 0-0010":0[crop:(0,0)/3264x2464]'
> > > > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":1[fmt:SRGGB10/3264x2464@1/30]'
> > > > > media-ctl -d /dev/media1 --set-v4l2 '"imx219 0-0010":0[fmt:SRGGB8/816x616@1/30]'
> > > > > media-ctl -d /dev/media1 --set-v4l2 '"imx6-mipi-csi2":1[fmt:SRGGB8/816x616@1/30]'
> > > > > Unable to setup formats: Inappropriate ioctl for device (25)
> > > > > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0_mux":2[fmt:SRGGB8/816x616@1/30]'
> > > > > media-ctl -d /dev/media1 --set-v4l2 '"ipu1_csi0":2[fmt:SRGGB8/816x616@1/30]'
> > > > > 
> > > > > The problem there is that the format setting for the csi2 does not get
> > > > > propagated forward:
> > > > 
> > > > The CSI-2 receivers typically do not implement frame interval IOCTLs as they
> > > > do not control the frame interval. Some sensors or TV tuners typically do,
> > > > so they implement these IOCTLs.
> > > 
> > > No, TV tuners do not.  The frame rate for a TV tuner is set by the
> > > broadcaster, not by the tuner.  The tuner can't change that frame rate.
> > > The tuner may opt to "skip" fields or frames.  That's no different from
> > > what the CSI block in my example below is capable of doing.
> > > 
> > > Treating a tuner differently from the CSI block is inconsistent and
> > > completely wrong.
> > 
> > I agree tuners in that sense are somewhat similar, and they are not treated
> > differently because they are tuners (and not CSI-2 receivers). Neither can
> > control the frame rate of the incoming video stream.
> > 
> > Conceivably a tuner could implement G_FRAME_INTERVAL IOCTL, but based on a
> > quick glance none appears to. Neither do CSI-2 receivers. Only sensor
> > drivers do currently.
> 
> Please look again.  I am being very careful with "CSI" vs "CSI-2" in my
> emails, you are conflating the two.
> 
> In all my emails so far, "CSI" refers to a block of hardware that is
> responsible for receiving an image stream from some kind of source.  It
> contains hardware that supports frame skipping.

Ah, I missed the difference. Thanks for pointing it out.

Still, that does not change how the skipping would work, nor how I proposed
it would be configured from user space.

> 
> "CSI-2" refers to a different block of hardware that is responsible for
> receiving a serially encoded stream from a MIPI-CSI-2 compliant source
> and providing it to the "CSI" block.
> 
> I would have thought my diagram that I drew would have made it clear that
> they were different blocks of hardware, but I guess in this case, the old
> saying "a picture is worth 1000 words" is simply not true.
> 
> > Images are transmitted as a series of lines, with each line ending in a
> > horizontal blanking period, and each frame ending with a similar period of
> 
> I'm sorry, are you seriously teaching me to suck rocks?  I am insulted.
> 
> I've been involved in TV and video for many years, I don't need you to
> tell me how video is transmitted.
> 
> Sorry, you've just lost my interest in further discussion.

There's no need to feel insulted; that certainly was not the intention.

I've proposed you a solution, please comment on that.

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21 12:15       ` Sakari Ailus
@ 2017-02-21 22:21         ` Steve Longerbeam
  2017-02-21 23:34           ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-21 22:21 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam



On 02/21/2017 04:15 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Mon, Feb 20, 2017 at 02:56:15PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 02/20/2017 02:04 PM, Sakari Ailus wrote:
>>> Hi Steve,
>>>
>>> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
>>>> From: Russell King <rmk+kernel@armlinux.org.uk>
>>>>
>>>> Setting and getting frame rates is part of the negotiation mechanism
>>>> between subdevs.  The lack of support means that a frame rate at the
>>>> sensor can't be negotiated through the subdev path.
>>>
>>> Just wondering --- what do you need this for?
>>
>>
>> Hi Sakari,
>>
>> i.MX does need the ability to negotiate the frame rates in the
>> pipelines. The CSI has the ability to skip frames at the output,
>> which is something Philipp added to the CSI subdev. That affects
>> frame interval at the CSI output.
>>
>> But as Russell pointed out, the lack of [gs]_frame_interval op
>> causes media-ctl to fail:
>>
>> media-ctl -v -d /dev/media1 --set-v4l2
>> '"imx6-mipi-csi2":1[fmt:SGBRG8/512x512@1/30]'
>>
>> Opening media device /dev/media1
>> Enumerating entities
>> Found 29 entities
>> Enumerating pads and links
>> Setting up format SGBRG8 512x512 on pad imx6-mipi-csi2/1
>> Format set: SGBRG8 512x512
>> Setting up frame interval 1/30 on entity imx6-mipi-csi2
>> Unable to set frame interval: Inappropriate ioctl for device (-25)Unable to
>> setup formats: Inappropriate ioctl for device (25)
>>
>>
>> So i.MX needs to implement this op in every subdev in the
>> pipeline, otherwise it's not possible to configure the
>> pipeline with media-ctl.
>
> The frame rate is only set on the sub-device on which you explicitly set it.
> I.e. setting the frame rate fails if it's not supported on a pad.
>
> Philipp recently posted patches that add frame rate propagation to
> media-ctl.
>
> Frame rate is typically settable (and gettable) only on a sensor sub-device's
> source pad, which means it normally would not be propagated by the kernel;
> with Philipp's patches it is also set on the sink pad of the bus receiver.
> Receivers don't have a way to control it, nor do they implement the IOCTLs,
> so that would indeed result in an error.
>

Frame rate is really an essential piece of information. The spatial
dimensions and data type provided by set_fmt are really only half the
equation, the other is temporal, i.e. the data rate.

It's true that subdevices have no control over the frame rate at their
sink pads, but the same argument applies to set_fmt. Even if it has
no control over the data format it receives, it still needs that
information in order to determine the correct format at the source.
The same argument applies to frame rate.

So in my opinion, the behavior of [gs]_frame_interval should be, if a
subdevice is capable of modifying the frame rate, then it should
implement [gs]_frame_interval at _all_ of its pads, similar to set_fmt.
And frame rate should really be part of link validation the same as
set_fmt is.
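
As a sketch of that behavior (hypothetical code, not what the i.MX subdevs
currently do), a sub-device that can modify the frame rate would keep one
interval per pad, so get/set work on any pad just as get_fmt/set_fmt do.
CSI_NUM_PADS and csi_priv below are made up for the example:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/videodev2.h>
#include <media/v4l2-subdev.h>

#define CSI_NUM_PADS 3	/* hypothetical pad count, for the sketch only */

struct csi_priv {
	struct v4l2_subdev sd;
	/* one frame interval stored per pad */
	struct v4l2_fract frame_interval[CSI_NUM_PADS];
};

static int csi_g_frame_interval(struct v4l2_subdev *sd,
				struct v4l2_subdev_frame_interval *fi)
{
	struct csi_priv *priv = container_of(sd, struct csi_priv, sd);

	if (fi->pad >= CSI_NUM_PADS)
		return -EINVAL;
	fi->interval = priv->frame_interval[fi->pad];
	return 0;
}

s_frame_interval would be the mirror image, clamping the requested interval
to what the frame skipper can actually produce.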

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21 22:21         ` Steve Longerbeam
@ 2017-02-21 23:34           ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-21 23:34 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Russell King, Steve Longerbeam



On 02/21/2017 02:21 PM, Steve Longerbeam wrote:
>
>
> On 02/21/2017 04:15 AM, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Mon, Feb 20, 2017 at 02:56:15PM -0800, Steve Longerbeam wrote:
>>>
>>>
>>> On 02/20/2017 02:04 PM, Sakari Ailus wrote:
>>>> Hi Steve,
>>>>
>>>> On Wed, Feb 15, 2017 at 06:19:31PM -0800, Steve Longerbeam wrote:
>>>>> From: Russell King <rmk+kernel@armlinux.org.uk>
>>>>>
>>>>> Setting and getting frame rates is part of the negotiation mechanism
>>>>> between subdevs.  The lack of support means that a frame rate at the
>>>>> sensor can't be negotiated through the subdev path.
>>>>
>>>> Just wondering --- what do you need this for?
>>>
>>>
>>> Hi Sakari,
>>>
>>> i.MX does need the ability to negotiate the frame rates in the
>>> pipelines. The CSI has the ability to skip frames at the output,
>>> which is something Philipp added to the CSI subdev. That affects
>>> frame interval at the CSI output.
>>>
>>> But as Russell pointed out, the lack of [gs]_frame_interval op
>>> causes media-ctl to fail:
>>>
>>> media-ctl -v -d /dev/media1 --set-v4l2
>>> '"imx6-mipi-csi2":1[fmt:SGBRG8/512x512@1/30]'
>>>
>>> Opening media device /dev/media1
>>> Enumerating entities
>>> Found 29 entities
>>> Enumerating pads and links
>>> Setting up format SGBRG8 512x512 on pad imx6-mipi-csi2/1
>>> Format set: SGBRG8 512x512
>>> Setting up frame interval 1/30 on entity imx6-mipi-csi2
>>> Unable to set frame interval: Inappropriate ioctl for device
>>> (-25)Unable to
>>> setup formats: Inappropriate ioctl for device (25)
>>>
>>>
>>> So i.MX needs to implement this op in every subdev in the
>>> pipeline, otherwise it's not possible to configure the
>>> pipeline with media-ctl.
>>
>> The frame rate is only set on the sub-device on which you explicitly set it.
>> I.e. setting the frame rate fails if it's not supported on a pad.
>>
>> Philipp recently posted patches that add frame rate propagation to
>> media-ctl.
>>
>> Frame rate is typically settable (and gettable) only on a sensor
>> sub-device's source pad, which means it normally would not be propagated
>> by the kernel; with Philipp's patches it is also set on the sink pad of
>> the bus receiver. Receivers don't have a way to control it, nor do they
>> implement the IOCTLs, so that would indeed result in an error.
>>
>
> Frame rate is really an essential piece of information. The spatial
> dimensions and data type provided by set_fmt are really only half the
> equation, the other is temporal, i.e. the data rate.
>
> It's true that subdevices have no control over the frame rate at their
> sink pads, but the same argument applies to set_fmt. Even if it has
> no control over the data format it receives, it still needs that
> information in order to determine the correct format at the source.
> The same argument applies to frame rate.
>
> So in my opinion, the behavior of [gs]_frame_interval should be, if a
> subdevice is capable of modifying the frame rate, then it should
> implement [gs]_frame_interval at _all_ of its pads, similar to set_fmt.
> And frame rate should really be part of link validation the same as
> set_fmt is.
>

Actually, if frame rate were added to link validation then
[gs]_frame_interval would have to be mandatory, even if the
subdev has no control over the frame rate; again, this is like
set_fmt. Otherwise, if a subdev has not implemented
[gs]_frame_interval, then frame rate validation across
the whole pipeline is broken: if we have

A -> B -> C

and B has not implemented [gs]_frame_interval, and C is expecting
30 fps, then pipeline validation would succeed even though A is 
outputting 60 fps.
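
To illustrate (this is only a sketch of the idea, not something v4l2 link
validation does today), such a check on each link could look like:

#include <linux/errno.h>
#include <linux/types.h>
#include <media/media-entity.h>
#include <media/v4l2-subdev.h>

/*
 * Sketch only: compare the frame intervals reported on the two ends of a
 * link, assuming both entities are sub-devices.
 */
static int link_validate_frame_interval(struct media_link *link)
{
	struct v4l2_subdev *source_sd =
		media_entity_to_v4l2_subdev(link->source->entity);
	struct v4l2_subdev *sink_sd =
		media_entity_to_v4l2_subdev(link->sink->entity);
	struct v4l2_subdev_frame_interval source_fi = {
		.pad = link->source->index,
	};
	struct v4l2_subdev_frame_interval sink_fi = {
		.pad = link->sink->index,
	};
	int ret;

	ret = v4l2_subdev_call(source_sd, video, g_frame_interval, &source_fi);
	if (ret)
		return ret;
	ret = v4l2_subdev_call(sink_sd, video, g_frame_interval, &sink_fi);
	if (ret)
		return ret;

	/* compare as cross products so that 1/30 and 2/60 still match */
	if ((u64)source_fi.interval.numerator * sink_fi.interval.denominator !=
	    (u64)sink_fi.interval.numerator * source_fi.interval.denominator)
		return -EPIPE;

	return 0;
}

Note that v4l2_subdev_call() returns -ENOIOCTLCMD when an op isn't
implemented, so a check like this would fail for the B above that lacks
[gs]_frame_interval; that's exactly why the op would have to become
mandatory for such validation to work.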

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 12/36] add mux and video interface bridge entity functions
  2017-02-19 21:28   ` Pavel Machek
@ 2017-02-22 17:19     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-22 17:19 UTC (permalink / raw)
  To: Pavel Machek
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/19/2017 01:28 PM, Pavel Machek wrote:
> On Wed 2017-02-15 18:19:14, Steve Longerbeam wrote:
>> From: Philipp Zabel <p.zabel@pengutronix.de>
>>
>> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
>>
>> - renamed MEDIA_ENT_F_MUX to MEDIA_ENT_F_VID_MUX
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>
> This is slightly "interesting" format of changelog. Normally signoffs
> go below.
>
>> diff --git a/Documentation/media/uapi/mediactl/media-types.rst b/Documentation/media/uapi/mediactl/media-types.rst
>> index 3e03dc2..023be29 100644
>> --- a/Documentation/media/uapi/mediactl/media-types.rst
>> +++ b/Documentation/media/uapi/mediactl/media-types.rst
>> @@ -298,6 +298,28 @@ Types and flags used to represent the media graph elements
>>  	  received on its sink pad and outputs the statistics data on
>>  	  its source pad.
>>
>> +    -  ..  row 29
>> +
>> +       ..  _MEDIA-ENT-F-MUX:
>> +
>> +       -  ``MEDIA_ENT_F_MUX``
>
> And you probably want to rename it here, too.

Done, thanks.
Steve

>
> With that fixed:
>
> Reviewed-by: Pavel Machek <pavel@ucw.cz>
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-17 11:38       ` Philipp Zabel
@ 2017-02-22 23:38         ` Steve Longerbeam
  2017-02-22 23:41           ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-22 23:38 UTC (permalink / raw)
  To: Philipp Zabel, Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam



On 02/17/2017 03:38 AM, Philipp Zabel wrote:
> On Fri, 2017-02-17 at 11:06 +0000, Russell King - ARM Linux wrote:
>> On Fri, Feb 17, 2017 at 11:47:59AM +0100, Philipp Zabel wrote:
>>> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>>> +static void csi2_dphy_init(struct csi2_dev *csi2)
>>>> +{
>>>> +	/*
>>>> +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
>>>> +	 * clock from the HSI_TX PLL, and a fixed target lane max
>>>> +	 * bandwidth of 300 Mbps. This value should be derived
>>>
>>> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
>>> this should be 850 Mbps. Where does this 300 Mbps value come from?
>>
>> I thought you had some code to compute the correct value, although
>> I guess we've lost the ability to know how fast the sensor is going
>> to drive the link.
>
> I had code to calculate the number of needed lanes from the bit rate and
> link frequency. I did not actually change the D-PHY register value.
> And as you pointed out, calculating the number of lanes is not useful
> without input from the sensor driver, as some lane configurations might
> not be supported.
>
>> Note that the IMX219 currently drives the data lanes at 912Mbps almost
>> exclusively, as I've yet to finish working out how to derive the PLL
>> parameters.  (I have something that works, but it currently takes on
>> the order of 100k iterations to derive the parameters.  gcd() doesn't
>> help you in this instance.)
>
> The tc358743 also currently only implements a fixed rate (of 594 Mbps).

I've analyzed the OV5640 video modes, and they generate the following
"sysclk"'s in Mhz:

210
280
420
560
840

But I don't know whether this is equivalent to bit rate. Is it the same
as Mbps-per-lane? If so, this could be indicated to the sink by
implementing V4L2_CID_LINK_FREQ in the ov5640.c sensor driver.

The Mbps-per-lane value would then be link_freq * 2, and then
Mbps-per-lane could be used to lookup the correct "hsfreqsel"
value to program the D-PHY.

I've added this to imx6-mipi-csi2.c. If the source didn't implement
V4L2_CID_LINK_FREQ then it uses a default 849 Mbps-per-lane.
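
Roughly, the lookup works like this (a sketch of the idea only, not the
actual imx6-mipi-csi2 code; the 849 default is the fallback mentioned above):

#include <linux/math64.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

/*
 * Sketch only: read the upstream sub-device's V4L2_CID_LINK_FREQ control
 * and convert it to Mbps per lane, falling back to a default when the
 * control isn't implemented.
 */
static u32 csi2_mbps_per_lane(struct v4l2_subdev *src_sd)
{
	struct v4l2_ctrl *ctrl;
	s64 link_freq;

	if (!src_sd->ctrl_handler)
		return 849;	/* default Mbps-per-lane */

	ctrl = v4l2_ctrl_find(src_sd->ctrl_handler, V4L2_CID_LINK_FREQ);
	if (!ctrl)
		return 849;

	/*
	 * V4L2_CID_LINK_FREQ is an integer menu control: the current value
	 * indexes a menu of supported frequencies in Hz.
	 */
	link_freq = ctrl->qmenu_int[ctrl->val];

	/* D-PHY lanes are DDR clocked: bit rate per lane = 2 * link_freq */
	return div_u64((u64)link_freq * 2, 1000000);
}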


Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-22 23:38         ` Steve Longerbeam
@ 2017-02-22 23:41           ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-22 23:41 UTC (permalink / raw)
  To: Philipp Zabel, Russell King - ARM Linux
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam



On 02/22/2017 03:38 PM, Steve Longerbeam wrote:
>
>
> On 02/17/2017 03:38 AM, Philipp Zabel wrote:
>> On Fri, 2017-02-17 at 11:06 +0000, Russell King - ARM Linux wrote:
>>> On Fri, Feb 17, 2017 at 11:47:59AM +0100, Philipp Zabel wrote:
>>>> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>>>> +static void csi2_dphy_init(struct csi2_dev *csi2)
>>>>> +{
>>>>> +    /*
>>>>> +     * FIXME: 0x14 is derived from a fixed D-PHY reference
>>>>> +     * clock from the HSI_TX PLL, and a fixed target lane max
>>>>> +     * bandwidth of 300 Mbps. This value should be derived
>>>>
>>>> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
>>>> this should be 850 Mbps. Where does this 300 Mbps value come from?
>>>
>>> I thought you had some code to compute the correct value, although
>>> I guess we've lost the ability to know how fast the sensor is going
>>> to drive the link.
>>
>> I had code to calculate the number of needed lanes from the bit rate and
>> link frequency. I did not actually change the D-PHY register value.
>> And as you pointed out, calculating the number of lanes is not useful
>> without input from the sensor driver, as some lane configurations might
>> not be supported.
>>
>>> Note that the IMX219 currently drives the data lanes at 912Mbps almost
>>> exclusively, as I've yet to finish working out how to derive the PLL
>>> parameters.  (I have something that works, but it currently takes on
>>> the order of 100k iterations to derive the parameters.  gcd() doesn't
>>> help you in this instance.)
>>
>> The tc358743 also currently only implements a fixed rate (of 594 Mbps).
>
> I've analyzed the OV5640 video modes, and they generate the following
> "sysclk"'s in Mhz:
>
> 210
> 280
> 420
> 560
> 840
>
> But I don't know whether this is equivalent to bit rate. Is it the same
> as Mbps-per-lane? If so, this could be indicated to the sink by
> implementing V4L2_CID_LINK_FREQ in the ov5640.c sensor driver.
>

er, rather, if I knew what this "sysclk" value was, I could use it
to convert to a link frequency and return it in V4L2_CID_LINK_FREQ.

Steve

> The Mbps-per-lane value would then be link_freq * 2, and then
> Mbps-per-lane could be used to lookup the correct "hsfreqsel"
> value to program the D-PHY.
>
> I've added this to imx6-mipi-csi2.c. If the source didn't implement
> V4L2_CID_LINK_FREQ then it uses a default 849 Mbps-per-lane.
>
>
> Steve
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation
  2017-02-16 11:32   ` Philipp Zabel
@ 2017-02-22 23:52     ` Steve Longerbeam
  2017-02-23  9:10       ` Philipp Zabel
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-22 23:52 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Philipp,


On 02/16/2017 03:32 AM, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>> The previous API and negotiation of mbus codes and pixel formats
>> was broken, and has been completely redone.
>>
>> The negotiation of media bus codes should be as follows:
>>
>> CSI:
>>
>> sink pad     direct src pad      IDMAC src pad
>> --------     ----------------    -------------
>> RGB (any)        IPU RGB           RGB (any)
>> YUV (any)        IPU YUV           YUV (any)
>> Bayer              N/A             must be same bayer code as sink
>
> The IDMAC src pad should also use the internal 32-bit RGB / YUV format,
> except if bayer/raw mode is selected, in which case the attached capture
> video device should only allow a single mode corresponding to the output
> pad media bus format.

The IDMAC source pad is going to memory, so it has left the IPU.
Are you sure it should be an internal IPU format? I realize it
is linked to a capture device node, and the IPU format could then
be translated to a v4l2 fourcc by the capture device, but IMHO this
pad is external to the IPU.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 17/36] media: Add userspace header file for i.MX
  2017-02-16 11:33   ` Philipp Zabel
@ 2017-02-22 23:54     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-22 23:54 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/16/2017 03:33 AM, Philipp Zabel wrote:
> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>
>> +/*
>> + * events from the subdevs
>> + */
>> +#define V4L2_EVENT_IMX_CLASS          V4L2_EVENT_PRIVATE_START
>> +#define V4L2_EVENT_IMX_NFB4EOF        (V4L2_EVENT_IMX_CLASS + 1)
>> +#define V4L2_EVENT_IMX_FRAME_INTERVAL (V4L2_EVENT_IMX_CLASS + 2)
>
> These events are still i.MX specific. I think they shouldn't be.

Done, I've exported them to

V4L2_EVENT_FRAME_INTERVAL_ERROR
V4L2_EVENT_NEW_FRAME_BEFORE_EOF

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-17 11:06     ` Russell King - ARM Linux
  2017-02-17 11:38       ` Philipp Zabel
@ 2017-02-23  0:06       ` Steve Longerbeam
  2017-02-23  0:09         ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-23  0:06 UTC (permalink / raw)
  To: Russell King - ARM Linux, Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam



On 02/17/2017 03:06 AM, Russell King - ARM Linux wrote:
> On Fri, Feb 17, 2017 at 11:47:59AM +0100, Philipp Zabel wrote:
>> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>> +static void csi2_dphy_init(struct csi2_dev *csi2)
>>> +{
>>> +	/*
>>> +	 * FIXME: 0x14 is derived from a fixed D-PHY reference
>>> +	 * clock from the HSI_TX PLL, and a fixed target lane max
>>> +	 * bandwidth of 300 Mbps. This value should be derived
>>
>> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
>> this should be 850 Mbps. Where does this 300 Mbps value come from?
>
> I thought you had some code to compute the correct value, although
> I guess we've lost the ability to know how fast the sensor is going
> to drive the link.
>
> Note that the IMX219 currently drives the data lanes at 912Mbps almost
> exclusively, as I've yet to finish working out how to derive the PLL
> parameters.  (I have something that works, but it currently takes on
> the order of 100k iterations to derive the parameters.  gcd() doesn't
> help you in this instance.)

Hi Russell,

As I mentioned, I've added code to imx6-mipi-csi2 to determine the
source's link frequency via V4L2_CID_LINK_FREQ. If you were to implement
this control and return 912 Mbps-per-lane, the D-PHY will be programmed
correctly for the IMX219 (at least, that is the theory anyway).

Alternatively, I could up the default in imx6-mipi-csi2 to 950
Mbps. I will have to test that to make sure it still works with
OV5640 and tc358743.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver
  2017-02-23  0:06       ` Steve Longerbeam
@ 2017-02-23  0:09         ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-23  0:09 UTC (permalink / raw)
  To: Russell King - ARM Linux, Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam



On 02/22/2017 04:06 PM, Steve Longerbeam wrote:
>
>
> On 02/17/2017 03:06 AM, Russell King - ARM Linux wrote:
>> On Fri, Feb 17, 2017 at 11:47:59AM +0100, Philipp Zabel wrote:
>>> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>>> +static void csi2_dphy_init(struct csi2_dev *csi2)
>>>> +{
>>>> +    /*
>>>> +     * FIXME: 0x14 is derived from a fixed D-PHY reference
>>>> +     * clock from the HSI_TX PLL, and a fixed target lane max
>>>> +     * bandwidth of 300 Mbps. This value should be derived
>>>
>>> If the table in https://community.nxp.com/docs/DOC-94312 is correct,
>>> this should be 850 Mbps. Where does this 300 Mbps value come from?
>>
>> I thought you had some code to compute the correct value, although
>> I guess we've lost the ability to know how fast the sensor is going
>> to drive the link.
>>
>> Note that the IMX219 currently drives the data lanes at 912Mbps almost
>> exclusively, as I've yet to finish working out how to derive the PLL
>> parameters.  (I have something that works, but it currently takes on
>> the order of 100k iterations to derive the parameters.  gcd() doesn't
>> help you in this instance.)
>
> Hi Russell,
>
> As I mentioned, I've added code to imx6-mipi-csi2 to determine the
> source's link frequency via V4L2_CID_LINK_FREQ. If you were to implement
> this control and return 912 Mbps-per-lane,

argh, I mean return 912 / 2.

Steve

  the D-PHY will be programmed
> correctly for the IMX219 (at least, that is the theory anyway).
>
> Alternatively, I could up the default in imx6-mipi-csi2 to 950
> Mbps. I will have to test that to make sure it still works with
> OV5640 and tc358743.
>
> Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation
  2017-02-22 23:52     ` Steve Longerbeam
@ 2017-02-23  9:10       ` Philipp Zabel
  2017-02-24  1:30         ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-02-23  9:10 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Steve,

On Wed, 2017-02-22 at 15:52 -0800, Steve Longerbeam wrote:
> Hi Philipp,
> 
> 
> On 02/16/2017 03:32 AM, Philipp Zabel wrote:
> > On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
> >> The previous API and negotiation of mbus codes and pixel formats
> >> was broken, and has been completely redone.
> >>
> >> The negotiation of media bus codes should be as follows:
> >>
> >> CSI:
> >>
> >> sink pad     direct src pad      IDMAC src pad
> >> --------     ----------------    -------------
> >> RGB (any)        IPU RGB           RGB (any)
> >> YUV (any)        IPU YUV           YUV (any)
> >> Bayer              N/A             must be same bayer code as sink
> >
> > The IDMAC src pad should also use the internal 32-bit RGB / YUV format,
> > except if bayer/raw mode is selected, in which case the attached capture
> > video device should only allow a single mode corresponding to the output
> > pad media bus format.
> 
> The IDMAC source pad is going to memory, so it has left the IPU.
> Are you sure it should be an internal IPU format? I realize it
> is linked to a capture device node, and the IPU format could then
> be translated to a v4l2 fourcc by the capture device, but IMHO this
> pad is external to the IPU.

The CSI IDMAC source pad should describe the format at the connection
between the CSI and the IDMAC, just as the icprpvf and icprpenc source
pads.
The format outside of the IPU is the memory format written by the IDMAC,
but that is a memory pixel format and not a media bus format at all.

That would also make it more straightforward to enumerate the memory
pixel formats in the capture device: if the source pad media bus format
is 32-bit YUV, enumerate all YUV formats; if it is 32-bit RGB or RGB565,
enumerate all RGB formats; otherwise (bayer/raw mode), only allow the
specific memory format matching the bus format.
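
A sketch of that rule (hypothetical helper; the format lists are just
illustrative, not the driver's actual tables):

#include <linux/kernel.h>
#include <linux/media-bus-format.h>
#include <linux/videodev2.h>

/*
 * Sketch only: pick the set of capture pixel formats to enumerate for a
 * given IDMAC source pad media bus code. For raw/bayer codes the caller
 * enumerates only the single pixel format matching the bus code.
 */
static const u32 *capture_formats_for_code(u32 code, unsigned int *count)
{
	static const u32 yuv_fmts[] = {
		V4L2_PIX_FMT_YUYV, V4L2_PIX_FMT_UYVY, V4L2_PIX_FMT_YUV420,
	};
	static const u32 rgb_fmts[] = {
		V4L2_PIX_FMT_RGB565, V4L2_PIX_FMT_RGB24, V4L2_PIX_FMT_XRGB32,
	};

	switch (code) {
	case MEDIA_BUS_FMT_AYUV8_1X32:
		*count = ARRAY_SIZE(yuv_fmts);
		return yuv_fmts;
	case MEDIA_BUS_FMT_ARGB8888_1X32:
	case MEDIA_BUS_FMT_RGB565_2X8_LE:
	case MEDIA_BUS_FMT_RGB565_2X8_BE:
		*count = ARRAY_SIZE(rgb_fmts);
		return rgb_fmts;
	default:
		/* bayer/raw: only the format matching the bus code itself */
		*count = 0;
		return NULL;
	}
}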

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation
  2017-02-23  9:10       ` Philipp Zabel
@ 2017-02-24  1:30         ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-02-24  1:30 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/23/2017 01:10 AM, Philipp Zabel wrote:
> Hi Steve,
>
> On Wed, 2017-02-22 at 15:52 -0800, Steve Longerbeam wrote:
>> Hi Philipp,
>>
>>
>> On 02/16/2017 03:32 AM, Philipp Zabel wrote:
>>> On Wed, 2017-02-15 at 18:19 -0800, Steve Longerbeam wrote:
>>>> The previous API and negotiation of mbus codes and pixel formats
>>>> was broken, and has been completely redone.
>>>>
>>>> The negotiation of media bus codes should be as follows:
>>>>
>>>> CSI:
>>>>
>>>> sink pad     direct src pad      IDMAC src pad
>>>> --------     ----------------    -------------
>>>> RGB (any)        IPU RGB           RGB (any)
>>>> YUV (any)        IPU YUV           YUV (any)
>>>> Bayer              N/A             must be same bayer code as sink
>>>
>>> The IDMAC src pad should also use the internal 32-bit RGB / YUV format,
>>> except if bayer/raw mode is selected, in which case the attached capture
>>> video device should only allow a single mode corresponding to the output
>>> pad media bus format.
>>
>> The IDMAC source pad is going to memory, so it has left the IPU.
>> Are you sure it should be an internal IPU format? I realize it
>> is linked to a capture device node, and the IPU format could then
>> be translated to a v4l2 fourcc by the capture device, but IMHO this
>> pad is external to the IPU.
>
> The CSI IDMAC source pad should describe the format at the connection
> between the CSI and the IDMAC, just as the icprpvf and icprpenc source
> pads.
> The format outside of the IPU is the memory format written by the IDMAC,
> but that is a memory pixel format and not a media bus format at all.
>

True, it is a memory format. I don't really mind if the CSI and PRP
ENC/VF source pads are characterized as IPU internal formats, since
they are only used to indicate the colorspace to the capture device.

And yes it did simplify the enumeration and try_fmt code a bit. So
I went ahead and made the change.

Steve


> That would also make it more straightforward to enumerate the memory
> pixel formats in the capture device: if the source pad media bus format
> is 32-bit YUV, enumerate all YUV formats; if it is 32-bit RGB or RGB565,
> enumerate all RGB formats; otherwise (bayer/raw mode), only allow the
> specific memory format matching the bus format.
>
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 15/36] platform: add video-multiplexer subdevice driver
  2017-02-21  9:11     ` Philipp Zabel
@ 2017-02-24 20:09       ` Pavel Machek
  0 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-02-24 20:09 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, linux, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Sascha Hauer, Steve Longerbeam

Hi!

> > Plus you might want to describe which port corresponds to which gpio
> > state/bitfield values...
> > 
> > > +struct vidsw {
> > 
> > I knew it: it is secretly a switch! :-).
> 
> This driver started as a two-input gpio controlled bus switch.
> I changed the name when adding support for bitfield controlled
> multiplexers with more than two inputs.

We had a discussion with Sakari / Rob about whether a gpio-controlled
thing is a switch or a multiplexer :-).

> > > +	if (!pad) {
> > > +		/* Mirror the input side on the output side */
> > > +		cfg->type = vidsw->endpoint[vidsw->active].bus_type;
> > > +		if (cfg->type == V4L2_MBUS_PARALLEL ||
> > > +		    cfg->type == V4L2_MBUS_BT656)
> > > +			cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
> > > +	}
> > 
> > Will this need support for other V4L2_MBUS_ values?
> 
> To support CSI-2 multiplexers, yes.

Can you stick switch () { .... default: dev_err() } there, to help
future hackers?
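
(A rough sketch of that switch, reusing the names from the quoted hunk;
sd is assumed to be the subdev in scope here.)

	if (!pad) {
		/* Mirror the input side on the output side */
		cfg->type = vidsw->endpoint[vidsw->active].bus_type;
		switch (cfg->type) {
		case V4L2_MBUS_PARALLEL:
		case V4L2_MBUS_BT656:
			cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
			break;
		default:
			/* e.g. V4L2_MBUS_CSI2 is not mirrored yet: be loud */
			dev_err(sd->dev, "unhandled bus type %d\n", cfg->type);
			break;
		}
	}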

Thanks,
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver
  2017-02-16  2:19 ` [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver Steve Longerbeam
  2017-02-16 11:54   ` Philipp Zabel
@ 2017-02-27 14:38   ` Rob Herring
  2017-03-01  0:00     ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Rob Herring @ 2017-02-27 14:38 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: mark.rutland, shawnguo, kernel, fabio.estevam, linux, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:03PM -0800, Steve Longerbeam wrote:
> Add bindings documentation for the i.MX media driver.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  Documentation/devicetree/bindings/media/imx.txt | 66 +++++++++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/media/imx.txt
> 
> diff --git a/Documentation/devicetree/bindings/media/imx.txt b/Documentation/devicetree/bindings/media/imx.txt
> new file mode 100644
> index 0000000..fd5af50
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/media/imx.txt
> @@ -0,0 +1,66 @@
> +Freescale i.MX Media Video Device
> +=================================
> +
> +Video Media Controller node
> +---------------------------
> +
> +This is the media controller node for video capture support. It is a
> +virtual device that lists the camera serial interface nodes that the
> +media device will control.
> +
> +Required properties:
> +- compatible : "fsl,imx-capture-subsystem";
> +- ports      : Should contain a list of phandles pointing to camera
> +		sensor interface ports of IPU devices
> +
> +example:
> +
> +capture-subsystem {
> +	compatible = "fsl,imx-capture-subsystem";
> +	ports = <&ipu1_csi0>, <&ipu1_csi1>;
> +};
> +
> +fim child node
> +--------------
> +
> +This is an optional child node of the ipu_csi port nodes. If present and
> +available, it enables the Frame Interval Monitor. Its properties can be
> +used to modify the method in which the FIM measures frame intervals.
> +Refer to Documentation/media/v4l-drivers/imx.rst for more info on the
> +Frame Interval Monitor.

This should have a compatible string.

> +
> +Optional properties:
> +- fsl,input-capture-channel: an input capture channel and channel flags,
> +			     specified as <chan flags>. The channel number
> +			     must be 0 or 1. The flags can be
> +			     IRQ_TYPE_EDGE_RISING, IRQ_TYPE_EDGE_FALLING, or
> +			     IRQ_TYPE_EDGE_BOTH, and specify which input
> +			     capture signal edge will trigger the input
> +			     capture event. If an input capture channel is
> +			     specified, the FIM will use this method to
> +			     measure frame intervals instead of via the EOF
> +			     interrupt. The input capture method is much
> +			     preferred over EOF as it is not subject to
> +			     interrupt latency errors. However it requires
> +			     routing the VSYNC or FIELD output signals of
> +			     the camera sensor to one of the i.MX input
> +			     capture pads (SD1_DAT0, SD1_DAT1), which also
> +			     gives up support for SD1.
> +
> +
> +mipi_csi2 node
> +--------------
> +
> +This is the device node for the MIPI CSI-2 Receiver, required for MIPI
> +CSI-2 sensors.
> +
> +Required properties:
> +- compatible	: "fsl,imx6-mipi-csi2", "snps,dw-mipi-csi2";
> +- reg           : physical base address and length of the register set;
> +- clocks	: the MIPI CSI-2 receiver requires three clocks: hsi_tx
> +                  (the DPHY clock), video_27m, and eim_podf;
> +- clock-names	: must contain "dphy", "cfg", "pix";

Don't you need ports to describe the sensor and IPU connections?

> +
> +Optional properties:
> +- interrupts	: must contain two level-triggered interrupts,
> +                  in order: 100 and 101;
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 15/36] platform: add video-multiplexer subdevice driver
  2017-02-16  2:19 ` [PATCH v4 15/36] platform: add video-multiplexer subdevice driver Steve Longerbeam
  2017-02-19 22:02   ` Pavel Machek
@ 2017-02-27 14:41   ` Rob Herring
  2017-03-01  0:20     ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Rob Herring @ 2017-02-27 14:41 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: mark.rutland, shawnguo, kernel, fabio.estevam, linux, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel, Sascha Hauer,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:17PM -0800, Steve Longerbeam wrote:
> From: Philipp Zabel <p.zabel@pengutronix.de>
> 
> This driver can handle SoC internal and external video bus multiplexers,
> controlled either by register bit fields or by a GPIO. The subdevice
> passes through frame interval and mbus configuration of the active input
> to the output side.
> 
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> 
> --
> 
> - fixed a cut&paste error in vidsw_remove(): v4l2_async_register_subdev()
>   should be unregister.
> 
> - added media_entity_cleanup() and v4l2_device_unregister_subdev()
>   to vidsw_remove().
> 
> - added missing MODULE_DEVICE_TABLE().
>   Suggested-by: Javier Martinez Canillas <javier@dowhile0.org>
> 
> - removed a line left over from a previous iteration
>   (num_pads = of_get_child_count(np)) that negated the new way of
>   determining the pad count just before it.
> 
> - Philipp Zabel has developed a set of patches that allow adding
>   to the subdev async notifier waiting list using a chaining method
>   from the async registered callbacks (v4l2_of_subdev_registered()
>   and the prep patches for that). For now, I've removed the use of
>   v4l2_of_subdev_registered() for the vidmux driver's registered
>   callback. This doesn't affect the functionality of this driver,
>   but allows for it to be merged now, before adding the chaining
>   support.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  .../bindings/media/video-multiplexer.txt           |  59 +++

Please make this a separate commit.

>  drivers/media/platform/Kconfig                     |   8 +
>  drivers/media/platform/Makefile                    |   2 +
>  drivers/media/platform/video-multiplexer.c         | 474 +++++++++++++++++++++
>  4 files changed, 543 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/media/video-multiplexer.txt
>  create mode 100644 drivers/media/platform/video-multiplexer.c
> 
> diff --git a/Documentation/devicetree/bindings/media/video-multiplexer.txt b/Documentation/devicetree/bindings/media/video-multiplexer.txt
> new file mode 100644
> index 0000000..9d133d9
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/media/video-multiplexer.txt
> @@ -0,0 +1,59 @@
> +Video Multiplexer
> +=================
> +
> +Video multiplexers allow selecting between multiple input ports. Video received
> +on the active input port is passed through to the output port. Muxes described
> +by this binding may be controlled by a syscon register bitfield or by a GPIO.
> +
> +Required properties:
> +- compatible : should be "video-multiplexer"
> +- reg: should be register base of the register containing the control bitfield
> +- bit-mask: bitmask of the control bitfield in the control register
> +- bit-shift: bit offset of the control bitfield in the control register
> +- gpios: alternatively to reg, bit-mask, and bit-shift, a single GPIO phandle
> +  may be given to switch between two inputs
> +- #address-cells: should be <1>
> +- #size-cells: should be <0>
> +- port@*: at least three port nodes containing endpoints connecting to the
> +  source and sink devices according to of_graph bindings. The last port is
> +  the output port, all others are inputs.
> +
> +Example:
> +
> +syscon {
> +	compatible = "syscon", "simple-mfd";
> +
> +	mux {
> +		compatible = "video-multiplexer";
> +		/* Single bit (1 << 19) in syscon register 0x04: */
> +		reg = <0x04>;
> +		bit-mask = <1>;
> +		bit-shift = <19>;
> +		#address-cells = <1>;
> +		#size-cells = <0>;
> +
> +		port@0 {
> +			reg = <0>;
> +
> +			mux_in0: endpoint {
> +				remote-endpoint = <&video_source0_out>;
> +			};
> +		};
> +
> +		port@1 {
> +			reg = <1>;
> +
> +			mux_in1: endpoint {
> +				remote-endpoint = <&video_source1_out>;
> +			};
> +		};
> +
> +		port@2 {
> +			reg = <2>;
> +
> +			mux_out: endpoint {
> +				remote-endpoint = <&capture_interface_in>;
> +			};
> +		};
> +	};
> +};

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver
  2017-02-16  2:19 ` [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver Steve Longerbeam
@ 2017-02-27 14:45   ` Rob Herring
  2017-03-01  0:43     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Rob Herring @ 2017-02-27 14:45 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: mark.rutland, shawnguo, kernel, fabio.estevam, linux, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Wed, Feb 15, 2017 at 06:19:26PM -0800, Steve Longerbeam wrote:
> This driver is based on ov5640_mipi.c from the Freescale imx_3.10.17_1.0.0_beta
> branch, heavily modified to bring it forward to the latest interfaces and to
> clean up the code.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  .../devicetree/bindings/media/i2c/ov5640.txt       |   43 +

Please split to separate commit.

>  drivers/media/i2c/Kconfig                          |    7 +
>  drivers/media/i2c/Makefile                         |    1 +
>  drivers/media/i2c/ov5640.c                         | 2109 ++++++++++++++++++++
>  4 files changed, 2160 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/media/i2c/ov5640.txt
>  create mode 100644 drivers/media/i2c/ov5640.c
> 
> diff --git a/Documentation/devicetree/bindings/media/i2c/ov5640.txt b/Documentation/devicetree/bindings/media/i2c/ov5640.txt
> new file mode 100644
> index 0000000..4607bbe
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/media/i2c/ov5640.txt
> @@ -0,0 +1,43 @@
> +* Omnivision OV5640 MIPI CSI-2 sensor
> +
> +Required Properties:
> +- compatible: should be "ovti,ov5640"
> +- clocks: reference to the xclk input clock.
> +- clock-names: should be "xclk".
> +- DOVDD-supply: Digital I/O voltage supply, 1.8 volts
> +- AVDD-supply: Analog voltage supply, 2.8 volts
> +- DVDD-supply: Digital core voltage supply, 1.5 volts
> +
> +Optional Properties:
> +- reset-gpios: reference to the GPIO connected to the reset pin, if any.
> +- pwdn-gpios: reference to the GPIO connected to the pwdn pin, if any.

Use powerdown-gpios here as that is a somewhat standard name.

Both need to state what is the active state.

> +
> +The device node must contain one 'port' child node for its digital output
> +video port, in accordance with the video interface bindings defined in
> +Documentation/devicetree/bindings/media/video-interfaces.txt.
> +
> +Example:
> +
> +&i2c1 {
> +	ov5640: camera@3c {
> +		compatible = "ovti,ov5640";
> +		pinctrl-names = "default";
> +		pinctrl-0 = <&pinctrl_ov5640>;
> +		reg = <0x3c>;
> +		clocks = <&clks IMX6QDL_CLK_CKO>;
> +		clock-names = "xclk";
> +		DOVDD-supply = <&vgen4_reg>; /* 1.8v */
> +		AVDD-supply = <&vgen3_reg>;  /* 2.8v */
> +		DVDD-supply = <&vgen2_reg>;  /* 1.5v */
> +		pwdn-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>;
> +		reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>;
> +
> +		port {
> +			ov5640_to_mipi_csi2: endpoint {
> +				remote-endpoint = <&mipi_csi2_from_ov5640>;
> +				clock-lanes = <0>;
> +				data-lanes = <1 2>;
> +			};
> +		};
> +	};
> +};

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver
  2017-02-27 14:38   ` Rob Herring
@ 2017-03-01  0:00     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-01  0:00 UTC (permalink / raw)
  To: Rob Herring
  Cc: mark.rutland, shawnguo, kernel, fabio.estevam, linux, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Rob,


On 02/27/2017 06:38 AM, Rob Herring wrote:
> On Wed, Feb 15, 2017 at 06:19:03PM -0800, Steve Longerbeam wrote:
>> Add bindings documentation for the i.MX media driver.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>> ---
>>   Documentation/devicetree/bindings/media/imx.txt | 66 +++++++++++++++++++++++++
>>   1 file changed, 66 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/media/imx.txt
>>
>> diff --git a/Documentation/devicetree/bindings/media/imx.txt b/Documentation/devicetree/bindings/media/imx.txt
>> new file mode 100644
>> index 0000000..fd5af50
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/media/imx.txt
>> @@ -0,0 +1,66 @@
>> +Freescale i.MX Media Video Device
>> +=================================
>> +
>> +Video Media Controller node
>> +---------------------------
>> +
>> +This is the media controller node for video capture support. It is a
>> +virtual device that lists the camera serial interface nodes that the
>> +media device will control.
>> +
>> +Required properties:
>> +- compatible : "fsl,imx-capture-subsystem";
>> +- ports      : Should contain a list of phandles pointing to camera
>> +		sensor interface ports of IPU devices
>> +
>> +example:
>> +
>> +capture-subsystem {
>> +	compatible = "fsl,imx-capture-subsystem";
>> +	ports = <&ipu1_csi0>, <&ipu1_csi1>;
>> +};
>> +
>> +fim child node
>> +--------------
>> +
>> +This is an optional child node of the ipu_csi port nodes. If present and
>> +available, it enables the Frame Interval Monitor. Its properties can be
>> +used to modify the method in which the FIM measures frame intervals.
>> +Refer to Documentation/media/v4l-drivers/imx.rst for more info on the
>> +Frame Interval Monitor.
> This should have a compatible string.

I don't think so. The fim child node does not represent a device. The
CSI supports monitoring frame intervals (reporting, via a v4l2 event,
when a measured frame interval falls outside the nominal interval by
some tolerance value). The fim child node only exists to group the FIM
properties under a common node.

>> +
>> +Optional properties:
>> +- fsl,input-capture-channel: an input capture channel and channel flags,
>> +			     specified as <chan flags>. The channel number
>> +			     must be 0 or 1. The flags can be
>> +			     IRQ_TYPE_EDGE_RISING, IRQ_TYPE_EDGE_FALLING, or
>> +			     IRQ_TYPE_EDGE_BOTH, and specify which input
>> +			     capture signal edge will trigger the input
>> +			     capture event. If an input capture channel is
>> +			     specified, the FIM will use this method to
>> +			     measure frame intervals instead of via the EOF
>> +			     interrupt. The input capture method is much
>> +			     preferred over EOF as it is not subject to
>> +			     interrupt latency errors. However it requires
>> +			     routing the VSYNC or FIELD output signals of
>> +			     the camera sensor to one of the i.MX input
>> +			     capture pads (SD1_DAT0, SD1_DAT1), which also
>> +			     gives up support for SD1.
>> +
>> +
>> +mipi_csi2 node
>> +--------------
>> +
>> +This is the device node for the MIPI CSI-2 Receiver, required for MIPI
>> +CSI-2 sensors.
>> +
>> +Required properties:
>> +- compatible	: "fsl,imx6-mipi-csi2", "snps,dw-mipi-csi2";
>> +- reg           : physical base address and length of the register set;
>> +- clocks	: the MIPI CSI-2 receiver requires three clocks: hsi_tx
>> +                  (the DPHY clock), video_27m, and eim_podf;
>> +- clock-names	: must contain "dphy", "cfg", "pix";
> Don't you need ports to describe the sensor and IPU connections?

Done.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 15/36] platform: add video-multiplexer subdevice driver
  2017-02-27 14:41   ` Rob Herring
@ 2017-03-01  0:20     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-01  0:20 UTC (permalink / raw)
  To: Rob Herring
  Cc: mark.rutland, shawnguo, kernel, fabio.estevam, linux, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel, Sascha Hauer,
	Steve Longerbeam



On 02/27/2017 06:41 AM, Rob Herring wrote:
> On Wed, Feb 15, 2017 at 06:19:17PM -0800, Steve Longerbeam wrote:
>> From: Philipp Zabel <p.zabel@pengutronix.de>
>>
>> This driver can handle SoC internal and external video bus multiplexers,
>> controlled either by register bit fields or by a GPIO. The subdevice
>> passes through frame interval and mbus configuration of the active input
>> to the output side.
>>
>> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
>> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
>>
>> --
>>
>> - fixed a cut&paste error in vidsw_remove(): v4l2_async_register_subdev()
>>    should be unregister.
>>
>> - added media_entity_cleanup() and v4l2_device_unregister_subdev()
>>    to vidsw_remove().
>>
>> - added missing MODULE_DEVICE_TABLE().
>>    Suggested-by: Javier Martinez Canillas <javier@dowhile0.org>
>>
>> - removed a line left over from a previous iteration
>>    (num_pads = of_get_child_count(np)) that negated the new way of
>>    determining the pad count just before it.
>>
>> - Philipp Zabel has developed a set of patches that allow adding
>>    to the subdev async notifier waiting list using a chaining method
>>    from the async registered callbacks (v4l2_of_subdev_registered()
>>    and the prep patches for that). For now, I've removed the use of
>>    v4l2_of_subdev_registered() for the vidmux driver's registered
>>    callback. This doesn't affect the functionality of this driver,
>>    but allows for it to be merged now, before adding the chaining
>>    support.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>> ---
>>   .../bindings/media/video-multiplexer.txt           |  59 +++
> Please make this a separate commit.

Done.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver
  2017-02-27 14:45   ` Rob Herring
@ 2017-03-01  0:43     ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-01  0:43 UTC (permalink / raw)
  To: Rob Herring
  Cc: mark.rutland, shawnguo, kernel, fabio.estevam, linux, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 02/27/2017 06:45 AM, Rob Herring wrote:
> On Wed, Feb 15, 2017 at 06:19:26PM -0800, Steve Longerbeam wrote:
>> This driver is based on ov5640_mipi.c from the Freescale imx_3.10.17_1.0.0_beta
>> branch, heavily modified to bring it forward to the latest interfaces and to
>> clean up the code.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>> ---
>>   .../devicetree/bindings/media/i2c/ov5640.txt       |   43 +
> Please split to separate commit.

Done.

>
>>   drivers/media/i2c/Kconfig                          |    7 +
>>   drivers/media/i2c/Makefile                         |    1 +
>>   drivers/media/i2c/ov5640.c                         | 2109 ++++++++++++++++++++
>>   4 files changed, 2160 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/media/i2c/ov5640.txt
>>   create mode 100644 drivers/media/i2c/ov5640.c
>>
>> diff --git a/Documentation/devicetree/bindings/media/i2c/ov5640.txt b/Documentation/devicetree/bindings/media/i2c/ov5640.txt
>> new file mode 100644
>> index 0000000..4607bbe
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/media/i2c/ov5640.txt
>> @@ -0,0 +1,43 @@
>> +* Omnivision OV5640 MIPI CSI-2 sensor
>> +
>> +Required Properties:
>> +- compatible: should be "ovti,ov5640"
>> +- clocks: reference to the xclk input clock.
>> +- clock-names: should be "xclk".
>> +- DOVDD-supply: Digital I/O voltage supply, 1.8 volts
>> +- AVDD-supply: Analog voltage supply, 2.8 volts
>> +- DVDD-supply: Digital core voltage supply, 1.5 volts
>> +
>> +Optional Properties:
>> +- reset-gpios: reference to the GPIO connected to the reset pin, if any.
>> +- pwdn-gpios: reference to the GPIO connected to the pwdn pin, if any.
> Use powerdown-gpios here as that is a somewhat standard name.

Done.

>
> Both need to state what is the active state.

Done.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-02-16  2:19 ` [PATCH v4 13/36] [media] v4l2: add a frame timeout event Steve Longerbeam
@ 2017-03-02 15:53   ` Sakari Ailus
  2017-03-02 23:07     ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-02 15:53 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Steve,

On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
> Add a new FRAME_TIMEOUT event to signal that a video capture or
> output device has timed out waiting for reception or transmit
> completion of a video frame.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
>  Documentation/media/videodev2.h.rst.exceptions  | 1 +
>  include/uapi/linux/videodev2.h                  | 1 +
>  3 files changed, 7 insertions(+)
> 
> diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> index 8d663a7..dd77d9b 100644
> --- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> +++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> @@ -197,6 +197,11 @@ call.
>  	the regions changes. This event has a struct
>  	:c:type:`v4l2_event_motion_det`
>  	associated with it.
> +    * - ``V4L2_EVENT_FRAME_TIMEOUT``
> +      - 7
> +      - This event is triggered when the video capture or output device
> +	has timed out waiting for the reception or transmit completion of
> +	a frame of video.

As you're adding a new interface, I suppose you have an implementation
around. How do you determine what that timeout should be?

>      * - ``V4L2_EVENT_PRIVATE_START``
>        - 0x08000000
>        - Base event number for driver-private events.
> diff --git a/Documentation/media/videodev2.h.rst.exceptions b/Documentation/media/videodev2.h.rst.exceptions
> index e11a0d0..5b0f767 100644
> --- a/Documentation/media/videodev2.h.rst.exceptions
> +++ b/Documentation/media/videodev2.h.rst.exceptions
> @@ -459,6 +459,7 @@ replace define V4L2_EVENT_CTRL event-type
>  replace define V4L2_EVENT_FRAME_SYNC event-type
>  replace define V4L2_EVENT_SOURCE_CHANGE event-type
>  replace define V4L2_EVENT_MOTION_DET event-type
> +replace define V4L2_EVENT_FRAME_TIMEOUT event-type
>  replace define V4L2_EVENT_PRIVATE_START event-type
>  
>  replace define V4L2_EVENT_CTRL_CH_VALUE ctrl-changes-flags
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 46e8a2e3..e174c45 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -2132,6 +2132,7 @@ struct v4l2_streamparm {
>  #define V4L2_EVENT_FRAME_SYNC			4
>  #define V4L2_EVENT_SOURCE_CHANGE		5
>  #define V4L2_EVENT_MOTION_DET			6
> +#define V4L2_EVENT_FRAME_TIMEOUT		7
>  #define V4L2_EVENT_PRIVATE_START		0x08000000
>  
>  /* Payload for V4L2_EVENT_VSYNC */
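
(For context, a user-space sketch of consuming the proposed event,
following the usual v4l2 event flow; fd is assumed to be an already-open
capture node and error handling is trimmed.)

#include <poll.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* fd: an open capture video node; returns 1 if a frame timeout was seen */
static int wait_for_frame_timeout(int fd)
{
	struct v4l2_event_subscription sub = {
		.type = V4L2_EVENT_FRAME_TIMEOUT,
	};
	struct v4l2_event ev;
	struct pollfd pfd = { .fd = fd, .events = POLLPRI };

	if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
		return -1;

	/* v4l2 events are signalled as an exceptional condition (POLLPRI) */
	if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI) &&
	    ioctl(fd, VIDIOC_DQEVENT, &ev) == 0 &&
	    ev.type == V4L2_EVENT_FRAME_TIMEOUT) {
		fprintf(stderr, "capture stalled\n");
		return 1;
	}

	return 0;
}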

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-02-16  2:19 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Steve Longerbeam
  2017-02-19 21:44   ` Pavel Machek
@ 2017-03-02 16:02   ` Sakari Ailus
  2017-03-02 23:48     ` Steve Longerbeam
  2017-03-03 23:06     ` Russell King - ARM Linux
  1 sibling, 2 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-03-02 16:02 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Steve,

On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> all subdev entities in a pipeline to a given video device.
> 
> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> ---
>  drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
>  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>  2 files changed, 73 insertions(+)
> 
> diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
> index 303980b..09d4d97 100644
> --- a/drivers/media/v4l2-core/v4l2-mc.c
> +++ b/drivers/media/v4l2-core/v4l2-mc.c
> @@ -22,6 +22,7 @@
>  #include <linux/usb.h>
>  #include <media/media-device.h>
>  #include <media/media-entity.h>
> +#include <media/v4l2-ctrls.h>
>  #include <media/v4l2-fh.h>
>  #include <media/v4l2-mc.h>
>  #include <media/v4l2-subdev.h>
> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
>  }
>  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>  
> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> +				     struct media_entity *start_entity)

I have a few concerns / questions:

- What's the purpose of this patch? Why not to access the sub-device node
  directly?

- This implementation is only workable as long as you do not modify the
  pipeline. Once you disable a link along the pipeline, a device where the
  control was inherited from may no longer be a part of the pipeline.
  Depending on the hardware, it could be a part of another pipeline, in
  which case it certainly must not be accessible through an unrelated video
  node. As the function is added to the framework, I would expect it to
  handle such a case correctly.

- I assume it is the responsibility of the caller of this function to ensure
  the device in question will not be powered off whilst the video node is
  used as another user space interface to such a sub-device. If the driver
  uses the generic PM functions in the same file, this works, but it still
  has to be documented.

> +{
> +	struct media_device *mdev = start_entity->graph_obj.mdev;
> +	struct media_entity *entity;
> +	struct media_graph graph;
> +	struct v4l2_subdev *sd;
> +	int ret;
> +
> +	ret = media_graph_walk_init(&graph, mdev);
> +	if (ret)
> +		return ret;
> +
> +	media_graph_walk_start(&graph, start_entity);
> +
> +	while ((entity = media_graph_walk_next(&graph))) {
> +		if (!is_media_entity_v4l2_subdev(entity))
> +			continue;
> +
> +		sd = media_entity_to_v4l2_subdev(entity);
> +
> +		ret = v4l2_ctrl_add_handler(vfd->ctrl_handler,
> +					    sd->ctrl_handler,
> +					    NULL);
> +		if (ret)
> +			break;
> +	}
> +
> +	media_graph_walk_cleanup(&graph);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(__v4l2_pipeline_inherit_controls);
> +
> +int v4l2_pipeline_inherit_controls(struct video_device *vfd,
> +				   struct media_entity *start_entity)
> +{
> +	struct media_device *mdev = start_entity->graph_obj.mdev;
> +	int ret;
> +
> +	mutex_lock(&mdev->graph_mutex);
> +	ret = __v4l2_pipeline_inherit_controls(vfd, start_entity);
> +	mutex_unlock(&mdev->graph_mutex);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_pipeline_inherit_controls);
> +
>  /* -----------------------------------------------------------------------------
>   * Pipeline power management
>   *
> diff --git a/include/media/v4l2-mc.h b/include/media/v4l2-mc.h
> index 2634d9d..9848e77 100644
> --- a/include/media/v4l2-mc.h
> +++ b/include/media/v4l2-mc.h
> @@ -171,6 +171,17 @@ void v4l_disable_media_source(struct video_device *vdev);
>   */
>  int v4l_vb2q_enable_media_source(struct vb2_queue *q);
>  
> +/**
> + * v4l2_pipeline_inherit_controls - Add the v4l2 controls from all
> + *				    subdev entities in a pipeline to
> + *				    the given video device.
> + * @vfd: the video device
> + * @start_entity: Starting entity
> + */
> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> +				     struct media_entity *start_entity);
> +int v4l2_pipeline_inherit_controls(struct video_device *vfd,
> +				   struct media_entity *start_entity);
>  
>  /**
>   * v4l2_pipeline_pm_use - Update the use count of an entity
> @@ -231,6 +242,20 @@ static inline int v4l_vb2q_enable_media_source(struct vb2_queue *q)
>  	return 0;
>  }
>  
> +static inline int __v4l2_pipeline_inherit_controls(
> +	struct video_device *vfd,
> +	struct media_entity *start_entity)
> +{
> +	return 0;
> +}
> +
> +static inline int v4l2_pipeline_inherit_controls(
> +	struct video_device *vfd,
> +	struct media_entity *start_entity)
> +{
> +	return 0;
> +}
> +
>  static inline int v4l2_pipeline_pm_use(struct media_entity *entity, int use)
>  {
>  	return 0;

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-02 15:53   ` Sakari Ailus
@ 2017-03-02 23:07     ` Steve Longerbeam
  2017-03-03 11:45       ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-02 23:07 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/02/2017 07:53 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
>> Add a new FRAME_TIMEOUT event to signal that a video capture or
>> output device has timed out waiting for reception or transmit
>> completion of a video frame.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>> ---
>>  Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
>>  Documentation/media/videodev2.h.rst.exceptions  | 1 +
>>  include/uapi/linux/videodev2.h                  | 1 +
>>  3 files changed, 7 insertions(+)
>>
>> diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>> index 8d663a7..dd77d9b 100644
>> --- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>> +++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>> @@ -197,6 +197,11 @@ call.
>>  	the regions changes. This event has a struct
>>  	:c:type:`v4l2_event_motion_det`
>>  	associated with it.
>> +    * - ``V4L2_EVENT_FRAME_TIMEOUT``
>> +      - 7
>> +      - This event is triggered when the video capture or output device
>> +	has timed out waiting for the reception or transmit completion of
>> +	a frame of video.
>
> As you're adding a new interface, I suppose you have an implementation
> around. How do you determine what that timeout should be?

The imx-media driver sets the timeout to 1 second, or 30 frame
periods at 30 fps.
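
(A simplified sketch of how such a timeout might be armed; the structure
and field names below are assumptions for illustration, not the actual
imx-media code.)

#define FRAME_TIMEOUT_MS	1000	/* ~30 frame periods at 30 fps */

/* frame-done interrupt path: push the timeout out by another interval */
static void frame_done(struct my_capture *priv)
{
	mod_timer(&priv->timeout_timer,
		  jiffies + msecs_to_jiffies(FRAME_TIMEOUT_MS));
	/* ... hand the completed buffer back to videobuf2 ... */
}

/* timer callback: no frame arrived in time, notify user space */
static void frame_timeout(unsigned long data)
{
	struct my_capture *priv = (struct my_capture *)data;
	const struct v4l2_event ev = {
		.type = V4L2_EVENT_FRAME_TIMEOUT,
	};

	v4l2_event_queue(priv->vdev, &ev);
}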

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-02 16:02   ` Sakari Ailus
@ 2017-03-02 23:48     ` Steve Longerbeam
  2017-03-03  0:46       ` Steve Longerbeam
  2017-03-03  2:12       ` Steve Longerbeam
  2017-03-03 23:06     ` Russell King - ARM Linux
  1 sibling, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-02 23:48 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/02/2017 08:02 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>> all subdev entities in a pipeline to a given video device.
>>
>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>> ---
>>  drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
>>  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>>  2 files changed, 73 insertions(+)
>>
>> diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
>> index 303980b..09d4d97 100644
>> --- a/drivers/media/v4l2-core/v4l2-mc.c
>> +++ b/drivers/media/v4l2-core/v4l2-mc.c
>> @@ -22,6 +22,7 @@
>>  #include <linux/usb.h>
>>  #include <media/media-device.h>
>>  #include <media/media-entity.h>
>> +#include <media/v4l2-ctrls.h>
>>  #include <media/v4l2-fh.h>
>>  #include <media/v4l2-mc.h>
>>  #include <media/v4l2-subdev.h>
>> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
>>  }
>>  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>>
>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>> +				     struct media_entity *start_entity)
>
> I have a few concerns / questions:
>
> - What's the purpose of this patch? Why not to access the sub-device node
>   directly?


I don't really understand what you are trying to say. That's exactly
what this function is doing, accessing every subdevice in a pipeline
directly, in one convenient function call.


>
> - This implementation is only workable as long as you do not modify the
>   pipeline. Once you disable a link along the pipeline, a device where the
>   control was inherited from may no longer be a part of the pipeline.

That's correct. It's up to the media driver to clear the video device's
inherited controls whenever the pipeline is modified, and then call this
function again if need be.

In the imx-media driver, the function is called in link_setup when the
link from a source pad to a capture video node is enabled. This is the
last link that must be made to define the pipeline, so it is at this
point that a complete list of subdevice controls can be gathered by
walking the pipeline.
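
(A sketch of that call site; the function name and the assumption that
the remote entity is the capture video node are illustrative.)

static int csi_link_setup(struct media_entity *entity,
			  const struct media_pad *local,
			  const struct media_pad *remote, u32 flags)
{
	struct video_device *vfd;

	/* only act when the source-pad -> capture-node link is enabled */
	if (!(flags & MEDIA_LNK_FL_ENABLED) ||
	    !is_media_entity_v4l2_video_device(remote->entity))
		return 0;

	vfd = media_entity_to_video_device(remote->entity);

	/* the pipeline is now complete: walk it and pull in its controls */
	return v4l2_pipeline_inherit_controls(vfd, entity);
}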


>   Depending on the hardware, it could be a part of another pipeline, in
>   which case it certainly must not be accessible through an unrelated video
>   node. As the function is added to the framework, I would expect it to
>   handle such a case correctly.

The function will not inherit controls from a device that is not
reachable from the given starting subdevice, so I don't understand
your point here.


>
> - I assume it is the responsibility of the caller of this function to ensure
>   the device in question will not be powered off whilst the video node is
>   used as another user space interface to such a sub-device. If the driver
>   uses the generic PM functions in the same file, this works, but it still
>   has to be documented.

I guess I'm missing something. Why are you bringing up the subject of
power? What does this function have to do with whether a subdevice is
powered or not? The function makes use of v4l2_ctrl_add_handler(), and
the latter has no requirements about whether the devices owning the
control handlers are powered or not.


Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-02 23:48     ` Steve Longerbeam
@ 2017-03-03  0:46       ` Steve Longerbeam
  2017-03-03  2:12       ` Steve Longerbeam
  1 sibling, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-03  0:46 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
>
>
> On 03/02/2017 08:02 AM, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>> all subdev entities in a pipeline to a given video device.
>>>
>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>> ---
>>>  drivers/media/v4l2-core/v4l2-mc.c | 48
>>> +++++++++++++++++++++++++++++++++++++++
>>>  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>>>  2 files changed, 73 insertions(+)
>>>
>>> diff --git a/drivers/media/v4l2-core/v4l2-mc.c
>>> b/drivers/media/v4l2-core/v4l2-mc.c
>>> index 303980b..09d4d97 100644
>>> --- a/drivers/media/v4l2-core/v4l2-mc.c
>>> +++ b/drivers/media/v4l2-core/v4l2-mc.c
>>> @@ -22,6 +22,7 @@
>>>  #include <linux/usb.h>
>>>  #include <media/media-device.h>
>>>  #include <media/media-entity.h>
>>> +#include <media/v4l2-ctrls.h>
>>>  #include <media/v4l2-fh.h>
>>>  #include <media/v4l2-mc.h>
>>>  #include <media/v4l2-subdev.h>
>>> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct
>>> vb2_queue *q)
>>>  }
>>>  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>>>
>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>> +                     struct media_entity *start_entity)
>>
>> I have a few concerns / questions:
>>
>> - What's the purpose of this patch? Why not to access the sub-device node
>>   directly?
>
>
> I don't really understand what you are trying to say. That's exactly
> what this function is doing, accessing every subdevice in a pipeline
> directly, in one convenient function call.
>
>
>>
>> - This implementation is only workable as long as you do not modify the
>>   pipeline. Once you disable a link along the pipeline, a device where
>> the
>>   control was inherited from may no longer be a part of the pipeline.
>
> That's correct. It's up to the media driver to clear the video device's
> inherited controls whenever the pipeline is modified, and then call this
> function again if need be.

And here is where I need to eat my words :). I'm not actually
clearing the inherited controls if a link upstream of the device
node is modified after the whole pipeline has been configured. If
the user does that, the controls can become invalid. I need to fix
that by clearing the device node's controls in the link_notify
callback.
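
(A sketch of what that could look like in a link_notify handler; the
capture-node lookup is driver-specific and the helper name is assumed.)

static int capture_link_notify(struct media_link *link, u32 flags,
			       unsigned int notification)
{
	/* keep the default pipeline power-management behaviour */
	int ret = v4l2_pipeline_link_notify(link, flags, notification);

	if (notification == MEDIA_DEV_NOTIFY_POST_LINK_CH &&
	    !(flags & MEDIA_LNK_FL_ENABLED)) {
		/* find_capture_vdev() stands in for a driver-specific lookup */
		struct video_device *vfd = find_capture_vdev(link);

		if (vfd) {
			/* drop inherited controls; link_setup rebuilds them */
			v4l2_ctrl_handler_free(vfd->ctrl_handler);
			v4l2_ctrl_handler_init(vfd->ctrl_handler, 0);
		}
	}

	return ret;
}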

Steve


>
> In imx-media driver, the function is called in link_setup when the link
> from a source pad that is attached to a capture video node is enabled.
> This is the last link that must be made to define the pipeline, so it
> is at this time that a complete list of subdevice controls can be
> gathered by walking the pipeline.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-02 23:48     ` Steve Longerbeam
  2017-03-03  0:46       ` Steve Longerbeam
@ 2017-03-03  2:12       ` Steve Longerbeam
  2017-03-03 19:17         ` Sakari Ailus
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-03  2:12 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
>
>
> On 03/02/2017 08:02 AM, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>> all subdev entities in a pipeline to a given video device.
>>>
>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>> ---
>>>  drivers/media/v4l2-core/v4l2-mc.c | 48
>>> +++++++++++++++++++++++++++++++++++++++
>>>  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>>>  2 files changed, 73 insertions(+)
>>>
>>> diff --git a/drivers/media/v4l2-core/v4l2-mc.c
>>> b/drivers/media/v4l2-core/v4l2-mc.c
>>> index 303980b..09d4d97 100644
>>> --- a/drivers/media/v4l2-core/v4l2-mc.c
>>> +++ b/drivers/media/v4l2-core/v4l2-mc.c
>>> @@ -22,6 +22,7 @@
>>>  #include <linux/usb.h>
>>>  #include <media/media-device.h>
>>>  #include <media/media-entity.h>
>>> +#include <media/v4l2-ctrls.h>
>>>  #include <media/v4l2-fh.h>
>>>  #include <media/v4l2-mc.h>
>>>  #include <media/v4l2-subdev.h>
>>> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct
>>> vb2_queue *q)
>>>  }
>>>  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>>>
>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>> +                     struct media_entity *start_entity)
>>
>> I have a few concerns / questions:
>>
>> - What's the purpose of this patch? Why not to access the sub-device node
>>   directly?
>
>
> I don't really understand what you are trying to say.<snip>
>

Actually I think I understand what you mean now. Yes, the user can
always access a subdev's controls directly from its /dev/v4l-subdevXX.
I'm only providing this feature as a convenience to the user, so that
all controls in a pipeline can be accessed from one place, i.e. the
main capture device node.
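
(From user space, the convenience looks like this: one control
enumeration on the capture node covers the whole pipeline. A sketch;
/dev/video0 is only an example path.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);
	struct v4l2_query_ext_ctrl qc;

	if (fd < 0)
		return 1;

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_FLAG_NEXT_CTRL | V4L2_CTRL_FLAG_NEXT_COMPOUND;
	while (ioctl(fd, VIDIOC_QUERY_EXT_CTRL, &qc) == 0) {
		printf("0x%08x %s\n", qc.id, qc.name);
		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL | V4L2_CTRL_FLAG_NEXT_COMPOUND;
	}

	close(fd);
	return 0;
}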

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-02 23:07     ` Steve Longerbeam
@ 2017-03-03 11:45       ` Sakari Ailus
  2017-03-03 22:43         ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-03 11:45 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, Mar 02, 2017 at 03:07:21PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/02/2017 07:53 AM, Sakari Ailus wrote:
> >Hi Steve,
> >
> >On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
> >>Add a new FRAME_TIMEOUT event to signal that a video capture or
> >>output device has timed out waiting for reception or transmit
> >>completion of a video frame.
> >>
> >>Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> >>---
> >> Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
> >> Documentation/media/videodev2.h.rst.exceptions  | 1 +
> >> include/uapi/linux/videodev2.h                  | 1 +
> >> 3 files changed, 7 insertions(+)
> >>
> >>diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>index 8d663a7..dd77d9b 100644
> >>--- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>+++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>@@ -197,6 +197,11 @@ call.
> >> 	the regions changes. This event has a struct
> >> 	:c:type:`v4l2_event_motion_det`
> >> 	associated with it.
> >>+    * - ``V4L2_EVENT_FRAME_TIMEOUT``
> >>+      - 7
> >>+      - This event is triggered when the video capture or output device
> >>+	has timed out waiting for the reception or transmit completion of
> >>+	a frame of video.
> >
> >As you're adding a new interface, I suppose you have an implementation
> >around. How do you determine what that timeout should be?
> 
> The imx-media driver sets the timeout to 1 second, or 30 frame
> periods at 30 fps.

The frame rate is not necessarily constant during streaming. It may well
change as a result of lighting conditions. I wouldn't add an event for this:
this is unreliable and 30 times the frame period is an arbitrary value
anyway. No other drivers do this either.

The user space is generally in control of the frame period (or on some
devices it could be the sensor, too, but *not* the CSI-2 receiver driver),
so detecting the condition of not receiving any frames is more reliably done
in the user space --- if needed.

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-03  2:12       ` Steve Longerbeam
@ 2017-03-03 19:17         ` Sakari Ailus
  2017-03-03 22:47           ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-03 19:17 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Steve,

On Thu, Mar 02, 2017 at 06:12:43PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
> >
> >
> >On 03/02/2017 08:02 AM, Sakari Ailus wrote:
> >>Hi Steve,
> >>
> >>On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> >>>v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> >>>all subdev entities in a pipeline to a given video device.
> >>>
> >>>Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> >>>---
> >>> drivers/media/v4l2-core/v4l2-mc.c | 48
> >>>+++++++++++++++++++++++++++++++++++++++
> >>> include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
> >>> 2 files changed, 73 insertions(+)
> >>>
> >>>diff --git a/drivers/media/v4l2-core/v4l2-mc.c
> >>>b/drivers/media/v4l2-core/v4l2-mc.c
> >>>index 303980b..09d4d97 100644
> >>>--- a/drivers/media/v4l2-core/v4l2-mc.c
> >>>+++ b/drivers/media/v4l2-core/v4l2-mc.c
> >>>@@ -22,6 +22,7 @@
> >>> #include <linux/usb.h>
> >>> #include <media/media-device.h>
> >>> #include <media/media-entity.h>
> >>>+#include <media/v4l2-ctrls.h>
> >>> #include <media/v4l2-fh.h>
> >>> #include <media/v4l2-mc.h>
> >>> #include <media/v4l2-subdev.h>
> >>>@@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct
> >>>vb2_queue *q)
> >>> }
> >>> EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
> >>>
> >>>+int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> >>>+                     struct media_entity *start_entity)
> >>
> >>I have a few concerns / questions:
> >>
> >>- What's the purpose of this patch? Why not to access the sub-device node
> >>  directly?
> >
> >
> >I don't really understand what you are trying to say.<snip>
> >
> 
> Actually I think I understand what you mean now. Yes, the user can
> always access a subdev's control directly from its /dev/v4l-subdevXX.
> I'm only providing this feature as a convenience to the user, so that
> all controls in a pipeline can be accessed from one place, i.e. the
> main capture device node.

No other MC based V4L2 driver does this. You'd be creating device specific
behaviour that differs from what the rest of the drivers do. The purpose of
MC is to provide the user with knowledge of what devices are there, and the
V4L2 sub-devices interface is used to access them in this case.

It does matter where a control is implemented, too. If the pipeline contains
multiple sub-devices that implement the same control, only one of them may
be accessed. The driver calling the function (or even less the function)
would not know which one of them should be ignored.

If you need such functionality, it should be implemented in the user space
instead.

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-03 11:45       ` Sakari Ailus
@ 2017-03-03 22:43         ` Steve Longerbeam
  2017-03-04 10:56           ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-03 22:43 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/03/2017 03:45 AM, Sakari Ailus wrote:
> On Thu, Mar 02, 2017 at 03:07:21PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/02/2017 07:53 AM, Sakari Ailus wrote:
>>> Hi Steve,
>>>
>>> On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
>>>> Add a new FRAME_TIMEOUT event to signal that a video capture or
>>>> output device has timed out waiting for reception or transmit
>>>> completion of a video frame.
>>>>
>>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>>> ---
>>>> Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
>>>> Documentation/media/videodev2.h.rst.exceptions  | 1 +
>>>> include/uapi/linux/videodev2.h                  | 1 +
>>>> 3 files changed, 7 insertions(+)
>>>>
>>>> diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>>> index 8d663a7..dd77d9b 100644
>>>> --- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>>> +++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>>> @@ -197,6 +197,11 @@ call.
>>>> 	the regions changes. This event has a struct
>>>> 	:c:type:`v4l2_event_motion_det`
>>>> 	associated with it.
>>>> +    * - ``V4L2_EVENT_FRAME_TIMEOUT``
>>>> +      - 7
>>>> +      - This event is triggered when the video capture or output device
>>>> +	has timed out waiting for the reception or transmit completion of
>>>> +	a frame of video.
>>>
>>> As you're adding a new interface, I suppose you have an implementation
>>> around. How do you determine what that timeout should be?
>>
>> The imx-media driver sets the timeout to 1 second, or 30 frame
>> periods at 30 fps.
>
> The frame rate is not necessarily constant during streaming. It may well
> change as a result of lighting conditions.

I think you mean that would be a _temporary_ change in frame rate, but
yes I agree the data rate can temporarily fluctuate. Although I doubt
lighting conditions would cause a sensor to pause data transmission
for a full second.


> I wouldn't add an event for this:
> this is unreliable and 30 times the frame period is an arbitrary value
> anyway. No other drivers do this either.

If no other drivers do this I don't mind removing it. It is really meant
to deal with the ADV718x CVBS decoder, which often simply stops sending
data on the BT.656 bus if there is an interruption in the input analog
signal. But I agree that user space could detect this timeout instead.
Unless I hear from someone else that they would like to keep this
feature I'll remove it in version 5.
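
(A user-space sketch of that alternative: treat "no frame within N
seconds" as the timeout by select()ing on the capture fd before DQBUF;
buffer setup and streaming are omitted and the 2 s value is only an
example.)

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/select.h>
#include <linux/videodev2.h>

/* returns 0 on success, 1 on timeout, -1 on error */
static int dqbuf_with_timeout(int fd, struct v4l2_buffer *buf, int timeout_s)
{
	fd_set fds;
	struct timeval tv = { .tv_sec = timeout_s };
	int ret;

	FD_ZERO(&fds);
	FD_SET(fd, &fds);
	ret = select(fd + 1, &fds, NULL, NULL, &tv);
	if (ret == 0) {
		fprintf(stderr, "no frame within %d s\n", timeout_s);
		return 1;
	}
	if (ret < 0)
		return -1;

	return ioctl(fd, VIDIOC_DQBUF, buf) ? -1 : 0;
}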

Steve


>
> The user space is generally in control of the frame period (or on some
> devices it could be the sensor, too, but *not* the CSI-2 receiver driver),
> so detecting the condition of not receiving any frames is more reliably done
> in the user space --- if needed.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-03 19:17         ` Sakari Ailus
@ 2017-03-03 22:47           ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-03 22:47 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/03/2017 11:17 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Thu, Mar 02, 2017 at 06:12:43PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
>>>
>>>
>>> On 03/02/2017 08:02 AM, Sakari Ailus wrote:
>>>> Hi Steve,
>>>>
>>>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>>>> all subdev entities in a pipeline to a given video device.
>>>>>
>>>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>>>> ---
>>>>> drivers/media/v4l2-core/v4l2-mc.c | 48
>>>>> +++++++++++++++++++++++++++++++++++++++
>>>>> include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>>>>> 2 files changed, 73 insertions(+)
>>>>>
>>>>> diff --git a/drivers/media/v4l2-core/v4l2-mc.c
>>>>> b/drivers/media/v4l2-core/v4l2-mc.c
>>>>> index 303980b..09d4d97 100644
>>>>> --- a/drivers/media/v4l2-core/v4l2-mc.c
>>>>> +++ b/drivers/media/v4l2-core/v4l2-mc.c
>>>>> @@ -22,6 +22,7 @@
>>>>> #include <linux/usb.h>
>>>>> #include <media/media-device.h>
>>>>> #include <media/media-entity.h>
>>>>> +#include <media/v4l2-ctrls.h>
>>>>> #include <media/v4l2-fh.h>
>>>>> #include <media/v4l2-mc.h>
>>>>> #include <media/v4l2-subdev.h>
>>>>> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct
>>>>> vb2_queue *q)
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>>>>>
>>>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>>>> +                     struct media_entity *start_entity)
>>>>
>>>> I have a few concerns / questions:
>>>>
>>>> - What's the purpose of this patch? Why not to access the sub-device node
>>>>  directly?
>>>
>>>
>>> I don't really understand what you are trying to say.<snip>
>>>
>>
>> Actually I think I understand what you mean now. Yes, the user can
>> always access a subdev's control directly from its /dev/v4l-subdevXX.
>> I'm only providing this feature as a convenience to the user, so that
>> all controls in a pipeline can be accessed from one place, i.e. the
>> main capture device node.
>
> No other MC based V4L2 driver does this. You'd be creating device specific
> behaviour that differs from what the rest of the drivers do. The purpose of
> MC is to provide the user with knowledge of what devices are there, and the
> V4L2 sub-devices interface is used to access them in this case.

Well, again, I don't mind removing this. As I said it is only a
convenience (although quite a nice one in my opinion). I'd like
to hear from others whether this is worth keeping though.


>
> It does matter where a control is implemented, too. If the pipeline contains
> multiple sub-devices that implement the same control, only one of them may
> be accessed. The driver calling the function (or even less the function)
> would not know which one of them should be ignored.

Yes, the pipeline should not have any duplicate controls. In imx-media,
none of the pipelines that can be configured have duplicate controls.

Steve

>
> If you need such functionality, it should be implemented in the user space
> instead.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-02 16:02   ` Sakari Ailus
  2017-03-02 23:48     ` Steve Longerbeam
@ 2017-03-03 23:06     ` Russell King - ARM Linux
  2017-03-04  0:36       ` Steve Longerbeam
  2017-03-04 13:13       ` Sakari Ailus
  1 sibling, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-03 23:06 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
> Hi Steve,
> 
> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> > v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> > all subdev entities in a pipeline to a given video device.
> > 
> > Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> > ---
> >  drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
> >  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
> >  2 files changed, 73 insertions(+)
> > 
> > diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
> > index 303980b..09d4d97 100644
> > --- a/drivers/media/v4l2-core/v4l2-mc.c
> > +++ b/drivers/media/v4l2-core/v4l2-mc.c
> > @@ -22,6 +22,7 @@
> >  #include <linux/usb.h>
> >  #include <media/media-device.h>
> >  #include <media/media-entity.h>
> > +#include <media/v4l2-ctrls.h>
> >  #include <media/v4l2-fh.h>
> >  #include <media/v4l2-mc.h>
> >  #include <media/v4l2-subdev.h>
> > @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
> >  }
> >  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
> >  
> > +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> > +				     struct media_entity *start_entity)
> 
> I have a few concerns / questions:
> 
> - What's the purpose of this patch? Why not to access the sub-device node
>   directly?

What tools are in existence _today_ to provide access to these controls
via the sub-device nodes?

v4l-tools doesn't last time I looked - in fact, the only tool in v4l-tools
which is capable of accessing the subdevices is media-ctl, and that only
provides functionality for configuring the pipeline.

So, pointing people at vapourware userspace is really quite ridiculous.

The established way to control video capture is through the main video
capture device, not through the sub-devices.  Yes, the controls are
exposed through sub-devices too, but that does not mean that is the
correct way to access them.

The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
even disagrees with your statements.  That talks about control
inheritance from sub-devices to the main video device, and the core
v4l2 code provides _automatic_ support for this - see
v4l2_device_register_subdev():

        /* This just returns 0 if either of the two args is NULL */
        err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);

which merges the subdev's controls into the main device's control
handler.
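
(For completeness, a minimal sketch of the wiring that makes this kick
in; everything except the core calls is a made-up name, and error
handling is dropped:)

  /* Sketch: bridge driver set-up so subdev controls are inherited. */
  v4l2_ctrl_handler_init(&prv->ctrl_handler, 8);
  prv->v4l2_dev.ctrl_handler = &prv->ctrl_handler;
  v4l2_device_register(dev, &prv->v4l2_dev);

  /* Subdev controls get merged into prv->ctrl_handler here. */
  v4l2_device_register_subdev(&prv->v4l2_dev, sensor_sd);

  /* The video node then exposes the merged handler. */
  prv->vdev.v4l2_dev = &prv->v4l2_dev;
  prv->vdev.ctrl_handler = &prv->ctrl_handler;
  video_register_device(&prv->vdev, VFL_TYPE_GRABBER, -1);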

So, (a) I don't think Steve needs to add this code, and (b) I think
your statements about not inheriting controls go against the
documentation and API compatibility with _existing_ applications,
and ultimately hurts the user experience, since there's nothing
existing today to support what you're suggesting in userspace.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-03 23:06     ` Russell King - ARM Linux
@ 2017-03-04  0:36       ` Steve Longerbeam
  2017-03-04 13:13       ` Sakari Ailus
  1 sibling, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-04  0:36 UTC (permalink / raw)
  To: Russell King - ARM Linux, Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	hverkuil, nick, markus.heiser, p.zabel, laurent.pinchart+renesas,
	bparrot, geert, arnd, sudipm.mukherjee, minghsiu.tsai,
	tiffany.lin, jean-christophe.trotin, horms+renesas,
	niklas.soderlund+renesas, robert.jarzmik, songjun.wu,
	andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/03/2017 03:06 PM, Russell King - ARM Linux wrote:
> On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>> all subdev entities in a pipeline to a given video device.
>>>
>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>> ---
>>>  drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
>>>  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>>>  2 files changed, 73 insertions(+)
>>>
>>> diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
>>> index 303980b..09d4d97 100644
>>> --- a/drivers/media/v4l2-core/v4l2-mc.c
>>> +++ b/drivers/media/v4l2-core/v4l2-mc.c
>>> @@ -22,6 +22,7 @@
>>>  #include <linux/usb.h>
>>>  #include <media/media-device.h>
>>>  #include <media/media-entity.h>
>>> +#include <media/v4l2-ctrls.h>
>>>  #include <media/v4l2-fh.h>
>>>  #include <media/v4l2-mc.h>
>>>  #include <media/v4l2-subdev.h>
>>> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
>>>  }
>>>  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>>>
>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>> +				     struct media_entity *start_entity)
>>
>> I have a few concerns / questions:
>>
>> - What's the purpose of this patch? Why not to access the sub-device node
>>   directly?
>
> What tools are in existence _today_ to provide access to these controls
> via the sub-device nodes?
>
> v4l-tools doesn't last time I looked - in fact, the only tool in v4l-tools
> which is capable of accessing the subdevices is media-ctl, and that only
> provides functionality for configuring the pipeline.
>
> So, pointing people at vapourware userspace is really quite ridiculous.


Hi Russell,

Yes, that's a big reason why I added this capability. The v4l2-ctl
tool won't accept subdev nodes, although Philipp Zabel has a quick hack
to get around this (ignore return code from VIDIOC_QUERYCAP).
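
(The control ioctls themselves do work on the subdev nodes; roughly,
with the node path and the control picked purely for illustration:)

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Sketch: set a control directly on a sub-device node. */
  int set_subdev_gain(void)
  {
          int fd = open("/dev/v4l-subdev0", O_RDWR);
          struct v4l2_ext_control ctrl = { .id = V4L2_CID_GAIN, .value = 16 };
          struct v4l2_ext_controls ctrls = {
                  .ctrl_class = V4L2_CTRL_CLASS_USER,
                  .count = 1,
                  .controls = &ctrl,
          };

          if (fd < 0)
                  return -1;
          /* Works even though the node has no VIDIOC_QUERYCAP. */
          return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
  }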


>
> The established way to control video capture is through the main video
> capture device, not through the sub-devices.  Yes, the controls are
> exposed through sub-devices too, but that does not mean that is the
> correct way to access them.
>
> The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
> even disagrees with your statements.  That talks about control
> inheritance from sub-devices to the main video device, and the core
> v4l2 code provides _automatic_ support for this - see
> v4l2_device_register_subdev():
>
>         /* This just returns 0 if either of the two args is NULL */
>         err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
>
> which merges the subdev's controls into the main device's control
> handler.

Actually v4l2_dev->ctrl_handler is not of much use to me. This will
compose a list of controls from all registered subdevs, i.e. _all
possible controls_.

What v4l2_pipeline_inherit_controls() does is compose a list of
controls that are reachable and available in the currently configured
pipeline.
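
(The core of it is just a media graph walk plus v4l2_ctrl_add_handler;
roughly, as a sketch rather than the patch itself, with locking,
filtering and error handling left out:)

  /* Sketch: add the controls of every subdev reachable from start_entity. */
  struct media_device *mdev = start_entity->graph_obj.mdev;
  struct media_entity_graph graph;
  struct media_entity *entity;

  media_entity_graph_walk_init(&graph, mdev);
  media_entity_graph_walk_start(&graph, start_entity);

  while ((entity = media_entity_graph_walk_next(&graph))) {
          if (!is_media_entity_v4l2_subdev(entity))
                  continue;
          v4l2_ctrl_add_handler(vfd->ctrl_handler,
                                media_entity_to_v4l2_subdev(entity)->ctrl_handler,
                                NULL);
  }

  media_entity_graph_walk_cleanup(&graph);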

Steve

>
> So, (a) I don't think Steve needs to add this code, and (b) I think
> your statements about not inheriting controls go against the
> documentation and API compatibility with _existing_ applications,
> and ultimately hurts the user experience, since there's nothing
> existing today to support what you're suggesting in userspace.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-03 22:43         ` Steve Longerbeam
@ 2017-03-04 10:56           ` Sakari Ailus
  2017-03-05  0:37             ` Steve Longerbeam
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-04 10:56 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Steve,

On Fri, Mar 03, 2017 at 02:43:51PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/03/2017 03:45 AM, Sakari Ailus wrote:
> >On Thu, Mar 02, 2017 at 03:07:21PM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/02/2017 07:53 AM, Sakari Ailus wrote:
> >>>Hi Steve,
> >>>
> >>>On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
> >>>>Add a new FRAME_TIMEOUT event to signal that a video capture or
> >>>>output device has timed out waiting for reception or transmit
> >>>>completion of a video frame.
> >>>>
> >>>>Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> >>>>---
> >>>>Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
> >>>>Documentation/media/videodev2.h.rst.exceptions  | 1 +
> >>>>include/uapi/linux/videodev2.h                  | 1 +
> >>>>3 files changed, 7 insertions(+)
> >>>>
> >>>>diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>>>index 8d663a7..dd77d9b 100644
> >>>>--- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>>>+++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>>>@@ -197,6 +197,11 @@ call.
> >>>>	the regions changes. This event has a struct
> >>>>	:c:type:`v4l2_event_motion_det`
> >>>>	associated with it.
> >>>>+    * - ``V4L2_EVENT_FRAME_TIMEOUT``
> >>>>+      - 7
> >>>>+      - This event is triggered when the video capture or output device
> >>>>+	has timed out waiting for the reception or transmit completion of
> >>>>+	a frame of video.
> >>>
> >>>As you're adding a new interface, I suppose you have an implementation
> >>>around. How do you determine what that timeout should be?
> >>
> >>The imx-media driver sets the timeout to 1 second, or 30 frame
> >>periods at 30 fps.
> >
> >The frame rate is not necessarily constant during streaming. It may well
> >change as a result of lighting conditions.
> 
> I think you mean that would be a _temporary_ change in frame rate, but
> yes I agree the data rate can temporarily fluctuate. Although I doubt
> lighting conditions would cause a sensor to pause data transmission
> for a full 1 second.

Likely not, at least not in typical conditions. The exposure time is still
quite specific to applications: it could be minutes if you take photos e.g.
of the night sky.

What I'm saying here is that any static value is likely not both reasonable
and workable in all potential situations all the time. Still there are cases
(as yours below) that may happen in relatively common cases on some hardware
(more common than taking long exposure photos of the night sky with the said
hardware :)).

> 
> 
> >I wouldn't add an event for this:
> >this is unreliable and 30 times the frame period is an arbitrary value
> >anyway. No other drivers do this either.
> 
> If no other drivers do this I don't mind removing it. It is really meant
> to deal with the ADV718x CVBS decoder, which often simply stops sending
> data on the BT.656 bus if there is an interruption in the input analog
> signal. But I agree that user space could detect this timeout instead.
> Unless I hear from someone else that they would like to keep this
> feature I'll remove it in version 5.

That's a bit of a special situation --- still there are alike conditions on
existing hardware. You should return the buffers to the user with the ERROR
flag set --- or return -EIO from VIDIOC_DQBUF, if the condition will
persist:

<URL:https://www.linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/vidioc-qbuf.html>

Do you already obtain the frame rate from the image source (e.g. tuner,
sensor, decoder) and multiply the frame time by some number in the current
implementation? Not all sub-device drivers may implement g_frame_interval()
so I'd disable the timeout in that case.
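
(For illustration, deriving the timeout from the source could look
roughly like this; src_sd and N_FRAMES are made-up names, and a zero
result would mean the timeout stays disabled:)

  /* Sketch: scale the EOF timeout by the source's frame interval. */
  struct v4l2_subdev_frame_interval fi = { 0 };
  unsigned long timeout_ms = 0;

  if (!v4l2_subdev_call(src_sd, video, g_frame_interval, &fi) &&
      fi.interval.denominator)
          timeout_ms = N_FRAMES *
                  DIV_ROUND_UP(1000 * fi.interval.numerator,
                               fi.interval.denominator);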

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-03 23:06     ` Russell King - ARM Linux
  2017-03-04  0:36       ` Steve Longerbeam
@ 2017-03-04 13:13       ` Sakari Ailus
  2017-03-10 12:54         ` Hans Verkuil
  1 sibling, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-04 13:13 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Russell,

On Fri, Mar 03, 2017 at 11:06:45PM +0000, Russell King - ARM Linux wrote:
> On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
> > Hi Steve,
> > 
> > On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> > > v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> > > all subdev entities in a pipeline to a given video device.
> > > 
> > > Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> > > ---
> > >  drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
> > >  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
> > >  2 files changed, 73 insertions(+)
> > > 
> > > diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
> > > index 303980b..09d4d97 100644
> > > --- a/drivers/media/v4l2-core/v4l2-mc.c
> > > +++ b/drivers/media/v4l2-core/v4l2-mc.c
> > > @@ -22,6 +22,7 @@
> > >  #include <linux/usb.h>
> > >  #include <media/media-device.h>
> > >  #include <media/media-entity.h>
> > > +#include <media/v4l2-ctrls.h>
> > >  #include <media/v4l2-fh.h>
> > >  #include <media/v4l2-mc.h>
> > >  #include <media/v4l2-subdev.h>
> > > @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
> > >  }
> > >  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
> > >  
> > > +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> > > +				     struct media_entity *start_entity)
> > 
> > I have a few concerns / questions:
> > 
> > - What's the purpose of this patch? Why not to access the sub-device node
> >   directly?
> 
> What tools are in existence _today_ to provide access to these controls
> via the sub-device nodes?

yavta, for instance:

<URL:http://git.ideasonboard.org/yavta.git>

VIDIOC_QUERYCAP isn't supported on sub-devices and v4l2-ctl appears to be
checking for that. That check should be removed (with possible other
implications taken into account).

> 
> v4l-tools doesn't last time I looked - in fact, the only tool in v4l-tools
> which is capable of accessing the subdevices is media-ctl, and that only
> provides functionality for configuring the pipeline.
> 
> So, pointing people at vapourware userspace is really quite ridiculous.

Do bear in mind that there are other programs that can make use of these
interfaces. It's not just the test programs, or a test program you attempted
to use.

> 
> The established way to control video capture is through the main video
> capture device, not through the sub-devices.  Yes, the controls are
> exposed through sub-devices too, but that does not mean that is the
> correct way to access them.

It is. That's the very purpose of the sub-devices: to provide access to the
hardware independently of how the links are configured.

> 
> The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
> even disagrees with your statements.  That talks about control
> inheritance from sub-devices to the main video device, and the core
> v4l2 code provides _automatic_ support for this - see
> v4l2_device_register_subdev():
> 
>         /* This just returns 0 if either of the two args is NULL */
>         err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
> 
> which merges the subdev's controls into the main device's control
> handler.

That's done on different kind of devices: those that provide plain V4L2 API
to control the entire device. V4L2 sub-device interface is used *in kernel*
as an interface to control sub-devices that do not need to be exposed to the
user space.

Devices that have complex pipeline that do essentially require using the
Media controller interface to configure them are out of that scope.

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-04 10:56           ` Sakari Ailus
@ 2017-03-05  0:37             ` Steve Longerbeam
  2017-03-05 21:31               ` Sakari Ailus
  2017-03-05 22:41               ` Russell King - ARM Linux
  0 siblings, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-05  0:37 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/04/2017 02:56 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Fri, Mar 03, 2017 at 02:43:51PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/03/2017 03:45 AM, Sakari Ailus wrote:
>>> On Thu, Mar 02, 2017 at 03:07:21PM -0800, Steve Longerbeam wrote:
>>>>
>>>>
>>>> On 03/02/2017 07:53 AM, Sakari Ailus wrote:
>>>>> Hi Steve,
>>>>>
>>>>> On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
>>>>>> Add a new FRAME_TIMEOUT event to signal that a video capture or
>>>>>> output device has timed out waiting for reception or transmit
>>>>>> completion of a video frame.
>>>>>>
>>>>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>>>>> ---
>>>>>> Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
>>>>>> Documentation/media/videodev2.h.rst.exceptions  | 1 +
>>>>>> include/uapi/linux/videodev2.h                  | 1 +
>>>>>> 3 files changed, 7 insertions(+)
>>>>>>
>>>>>> diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>>>>> index 8d663a7..dd77d9b 100644
>>>>>> --- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>>>>> +++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
>>>>>> @@ -197,6 +197,11 @@ call.
>>>>>> 	the regions changes. This event has a struct
>>>>>> 	:c:type:`v4l2_event_motion_det`
>>>>>> 	associated with it.
>>>>>> +    * - ``V4L2_EVENT_FRAME_TIMEOUT``
>>>>>> +      - 7
>>>>>> +      - This event is triggered when the video capture or output device
>>>>>> +	has timed out waiting for the reception or transmit completion of
>>>>>> +	a frame of video.
>>>>>
>>>>> As you're adding a new interface, I suppose you have an implementation
>>>>> around. How do you determine what that timeout should be?
>>>>
>>>> The imx-media driver sets the timeout to 1 second, or 30 frame
>>>> periods at 30 fps.
>>>
>>> The frame rate is not necessarily constant during streaming. It may well
>>> change as a result of lighting conditions.
>>
>> I think you mean that would be a _temporary_ change in frame rate, but
>> yes I agree the data rate can temporarily fluctuate. Although I doubt
>> lighting conditions would cause a sensor to pause data transmission
>> for a full 1 second.
>
> Likely not, at least not in typical conditions. The exposure time is still
> quite specific to applications: it could be minutes if you take photos e.g.
> of the night sky.
>
> What I'm saying here is that any static value is likely not both reasonable
> and workable in all potential situations all the time. Still there are cases
> (as yours below) that may happen in relatively common cases on some hardware
> (more common than taking long exposure photos of the night sky with the said
> hardware :)).

I doubt night photography will ever be a use-case for i.MX. The most
common use-case for this driver will be in automotive applications
such as rear-view or 360 degree view cameras.


>
>>
>>
>>> I wouldn't add an event for this:
>>> this is unreliable and 30 times the frame period is an arbitrary value
>>> anyway. No other drivers do this either.
>>
>> If no other drivers do this I don't mind removing it. It is really meant
>> to deal with the ADV718x CVBS decoder, which often simply stops sending
>> data on the BT.656 bus if there is an interruption in the input analog
>> signal. But I agree that user space could detect this timeout instead.
>> Unless I hear from someone else that they would like to keep this
>> feature I'll remove it in version 5.
>
> That's a bit of a special situation --- still there are alike conditions on
> existing hardware. You should return the buffers to the user with the ERROR
> flag set --- or return -EIO from VIDIOC_DQBUF, if the condition will
> persist:

On i.MX an EOF timeout is not recoverable without a stream restart, so
I decided to call vb2_queue_error() when the timeout occurs (instead
of sending an event). The user will then get -EIO when it attempts to
queue or dequeue further buffers.
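
(Roughly the shape of it, as a sketch only and not the actual imx-media
code; all of the names here are made up:)

  /* Sketch: EOF timeout handler marks the queue in error. */
  static void eof_timeout_handler(unsigned long data)
  {
          struct capture_priv *priv = (struct capture_priv *)data;

          dev_err(priv->dev, "EOF timeout\n");
          /* Further VIDIOC_QBUF/VIDIOC_DQBUF calls now return -EIO. */
          vb2_queue_error(&priv->q);
  }

  /* re-armed on every frame completion, e.g.: */
  mod_timer(&priv->eof_timeout_timer, jiffies + msecs_to_jiffies(1000));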


>
> <URL:https://www.linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/vidioc-qbuf.html>
>
> Do you already obtain the frame rate from the image source (e.g. tuner,
> sensor, decoder) and multiply the frame time by some number in the current
> implementation?

No the timeout is a constant value, regardless of the source frame
rate. Should the timeout be based on a constant time, or based on a
constant # of frames? I really don't think it matters much, what matters
is that it be long enough to be reasonably sure no data is forthcoming,
for most use-cases.

Steve



> Not all sub-device drivers may implement g_frame_interval()
> so I'd disable the timeout in that case.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-05  0:37             ` Steve Longerbeam
@ 2017-03-05 21:31               ` Sakari Ailus
  2017-03-05 22:41               ` Russell King - ARM Linux
  1 sibling, 0 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-03-05 21:31 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, linux,
	mchehab, hverkuil, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Sat, Mar 04, 2017 at 04:37:43PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/04/2017 02:56 AM, Sakari Ailus wrote:
> >Hi Steve,
> >
> >On Fri, Mar 03, 2017 at 02:43:51PM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/03/2017 03:45 AM, Sakari Ailus wrote:
> >>>On Thu, Mar 02, 2017 at 03:07:21PM -0800, Steve Longerbeam wrote:
> >>>>
> >>>>
> >>>>On 03/02/2017 07:53 AM, Sakari Ailus wrote:
> >>>>>Hi Steve,
> >>>>>
> >>>>>On Wed, Feb 15, 2017 at 06:19:15PM -0800, Steve Longerbeam wrote:
> >>>>>>Add a new FRAME_TIMEOUT event to signal that a video capture or
> >>>>>>output device has timed out waiting for reception or transmit
> >>>>>>completion of a video frame.
> >>>>>>
> >>>>>>Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
> >>>>>>---
> >>>>>>Documentation/media/uapi/v4l/vidioc-dqevent.rst | 5 +++++
> >>>>>>Documentation/media/videodev2.h.rst.exceptions  | 1 +
> >>>>>>include/uapi/linux/videodev2.h                  | 1 +
> >>>>>>3 files changed, 7 insertions(+)
> >>>>>>
> >>>>>>diff --git a/Documentation/media/uapi/v4l/vidioc-dqevent.rst b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>>>>>index 8d663a7..dd77d9b 100644
> >>>>>>--- a/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>>>>>+++ b/Documentation/media/uapi/v4l/vidioc-dqevent.rst
> >>>>>>@@ -197,6 +197,11 @@ call.
> >>>>>>	the regions changes. This event has a struct
> >>>>>>	:c:type:`v4l2_event_motion_det`
> >>>>>>	associated with it.
> >>>>>>+    * - ``V4L2_EVENT_FRAME_TIMEOUT``
> >>>>>>+      - 7
> >>>>>>+      - This event is triggered when the video capture or output device
> >>>>>>+	has timed out waiting for the reception or transmit completion of
> >>>>>>+	a frame of video.
> >>>>>
> >>>>>As you're adding a new interface, I suppose you have an implementation
> >>>>>around. How do you determine what that timeout should be?
> >>>>
> >>>>The imx-media driver sets the timeout to 1 second, or 30 frame
> >>>>periods at 30 fps.
> >>>
> >>>The frame rate is not necessarily constant during streaming. It may well
> >>>change as a result of lighting conditions.
> >>
> >>I think you mean that would be a _temporary_ change in frame rate, but
> >>yes I agree the data rate can temporarily fluctuate. Although I doubt
> >>lighting conditions would cause a sensor to pause data transmission
> >>for a full 1 second.
> >
> >Likely not, at least not in typical conditions. The exposure time is still
> >quite specific to applications: it could be minutes if you take photos e.g.
> >of the night sky.
> >
> >What I'm saying here is that any static value is likely not both reasonable
> >and workable in all potential situations all the time. Still there are cases
> >(as yours below) that may happen in relatively common cases on some hardware
> >(more common than taking long exposure photos of the night sky with the said
> >hardware :)).
> 
> I doubt night photography will ever be a use-case for i.MX. The most
> common use-case for this driver will be in automotive applications
> such as rear-view or 360 degree view cameras.

Ack.

> 
> 
> >
> >>
> >>
> >>>I wouldn't add an event for this:
> >>>this is unreliable and 30 times the frame period is an arbitrary value
> >>>anyway. No other drivers do this either.
> >>
> >>If no other drivers do this I don't mind removing it. It is really meant
> >>to deal with the ADV718x CVBS decoder, which often simply stops sending
> >>data on the BT.656 bus if there is an interruption in the input analog
> >>signal. But I agree that user space could detect this timeout instead.
> >>Unless I hear from someone else that they would like to keep this
> >>feature I'll remove it in version 5.
> >
> >That's a bit of a special situation --- still there are alike conditions on
> >existing hardware. You should return the buffers to the user with the ERROR
> >flag set --- or return -EIO from VIDIOC_DQBUF, if the condition will
> >persist:
> 
> On i.MX an EOF timeout is not recoverable without a stream restart, so
> I decided to call vb2_queue_error() when the timeout occurs (instead
> of sending an event). The user will then get -EIO when it attempts to
> queue or dequeue further buffers.

I believe that's the correct thing to do.

> 
> 
> >
> ><URL:https://www.linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/vidioc-qbuf.html>
> >
> >Do you already obtain the frame rate from the image source (e.g. tuner,
> >sensor, decoder) and multiply the frame time by some number in the current
> >implementation?
> 
> No the timeout is a constant value, regardless of the source frame
> rate. Should the timeout be based on a constant time, or based on a
> constant # of frames? I really don't think it matters much, what matters
> is that it be long enough to be reasonably sure no data is forthcoming,
> for most use-cases.

That should be fine. If there is a use case that requires something else,
then the implementation can be changed: it's not visible to the user space
anyway.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-05  0:37             ` Steve Longerbeam
  2017-03-05 21:31               ` Sakari Ailus
@ 2017-03-05 22:41               ` Russell King - ARM Linux
  2017-03-10  2:38                 ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-05 22:41 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Sakari Ailus, mark.rutland, andrew-ct.chen, minghsiu.tsai,
	sakari.ailus, nick, songjun.wu, hverkuil, Steve Longerbeam,
	pavel, robert.jarzmik, devel, markus.heiser,
	laurent.pinchart+renesas, shuah, geert, linux-media, devicetree,
	kernel, arnd, mchehab, bparrot, robh+dt, horms+renesas,
	tiffany.lin, linux-arm-kernel, niklas.soderlund+renesas, gregkh,
	linux-kernel, jean-christophe.trotin, p.zabel, fabio.estevam,
	shawnguo, sudipm.mukherjee

On Sat, Mar 04, 2017 at 04:37:43PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/04/2017 02:56 AM, Sakari Ailus wrote:
> >That's a bit of a special situation --- still there are alike conditions on
> >existing hardware. You should return the buffers to the user with the ERROR
> >flag set --- or return -EIO from VIDIOC_DQBUF, if the condition will
> >persist:
> 
> On i.MX an EOF timeout is not recoverable without a stream restart, so
> I decided to call vb2_queue_error() when the timeout occurs (instead
> of sending an event). The user will then get -EIO when it attempts to
> queue or dequeue further buffers.

I'm not sure that statement is entirely accurate.  With the IMX219
camera, I _could_ (with previous iterations of the iMX capture code)
stop it streaming, wait a while, and restart it, and everything
continues to work.

Are you sure that the problem you have here is caused by the iMX6
rather than the ADV718x CVBS decoder (your initial description said
it was the decoder.)

If it _is_ the decoder that's going wrong, that doesn't justify
crippling the rest of the driver for one instance of broken hardware
that _might_ be attached to it.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-05 22:41               ` Russell King - ARM Linux
@ 2017-03-10  2:38                 ` Steve Longerbeam
  2017-03-10  9:33                   ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-10  2:38 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Sakari Ailus, mark.rutland, andrew-ct.chen, minghsiu.tsai,
	sakari.ailus, nick, songjun.wu, hverkuil, Steve Longerbeam,
	pavel, robert.jarzmik, devel, markus.heiser,
	laurent.pinchart+renesas, shuah, geert, linux-media, devicetree,
	kernel, arnd, mchehab, bparrot, robh+dt, horms+renesas,
	tiffany.lin, linux-arm-kernel, niklas.soderlund+renesas, gregkh,
	linux-kernel, jean-christophe.trotin, p.zabel, fabio.estevam,
	shawnguo, sudipm.mukherjee



On 03/05/2017 02:41 PM, Russell King - ARM Linux wrote:
> On Sat, Mar 04, 2017 at 04:37:43PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/04/2017 02:56 AM, Sakari Ailus wrote:
>>> That's a bit of a special situation --- still there are alike conditions on
>>> existing hardware. You should return the buffers to the user with the ERROR
>>> flag set --- or return -EIO from VIDIOC_DQBUF, if the condition will
>>> persist:
>>
>> On i.MX an EOF timeout is not recoverable without a stream restart, so
>> I decided to call vb2_queue_error() when the timeout occurs (instead
>> of sending an event). The user will then get -EIO when it attempts to
>> queue or dequeue further buffers.
>
> I'm not sure that statement is entirely accurate.  With the IMX219
> camera, I _could_ (with previous iterations of the iMX capture code)
> stop it streaming, wait a while, and restart it, and everything
> continues to work.

Hi Russell, did you see the "EOF timeout" kernel error message when you
stopped the IMX219 from streaming? Only an "EOF timeout" message
indicates the unrecoverable case.


>
> Are you sure that the problem you have here is caused by the iMX6
> rather than the ADV718x CVBS decoder (your initial description said
> it was the decoder.)

Actually yes I did say it was the adv718x, but in fact I doubt the
adv718x has abruptly stopped data transmission on the bt.656 bus.
I actually suspect the IPU, specifically the CSI. In our experience
the CSI is rather sensitive to glitches and/or truncated frames on the
bt.656 bus and can easily lose vertical sync, and/or lock up.

Steve

>
> If it _is_ the decoder that's going wrong, that doesn't justify
> crippling the rest of the driver for one instance of broken hardware
> that _might_ be attached to it.
>

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 13/36] [media] v4l2: add a frame timeout event
  2017-03-10  2:38                 ` Steve Longerbeam
@ 2017-03-10  9:33                   ` Russell King - ARM Linux
  0 siblings, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-10  9:33 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Sakari Ailus, mark.rutland, andrew-ct.chen, minghsiu.tsai,
	sakari.ailus, nick, songjun.wu, hverkuil, Steve Longerbeam,
	pavel, robert.jarzmik, devel, markus.heiser,
	laurent.pinchart+renesas, shuah, geert, linux-media, devicetree,
	kernel, arnd, mchehab, bparrot, robh+dt, horms+renesas,
	tiffany.lin, linux-arm-kernel, niklas.soderlund+renesas, gregkh,
	linux-kernel, jean-christophe.trotin, p.zabel, fabio.estevam,
	shawnguo, sudipm.mukherjee

On Thu, Mar 09, 2017 at 06:38:18PM -0800, Steve Longerbeam wrote:
> On 03/05/2017 02:41 PM, Russell King - ARM Linux wrote:
> >I'm not sure that statement is entirely accurate.  With the IMX219
> >camera, I _could_ (with previous iterations of the iMX capture code)
> >stop it streaming, wait a while, and restart it, and everything
> >continues to work.
> 
> Hi Russell, did you see the "EOF timeout" kernel error message when you
> stopped the IMX219 from streaming? Only an "EOF timeout" message
> indicates the unrecoverable case.

I really couldn't tell you anymore - I can't go back and test at all,
because:

(a) your v4 patch set never worked for me
(b) I've now moved forward to v4.11-rc1, which conflicts with your v4
    and older patch sets.

In any case, I'm in complete disagreement with many of the points that
Sakari has been bringing up, and I'm finding the direction that things
are progressing to be abhorrent.

I've discussed (eg) the application interface with Mauro, and
unsurprisingly to me, Mauro immediately brought up control inheritance
to the main v4l2 device, contradicting what Sakari has been saying.
The subdev API is supposed to be there to allow for finer control; it's
not a "one or the other" thing.  The controls are still supposed to be
exposed through the main v4l2 device.

Since the v4l2 stuff is becoming abhorrent, and I've also come to
realise that I'm going to have to write an entirely new userspace
application to capture, debayer and encode efficiently, I'm finding
that I've little motivation to take much of a further interest in
iMX6 capture, or indeed continue my reverse engineering efforts of
the IMX219 sensor.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-04 13:13       ` Sakari Ailus
@ 2017-03-10 12:54         ` Hans Verkuil
  2017-03-10 13:07           ` Russell King - ARM Linux
  2017-03-10 15:09           ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 228+ messages in thread
From: Hans Verkuil @ 2017-03-10 12:54 UTC (permalink / raw)
  To: Sakari Ailus, Russell King - ARM Linux
  Cc: Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On 04/03/17 14:13, Sakari Ailus wrote:
> Hi Russell,
> 
> On Fri, Mar 03, 2017 at 11:06:45PM +0000, Russell King - ARM Linux wrote:
>> On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
>>> Hi Steve,
>>>
>>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>>> all subdev entities in a pipeline to a given video device.
>>>>
>>>> Signed-off-by: Steve Longerbeam <steve_longerbeam@mentor.com>
>>>> ---
>>>>  drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
>>>>  include/media/v4l2-mc.h           | 25 ++++++++++++++++++++
>>>>  2 files changed, 73 insertions(+)
>>>>
>>>> diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
>>>> index 303980b..09d4d97 100644
>>>> --- a/drivers/media/v4l2-core/v4l2-mc.c
>>>> +++ b/drivers/media/v4l2-core/v4l2-mc.c
>>>> @@ -22,6 +22,7 @@
>>>>  #include <linux/usb.h>
>>>>  #include <media/media-device.h>
>>>>  #include <media/media-entity.h>
>>>> +#include <media/v4l2-ctrls.h>
>>>>  #include <media/v4l2-fh.h>
>>>>  #include <media/v4l2-mc.h>
>>>>  #include <media/v4l2-subdev.h>
>>>> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
>>>>  }
>>>>  EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>>>>  
>>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>>> +				     struct media_entity *start_entity)
>>>
>>> I have a few concerns / questions:
>>>
>>> - What's the purpose of this patch? Why not to access the sub-device node
>>>   directly?
>>
>> What tools are in existence _today_ to provide access to these controls
>> via the sub-device nodes?
> 
> yavta, for instance:
> 
> <URL:http://git.ideasonboard.org/yavta.git>
> 
> VIDIOC_QUERYCAP isn't supported on sub-devices and v4l2-ctl appears to be
> checking for that. That check should be removed (with possible other
> implications taken into account).

No, the subdev API should get a similar QUERYCAP ioctl. There isn't a single
ioctl that is guaranteed to be available for all subdev devices. I've made
proposals for this in the past, and those have all been shot down.

Add that, and I'll add support for subdevs in v4l2-ctl.

> 
>>
>> v4l-tools doesn't last time I looked - in fact, the only tool in v4l-tools
>> which is capable of accessing the subdevices is media-ctl, and that only
>> provides functionality for configuring the pipeline.
>>
>> So, pointing people at vapourware userspace is really quite ridiculous.
> 
> Do bear in mind that there are other programs that can make use of these
> interfaces. It's not just the test programs, or a test program you attempted
> to use.
> 
>>
>> The established way to control video capture is through the main video
>> capture device, not through the sub-devices.  Yes, the controls are
>> exposed through sub-devices too, but that does not mean that is the
>> correct way to access them.
> 
> It is. That's the very purpose of the sub-devices: to provide access to the
> hardware independently of how the links are configured.
> 
>>
>> The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
>> even disagrees with your statements.  That talks about control
>> inheritance from sub-devices to the main video device, and the core
>> v4l2 code provides _automatic_ support for this - see
>> v4l2_device_register_subdev():
>>
>>         /* This just returns 0 if either of the two args is NULL */
>>         err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
>>
>> which merges the subdev's controls into the main device's control
>> handler.
> 
> That's done on different kind of devices: those that provide plain V4L2 API
> to control the entire device. V4L2 sub-device interface is used *in kernel*
> as an interface to control sub-devices that do not need to be exposed to the
> user space.
> 
> Devices that have complex pipeline that do essentially require using the
> Media controller interface to configure them are out of that scope.
> 

Way too much of how the MC devices should be used is in the minds of developers.
There is a major lack of good detailed documentation, utilities, compliance
tests (really needed!) and libv4l plugins.

Russell's comments are spot on and it is a thorn in my side that this still
hasn't been addressed.

I want to see if I can get time from my boss to work on this this summer, but
there is no guarantee.

The main reason this hasn't been a much bigger problem is that most end-users
make custom applications for this hardware. It makes sense: if you need full
control over everything, you make the application yourself; that's the whole point.

But there was always meant to be a layer (libv4l plugin) that could be used to
setup a 'default scenario' that existing applications could use, but that was
never enforced, sadly.

Anyway, regarding this specific patch and for this MC-aware driver: no, you
shouldn't inherit controls from subdevs. It defeats the purpose.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 12:54         ` Hans Verkuil
@ 2017-03-10 13:07           ` Russell King - ARM Linux
  2017-03-10 13:22             ` Hans Verkuil
  2017-03-10 15:26             ` Mauro Carvalho Chehab
  2017-03-10 15:09           ` Mauro Carvalho Chehab
  1 sibling, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-10 13:07 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, Mar 10, 2017 at 01:54:28PM +0100, Hans Verkuil wrote:
> But there was always meant to be a layer (libv4l plugin) that could be
> used to setup a 'default scenario' that existing applications could use,
> but that was never enforced, sadly.

However, there are other painful issues lurking in userspace, particularly
to do with the v4l libraries.

The idea that the v4l libraries should intercept the format negotiation
between the application and kernel is a particularly painful one - the
default gstreamer build detects the v4l libraries, and links against it.
That much is fine.

However, the problem comes when you're trying to use bayer formats. The
v4l libraries "helpfully" (or rather unhelpfully) intercept the format
negotiation, and decide that they'll invoke v4lconvert to convert the
bayer to RGB for you, whether you want them to do that or not.

v4lconvert may not be the most efficient way to convert, or even what
is desired (eg, you may want to receive the raw bayer image.)  However,
since the v4l libraries/v4lconvert gives you no option but to have its
conversion forced into the pipeline, other options (such as using the
gstreamer neon accelerated de-bayer plugin) isn't an option without
rebuilding gstreamer _without_ linking against the v4l libraries.

At that point, saying "this should be done in a libv4l plugin" becomes
a total nonsense, because if you need to avoid libv4l due to its
stupidities, you don't get the benefit of subdevs, and it yet again
_forces_ people down the route of custom applications.

So, I really don't agree with pushing this into a userspace library
plugin - at least not with the current state there.

_At least_ the debayering in the v4l libraries needs to become optional.
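
(For reference, an application that controls its own open path can
already ask libv4l2 to leave the raw formats alone - which of course
does not help an unmodified gstreamer build; a sketch:)

  #include <fcntl.h>
  #include <libv4l2.h>

  /* Sketch: wrap a capture fd for libv4l2 but skip its format
   * conversion (e.g. debayering), keeping the raw bayer formats visible.
   */
  int open_raw(const char *node)
  {
          int fd = open(node, O_RDWR);

          if (fd >= 0)
                  v4l2_fd_open(fd, V4L2_DISABLE_CONVERSION);
          return fd;
  }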

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 13:07           ` Russell King - ARM Linux
@ 2017-03-10 13:22             ` Hans Verkuil
  2017-03-10 14:01               ` Russell King - ARM Linux
  2017-03-10 15:26             ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-10 13:22 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On 10/03/17 14:07, Russell King - ARM Linux wrote:
> On Fri, Mar 10, 2017 at 01:54:28PM +0100, Hans Verkuil wrote:
>> But there was always meant to be a layer (libv4l plugin) that could be
>> used to setup a 'default scenario' that existing applications could use,
>> but that was never enforced, sadly.
> 
> However, there's other painful issues lurking in userspace, particularly
> to do with the v4l libraries.
> 
> The idea that the v4l libraries should intercept the format negotiation
> between the application and kernel is a particularly painful one - the
> default gstreamer build detects the v4l libraries, and links against it.
> That much is fine.
> 
> However, the problem comes when you're trying to use bayer formats. The
> v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> negotiation, and decide that they'll invoke v4lconvert to convert the
> bayer to RGB for you, whether you want them to do that or not.
> 
> v4lconvert may not be the most efficient way to convert, or even what
> is desired (eg, you may want to receive the raw bayer image.)  However,
> since the v4l libraries/v4lconvert gives you no option but to have its
> conversion forced into the pipeline, other options (such as using the
> gstreamer neon accelerated de-bayer plugin) isn't an option without
> rebuilding gstreamer _without_ linking against the v4l libraries.
> 
> At that point, saying "this should be done in a libv4l plugin" becomes
> a total nonsense, because if you need to avoid libv4l due to its
> stupidities, you don't get the benefit of subdevs, and it yet again
> _forces_ people down the route of custom applications.
> 
> So, I really don't agree with pushing this into a userspace library
> plugin - at least not with the current state there.
> 
> _At least_ the debayering in the v4l libraries needs to become optional.
> 

I *thought* that when a plugin is used the format conversion code was disabled.
But I'm not sure.

The whole problem is that we still don't have a decent plugin for an MC driver.
There is one for the exynos4 floating around, but it's still not accepted.

Companies write the driver, but the plugin isn't really needed since their
customers won't use it anyway, as they make their own embedded driver.

And nobody of the media core developers has the time to work on the docs,
utilities and libraries you need to make this all work cleanly and reliably.

As mentioned, I will attempt to try and get some time to work on this
later this year. Fingers crossed.

We also have a virtual MC driver floating around. I've pinged the author to see if
she can fix the last round of review comments and post a new version. Having
a virtual driver makes life much easier when writing docs, utilities, etc.
since you don't need real hardware which can be hard to obtain and run.

Again, I agree completely with you. But we don't have many core developers
who can do something like this, and it's even harder for them to find the time.

Solutions on a postcard...

BTW, Steve: this has nothing to do with your work, it's a problem in our subsystem.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 13:22             ` Hans Verkuil
@ 2017-03-10 14:01               ` Russell King - ARM Linux
  2017-03-10 14:20                 ` Hans Verkuil
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-10 14:01 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Fri, Mar 10, 2017 at 02:22:29PM +0100, Hans Verkuil wrote:
> And nobody of the media core developers has the time to work on the docs,
> utilities and libraries you need to make this all work cleanly and reliably.

Well, talking about docs, and in connection to control inheritance,
this is already documented in at least three separate places:

Documentation/media/uapi/v4l/dev-subdev.rst:

  Controls
  ========
  ...
  Depending on the driver, those controls might also be exposed through
  one (or several) V4L2 device nodes.

Documentation/media/kapi/v4l2-subdev.rst:

  ``VIDIOC_QUERYCTRL``,
  ``VIDIOC_QUERYMENU``,
  ``VIDIOC_G_CTRL``,
  ``VIDIOC_S_CTRL``,
  ``VIDIOC_G_EXT_CTRLS``,
  ``VIDIOC_S_EXT_CTRLS`` and
  ``VIDIOC_TRY_EXT_CTRLS``:
  
          The controls ioctls are identical to the ones defined in V4L2. They
          behave identically, with the only exception that they deal only with
          controls implemented in the sub-device. Depending on the driver, those
          controls can be also be accessed through one (or several) V4L2 device
          nodes.

Then there's Documentation/media/kapi/v4l2-controls.rst, which gives a
step by step approach to the main video device inheriting controls from
its subdevices, and it says:

  Inheriting Controls
  -------------------
  
  When a sub-device is registered with a V4L2 driver by calling
  v4l2_device_register_subdev() and the ctrl_handler fields of both v4l2_subdev
  and v4l2_device are set, then the controls of the subdev will become
  automatically available in the V4L2 driver as well. If the subdev driver
  contains controls that already exist in the V4L2 driver, then those will be
  skipped (so a V4L2 driver can always override a subdev control).
  
  What happens here is that v4l2_device_register_subdev() calls
  v4l2_ctrl_add_handler() adding the controls of the subdev to the controls
  of v4l2_device.

So, either the docs are wrong, or the advice being mentioned in emails
about subdev control inheritance is misleading.  Whatever, the two are
currently inconsistent.

As I've already mentioned, from talking about this with Mauro, it seems
Mauro is in agreement with permitting the control inheritance... I wish
Mauro would comment for himself, as I can't quote our private discussion
on the subject.

Right now, my view is that v4l2 is currently being screwed up by people
with different opinions - there is no unified consensus on how any of
this stuff is supposed to work, everyone is pulling in different
directions.  That needs solving _really_ quickly, so I suggest that
v4l2 people urgently talk to each other and thrash out some of the
issues that Steve's patch set has brought up, and settle on a way
forward, rather than what is seemingly happening today - which is
everyone working in isolation of everyone else with their own bias on
how things should be done.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 14:01               ` Russell King - ARM Linux
@ 2017-03-10 14:20                 ` Hans Verkuil
  2017-03-10 15:53                   ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-10 14:20 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On 10/03/17 15:01, Russell King - ARM Linux wrote:
> On Fri, Mar 10, 2017 at 02:22:29PM +0100, Hans Verkuil wrote:
>> And nobody of the media core developers has the time to work on the docs,
>> utilities and libraries you need to make this all work cleanly and reliably.
> 
> Well, talking about docs, and in connection with control inheritance,
> this is already documented in at least three separate places:
> 
> Documentation/media/uapi/v4l/dev-subdev.rst:
> 
>   Controls
>   ========
>   ...
>   Depending on the driver, those controls might also be exposed through
>   one (or several) V4L2 device nodes.
> 
> Documentation/media/kapi/v4l2-subdev.rst:
> 
>   ``VIDIOC_QUERYCTRL``,
>   ``VIDIOC_QUERYMENU``,
>   ``VIDIOC_G_CTRL``,
>   ``VIDIOC_S_CTRL``,
>   ``VIDIOC_G_EXT_CTRLS``,
>   ``VIDIOC_S_EXT_CTRLS`` and
>   ``VIDIOC_TRY_EXT_CTRLS``:
>   
>           The controls ioctls are identical to the ones defined in V4L2. They
>           behave identically, with the only exception that they deal only with
>           controls implemented in the sub-device. Depending on the driver, those
>           controls can be also be accessed through one (or several) V4L2 device
>           nodes.
> 
> Then there's Documentation/media/kapi/v4l2-controls.rst, which gives a
> step by step approach to the main video device inheriting controls from
> its subdevices, and it says:
> 
>   Inheriting Controls
>   -------------------
>   
>   When a sub-device is registered with a V4L2 driver by calling
>   v4l2_device_register_subdev() and the ctrl_handler fields of both v4l2_subdev
>   and v4l2_device are set, then the controls of the subdev will become
>   automatically available in the V4L2 driver as well. If the subdev driver
>   contains controls that already exist in the V4L2 driver, then those will be
>   skipped (so a V4L2 driver can always override a subdev control).
>   
>   What happens here is that v4l2_device_register_subdev() calls
>   v4l2_ctrl_add_handler() adding the controls of the subdev to the controls
>   of v4l2_device.
> 
> So, either the docs are wrong, or the advice being mentioned in emails
> about subdev control inheritance is misleading.  Whatever, the two are
> currently inconsistent.

These docs were written for non-MC drivers, and for those the documentation
is correct. Unfortunately, this was never updated for MC drivers.

> As I've already mentioned, from talking about this with Mauro, it seems
> Mauro is in agreement with permitting the control inheritance... I wish
> Mauro would comment for himself, as I can't quote our private discussion
> on the subject.

I can't comment either, not having seen his mail and reasoning.

> Right now, my view is that v4l2 is currently being screwed up by people
> with different opinions - there is no unified consensus on how any of
> this stuff is supposed to work, everyone is pulling in different
> directions.  That needs solving _really_ quickly, so I suggest that
> v4l2 people urgently talk to each other and thrash out some of the
> issues that Steve's patch set has brought up, and settle on a way
> forward, rather than what is seemingly happening today - which is
> everyone working in isolation of everyone else with their own bias on
> how things should be done.

The simple fact is that to my knowledge no other MC applications inherit
controls from subdevs. Suddenly doing something different here seems very
wrong to me and needs very good reasons.

But yes, the current situation sucks. Yelling doesn't help though if nobody
has time and there are several other high-prio projects that need our
attention as well.

If you know a good kernel developer who has a few months to spare, please
point him/her in our direction!

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 12:54         ` Hans Verkuil
  2017-03-10 13:07           ` Russell King - ARM Linux
@ 2017-03-10 15:09           ` Mauro Carvalho Chehab
  2017-03-11 11:32             ` Hans Verkuil
  1 sibling, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-10 15:09 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Em Fri, 10 Mar 2017 13:54:28 +0100
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> > Devices that have complex pipeline that do essentially require using the
> > Media controller interface to configure them are out of that scope.
> >   
> 
> Way too much of how the MC devices should be used is in the minds of developers.
> There is a major lack for good detailed documentation, utilities, compliance
> test (really needed!) and libv4l plugins.

Unfortunately, we merged incomplete MC support into the kernel. We knew
about all the problems with MC-based drivers and V4L2 applications by the
time it was developed, and we asked the Nokia developers (Nokia was
sponsoring MC development at that time) to work on a solution to allow
standard V4L2 applications to work with MC-based boards.

Unfortunately, we took the decision to merge MC without that, because
Nokia was giving up on Linux development, and we didn't want to lose the
2 years of discussions and work around it as the Nokia employees were
leaving the company. Also, at that time, there were already some patches
floating around adding backward support via libv4l. Unfortunately, those
patches were never finished.

The net result is that MC was merged with some huge gaps, including
the lack of a proper solution for a generic V4L2 program to work
with V4L2 devices that use the subdev API.

That was not so bad back then, as MC was used only on cell phones
that ran custom-made applications.

The reality has changed: we now have lots of low-cost SoC-based
boards, used for all sorts of purposes. So, we need a quick solution
for it.

In other words, while that would be acceptable to support special apps
on really embedded systems, it is *not OK* for general purpose SoC
hardware[1].

[1] I'm calling "general purpose SoC hardware" those ARM boards,
like the Raspberry Pi, that are shipped to the masses and used by a wide
range of hobbyists and other people who just want to run Linux on
ARM. It is possible to buy such boards for a very cheap price,
so they end up being used not only in special projects, where a
custom-made application could be interesting, but also by a lot of
users who just want to run Linux on a low-cost ARM board, while
continuing to use standard V4L2 apps, like "camorama".

That's perhaps one of the reasons why it took a long time for us to
start receiving drivers upstream for such hardware: it is quite
intimidating and not logical to require developers to implement
two complex APIs (MC, subdev) in their drivers for hardware that
most users won't care about. From the user's perspective,
being able to support generic applications like "camorama" and
"zbar" is all they want.

In summary, I'm pretty sure we need to support standard V4L2 
applications on boards like Raspberry Pi and those low-cost 
SoC-based boards that are shipped to end users.

> Anyway, regarding this specific patch and for this MC-aware driver: no, you
> shouldn't inherit controls from subdevs. It defeats the purpose.

Sorry, but I don't agree with that. The subdev API is an optional API
(and even the MC API can be optional).

I see the rationale for using the MC and subdev APIs on cell phones,
ISV and other embedded hardware, as it allows fine-tuning
the driver's support to provide the required quality for
certain custom-made applications. But on general SoC hardware,
supporting standard V4L2 applications is a necessity.

Ok, perhaps supporting both subdev API and V4L2 API at the same
time doesn't make much sense. We could disable one in favor of the
other, either at compilation time or at runtime.

This way, if the subdev API is disabled, the driver will be
functional for V4L2-based applications that support neither
the MC nor the subdev APIs.

> As mentioned, I will attempt to try and get some time to work on this
> later this year. Fingers crossed.

That will be good, and, once we have a solution that works, we can
work on cleaning up the code, but, until then, drivers for ARM-based boards
sold to end consumers should work out of the box with standard V4L2 apps.

While we don't have that, I'm OK to merge patches adding such support
upstream.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 13:07           ` Russell King - ARM Linux
  2017-03-10 13:22             ` Hans Verkuil
@ 2017-03-10 15:26             ` Mauro Carvalho Chehab
  2017-03-10 15:57               ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-10 15:26 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Hans Verkuil, Sakari Ailus, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Russell,

Em Fri, 10 Mar 2017 13:07:33 +0000
Russell King - ARM Linux <linux@armlinux.org.uk> escreveu:

> The idea that the v4l libraries should intercept the format negotiation
> between the application and kernel is a particularly painful one - the
> default gstreamer build detects the v4l libraries, and links against it.
> That much is fine.
> 
> However, the problem comes when you're trying to use bayer formats. The
> v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> negotiation, and decide that they'll invoke v4lconvert to convert the
> bayer to RGB for you, whether you want them to do that or not.
> 
> v4lconvert may not be the most efficient way to convert, or even what
> is desired (eg, you may want to receive the raw bayer image.)  However,
> since the v4l libraries/v4lconvert gives you no option but to have its
> conversion forced into the pipeline, other options (such as using the
> gstreamer neon accelerated de-bayer plugin) isn't an option 

That's not true. There is a special flag, used only for libv4l2
emulated formats, that indicates when a video format is handled
via v4lconvert:

    * - ``V4L2_FMT_FLAG_EMULATED``
      - 0x0002
      - This format is not native to the device but emulated through
	software (usually libv4l2), where possible try to use a native
	format instead for better performance.

Using this flag, if the application supports a video format directly
provided by the hardware, it can use its own video format decoder.
If not, it is still possible to use the V4L2 hardware by going
through v4lconvert.

Unfortunately, very few applications currently check it.

I wrote a patch for zbar (a multi-format barcode reader) in the past,
adding logic there that gives a high priority to hardware formats
and a low priority to emulated ones:
	https://lists.fedoraproject.org/pipermail/scm-commits/2010-December/537428.html
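
The kind of logic meant here, sketched against the plain libv4l2/V4L2
format enumeration (illustrative only, this is not the actual zbar patch;
the function name is made up):

#include <string.h>
#include <errno.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

/*
 * Prefer formats the hardware provides natively; fall back to a
 * libv4l2-emulated format only if nothing else is available.  Assumes
 * the fd was opened via v4l2_open() so that emulated formats (and the
 * V4L2_FMT_FLAG_EMULATED flag) show up in the enumeration.
 */
static int pick_capture_format(int fd, __u32 *fourcc)
{
	struct v4l2_fmtdesc d;
	__u32 native = 0, emulated = 0;
	int i;

	for (i = 0; ; i++) {
		memset(&d, 0, sizeof(d));
		d.index = i;
		d.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		if (v4l2_ioctl(fd, VIDIOC_ENUM_FMT, &d) < 0)
			break;
		if (d.flags & V4L2_FMT_FLAG_EMULATED) {
			if (!emulated)
				emulated = d.pixelformat;
		} else if (!native) {
			native = d.pixelformat;
		}
	}
	if (!native && !emulated)
		return -ENODEV;
	*fourcc = native ? native : emulated;	/* hardware format wins */
	return 0;
}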

> without
> rebuilding gstreamer _without_ linking against the v4l libraries.

I guess it wouldn't be complex to add similar logic to
gstreamer.

AFAICT, there's another problem that would prevent making
libv4l the default in gstreamer: right now, libv4l doesn't support
DMABUF. As gstreamer is being used on embedded hardware, I'd say
that DMABUF support should be the default there.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 14:20                 ` Hans Verkuil
@ 2017-03-10 15:53                   ` Mauro Carvalho Chehab
  2017-03-10 22:37                     ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-10 15:53 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Russell King - ARM Linux, Sakari Ailus, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Em Fri, 10 Mar 2017 15:20:48 +0100
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> 
> > As I've already mentioned, from talking about this with Mauro, it seems
> > Mauro is in agreement with permitting the control inheritance... I wish
> > Mauro would comment for himself, as I can't quote our private discussion
> > on the subject.  
> 
> I can't comment either, not having seen his mail and reasoning.

The rationale is that we should support the simplest use cases first.

In the case of the first MC-based driver (and several subsequent
ones), the simplest use case required MC, as it was meant to support
a custom-made sophisticated application that required fine control
over each component of the pipeline and to allow their advanced
proprietary AAA userspace-based algorithms to work.

That's not true, for example, for the UVC driver. There, MC
is optional, as it should be.

> > Right now, my view is that v4l2 is currently being screwed up by people
> > with different opinions - there is no unified consensus on how any of
> > this stuff is supposed to work, everyone is pulling in different
> > directions.  That needs solving _really_ quickly, so I suggest that
> > v4l2 people urgently talk to each other and thrash out some of the
> > issues that Steve's patch set has brought up, and settle on a way
> > forward, rather than what is seemingly happening today - which is
> > everyone working in isolation of everyone else with their own bias on
> > how things should be done.  
> 
> The simple fact is that to my knowledge no other MC applications inherit
> controls from subdevs. Suddenly doing something different here seems very
> wrong to me and needs very good reasons.

That's because it was not needed before, as other subdev-based drivers
are meant to be used only in complex scenarios with custom-made apps.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 15:26             ` Mauro Carvalho Chehab
@ 2017-03-10 15:57               ` Russell King - ARM Linux
  2017-03-10 17:06                 ` Russell King - ARM Linux
  2017-03-10 20:42                 ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-10 15:57 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

On Fri, Mar 10, 2017 at 12:26:34PM -0300, Mauro Carvalho Chehab wrote:
> Hi Russell,
> 
> Em Fri, 10 Mar 2017 13:07:33 +0000
> Russell King - ARM Linux <linux@armlinux.org.uk> escreveu:
> 
> > The idea that the v4l libraries should intercept the format negotiation
> > between the application and kernel is a particularly painful one - the
> > default gstreamer build detects the v4l libraries, and links against it.
> > That much is fine.
> > 
> > However, the problem comes when you're trying to use bayer formats. The
> > v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> > negotiation, and decide that they'll invoke v4lconvert to convert the
> > bayer to RGB for you, whether you want them to do that or not.
> > 
> > v4lconvert may not be the most efficient way to convert, or even what
> > is desired (eg, you may want to receive the raw bayer image.)  However,
> > since the v4l libraries/v4lconvert gives you no option but to have its
> > conversion forced into the pipeline, other options (such as using the
> > gstreamer neon accelerated de-bayer plugin) isn't an option 
> 
> That's not true. There is an special flag, used only by libv4l2
> emulated formats, that indicates when a video format is handled
> via v4lconvert:

I'm afraid that my statement comes from trying to use gstreamer with
libv4l2 and _not_ being able to use the 8-bit bayer formats there at
all - they are simply not passed across to the application through
libv4l2/v4lconvert.

Instead, the formats that are passed across are the emulated formats.
As I said above, that forces applications to use only the v4lconvert
formats, the raw formats are not available.

So, the presence or absence of the V4L2_FMT_FLAG_EMULATED is quite
meaningless if you can't even enumerate the non-converted formats.

The problem comes from the "always needs conversion" stuff in
v4lconvert coupled with the way this subdev stuff works - since it
requires manual configuration of all the pads within the kernel
media pipeline, the kernel ends up only advertising _one_ format
to userspace - in my case, that's RGGB8.

When v4lconvert_create_with_dev_ops() enumerates the formats from
the kernel, it gets only RGGB8.  That causes always_needs_conversion
in there to remain true, so the special v4l control which enables/
disables conversion gets created with a default value of "true".
The RGGB8 bit is also set in data->supported_src_formats.

This causes v4lconvert_supported_dst_fmt_only() to return true.

What this all means is that v4lconvert_enum_fmt() will _not_ return
any of the kernel formats, only the faked formats.

Ergo, the RGGB8 format from the kernel is completely hidden from the
application, and only the emulated format is made available.  As I
said above, this forces v4lconvert's debayering on the application,
whether you want it or not.

In the gstreamer case, it knows nothing about this special control,
which means that trying to use this gstreamer pipeline:

$ gst-launch-1.0 v4l2src device=/dev/video6 ! bayer2rgbneon ! xvimagesink

is completely impossible without first rebuilding gstreamer _without_
libv4l support.  Build gstreamer without libv4l support, and the above
works.

Enabling debug output in gstreamer's v4l2src plugin confirms that
the kernel's bayer formats are totally hidden from gstreamer when
linked with libv4l2, but are present when it isn't linked with
libv4l2.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 15:57               ` Russell King - ARM Linux
@ 2017-03-10 17:06                 ` Russell King - ARM Linux
  2017-03-10 20:42                 ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-10 17:06 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick,
	songjun.wu, Hans Verkuil, Steve Longerbeam, pavel,
	robert.jarzmik, devel, markus.heiser, laurent.pinchart+renesas,
	shuah, geert, Steve Longerbeam, linux-media, devicetree, kernel,
	arnd, mchehab, bparrot, robh+dt, horms+renesas, tiffany.lin,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, p.zabel, fabio.estevam,
	shawnguo, sudipm.mukherjee

On Fri, Mar 10, 2017 at 03:57:09PM +0000, Russell King - ARM Linux wrote:
> Enabling debug output in gstreamer's v4l2src plugin confirms that
> the kernel's bayer format are totally hidden from gstreamer when
> linked with libv4l2, but are present when it isn't linked with
> libv4l2.

Here's the information to back up my claims:

root@hbi2ex:~# v4l2-ctl -d 6 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'RGGB'
        Name        : 8-bit Bayer RGRG/GBGB

root@hbi2ex:~# DISPLAY=:0 GST_DEBUG_NO_COLOR=1 GST_DEBUG=v4l2:9 gst-launch-1.0 v4l2src device=/dev/video6 ! bayer2rgbneon ! xvimagesink > gst-v4l2-1.log 2>&1
root@hbi2ex:~# cut -b65- gst-v4l2-1.log|less
v4l2_calls.c:519:gst_v4l2_open:<v4l2src0> Trying to open device /dev/video6
v4l2_calls.c:69:gst_v4l2_get_capabilities:<v4l2src0> getting capabilities
v4l2_calls.c:77:gst_v4l2_get_capabilities:<v4l2src0> driver:      'imx-media-camif'
v4l2_calls.c:78:gst_v4l2_get_capabilities:<v4l2src0> card:        'imx-media-camif'
v4l2_calls.c:79:gst_v4l2_get_capabilities:<v4l2src0> bus_info:    ''
v4l2_calls.c:80:gst_v4l2_get_capabilities:<v4l2src0> version:     00040a00
v4l2_calls.c:81:gst_v4l2_get_capabilities:<v4l2src0> capabilites: 85200001
...
v4l2_calls.c:258:gst_v4l2_fill_lists:<v4l2src0>   controls+menus
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00000000
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'User Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980001
v4l2_calls.c:389:gst_v4l2_fill_lists:<v4l2src0> Adding ControlID white_balance_automatic (98090c)
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 0098090c
v4l2_calls.c:389:gst_v4l2_fill_lists:<v4l2src0> Adding ControlID gamma (980910)
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980910
v4l2_calls.c:389:gst_v4l2_fill_lists:<v4l2src0> Adding ControlID gain (980913)
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980913
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980914
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980915
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'Camera Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009a0001
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID exposure_time_absolute (9a0902) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009a0902
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'Image Source Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0001
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID vertical_blanking (9e0901) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0901
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID horizontal_blanking (9e0902) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0902
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID analogue_gain (9e0903) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0903
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID red_pixel_value (9e0904) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0904
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID green_red_pixel_value (9e0905) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0905
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID blue_pixel_value (9e0906) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0906
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID green_blue_pixel_value (9e0907) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0907
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'Image Processing Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009f0001
v4l2_calls.c:340:gst_v4l2_fill_lists:<v4l2src0> Control type for 'Pixel Rate' not suppored for extra controls.
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID Pixel Rate (9f0902) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009f0902
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID test_pattern (9f0903) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009f0903
v4l2_calls.c:284:gst_v4l2_fill_lists:<v4l2src0> controls finished
v4l2_calls.c:451:gst_v4l2_fill_lists:<v4l2src0> done
v4l2_calls.c:587:gst_v4l2_open:<v4l2src0> Opened device 'imx-media-camif' (/dev/video6) successfully
gstv4l2object.c:804:gst_v4l2_set_defaults:<v4l2src0> tv_norm=0x0, norm=(nil)
v4l2_calls.c:734:gst_v4l2_get_norm:<v4l2src0> getting norm
v4l2_calls.c:1021:gst_v4l2_get_input:<v4l2src0> trying to get input
v4l2_calls.c:1031:gst_v4l2_get_input:<v4l2src0> input: 0

gstv4l2object.c:1106:gst_v4l2_object_fill_format_list:<v4l2src0> getting src format enumerations
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index:       0
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type:        1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags:       00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'RGB3'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: RGB3
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index:       1
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type:        1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags:       00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'BGR3'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: BGR3
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index:       2
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type:        1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags:       00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'YU12'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: YU12
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index:       3
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type:        1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags:       00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'YV12'

gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: YV12
gstv4l2object.c:1143:gst_v4l2_object_fill_format_list:<v4l2src0> got 4 format(s):
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0>   YU12 (emulated)
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0>   YV12 (emulated)
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0>   BGR3 (emulated)
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0>   RGB3 (emulated)

As you can see from this, the RGGB bayer format advertised by the
kernel is not listed - only the four emulated formats provided by
v4lconvert are listed, so the application has _no_ choice but to use
v4lconvert's RGGB conversion.

The result is that the above pipeline fails:

0:00:00.345739030  2794   0x3ade60 DEBUG                   v4l2 gstv4l2object.c:3812:gst_v4l2_object_get_caps:<v4l2src0> ret: video/x-raw, format=(string)I420, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1; video/x-raw, format=(string)YV12, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1; video/x-raw, format=(string)BGR, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1; video/x-raw, format=(string)RGB, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)

as the v4l2src element is offering an I420-formatted buffer to the
gstreamer bayer converter, which obviously objects.

Rebuilding without libv4l2 linked results in gstreamer working.
Using a kernel driver which exposes some formats that libv4lconvert
_doesn't_ need to convert in addition to bayer _also_ works.

The only case where this fails is when the kernel device only
advertises formats for which libv4lconvert is in "we must always convert"
mode, whereupon the unconverted formats are completely hidden from
the application.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 15:57               ` Russell King - ARM Linux
  2017-03-10 17:06                 ` Russell King - ARM Linux
@ 2017-03-10 20:42                 ` Mauro Carvalho Chehab
  2017-03-10 21:55                   ` Pavel Machek
  1 sibling, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-10 20:42 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Hans Verkuil, Sakari Ailus, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Em Fri, 10 Mar 2017 15:57:09 +0000
Russell King - ARM Linux <linux@armlinux.org.uk> escreveu:

> On Fri, Mar 10, 2017 at 12:26:34PM -0300, Mauro Carvalho Chehab wrote:
> > Hi Russell,
> > 
> > Em Fri, 10 Mar 2017 13:07:33 +0000
> > Russell King - ARM Linux <linux@armlinux.org.uk> escreveu:
> >   
> > > The idea that the v4l libraries should intercept the format negotiation
> > > between the application and kernel is a particularly painful one - the
> > > default gstreamer build detects the v4l libraries, and links against it.
> > > That much is fine.
> > > 
> > > However, the problem comes when you're trying to use bayer formats. The
> > > v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> > > negotiation, and decide that they'll invoke v4lconvert to convert the
> > > bayer to RGB for you, whether you want them to do that or not.
> > > 
> > > v4lconvert may not be the most efficient way to convert, or even what
> > > is desired (eg, you may want to receive the raw bayer image.)  However,
> > > since the v4l libraries/v4lconvert gives you no option but to have its
> > > conversion forced into the pipeline, other options (such as using the
> > > gstreamer neon accelerated de-bayer plugin) isn't an option   
> > 
> > That's not true. There is an special flag, used only by libv4l2
> > emulated formats, that indicates when a video format is handled
> > via v4lconvert:  
> 
> I'm afraid that my statement comes from trying to use gstreamer with
> libv4l2 and _not_ being able to use the 8-bit bayer formats there at
> all - they are simply not passed across to the application through
> libv4l2/v4lconvert.
> 
> Instead, the formats that are passed across are the emulated formats.
> As I said above, that forces applications to use only the v4lconvert
> formats, the raw formats are not available.
> 
> So, the presence or absence of the V4L2_FMT_FLAG_EMULATED is quite
> meaningless if you can't even enumerate the non-converted formats.
> 
> The problem comes from the "always needs conversion" stuff in
> v4lconvert coupled with the way this subdev stuff works - since it
> requires manual configuration of all the pads within the kernel
> media pipeline, the kernel ends up only advertising _one_ format
> to userspace - in my case, that's RGGB8.
> 
> When v4lconvert_create_with_dev_ops() enumerates the formats from
> the kernel, it gets only RGGB8.  That causes always_needs_conversion
> in there to remain true, so the special v4l control which enables/
> disables conversion gets created with a default value of "true".
> The RGGB8 bit is also set in data->supported_src_formats.
> 
> This causes v4lconvert_supported_dst_fmt_only() to return true.
> 
> What this all means is that v4lconvert_enum_fmt() will _not_ return
> any of the kernel formats, only the faked formats.
> 
> Ergo, the RGGB8 format from the kernel is completely hidden from the
> application, and only the emulated format is made available.  As I
> said above, this forces v4lconvert's debayering on the application,
> whether you want it or not.
> 
> In the gstreamer case, it knows nothing about this special control,
> which means that trying to use this gstreamer pipeline:
> 
> $ gst-launch-1.0 v4l2src device=/dev/video6 ! bayer2rgbneon ! xvimagesink
> 
> is completely impossible without first rebuilding gstreamer _without_
> libv4l support.  Build gstreamer without libv4l support, and the above
> works.
> 
> Enabling debug output in gstreamer's v4l2src plugin confirms that
> the kernel's bayer format are totally hidden from gstreamer when
> linked with libv4l2, but are present when it isn't linked with
> libv4l2.

Argh! That is indeed a bug in libv4l (and maybe in gstreamer).

I guess that the always_needs_conversion logic was meant to be used for
really odd proprietary formats, e.g.:
	
/*  Vendor-specific formats   */
#define V4L2_PIX_FMT_CPIA1    v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
#define V4L2_PIX_FMT_WNVA     v4l2_fourcc('W', 'N', 'V', 'A') /* Winnov hw compress */
#define V4L2_PIX_FMT_SN9C10X  v4l2_fourcc('S', '9', '1', '0') /* SN9C10x compression */
...

I suspect that nobody uses libv4l2 with MC-based V4L2 devices. That's
likely why nobody reported this bug before (that I know of).

In any case, for non-proprietary formats, the default should be to
always offer both the emulated format and the original one.

I suspect that the enclosed patch should fix the issue with bayer formats.

> 

Thanks,
Mauro

[PATCH RFC] libv4lconvert: by default, offer the original format to the client

Applications should have the right to decide between using a
libv4lconvert-emulated format or implementing the decoding themselves,
as this may have a significant performance impact.

So, change the default to always show both formats.

Also change the default for Bayer-encoded formats, as userspace
will likely want to handle them directly.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>


diff --git a/lib/libv4lconvert/libv4lconvert.c b/lib/libv4lconvert/libv4lconvert.c
index da718918b030..2e5458fa420d 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -118,10 +118,10 @@ static const struct v4lconvert_pixfmt supported_src_pixfmts[] = {
 	{ V4L2_PIX_FMT_OV511,		 0,	 7,	 7,	1 },
 	{ V4L2_PIX_FMT_OV518,		 0,	 7,	 7,	1 },
 	/* uncompressed bayer */
-	{ V4L2_PIX_FMT_SBGGR8,		 8,	 8,	 8,	1 },
-	{ V4L2_PIX_FMT_SGBRG8,		 8,	 8,	 8,	1 },
-	{ V4L2_PIX_FMT_SGRBG8,		 8,	 8,	 8,	1 },
-	{ V4L2_PIX_FMT_SRGGB8,		 8,	 8,	 8,	1 },
+	{ V4L2_PIX_FMT_SBGGR8,		 8,	 8,	 8,	0 },
+	{ V4L2_PIX_FMT_SGBRG8,		 8,	 8,	 8,	0 },
+	{ V4L2_PIX_FMT_SGRBG8,		 8,	 8,	 8,	0 },
+	{ V4L2_PIX_FMT_SRGGB8,		 8,	 8,	 8,	0 },
 	{ V4L2_PIX_FMT_STV0680,		 8,	 8,	 8,	1 },
 	/* compressed bayer */
 	{ V4L2_PIX_FMT_SPCA561,		 0,	 9,	 9,	1 },
@@ -178,7 +178,7 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
 	/* This keeps tracks of devices which have only formats for which apps
 	   most likely will need conversion and we can thus safely add software
 	   processing controls without a performance impact. */
-	int always_needs_conversion = 1;
+	int always_needs_conversion = 0;
 
 	if (!data) {
 		fprintf(stderr, "libv4lconvert: error: out of memory!\n");
@@ -208,8 +208,8 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
 		if (j < ARRAY_SIZE(supported_src_pixfmts)) {
 			data->supported_src_formats |= 1ULL << j;
 			v4lconvert_get_framesizes(data, fmt.pixelformat, j);
-			if (!supported_src_pixfmts[j].needs_conversion)
-				always_needs_conversion = 0;
+			if (supported_src_pixfmts[j].needs_conversion)
+				always_needs_conversion = 1;
 		} else
 			always_needs_conversion = 0;
 	}

^ permalink raw reply related	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 20:42                 ` Mauro Carvalho Chehab
@ 2017-03-10 21:55                   ` Pavel Machek
  0 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-10 21:55 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Russell King - ARM Linux, Hans Verkuil, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 2069 bytes --]

Hi!

> Argh! that is indeed a bug at libv4l (and maybe at gstreamer).
> 
> I guess that the always_needs_conversion logic was meant to be used to
> really odd proprietary formats, e. g:
> 	
> /*  Vendor-specific formats   */
> #define V4L2_PIX_FMT_CPIA1    v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
> #define V4L2_PIX_FMT_WNVA     v4l2_fourcc('W', 'N', 'V', 'A') /* Winnov hw compress */
> #define V4L2_PIX_FMT_SN9C10X  v4l2_fourcc('S', '9', '1', '0') /* SN9C10x compression */
> ...
> 
> I suspect that nobody uses libv4l2 with MC-based V4L2 devices. That's
> likely why nobody reported this bug before (that I know of).
> 
> In any case, for non-proprietary formats, the default should be to
> always offer both the emulated format and the original one.
> 
> I suspect that the enclosed patch should fix the issue with bayer formats.

...
> @@ -178,7 +178,7 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
>  	/* This keeps tracks of devices which have only formats for which apps
>  	   most likely will need conversion and we can thus safely add software
>  	   processing controls without a performance impact. */
> -	int always_needs_conversion = 1;
> +	int always_needs_conversion = 0;
>  
>  	if (!data) {
>  		fprintf(stderr, "libv4lconvert: error: out of memory!\n");
> @@ -208,8 +208,8 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
>  		if (j < ARRAY_SIZE(supported_src_pixfmts)) {
>  			data->supported_src_formats |= 1ULL << j;
>  			v4lconvert_get_framesizes(data, fmt.pixelformat, j);
> -			if (!supported_src_pixfmts[j].needs_conversion)
> -				always_needs_conversion = 0;
> +			if (supported_src_pixfmts[j].needs_conversion)
> +				always_needs_conversion = 1;
>  		} else
>  			always_needs_conversion = 0;
>  	}

Is the else still needed? You changed the default to 0...

										Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 15:53                   ` Mauro Carvalho Chehab
@ 2017-03-10 22:37                     ` Sakari Ailus
  2017-03-11 11:25                       ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-10 22:37 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Mauro (and others),

On Fri, Mar 10, 2017 at 12:53:42PM -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 10 Mar 2017 15:20:48 +0100
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > 
> > > As I've already mentioned, from talking about this with Mauro, it seems
> > > Mauro is in agreement with permitting the control inheritance... I wish
> > > Mauro would comment for himself, as I can't quote our private discussion
> > > on the subject.  
> > 
> > I can't comment either, not having seen his mail and reasoning.
> 
> The rationale is that we should support the simplest use cases first.
> 
> In the case of the first MC-based driver (and several subsequent
> > ones), the simplest use case required MC, as it was meant to support
> a custom-made sophisticated application that required fine control
> on each component of the pipeline and to allow their advanced
> proprietary AAA userspace-based algorithms to work.

The first MC-based driver (omap3isp) supports what the hardware can do; it
does not support applications as such.

Adding support to drivers for different "operation modes" --- this is
essentially what is being asked for --- is not an approach which could serve
either purpose (some functionality with a simple interface vs. fully
supporting what the hardware can do, with interfaces allowing that)
adequately in the short or the long run.

If we are missing pieces in the puzzle --- in this case the missing pieces
in the puzzle are a generic pipeline configuration library and another
library that, with the help of pipeline autoconfiguration, would implement
a "best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
--- then these pieces need to be implemented. The solution is
*not* to attempt to support different types of applications in each driver
separately. That will make writing drivers painful and error-prone, and is
unlikely to ever deliver what either purpose requires.

So let's continue to implement the functionality that the hardware supports.
Making a different choice here is bound to create a lasting conflict between
having to change kernel interface behaviour and the requirement of
supporting new functionality that hasn't previously been thought of, pushing
SoC vendors away from the V4L2 ecosystem. This is what we all want to avoid.

As far as the i.MX6 driver goes, it is always possible to implement an i.MX6
plugin for libv4l to perform this. This should be much easier than getting the
automatic pipe configuration library and the rest working, and as it is
custom to the i.MX6, the resulting plugin may make informed technical choices
for better functionality. Jacek has been working on such a plugin for
Samsung Exynos hardware, but I don't think he has quite finished it yet.

The original plan was and continues to be sound, it's just that there have
always been too few hands to implement it. :-(

> 
> That's not true, for example, for the UVC driver. There, MC
> is optional, as it should be.

UVC is different. The device simply provides additional information to the
user through MC, but MC (or the V4L2 sub-device interface) is not used for
controlling the device.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 22:37                     ` Sakari Ailus
@ 2017-03-11 11:25                       ` Mauro Carvalho Chehab
  2017-03-11 21:52                         ` Pavel Machek
                                           ` (2 more replies)
  0 siblings, 3 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-11 11:25 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Hans Verkuil, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

Em Sat, 11 Mar 2017 00:37:14 +0200
Sakari Ailus <sakari.ailus@iki.fi> escreveu:

> Hi Mauro (and others),
> 
> On Fri, Mar 10, 2017 at 12:53:42PM -0300, Mauro Carvalho Chehab wrote:
> > Em Fri, 10 Mar 2017 15:20:48 +0100
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >   
> > >   
> > > > As I've already mentioned, from talking about this with Mauro, it seems
> > > > Mauro is in agreement with permitting the control inheritance... I wish
> > > > Mauro would comment for himself, as I can't quote our private discussion
> > > > on the subject.    
> > > 
> > > I can't comment either, not having seen his mail and reasoning.  
> > 
> > The rationale is that we should support the simplest use cases first.
> > 
> > In the case of the first MC-based driver (and several subsequent
> > ones), the simplest use case required MC, as it was meant to support
> > a custom-made sophisticated application that required fine control
> > on each component of the pipeline and to allow their advanced
> > proprietary AAA userspace-based algorithms to work.  
> 
> The first MC based driver (omap3isp) supports what the hardware can do, it
> does not support applications as such.

All media drivers support a subset of what the hardware can do. The
question is whether that subset covers the use cases or not.

The current MC-based drivers (except for uvc) took the path of offering a
more advanced API, allowing direct control of each IP module; as was said
at the time we merged the OMAP3 driver, for the N9/N900 camera
to work it was mandatory to access the pipeline's individual components.

Such an approach requires that userspace software have knowledge of
the hardware details in order to set up pipelines and send controls
to the right components. That makes it really hard for a generic,
user-friendly application to use such devices.

Non-MC based drivers control the hardware via a portable interface which
doesn't require any knowledge of the hardware specifics, as either the
kernel or some firmware on the device will set up any needed pipelines.

In the case of V4L2 controls, when there's no subdev API, the main
driver (e.g. the driver that creates the /dev/video nodes) sends a
multicast message to all bound I2C drivers. The driver(s) that need
it handle it. When the same control may be implemented by different
drivers, the main driver sends a unicast message to just one driver[1].

[1] There are several non-MC drivers that have multiple ways to
control some things, like doing scaling or adjusting volume levels
either at the bridge driver or at a sub-driver.
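
In kernel terms, the "multicast"/"unicast" above is what the
v4l2_device_call_all() helper does. A minimal sketch (the "my_dev"
structure, MY_GRP_ID_DECODER and the chosen ops are only examples):

#include <media/v4l2-device.h>

#define MY_GRP_ID_DECODER	1	/* hypothetical group id */

struct my_dev {				/* hypothetical bridge device */
	struct v4l2_device v4l2_dev;
};

static void my_start_streaming(struct my_dev *dev)
{
	/* grp_id == 0: "multicast" the call to every registered subdev */
	v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 1);
}

static void my_set_std(struct my_dev *dev, v4l2_std_id std)
{
	/*
	 * A non-zero grp_id "unicasts" to the subdev(s) whose sd->grp_id
	 * matches, e.g. only the video decoder.
	 */
	v4l2_device_call_all(&dev->v4l2_dev, MY_GRP_ID_DECODER,
			     video, s_std, std);
}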

There's nothing wrong with this approach: it works, it is simpler,
it is generic. So, if it covers most use cases, why not allow it
for use cases where finer control is not a requirement?

> Adding support to drivers for different "operation modes" --- this is
> essentially what is being asked for --- is not an approach which could serve
> either purpose (some functionality with simple interface vs. fully support
> what the hardware can do, with interfaces allowing that) adequately in the
> short or the long run.

Why not?

> If we are missing pieces in the puzzle --- in this case the missing pieces
> in the puzzle are a generic pipeline configuration library and another
> library that, with the help of pipeline autoconfiguration would implement
> "best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
> --- then these pieces need to be impelemented. The solution is
> *not* to attempt to support different types of applications in each driver
> separately. That will make writing drivers painful, error prone and is
> unlikely ever deliver what either purpose requires.
> 
> So let's continue to implement the functionality that the hardware supports.
> Making a different choice here is bound to create a lasting conflict between
> having to change kernel interface behaviour and the requirement of
> supporting new functionality that hasn't been previously thought of, pushing
> away SoC vendors from V4L2 ecosystem. This is what we all do want to avoid.

This situation has existed since 2009. If I remember correctly, you tried to
write such a generic plugin in the past, but never finished it, apparently
because it is too complex. Others have tried too over the years.

The last attempt was by Jacek, trying to cover just the exynos4 driver.
Yet even such a limited-scope plugin was not good enough, as it was never
merged upstream. Currently, there are no such plugins upstream.

If we can't even merge a plugin that solves it for just *one* driver,
I have no hope that we'll be able to do it for the generic case.

That's why I'm saying that I'm OK with merging any patch that would allow
setting controls via the /dev/video interface on MC-based drivers when
compiled without the subdev API. I may also consider merging patches that
allow changing the behavior at runtime, when compiled with the subdev API.

> As far as i.MX6 driver goes, it is always possible to implement i.MX6 plugin
> for libv4l to perform this. This should be much easier than getting the
> automatic pipe configuration library and the rest working, and as it is
> custom for i.MX6, the resulting plugin may make informed technical choices
> for better functionality.

I wouldn't call "much easier" something that experienced media
developers failed to do over the last 8 years.

It is just the opposite: broadcasting a control via I2C is very easy;
there are several examples of how to do that all over the media
drivers.

> Jacek has been working on such a plugin for
> Samsung Exynos hardware, but I don't think he has quite finished it yey.

As Jacek answered when questioned about the merge status:

	Hi Hans,

	On 11/03/2016 12:51 PM, Hans Verkuil wrote:
	> Hi all,
	>
	> Is there anything that blocks me from merging this?
	>
	> This plugin work has been ongoing for years and unless there are serious
	> objections I propose that this is merged.
	>
	> Jacek, is there anything missing that would prevent merging this?  

	There were issues raised by Sakari during last review, related to
	the way how v4l2 control bindings are defined. That discussion wasn't
	finished, so I stayed by my approach. Other than that - I've tested it
	and it works fine both with GStreamer and my test app.

After that, he sent a new version (v7.1), but it never got reviews.

> The original plan was and continues to be sound, it's just that there have
> always been too few hands to implement it. :-(

If there are no people to implement a plan, it doesn't matter how good
the plan is, it won't work.

> > That's not true, for example, for the UVC driver. There, MC
> > is optional, as it should be.  
> 
> UVC is different. The device simply provides additional information through
> MC to the user but MC (or V4L2 sub-device interface) is not used for
> controlling the device.

It is not different. If the kernel is compiled without the V4L2
subdev interface, the i.MX6 driver (or any other driver)
won't receive any controls via the subdev interface. So it has to
handle the control logic via the only interface that
supports it, i.e. via the video devnode.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-10 15:09           ` Mauro Carvalho Chehab
@ 2017-03-11 11:32             ` Hans Verkuil
  2017-03-11 13:14               ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-11 11:32 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> Em Fri, 10 Mar 2017 13:54:28 +0100
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
>>> Devices that have complex pipeline that do essentially require using the
>>> Media controller interface to configure them are out of that scope.
>>>   
>>
>> Way too much of how the MC devices should be used is in the minds of developers.
>> There is a major lack for good detailed documentation, utilities, compliance
>> test (really needed!) and libv4l plugins.
> 
> Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> all the problems with MC-based drivers and V4L2 applications by the time
> it was developed, and we requested Nokia developers (with was sponsoring MC
> develoment, on that time) to work on a solution to allow standard V4L2
> applications to work with MC based boards.
> 
> Unfortunately, we took the decision to merge MC without that, because 
> Nokia was giving up on Linux development, and we didn't want to lose the
> 2 years of discussions and work around it, as Nokia employers were leaving
> the company. Also, on that time, there was already some patches floating
> around adding backward support via libv4l. Unfortunately, those patches
> were never finished.
> 
> The net result is that MC was merged with some huge gaps, including
> the lack of a proper solution for a generic V4L2 program to work
> with V4L2 devices that use the subdev API.
> 
> That was not that bad by then, as MC was used only on cell phones
> that run custom-made applications. 
> 
> The reality changed, as now, we have lots of low cost SoC based
> boards, used for all sort of purposes. So, we need a quick solution
> for it.
> 
> In other words, while that would be acceptable support special apps
> on really embedded systems, it is *not OK* for general purpose SoC
> hardware[1].
> 
> [1] I'm calling "general purpose SoC hardware" those ARM boards
> like Raspberry Pi that are shipped to the mass and used by a wide
> range of hobbyists and other people that just wants to run Linux on
> ARM. It is possible to buy such boards for a very cheap price,
> making them to be used not only on special projects, where a custom
> made application could be interesting, but also for a lot of
> users that just want to run Linux on a low cost ARM board, while
> keeping using standard V4L2 apps, like "camorama".
> 
> That's perhaps one of the reasons why it took a long time for us to
> start receiving drivers upstream for such hardware: it is quite 
> intimidating and not logical to require developers to implement
> on their drivers 2 complex APIs (MC, subdev) for those
> hardware that most users won't care. From user's perspective,
> being able to support generic applications like "camorama" and
> "zbar" is all they want.
> 
> In summary, I'm pretty sure we need to support standard V4L2 
> applications on boards like Raspberry Pi and those low-cost 
> SoC-based boards that are shipped to end users.
> 
>> Anyway, regarding this specific patch and for this MC-aware driver: no, you
>> shouldn't inherit controls from subdevs. It defeats the purpose.
> 
> Sorry, but I don't agree with that. The subdev API is an optional API
> (and even the MC API can be optional).
> 
> I see the rationale for using MC and subdev APIs on cell phones,
> ISV and other embedded hardware, as it will allow fine-tuning
> the driver's support to allow providing the required quality for
> certain custom-made applications. but on general SoC hardware,
> supporting standard V4L2 applications is a need.
> 
> Ok, perhaps supporting both subdev API and V4L2 API at the same
> time doesn't make much sense. We could disable one in favor of the
> other, either at compilation time or at runtime.

Right. If the subdev API is disabled, then you have to inherit the subdev
controls in the bridge driver (how else would you be able to access them?).
And that's the usual case.
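
(Illustrative sketch, not taken from the series: one way a bridge
driver can pull a sensor subdev's controls into the handler behind
/dev/videoN. The function and variable names are made up, and the
three-argument v4l2_ctrl_add_handler() form of this era is assumed.)

#include <media/v4l2-ctrls.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-subdev.h>

/*
 * Merge the sensor's controls into the handler exposed on the capture
 * video node, so plain V4L2 applications can see and set them without
 * ever opening a subdev node.
 */
static int imx_capture_inherit_ctrls(struct video_device *vfd,
				     struct v4l2_subdev *sensor)
{
	if (!sensor->ctrl_handler)
		return 0;

	/* NULL filter: take every control the sensor exposes */
	return v4l2_ctrl_add_handler(vfd->ctrl_handler,
				     sensor->ctrl_handler, NULL);
}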

If you do have the subdev API enabled, AND you use the MC, then the
intention clearly is to give userspace full control and inheriting controls
no longer makes any sense (and is highly confusing IMHO).
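
(Another sketch, for contrast: when the subdev nodes are exposed, an
application can address the sensor's controls directly on its
/dev/v4l-subdevN node with the ordinary control ioctls. The node path
and the gain control below are assumptions, chosen only to illustrate
the point.)

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Set a gain value directly on a sensor subdev node. */
static int set_sensor_gain(const char *subdev_node, int gain)
{
	struct v4l2_control ctrl = {
		.id	= V4L2_CID_GAIN,
		.value	= gain,
	};
	int fd, ret;

	fd = open(subdev_node, O_RDWR);
	if (fd < 0)
		return -1;

	ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);
	close(fd);
	return ret;
}

/* e.g. set_sensor_gain("/dev/v4l-subdev11", 8); */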

> 
> This way, if the subdev API is disabled, the driver will be
> functional for V4L2-based applications that don't support neither
> MC or subdev APIs.

I'm not sure if it makes sense for the i.MX driver to behave differently
depending on whether the subdev API is enabled or disabled. I don't know
enough of the hardware to tell if it would ever make sense to disable the
subdev API.

Regards,

	Hans

> 
>> As mentioned, I will attempt to try and get some time to work on this
>> later this year. Fingers crossed.
> 
> That will be good, and, once we have a solution that works, we can
> work on cleanup the code, but, until then, drivers for arm-based boards
> sold to end consumers should work out of the box with standard V4L2 apps.
> 
> While we don't have that, I'm OK to merge patches adding such support
> upstream.
> 
> Thanks,
> Mauro
> 

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 11:32             ` Hans Verkuil
@ 2017-03-11 13:14               ` Mauro Carvalho Chehab
  2017-03-11 15:32                 ` Sakari Ailus
  2017-03-11 21:30                 ` Pavel Machek
  0 siblings, 2 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-11 13:14 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Em Sat, 11 Mar 2017 12:32:43 +0100
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> > Em Fri, 10 Mar 2017 13:54:28 +0100
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >   
> >>> Devices that have complex pipeline that do essentially require using the
> >>> Media controller interface to configure them are out of that scope.
> >>>     
> >>
> >> Way too much of how the MC devices should be used is in the minds of developers.
> >> There is a major lack for good detailed documentation, utilities, compliance
> >> test (really needed!) and libv4l plugins.  
> > 
> > Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> > all the problems with MC-based drivers and V4L2 applications by the time
> > it was developed, and we requested Nokia developers (with was sponsoring MC
> > develoment, on that time) to work on a solution to allow standard V4L2
> > applications to work with MC based boards.
> > 
> > Unfortunately, we took the decision to merge MC without that, because 
> > Nokia was giving up on Linux development, and we didn't want to lose the
> > 2 years of discussions and work around it, as Nokia employers were leaving
> > the company. Also, on that time, there was already some patches floating
> > around adding backward support via libv4l. Unfortunately, those patches
> > were never finished.
> > 
> > The net result is that MC was merged with some huge gaps, including
> > the lack of a proper solution for a generic V4L2 program to work
> > with V4L2 devices that use the subdev API.
> > 
> > That was not that bad by then, as MC was used only on cell phones
> > that run custom-made applications. 
> > 
> > The reallity changed, as now, we have lots of low cost SoC based
> > boards, used for all sort of purposes. So, we need a quick solution
> > for it.
> > 
> > In other words, while that would be acceptable support special apps
> > on really embedded systems, it is *not OK* for general purpose SoC
> > harware[1].
> > 
> > [1] I'm calling "general purpose SoC harware" those ARM boards
> > like Raspberry Pi that are shipped to the mass and used by a wide
> > range of hobbyists and other people that just wants to run Linux on
> > ARM. It is possible to buy such boards for a very cheap price,
> > making them to be used not only on special projects, where a custom
> > made application could be interesting, but also for a lot of
> > users that just want to run Linux on a low cost ARM board, while
> > keeping using standard V4L2 apps, like "camorama".
> > 
> > That's perhaps one of the reasons why it took a long time for us to
> > start receiving drivers upstream for such hardware: it is quite 
> > intimidating and not logical to require developers to implement
> > on their drivers 2 complex APIs (MC, subdev) for those
> > hardware that most users won't care. From user's perspective,
> > being able to support generic applications like "camorama" and
> > "zbar" is all they want.
> > 
> > In summary, I'm pretty sure we need to support standard V4L2 
> > applications on boards like Raspberry Pi and those low-cost 
> > SoC-based boards that are shipped to end users.
> >   
> >> Anyway, regarding this specific patch and for this MC-aware driver: no, you
> >> shouldn't inherit controls from subdevs. It defeats the purpose.  
> > 
> > Sorry, but I don't agree with that. The subdev API is an optional API
> > (and even the MC API can be optional).
> > 
> > I see the rationale for using MC and subdev APIs on cell phones,
> > ISV and other embedded hardware, as it will allow fine-tuning
> > the driver's support to allow providing the required quality for
> > certain custom-made applications. but on general SoC hardware,
> > supporting standard V4L2 applications is a need.
> > 
> > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > time doesn't make much sense. We could disable one in favor of the
> > other, either at compilation time or at runtime.  
> 
> Right. If the subdev API is disabled, then you have to inherit the subdev
> controls in the bridge driver (how else would you be able to access them?).
> And that's the usual case.
> 
> If you do have the subdev API enabled, AND you use the MC, then the
> intention clearly is to give userspace full control and inheriting controls
> no longer makes any sense (and is highly confusing IMHO).

I tend to agree with that.

> > 
> > This way, if the subdev API is disabled, the driver will be
> > functional for V4L2-based applications that don't support neither
> > MC or subdev APIs.  
> 
> I'm not sure if it makes sense for the i.MX driver to behave differently
> depending on whether the subdev API is enabled or disabled. I don't know
> enough of the hardware to tell if it would ever make sense to disable the
> subdev API.

Yeah, I don't know enough about it either. The point is: whether it
makes sense or not is something that the driver maintainer and the
driver's users should decide, based on the expected use cases.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 13:14               ` Mauro Carvalho Chehab
@ 2017-03-11 15:32                 ` Sakari Ailus
  2017-03-11 17:32                   ` Russell King - ARM Linux
  2017-03-11 18:08                   ` Steve Longerbeam
  2017-03-11 21:30                 ` Pavel Machek
  1 sibling, 2 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-03-11 15:32 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Mauro and Hans,

On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
> Em Sat, 11 Mar 2017 12:32:43 +0100
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> > > Em Fri, 10 Mar 2017 13:54:28 +0100
> > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > >   
> > >>> Devices that have complex pipeline that do essentially require using the
> > >>> Media controller interface to configure them are out of that scope.
> > >>>     
> > >>
> > >> Way too much of how the MC devices should be used is in the minds of developers.
> > >> There is a major lack for good detailed documentation, utilities, compliance
> > >> test (really needed!) and libv4l plugins.  
> > > 
> > > Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> > > all the problems with MC-based drivers and V4L2 applications by the time
> > > it was developed, and we requested Nokia developers (with was sponsoring MC
> > > develoment, on that time) to work on a solution to allow standard V4L2
> > > applications to work with MC based boards.
> > > 
> > > Unfortunately, we took the decision to merge MC without that, because 
> > > Nokia was giving up on Linux development, and we didn't want to lose the
> > > 2 years of discussions and work around it, as Nokia employers were leaving
> > > the company. Also, on that time, there was already some patches floating
> > > around adding backward support via libv4l. Unfortunately, those patches
> > > were never finished.
> > > 
> > > The net result is that MC was merged with some huge gaps, including
> > > the lack of a proper solution for a generic V4L2 program to work
> > > with V4L2 devices that use the subdev API.
> > > 
> > > That was not that bad by then, as MC was used only on cell phones
> > > that run custom-made applications. 
> > > 
> > > The reallity changed, as now, we have lots of low cost SoC based
> > > boards, used for all sort of purposes. So, we need a quick solution
> > > for it.
> > > 
> > > In other words, while that would be acceptable support special apps
> > > on really embedded systems, it is *not OK* for general purpose SoC
> > > harware[1].
> > > 
> > > [1] I'm calling "general purpose SoC harware" those ARM boards
> > > like Raspberry Pi that are shipped to the mass and used by a wide
> > > range of hobbyists and other people that just wants to run Linux on
> > > ARM. It is possible to buy such boards for a very cheap price,
> > > making them to be used not only on special projects, where a custom
> > > made application could be interesting, but also for a lot of
> > > users that just want to run Linux on a low cost ARM board, while
> > > keeping using standard V4L2 apps, like "camorama".
> > > 
> > > That's perhaps one of the reasons why it took a long time for us to
> > > start receiving drivers upstream for such hardware: it is quite 
> > > intimidating and not logical to require developers to implement
> > > on their drivers 2 complex APIs (MC, subdev) for those
> > > hardware that most users won't care. From user's perspective,
> > > being able to support generic applications like "camorama" and
> > > "zbar" is all they want.
> > > 
> > > In summary, I'm pretty sure we need to support standard V4L2 
> > > applications on boards like Raspberry Pi and those low-cost 
> > > SoC-based boards that are shipped to end users.
> > >   
> > >> Anyway, regarding this specific patch and for this MC-aware driver: no, you
> > >> shouldn't inherit controls from subdevs. It defeats the purpose.  
> > > 
> > > Sorry, but I don't agree with that. The subdev API is an optional API
> > > (and even the MC API can be optional).
> > > 
> > > I see the rationale for using MC and subdev APIs on cell phones,
> > > ISV and other embedded hardware, as it will allow fine-tuning
> > > the driver's support to allow providing the required quality for
> > > certain custom-made applications. but on general SoC hardware,
> > > supporting standard V4L2 applications is a need.
> > > 
> > > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > > time doesn't make much sense. We could disable one in favor of the
> > > other, either at compilation time or at runtime.  
> > 
> > Right. If the subdev API is disabled, then you have to inherit the subdev
> > controls in the bridge driver (how else would you be able to access them?).
> > And that's the usual case.
> > 
> > If you do have the subdev API enabled, AND you use the MC, then the
> > intention clearly is to give userspace full control and inheriting controls
> > no longer makes any sense (and is highly confusing IMHO).
> 
> I tend to agree with that.

I agree as well.

This is in line with how existing drivers behave, too.

> 
> > > 
> > > This way, if the subdev API is disabled, the driver will be
> > > functional for V4L2-based applications that don't support neither
> > > MC or subdev APIs.  
> > 
> > I'm not sure if it makes sense for the i.MX driver to behave differently
> > depending on whether the subdev API is enabled or disabled. I don't know
> > enough of the hardware to tell if it would ever make sense to disable the
> > subdev API.
> 
> Yeah, I don't know enough about it either. The point is: this is
> something that the driver maintainer and driver users should
> decide if it either makes sense or not, based on the expected use cases.

My understanding of the i.MX6 case is that the hardware is configurable
enough to warrant the use of the Media controller API. Some patches
indicate there are choices to be made in data routing.

Steve: could you enlighten us on the topic, by e.g. doing media-ctl
--print-dot and sending the results to the list? What kind of different IP
blocks are there and what do they do? A pointer to hardware documentation
wouldn't hurt either (if it's available).
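
(As a rough illustration of what such a dump is built on: a small
userspace sketch that walks a media device's entities with
MEDIA_IOC_ENUM_ENTITIES, which is roughly what media-ctl does before
printing the graph. /dev/media0 is an assumption and error handling
is trimmed.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_entity_desc ent;
	int fd = open("/dev/media0", O_RDONLY);

	if (fd < 0)
		return 1;

	memset(&ent, 0, sizeof(ent));
	ent.id = MEDIA_ENT_ID_FLAG_NEXT;	/* start at the first entity */
	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
		printf("entity %u: %s (%u pads, %u links)\n",
		       ent.id, ent.name, ent.pads, ent.links);
		ent.id |= MEDIA_ENT_ID_FLAG_NEXT;	/* ask for the next one */
	}
	return 0;
}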

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 15:32                 ` Sakari Ailus
@ 2017-03-11 17:32                   ` Russell King - ARM Linux
  2017-03-11 18:08                   ` Steve Longerbeam
  1 sibling, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-11 17:32 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Mauro Carvalho Chehab, Hans Verkuil, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 1430 bytes --]

On Sat, Mar 11, 2017 at 05:32:29PM +0200, Sakari Ailus wrote:
> My understanding of the i.MX6 case is the hardware is configurable enough
> to warrant the use of the Media controller API. Some patches indicate
> there are choices to be made in data routing.

The iMX6 does have configurable data routing, but in some scenarios
(eg, when receiving bayer data) there's only one possible routing.

> Steve: could you enlighten us on the topic, by e.g. doing media-ctl
> --print-dot and sending the results to the list? What kind of different IP
> blocks are there and what do they do? A pointer to hardware documentation
> wouldn't hurt either (if it's available).

Attached for the imx219 camera.  Note that although the CSI2 block has
four outputs, each output is dedicated to a CSI virtual channel, so
they cannot be arbitrarily assigned without configuring the sensor.

Since the imx219 only produces bayer, the graph is also showing the
_only_ possible routing for the imx219 configured for CSI virtual
channel 0.

The iMX6 manuals are available on the 'net.

	https://community.nxp.com/docs/DOC-101840

There are several chapters that cover the capture side:

* MIPI CSI2
* IPU CSI2 gasket
* IPU

The IPU performs not only capture but also display.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

[-- Attachment #2: imx219.dot --]
[-- Type: text/vnd.graphviz, Size: 4557 bytes --]

digraph board {
	rankdir=TB
	n00000001 [label="{{<port0> 0 | <port1> 1} | ipu1_csi0_mux\n/dev/v4l-subdev0 | {<port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000001:port2 -> n00000044:port0
	n00000005 [label="{{<port0> 0 | <port1> 1} | ipu2_csi1_mux\n/dev/v4l-subdev1 | {<port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000005:port2 -> n00000068:port0 [style=dashed]
	n00000009 [label="{{<port0> 0 | <port1> 1} | ipu1_vdic\n/dev/v4l-subdev2 | {<port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000009:port2 -> n00000011:port0 [style=dashed]
	n0000000d [label="{{<port0> 0 | <port1> 1} | ipu2_vdic\n/dev/v4l-subdev3 | {<port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0000000d:port2 -> n00000027:port0 [style=dashed]
	n00000011 [label="{{<port0> 0} | ipu1_ic_prp\n/dev/v4l-subdev4 | {<port1> 1 | <port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000011:port1 -> n00000015:port0 [style=dashed]
	n00000011:port2 -> n0000001e:port0 [style=dashed]
	n00000015 [label="{{<port0> 0} | ipu1_ic_prpenc\n/dev/v4l-subdev5 | {<port1> 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000015:port1 -> n00000018 [style=dashed]
	n00000018 [label="ipu1_ic_prpenc capture\n/dev/video0", shape=box, style=filled, fillcolor=yellow]
	n0000001e [label="{{<port0> 0} | ipu1_ic_prpvf\n/dev/v4l-subdev6 | {<port1> 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n0000001e:port1 -> n00000021 [style=dashed]
	n00000021 [label="ipu1_ic_prpvf capture\n/dev/video1", shape=box, style=filled, fillcolor=yellow]
	n00000027 [label="{{<port0> 0} | ipu2_ic_prp\n/dev/v4l-subdev7 | {<port1> 1 | <port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000027:port1 -> n0000002b:port0 [style=dashed]
	n00000027:port2 -> n00000034:port0 [style=dashed]
	n0000002b [label="{{<port0> 0} | ipu2_ic_prpenc\n/dev/v4l-subdev8 | {<port1> 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n0000002b:port1 -> n0000002e [style=dashed]
	n0000002e [label="ipu2_ic_prpenc capture\n/dev/video2", shape=box, style=filled, fillcolor=yellow]
	n00000034 [label="{{<port0> 0} | ipu2_ic_prpvf\n/dev/v4l-subdev9 | {<port1> 1}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000034:port1 -> n00000037 [style=dashed]
	n00000037 [label="ipu2_ic_prpvf capture\n/dev/video3", shape=box, style=filled, fillcolor=yellow]
	n0000003d [label="{{<port1> 1} | imx219 0-0010\n/dev/v4l-subdev11 | {<port0> 0}}", shape=Mrecord, style=filled, fillcolor=green]
	n0000003d:port0 -> n00000058:port0
	n00000040 [label="{{} | imx219 pixel 0-0010\n/dev/v4l-subdev10 | {<port0> 0}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000040:port0 -> n0000003d:port1 [style=bold]
	n00000044 [label="{{<port0> 0} | ipu1_csi0\n/dev/v4l-subdev12 | {<port1> 1 | <port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000044:port2 -> n00000048
	n00000044:port1 -> n00000011:port0 [style=dashed]
	n00000044:port1 -> n00000009:port0 [style=dashed]
	n00000048 [label="ipu1_csi0 capture\n/dev/video4", shape=box, style=filled, fillcolor=yellow]
	n0000004e [label="{{<port0> 0} | ipu1_csi1\n/dev/v4l-subdev13 | {<port1> 1 | <port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0000004e:port2 -> n00000052 [style=dashed]
	n0000004e:port1 -> n00000011:port0 [style=dashed]
	n0000004e:port1 -> n00000009:port0 [style=dashed]
	n00000052 [label="ipu1_csi1 capture\n/dev/video5", shape=box, style=filled, fillcolor=yellow]
	n00000058 [label="{{<port0> 0} | imx6-mipi-csi2\n/dev/v4l-subdev14 | {<port1> 1 | <port2> 2 | <port3> 3 | <port4> 4}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000058:port1 -> n00000001:port0
	n00000058:port2 -> n0000004e:port0 [style=dashed]
	n00000058:port3 -> n0000005e:port0 [style=dashed]
	n00000058:port4 -> n00000005:port0 [style=dashed]
	n0000005e [label="{{<port0> 0} | ipu2_csi0\n/dev/v4l-subdev15 | {<port1> 1 | <port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n0000005e:port2 -> n00000062 [style=dashed]
	n0000005e:port1 -> n00000027:port0 [style=dashed]
	n0000005e:port1 -> n0000000d:port0 [style=dashed]
	n00000062 [label="ipu2_csi0 capture\n/dev/video6", shape=box, style=filled, fillcolor=yellow]
	n00000068 [label="{{<port0> 0} | ipu2_csi1\n/dev/v4l-subdev16 | {<port1> 1 | <port2> 2}}", shape=Mrecord, style=filled, fillcolor=green]
	n00000068:port2 -> n0000006c [style=dashed]
	n00000068:port1 -> n00000027:port0 [style=dashed]
	n00000068:port1 -> n0000000d:port0 [style=dashed]
	n0000006c [label="ipu2_csi1 capture\n/dev/video7", shape=box, style=filled, fillcolor=yellow]
}


^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 15:32                 ` Sakari Ailus
  2017-03-11 17:32                   ` Russell King - ARM Linux
@ 2017-03-11 18:08                   ` Steve Longerbeam
  2017-03-11 18:45                     ` Russell King - ARM Linux
  2017-03-11 20:26                     ` Pavel Machek
  1 sibling, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-11 18:08 UTC (permalink / raw)
  To: Sakari Ailus, Mauro Carvalho Chehab
  Cc: Hans Verkuil, Russell King - ARM Linux, robh+dt, mark.rutland,
	shawnguo, kernel, fabio.estevam, mchehab, nick, markus.heiser,
	p.zabel, laurent.pinchart+renesas, bparrot, geert, arnd,
	sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel

[-- Attachment #1: Type: text/plain, Size: 7102 bytes --]



On 03/11/2017 07:32 AM, Sakari Ailus wrote:
> Hi Mauro and Hans,
>
> On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
>> Em Sat, 11 Mar 2017 12:32:43 +0100
>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>
>>> On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
>>>> Em Fri, 10 Mar 2017 13:54:28 +0100
>>>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>>>
>>>>>> Devices that have complex pipeline that do essentially require using the
>>>>>> Media controller interface to configure them are out of that scope.
>>>>>>
>>>>>
>>>>> Way too much of how the MC devices should be used is in the minds of developers.
>>>>> There is a major lack for good detailed documentation, utilities, compliance
>>>>> test (really needed!) and libv4l plugins.
>>>>
>>>> Unfortunately, we merged an incomplete MC support at the Kernel. We knew
>>>> all the problems with MC-based drivers and V4L2 applications by the time
>>>> it was developed, and we requested Nokia developers (with was sponsoring MC
>>>> develoment, on that time) to work on a solution to allow standard V4L2
>>>> applications to work with MC based boards.
>>>>
>>>> Unfortunately, we took the decision to merge MC without that, because
>>>> Nokia was giving up on Linux development, and we didn't want to lose the
>>>> 2 years of discussions and work around it, as Nokia employers were leaving
>>>> the company. Also, on that time, there was already some patches floating
>>>> around adding backward support via libv4l. Unfortunately, those patches
>>>> were never finished.
>>>>
>>>> The net result is that MC was merged with some huge gaps, including
>>>> the lack of a proper solution for a generic V4L2 program to work
>>>> with V4L2 devices that use the subdev API.
>>>>
>>>> That was not that bad by then, as MC was used only on cell phones
>>>> that run custom-made applications.
>>>>
>>>> The reallity changed, as now, we have lots of low cost SoC based
>>>> boards, used for all sort of purposes. So, we need a quick solution
>>>> for it.
>>>>
>>>> In other words, while that would be acceptable support special apps
>>>> on really embedded systems, it is *not OK* for general purpose SoC
>>>> harware[1].
>>>>
>>>> [1] I'm calling "general purpose SoC harware" those ARM boards
>>>> like Raspberry Pi that are shipped to the mass and used by a wide
>>>> range of hobbyists and other people that just wants to run Linux on
>>>> ARM. It is possible to buy such boards for a very cheap price,
>>>> making them to be used not only on special projects, where a custom
>>>> made application could be interesting, but also for a lot of
>>>> users that just want to run Linux on a low cost ARM board, while
>>>> keeping using standard V4L2 apps, like "camorama".
>>>>
>>>> That's perhaps one of the reasons why it took a long time for us to
>>>> start receiving drivers upstream for such hardware: it is quite
>>>> intimidating and not logical to require developers to implement
>>>> on their drivers 2 complex APIs (MC, subdev) for those
>>>> hardware that most users won't care. From user's perspective,
>>>> being able to support generic applications like "camorama" and
>>>> "zbar" is all they want.
>>>>
>>>> In summary, I'm pretty sure we need to support standard V4L2
>>>> applications on boards like Raspberry Pi and those low-cost
>>>> SoC-based boards that are shipped to end users.
>>>>
>>>>> Anyway, regarding this specific patch and for this MC-aware driver: no, you
>>>>> shouldn't inherit controls from subdevs. It defeats the purpose.
>>>>
>>>> Sorry, but I don't agree with that. The subdev API is an optional API
>>>> (and even the MC API can be optional).
>>>>
>>>> I see the rationale for using MC and subdev APIs on cell phones,
>>>> ISV and other embedded hardware, as it will allow fine-tuning
>>>> the driver's support to allow providing the required quality for
>>>> certain custom-made applications. but on general SoC hardware,
>>>> supporting standard V4L2 applications is a need.
>>>>
>>>> Ok, perhaps supporting both subdev API and V4L2 API at the same
>>>> time doesn't make much sense. We could disable one in favor of the
>>>> other, either at compilation time or at runtime.
>>>
>>> Right. If the subdev API is disabled, then you have to inherit the subdev
>>> controls in the bridge driver (how else would you be able to access them?).
>>> And that's the usual case.
>>>
>>> If you do have the subdev API enabled, AND you use the MC, then the
>>> intention clearly is to give userspace full control and inheriting controls
>>> no longer makes any sense (and is highly confusing IMHO).
>>
>> I tend to agree with that.
>
> I agree as well.
>
> This is in line with how existing drivers behave, too.


Well, it sounds like there is consensus on this topic. I guess I'll
go ahead and remove the control inheritance support. I suppose
having a control appear in two places (subdev and video nodes) can
be confusing.

As for the configurability vs. ease-of-use debate, I added the
control inheritance to make it a little easier on the user, but,
as the dot graphs below will show, the user already needs quite
a lot of knowledge of the architecture in order to set up the
different pipelines. So perhaps the control inheritance is
rather pointless anyway.



>
>>
>>>>
>>>> This way, if the subdev API is disabled, the driver will be
>>>> functional for V4L2-based applications that don't support neither
>>>> MC or subdev APIs.
>>>
>>> I'm not sure if it makes sense for the i.MX driver to behave differently
>>> depending on whether the subdev API is enabled or disabled. I don't know
>>> enough of the hardware to tell if it would ever make sense to disable the
>>> subdev API.
>>
>> Yeah, I don't know enough about it either. The point is: this is
>> something that the driver maintainer and driver users should
>> decide if it either makes sense or not, based on the expected use cases.
>
> My understanding of the i.MX6 case is the hardware is configurable enough
> to warrant the use of the Media controller API. Some patches indicate
> there are choices to be made in data routing.
>
> Steve: could you enlighten us on the topic, by e.g. doing media-ctl
> --print-dot and sending the results to the list? What kind of different IP
> blocks are there and what do they do? A pointer to hardware documentation
> wouldn't hurt either (if it's available).

Wow, I didn't realize there was so little knowledge of the imx6
IPU capture architecture.

Yes, the imx6 definitely warrants the need for MC, as the dot graphs
will attest.

The graphs follow the actual hardware architecture of the IPU capture
blocks very closely. I.e., all the subdevs and links shown correspond
to actual hardware connections and sub-blocks.

Russell just provided a link to the imx6 reference manual, and a dot
graph for the imx219-based platform.

Also I've added quite a lot of detail to the media doc at
Documentation/media/v4l-drivers/imx.rst.

The dot graphs for the SabreSD, SabreLite, and SabreAuto reference
platforms are attached. They are generated from the most recent
(version 5) imx-media driver.

Steve

[-- Attachment #2: sabresd.dot --]
[-- Type: application/msword-template, Size: 4432 bytes --]

[-- Attachment #3: sabrelite.dot --]
[-- Type: application/msword-template, Size: 4432 bytes --]

[-- Attachment #4: sabreauto.dot --]
[-- Type: application/msword-template, Size: 4062 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 18:08                   ` Steve Longerbeam
@ 2017-03-11 18:45                     ` Russell King - ARM Linux
  2017-03-11 18:54                       ` Steve Longerbeam
  2017-03-11 20:26                     ` Pavel Machek
  1 sibling, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-11 18:45 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Sakari Ailus, Mauro Carvalho Chehab, mark.rutland,
	andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick, songjun.wu,
	Hans Verkuil, pavel, robert.jarzmik, devel, markus.heiser,
	laurent.pinchart+renesas, shuah, geert, linux-media, devicetree,
	kernel, arnd, mchehab, bparrot, robh+dt, horms+renesas,
	tiffany.lin, linux-arm-kernel, niklas.soderlund+renesas, gregkh,
	linux-kernel, jean-christophe.trotin, p.zabel, fabio.estevam,
	shawnguo, sudipm.mukherjee

On Sat, Mar 11, 2017 at 10:08:23AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 07:32 AM, Sakari Ailus wrote:
> >Hi Mauro and Hans,
> >
> >On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
> >>Em Sat, 11 Mar 2017 12:32:43 +0100
> >>Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >>
> >>>On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> >>>>Em Fri, 10 Mar 2017 13:54:28 +0100
> >>>>Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >>>>
> >>>>>>Devices that have complex pipeline that do essentially require using the
> >>>>>>Media controller interface to configure them are out of that scope.
> >>>>>>
> >>>>>
> >>>>>Way too much of how the MC devices should be used is in the minds of developers.
> >>>>>There is a major lack for good detailed documentation, utilities, compliance
> >>>>>test (really needed!) and libv4l plugins.
> >>>>
> >>>>Unfortunately, we merged an incomplete MC support at the Kernel. We knew
> >>>>all the problems with MC-based drivers and V4L2 applications by the time
> >>>>it was developed, and we requested Nokia developers (with was sponsoring MC
> >>>>develoment, on that time) to work on a solution to allow standard V4L2
> >>>>applications to work with MC based boards.
> >>>>
> >>>>Unfortunately, we took the decision to merge MC without that, because
> >>>>Nokia was giving up on Linux development, and we didn't want to lose the
> >>>>2 years of discussions and work around it, as Nokia employers were leaving
> >>>>the company. Also, on that time, there was already some patches floating
> >>>>around adding backward support via libv4l. Unfortunately, those patches
> >>>>were never finished.
> >>>>
> >>>>The net result is that MC was merged with some huge gaps, including
> >>>>the lack of a proper solution for a generic V4L2 program to work
> >>>>with V4L2 devices that use the subdev API.
> >>>>
> >>>>That was not that bad by then, as MC was used only on cell phones
> >>>>that run custom-made applications.
> >>>>
> >>>>The reallity changed, as now, we have lots of low cost SoC based
> >>>>boards, used for all sort of purposes. So, we need a quick solution
> >>>>for it.
> >>>>
> >>>>In other words, while that would be acceptable support special apps
> >>>>on really embedded systems, it is *not OK* for general purpose SoC
> >>>>harware[1].
> >>>>
> >>>>[1] I'm calling "general purpose SoC harware" those ARM boards
> >>>>like Raspberry Pi that are shipped to the mass and used by a wide
> >>>>range of hobbyists and other people that just wants to run Linux on
> >>>>ARM. It is possible to buy such boards for a very cheap price,
> >>>>making them to be used not only on special projects, where a custom
> >>>>made application could be interesting, but also for a lot of
> >>>>users that just want to run Linux on a low cost ARM board, while
> >>>>keeping using standard V4L2 apps, like "camorama".
> >>>>
> >>>>That's perhaps one of the reasons why it took a long time for us to
> >>>>start receiving drivers upstream for such hardware: it is quite
> >>>>intimidating and not logical to require developers to implement
> >>>>on their drivers 2 complex APIs (MC, subdev) for those
> >>>>hardware that most users won't care. From user's perspective,
> >>>>being able to support generic applications like "camorama" and
> >>>>"zbar" is all they want.
> >>>>
> >>>>In summary, I'm pretty sure we need to support standard V4L2
> >>>>applications on boards like Raspberry Pi and those low-cost
> >>>>SoC-based boards that are shipped to end users.
> >>>>
> >>>>>Anyway, regarding this specific patch and for this MC-aware driver: no, you
> >>>>>shouldn't inherit controls from subdevs. It defeats the purpose.
> >>>>
> >>>>Sorry, but I don't agree with that. The subdev API is an optional API
> >>>>(and even the MC API can be optional).
> >>>>
> >>>>I see the rationale for using MC and subdev APIs on cell phones,
> >>>>ISV and other embedded hardware, as it will allow fine-tuning
> >>>>the driver's support to allow providing the required quality for
> >>>>certain custom-made applications. but on general SoC hardware,
> >>>>supporting standard V4L2 applications is a need.
> >>>>
> >>>>Ok, perhaps supporting both subdev API and V4L2 API at the same
> >>>>time doesn't make much sense. We could disable one in favor of the
> >>>>other, either at compilation time or at runtime.
> >>>
> >>>Right. If the subdev API is disabled, then you have to inherit the subdev
> >>>controls in the bridge driver (how else would you be able to access them?).
> >>>And that's the usual case.
> >>>
> >>>If you do have the subdev API enabled, AND you use the MC, then the
> >>>intention clearly is to give userspace full control and inheriting controls
> >>>no longer makes any sense (and is highly confusing IMHO).
> >>
> >>I tend to agree with that.
> >
> >I agree as well.
> >
> >This is in line with how existing drivers behave, too.
> 
> Well, sounds like there is consensus on this topic. I guess I'll
> go ahead and remove the control inheritance support. I suppose
> having a control appear in two places (subdev and video nodes) can
> be confusing.

I would say _don't_ do that until there are tools/libraries in place
that are able to support controlling subdevs, otherwise it's just
going to be another reason for me to walk away from this stuff, and
stick with a version that does work sensibly.

> As for the configurability vs. ease-of-use debate, I added the
> control inheritance to make it a little easier on the user, but,
> as the dot graphs below will show, the user already needs quite
> a lot of knowledge of the architecture already, in order to setup
> the different pipelines. So perhaps the control inheritance is
> rather pointless anyway.

I really don't think expecting the user to understand and configure
the pipeline is a sane way forward.  Think about it - should the
user need to know that, because they have a bayer-only CSI data
source, there is only one possible path, and that if they try to
configure a different path, things will just error out?

For the case of imx219 connected to iMX6, it really is as simple as
"there is only one possible path" and all the complexity of the media
interfaces/subdevs is completely unnecessary.  Every other block in
the graph is just noise.

The fact is that these dot graphs show a complex picture, but reality
is somewhat different - there are only relatively few paths available,
depending on the connected source, and the rest of the paths are
completely useless.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 18:45                     ` Russell King - ARM Linux
@ 2017-03-11 18:54                       ` Steve Longerbeam
  2017-03-11 18:59                         ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-11 18:54 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Sakari Ailus, Mauro Carvalho Chehab, mark.rutland,
	andrew-ct.chen, minghsiu.tsai, sakari.ailus, nick, songjun.wu,
	Hans Verkuil, pavel, robert.jarzmik, devel, markus.heiser,
	laurent.pinchart+renesas, shuah, geert, linux-media, devicetree,
	kernel, arnd, mchehab, bparrot, robh+dt, horms+renesas,
	tiffany.lin, linux-arm-kernel, niklas.soderlund+renesas, gregkh,
	linux-kernel, jean-christophe.trotin, p.zabel, fabio.estevam,
	shawnguo, sudipm.mukherjee



On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 10:08:23AM -0800, Steve Longerbeam wrote:
>> On 03/11/2017 07:32 AM, Sakari Ailus wrote:
>>> Hi Mauro and Hans,
>>>
>>> On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
>>>> Em Sat, 11 Mar 2017 12:32:43 +0100
>>>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>>>
>>>>> On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
>>>>>> Em Fri, 10 Mar 2017 13:54:28 +0100
>>>>>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>>>>>
>>>>>>>> Devices that have complex pipeline that do essentially require using the
>>>>>>>> Media controller interface to configure them are out of that scope.
>>>>>>>>
>>>>>>>
>>>>>>> Way too much of how the MC devices should be used is in the minds of developers.
>>>>>>> There is a major lack for good detailed documentation, utilities, compliance
>>>>>>> test (really needed!) and libv4l plugins.
>>>>>>
>>>>>> Unfortunately, we merged an incomplete MC support at the Kernel. We knew
>>>>>> all the problems with MC-based drivers and V4L2 applications by the time
>>>>>> it was developed, and we requested Nokia developers (with was sponsoring MC
>>>>>> develoment, on that time) to work on a solution to allow standard V4L2
>>>>>> applications to work with MC based boards.
>>>>>>
>>>>>> Unfortunately, we took the decision to merge MC without that, because
>>>>>> Nokia was giving up on Linux development, and we didn't want to lose the
>>>>>> 2 years of discussions and work around it, as Nokia employers were leaving
>>>>>> the company. Also, on that time, there was already some patches floating
>>>>>> around adding backward support via libv4l. Unfortunately, those patches
>>>>>> were never finished.
>>>>>>
>>>>>> The net result is that MC was merged with some huge gaps, including
>>>>>> the lack of a proper solution for a generic V4L2 program to work
>>>>>> with V4L2 devices that use the subdev API.
>>>>>>
>>>>>> That was not that bad by then, as MC was used only on cell phones
>>>>>> that run custom-made applications.
>>>>>>
>>>>>> The reallity changed, as now, we have lots of low cost SoC based
>>>>>> boards, used for all sort of purposes. So, we need a quick solution
>>>>>> for it.
>>>>>>
>>>>>> In other words, while that would be acceptable support special apps
>>>>>> on really embedded systems, it is *not OK* for general purpose SoC
>>>>>> harware[1].
>>>>>>
>>>>>> [1] I'm calling "general purpose SoC harware" those ARM boards
>>>>>> like Raspberry Pi that are shipped to the mass and used by a wide
>>>>>> range of hobbyists and other people that just wants to run Linux on
>>>>>> ARM. It is possible to buy such boards for a very cheap price,
>>>>>> making them to be used not only on special projects, where a custom
>>>>>> made application could be interesting, but also for a lot of
>>>>>> users that just want to run Linux on a low cost ARM board, while
>>>>>> keeping using standard V4L2 apps, like "camorama".
>>>>>>
>>>>>> That's perhaps one of the reasons why it took a long time for us to
>>>>>> start receiving drivers upstream for such hardware: it is quite
>>>>>> intimidating and not logical to require developers to implement
>>>>>> on their drivers 2 complex APIs (MC, subdev) for those
>>>>>> hardware that most users won't care. From user's perspective,
>>>>>> being able to support generic applications like "camorama" and
>>>>>> "zbar" is all they want.
>>>>>>
>>>>>> In summary, I'm pretty sure we need to support standard V4L2
>>>>>> applications on boards like Raspberry Pi and those low-cost
>>>>>> SoC-based boards that are shipped to end users.
>>>>>>
>>>>>>> Anyway, regarding this specific patch and for this MC-aware driver: no, you
>>>>>>> shouldn't inherit controls from subdevs. It defeats the purpose.
>>>>>>
>>>>>> Sorry, but I don't agree with that. The subdev API is an optional API
>>>>>> (and even the MC API can be optional).
>>>>>>
>>>>>> I see the rationale for using MC and subdev APIs on cell phones,
>>>>>> ISV and other embedded hardware, as it will allow fine-tuning
>>>>>> the driver's support to allow providing the required quality for
>>>>>> certain custom-made applications. but on general SoC hardware,
>>>>>> supporting standard V4L2 applications is a need.
>>>>>>
>>>>>> Ok, perhaps supporting both subdev API and V4L2 API at the same
>>>>>> time doesn't make much sense. We could disable one in favor of the
>>>>>> other, either at compilation time or at runtime.
>>>>>
>>>>> Right. If the subdev API is disabled, then you have to inherit the subdev
>>>>> controls in the bridge driver (how else would you be able to access them?).
>>>>> And that's the usual case.
>>>>>
>>>>> If you do have the subdev API enabled, AND you use the MC, then the
>>>>> intention clearly is to give userspace full control and inheriting controls
>>>>> no longer makes any sense (and is highly confusing IMHO).
>>>>
>>>> I tend to agree with that.
>>>
>>> I agree as well.
>>>
>>> This is in line with how existing drivers behave, too.
>>
>> Well, sounds like there is consensus on this topic. I guess I'll
>> go ahead and remove the control inheritance support. I suppose
>> having a control appear in two places (subdev and video nodes) can
>> be confusing.
>
> I would say _don't_ do that until there are tools/libraries in place
> that are able to support controlling subdevs, otherwise it's just
> going to be another reason for me to walk away from this stuff, and
> stick with a version that does work sensibly.
>
>> As for the configurability vs. ease-of-use debate, I added the
>> control inheritance to make it a little easier on the user, but,
>> as the dot graphs below will show, the user already needs quite
>> a lot of knowledge of the architecture already, in order to setup
>> the different pipelines. So perhaps the control inheritance is
>> rather pointless anyway.
>
> I really don't think expecting the user to understand and configure
> the pipeline is a sane way forward.  Think about it - should the
> user need to know that, because they have a bayer-only CSI data
> source, that there is only one path possible, and if they try to
> configure a different path, then things will just error out?
>
> For the case of imx219 connected to iMX6, it really is as simple as
> "there is only one possible path" and all the complexity of the media
> interfaces/subdevs is completely unnecessary.  Every other block in
> the graph is just noise.
>
> The fact is that these dot graphs show a complex picture, but reality
> is somewhat different - there's only relatively few paths available
> depending on the connected source and the rest of the paths are
> completely useless.
>

I totally disagree there. Raw bayer requires passthrough, yes, but for
all other media bus formats on a MIPI CSI-2 bus, and all other media
bus formats on 8-bit parallel buses, the conversion pipelines can be
used for scaling, CSC, rotation, and motion-compensated de-interlacing.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 18:54                       ` Steve Longerbeam
@ 2017-03-11 18:59                         ` Russell King - ARM Linux
  2017-03-11 19:06                           ` Steve Longerbeam
  2017-03-12  3:31                           ` Steve Longerbeam
  0 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-11 18:59 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	Hans Verkuil, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >I really don't think expecting the user to understand and configure
> >the pipeline is a sane way forward.  Think about it - should the
> >user need to know that, because they have a bayer-only CSI data
> >source, that there is only one path possible, and if they try to
> >configure a different path, then things will just error out?
> >
> >For the case of imx219 connected to iMX6, it really is as simple as
> >"there is only one possible path" and all the complexity of the media
> >interfaces/subdevs is completely unnecessary.  Every other block in
> >the graph is just noise.
> >
> >The fact is that these dot graphs show a complex picture, but reality
> >is somewhat different - there's only relatively few paths available
> >depending on the connected source and the rest of the paths are
> >completely useless.
> >
> 
> I totally disagree there. Raw bayer requires passthrough yes, but for
> all other media bus formats on a mipi csi-2 bus, and all other media
> bus formats on 8-bit parallel buses, the conersion pipelines can be
> used for scaling, CSC, rotation, and motion-compensated de-interlacing.

... which only makes sense _if_ your source can produce those formats.
We don't actually disagree on that.

Let me re-state.  If the source can _only_ produce bayer, then there is
_only_ _one_ possible path, and all the overhead of the media controller
stuff is totally unnecessary.

Or, are you going to tell me that the user should have the right to
configure paths through the iMX6 hardware that are not permitted by the
iMX6 manuals for the data format being produced by the sensor?

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 18:59                         ` Russell King - ARM Linux
@ 2017-03-11 19:06                           ` Steve Longerbeam
  2017-03-11 20:41                             ` Russell King - ARM Linux
  2017-03-12  3:31                           ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-11 19:06 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	Hans Verkuil, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee



On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>> I really don't think expecting the user to understand and configure
>>> the pipeline is a sane way forward.  Think about it - should the
>>> user need to know that, because they have a bayer-only CSI data
>>> source, that there is only one path possible, and if they try to
>>> configure a different path, then things will just error out?
>>>
>>> For the case of imx219 connected to iMX6, it really is as simple as
>>> "there is only one possible path" and all the complexity of the media
>>> interfaces/subdevs is completely unnecessary.  Every other block in
>>> the graph is just noise.
>>>
>>> The fact is that these dot graphs show a complex picture, but reality
>>> is somewhat different - there's only relatively few paths available
>>> depending on the connected source and the rest of the paths are
>>> completely useless.
>>>
>>
>> I totally disagree there. Raw bayer requires passthrough yes, but for
>> all other media bus formats on a mipi csi-2 bus, and all other media
>> bus formats on 8-bit parallel buses, the conersion pipelines can be
>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>
> ... which only makes sense _if_ your source can produce those formats.
> We don't actually disagree on that.
>
> Let me re-state.  If the source can _only_ produce bayer, then there is
> _only_ _one_ possible path, and all the overhead of the media controller
> stuff is totally unnecessary.
>
> Or, are you going to tell me that the user should have the right to
> configure paths through the iMX6 hardware that are not permitted by the
> iMX6 manuals for the data format being produced by the sensor?
>

Russell, I'm not following you. The imx6 pipelines allow for many
different sources, not just the imx219 that only outputs bayer. You
seem to be saying that those other pipelines should not be present
because they don't support raw bayer.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 18:08                   ` Steve Longerbeam
  2017-03-11 18:45                     ` Russell King - ARM Linux
@ 2017-03-11 20:26                     ` Pavel Machek
  2017-03-11 20:33                       ` Steve Longerbeam
  1 sibling, 1 reply; 228+ messages in thread
From: Pavel Machek @ 2017-03-11 20:26 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Sakari Ailus, Mauro Carvalho Chehab, Hans Verkuil,
	Russell King - ARM Linux, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel

Hi!

> >>I tend to agree with that.
> >
> >I agree as well.
> >
> >This is in line with how existing drivers behave, too.
> 
> 
> Well, sounds like there is consensus on this topic. I guess I'll
> go ahead and remove the control inheritance support. I suppose
> having a control appear in two places (subdev and video nodes) can
> be confusing.

I guess that's the way to go. It is impossible to change userland APIs
once the patch is merged...
								Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 20:26                     ` Pavel Machek
@ 2017-03-11 20:33                       ` Steve Longerbeam
  0 siblings, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-11 20:33 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Sakari Ailus, Mauro Carvalho Chehab, Hans Verkuil,
	Russell King - ARM Linux, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel



On 03/11/2017 12:26 PM, Pavel Machek wrote:
> Hi!
>
>>>> I tend to agree with that.
>>>
>>> I agree as well.
>>>
>>> This is in line with how existing drivers behave, too.
>>
>>
>> Well, sounds like there is consensus on this topic. I guess I'll
>> go ahead and remove the control inheritance support. I suppose
>> having a control appear in two places (subdev and video nodes) can
>> be confusing.
>
> I guess that's way to go. It is impossible to change userland APIs
> once the patch is merged...

Ok, not including myself, it's now 4 in favor of removing, 1 against...

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 19:06                           ` Steve Longerbeam
@ 2017-03-11 20:41                             ` Russell King - ARM Linux
  0 siblings, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-11 20:41 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	Hans Verkuil, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

On Sat, Mar 11, 2017 at 11:06:55AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >>>I really don't think expecting the user to understand and configure
> >>>the pipeline is a sane way forward.  Think about it - should the
> >>>user need to know that, because they have a bayer-only CSI data
> >>>source, that there is only one path possible, and if they try to
> >>>configure a different path, then things will just error out?
> >>>
> >>>For the case of imx219 connected to iMX6, it really is as simple as
> >>>"there is only one possible path" and all the complexity of the media
> >>>interfaces/subdevs is completely unnecessary.  Every other block in
> >>>the graph is just noise.
> >>>
> >>>The fact is that these dot graphs show a complex picture, but reality
> >>>is somewhat different - there's only relatively few paths available
> >>>depending on the connected source and the rest of the paths are
> >>>completely useless.
> >>>
> >>
> >>I totally disagree there. Raw bayer requires passthrough yes, but for
> >>all other media bus formats on a mipi csi-2 bus, and all other media
> >>bus formats on 8-bit parallel buses, the conversion pipelines can be
> >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> >
> >... which only makes sense _if_ your source can produce those formats.
> >We don't actually disagree on that.
> >
> >Let me re-state.  If the source can _only_ produce bayer, then there is
> >_only_ _one_ possible path, and all the overhead of the media controller
> >stuff is totally unnecessary.
> >
> >Or, are you going to tell me that the user should have the right to
> >configure paths through the iMX6 hardware that are not permitted by the
> >iMX6 manuals for the data format being produced by the sensor?
> >
> 
> Russell, I'm not following you. The imx6 pipelines allow for many
> different sources, not just the imx219 that only outputs bayer. You
> seem to be saying that those other pipelines should not be present
> because they don't support raw bayer.

What I'm saying is this:

_If_ you have a sensor connected that can _only_ produce bayer, _then_
there is only _one_ possible path through the imx6 pipelines that is
legal.  Offering other paths from the source is noise, because every
other path can't be used with a bayer source.

_If_ you have a sensor connected which can produce RGB or YUV formats,
_then_ other paths are available, and pipeline needs to be configured
to select the appropriate path with the desired features.

So, in the case of a bayer source, offering the user the chance to
manually configure the _single_ allowable route through the tree is
needless complexity.  Forcing the user to have to use the subdev
interfaces to configure the camera is needless complexity.  Such a
source can only ever be used with one single /dev/video* node.

Moreover, this requires user education, and this brings me on to much
larger concerns.  We seem to be saying "this is too complicated, the
user can work it out!"

We've been here with VGA devices.  Remember the old days when you had
to put mode lines into the Xorg.conf, or go through a lengthy setup
process to get X running?  It wasn't very user-friendly.  We seem to
be making the same mistake here.

Usability comes first and foremost - throwing complex problems at
users is not a solution.

Now, given that this media control API has been around for several
years, and the userspace side of the story has not really improved
(according to Mauro, several attempts have been made, every single
attempt so far has failed, even for specific hardware) it seems to me
that using the media control API is a very poor choice for the very
simple reason that _no one_ knows how to configure a system using it.
Hans' thought of getting some funding to look at this aspect is a
good idea, but I really wonder, given the history so far, how long
this will take - and whether it will _ever_ get solved.

If it doesn't get solved, then we're stuck with quite a big problem.

So, I suggest that we don't merge any further media-controller based
kernel code _until_ we have the userspace side sorted out.  Merging
the kernel side drivers when we don't even know that the userspace
API is functionally usable in userspace beyond test programs is
utterly absurd - what if it turns out that no one can write v4l
plugins that sort out the issues that have been highlighted throughout
these discussions.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 13:14               ` Mauro Carvalho Chehab
  2017-03-11 15:32                 ` Sakari Ailus
@ 2017-03-11 21:30                 ` Pavel Machek
  1 sibling, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-11 21:30 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

[-- Attachment #1: Type: text/plain, Size: 1008 bytes --]

Hi!

> > > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > > time doesn't make much sense. We could disable one in favor of the
> > > other, either at compilation time or at runtime.  
> > 
> > Right. If the subdev API is disabled, then you have to inherit the subdev
> > controls in the bridge driver (how else would you be able to access them?).
> > And that's the usual case.
> > 
> > If you do have the subdev API enabled, AND you use the MC, then the
> > intention clearly is to give userspace full control and inheriting controls
> > no longer makes any sense (and is highly confusing IMHO).
> 
> I tend to agree with that.

Well, having a different userspace interface depending on config options
is strange. I believe the right solution is to make complex drivers
depend on CONFIG_VIDEO_V4L2_SUBDEV_API...
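
For reference, a rough kernel-side sketch of what "inheriting the subdev
controls in the bridge driver" amounts to (the struct and field names here
are only illustrative, not taken from the i.MX6 driver, and this assumes
the v4l2_ctrl_add_handler() form current at the time of this thread):

#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

struct bridge_priv {
	struct v4l2_ctrl_handler ctrl_hdl; /* handler exposed on /dev/video* */
	struct v4l2_subdev *sensor_sd;     /* the bound sensor subdev        */
};

/* Merge every control of the sensor's handler into the bridge handler,
 * so a plain V4L2 application can reach them on the video node even
 * when the subdev API is disabled (a NULL filter takes all controls). */
static int bridge_inherit_sensor_controls(struct bridge_priv *priv)
{
	return v4l2_ctrl_add_handler(&priv->ctrl_hdl,
				     priv->sensor_sd->ctrl_handler,
				     NULL);
}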

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 11:25                       ` Mauro Carvalho Chehab
@ 2017-03-11 21:52                         ` Pavel Machek
  2017-03-11 23:14                         ` Russell King - ARM Linux
  2017-03-13 12:46                         ` Sakari Ailus
  2 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-11 21:52 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Hans Verkuil, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 3342 bytes --]

Hi!

> > > The rationale is that we should support the simplest use cases first.
> > > 
> > > In the case of the first MC-based driver (and several subsequent
> > > ones), the simplest use case required MC, as it was meant to support
> > > a custom-made sophisticated application that required fine control
> > > on each component of the pipeline and to allow their advanced
> > > proprietary AAA userspace-based algorithms to work.  
> > 
> > The first MC based driver (omap3isp) supports what the hardware can do, it
> > does not support applications as such.
> 
> All media drivers support a subset of what the hardware can do. The
> question is if such subset covers the use cases or not.
> 
> The current MC-based drivers (except for uvc) took a patch to offer a
> more advanced API, to allow direct control to each IP module, as it was
> said, by the time we merged the OMAP3 driver, that, for the N9/N900 camera
> to work, it was mandatory to access the pipeline's individual components.
> 
> Such an approach requires that some userspace software have knowledge
> of some hardware details, in order to set up pipelines and send controls
> to the right components. That makes it really hard to have a generic,
> user-friendly application use such devices.

Well. Even if you propagate controls to the right components, there's
still a lot the application needs to know about the camera
subsystem. Focal lengths, for example. Speed of the focus
coil. Whether or not aperture controls are available. If they are not,
what the fixed aperture is.

Dunno. Knowing what control to apply on what subdevice does not look
like the hardest part of a camera driver. Yes, it would be a tiny bit
easier if I had just one device to deal with, but... fcam-dev
has circa 20,000 lines of C++ code.

> In the case of V4L2 controls, when there's no subdev API, the main
> driver (e. g. the driver that creates the /dev/video nodes) sends a
> multicast message to all bound I2C drivers. The driver(s) that need 
> them handle it. When the same control may be implemented on different
> drivers, the main driver sends a unicast message to just one
> driver[1].

Dunno. It's quite common to have two flashes. In that case, will
the application control both at the same time?
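
For reference, the broadcast vs. unicast mechanism described in the quote
above maps onto the standard v4l2-device helpers roughly as below; this is
only a sketch, the s_stream op is used just for concreteness, and the
function names are made up:

#include <media/v4l2-device.h>
#include <media/v4l2-subdev.h>

/* "multicast": call the op on every bound subdev that implements it
 * (group id 0 means no group filtering). */
static void bridge_call_all(struct v4l2_device *v4l2_dev)
{
	v4l2_device_call_all(v4l2_dev, 0, video, s_stream, 1);
}

/* "unicast": call only the one subdev the bridge driver picked. */
static int bridge_call_one(struct v4l2_subdev *sd)
{
	return v4l2_subdev_call(sd, video, s_stream, 1);
}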

> There's nothing wrong with this approach: it works, it is simpler,
> it is generic. So, if it covers most use cases, why not allowing it
> for usecases where a finer control is not a requirement?

Because the resulting interface is quite ugly?

> That's why I'm saying that I'm OK on merging any patch that would allow
> setting controls via the /dev/video interface on MC-based drivers when
> compiled without subdev API. I may also consider merging patches allowing

So... userspace will now have to detect whether the subdev API is available
or not, and access the hardware in different ways?

> > The original plan was and continues to be sound, it's just that there have
> > always been too few hands to implement it. :-(
> 
> If there are no people to implement a plan, it doesn't matter how good
> the plan is, it won't work.

If the plan is good, someone will do it.
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 11:25                       ` Mauro Carvalho Chehab
  2017-03-11 21:52                         ` Pavel Machek
@ 2017-03-11 23:14                         ` Russell King - ARM Linux
  2017-03-12  0:19                           ` Steve Longerbeam
  2017-03-12 21:29                           ` Pavel Machek
  2017-03-13 12:46                         ` Sakari Ailus
  2 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-11 23:14 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Hans Verkuil, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> This situation has been there since 2009. If I remember well, you tried to write
> such a generic plugin in the past, but never finished it, apparently because
> it is too complex. Others tried too over the years.
> 
> The last trial was done by Jacek, trying to cover just the exynos4 driver.
> Yet, even such a limited-scope plugin was not good enough, as it was never
> merged upstream. Currently, there are no such plugins upstream.
> 
> If we can't even merge a plugin that solves it for just *one* driver,
> I have no hope that we'll be able to do it for the generic case.

This is what really worries me right now about the current proposal for
iMX6.  What's being proposed is to make the driver exclusively MC-based.

What that means is that existing applications are _not_ going to work
until we have some answer for libv4l2, and from what you've said above,
it seems that this has been attempted multiple times over the last _8_
years, and each time it's failed.

When thinking about it, it's quite obvious why merely trying to push
the problem into userspace fails:

  If we assert that the kernel does not have sufficient information to
  make decisions about how to route and control the hardware, then under
  what scenario does a userspace library have sufficient information to
  make those decisions?

So, merely moving the problem into userspace doesn't solve anything.

Loading the problem onto the user in the hope that the user knows
enough to properly configure it also doesn't work - who is going to
educate the user about the various quirks of the hardware they're
dealing with?

I don't think pushing it into platform specific libv4l2 plugins works
either - as you say above, even just trying to develop a plugin for
exynos4 seems to have failed, so what makes us think that developing
a plugin for iMX6 is going to succeed?  Actually, that's exactly where
the problem lies.

Is "iMX6 plugin" even right?  That only deals with the complexity of
one part of the system - what about the source device, which as we
have already seen can be a tuner or a camera with its own multiple
sub-devices.  What if there's a video improvement chip in the chain
as well - how is a "generic" iMX6 plugin supposed to know how to deal
with that?

It seems to me that what's required is not an "iMX6 plugin" but a
separate plugin for each platform - or worse.  Consider boards like
the Raspberry Pi, where users can attach a variety of cameras.  I
don't think this approach scales.  (This is relevant: the iMX6 board
I have here has a RPi compatible connector for a MIPI CSI2 camera.
In fact, the IMX219 module I'm using _is_ a RPi camera, it's the RPi
NoIR Camera V2.)

The iMX6 problem is way larger than just "which subdev do I need to
configure for control X" - if you look at the dot graphs both Steve
and myself have supplied, you'll notice that there are eight (yes,
8) video capture devices.  Let's say that we can solve the subdev
problem in libv4l2.  There's another problem lurking here - libv4l2
is /dev/video* based.  How does it know which /dev/video* device to
open?

We don't open by sensor, we open by /dev/video*.  In my case, there
is only one correct /dev/video* node for the attached sensor, the
other seven are totally irrelevant.  For other situations, there may
be the choice of three functional /dev/video* nodes.
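
To make the problem concrete, here is a minimal userspace sketch (the
device paths and node count are assumptions): VIDIOC_QUERYCAP on each node
only yields driver and card strings, nothing that maps the attached sensor,
or its CSI-2 virtual channel, to a particular /dev/video* node:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	char path[32];
	int i;

	for (i = 0; i < 8; i++) {
		struct v4l2_capability cap;
		int fd;

		snprintf(path, sizeof(path), "/dev/video%d", i);
		fd = open(path, O_RDWR);
		if (fd < 0)
			continue;
		memset(&cap, 0, sizeof(cap));
		if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
			printf("%s: driver=%s card=%s\n", path,
			       (char *)cap.driver, (char *)cap.card);
		close(fd);
	}
	return 0;
}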

Right now, for my case, there isn't the information exported from the
kernel to know which is the correct one, since that requires knowing
which virtual channel the data is going to be sent over the CSI2
interface.  That information is not present in DT, or anywhere.  It
only comes from system knowledge - in my case, I know that the IMX219
is currently being configured to use virtual channel 0.  SMIA cameras
are also configurable.  Then there's CSI2 cameras that can produce
different formats via different virtual channels (eg, JPEG compressed
image on one channel while streaming a RGB image via the other channel.)

Whether you can use one or three in _this_ scenario depends on the
source format - again, another bit of implementation specific
information that userspace would need to know.  Kernel space should
know that, and it's discoverable by testing which paths accept the
source format - but that doesn't tell you ahead of time which
/dev/video* node to open.

So, the problem space we have here is absolutely huge, and merely
having a plugin that activates when you open a /dev/video* node
really doesn't solve it.

All in all, I really don't think "lets hope someone writes a v4l2
plugin to solve it" is ever going to be successful.  I don't even
see that there will ever be a userspace application that is anything
more than a representation of the dot graphs that users can use to
manually configure the capture system with system knowledge.

I think everyone needs to take a step back and think long and hard
about this from the system usability perspective - I seriously
doubt that we will ever see any kind of solution to this if we
continue to progress with "we'll sort it in userspace some day."

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 23:14                         ` Russell King - ARM Linux
@ 2017-03-12  0:19                           ` Steve Longerbeam
  2017-03-12 21:29                           ` Pavel Machek
  1 sibling, 0 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-12  0:19 UTC (permalink / raw)
  To: Russell King - ARM Linux, Mauro Carvalho Chehab
  Cc: Sakari Ailus, Hans Verkuil, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski



On 03/11/2017 03:14 PM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
>> This situation has been there since 2009. If I remember well, you tried to write
>> such a generic plugin in the past, but never finished it, apparently because
>> it is too complex. Others tried too over the years.
>>
>> The last trial was done by Jacek, trying to cover just the exynos4 driver.
>> Yet, even such a limited-scope plugin was not good enough, as it was never
>> merged upstream. Currently, there are no such plugins upstream.
>>
>> If we can't even merge a plugin that solves it for just *one* driver,
>> I have no hope that we'll be able to do it for the generic case.
>
> This is what really worries me right now about the current proposal for
> iMX6.  What's being proposed is to make the driver exclusively MC-based.
>

I don't see anything wrong with that.


> What that means is that existing applications are _not_ going to work
> until we have some answer for libv4l2, and from what you've said above,
> it seems that this has been attempted multiple times over the last _8_
> years, and each time it's failed.
>
> When thinking about it, it's quite obvious why merely trying to push
> the problem into userspace fails:
>
>   If we assert that the kernel does not have sufficient information to
>   make decisions about how to route and control the hardware, then under
>   what scenario does a userspace library have sufficient information to
>   make those decisions?
>
> So, merely moving the problem into userspace doesn't solve anything.
>
> Loading the problem onto the user in the hope that the user knows
> enough to properly configure it also doesn't work - who is going to
> educate the user about the various quirks of the hardware they're
> dealing with?

Documentation?

>
> I don't think pushing it into platform specific libv4l2 plugins works
> either - as you say above, even just trying to develop a plugin for
> exynos4 seems to have failed, so what makes us think that developing
> a plugin for iMX6 is going to succeed?  Actually, that's exactly where
> the problem lies.
>
> Is "iMX6 plugin" even right?  That only deals with the complexity of
> one part of the system - what about the source device, which as we
> have already seen can be a tuner or a camera with its own multiple
> sub-devices.  What if there's a video improvement chip in the chain
> as well - how is a "generic" iMX6 plugin supposed to know how to deal
> with that?
>
> It seems to me that what's required is not an "iMX6 plugin" but a
> separate plugin for each platform - or worse.  Consider boards like
> the Raspberry Pi, where users can attach a variety of cameras.  I
> don't think this approach scales.  (This is relevant: the iMX6 board
> I have here has a RPi compatible connector for a MIPI CSI2 camera.
> In fact, the IMX219 module I'm using _is_ a RPi camera, it's the RPi
> NoIR Camera V2.)
>
> The iMX6 problem is way larger than just "which subdev do I need to
> configure for control X" - if you look at the dot graphs both Steve
> and myself have supplied, you'll notice that there are eight (yes,
> 8) video capture devices.

There are 4 video nodes (per IPU):

- unconverted capture from CSI0
- unconverted capture from CSI1
- scaled, CSC, and/or rotated capture from PRP ENC
- scaled, CSC, rotated, and/or de-interlaced capture from PRP VF


Configuring the imx6 pipelines is not that difficult. I've put quite
a bit of detail in the media doc, so any user with MC knowledge (even
those with absolutely no knowledge of imx) should be able to quickly
get working pipelines.



> Let's say that we can solve the subdev
> problem in libv4l2.  There's another problem lurking here - libv4l2
> is /dev/video* based.  How does it know which /dev/video* device to
> open?
>
> We don't open by sensor, we open by /dev/video*.  In my case, there
> is only one correct /dev/video* node for the attached sensor, the
> other seven are totally irrelevant.  For other situations, there may
> be the choice of three functional /dev/video* nodes.
>
> Right now, for my case, there isn't the information exported from the
> kernel to know which is the correct one, since that requires knowing
> which virtual channel the data is going to be sent over the CSI2
> interface.  That information is not present in DT, or anywhere.

It is described in the media doc:

"This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs."


> It only comes from system knowledge - in my case, I know that the IMX219
> is currently being configured to use virtual channel 0.  SMIA cameras
> are also configurable.  Then there's CSI2 cameras that can produce
> different formats via different virtual channels (eg, JPEG compressed
> image on one channel while streaming a RGB image via the other channel.)
>
> Whether you can use one or three in _this_ scenario depends on the
> source format - again, another bit of implementation specific
> information that userspace would need to know.  Kernel space should
> know that, and it's discoverable by testing which paths accept the
> source format - but that doesn't tell you ahead of time which
> /dev/video* node to open.
>
> So, the problem space we have here is absolutely huge, and merely
> having a plugin that activates when you open a /dev/video* node
> really doesn't solve it.
>
> All in all, I really don't think "lets hope someone writes a v4l2
> plugin to solve it" is ever going to be successful.  I don't even
> see that there will ever be a userspace application that is anything
> more than a representation of the dot graphs that users can use to
> manually configure the capture system with system knowledge.
>
> I think everyone needs to take a step back and think long and hard
> about this from the system usability perspective - I seriously
> doubt that we will ever see any kind of solution to this if we
> continue to progress with "we'll sort it in userspace some day."

I admit that when I first came across the MC idea a couple of years ago,
my first impression was that it put a lot of burden on the user to
have detailed knowledge of the system in question. But I don't think
that is a problem given good documentation, and most people who have a
need to use a specific MC driver will already have that knowledge.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 18:59                         ` Russell King - ARM Linux
  2017-03-11 19:06                           ` Steve Longerbeam
@ 2017-03-12  3:31                           ` Steve Longerbeam
  2017-03-12  7:37                             ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-12  3:31 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	Hans Verkuil, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee



On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>> I really don't think expecting the user to understand and configure
>>> the pipeline is a sane way forward.  Think about it - should the
>>> user need to know that, because they have a bayer-only CSI data
>>> source, that there is only one path possible, and if they try to
>>> configure a different path, then things will just error out?
>>>
>>> For the case of imx219 connected to iMX6, it really is as simple as
>>> "there is only one possible path" and all the complexity of the media
>>> interfaces/subdevs is completely unnecessary.  Every other block in
>>> the graph is just noise.
>>>
>>> The fact is that these dot graphs show a complex picture, but reality
>>> is somewhat different - there's only relatively few paths available
>>> depending on the connected source and the rest of the paths are
>>> completely useless.
>>>
>>
>> I totally disagree there. Raw bayer requires passthrough yes, but for
>> all other media bus formats on a mipi csi-2 bus, and all other media
>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>
> ... which only makes sense _if_ your source can produce those formats.
> We don't actually disagree on that.

...and there are lots of those sources! You should try getting out of
your imx219 shell some time, and have a look around! :)

>
> Let me re-state.  If the source can _only_ produce bayer, then there is
> _only_ _one_ possible path, and all the overhead of the media controller
> stuff is totally unnecessary.
>
> Or, are you going to tell me that the user should have the right to
> configure paths through the iMX6 hardware that are not permitted by the
> iMX6 manuals for the data format being produced by the sensor?

Anyway, no, the user is not allowed to configure a path that is not
allowed by the hardware, such as attempting to pass raw bayer through
an Image Converter path.

I guess you are simply commenting that for users of bayer sensors, the
other pipelines can be "confusing". But I trust you're not saying those
other pipelines should therefore not be present, which would be a
completely nutty argument.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12  3:31                           ` Steve Longerbeam
@ 2017-03-12  7:37                             ` Russell King - ARM Linux
  2017-03-12 17:56                               ` Steve Longerbeam
  2017-03-12 18:14                               ` Pavel Machek
  0 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-12  7:37 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	Hans Verkuil, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
> 
> 
> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >>>I really don't think expecting the user to understand and configure
> >>>the pipeline is a sane way forward.  Think about it - should the
> >>>user need to know that, because they have a bayer-only CSI data
> >>>source, that there is only one path possible, and if they try to
> >>>configure a different path, then things will just error out?
> >>>
> >>>For the case of imx219 connected to iMX6, it really is as simple as
> >>>"there is only one possible path" and all the complexity of the media
> >>>interfaces/subdevs is completely unnecessary.  Every other block in
> >>>the graph is just noise.
> >>>
> >>>The fact is that these dot graphs show a complex picture, but reality
> >>>is somewhat different - there's only relatively few paths available
> >>>depending on the connected source and the rest of the paths are
> >>>completely useless.
> >>>
> >>
> >>I totally disagree there. Raw bayer requires passthrough yes, but for
> >>all other media bus formats on a mipi csi-2 bus, and all other media
> >>bus formats on 8-bit parallel buses, the conversion pipelines can be
> >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> >
> >... which only makes sense _if_ your source can produce those formats.
> >We don't actually disagree on that.
> 
> ...and there are lots of those sources! You should try getting out of
> your imx219 shell some time, and have a look around! :)

If you think that, you are insulting me.  I've been thinking about this
from the "big picture" point of view.  If you think I'm only thinking
about this from only the bayer point of view, you're wrong.

Given what Mauro has said, I'm convinced that the media controller stuff
is a complete failure for usability, and adding further drivers using it
is a mistake.

I counter your accusation by saying that you are actually so focused on
the media controller way of doing things that you can't see the bigger
picture here.

So, tell me how the user can possibly use iMX6 video capture without
resorting to opening up a terminal and using media-ctl to manually
configure the pipeline.  How is the user going to control the source
device without using media-ctl to find the subdev node, and then using
v4l2-ctl on it.  How is the user supposed to know which /dev/video*
node they should be opening with their capture application?

If you can actually respond to the points that I've been raising about
end user usability, then we can have a discussion.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12  7:37                             ` Russell King - ARM Linux
@ 2017-03-12 17:56                               ` Steve Longerbeam
  2017-03-12 21:58                                 ` Mauro Carvalho Chehab
  2017-03-13 10:44                                 ` Hans Verkuil
  2017-03-12 18:14                               ` Pavel Machek
  1 sibling, 2 replies; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-12 17:56 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	Hans Verkuil, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee



On 03/11/2017 11:37 PM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
>>> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>>>
>>>>
>>>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>>>> I really don't think expecting the user to understand and configure
>>>>> the pipeline is a sane way forward.  Think about it - should the
>>>>> user need to know that, because they have a bayer-only CSI data
>>>>> source, that there is only one path possible, and if they try to
>>>>> configure a different path, then things will just error out?
>>>>>
>>>>> For the case of imx219 connected to iMX6, it really is as simple as
>>>>> "there is only one possible path" and all the complexity of the media
>>>>> interfaces/subdevs is completely unnecessary.  Every other block in
>>>>> the graph is just noise.
>>>>>
>>>>> The fact is that these dot graphs show a complex picture, but reality
>>>>> is somewhat different - there's only relatively few paths available
>>>>> depending on the connected source and the rest of the paths are
>>>>> completely useless.
>>>>>
>>>>
>>>> I totally disagree there. Raw bayer requires passthrough yes, but for
>>>> all other media bus formats on a mipi csi-2 bus, and all other media
>>>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>>>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>>>
>>> ... which only makes sense _if_ your source can produce those formats.
>>> We don't actually disagree on that.
>>
>> ...and there are lots of those sources! You should try getting out of
>> your imx219 shell some time, and have a look around! :)
>
> If you think that, you are insulting me.  I've been thinking about this
> from the "big picture" point of view.  If you think I'm only thinking
> about this from only the bayer point of view, you're wrong.

No insult there, you have my utmost respect Russell. Me gives you the
Ali-G "respec!" :)

It was just a light-hearted attempt at suggesting you might be too
entangled with the imx219 (or short on hardware access, which I can
certainly understand).


>
> Given what Mauro has said, I'm convinced that the media controller stuff
> is a complete failure for usability, and adding further drivers using it
> is a mistake.
>

I do agree with you that MC places a lot of burden on the user to
attain a lot of knowledge of the system's architecture. That's really
why I included that control inheritance patch, to ease the burden
somewhat.

On the other hand, I also think this just requires that MC drivers have
very good user documentation.

And my other point is, I think most people who have a need to work with
the media framework on a particular platform will likely already be
quite familiar with that platform.

> I counter your accusation by saying that you are actually so focused on
> the media controller way of doing things that you can't see the bigger
> picture here.
>

Yeah I've been too mired in the details of this driver.


> So, tell me how the user can possibly use iMX6 video capture without
> resorting to opening up a terminal and using media-ctl to manually
> configure the pipeline.  How is the user going to control the source
> device without using media-ctl to find the subdev node, and then using
> v4l2-ctl on it.  How is the user supposed to know which /dev/video*
> node they should be opening with their capture application?

The media graph for imx6 is fairly self-explanatory in my opinion.
Yes that graph has to be generated, but just with a simple 'media-ctl
--print-dot', I don't see how that is difficult for the user.

The graph makes it quite clear which subdev node belongs to which
entity.

As for which /dev/videoX node to use, I hope I made it fairly clear
in the user doc what functions each node performs. But I will review
the doc again and make sure it's been made explicitly clear.
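
For what it's worth, the entity-to-devnode mapping that the dot graph shows
is also available programmatically from the media device; here is a minimal
sketch (the /dev/media0 path is an assumption, and error handling is
stripped down):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

int main(void)
{
	struct media_entity_desc ent;
	__u32 id = 0;
	int fd = open("/dev/media0", O_RDWR);

	if (fd < 0)
		return 1;

	for (;;) {
		memset(&ent, 0, sizeof(ent));
		ent.id = id | MEDIA_ENT_ID_FLAG_NEXT;
		if (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) < 0)
			break;
		/* Device-node entities carry the major:minor of their
		 * /dev/video* or /dev/v4l-subdev* node. */
		printf("entity %u: %s (pads=%u links=%u dev=%u:%u)\n",
		       ent.id, ent.name, ent.pads, ent.links,
		       ent.dev.major, ent.dev.minor);
		id = ent.id;
	}
	close(fd);
	return 0;
}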


>
> If you can actually respond to the points that I've been raising about
> end user usability, then we can have a discussion.

Right, I haven't added my input to the middle-ware discussions (libv4l,
v4lconvert, and the auto-pipeline-configuration library work). I can
only say at this point that v4lconvert does indeed sound broken w.r.t
bayer formats from your description. But it also sounds like an isolated
problem and it just needs a patch to allow passing bayer through without
software conversion.

I wish I had the IMX219 to help you debug these bayer issues. I don't
have any bayer sources.

In summary, I do like the media framework; it's a good abstraction of
hardware pipelines. It does require a lot of system-level knowledge to
configure, but as I said, that is a matter of good documentation.

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12  7:37                             ` Russell King - ARM Linux
  2017-03-12 17:56                               ` Steve Longerbeam
@ 2017-03-12 18:14                               ` Pavel Machek
  1 sibling, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-12 18:14 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, mark.rutland, andrew-ct.chen, minghsiu.tsai,
	nick, songjun.wu, Hans Verkuil, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

[-- Attachment #1: Type: text/plain, Size: 3346 bytes --]

On Sun 2017-03-12 07:37:45, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
> > 
> > 
> > On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> > >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> > >>
> > >>
> > >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> > >>>I really don't think expecting the user to understand and configure
> > >>>the pipeline is a sane way forward.  Think about it - should the
> > >>>user need to know that, because they have a bayer-only CSI data
> > >>>source, that there is only one path possible, and if they try to
> > >>>configure a different path, then things will just error out?
> > >>>
> > >>>For the case of imx219 connected to iMX6, it really is as simple as
> > >>>"there is only one possible path" and all the complexity of the media
> > >>>interfaces/subdevs is completely unnecessary.  Every other block in
> > >>>the graph is just noise.
> > >>>
> > >>>The fact is that these dot graphs show a complex picture, but reality
> > >>>is somewhat different - there's only relatively few paths available
> > >>>depending on the connected source and the rest of the paths are
> > >>>completely useless.
> > >>>
> > >>
> > >>I totally disagree there. Raw bayer requires passthrough yes, but for
> > >>all other media bus formats on a mipi csi-2 bus, and all other media
> > >>bus formats on 8-bit parallel buses, the conversion pipelines can be
> > >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> > >
> > >... which only makes sense _if_ your source can produce those formats.
> > >We don't actually disagree on that.
> > 
> > ...and there are lots of those sources! You should try getting out of
> > your imx219 shell some time, and have a look around! :)
> 
> If you think that, you are insulting me.  I've been thinking about this
> from the "big picture" point of view.  If you think I'm only thinking
> about this from only the bayer point of view, you're wrong.

Can you stop with the insults nonsense?

> Given what Mauro has said, I'm convinced that the media controller stuff
> is a complete failure for usability, and adding further drivers using it
> is a mistake.

Hmm. But you did not present any alternative. It seems some hardware is
simply complex. So either we don't add complex drivers (_that_ would
be a mistake), or some userspace solution will be needed. A shell
script running media-ctl does not seem that hard.

> So, tell me how the user can possibly use iMX6 video capture without
> resorting to opening up a terminal and using media-ctl to manually
> configure the pipeline.  How is the user going to control the source
> device without using media-ctl to find the subdev node, and then using
> v4l2-ctl on it.  How is the user supposed to know which /dev/video*
> node they should be opening with their capture application?

Complex hardware sometimes requires userspace configuration. Running a
shell script on startup does not seem that hard.

And maybe we could do some kind of default setup in kernel, but that
does not really solve the problem.

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 23:14                         ` Russell King - ARM Linux
  2017-03-12  0:19                           ` Steve Longerbeam
@ 2017-03-12 21:29                           ` Pavel Machek
  2017-03-12 22:37                             ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 228+ messages in thread
From: Pavel Machek @ 2017-03-12 21:29 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Hans Verkuil,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 2750 bytes --]

On Sat 2017-03-11 23:14:56, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> > This situation has been there since 2009. If I remember well, you tried to write
> > such a generic plugin in the past, but never finished it, apparently because
> > it is too complex. Others tried too over the years.
> > 
> > The last trial was done by Jacek, trying to cover just the exynos4 driver.
> > Yet, even such a limited-scope plugin was not good enough, as it was never
> > merged upstream. Currently, there are no such plugins upstream.
> > 
> > If we can't even merge a plugin that solves it for just *one* driver,
> > I have no hope that we'll be able to do it for the generic case.
> 
> This is what really worries me right now about the current proposal for
> iMX6.  What's being proposed is to make the driver exclusively MC-based.
> 
> What that means is that existing applications are _not_ going to work
> until we have some answer for libv4l2, and from what you've said above,
> it seems that this has been attempted multiple times over the last _8_
> years, and each time it's failed.

Yeah. We need a mid-layer between legacy applications and MC
devices. Such a layer does not exist in userspace or in the kernel.

> Loading the problem onto the user in the hope that the user knows
> enough to properly configure it also doesn't work - who is going to
> educate the user about the various quirks of the hardware they're
> dealing with?

We have docs. Users can write shell scripts. Still, a mid-layer would
be nice.

> So, the problem space we have here is absolutely huge, and merely
> having a plugin that activates when you open a /dev/video* node
> really doesn't solve it.
> 
> All in all, I really don't think "lets hope someone writes a v4l2
> plugin to solve it" is ever going to be successful.  I don't even
> see that there will ever be a userspace application that is anything
> more than a representation of the dot graphs that users can use to
> manually configure the capture system with system knowledge.
> 
> I think everyone needs to take a step back and think long and hard
> about this from the system usability perspective - I seriously
> doubt that we will ever see any kind of solution to this if we
> continue to progress with "we'll sort it in userspace some day."

A mid-layer is difficult... there are _hundreds_ of possible
pipeline setups. Whether it should live in the kernel or in userspace is a
question... but I don't think having it in the kernel helps in any way.

Best regards,
									Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12 17:56                               ` Steve Longerbeam
@ 2017-03-12 21:58                                 ` Mauro Carvalho Chehab
  2017-03-26  9:12                                   ` script to setup pipeline was " Pavel Machek
  2017-03-13 10:44                                 ` Hans Verkuil
  1 sibling, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-12 21:58 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Russell King - ARM Linux, mark.rutland, andrew-ct.chen,
	minghsiu.tsai, nick, songjun.wu, Hans Verkuil, pavel, shuah,
	devel, markus.heiser, laurent.pinchart+renesas, robert.jarzmik,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

Em Sun, 12 Mar 2017 10:56:53 -0700
Steve Longerbeam <slongerbeam@gmail.com> escreveu:

> On 03/11/2017 11:37 PM, Russell King - ARM Linux wrote:
> > On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:  

> > Given what Mauro has said, I'm convinced that the media controller stuff
> > is a complete failure for usability, and adding further drivers using it
> > is a mistake.

I never said that. The thing is that the V4L2 API was designed in
1999, when video hardware was way simpler: just one DMA engine
on a PCI device, one video/audio input switch, and a few video inputs.

In those days, setting up a pipeline on such devices was simple, and could
be done via the VIDIOC_*_INPUT ioctls.

Nowadays, the hardware used on SoC devices is way more complex.

SoC devices come with several DMA engines for buffer transfers, plus
video transform blocks whose pipeline can be set up dynamically.

The MC API is needed to allow setting up a complex pipeline, as
VIDIOC_*_INPUT cannot cope with such complexity.

The subdev API solves a different issue. On a "traditional" device,
we usually have a pipeline like:

<video input> ==> <processing> ==> /dev/video0

Where <processing> controls something at the device (like
brightness and/or resolution). If you change something at the /dev/video0
node, it is clear that the <processing> block should handle it.

On complex devices, with a pipeline like:
<camera> ==> <processing0> ==> <CSI bus> ==> <processing1> ==> /dev/video0

If you send a command to adjust something at /dev/video0, it is
not clear whether the device driver should do it at processing0 or at
processing1. OK, the driver can decide, but this can be sub-optimal.

Yet, several drivers do that. For example, with em28xx-based devices
several parameters can be adjusted either at the em28xx driver or at
the video decoder driver (saa711x). There's logic inside the driver
that decides it. The pipeline there is fixed, though, so it is
easy to hardcode such logic.

So, I've no doubt that both MC and subdev APIs are needed when full
hardware control is required.

I don't know how much flexibility the i.MX6 hardware gives,
nor whether all such flexibility is needed for most use cases.

If I were to code a driver for such hardware, though, I would try to
provide a subset of the functionality that would work without the
subdev API, allowing it to work with standard V4L applications.

That doesn't sound hard to do, as the driver may limit the pipelines
to a subset that makes sense, in order to make it easier for the
driver to take the right decision about where to send a control
to set up some parameter.

> I do agree with you that MC places a lot of burden on the user to
> attain a lot of knowledge of the system's architecture.

Setting up the pipeline is not the hard part. One could write a
script to do that. 
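
At the ioctl level, such a script boils down to very little. Below is a
minimal sketch of enabling a single link; the entity IDs and pad indexes
are placeholders, and a real tool (media-ctl does exactly this) would first
discover them via MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

#define SRC_ENTITY  16	/* placeholder: e.g. a CSI entity      */
#define SRC_PAD      1
#define SINK_ENTITY 20	/* placeholder: e.g. a capture devnode */
#define SINK_PAD     0

int main(void)
{
	struct media_link_desc link;
	int fd = open("/dev/media0", O_RDWR);	/* path assumed */

	if (fd < 0)
		return 1;

	memset(&link, 0, sizeof(link));
	link.source.entity = SRC_ENTITY;
	link.source.index  = SRC_PAD;
	link.sink.entity   = SINK_ENTITY;
	link.sink.index    = SINK_PAD;
	link.flags         = MEDIA_LNK_FL_ENABLED;

	if (ioctl(fd, MEDIA_IOC_SETUP_LINK, &link) < 0) {
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}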

> That's really  why I included that control inheritance patch, to 
> ease the burden somewhat.

IMHO, that makes sense, as, once some script sets the pipeline, any
V4L2 application can work, if you forward the controls to the right
I2C devices.
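
And once that is done, the application side stays as simple as it has
always been; a minimal sketch (the device path, control and value are
purely illustrative):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_control ctrl = {
		.id    = V4L2_CID_BRIGHTNESS,	/* illustrative control */
		.value = 128,			/* illustrative value   */
	};
	int fd = open("/dev/video0", O_RDWR);	/* path assumed */

	if (fd < 0)
		return 1;

	/* With control inheritance, this reaches the sensor's control
	 * without the application ever opening a subdev node. */
	if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0) {
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}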

> On the other hand, I also think this just requires that MC drivers have
> very good user documentation.

No, it is not a matter of just documentation. It is a matter of having
to rewrite applications for each device, as the information exposed by
MC is not enough for an application to do what's needed.

For a generic application to work properly with MC, we need to
add more stuff to MC, in order to allow applications to know more about
the features of each subdevice and to do things like discovering what kind
of signal is present on each pad. We're calling it the "properties API" [1].

[1] we discussed about that at the ML and at the MC workshop:
	https://linuxtv.org/news.php?entry=2015-08-17.mchehab

Unfortunately, nobody sent any patches implementing it so far :-(

> And my other point is, I think most people who have a need to work with
> the media framework on a particular platform will likely already be
> quite familiar with that platform.

I disagree. The most popular platform device currently is the Raspberry Pi.

I doubt that many owners of an RPi + camera module know anything
about MC. They just use Raspberry Pi's official driver, which just provides
the V4L2 interface.

I have a strong opinion that, for hardware like RPi, just the V4L2
API is enough for more than 90% of the cases.

> The media graph for imx6 is fairly self-explanatory in my opinion.
> Yes that graph has to be generated, but just with a simple 'media-ctl
> --print-dot', I don't see how that is difficult for the user.

Again, IMHO, the problem is not how to set up the pipeline, but, instead,
the need to forward controls to the subdevices.

To use a camera, the user needs to set up a set of controls for the
image to make sense (brightness, contrast, focus, etc.). If the driver
doesn't forward those controls to the subdevs, an application like
"camorama" won't actually work for real, as the user won't be able
to adjust those parameters via the GUI.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12 21:29                           ` Pavel Machek
@ 2017-03-12 22:37                             ` Mauro Carvalho Chehab
  2017-03-14 18:26                               ` Pavel Machek
  0 siblings, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-12 22:37 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Russell King - ARM Linux, Sakari Ailus, Hans Verkuil,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

Em Sun, 12 Mar 2017 22:29:04 +0100
Pavel Machek <pavel@ucw.cz> escreveu:

> Mid-layer is difficult... there are _hundreds_ of possible
> pipeline setups. If it should live in kernel or in userspace is a
> question... but I don't think having it in kernel helps in any way.

A mid-layer is difficult, because we either need to feed some
library with knowledge about all kernel drivers, or we need to improve
the MC API to provide more details.

For example, several drivers used to expose entities via the
generic MEDIA_ENT_T_DEVNODE to represent entities of different
types. See, for example, entities 1, 5 and 7 (and others) at:
	https://mchehab.fedorapeople.org/mc-next-gen/igepv2_omap3isp.png

Device-specific code could either hardcode the entity number
or check the entity strings to add some logic to set up
controls on those "unknown" entities, but a generic app won't be able
to do anything with them, as it doesn't know what function(s) such
entities provide.

Also, some devices, like the analog TV decoder at:
	https://mchehab.fedorapeople.org/mc-next-gen/au0828_test/mc_nextgen_test-output.png

may have pads with different signals on their outputs. In such a
case, pads 1 and 2 provide video, while pad 3 provides audio using a
different type of output.

The application needs to know this kind of thing in order to be able
to properly set up the pipeline [1].

[1] this specific device has a fixed pipeline, but I'm aware of SoCs with
flexible pipelines that have this kind of issue (nobody has upstreamed the
V4L part of those devices yet).

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12 17:56                               ` Steve Longerbeam
  2017-03-12 21:58                                 ` Mauro Carvalho Chehab
@ 2017-03-13 10:44                                 ` Hans Verkuil
  2017-03-13 10:58                                   ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-13 10:44 UTC (permalink / raw)
  To: Steve Longerbeam, Russell King - ARM Linux
  Cc: mark.rutland, andrew-ct.chen, minghsiu.tsai, nick, songjun.wu,
	pavel, shuah, devel, markus.heiser, laurent.pinchart+renesas,
	robert.jarzmik, Mauro Carvalho Chehab, geert, p.zabel,
	linux-media, devicetree, kernel, arnd, tiffany.lin, bparrot,
	robh+dt, horms+renesas, mchehab, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel, Sakari Ailus,
	jean-christophe.trotin, sakari.ailus, fabio.estevam, shawnguo,
	sudipm.mukherjee

On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
> 
> 
> On 03/11/2017 11:37 PM, Russell King - ARM Linux wrote:
>> On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
>>>
>>>
>>> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
>>>> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>>>>
>>>>>
>>>>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>>>>> I really don't think expecting the user to understand and configure
>>>>>> the pipeline is a sane way forward.  Think about it - should the
>>>>>> user need to know that, because they have a bayer-only CSI data
>>>>>> source, that there is only one path possible, and if they try to
>>>>>> configure a different path, then things will just error out?
>>>>>>
>>>>>> For the case of imx219 connected to iMX6, it really is as simple as
>>>>>> "there is only one possible path" and all the complexity of the media
>>>>>> interfaces/subdevs is completely unnecessary.  Every other block in
>>>>>> the graph is just noise.
>>>>>>
>>>>>> The fact is that these dot graphs show a complex picture, but reality
>>>>>> is somewhat different - there's only relatively few paths available
>>>>>> depending on the connected source and the rest of the paths are
>>>>>> completely useless.
>>>>>>
>>>>>
>>>>> I totally disagree there. Raw bayer requires passthrough yes, but for
>>>>> all other media bus formats on a mipi csi-2 bus, and all other media
>>>>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>>>>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>>>>
>>>> ... which only makes sense _if_ your source can produce those formats.
>>>> We don't actually disagree on that.
>>>
>>> ...and there are lots of those sources! You should try getting out of
>>> your imx219 shell some time, and have a look around! :)
>>
>> If you think that, you are insulting me.  I've been thinking about this
>> from the "big picture" point of view.  If you think I'm only thinking
>> about this from only the bayer point of view, you're wrong.
> 
> No insult there, you have my utmost respect Russell. Me gives you the
> Ali-G "respec!" :)
> 
> It was just a light-hearted attempt at suggesting you might be too
> entangled with the imx219 (or short on hardware access, which I can
> certainly understand).
> 
> 
>>
>> Given what Mauro has said, I'm convinced that the media controller stuff
>> is a complete failure for usability, and adding further drivers using it
>> is a mistake.
>>
> 
> I do agree with you that MC places a lot of burden on the user to
> attain a lot of knowledge of the system's architecture. That's really
> why I included that control inheritance patch, to ease the burden
> somewhat.
> 
> On the other hand, I also think this just requires that MC drivers have
> very good user documentation.
> 
> And my other point is, I think most people who have a need to work with
> the media framework on a particular platform will likely already be
> quite familiar with that platform.
> 
>> I counter your accusation by saying that you are actually so focused on
>> the media controller way of doing things that you can't see the bigger
>> picture here.
>>
> 
> Yeah I've been too mired in the details of this driver.
> 
> 
>> So, tell me how the user can possibly use iMX6 video capture without
>> resorting to opening up a terminal and using media-ctl to manually
>> configure the pipeline.  How is the user going to control the source
>> device without using media-ctl to find the subdev node, and then using
>> v4l2-ctl on it.  How is the user supposed to know which /dev/video*
>> node they should be opening with their capture application?
> 
> The media graph for imx6 is fairly self-explanatory in my opinion.
> Yes that graph has to be generated, but just with a simple 'media-ctl
> --print-dot', I don't see how that is difficult for the user.
> 
> The graph makes it quite clear which subdev node belongs to which
> entity.
> 
> As for which /dev/videoX node to use, I hope I made it fairly clear
> in the user doc what functions each node performs. But I will review
> the doc again and make sure it's been made explicitly clear.
> 
> 
>>
>> If you can actually respond to the points that I've been raising about
>> end user usability, then we can have a discussion.
> 
> Right, I haven't added my input to the middle-ware discussions (libv4l,
> v4lconvert, and the auto-pipeline-configuration library work). I can
> only say at this point that v4lconvert does indeed sound broken w.r.t
> bayer formats from your description. But it also sounds like an isolated
> problem and it just needs a patch to allow passing bayer through without
> software conversion.
> 
> I wish I had the IMX219 to help you debug these bayer issues. I don't
> have any bayer sources.
> 
> In summary, I do like the media framework, it's a good abstraction of
> hardware pipelines. It does require a lot of system level knowledge to
> configure, but as I said that is a matter of good documentation.

And the reason we went into this direction is that the end-users that use
these SoCs with complex pipelines actually *need* this functionality. Which
is also part of the reason why work on improved userspace support gets
little attention: they don't need to have a plugin that allows generic V4L2
applications to work (at least with simple scenarios).

If they needed it, it would have been done (and paid for) before.

And improving userspace support for this isn't even at the top of our prio
list: getting the request API and stateless codec support in is our highest
priority. And that's a big job as well.

If you want to blame anyone for this, blame Nokia who set fire to their linux-based
phones and thus to the funding for this work.

Yes, I am very unhappy with the current state, but given the limited resources
I understand why it is as it is. I will try to get time to work on this this summer,
but there is no guarantee that that will be granted. If someone else is interested
in doing this and can get funding for it, then that would be very welcome.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-13 10:44                                 ` Hans Verkuil
@ 2017-03-13 10:58                                   ` Russell King - ARM Linux
  2017-03-13 11:08                                     ` Hans Verkuil
  2017-03-13 11:42                                     ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-13 10:58 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Steve Longerbeam, mark.rutland, andrew-ct.chen, minghsiu.tsai,
	nick, songjun.wu, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

On Mon, Mar 13, 2017 at 11:44:50AM +0100, Hans Verkuil wrote:
> On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
> > In summary, I do like the media framework, it's a good abstraction of
> > hardware pipelines. It does require a lot of system level knowledge to
> > configure, but as I said that is a matter of good documentation.
> 
> And the reason we went into this direction is that the end-users that use
> these SoCs with complex pipelines actually *need* this functionality. Which
> is also part of the reason why work on improved userspace support gets
> little attention: they don't need to have a plugin that allows generic V4L2
> applications to work (at least with simple scenarios).

If you stop inheriting controls from the capture sensor to the v4l2
capture device, then this breaks - generic v4l2 applications are not
going to be able to show the controls, because they're not visible at
the v4l2 capture device anymore.  They're only visible through the
subdev interfaces, which these generic applications know nothing about.
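
To make that concrete (a sketch only; the sub-device path is an example
and assumes the sensor registers a control handler), the controls are
still there, but only if you open the sub-device node directly - which a
generic application never does:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_queryctrl qc;
	int fd = open("/dev/v4l-subdev1", O_RDWR);	/* sensor subdev, example */

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
	while (fd >= 0 && ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
		printf("0x%08x: %s\n", qc.id, qc.name);
		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
	}
	return 0;
}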

> If you want to blame anyone for this, blame Nokia who set fire to
> their linux-based phones and thus to the funding for this work.

No, I think that's completely unfair to Nokia.  If the MC approach is
the way you want to go, you should be thanking Nokia for the amount of
effort that they have put in to it, and recognising that it was rather
unfortunate that the market had changed, which meant that they weren't
able to continue.

No one has any right to require any of us to finish what we start
coding up in open source, unless there is a contractual obligation in
place.  That goes for Nokia too.

Nokia's decision had ramifications far and wide (resulting in knock on
effects in TI and further afield), so don't think for a moment I wasn't
affected by what happened in Nokia.  Even so, it was a decision for
Nokia to make, they had the right to make it, and we have no right to
attribute "blame" to Nokia for having made that decision.

To even suggest that Nokia should be blamed is absurd.

Open source gives rights to everyone.  It gives rights to contribute
and use, but it also gives rights to walk away without notice (remember
the "as is" and "no warranty" clauses?)

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-13 10:58                                   ` Russell King - ARM Linux
@ 2017-03-13 11:08                                     ` Hans Verkuil
  2017-03-13 11:42                                     ` Mauro Carvalho Chehab
  1 sibling, 0 replies; 228+ messages in thread
From: Hans Verkuil @ 2017-03-13 11:08 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Steve Longerbeam, mark.rutland, andrew-ct.chen, minghsiu.tsai,
	nick, songjun.wu, pavel, shuah, devel, markus.heiser,
	laurent.pinchart+renesas, robert.jarzmik, Mauro Carvalho Chehab,
	geert, p.zabel, linux-media, devicetree, kernel, arnd,
	tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

On 03/13/2017 11:58 AM, Russell King - ARM Linux wrote:
> On Mon, Mar 13, 2017 at 11:44:50AM +0100, Hans Verkuil wrote:
>> On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
>>> In summary, I do like the media framework, it's a good abstraction of
>>> hardware pipelines. It does require a lot of system level knowledge to
>>> configure, but as I said that is a matter of good documentation.
>>
>> And the reason we went into this direction is that the end-users that use
>> these SoCs with complex pipelines actually *need* this functionality. Which
>> is also part of the reason why work on improved userspace support gets
>> little attention: they don't need to have a plugin that allows generic V4L2
>> applications to work (at least with simple scenarios).
> 
> If you stop inheriting controls from the capture sensor to the v4l2
> capture device, then this breaks - generic v4l2 applications are not
> going to be able to show the controls, because they're not visible at
> the v4l2 capture device anymore.  They're only visible through the
> subdev interfaces, which these generic applications know nothing about.
> 
>> If you want to blame anyone for this, blame Nokia who set fire to
>> their linux-based phones and thus to the funding for this work.
> 
> No, I think that's completely unfair to Nokia.  If the MC approach is
> the way you want to go, you should be thanking Nokia for the amount of
> effort that they have put in to it, and recognising that it was rather
> unfortunate that the market had changed, which meant that they weren't
> able to continue.
> 
> No one has any right to require any of us to finish what we start
> coding up in open source, unless there is a contractual obligation in
> place.  That goes for Nokia too.
> 
> Nokia's decision had ramifications far and wide (resulting in knock on
> effects in TI and further afield), so don't think for a moment I wasn't
> affected by what happened in Nokia.  Even so, it was a decision for
> Nokia to make, they had the right to make it, and we have no right to
> attribute "blame" to Nokia for having made that decision.
> 
> To even suggest that Nokia should be blamed is absurd.
> 
> Open source gives rights to everyone.  It gives rights to contribute
> and use, but it also gives rights to walk away without notice (remember
> the "as is" and "no warranty" clauses?)

Sorry, unfortunate choice of words. While it lasted they did great work.
But the reason MC development stopped for quite some time (esp. the
work on userspace software) was that the funding from Nokia dried up.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-13 10:58                                   ` Russell King - ARM Linux
  2017-03-13 11:08                                     ` Hans Verkuil
@ 2017-03-13 11:42                                     ` Mauro Carvalho Chehab
  2017-03-13 12:35                                       ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-13 11:42 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Hans Verkuil, Steve Longerbeam, mark.rutland, andrew-ct.chen,
	minghsiu.tsai, nick, songjun.wu, pavel, shuah, devel,
	markus.heiser, laurent.pinchart+renesas, robert.jarzmik, geert,
	p.zabel, linux-media, devicetree, kernel, arnd, tiffany.lin,
	bparrot, robh+dt, horms+renesas, mchehab, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel, Sakari Ailus,
	jean-christophe.trotin, sakari.ailus, fabio.estevam, shawnguo,
	sudipm.mukherjee

On Mon, 13 Mar 2017 10:58:42 +0000
Russell King - ARM Linux <linux@armlinux.org.uk> wrote:

> On Mon, Mar 13, 2017 at 11:44:50AM +0100, Hans Verkuil wrote:
> > On 03/12/2017 06:56 PM, Steve Longerbeam wrote:  
> > > In summary, I do like the media framework, it's a good abstraction of
> > > hardware pipelines. It does require a lot of system level knowledge to
> > > configure, but as I said that is a matter of good documentation.  
> > 
> > And the reason we went into this direction is that the end-users that use
> > these SoCs with complex pipelines actually *need* this functionality. Which
> > is also part of the reason why work on improved userspace support gets
> > little attention: they don't need to have a plugin that allows generic V4L2
> > applications to work (at least with simple scenarios).  
> 
> If you stop inheriting controls from the capture sensor to the v4l2
> capture device, then this breaks - generic v4l2 applications are not
> going to be able to show the controls, because they're not visible at
> the v4l2 capture device anymore.  They're only visible through the
> subdev interfaces, which these generic applications know nothing about.

True. That's why, IMHO, the best approach is to do control inheritance when
there are use cases for generic applications and it is possible for
the driver to do it (e. g. when the pipeline is not so complex
as to prevent it from working).

As Hans said, for the drivers currently upstreamed at drivers/media,
there is currently very little interest in running generic apps
there, as they're meant to be used inside embedded hardware using
specialized applications.

I don't myself have any i.MX6 hardware. Yet, I believe that
a low cost board like the SolidRun Hummingboard - which comes with a
CSI interface compatible with RPi camera modules - will likely
attract users who need to run generic applications on their
devices.

So, I believe that it makes sense for the i.MX6 driver to inherit
controls at the video devnode.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-13 11:42                                     ` Mauro Carvalho Chehab
@ 2017-03-13 12:35                                       ` Russell King - ARM Linux
  0 siblings, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-13 12:35 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Steve Longerbeam, mark.rutland, andrew-ct.chen,
	minghsiu.tsai, nick, songjun.wu, pavel, shuah, devel,
	markus.heiser, laurent.pinchart+renesas, robert.jarzmik, geert,
	p.zabel, linux-media, devicetree, kernel, arnd, tiffany.lin,
	bparrot, robh+dt, horms+renesas, mchehab, linux-arm-kernel,
	niklas.soderlund+renesas, gregkh, linux-kernel, Sakari Ailus,
	jean-christophe.trotin, sakari.ailus, fabio.estevam, shawnguo,
	sudipm.mukherjee

On Mon, Mar 13, 2017 at 08:42:15AM -0300, Mauro Carvalho Chehab wrote:
> I don't myself have any i.MX6 hardware. Yet, I believe that
> a low cost board like the SolidRun Hummingboard - which comes with a
> CSI interface compatible with RPi camera modules - will likely
> attract users who need to run generic applications on their
> devices.

As you've previously mentioned about camorama, I've installed it (I
run Ubuntu 16.04 with "gnome-flashback-metacity" on the HB) and I'm
able to use camorama to view the IMX219 camera sensor.

There are some gotchas, though:

* you need to start it on the command line, manually specifying
  which /dev/video device to use, as it always wants to use
  /dev/video0.  With the CODA mem2mem driver loaded, this may not
  be a camera device:

$ v4l2-ctl -d 0 --all
Driver Info (not using libv4l2):
        Driver name   : coda
        Card type     : CODA960
        Bus info      : platform:coda
        Driver version: 4.11.0

* camorama seems to use the v4lconvert library, and judging by the
  resulting image quality, which is rather pixelated, my guess is that
  v4lconvert is using a basic algorithm to de-bayer the data.  It
  also appears to manage only 7fps at best.  The gstreamer neon
  debayer plugin appears to be faster and higher quality.

* it provides five controls - brightness/contrast/color/hue/white
  balance - none of which is supported by the hardware (the IMX219
  supports gain and analogue gain only.)  These controls appear to
  have no effect on the resulting image.

However, using qv4l2 provides access to all controls (once the segfault
bug in GeneralTab::updateFrameSize() is fixed - m_frameSize, m_frameWidth
and m_frameHeight can be NULL, which happens if
GeneralTab::inputSection() is not called).

The USB uvcvideo camera achieves around 24fps with functional controls
in camorama (mainly because it provides those exact controls to
userspace.)

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-11 11:25                       ` Mauro Carvalho Chehab
  2017-03-11 21:52                         ` Pavel Machek
  2017-03-11 23:14                         ` Russell King - ARM Linux
@ 2017-03-13 12:46                         ` Sakari Ailus
  2017-03-14  3:45                           ` Mauro Carvalho Chehab
  2 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-13 12:46 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

Hi Mauro,

On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> Em Sat, 11 Mar 2017 00:37:14 +0200
> Sakari Ailus <sakari.ailus@iki.fi> escreveu:
> 
> > Hi Mauro (and others),
> > 
> > On Fri, Mar 10, 2017 at 12:53:42PM -0300, Mauro Carvalho Chehab wrote:
> > > Em Fri, 10 Mar 2017 15:20:48 +0100
> > > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > >   
> > > >   
> > > > > As I've already mentioned, from talking about this with Mauro, it seems
> > > > > Mauro is in agreement with permitting the control inheritance... I wish
> > > > > Mauro would comment for himself, as I can't quote our private discussion
> > > > > on the subject.    
> > > > 
> > > > I can't comment either, not having seen his mail and reasoning.  
> > > 
> > > The rationale is that we should support the simplest use cases first.
> > > 
> > > In the case of the first MC-based driver (and several subsequent
> > > ones), the simplest use case required MC, as it was meant to suport
> > > a custom-made sophisticated application that required fine control
> > > on each component of the pipeline and to allow their advanced
> > > proprietary AAA userspace-based algorithms to work.  
> > 
> > The first MC based driver (omap3isp) supports what the hardware can do, it
> > does not support applications as such.
> 
> All media drivers support a subset of what the hardware can do. The
> question is if such subset covers the use cases or not.

Can you name a feature in the OMAP 3 ISP that is not and can not be
supported using the current driver model (MC + V4L2 sub-device + V4L2) that
could be even remotely useful?

> 
> The current MC-based drivers (except for uvc) took a patch to offer a
> more advanced API allowing direct control of each IP module; as was
> said at the time we merged the OMAP3 driver, accessing the pipeline's
> individual components was mandatory for the N9/N900 camera to work.
> 
> Such an approach requires that some userspace software have knowledge
> of some hardware details in order to set up pipelines and send controls
> to the right components. That makes it really hard to have a generic,
> user-friendly application use such devices.

The effect you described above is true, but I disagree with the cause. The
cause is that the hardware is more complex and variable than what has been
supported previously, and providing a generic interface for accessing such
hardware requires a more complex interface.

The hardware we have today and the use cases we have today are more --- not
less --- complex and nuanced than when the Media controller was merged back
in 2010. Arguably there is thus more need for the functionality it provides,
not less.

> 
> Non-MC based drivers control the hardware via a portable interface which
> doesn't require any knowledge about the hardware specifics, as either the
> Kernel or some firmware on the device will set up any needed pipelines.
> 
> In the case of V4L2 controls, when there's no subdev API, the main
> driver (e. g. the driver that creates the /dev/video nodes) sends a
> multicast message to all bound I2C drivers. The driver(s) that need 
> them handle it. When the same control may be implemented on different
> drivers, the main driver sends a unicast message to just one driver[1].
> 
> [1] There are several non-MC drivers that have multiple ways to
> control some things, like doing scaling or adjusting volume levels at
> either the bridge driver or at a subdriver.
> 
> There's nothing wrong with this approach: it works, it is simpler,
> it is generic. So, if it covers most use cases, why not allow it
> for use cases where finer control is not a requirement?

Drivers are written to support hardware, not particular use cases. Use case
specific knowledge should be present only in applications, not in drivers.
Doing it otherwise will lead to use case specific drivers and more driver
code to maintain for any particular piece of hardware.

An individual could possibly choose the right driver for his / her use case,
but this approach could hardly work for Linux distribution kernels.

The plain V4L2 interface is generic within its own scope: hardware can be
supported within the hardware model assumed by the interface. However, on
some devices this will end up being a small subset of what the hardware can
do. Besides that, when writing the driver, you need to decide in detail
what kind of subset that might be.

That's not something anyone writing a driver should need to confront.

> 
> > Adding support to drivers for different "operation modes" --- this is
> > essentially what is being asked for --- is not an approach which could serve
> > either purpose (some functionality with simple interface vs. fully support
> > what the hardware can do, with interfaces allowing that) adequately in the
> > short or the long run.
> 
> Why not?

Let's suppose that the omap3isp driver provided an "operation mode" for more
simple applications.

Would you continue to have a V4L2 video device per DMA engine? Without Media
controller it'd be rather confusing for applications, since the format (and
level of processing) requested determines the video node where the images
are captured.

Instead you'd probably want to have a single video node. For the driver to
expose just a single device node, should that be a Kconfig option or a
module parameter, for instance?

I have to say I wouldn't even be particularly interested to know how many
driver changes you'd have to implement to achieve that and how
unmaintainable the end result would be. Consider inflicting the same on all
drivers.

That's just *one* of a large number of things you'd need to change in order
to support plain V4L2 applications from the driver, while still continuing to
support the current interface.

With the help of a user space library, we can show the omap3isp device as a
single video node with a number of inputs (sensors) that can provide some
level of service to the user. I'm using "can" because such a library has yet
to be implemented.

It may be hardware specific or not. A hardware specific one may produce
better results than best effort since it may use knowledge of the hardware
not available through the kernel interfaces.

> 
> > If we are missing pieces in the puzzle --- in this case the missing pieces
> > in the puzzle are a generic pipeline configuration library and another
> > library that, with the help of pipeline autoconfiguration would implement
> > "best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
> > --- then these pieces need to be implemented. The solution is
> > *not* to attempt to support different types of applications in each driver
> > separately. That will make writing drivers painful, error prone and is
> > unlikely ever deliver what either purpose requires.
> > 
> > So let's continue to implement the functionality that the hardware supports.
> > Making a different choice here is bound to create a lasting conflict between
> > having to change kernel interface behaviour and the requirement of
> > supporting new functionality that hasn't been previously thought of, pushing
> > away SoC vendors from V4L2 ecosystem. This is what we all do want to avoid.
> 
> This situation has existed since 2009. If I remember well, you tried to
> write such a generic plugin in the past, but never finished it, apparently
> because it is too complex. Others have tried too over the years.

I'd argue I know better what happened with that attempt than you do. I had a
prototype of a generic pipeline configuration library but due to various
reasons I haven't been able to continue working on that since around 2012.
The prototype could figure out that the ccdc -> resizer path isn't usable
with raw sensors due to the lack of common formats between the two,
something that was argued to be too complex to implement.

I'm not aware of anyone else who would have tried that. Are you?

> 
> The last attempt was made by Jacek, trying to cover just the exynos4 driver.
> Yet, even such a limited-scope plugin was not good enough, as it was never
> merged upstream. Currently, there are no such plugins upstream.
> 
> If we can't even merge a plugin that solves it for just *one* driver,
> I have no hope that we'll be able to do it for the generic case.

I believe Jacek ceased to work on that plugin in his day job; other than
that, there are some matters left to be addressed in his latest patchset.

Having provided feedback on that patchset, I don't see additional technical
problems that require solving before the patches can be merged. The
remaining matters seem to be actually fairly trivial.

> 
> That's why I'm saying that I'm OK with merging any patch that would allow
> setting controls via the /dev/video interface on MC-based drivers when
> compiled without the subdev API. I may also consider merging patches that
> allow changing the behavior at runtime, when compiled with the subdev API.
> 
> > As far as i.MX6 driver goes, it is always possible to implement i.MX6 plugin
> > for libv4l to perform this. This should be much easier than getting the
> > automatic pipe configuration library and the rest working, and as it is
> > custom for i.MX6, the resulting plugin may make informed technical choices
> > for better functionality.
> 
> I wouldn't call "much easier" something that experienced media
> developers failed to do over the last 8 years.
> 
> It is just the opposite: broadcasting a control via I2C is very easy:
> there are several examples about how to do that all over the media
> drivers.

That's one of the things Jacek's plugin actually does. This is still
functionality that *only* some user applications wish to have, and as
implemented in the plugin it works correctly without use case specific
semantics --- please see my comments on the patch picking the controls from
the pipeline.

Drivers can also make use of v4l2_device_for_each_subdev() to distribute
setting controls on different sub-devices. A few drivers actually do that.
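
For reference, a minimal kernel-side sketch of that kind of distribution
(the helper name is made up, not taken from any in-tree driver) could look
like this, forwarding a control value to every sub-device that implements
it:

#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>

/* hypothetical helper: set control "id" to "val" on every bound subdev */
static void bridge_broadcast_ctrl(struct v4l2_device *v4l2_dev, u32 id, s32 val)
{
	struct v4l2_subdev *sd;

	v4l2_device_for_each_subdev(sd, v4l2_dev) {
		struct v4l2_ctrl *ctrl;

		if (!sd->ctrl_handler)
			continue;
		ctrl = v4l2_ctrl_find(sd->ctrl_handler, id);
		if (ctrl)
			v4l2_ctrl_s_ctrl(ctrl, val);
	}
}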

> 
> > Jacek has been working on such a plugin for
> > Samsung Exynos hardware, but I don't think he has quite finished it yey.
> 
> As Jacek answered when questioned about the merge status:
> 
> 	Hi Hans,
> 
> 	On 11/03/2016 12:51 PM, Hans Verkuil wrote:
> 	> Hi all,
> 	>
> 	> Is there anything that blocks me from merging this?
> 	>
> 	> This plugin work has been ongoing for years and unless there are serious
> 	> objections I propose that this is merged.
> 	>
> 	> Jacek, is there anything missing that would prevent merging this?  
> 
> 	There were issues raised by Sakari during last review, related to
> 	the way how v4l2 control bindings are defined. That discussion wasn't
> 	finished, so I stayed by my approach. Other than that - I've tested it
> 	and it works fine both with GStreamer and my test app.
> 
> After that, he sent a new version (v7.1), but never got reviews.

There are a number of mutually agreed but unaddressed comments against v7.
Also, v7.1 is just a single patch that does not address issues pointed out
in v7, but something Jacek wanted to fix himself.

In other words, there are comments to address but no patches to review.
Let's see how to best address them. I could possibly fix at least some of
those, but due to lack of hardware I have no ability to test the end result.

> 
> > The original plan was and continues to be sound, it's just that there have
> > always been too few hands to implement it. :-(
> 
> If there are no people to implement a plan, it doesn't matter how good
> the plan is, it won't work.

Do you have other proposals than what we have commonly agreed on? I don't
see other approaches that could satisfactorily address all the requirements
going forward.

That said, I do think we need to reinvigorate the efforts to get things
rolling again on supporting plain V4L2 applications on devices that are
controlled through the MC interface. These matters have received undeservedly
little attention in recent years.

> 
> > > That's not true, for example, for the UVC driver. There, MC
> > > is optional, as it should be.  
> > 
> > UVC is different. The device simply provides additional information through
> > MC to the user but MC (or V4L2 sub-device interface) is not used for
> > controlling the device.
> 
> It is not different. If the Kernel is compiled without the V4L2
> subdev interface, the i.MX6 driver (or whatever other driver)
> won't receive any control via the subdev interface. So, it has to
> handle the control logic via the only interface that
> supports it, e. g. via the video devnode.

Looking at the driver code and the Kconfig file, the i.MX6 driver depends on
CONFIG_MEDIA_CONTROLLER and uses the Media controller API. So if the Media
controller is disabled in the kernel configuration, the i.MX6 IPU driver
won't be compiled.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-02-21  8:50         ` Philipp Zabel
@ 2017-03-13 13:16           ` Sakari Ailus
  2017-03-13 13:27             ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-13 13:16 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Steve Longerbeam, Russell King - ARM Linux, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, hverkuil,
	nick, markus.heiser, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Philipp,

On Tue, Feb 21, 2017 at 09:50:23AM +0100, Philipp Zabel wrote:
...
> > > Then there's the issue where, if you have this setup:
> > >
> > >  camera --> csi2 receiver --> csi --> capture
> > >
> > > and the "csi" subdev can skip frames, you need to know (a) at the CSI
> > > sink pad what the frame rate is of the source (b) what the desired
> > > source pad frame rate is, so you can configure the frame skipping.
> > > So, does the csi subdev have to walk back through the media graph
> > > looking for the frame rate?  Does the capture device have to walk back
> > > through the media graph looking for some subdev to tell it what the
> > > frame rate is - the capture device certainly can't go straight to the
> > > sensor to get an answer to that question, because that bypasses the
> > > effect of the CSI frame skipping (which will lower the frame rate.)
> > >
> > > IMHO, frame rate is just another format property, just like the
> > > resolution and data format itself, and v4l2 should be treating it no
> > > differently.
> > >
> > 
> > I agree, frame rate, if indicated/specified by both sides of a link,
> > should match. So maybe this should be part of v4l2 link validation.
> > 
> > This might be a good time to propose the following patch.
> 
> I agree with Steve and Russell. I don't see why the (nominal) frame
> interval should be handled differently than resolution, data format, and
> colorspace information. I think it should just be propagated in the same
> way, and there is no reason to have two connected pads set to a
> different interval. That would make implementing the g/s_frame_interval
> subdev calls mandatory.

The vast majority of existing drivers do not implement them nor the user
space expects having to set them. Making that mandatory would break existing
user space.

In addition, that does not belong to link validation either: link validation
should only include static properties of the link that are required for
correct hardware operation. Frame rate is not such property: hardware that
supports the MC interface generally does not recognise such concept (with
the exception of some sensors). Additionally, it is dynamic: the frame rate
can change during streaming, making its validation at streamon time useless.

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 13:16           ` Sakari Ailus
@ 2017-03-13 13:27             ` Russell King - ARM Linux
  2017-03-13 13:55               ` Philipp Zabel
  2017-03-13 20:56               ` Sakari Ailus
  0 siblings, 2 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-13 13:27 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Philipp Zabel, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
> The vast majority of existing drivers do not implement them nor the user
> space expects having to set them. Making that mandatory would break existing
> user space.
> 
> In addition, that does not belong to link validation either: link validation
> should only include static properties of the link that are required for
> correct hardware operation. Frame rate is not such property: hardware that
> supports the MC interface generally does not recognise such concept (with
> the exception of some sensors). Additionally, it is dynamic: the frame rate
> can change during streaming, making its validation at streamon time useless.

So how do we configure the CSI, which can do frame skipping?

With what you're proposing, it means it's possible to configure the
camera sensor source pad to do 50fps.  Configure the CSI sink pad to
an arbitrary value, such as 30fps, and configure the CSI source pad to
15fps.

What you actually get out of the CSI is 25fps, which bears very little
relation to the actual values used on the CSI source pad.

You could say "CSI should ask the camera sensor" - well, that's fine
if it's immediately downstream, but otherwise we'd need to go walking
down the graph to find something that resembles its source - there may
be mux and CSI2 interface subdev blocks in that path.  Or we just accept
that frame rates are completely arbitrary and bear no useful meaning
whatsoever.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 13:27             ` Russell King - ARM Linux
@ 2017-03-13 13:55               ` Philipp Zabel
  2017-03-13 18:06                 ` Steve Longerbeam
  2017-03-13 20:56               ` Sakari Ailus
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-03-13 13:55 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Mon, 2017-03-13 at 13:27 +0000, Russell King - ARM Linux wrote:
> On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
> > The vast majority of existing drivers do not implement them nor the user
> > space expects having to set them. Making that mandatory would break existing
> > user space.
> > 
> > In addition, that does not belong to link validation either: link validation
> > should only include static properties of the link that are required for
> > correct hardware operation. Frame rate is not such property: hardware that
> > supports the MC interface generally does not recognise such concept (with
> > the exception of some sensors). Additionally, it is dynamic: the frame rate
> > can change during streaming, making its validation at streamon time useless.
> 
> So how do we configure the CSI, which can do frame skipping?
> 
> With what you're proposing, it means it's possible to configure the
> camera sensor source pad to do 50fps.  Configure the CSI sink pad to
> an arbitary value, such as 30fps, and configure the CSI source pad to
> 15fps.
> 
> What you actually get out of the CSI is 25fps, which bears very little
> with the actual values used on the CSI source pad.
> 
> You could say "CSI should ask the camera sensor" - well, that's fine
> if it's immediately downstream, but otherwise we'd need to go walking
> down the graph to find something that resembles its source - there may
> be mux and CSI2 interface subdev blocks in that path.  Or we just accept
> that frame rates are completely arbitary and bear no useful meaning what
> so ever.

Which would include the frame interval returned by VIDIOC_G_PARM on the
connected video device, as that gets its information from the CSI output
pad's frame interval.
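
For reference, this is roughly how a plain V4L2 application reads that
nominal frame interval from the capture node (a sketch only; the device
path is an example):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_streamparm parm;
	int fd = open("/dev/video4", O_RDWR);	/* capture node, example path */

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (fd >= 0 && ioctl(fd, VIDIOC_G_PARM, &parm) == 0 &&
	    (parm.parm.capture.capability & V4L2_CAP_TIMEPERFRAME))
		printf("%u/%u seconds per frame\n",
		       parm.parm.capture.timeperframe.numerator,
		       parm.parm.capture.timeperframe.denominator);
	return 0;
}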

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 13:55               ` Philipp Zabel
@ 2017-03-13 18:06                 ` Steve Longerbeam
  2017-03-13 21:03                   ` Sakari Ailus
  0 siblings, 1 reply; 228+ messages in thread
From: Steve Longerbeam @ 2017-03-13 18:06 UTC (permalink / raw)
  To: Philipp Zabel, Russell King - ARM Linux
  Cc: Sakari Ailus, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam



On 03/13/2017 06:55 AM, Philipp Zabel wrote:
> On Mon, 2017-03-13 at 13:27 +0000, Russell King - ARM Linux wrote:
>> On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
>>> The vast majority of existing drivers do not implement them nor the user
>>> space expects having to set them. Making that mandatory would break existing
>>> user space.
>>>
>>> In addition, that does not belong to link validation either: link validation
>>> should only include static properties of the link that are required for
>>> correct hardware operation. Frame rate is not such property: hardware that
>>> supports the MC interface generally does not recognise such concept (with
>>> the exception of some sensors). Additionally, it is dynamic: the frame rate
>>> can change during streaming, making its validation at streamon time useless.
>>
>> So how do we configure the CSI, which can do frame skipping?
>>
>> With what you're proposing, it means it's possible to configure the
>> camera sensor source pad to do 50fps.  Configure the CSI sink pad to
>> an arbitary value, such as 30fps, and configure the CSI source pad to
>> 15fps.
>>
>> What you actually get out of the CSI is 25fps, which bears very little
>> with the actual values used on the CSI source pad.
>>
>> You could say "CSI should ask the camera sensor" - well, that's fine
>> if it's immediately downstream, but otherwise we'd need to go walking
>> down the graph to find something that resembles its source - there may
>> be mux and CSI2 interface subdev blocks in that path.  Or we just accept
>> that frame rates are completely arbitary and bear no useful meaning what
>> so ever.
>
> Which would include the frame interval returned by VIDIOC_G_PARM on the
> connected video device, as that gets its information from the CSI output
> pad's frame interval.
>

I'm kinda in the middle on this topic. I agree with Sakari that
frame rate can fluctuate, but that should only be temporary. If
the frame rate permanently shifts from what a subdev reports via
g_frame_interval, then that is a system problem. So I agree with
Philipp and Russell that a link validation of frame interval still
makes sense.

But I also have to agree with Sakari that a subdev that has no
control over frame rate has no business implementing those ops.

And then I agree with Russell that for subdevs that do have control
over frame rate, they would have to walk the graph to find the frame
rate source.

So we're stuck in a broken situation: either the subdevs have to walk
the graph to find the source of frame rate, or s_frame_interval
would have to be mandatory and validated between pads, same as set_fmt.
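
Just to illustrate the second option (a purely hypothetical sketch, not
part of this series; the function name and the policy of skipping subdevs
that don't implement the op are assumptions), such a check could reuse the
existing g_frame_interval op at link validation time:

#include <linux/errno.h>
#include <media/media-entity.h>
#include <media/v4l2-subdev.h>

static int link_validate_frame_interval(struct media_link *link)
{
	struct v4l2_subdev_frame_interval src_fi = { .pad = link->source->index };
	struct v4l2_subdev_frame_interval sink_fi = { .pad = link->sink->index };
	struct v4l2_subdev *src, *sink;

	if (!is_media_entity_v4l2_subdev(link->source->entity) ||
	    !is_media_entity_v4l2_subdev(link->sink->entity))
		return 0;

	src = media_entity_to_v4l2_subdev(link->source->entity);
	sink = media_entity_to_v4l2_subdev(link->sink->entity);

	/* subdevs that don't report a rate are not validated */
	if (v4l2_subdev_call(src, video, g_frame_interval, &src_fi) ||
	    v4l2_subdev_call(sink, video, g_frame_interval, &sink_fi))
		return 0;

	/* compare the two intervals as fractions */
	if ((u64)src_fi.interval.numerator * sink_fi.interval.denominator !=
	    (u64)sink_fi.interval.numerator * src_fi.interval.denominator)
		return -EPIPE;

	return 0;
}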

Steve

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 13:27             ` Russell King - ARM Linux
  2017-03-13 13:55               ` Philipp Zabel
@ 2017-03-13 20:56               ` Sakari Ailus
  2017-03-13 21:07                 ` Russell King - ARM Linux
  1 sibling, 1 reply; 228+ messages in thread
From: Sakari Ailus @ 2017-03-13 20:56 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Philipp Zabel, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

Hi Russell,

On Mon, Mar 13, 2017 at 01:27:02PM +0000, Russell King - ARM Linux wrote:
> On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
> > The vast majority of existing drivers do not implement them nor the user
> > space expects having to set them. Making that mandatory would break existing
> > user space.
> > 
> > In addition, that does not belong to link validation either: link validation
> > should only include static properties of the link that are required for
> > correct hardware operation. Frame rate is not such property: hardware that
> > supports the MC interface generally does not recognise such concept (with
> > the exception of some sensors). Additionally, it is dynamic: the frame rate
> > can change during streaming, making its validation at streamon time useless.
> 
> So how do we configure the CSI, which can do frame skipping?
> 
> With what you're proposing, it means it's possible to configure the
> camera sensor source pad to do 50fps.  Configure the CSI sink pad to
> an arbitary value, such as 30fps, and configure the CSI source pad to
> 15fps.
> 
> What you actually get out of the CSI is 25fps, which bears very little
> with the actual values used on the CSI source pad.
> 
> You could say "CSI should ask the camera sensor" - well, that's fine
> if it's immediately downstream, but otherwise we'd need to go walking
> down the graph to find something that resembles its source - there may
> be mux and CSI2 interface subdev blocks in that path.  Or we just accept
> that frame rates are completely arbitary and bear no useful meaning what
> so ever.

The user is responsible for configuring the pipeline. It is thus not
unreasonable to ask the user to configure the frame rate as well if there are
devices in the pipeline that can affect the frame rate. The way I proposed to
implement it is compliant with the existing API and entirely deterministic,
contrary to what you're saying.

It's not the job of the CSI sub-device to figure it out.
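
To be concrete, the per-pad user space interface for configuring this
already exists; a sketch (the helper name, sub-device path and pad number
are only examples) using VIDIOC_SUBDEV_S_FRAME_INTERVAL:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

/* example: request a 1/30 s interval (30 fps) on one pad of a subdev node */
static int set_pad_frame_interval(const char *subdev_path, unsigned int pad,
				  unsigned int num, unsigned int den)
{
	struct v4l2_subdev_frame_interval fi = {
		.pad = pad,
		.interval = { .numerator = num, .denominator = den },
	};
	int ret, fd = open(subdev_path, O_RDWR);

	if (fd < 0)
		return -1;
	ret = ioctl(fd, VIDIOC_SUBDEV_S_FRAME_INTERVAL, &fi);
	close(fd);
	return ret;
}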

S_PARM and G_PARM function as the interface in the V4L2 API for accessing
the frame rate on plain V4L2 devices. The i.MX6 IPU is not a plain V4L2
device. Essentially the V4L2 device node you have there is an interface to a
rather plain DMA engine, no more.

There have been plans to add device profiles to the documentation but that
work has not yet been done.

-- 
Regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 18:06                 ` Steve Longerbeam
@ 2017-03-13 21:03                   ` Sakari Ailus
  2017-03-13 21:29                     ` Russell King - ARM Linux
  2017-03-14  7:34                     ` Hans Verkuil
  0 siblings, 2 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-03-13 21:03 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Philipp Zabel, Russell King - ARM Linux, robh+dt, mark.rutland,
	shawnguo, kernel, fabio.estevam, mchehab, hverkuil, nick,
	markus.heiser, laurent.pinchart+renesas, bparrot, geert, arnd,
	sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

Hi Steve,

On Mon, Mar 13, 2017 at 11:06:22AM -0700, Steve Longerbeam wrote:
> 
> 
> On 03/13/2017 06:55 AM, Philipp Zabel wrote:
> >On Mon, 2017-03-13 at 13:27 +0000, Russell King - ARM Linux wrote:
> >>On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
> >>>The vast majority of existing drivers do not implement them nor the user
> >>>space expects having to set them. Making that mandatory would break existing
> >>>user space.
> >>>
> >>>In addition, that does not belong to link validation either: link validation
> >>>should only include static properties of the link that are required for
> >>>correct hardware operation. Frame rate is not such property: hardware that
> >>>supports the MC interface generally does not recognise such concept (with
> >>>the exception of some sensors). Additionally, it is dynamic: the frame rate
> >>>can change during streaming, making its validation at streamon time useless.
> >>
> >>So how do we configure the CSI, which can do frame skipping?
> >>
> >>With what you're proposing, it means it's possible to configure the
> >>camera sensor source pad to do 50fps.  Configure the CSI sink pad to
> >>an arbitary value, such as 30fps, and configure the CSI source pad to
> >>15fps.
> >>
> >>What you actually get out of the CSI is 25fps, which bears very little
> >>with the actual values used on the CSI source pad.
> >>
> >>You could say "CSI should ask the camera sensor" - well, that's fine
> >>if it's immediately downstream, but otherwise we'd need to go walking
> >>down the graph to find something that resembles its source - there may
> >>be mux and CSI2 interface subdev blocks in that path.  Or we just accept
> >>that frame rates are completely arbitary and bear no useful meaning what
> >>so ever.
> >
> >Which would include the frame interval returned by VIDIOC_G_PARM on the
> >connected video device, as that gets its information from the CSI output
> >pad's frame interval.
> >
> 
> I'm kinda in the middle on this topic. I agree with Sakari that
> frame rate can fluctuate, but that should only be temporary. If
> the frame rate permanently shifts from what a subdev reports via
> g_frame_interval, then that is a system problem. So I agree with
> Phillip and Russell that a link validation of frame interval still
> makes sense.
> 
> But I also have to agree with Sakari that a subdev that has no
> control over frame rate has no business implementing those ops.
> 
> And then I agree with Russell that for subdevs that do have control
> over frame rate, they would have to walk the graph to find the frame
> rate source.
> 
> So we're stuck in a broken situation: either the subdevs have to walk
> the graph to find the source of frame rate, or s_frame_interval
> would have to be mandatory and validated between pads, same as set_fmt.

It's not broken; what we are missing, though, is documentation on how to
control devices that can change the frame rate, i.e. presumably drop frames
occasionally.

If you're doing something that hasn't been done before, it may be that new
documentation needs to be written to accommodate that use case. As we have an
existing interface (VIDIOC_SUBDEV_[GS]_FRAME_INTERVAL) it does make sense
to use that. What is not possible, though, is to mandate its use in link
validation everywhere.

If you had a hardware limitation that would require that the frame rate is
constant, then we'd need to handle that in link validation for that
particular piece of hardware. But there really is no case for doing that for
everything else.

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 20:56               ` Sakari Ailus
@ 2017-03-13 21:07                 ` Russell King - ARM Linux
  0 siblings, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-13 21:07 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Philipp Zabel, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Mon, Mar 13, 2017 at 10:56:46PM +0200, Sakari Ailus wrote:
> Hi Russell,
> 
> On Mon, Mar 13, 2017 at 01:27:02PM +0000, Russell King - ARM Linux wrote:
> > On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
> > > The vast majority of existing drivers do not implement them nor the user
> > > space expects having to set them. Making that mandatory would break existing
> > > user space.
> > > 
> > > In addition, that does not belong to link validation either: link validation
> > > should only include static properties of the link that are required for
> > > correct hardware operation. Frame rate is not such property: hardware that
> > > supports the MC interface generally does not recognise such concept (with
> > > the exception of some sensors). Additionally, it is dynamic: the frame rate
> > > can change during streaming, making its validation at streamon time useless.
> > 
> > So how do we configure the CSI, which can do frame skipping?
> > 
> > With what you're proposing, it means it's possible to configure the
> > camera sensor source pad to do 50fps.  Configure the CSI sink pad to
> > an arbitary value, such as 30fps, and configure the CSI source pad to
> > 15fps.
> > 
> > What you actually get out of the CSI is 25fps, which bears very little
> > with the actual values used on the CSI source pad.
> > 
> > You could say "CSI should ask the camera sensor" - well, that's fine
> > if it's immediately downstream, but otherwise we'd need to go walking
> > down the graph to find something that resembles its source - there may
> > be mux and CSI2 interface subdev blocks in that path.  Or we just accept
> > that frame rates are completely arbitary and bear no useful meaning what
> > so ever.
> 
> The user is responsible for configuring the pipeline. It is thus not
> unreasonable to ask the user to configure the frame rate as well if there are
> devices in the pipeline that can affect the frame rate. The way I proposed to
> implement it is compliant with the existing API and entirely deterministic,
> contrary to what you're saying.

You haven't really addressed my point at all.

What you seem to be saying is that you're quite happy for the situation
(which is a total misconfiguration) to exist.  Given the vapourware of
userspace (which I don't see changing in any kind of reasonable timeline)
I think this is completely absurd.

I'll state it clearly now: given everything we've discussed so far, I'm
finding it very hard to take anything you've said seriously.  I think we
have very different and incompatible points of view about what is
acceptable from a user perspective, so much so that we're never going
to agree on any point.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 21:03                   ` Sakari Ailus
@ 2017-03-13 21:29                     ` Russell King - ARM Linux
  2017-03-14  7:34                     ` Hans Verkuil
  1 sibling, 0 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-13 21:29 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Steve Longerbeam, Philipp Zabel, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, hverkuil, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Mon, Mar 13, 2017 at 11:03:50PM +0200, Sakari Ailus wrote:
> Hi Steve,
> 
> On Mon, Mar 13, 2017 at 11:06:22AM -0700, Steve Longerbeam wrote:
> > I'm kinda in the middle on this topic. I agree with Sakari that
> > frame rate can fluctuate, but that should only be temporary. If
> > the frame rate permanently shifts from what a subdev reports via
> > g_frame_interval, then that is a system problem. So I agree with
> > Phillip and Russell that a link validation of frame interval still
> > makes sense.
> > 
> > But I also have to agree with Sakari that a subdev that has no
> > control over frame rate has no business implementing those ops.
> > 
> > And then I agree with Russell that for subdevs that do have control
> > over frame rate, they would have to walk the graph to find the frame
> > rate source.
> > 
> > So we're stuck in a broken situation: either the subdevs have to walk
> > the graph to find the source of frame rate, or s_frame_interval
> > would have to be mandatory and validated between pads, same as set_fmt.
> 
> It's not broken; what we are missing though is documentation on how to
> control devices that can change the frame rate i.e. presumably drop frames
> occasionally.

It's not about "presumably drop frames occasionally."  The definition
of "occasional" is "occurring or appearing at irregular or infrequent
intervals".  Another phrase which describes what you're saying would be
"randomly drop frames", which would be a quite absurd "feature" to
include in hardware.

It's about _deterministically_ omitting frames from the capture.

The hardware provides two controls:
1. the size of a group of frames - between 1 and 5 frames
2. select which frames within the group are dropped using a bitmask

This gives a _regular_ pattern of dropped frames.

The rate scaling is given by: hweight(bitmask) / group size.

So, for example, if you're receiving a 50fps TV broadcast, and want to
capture at 25fps, you can set the group size to 2, and set the frame
drop to binary "01" or "10" - if it's interlaced, this would have the
effect of selecting one field, or skipping every other frame if
progressive.

That's not "we'll occasionally drop some frames", that's frame rate
scaling.  Just like you can scale the size of an image by omitting
every other pixel and every other line, this hardware allows omitting
every other frame or more.

So, to configure this feature, CSI needs to know two bits of information:

1. The _source_ frame rate.
2. The desired _sink_ frame rate.

From that, it can compute how many frames to drop, and the size of the group.
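
As an illustration, here is a minimal userspace-style sketch of that
computation (the function names are hypothetical, and the assumption that
the hardware keeps hweight(mask) frames out of each group follows the
description above; the real register encoding may differ):

#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch only: derive a frame-drop configuration for a frame-rate
 * converter that works on groups of 1..5 frames and keeps
 * hweight(mask) frames out of each group, as described above.  Which
 * bit polarity the real hardware uses for "keep" vs "drop" (and how
 * the kept frames are placed within the group) is not modelled here.
 */
struct frc_cfg {
	unsigned int group;	/* group size, 1..5 frames   */
	unsigned int keep;	/* frames kept in each group */
};

static struct frc_cfg frc_compute(unsigned int src_fps, unsigned int sink_fps)
{
	struct frc_cfg best = { .group = 1, .keep = 1 };
	unsigned int group, err, best_err = ~0u;

	if (sink_fps == 0 || sink_fps >= src_fps)
		return best;	/* nothing to drop (or invalid request) */

	for (group = 1; group <= 5; group++) {
		/* choose keep so that keep / group ~= sink / src */
		unsigned int keep = (sink_fps * group + src_fps / 2) / src_fps;

		if (keep == 0)
			keep = 1;
		/* compare keep/group with sink/src by cross-multiplying */
		err = abs((int)(keep * src_fps) - (int)(sink_fps * group));
		if (err < best_err) {
			best_err = err;
			best.group = group;
			best.keep = keep;
		}
	}
	return best;
}

int main(void)
{
	/* The 50fps -> 25fps example above: expect group 2, keep 1 frame. */
	struct frc_cfg cfg = frc_compute(50, 25);

	printf("group size %u, keep %u frame(s) per group\n",
	       cfg.group, cfg.keep);
	return 0;
}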

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-13 12:46                         ` Sakari Ailus
@ 2017-03-14  3:45                           ` Mauro Carvalho Chehab
  2017-03-14  7:55                             ` Hans Verkuil
  0 siblings, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-14  3:45 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Hans Verkuil, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

Hi Sakari,

I started preparing a long argument about it, but gave up in favor of a
simpler one.

Em Mon, 13 Mar 2017 14:46:22 +0200
Sakari Ailus <sakari.ailus@iki.fi> escreveu:

> Drivers are written to support hardware, not particular use case.  

No, it is just the reverse: drivers and hardware are developed to
support use cases.

Btw, you should remember that the hardware is the full board, not just the
SoC. In practice, the board does limit the use cases: several boards provide
a single physical CSI connector, allowing just one sensor.

> > This situation is there since 2009. If I remember well, you tried to write
> > such generic plugin in the past, but never finished it, apparently because
> > it is too complex. Others tried too over the years.   
> 
> I'd argue I know better what happened with that attempt than you do. I had a
> prototype of a generic pipeline configuration library but due to various
> reasons I haven't been able to continue working on that since around 2012.

...

> > The last trial was done by Jacek, trying to cover just the exynos4 driver. 
> > Yet, even such limited scope plugin was not good enough, as it was never
> > merged upstream. Currently, there's no such plugins upstream.
> > 
> > If we can't even merge a plugin that solves it for just *one* driver,
> > I have no hope that we'll be able to do it for the generic case.  
> 
> I believe Jacek ceased to work on that plugin in his day job; other than
> that, there are some matters left to be addressed in his latest patchset.

The two above basically summarise the issue: the task of doing a generic
plugin in userspace, even for a single driver, is complex enough that it
cannot be covered within a reasonable timeline.

From 2009 to 2012, you were working on it, but didn't finish it.

Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
I didn't check when the generic plugin interface was added to libv4l).

In the case of Jacek's work, the first patch I was able to find was
written in Oct, 2014:
	https://patchwork.kernel.org/patch/5098111/
	(not sure what happened with the version 1).

The last e-mail about this subject was issued in Dec, 2016.

In summary, you had this on your task list for 3 years for an OMAP3
plugin (where you have good expertise), and Jacek had it for 2 years,
for Exynos 4, where he should also have good knowledge.

Yet, with all those efforts, no concrete results were achieved, as none
of the plugins got merged.

Even if they were merged, if we keep the same mean time to develop a
libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
years to be developed.

There's a clear message in it:
	- we shouldn't keep pushing for a solution via libv4l.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-13 21:03                   ` Sakari Ailus
  2017-03-13 21:29                     ` Russell King - ARM Linux
@ 2017-03-14  7:34                     ` Hans Verkuil
  2017-03-14 10:43                       ` Philipp Zabel
  1 sibling, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-14  7:34 UTC (permalink / raw)
  To: Sakari Ailus, Steve Longerbeam
  Cc: Philipp Zabel, Russell King - ARM Linux, robh+dt, mark.rutland,
	shawnguo, kernel, fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On 03/13/2017 10:03 PM, Sakari Ailus wrote:
> Hi Steve,
> 
> On Mon, Mar 13, 2017 at 11:06:22AM -0700, Steve Longerbeam wrote:
>>
>>
>> On 03/13/2017 06:55 AM, Philipp Zabel wrote:
>>> On Mon, 2017-03-13 at 13:27 +0000, Russell King - ARM Linux wrote:
>>>> On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
>>>>> The vast majority of existing drivers do not implement them nor the user
>>>>> space expects having to set them. Making that mandatory would break existing
>>>>> user space.
>>>>>
>>>>> In addition, that does not belong to link validation either: link validation
>>>>> should only include static properties of the link that are required for
>>>>> correct hardware operation. Frame rate is not such property: hardware that
>>>>> supports the MC interface generally does not recognise such concept (with
>>>>> the exception of some sensors). Additionally, it is dynamic: the frame rate
>>>>> can change during streaming, making its validation at streamon time useless.
>>>>
>>>> So how do we configure the CSI, which can do frame skipping?
>>>>
>>>> With what you're proposing, it means it's possible to configure the
>>>> camera sensor source pad to do 50fps.  Configure the CSI sink pad to
>>>> an arbitary value, such as 30fps, and configure the CSI source pad to
>>>> 15fps.
>>>>
>>>> What you actually get out of the CSI is 25fps, which bears very little
>>>> with the actual values used on the CSI source pad.
>>>>
>>>> You could say "CSI should ask the camera sensor" - well, that's fine
>>>> if it's immediately downstream, but otherwise we'd need to go walking
>>>> down the graph to find something that resembles its source - there may
>>>> be mux and CSI2 interface subdev blocks in that path.  Or we just accept
>>>> that frame rates are completely arbitary and bear no useful meaning what
>>>> so ever.
>>>
>>> Which would include the frame interval returned by VIDIOC_G_PARM on the
>>> connected video device, as that gets its information from the CSI output
>>> pad's frame interval.
>>>
>>
>> I'm kinda in the middle on this topic. I agree with Sakari that
>> frame rate can fluctuate, but that should only be temporary. If
>> the frame rate permanently shifts from what a subdev reports via
>> g_frame_interval, then that is a system problem. So I agree with
>> Phillip and Russell that a link validation of frame interval still
>> makes sense.
>>
>> But I also have to agree with Sakari that a subdev that has no
>> control over frame rate has no business implementing those ops.
>>
>> And then I agree with Russell that for subdevs that do have control
>> over frame rate, they would have to walk the graph to find the frame
>> rate source.
>>
>> So we're stuck in a broken situation: either the subdevs have to walk
>> the graph to find the source of frame rate, or s_frame_interval
>> would have to be mandatory and validated between pads, same as set_fmt.
> 
> It's not broken; what we are missing though is documentation on how to
> control devices that can change the frame rate i.e. presumably drop frames
> occasionally.
> 
> If you're doing something that hasn't been done before, it may be that new
> documentation needs to be written to accomodate that use case. As we have an
> existing interface (VIDIOC_SUBDEV_[GS]_FRAME_INTERVAL) it does make sense
> to use that. What is not possible, though, is to mandate its use in link
> validation everywhere.
> 
> If you had a hardware limitation that would require that the frame rate is
> constant, then we'd need to handle that in link validation for that
> particular piece of hardware. But there really is no case for doing that for
> everything else.
> 

General note: I would strongly recommend that g/s_parm support be removed
from v4l2_subdev in favor of g/s_frame_interval.

g/s_parm is an abomination...

There seem to be only a few i2c drivers that use g/s_parm, so this shouldn't
be a lot of work.

Having two APIs for the same thing is always very bad.
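
For reference, this is roughly what the preferred interface looks like from
a subdev driver's point of view (a minimal sketch; the foo_* driver and its
private struct are hypothetical, the ops and structs are the ones from
include/media/v4l2-subdev.h):

#include <linux/kernel.h>
#include <media/v4l2-subdev.h>

/* Hypothetical driver state; only the field used below is shown. */
struct foo_priv {
	struct v4l2_subdev sd;
	struct v4l2_fract frame_interval;	/* e.g. 1/30 s */
};

static inline struct foo_priv *to_foo(struct v4l2_subdev *sd)
{
	return container_of(sd, struct foo_priv, sd);
}

/* Backs VIDIOC_SUBDEV_G_FRAME_INTERVAL. */
static int foo_g_frame_interval(struct v4l2_subdev *sd,
				struct v4l2_subdev_frame_interval *fi)
{
	fi->interval = to_foo(sd)->frame_interval;
	return 0;
}

/* Backs VIDIOC_SUBDEV_S_FRAME_INTERVAL. */
static int foo_s_frame_interval(struct v4l2_subdev *sd,
				struct v4l2_subdev_frame_interval *fi)
{
	/* A real driver would clamp fi->interval to what it supports. */
	to_foo(sd)->frame_interval = fi->interval;
	return 0;
}

static const struct v4l2_subdev_video_ops foo_video_ops = {
	.g_frame_interval = foo_g_frame_interval,
	.s_frame_interval = foo_s_frame_interval,
};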

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-14  3:45                           ` Mauro Carvalho Chehab
@ 2017-03-14  7:55                             ` Hans Verkuil
  2017-03-14 10:21                               ` Mauro Carvalho Chehab
                                                 ` (2 more replies)
  0 siblings, 3 replies; 228+ messages in thread
From: Hans Verkuil @ 2017-03-14  7:55 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Sakari Ailus
  Cc: Russell King - ARM Linux, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> Hi Sakari,
> 
> I started preparing a long argument about it, but gave up in favor of a
> simpler one.
> 
> Em Mon, 13 Mar 2017 14:46:22 +0200
> Sakari Ailus <sakari.ailus@iki.fi> escreveu:
> 
>> Drivers are written to support hardware, not particular use case.  
> 
> No, it is just the reverse: drivers and hardware are developed to
> support use cases.
> 
> Btw, you should remember that the hardware is the full board, not just the
> SoC. In practice, the board do limit the use cases: several provide a
> single physical CSI connector, allowing just one sensor.
> 
>>> This situation is there since 2009. If I remember well, you tried to write
>>> such generic plugin in the past, but never finished it, apparently because
>>> it is too complex. Others tried too over the years.   
>>
>> I'd argue I know better what happened with that attempt than you do. I had a
>> prototype of a generic pipeline configuration library but due to various
>> reasons I haven't been able to continue working on that since around 2012.
> 
> ...
> 
>>> The last trial was done by Jacek, trying to cover just the exynos4 driver. 
>>> Yet, even such limited scope plugin was not good enough, as it was never
>>> merged upstream. Currently, there's no such plugins upstream.
>>>
>>> If we can't even merge a plugin that solves it for just *one* driver,
>>> I have no hope that we'll be able to do it for the generic case.  
>>
>> I believe Jacek ceased to work on that plugin in his day job; other than
>> that, there are some matters left to be addressed in his latest patchset.
> 
> The two above basically summaries the issue: the task of doing a generic
> plugin on userspace, even for a single driver is complex enough to
> not cover within a reasonable timeline.
> 
> From 2009 to 2012, you were working on it, but didn't finish it.
> 
> Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
> I didn't check when the generic plugin interface was added to libv4l).
> 
> In the case of Jacek's work, the first patch I was able to find was
> written in Oct, 2014:
> 	https://patchwork.kernel.org/patch/5098111/
> 	(not sure what happened with the version 1).
> 
> The last e-mail about this subject was issued in Dec, 2016.
> 
> In summary, you had this on your task for 3 years for an OMAP3
> plugin (where you have a good expertise), and Jacek for 2 years, 
> for Exynos 4, where he should also have a good knowledge.
> 
> Yet, with all that efforts, no concrete results were achieved, as none
> of the plugins got merged.
> 
> Even if they were merged, if we keep the same mean time to develop a
> libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> years to be developed.
> 
> There's a clear message on it:
> 	- we shouldn't keep pushing for a solution via libv4l.

Or:

	- userspace plugin development had a very low priority and
	  never got the attention it needed.

I know that's *my* reason. I rarely if ever looked at it. I always assumed
Sakari and/or Laurent would look at it. If this reason is also valid for
Sakari and Laurent, then it is no wonder nothing has happened in all that
time.

We're all very driver-development-driven, and userspace gets very little
attention in general. So before just throwing in the towel we should take
a good look at the reasons why there has been little or no development: is
it because of fundamental design defects, or because nobody paid attention
to it?

I strongly suspect it is the latter.

In addition, I suspect end-users of these complex devices don't really care
about a plugin: they want full control and won't typically use generic
applications. If they needed support for that, we'd have seen much more
interest. The main reason for having a plugin is to simplify testing, and
for cases where this is going to be used on cheap hobbyist devkits.

An additional complication is simply that it is hard to find fully supported
MC hardware. omap3 boards are hard to find these days, renesas boards are not
easy to get, freescale isn't the most popular either. Allwinner, mediatek,
amlogic, broadcom and qualcomm all have closed source implementations or no
implementation at all.

I know it took me a very long time before I had a working omap3.

So I am not at all surprised that little progress has been made.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-14  7:55                             ` Hans Verkuil
@ 2017-03-14 10:21                               ` Mauro Carvalho Chehab
  2017-03-14 22:32                                 ` media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline) Pavel Machek
  2017-03-20 13:24                                 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Hans Verkuil
  2017-03-17 11:42                               ` Russell King - ARM Linux
  2017-03-26 16:44                               ` Laurent Pinchart
  2 siblings, 2 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-14 10:21 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

Em Tue, 14 Mar 2017 08:55:36 +0100
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> > Hi Sakari,
> > 
> > I started preparing a long argument about it, but gave up in favor of a
> > simpler one.
> > 
> > Em Mon, 13 Mar 2017 14:46:22 +0200
> > Sakari Ailus <sakari.ailus@iki.fi> escreveu:
> >   
> >> Drivers are written to support hardware, not particular use case.    
> > 
> > No, it is just the reverse: drivers and hardware are developed to
> > support use cases.
> > 
> > Btw, you should remember that the hardware is the full board, not just the
> > SoC. In practice, the board do limit the use cases: several provide a
> > single physical CSI connector, allowing just one sensor.
> >   
> >>> This situation is there since 2009. If I remember well, you tried to write
> >>> such generic plugin in the past, but never finished it, apparently because
> >>> it is too complex. Others tried too over the years.     
> >>
> >> I'd argue I know better what happened with that attempt than you do. I had a
> >> prototype of a generic pipeline configuration library but due to various
> >> reasons I haven't been able to continue working on that since around 2012.  
> > 
> > ...
> >   
> >>> The last trial was done by Jacek, trying to cover just the exynos4 driver. 
> >>> Yet, even such limited scope plugin was not good enough, as it was never
> >>> merged upstream. Currently, there's no such plugins upstream.
> >>>
> >>> If we can't even merge a plugin that solves it for just *one* driver,
> >>> I have no hope that we'll be able to do it for the generic case.    
> >>
> >> I believe Jacek ceased to work on that plugin in his day job; other than
> >> that, there are some matters left to be addressed in his latest patchset.  
> > 
> > The two above basically summaries the issue: the task of doing a generic
> > plugin on userspace, even for a single driver is complex enough to
> > not cover within a reasonable timeline.
> > 
> > From 2009 to 2012, you were working on it, but didn't finish it.
> > 
> > Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
> > I didn't check when the generic plugin interface was added to libv4l).
> > 
> > In the case of Jacek's work, the first patch I was able to find was
> > written in Oct, 2014:
> > 	https://patchwork.kernel.org/patch/5098111/
> > 	(not sure what happened with the version 1).
> > 
> > The last e-mail about this subject was issued in Dec, 2016.
> > 
> > In summary, you had this on your task for 3 years for an OMAP3
> > plugin (where you have a good expertise), and Jacek for 2 years, 
> > for Exynos 4, where he should also have a good knowledge.
> > 
> > Yet, with all that efforts, no concrete results were achieved, as none
> > of the plugins got merged.
> > 
> > Even if they were merged, if we keep the same mean time to develop a
> > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > years to be developed.
> > 
> > There's a clear message on it:
> > 	- we shouldn't keep pushing for a solution via libv4l.  
> 
> Or:
> 
> 	- userspace plugin development had a very a low priority and
> 	  never got the attention it needed.

The end result is the same: we can't count on it.

> 
> I know that's *my* reason. I rarely if ever looked at it. I always assumed
> Sakari and/or Laurent would look at it. If this reason is also valid for
> Sakari and Laurent, then it is no wonder nothing has happened in all that
> time.
> 
> We're all very driver-development-driven, and userspace gets very little
> attention in general. So before just throwing in the towel we should take
> a good look at the reasons why there has been little or no development: is
> it because of fundamental design defects, or because nobody paid attention
> to it?

No. We should look at it the other way: basically, there are patches
for the i.MX6 driver that send controls from the video node to the subdevs.

If we nack it instead of applying it, who will write the userspace plugin?
When will such a change be merged upstream?

If we don't have answers to any of the above questions, we should not
nack it.
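
For readers not following the patch itself, the mechanism under discussion
is roughly the following (a simplified sketch, not the actual
v4l2_pipeline_inherit_controls() from the series: it pulls controls from
every subdev registered with the v4l2_device instead of walking the media
graph):

#include <media/v4l2-ctrls.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>

/*
 * Simplified sketch: make the subdevs' controls visible on the capture
 * video node by merging their control handlers into the video device's
 * handler.  The real patch walks the media graph so that only subdevs
 * in the pipeline contribute; this version is an over-approximation.
 */
static int foo_inherit_controls(struct video_device *vfd,
				struct v4l2_device *v4l2_dev)
{
	struct v4l2_subdev *sd;
	int ret;

	v4l2_device_for_each_subdev(sd, v4l2_dev) {
		if (!sd->ctrl_handler)
			continue;
		/* NULL filter: inherit all controls from this subdev */
		ret = v4l2_ctrl_add_handler(vfd->ctrl_handler,
					    sd->ctrl_handler, NULL);
		if (ret)
			return ret;
	}
	return 0;
}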

That said, that doesn't prevent merging a libv4l plugin if/when
someone finds the time/interest to develop it.

> I strongly suspect it is the latter.
> 
> In addition, I suspect end-users of these complex devices don't really care
> about a plugin: they want full control and won't typically use generic
> applications. If they would need support for that, we'd have seen much more
> interest. The main reason for having a plugin is to simplify testing and
> if this is going to be used on cheap hobbyist devkits.

What are the needs of a cheap hobbyist devkit owner? Do we currently
satisfy those needs? I'd say that having a functional driver when
compiled without the subdev API, one that implements the ioctls/controls
that a generic application like camorama/google talk/skype/zbar...
needs in order to work, should be enough to make them happy, even if they
need to add some udev rule and/or run some "prep" application that sets up
the pipelines via MC and eventually renames the device with a working
pipeline to /dev/video0.
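
As an illustration, a minimal sketch of such a "prep" step using the media
controller ioctls directly (the entity and pad numbers are placeholders; a
real tool would look them up with MEDIA_IOC_ENUM_ENTITIES, or simply use
media-ctl):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/media.h>

int main(void)
{
	struct media_link_desc link;
	int fd = open("/dev/media0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/media0");
		return 1;
	}

	/* Enable one sensor -> CSI link; ids/pads below are placeholders. */
	memset(&link, 0, sizeof(link));
	link.source.entity = 1;			/* sensor entity id  */
	link.source.index = 0;			/* sensor source pad */
	link.sink.entity = 5;			/* CSI entity id     */
	link.sink.index = 0;			/* CSI sink pad      */
	link.flags = MEDIA_LNK_FL_ENABLED;

	if (ioctl(fd, MEDIA_IOC_SETUP_LINK, &link) < 0) {
		perror("MEDIA_IOC_SETUP_LINK");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}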

> 
> An additional complication is simply that it is hard to find fully supported
> MC hardware. omap3 boards are hard to find these days, renesas boards are not
> easy to get, freescale isn't the most popular either. Allwinner, mediatek,
> amlogic, broadcom and qualcomm all have closed source implementations or no
> implementation at all.

I'd say that we should not care anymore about providing a solution for
generic applications to run on boards like OMAP3[1]. For hardware that
is currently available, that has a kernel driver and boards developed
to be used as a "cheap hobbyist devkit", I'd say we should implement
a kernel solution that would allow them to be used without the subdev
API, e.g. having all ioctls needed by generic applications fully
functional after some external application sets up the pipeline.

[1] Yet, I might eventually do that for fun; an OMAP3 board with a tvp5150
just arrived here last week. It would be nice to have xawtv3 running on it :-)
So, if I have a lot of spare time (which is very unlikely), I might eventually
do something to get it to work.

> I know it took me a very long time before I had a working omap3.

My first OMAP3 board with working V4L2 source just arrived last week :-)

> So I am not at all surprised that little progress has been made.

I'm not surprised, but I'm disappointed, as I have been trying to push toward
a solution for this problem ever since we had our initial meetings about
it.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates
  2017-03-14  7:34                     ` Hans Verkuil
@ 2017-03-14 10:43                       ` Philipp Zabel
  0 siblings, 0 replies; 228+ messages in thread
From: Philipp Zabel @ 2017-03-14 10:43 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Steve Longerbeam, Russell King - ARM Linux,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam

On Tue, 2017-03-14 at 08:34 +0100, Hans Verkuil wrote:
> On 03/13/2017 10:03 PM, Sakari Ailus wrote:
> > Hi Steve,
> > 
> > On Mon, Mar 13, 2017 at 11:06:22AM -0700, Steve Longerbeam wrote:
> >>
> >>
> >> On 03/13/2017 06:55 AM, Philipp Zabel wrote:
> >>> On Mon, 2017-03-13 at 13:27 +0000, Russell King - ARM Linux wrote:
> >>>> On Mon, Mar 13, 2017 at 03:16:48PM +0200, Sakari Ailus wrote:
> >>>>> The vast majority of existing drivers do not implement them nor the user
> >>>>> space expects having to set them. Making that mandatory would break existing
> >>>>> user space.
> >>>>>
> >>>>> In addition, that does not belong to link validation either: link validation
> >>>>> should only include static properties of the link that are required for
> >>>>> correct hardware operation. Frame rate is not such property: hardware that
> >>>>> supports the MC interface generally does not recognise such concept (with
> >>>>> the exception of some sensors). Additionally, it is dynamic: the frame rate
> >>>>> can change during streaming, making its validation at streamon time useless.
> >>>>
> >>>> So how do we configure the CSI, which can do frame skipping?
> >>>>
> >>>> With what you're proposing, it means it's possible to configure the
> >>>> camera sensor source pad to do 50fps.  Configure the CSI sink pad to
> >>>> an arbitary value, such as 30fps, and configure the CSI source pad to
> >>>> 15fps.
> >>>>
> >>>> What you actually get out of the CSI is 25fps, which bears very little
> >>>> with the actual values used on the CSI source pad.
> >>>>
> >>>> You could say "CSI should ask the camera sensor" - well, that's fine
> >>>> if it's immediately downstream, but otherwise we'd need to go walking
> >>>> down the graph to find something that resembles its source - there may
> >>>> be mux and CSI2 interface subdev blocks in that path.  Or we just accept
> >>>> that frame rates are completely arbitary and bear no useful meaning what
> >>>> so ever.
> >>>
> >>> Which would include the frame interval returned by VIDIOC_G_PARM on the
> >>> connected video device, as that gets its information from the CSI output
> >>> pad's frame interval.
> >>>
> >>
> >> I'm kinda in the middle on this topic. I agree with Sakari that
> >> frame rate can fluctuate, but that should only be temporary. If
> >> the frame rate permanently shifts from what a subdev reports via
> >> g_frame_interval, then that is a system problem. So I agree with
> >> Phillip and Russell that a link validation of frame interval still
> >> makes sense.
> >>
> >> But I also have to agree with Sakari that a subdev that has no
> >> control over frame rate has no business implementing those ops.
> >>
> >> And then I agree with Russell that for subdevs that do have control
> >> over frame rate, they would have to walk the graph to find the frame
> >> rate source.
> >>
> >> So we're stuck in a broken situation: either the subdevs have to walk
> >> the graph to find the source of frame rate, or s_frame_interval
> >> would have to be mandatory and validated between pads, same as set_fmt.
> > 
> > It's not broken; what we are missing though is documentation on how to
> > control devices that can change the frame rate i.e. presumably drop frames
> > occasionally.
> > 
> > If you're doing something that hasn't been done before, it may be that new
> > documentation needs to be written to accomodate that use case. As we have an
> > existing interface (VIDIOC_SUBDEV_[GS]_FRAME_INTERVAL) it does make sense
> > to use that. What is not possible, though, is to mandate its use in link
> > validation everywhere.
> > 
> > If you had a hardware limitation that would require that the frame rate is
> > constant, then we'd need to handle that in link validation for that
> > particular piece of hardware. But there really is no case for doing that for
> > everything else.
> > 
> 
> General note: I would strongly recommend that g/s_parm support is removed in
> v4l2_subdev in favor of g/s_frame_interval.
> 
> g/s_parm is an abomination...

Agreed. Just in this specific case I was talking about G_PARM on
the /dev/video node, not the v4l2_subdev nodes. This is currently used
by non-subdev-aware userspace to obtain the framerate from the video
capture device.
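
For clarity, this is the call such non-subdev-aware userspace makes (a
minimal sketch of querying the frame interval from the capture node):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_streamparm parm;
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/video0");
		return 1;
	}

	memset(&parm, 0, sizeof(parm));
	parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

	/* Ask the capture node for its current frame interval. */
	if (ioctl(fd, VIDIOC_G_PARM, &parm) == 0 &&
	    (parm.parm.capture.capability & V4L2_CAP_TIMEPERFRAME))
		printf("frame interval: %u/%u s\n",
		       parm.parm.capture.timeperframe.numerator,
		       parm.parm.capture.timeperframe.denominator);
	else
		perror("VIDIOC_G_PARM");

	close(fd);
	return 0;
}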

> There seem to be only a few i2c drivers that use g/s_parm, so this shouldn't
> be a lot of work.
> 
> Having two APIs for the same thing is always very bad.
> 
> Regards,
> 
> 	Hans
> 

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12 22:37                             ` Mauro Carvalho Chehab
@ 2017-03-14 18:26                               ` Pavel Machek
  0 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-14 18:26 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Russell King - ARM Linux, Sakari Ailus, Hans Verkuil,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 2099 bytes --]

Hi!

> > Mid-layer is difficult... there are _hundreds_ of possible
> > pipeline setups. If it should live in kernel or in userspace is a
> > question... but I don't think having it in kernel helps in any way.
> 
> Mid-layer is difficult, because we either need to feed some
> library with knowledge for all kernel drivers or we need to improve
> the MC API to provide more details.
> 
> For example, several drivers used to expose entities via the
> generic MEDIA_ENT_T_DEVNODE to represent entities of different
> types. See, for example, entities 1, 5 and 7 (and others) at:
>
> > https://mchehab.fedorapeople.org/mc-next-gen/igepv2_omap3isp.png

Well... we provide enough information so that device-specific code
does not have to be in the kernel.

There are a few types of ENT_T_DEVNODE there. "ISP CCP2" does not really
provide any functionality to the user; it just has to be there
because the pipeline needs to be connected. "ISP Preview" provides
format conversion. "ISP Resizer" provides rescaling.

I'm not sure if it ever makes sense to use "ISP Preview
output". Normally you take data for display from "ISP Resizer
output". (Would there be some power advantage from that?)

> A device-specific code could either be hardcoding the entity number
> or checking for the entity strings to add some logic to setup
> controls on those "unknown" entities, a generic app won't be able 
> to do anything with them, as it doesn't know what function(s) such
> entity provide.

A generic app should know whether it wants RGGB10 data, in which case it can
use "ISP CCDC output", or "cooked" data suitable for display, in which case
it wants to use "ISP Resizer output". Once the application knows what
output it wants, there's just one path through the system. So being
able to tell the ENT_T_DEVNODEs apart does not seem to be critical.

OTOH, for a useful camera application, different paths are needed at
different phases.
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-14 10:21                               ` Mauro Carvalho Chehab
@ 2017-03-14 22:32                                 ` Pavel Machek
  2017-03-15  0:54                                   ` Mauro Carvalho Chehab
  2017-03-20 13:24                                 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Hans Verkuil
  1 sibling, 1 reply; 228+ messages in thread
From: Pavel Machek @ 2017-03-14 22:32 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 4913 bytes --]

Hi!

> > > Even if they were merged, if we keep the same mean time to develop a
> > > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > > years to be developed.
> > > 
> > > There's a clear message on it:
> > > 	- we shouldn't keep pushing for a solution via libv4l.  
> > 
> > Or:
> > 
> > 	- userspace plugin development had a very a low priority and
> > 	  never got the attention it needed.
> 
> The end result is the same: we can't count on it.
> 
> > 
> > I know that's *my* reason. I rarely if ever looked at it. I always assumed
> > Sakari and/or Laurent would look at it. If this reason is also valid for
> > Sakari and Laurent, then it is no wonder nothing has happened in all that
> > time.
> > 
> > We're all very driver-development-driven, and userspace gets very little
> > attention in general. So before just throwing in the towel we should take
> > a good look at the reasons why there has been little or no development: is
> > it because of fundamental design defects, or because nobody paid attention
> > to it?
> 
> No. We should look it the other way: basically, there are patches
> for i.MX6 driver that sends control from videonode to subdevs. 
> 
> If we nack apply it, who will write the userspace plugin? When
> such change will be merged upstream?

Well, I believe the first question is: what applications would we want to
run on complex devices? Will sending controls from the video node to subdevs
actually help?

mplayer is useful for testing... but that one already works (after you
set up the pipeline and configure exposure/gain).

But that's useful for testing, not really for production. The image will be
out of focus and have the wrong white balance.

What I would really like is an application to take still photos. For
taking pictures with manual settings we need:

a) units for controls: the user wants to focus at 1m and take a picture
with ISO 200, 1/125 sec. We should also tell him that the lens is f/5.6 and
the focal length is 20mm with a 5mm chip.

But... autofocus/autogain would really be good to have. Thus we need:

b) for each frame, we need the exposure settings and the focus position at
the time the frame was taken. Otherwise autofocus/autogain will be too
slow. At least the focus position is going to be tricky -- either the kernel
would have to compute the focus position for us (not trivial) or we'd need
enough information to compute it in userspace.

There are more problems: hardware-accelerated preview is not trivial
to set up (and I'm unsure if it can be done in a generic way). A still
photos application needs to switch resolutions between preview and
photo capture. Probably hardware-accelerated histograms are needed for
white balance, auto gain and auto focus, ....

It seems like there's a _lot_ of stuff to be done before we have
useful support for complex cameras...

(And I'm not sure... when an application such as skype is running, is
there some way to run autogain/autofocus/autowhitebalance? Is that
something we want to support?)

> If we don't have answers to any of the above questions, we should not
> nack it.
> 
> That's said, that doesn't prevent merging a libv4l plugin if/when
> someone can find time/interest to develop it.

I believe the other question is: won't having the same control on both the
main video device and the subdevs be confusing? Does it actually help
userspace in any way? Yes, we can make controls accessible to old
applications, but does it make them more useful?

> > In addition, I suspect end-users of these complex devices don't really care
> > about a plugin: they want full control and won't typically use generic
> > applications. If they would need support for that, we'd have seen much more
> > interest. The main reason for having a plugin is to simplify testing and
> > if this is going to be used on cheap hobbyist devkits.
> 
> What are the needs for a cheap hobbyist devkit owner? Do we currently
> satisfy those needs? I'd say that having a functional driver when
> compiled without the subdev API, that implements the ioctl's/controls

Having a different interface based on config options... is just
weird. What about poor people (like me) trying to develop complex
applications?

> [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> just arrived here last week. It would be nice to have xawtv3 running on it :-)
> So, if I have a lot of spare time (with is very unlikely), I might eventually 
> do something for it to work.
> 
> > I know it took me a very long time before I had a working omap3.
> 
> My first OMAP3 board with working V4L2 source just arrived last week
> :-)

You can get Nokia N900 on aliexpress. If not, they are still available
between people :-).

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-14 22:32                                 ` media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline) Pavel Machek
@ 2017-03-15  0:54                                   ` Mauro Carvalho Chehab
  2017-03-15 10:50                                     ` Philippe De Muyter
  2017-03-15 18:04                                     ` Pavel Machek
  0 siblings, 2 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-15  0:54 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

Em Tue, 14 Mar 2017 23:32:54 +0100
Pavel Machek <pavel@ucw.cz> escreveu:

> Hi!
> 
> > > > Even if they were merged, if we keep the same mean time to develop a
> > > > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > > > years to be developed.
> > > > 
> > > > There's a clear message on it:
> > > > 	- we shouldn't keep pushing for a solution via libv4l.    
> > > 
> > > Or:
> > > 
> > > 	- userspace plugin development had a very a low priority and
> > > 	  never got the attention it needed.  
> > 
> > The end result is the same: we can't count on it.
> >   
> > > 
> > > I know that's *my* reason. I rarely if ever looked at it. I always assumed
> > > Sakari and/or Laurent would look at it. If this reason is also valid for
> > > Sakari and Laurent, then it is no wonder nothing has happened in all that
> > > time.
> > > 
> > > We're all very driver-development-driven, and userspace gets very little
> > > attention in general. So before just throwing in the towel we should take
> > > a good look at the reasons why there has been little or no development: is
> > > it because of fundamental design defects, or because nobody paid attention
> > > to it?  
> > 
> > No. We should look it the other way: basically, there are patches
> > for i.MX6 driver that sends control from videonode to subdevs. 
> > 
> > If we nack apply it, who will write the userspace plugin? When
> > such change will be merged upstream?  
> 
> Well, I believe first question is: what applications would we want to
> run on complex devices? Will sending control from video to subdevs
> actually help?

I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
with those, it will likely run with any other application.

> mplayer is useful for testing... but that one already works (after you
> setup the pipeline, and configure exposure/gain).
> 
> But thats useful for testing, not really for production. Image will be
> out of focus and with wrong white balance.
> 
> What I would really like is an application to get still photos. For
> taking pictures with manual settings we need
> 
> a) units for controls: user wants to focus on 1m, and take picture
> with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> focal length is 20mm with 5mm chip.
> 
> But... autofocus/autogain would really be good to have. Thus we need:
> 
> b) for each frame, we need exposure settings and focus position at
> time frame was taken. Otherwise autofocus/autogain will be too
> slow. At least focus position is going to be tricky -- either kernel
> would have to compute focus position for us (not trivial) or we'd need
> enough information to compute it in userspace.
> 
> There are more problems: hardware-accelerated preview is not trivial
> to set up (and I'm unsure if it can be done in generic way). Still
> photos application needs to switch resolutions between preview and
> photo capture. Probably hardware-accelerated histograms are needed for
> white balance, auto gain and auto focus, ....
> 
> It seems like there's a _lot_ of stuff to be done before we have
> useful support for complex cameras...

Taking still pictures using a hardware-accelerated preview is
a sophisticated use case. I don't know any userspace application
that does that. Ok, several allow taking snapshots, by simply
storing the image of the current frame.

> (And I'm not sure... when application such as skype is running, is
> there some way to run autogain/autofocus/autowhitebalance? Is that
> something we want to support?)

Autofocus, no. Autogain/autowhite can be done via libv4l, provided that
it can access the device's controls via the /dev/video devnode. Other
applications may be using other similar algorithms.

Ok, they don't use the histograms provided by the SoC. So, they do it in
software, which is slower. Still, it works fine when the light
conditions don't change too fast.
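
As an illustration of the kind of access such a software loop needs (a
minimal sketch, not libv4l's actual code; V4L2_CID_GAIN is just an example
control), stepping a gain control through the regular video node:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/* Read the current gain from the video node and nudge it by 'step'.
 * A real auto-gain loop would clamp to the range reported by
 * VIDIOC_QUERYCTRL and decide 'step' from image statistics. */
static int adjust_gain(int fd, int step)
{
	struct v4l2_control ctrl;

	memset(&ctrl, 0, sizeof(ctrl));
	ctrl.id = V4L2_CID_GAIN;

	if (ioctl(fd, VIDIOC_G_CTRL, &ctrl) < 0)
		return -1;

	ctrl.value += step;
	return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/video0");
		return 1;
	}
	if (adjust_gain(fd, 1) < 0)
		perror("gain adjustment");
	close(fd);
	return 0;
}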

> > If we don't have answers to any of the above questions, we should not
> > nack it.
> > 
> > That's said, that doesn't prevent merging a libv4l plugin if/when
> > someone can find time/interest to develop it.  
> 
> I believe other question is: will not having same control on main
> video device and subdevs be confusing? Does it actually help userspace
> in any way? Yes, we can make controls accessible to old application,
> but does it make them more useful? 

Yes. As I said, libv4l (and some apps) have logic inside to adjust
the image via brightness, contrast and white balance controls, using the
video devnode. They don't talk the subdev API. So, if those controls
aren't exported, they won't be able to provide a good quality image.

> > > In addition, I suspect end-users of these complex devices don't really care
> > > about a plugin: they want full control and won't typically use generic
> > > applications. If they would need support for that, we'd have seen much more
> > > interest. The main reason for having a plugin is to simplify testing and
> > > if this is going to be used on cheap hobbyist devkits.  
> > 
> > What are the needs for a cheap hobbyist devkit owner? Do we currently
> > satisfy those needs? I'd say that having a functional driver when
> > compiled without the subdev API, that implements the ioctl's/controls  
> 
> Having different interface based on config options... is just
> weird. What about poor people (like me) trying to develop complex
> applications?

Well, that could be done using other mechanisms, like a modprobe
parameter or by switching the behaviour when a subdev interface is
opened. I don't see much trouble in allowing access to a control via
both interfaces.

> 
> > [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> > just arrived here last week. It would be nice to have xawtv3 running on it :-)
> > So, if I have a lot of spare time (with is very unlikely), I might eventually 
> > do something for it to work.
> >   
> > > I know it took me a very long time before I had a working omap3.  
> > 
> > My first OMAP3 board with working V4L2 source just arrived last week
> > :-)  
> 
> You can get Nokia N900 on aliexpress. If not, they are still available
> between people :-)

I have one. Unfortunately, I never had a chance to use it, as the display
stopped working one week after I got it.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-15  0:54                                   ` Mauro Carvalho Chehab
@ 2017-03-15 10:50                                     ` Philippe De Muyter
  2017-03-15 18:55                                       ` Nicolas Dufresne
  2017-03-15 18:04                                     ` Pavel Machek
  1 sibling, 1 reply; 228+ messages in thread
From: Philippe De Muyter @ 2017-03-15 10:50 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Pavel Machek, Hans Verkuil, Sakari Ailus,
	Russell King - ARM Linux, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Tue, Mar 14, 2017 at 09:54:31PM -0300, Mauro Carvalho Chehab wrote:
> Em Tue, 14 Mar 2017 23:32:54 +0100
> Pavel Machek <pavel@ucw.cz> escreveu:
> 
> > 
> > Well, I believe first question is: what applications would we want to
> > run on complex devices? Will sending control from video to subdevs
> > actually help?
> 
> I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> with those, it will likely run with any other application.
> 

I would like to add the 'v4l2src' plugin of gstreamer, and on the imx6 its
imx-specific counterpart 'imxv4l2videosrc' from the gstreamer-imx package
at https://github.com/Freescale/gstreamer-imx, and 'v4l2-ctl'.

Philippe

-- 
Philippe De Muyter +32 2 6101532 Macq SA rue de l'Aeronef 2 B-1140 Bruxelles

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-15  0:54                                   ` Mauro Carvalho Chehab
  2017-03-15 10:50                                     ` Philippe De Muyter
@ 2017-03-15 18:04                                     ` Pavel Machek
  2017-03-15 20:26                                       ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 228+ messages in thread
From: Pavel Machek @ 2017-03-15 18:04 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 5973 bytes --]

Hi!

> > Well, I believe first question is: what applications would we want to
> > run on complex devices? Will sending control from video to subdevs
> > actually help?
> 
> I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> with those, it will likely run with any other application.

I'll take a look when I have better internet access.

> > mplayer is useful for testing... but that one already works (after you
> > setup the pipeline, and configure exposure/gain).
> > 
> > But thats useful for testing, not really for production. Image will be
> > out of focus and with wrong white balance.
> > 
> > What I would really like is an application to get still photos. For
> > taking pictures with manual settings we need
> > 
> > a) units for controls: user wants to focus on 1m, and take picture
> > with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> > focal length is 20mm with 5mm chip.
> > 
> > But... autofocus/autogain would really be good to have. Thus we need:
> > 
> > b) for each frame, we need exposure settings and focus position at
> > time frame was taken. Otherwise autofocus/autogain will be too
> > slow. At least focus position is going to be tricky -- either kernel
> > would have to compute focus position for us (not trivial) or we'd need
> > enough information to compute it in userspace.
> > 
> > There are more problems: hardware-accelerated preview is not trivial
> > to set up (and I'm unsure if it can be done in generic way). Still
> > photos application needs to switch resolutions between preview and
> > photo capture. Probably hardware-accelerated histograms are needed for
> > white balance, auto gain and auto focus, ....
> > 
> > It seems like there's a _lot_ of stuff to be done before we have
> > useful support for complex cameras...
> 
> Taking still pictures using a hardware-accelerated preview is
> a sophisticated use case. I don't know any userspace application
> that does that. Ok, several allow taking snapshots, by simply
> storing the image of the current frame.

Well, there are applications that take still pictures. Android has
one. Maemo has another. Then there's fcam-dev. It's open source; with a
modified kernel it is fully usable. I have a version that runs on a recent
nearly-mainline kernel on the N900.

So yes, I'd like a solution for problems a) and b).

> > (And I'm not sure... when application such as skype is running, is
> > there some way to run autogain/autofocus/autowhitebalance? Is that
> > something we want to support?)
> 
> Autofocus no. Autogain/Autowhite can be done via libv4l, provided that
> it can access the device's controls via /dev/video devnode. Other
> applications may be using some other similar algorithms.
> 
> Ok, they don't use histograms provided by the SoC. So, they do it in
> software, with is slower. Still, it works fine when the light
> conditions don't change too fast.

I guess it is going to work well enough, with higher CPU
usage. The question is whether a camera without autofocus is usable. I'd
say "not really".

> > I believe other question is: will not having same control on main
> > video device and subdevs be confusing? Does it actually help userspace
> > in any way? Yes, we can make controls accessible to old application,
> > but does it make them more useful? 
> 
> Yes. As I said, libv4l (and some apps) have logic inside to adjust
> the image via bright, contrast and white balance controls, using the
> video devnode. They don't talk subdev API. So, if those controls
> aren't exported, they won't be able to provide a good quality image.

The next question is whether libv4l will do the right thing if we just put
all controls on one node. For example, on the N900 you have exposure/gain
and brightness. But the brightness is applied at the preview phase, so it
is "basically useless". You really need to adjust the image using the
exposure/gain.

> > > > In addition, I suspect end-users of these complex devices don't really care
> > > > about a plugin: they want full control and won't typically use generic
> > > > applications. If they would need support for that, we'd have seen much more
> > > > interest. The main reason for having a plugin is to simplify testing and
> > > > if this is going to be used on cheap hobbyist devkits.  
> > > 
> > > What are the needs for a cheap hobbyist devkit owner? Do we currently
> > > satisfy those needs? I'd say that having a functional driver when
> > > compiled without the subdev API, that implements the ioctl's/controls  
> > 
> > Having different interface based on config options... is just
> > weird. What about poor people (like me) trying to develop complex
> > applications?
> 
> Well, that could be done using other mechanisms, like a modprobe
> parameter or by switching the behaviour if a subdev interface is
> opened. I don't see much trouble on allowing accessing a control via
> both interfaces.

If we really want to go that way (isn't modifying the library to access
the right files quite easy?), I believe a non-confusing option would be
to have '/dev/video0 -- omap3 camera for legacy applications', which
would include all the controls.

> > You can get Nokia N900 on aliexpress. If not, they are still available
> > between people :-)
> 
> I have one. Unfortunately, I never had a chance to use it, as the display
> stopped working one week after I get it.

Well, I guess the easiest option is to just get another one :-).

But otoh -- the N900 is quite usable without the screen. The 0xffff tool can
be used to boot the kernel, then you can use nfsroot and usb
networking. It also has a serial port (over a strange
connector). Connecting over ssh over usb networking is actually how I do
most of the v4l work.

Best regards,

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-15 10:50                                     ` Philippe De Muyter
@ 2017-03-15 18:55                                       ` Nicolas Dufresne
  2017-03-16  9:26                                         ` Philipp Zabel
  0 siblings, 1 reply; 228+ messages in thread
From: Nicolas Dufresne @ 2017-03-15 18:55 UTC (permalink / raw)
  To: Philippe De Muyter, Mauro Carvalho Chehab
  Cc: Pavel Machek, Hans Verkuil, Sakari Ailus,
	Russell King - ARM Linux, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 1374 bytes --]

On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > with those, it will likely run with any other application.
> > 
> 
> I would like to add the 'v4l2src' plugin of gstreamer, and on the
> imx6 its

While it would be nice if somehow you would get v4l2src to work (in
some legacy/emulation mode through libv4l2), the longer plan is to
implement a smart bin that handles several v4l2src elements and can do
the required interactions, so we can expose a level of controls similar
to what is found in Android Camera HAL3, and maybe go even further,
assuming userspace can change the media tree at run-time. We might be a
long way from there, especially since some of the features depend on how
much the hardware can do. Just being able to figure out how to build the
MC tree dynamically seems really hard when thinking of a generic
mechanism. Also, the Request API will be needed.

I think for this one, we'll need some userspace driver that enables the
features (rather than hiding them), and that's what I'd be looking for
from libv4l2 in this regard.

> imx-specific counterpart 'imxv4l2videosrc' from the gstreamer-imx
> package
> at https://github.com/Freescale/gstreamer-imx, and 'v4l2-ctl'.

This one is specific to IMX hardware using the vendor driver. You can
probably ignore that.

Nicolas

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-15 18:04                                     ` Pavel Machek
@ 2017-03-15 20:26                                       ` Mauro Carvalho Chehab
  2017-03-16 22:11                                         ` Pavel Machek
  0 siblings, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-15 20:26 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Wed, 15 Mar 2017 19:04:21 +0100
Pavel Machek <pavel@ucw.cz> wrote:

> Hi!
> 
> > > Well, I believe first question is: what applications would we want to
> > > run on complex devices? Will sending control from video to subdevs
> > > actually help?  
> > 
> > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > with those, it will likely run with any other application.  
> 
> I'll take a look when I'm at better internet access.

Ok.

> > > mplayer is useful for testing... but that one already works (after you
> > > setup the pipeline, and configure exposure/gain).
> > > 
> > > But thats useful for testing, not really for production. Image will be
> > > out of focus and with wrong white balance.
> > > 
> > > What I would really like is an application to get still photos. For
> > > taking pictures with manual settings we need
> > > 
> > > a) units for controls: user wants to focus on 1m, and take picture
> > > with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> > > focal length is 20mm with 5mm chip.
> > > 
> > > But... autofocus/autogain would really be good to have. Thus we need:
> > > 
> > > b) for each frame, we need exposure settings and focus position at
> > > time frame was taken. Otherwise autofocus/autogain will be too
> > > slow. At least focus position is going to be tricky -- either kernel
> > > would have to compute focus position for us (not trivial) or we'd need
> > > enough information to compute it in userspace.
> > > 
> > > There are more problems: hardware-accelerated preview is not trivial
> > > to set up (and I'm unsure if it can be done in generic way). Still
> > > photos application needs to switch resolutions between preview and
> > > photo capture. Probably hardware-accelerated histograms are needed for
> > > white balance, auto gain and auto focus, ....
> > > 
> > > It seems like there's a _lot_ of stuff to be done before we have
> > > useful support for complex cameras...  
> > 
> > Taking still pictures using a hardware-accelerated preview is
> > a sophisticated use case. I don't know any userspace application
> > that does that. Ok, several allow taking snapshots, by simply
> > storing the image of the current frame.  
> 
> Well, there are applications that take still pictures. Android has
> one. Maemo has another. Then there's fcam-dev. It's open source; with a
> modified kernel it is fully usable. I have a version that runs on a
> recent, nearly-mainline kernel on the N900.

Hmm... it seems that FCam is specific to the N900:
	http://fcam.garage.maemo.org/

If so, then we have just the opposite problem here: if we want it to be
used as a generic application, it very likely requires an OMAP3-specific
graph/subdevs.

> So yes, I'd like a solution for problems a) and b).
> 
> > > (And I'm not sure... when application such as skype is running, is
> > > there some way to run autogain/autofocus/autowhitebalance? Is that
> > > something we want to support?)  
> > 
> > Autofocus no. Autogain/Autowhite can be done via libv4l, provided that
> > it can access the device's controls via /dev/video devnode. Other
> > applications may be using some other similar algorithms.
> > 
> > Ok, they don't use histograms provided by the SoC. So, they do it in
> > software, which is slower. Still, it works fine when the light
> > conditions don't change too fast.  
> 
> I guess it is going to work well enough with higher CPU
> usage.

Yes.

> The question is whether a camera without autofocus is usable. I'd say "not
> really".

That actually depends on the sensor and how focus is adjusted.

I'm testing right now this camera module for RPi:
   https://www.raspberrypi.org/products/camera-module-v2/

I might be wrong, but this sensor doesn't seem to have auto-focus.
Instead, it seems to use a wide-angle lens. So, except when the
object is too close, the focus looks OK.

> > > I believe other question is: will not having same control on main
> > > video device and subdevs be confusing? Does it actually help userspace
> > > in any way? Yes, we can make controls accessible to old application,
> > > but does it make them more useful?   
> > 
> > Yes. As I said, libv4l (and some apps) have logic inside to adjust
> > the image via bright, contrast and white balance controls, using the
> > video devnode. They don't talk subdev API. So, if those controls
> > aren't exported, they won't be able to provide a good quality image.  
> 
> The next question is whether libv4l will do the right thing if we just put
> all controls on one node. For example, on the N900 you have exposure/gain
> and brightness. But the brightness is applied at the preview phase, so it
> is "basically useless". You really need to adjust the image using
> exposure/gain.

I've no idea, but I suspect it shouldn't be hard to teach libv4l to
prefer using exposure/gain instead of brightness when available.

> > > > > In addition, I suspect end-users of these complex devices don't really care
> > > > > about a plugin: they want full control and won't typically use generic
> > > > > applications. If they would need support for that, we'd have seen much more
> > > > > interest. The main reason for having a plugin is to simplify testing and
> > > > > if this is going to be used on cheap hobbyist devkits.    
> > > > 
> > > > What are the needs for a cheap hobbyist devkit owner? Do we currently
> > > > satisfy those needs? I'd say that having a functional driver when
> > > > compiled without the subdev API, that implements the ioctl's/controls    
> > > 
> > > Having different interface based on config options... is just
> > > weird. What about poor people (like me) trying to develop complex
> > > applications?  
> > 
> > Well, that could be done using other mechanisms, like a modprobe
> > parameter or by switching the behaviour if a subdev interface is
> > opened. I don't see much trouble in allowing access to a control via
> > both interfaces.
> 
> If we really want to go that way (isn't modifying the library to access
> the right files quite easy?), I believe a non-confusing option would be
> to have '/dev/video0 -- omap3 camera for legacy applications', which
> would include all the controls.

Yeah, keeping /dev/video0 reserved for generic applications is something
that could work. Not sure how easy it would be to implement.

> 
> > > You can get Nokia N900 on aliexpress. If not, they are still available
> > > between people :-)  
> > 
> > I have one. Unfortunately, I never had a chance to use it, as the display
> stopped working one week after I got it.
> 
> Well, I guess the easiest option is to just get another one :-).

:-)  Well, I guess very few units of the N900 were sold in Brazil. Importing
one is too expensive, due to taxes.

> But otoh -- the N900 is quite usable without the screen. The 0xffff tool
> can be used to boot the kernel, and then you can use nfsroot and usb
> networking. It also has a serial port (over a strange connector).
> Connecting over ssh over the usb network is actually how I do most of the
> v4l work.

If you pass me the pointers, I can try it when I have some time.

Anyway, I got myself an ISEE IGEPv2, with the expansion board:
	https://www.isee.biz/products/igep-processor-boards/igepv2-dm3730
	https://www.isee.biz/products/igep-expansion-boards/igepv2-expansion

The expansion board comes with a tvp5150 analog TV demod. So, with
this device, I can simply connect it to a composite input signal.
I have some sources here that I can use to test it.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-15 18:55                                       ` Nicolas Dufresne
@ 2017-03-16  9:26                                         ` Philipp Zabel
  2017-03-16  9:47                                           ` Philippe De Muyter
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-03-16  9:26 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Philippe De Muyter, Mauro Carvalho Chehab, Pavel Machek,
	Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > with those, it will likely run with any other application.
> > > 
> > 
> > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > imx6 its
> 
> While it would be nice if somehow you would get v4l2src to work (in
> some legacy/emulation mode through libv4l2),

v4l2src works just fine, provided the pipeline is configured manually in
advance via media-ctl.
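
For reference, what media-ctl does when it enables such a link boils down
to the MEDIA_IOC_SETUP_LINK ioctl on the media device node. A minimal
sketch; the entity and pad numbers below are made up and would normally be
discovered via MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS:

  /* Minimal sketch: enable one media link, roughly what media-ctl does
   * before v4l2src opens the video node. IDs below are hypothetical. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/media.h>

  int main(void)
  {
          struct media_link_desc link;
          int fd = open("/dev/media0", O_RDWR);

          if (fd < 0) {
                  perror("/dev/media0");
                  return 1;
          }

          memset(&link, 0, sizeof(link));
          link.source.entity = 5;         /* e.g. a CSI subdev (made up) */
          link.source.index = 2;          /* its IDMAC source pad (made up) */
          link.sink.entity = 12;          /* the capture video node (made up) */
          link.sink.index = 0;
          link.flags = MEDIA_LNK_FL_ENABLED;

          if (ioctl(fd, MEDIA_IOC_SETUP_LINK, &link) < 0)
                  perror("MEDIA_IOC_SETUP_LINK");
          return 0;
  }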

>  the longer plan is to
> implement a smart bin that handles several v4l2src elements and can do
> the required interactions, so we can expose a level of controls similar
> to what is found in Android Camera HAL3, and maybe go even further,
> assuming userspace can change the media tree at run-time. We might be a
> long way from there, especially since some of the features depend on how
> much the hardware can do. Just being able to figure out how to build the
> MC tree dynamically seems really hard when thinking of a generic
> mechanism. Also, the Request API will be needed.
> 
> I think for this one, we'll need some userspace driver that enables the
> features (rather than hiding them), and that's what I'd be looking for
> from libv4l2 in this regard.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-16  9:26                                         ` Philipp Zabel
@ 2017-03-16  9:47                                           ` Philippe De Muyter
  2017-03-16 10:01                                             ` Philipp Zabel
  0 siblings, 1 reply; 228+ messages in thread
From: Philippe De Muyter @ 2017-03-16  9:47 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Nicolas Dufresne, Mauro Carvalho Chehab, Pavel Machek,
	Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Thu, Mar 16, 2017 at 10:26:00AM +0100, Philipp Zabel wrote:
> On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> > On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > > with those, it will likely run with any other application.
> > > > 
> > > 
> > > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > > imx6 its
> > 
> > While it would be nice if somehow you would get v4l2src to work (in
> > some legacy/emulation mode through libv4l2),
> 
> v4l2src works just fine, provided the pipeline is configured manually in
> advance via media-ctl.

Including choosing the framerate?  Sorry, I have no time these days
to test it myself.

And I cited imxv4l2videosrc for its ability to provide the physical address
of the image buffers for further processing by other (not necessarily next
in the gstreamer pipeline, or for all frames) hardware-accelerated plugins like
the h.264 video encoder.  As I am stuck with the fsl/nxp kernel and driver on that
matter, I don't know how the interfaces have evolved in current linux kernels.

BR

Philippe

-- 
Philippe De Muyter +32 2 6101532 Macq SA rue de l'Aeronef 2 B-1140 Bruxelles

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-16  9:47                                           ` Philippe De Muyter
@ 2017-03-16 10:01                                             ` Philipp Zabel
  2017-03-16 10:19                                               ` Philippe De Muyter
  0 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-03-16 10:01 UTC (permalink / raw)
  To: Philippe De Muyter
  Cc: Nicolas Dufresne, Mauro Carvalho Chehab, Pavel Machek,
	Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Thu, 2017-03-16 at 10:47 +0100, Philippe De Muyter wrote:
> On Thu, Mar 16, 2017 at 10:26:00AM +0100, Philipp Zabel wrote:
> > On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> > > On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > > > with those, it will likely run with any other application.
> > > > > 
> > > > 
> > > > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > > > imx6 its
> > > 
> > > While it would be nice if somehow you would get v4l2src to work (in
> > > some legacy/emulation mode through libv4l2),
> > 
> > v4l2src works just fine, provided the pipeline is configured manually in
> > advance via media-ctl.
> 
> Including choosing the framerate?  Sorry, I have no time these days
> to test it myself.

No, the framerate is set with media-ctl on the CSI output pad. To really
choose the framerate, the element would indeed need a deeper
understanding of the pipeline, as the resulting framerate depends on at
least the source v4l2_subdevice (sensor) framerate and the CSI frame
skipping.
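
For completeness, setting the frame interval with media-ctl on that pad
comes down to VIDIOC_SUBDEV_S_FRAME_INTERVAL on the subdev node. A rough
sketch, with the subdev path and pad index made up for illustration:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/v4l2-subdev.h>

  int main(void)
  {
          struct v4l2_subdev_frame_interval fi;
          int fd = open("/dev/v4l-subdev2", O_RDWR);  /* assumed CSI subdev */

          if (fd < 0) {
                  perror("open subdev");
                  return 1;
          }

          memset(&fi, 0, sizeof(fi));
          fi.pad = 2;                     /* assumed IDMAC output pad */
          fi.interval.numerator = 1;      /* request 30 fps, i.e. a   */
          fi.interval.denominator = 30;   /* 1/30 s frame interval    */

          if (ioctl(fd, VIDIOC_SUBDEV_S_FRAME_INTERVAL, &fi) < 0)
                  perror("VIDIOC_SUBDEV_S_FRAME_INTERVAL");
          return 0;
  }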

> And I cited imxv4l2videosrc for its ability to provide the physical address
> of the image buffers for further processing by other (not necessarily next
> in the gstreamer pipeline, or for all frames) hardware-accelerated plugins like
> the h.264 video encoder.  As I am stuck with the fsl/nxp kernel and driver on that
> matter, I don't know how the interfaces have evolved in current linux kernels.

The physical address of the image buffers is hidden from userspace by
dma-buf objects, but those can be passed around to the next driver
without copying the image data.
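
A rough sketch of that hand-off, assuming buffers were already allocated
with VIDIOC_REQBUFS on both queues (capture with V4L2_MEMORY_MMAP, the
consumer with V4L2_MEMORY_DMABUF); device paths and indices are made up:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  int main(void)
  {
          int cap = open("/dev/video0", O_RDWR);   /* capture device (assumed) */
          int enc = open("/dev/video11", O_RDWR);  /* consumer, e.g. encoder (assumed) */
          struct v4l2_exportbuffer exp;
          struct v4l2_buffer buf;

          if (cap < 0 || enc < 0) {
                  perror("open");
                  return 1;
          }

          /* Export capture buffer 0 as a dma-buf file descriptor. */
          memset(&exp, 0, sizeof(exp));
          exp.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
          exp.index = 0;
          if (ioctl(cap, VIDIOC_EXPBUF, &exp) < 0) {
                  perror("VIDIOC_EXPBUF");
                  return 1;
          }

          /* Queue the same memory on the consumer's OUTPUT queue by fd,
           * so no image data is ever copied. */
          memset(&buf, 0, sizeof(buf));
          buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
          buf.memory = V4L2_MEMORY_DMABUF;
          buf.index = 0;
          buf.m.fd = exp.fd;
          if (ioctl(enc, VIDIOC_QBUF, &buf) < 0)
                  perror("VIDIOC_QBUF");

          return 0;
  }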

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-16 10:01                                             ` Philipp Zabel
@ 2017-03-16 10:19                                               ` Philippe De Muyter
  0 siblings, 0 replies; 228+ messages in thread
From: Philippe De Muyter @ 2017-03-16 10:19 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Nicolas Dufresne, Mauro Carvalho Chehab, Pavel Machek,
	Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam

On Thu, Mar 16, 2017 at 11:01:56AM +0100, Philipp Zabel wrote:
> On Thu, 2017-03-16 at 10:47 +0100, Philippe De Muyter wrote:
> > On Thu, Mar 16, 2017 at 10:26:00AM +0100, Philipp Zabel wrote:
> > > On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> > > > On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > > > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > > > > with those, it will likely run with any other application.
> > > > > > 
> > > > > 
> > > > > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > > > > imx6 its
> > > > 
> > > > While it would be nice if somehow you would get v4l2src to work (in
> > > > some legacy/emulation mode through libv4l2),
> > > 
> > > v4l2src works just fine, provided the pipeline is configured manually in
> > > advance via media-ctl.
> > 
> > Including choosing the framerate?  Sorry, I have no time these days
> > to test it myself.
> 
> No, the framerate is set with media-ctl on the CSI output pad. To really
> choose the framerate, the element would indeed need a deeper
> understanding of the pipeline, as the resulting framerate depends on at
> least the source v4l2_subdevice (sensor) framerate and the CSI frame
> skipping.

Count me in then as a supporter of Steve's "v4l2-mc: add a function to
inherit controls from a pipeline" patch.

> > of the image buffers for further processing by other (not necessarily next
> > in the gstreamer pipeline, or for all frames) hardware-accelerated plugins like
> > the h.264 video encoder.  As I am stuck with the fsl/nxp kernel and driver on that
> > matter, I don't know how the interfaces have evolved in current linux kernels.
> 
> The physical address of the image buffers is hidden from userspace by
> dma-buf objects, but those can be passed around to the next driver
> without copying the image data.

OK

thanks

Philippe

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)
  2017-03-15 20:26                                       ` Mauro Carvalho Chehab
@ 2017-03-16 22:11                                         ` Pavel Machek
  0 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-16 22:11 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 5420 bytes --]

Hi!

> > > > mplayer is useful for testing... but that one already works (after you
> > > > setup the pipeline, and configure exposure/gain).
> > > > 
> > > > But thats useful for testing, not really for production. Image will be
> > > > out of focus and with wrong white balance.
> > > > 
> > > > What I would really like is an application to get still photos. For
> > > > taking pictures with manual settings we need
> > > > 
> > > > a) units for controls: user wants to focus on 1m, and take picture
> > > > with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> > > > focal length is 20mm with 5mm chip.
> > > > 
> > > > But... autofocus/autogain would really be good to have. Thus we need:
> > > > 
> > > > b) for each frame, we need exposure settings and focus position at
> > > > time frame was taken. Otherwise autofocus/autogain will be too
> > > > slow. At least focus position is going to be tricky -- either kernel
> > > > would have to compute focus position for us (not trivial) or we'd need
> > > > enough information to compute it in userspace.
> > > > 
> > > > There are more problems: hardware-accelerated preview is not trivial
> > > > to set up (and I'm unsure if it can be done in generic way). Still
> > > > photos application needs to switch resolutions between preview and
> > > > photo capture. Probably hardware-accelerated histograms are needed for
> > > > white balance, auto gain and auto focus, ....
> > > > 
> > > > It seems like there's a _lot_ of stuff to be done before we have
> > > > useful support for complex cameras...  
> > > 
> > > Taking still pictures using a hardware-accelerated preview is
> > > a sophisticated use case. I don't know any userspace application
> > > that does that. Ok, several allow taking snapshots, by simply
> > > storing the image of the current frame.  
> > 
> > Well, there are applications that take still pictures. Android has
> > one. Maemo has another. Then there's fcam-dev. It's open source; with a
> > modified kernel it is fully usable. I have a version that runs on a
> > recent, nearly-mainline kernel on the N900.
> 
> Hmm... it seems that FCam is specific to the N900:
> 	http://fcam.garage.maemo.org/
>
> If so, then we have just the opposite problem here: if we want it to be
> used as a generic application, it very likely requires an OMAP3-specific
> graph/subdevs.

Well... there's a quick and great version on maemo.org. I do have a local
version (still somewhat N900-specific), but it no longer uses hardware
histogram/sharpness support. It should be almost generic.

> > So yes, I'd like solution for problems a) and b).

...but it has the camera parameters hardcoded (problem a) and is slow
(problem b).

> > The question is whether a camera without autofocus is usable. I'd say "not
> > really".
> 
> That actually depends on the sensor and how focus is adjusted.
> 
> I'm testing right now this camera module for RPi:
>    https://www.raspberrypi.org/products/camera-module-v2/
> 
> I might be wrong, but this sensor doesn't seem to have auto-focus.
> Instead, it seems to use a wide-angle lens. So, except when the
> object is too close, the focus looks OK.

Well, cameras designed without autofocus are somewhat usable that way.
But cameras with autofocus don't work too well without it.

> > If we really want to go that way (isn't modifying the library to access
> > the right files quite easy?), I believe a non-confusing option would be
> > to have '/dev/video0 -- omap3 camera for legacy applications', which
> > would include all the controls.
> 
> Yeah, keeping /dev/video0 reserved for generic applications is something
> that could work. Not sure how easy it would be to implement.

Plus advanced applications would just ignore /dev/video0.. and not be confused.

> > > > You can get Nokia N900 on aliexpress. If not, they are still available
> > > > between people :-)  
> > > 
> > > I have one. Unfortunately, I never had a chance to use it, as the display
> > > stopped working one week after I got it.
> > 
> > Well, I guess the easiest option is to just get another one :-).
> 
> :-)  Well, I guess very few units of the N900 were sold in Brazil. Importing
> one is too expensive, due to taxes.

Try asking on a local mailing list. Those machines were quite common.

> > But otoh -- the N900 is quite usable without the screen. The 0xffff tool
> > can be used to boot the kernel, and then you can use nfsroot and usb
> > networking. It also has a serial port (over a strange connector).
> > Connecting over ssh over the usb network is actually how I do most of the
> > v4l work.
> 
> If you pass me the pointers, I can try it when I have some time.

Ok, I guess I'll do that in private email.

> Anyway, I got myself an ISEE IGEPv2, with the expansion board:
> 	https://www.isee.biz/products/igep-processor-boards/igepv2-dm3730
> 	https://www.isee.biz/products/igep-expansion-boards/igepv2-expansion
> 
> The expansion board comes with a tvp5150 analog TV demod. So, with
> this device, I can simply connect it to a composite input signal.
> I have some sources here that I can use to test it.

Well... it looks like TV capture is a "solved" problem. Taking useful
photos is what is hard...


									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-14  7:55                             ` Hans Verkuil
  2017-03-14 10:21                               ` Mauro Carvalho Chehab
@ 2017-03-17 11:42                               ` Russell King - ARM Linux
  2017-03-17 11:55                                 ` Sakari Ailus
                                                   ` (2 more replies)
  2017-03-26 16:44                               ` Laurent Pinchart
  2 siblings, 3 replies; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-17 11:42 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> We're all very driver-development-driven, and userspace gets very little
> attention in general. So before just throwing in the towel we should take
> a good look at the reasons why there has been little or no development: is
> it because of fundamental design defects, or because nobody paid attention
> to it?
> 
> I strongly suspect it is the latter.
> 
> In addition, I suspect end-users of these complex devices don't really care
> about a plugin: they want full control and won't typically use generic
> applications. If they would need support for that, we'd have seen much more
> interest. The main reason for having a plugin is to simplify testing and
> if this is going to be used on cheap hobbyist devkits.

I think you're looking at it with a programmer's hat on, not a user's hat.

Are you really telling me that requiring users to 'su' to root, and then
use media-ctl to manually configure the capture device is what most
users "want" ?

Hasn't the way technology has moved towards graphical interfaces,
particularly smart phones, taught us that what the vast majority of users
want is intuitive, easy-to-use interfaces, and not the command line
with reams of documentation?

Why are smart phones so popular - it's partly because they're flashy,
but also because of the wealth of apps, and apps which follow the
philosophy of "do one job, do it well" (otherwise they get bad reviews.)

> An additional complication is simply that it is hard to find fully supported
> MC hardware. omap3 boards are hard to find these days, renesas boards are not
> easy to get, freescale isn't the most popular either. Allwinner, mediatek,
> amlogic, broadcom and qualcomm all have closed source implementations or no
> implementation at all.

Right, and that in itself tells us something - the problem that we're
trying to solve is not one that commonly exists in the real world.

Yes, the hardware we have in front of us may be very complex, but if
there's very few systems out there which are capable of making use of
all that complexity, then we're trying to solve a problem that isn't
the common case - and if it's going to take years to solve it (it
already has taken years) then it's the wrong problem to be solved.

I bet most of the problem can be eliminated if, rather than exposing
all this complexity, we instead expose a simpler capture system where
the board designer gets to "wire up" the capture system.

I'll go back to my Bayer example, because that's the simplest.  As
I've already said many times in these threads, there is only one
possible path through the iMX6 device that such a source can be used
with - it's a fixed path.  The actual path depends on the CSI2
virtual channel that the camera has been _configured_ to use, but
apart from that, it's effectively a well known set of blocks.  Such
a configuration could be placed in DT.

For RGB connected to a single parallel CSI, things get a little more
complex - capture through the CSI or through two other capture devices
for de-interlacing or other features.  However, I'm not convinced that
exposing multiple /dev/video* devices for different features for the
same video source is a sane approach - I think that's a huge usability
problem.  (The user is expected to select the capture device on iMX6
depending on the features they want, and if they want to change features,
they're expected to shut down their application and start it up on a
different capture device.)  For the most part on iMX6, there's one
path down to the CSI block, and then there's optional routing through
the rest of the IPU depending on what features you want (such as
de-interlacing.)

The complex case is a CSI2 connected camera which produces multiple
streams through differing virtual channels - and that's IMHO the only
case where we need multiple different /dev/video* capture devices to
be present.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 11:42                               ` Russell King - ARM Linux
@ 2017-03-17 11:55                                 ` Sakari Ailus
  2017-03-17 13:24                                   ` Mauro Carvalho Chehab
  2017-03-20 11:16                                   ` Hans Verkuil
  2017-03-17 12:02                                 ` Philipp Zabel
  2017-03-19 13:25                                 ` Pavel Machek
  2 siblings, 2 replies; 228+ messages in thread
From: Sakari Ailus @ 2017-03-17 11:55 UTC (permalink / raw)
  To: Russell King - ARM Linux, Hans Verkuil
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

Hi Russell,

On 03/17/17 13:42, Russell King - ARM Linux wrote:
> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
>> We're all very driver-development-driven, and userspace gets very little
>> attention in general. So before just throwing in the towel we should take
>> a good look at the reasons why there has been little or no development: is
>> it because of fundamental design defects, or because nobody paid attention
>> to it?
>>
>> I strongly suspect it is the latter.
>>
>> In addition, I suspect end-users of these complex devices don't really care
>> about a plugin: they want full control and won't typically use generic
>> applications. If they would need support for that, we'd have seen much more
>> interest. The main reason for having a plugin is to simplify testing and
>> if this is going to be used on cheap hobbyist devkits.
> 
> I think you're looking at it with a programmer's hat on, not a user's hat.
> 
> Are you really telling me that requiring users to 'su' to root, and then
> use media-ctl to manually configure the capture device is what most
> users "want" ?

It depends on who the user is. I don't think anyone is suggesting a
regular end user is the user of all these APIs: it is either an
application tailored for that given device, a skilled user with his test
scripts or, as suggested previously, a libv4l plugin that knows the device
or a generic library geared towards providing best-effort service. The
last one on this list does not exist yet and the second-to-last item
requires help.

Typically this class of devices is simply not able to provide the level of
service you're requesting without an additional user space control library
which is responsible for automatic white balance, exposure and focus.

Making use of the full potential of the hardware requires using a more
expressive interface. That's what the kernel interface must provide. If
we decide to limit ourselves to a small sub-set of that potential on the
level of the kernel interface, we have made a wrong decision. It's as
simple as that. This is why the functionality (which requires taking
a lot of policy decisions) belongs to the user space. We cannot have
multiple drivers providing multiple kernel interfaces for the same hardware.

That said, I'm not trying to provide an excuse for not having libraries
available to help the user to configure and control the device more or
less automatically even in terms of best effort. It's something that
does require attention, a lot more of it than it has received in the last
few years.

-- 
Kind regards,

Sakari Ailus
sakari.ailus@linux.intel.com

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 11:42                               ` Russell King - ARM Linux
  2017-03-17 11:55                                 ` Sakari Ailus
@ 2017-03-17 12:02                                 ` Philipp Zabel
  2017-03-17 12:16                                   ` Russell King - ARM Linux
  2017-03-19 13:25                                 ` Pavel Machek
  2 siblings, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-03-17 12:02 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Hans Verkuil, Mauro Carvalho Chehab, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Fri, 2017-03-17 at 11:42 +0000, Russell King - ARM Linux wrote:
> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> > We're all very driver-development-driven, and userspace gets very little
> > attention in general. So before just throwing in the towel we should take
> > a good look at the reasons why there has been little or no development: is
> > it because of fundamental design defects, or because nobody paid attention
> > to it?
> > 
> > I strongly suspect it is the latter.
> > 
> > In addition, I suspect end-users of these complex devices don't really care
> > about a plugin: they want full control and won't typically use generic
> > applications. If they would need support for that, we'd have seen much more
> > interest. The main reason for having a plugin is to simplify testing and
> > if this is going to be used on cheap hobbyist devkits.
> 
> I think you're looking at it with a programmer's hat on, not a user's hat.
> 
> Are you really telling me that requiring users to 'su' to root, and then
> use media-ctl to manually configure the capture device is what most
> users "want" ?
> 
> Hasn't the way technology has moved towards graphical interfaces,
> particularly smart phones, taught us that what the vast majority of users
> want is intuitive, easy-to-use interfaces, and not the command line
> with reams of documentation?
> 
> Why are smart phones so popular - it's partly because they're flashy,
> but also because of the wealth of apps, and apps which follow the
> philosophy of "do one job, do it well" (otherwise they get bad reviews.)

> > An additional complication is simply that it is hard to find fully supported
> > MC hardware. omap3 boards are hard to find these days, renesas boards are not
> > easy to get, freescale isn't the most popular either. Allwinner, mediatek,
> > amlogic, broadcom and qualcomm all have closed source implementations or no
> > implementation at all.
> 
> Right, and that in itself tells us something - the problem that we're
> trying to solve is not one that commonly exists in the real world.
> 
> Yes, the hardware we have in front of us may be very complex, but if
> there's very few systems out there which are capable of making use of
> all that complexity, then we're trying to solve a problem that isn't
> the common case - and if it's going to take years to solve it (it
> already has taken years) then it's the wrong problem to be solved.
> 
> I bet most of the problem can be eliminated if, rather than exposing
> all this complexity, we instead expose a simpler capture system where
> the board designer gets to "wire up" the capture system.
> 
> I'll go back to my Bayer example, because that's the simplest.  As
> I've already said many times in these threads, there is only one
> possible path through the iMX6 device that such a source can be used
> with - it's a fixed path.  The actual path depends on the CSI2
> virtual channel that the camera has been _configured_ to use, but
> apart from that, it's effectively a well known set of blocks.  Such
> a configuration could be placed in DT.
> 
> For RGB connected to a single parallel CSI, things get a little more
> complex - capture through the CSI or through two other capture devices
> for de-interlacing or other features.  However, I'm not convinced that
> exposing multiple /dev/video* devices for different features for the
> same video source is a sane approach - I think that's a huge usability
> problem.  (The user is expected to select the capture device on iMX6
> depending on the features they want, and if they want to change features,
> they're expected to shut down their application and start it up on a
> different capture device.)  For the most part on iMX6, there's one
> path down to the CSI block, and then there's optional routing through
> the rest of the IPU depending on what features you want (such as
> de-interlacing.)
>
> The complex case is a CSI2 connected camera which produces multiple
> streams through differing virtual channels - and that's IMHO the only
> case where we need multiple different /dev/video* capture devices to
> be present.

I wanted to have the IC PRP outputs separate because the IC PRP should
support running both the VF and ENC tasks with different parameters from
the same input. That would allow capturing two different resolutions
(up to 1024x1024) at the same time.

I think most of the simple, fixed-pipeline use cases could be handled by
libv4l2, by allowing a v4l2 subdevice path to be passed to v4l2_open. If that
function internally set up the media links to the
nearest /dev/video interface, propagated format, resolution and frame
intervals if necessary, and returned an fd to the video device, there'd be
no additional complexity for users beyond selecting the v4l2_subdev
instead of the video device.
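
Purely as an illustration of that idea -- no such helper exists in
libv4l2 today, and the names below are hypothetical -- the application
side could then stay as simple as:

  #include <fcntl.h>
  #include <stddef.h>
  #include <libv4l2.h>

  /* Hypothetical: walk the media graph from the given subdev to the
   * nearest /dev/video node, enabling links and propagating formats on
   * the way, and return that video node's path. Stubbed out here only
   * so the sketch is self-contained. */
  static int setup_pipeline_from_subdev(const char *subdev, char *video, size_t len)
  {
          (void)subdev; (void)video; (void)len;
          return -1;
  }

  int open_capture_via_subdev(const char *subdev_path)
  {
          char video_path[64];

          if (setup_pipeline_from_subdev(subdev_path, video_path,
                                         sizeof(video_path)) < 0)
                  return -1;

          /* Return an ordinary libv4l2-wrapped fd, so existing
           * applications keep working unchanged. */
          return v4l2_open(video_path, O_RDWR);
  }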

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 12:02                                 ` Philipp Zabel
@ 2017-03-17 12:16                                   ` Russell King - ARM Linux
  2017-03-17 17:49                                     ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-17 12:16 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Hans Verkuil, Mauro Carvalho Chehab, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Fri, Mar 17, 2017 at 01:02:07PM +0100, Philipp Zabel wrote:
> I think most of the simple, fixed-pipeline use cases could be handled by
> libv4l2, by allowing a v4l2 subdevice path to be passed to v4l2_open. If that
> function internally set up the media links to the
> nearest /dev/video interface, propagated format, resolution and frame
> intervals if necessary, and returned an fd to the video device, there'd be
> no additional complexity for users beyond selecting the v4l2_subdev
> instead of the video device.

... which would then require gstreamer to be modified too. The gstreamer
v4l2 plugin looks for /dev/video* or /dev/v4l2/video* devices and monitors
these for changes, so gstreamer applications know which capture devices
are available:

  const gchar *paths[] = { "/dev", "/dev/v4l2", NULL };
  const gchar *names[] = { "video", NULL };

  /* Add some depedency, so the dynamic features get updated upon changes in
   * /dev/video* */
  gst_plugin_add_dependency (plugin,
      NULL, paths, names, GST_PLUGIN_DEPENDENCY_FLAG_FILE_NAME_IS_PREFIX);

I haven't checked yet whether sys/v4l2/gstv4l2deviceprovider.c knows
anything about the v4l2 subdevs.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 11:55                                 ` Sakari Ailus
@ 2017-03-17 13:24                                   ` Mauro Carvalho Chehab
  2017-03-17 13:51                                     ` Philipp Zabel
  2017-03-21 11:11                                     ` Pavel Machek
  2017-03-20 11:16                                   ` Hans Verkuil
  1 sibling, 2 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-17 13:24 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Russell King - ARM Linux, Hans Verkuil, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Fri, 17 Mar 2017 13:55:33 +0200
Sakari Ailus <sakari.ailus@linux.intel.com> wrote:

> Hi Russell,
> 
> On 03/17/17 13:42, Russell King - ARM Linux wrote:
> > On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:  
> >> We're all very driver-development-driven, and userspace gets very little
> >> attention in general. So before just throwing in the towel we should take
> >> a good look at the reasons why there has been little or no development: is
> >> it because of fundamental design defects, or because nobody paid attention
> >> to it?
> >>
> >> I strongly suspect it is the latter.
> >>
> >> In addition, I suspect end-users of these complex devices don't really care
> >> about a plugin: they want full control and won't typically use generic
> >> applications. If they would need support for that, we'd have seen much more
> >> interest. The main reason for having a plugin is to simplify testing and
> >> if this is going to be used on cheap hobbyist devkits.  
> > 
> > I think you're looking at it with a programmer's hat on, not a user's hat.

I fully agree with you: whatever solution is provided, this should be fully
transparent for the end user, no matter what V4L2 application he's using.

> > 
> > Are you really telling me that requiring users to 'su' to root, and then
> > use media-ctl to manually configure the capture device is what most
> > users "want" ?  

The need for su can easily be fixed with a simple addition to the udev
rules.d, for it to consider the media controller as part of the v4l stuff:

--- udev/rules.d/50-udev-default.rules	2017-02-01 19:45:35.000000000 -0200
+++ udev/rules.d/50-udev-default.rules	2015-08-29 07:54:16.033122614 -0300
@@ -30,6 +30,7 @@ SUBSYSTEM=="mem", KERNEL=="mem|kmem|port
 SUBSYSTEM=="input", GROUP="input"
 SUBSYSTEM=="input", KERNEL=="js[0-9]*", MODE="0664"
 
+SUBSYSTEM=="media", GROUP="video"
 SUBSYSTEM=="video4linux", GROUP="video"
 SUBSYSTEM=="graphics", GROUP="video"
 SUBSYSTEM=="drm", GROUP="video"

Ok, someone should base it on upstream and submit this to
udev maintainers[1].

[1] On a side note, it would also be possible to have a udev rule that
    would automatically set up the pipeline when the media device pops
    up.

   I wrote something like that for remote controllers, which gets
   installed together with the v4l-utils package: when a remote controller
   is detected, it checks the driver and the remote controller table
   that the driver wants and loads it from userspace.

   It would be possible to do something like that for MC, but someone
   would need to do such a task. Of course, that would require having
   a way for a generic application to detect the board type and be
   able to automatically set up the pipelines. So, we're back to
   the initial problem that nobody has been able to do that so far.

> It depends on who the user is. I don't think anyone is suggesting a
> regular end user is the user of all these APIs: it is either an
> application tailored for that given device, a skilled user with his test
> scripts 

Test scripts are just test scripts, meant for development purposes.
We shouldn't even consider this seriously.

> Making use of the full potential of the hardware requires using a more
> expressive interface. 

That's the core of the problem: most users don't need the "full potential
of the hardware". It is actually worse than that: several boards
don't allow the "full potential" of the SoC capabilities.

Ok, when the user requires the "full potential", they may need a complex
tailored application. But, in most cases, all that is needed is to
support a simple application that controls a video stream via
/dev/video0.

> That's what the kernel interface must provide. If
> we decide to limit ourselves to a small sub-set of that potential on the
> level of the kernel interface, we have made a wrong decision. It's as
> simple as that. This is why the functionality (which requires taking
> a lot of policy decisions) belongs to the user space. We cannot have
> multiple drivers providing multiple kernel interfaces for the same hardware.

I strongly disagree. Looking only at the hardware capabilities without
having a solution to provide what the user wants is *wrong*.

The project decisions should be based on the common use cases and, if
possible and not too expensive/complex, also cover exotic cases.

The V4L2 API was designed to fulfill the users' needs. Drivers should
take that into consideration when making policy decisions.

In order to give you some examples, before the V4L2 API, the bttv driver
used to have its own set of ioctls, meant to provide functionality based
on its hardware capabilities (like selecting other standards like PAL/M).
The end result is that applications written for bttv were not generic
enough[2]. The V4L2 API was designed to be generic enough to cover the
common use-cases.

[2] As bttv boards were very popular, most userspace apps didn't
    work on other hardware. The big advantage of the V4L2 API is that it
    contains a set of functions that is good enough to control any
    hardware. Ok, some features are missing. 

   For example, in the past, one could use the bttv hardware to "decrypt"
   analog cable TV using custom ioctls. Those ioctls got removed during
   the V4L2 conversion, as they are specific to the way bttv hardware
   works.
 
   Also, the cx88 hardware could be used as a generic fast A/D converter, 
   using a custom set of ioctls - something similar to SDR. In this
   specific case, such a patchset was never merged upstream.


To give you a few examples of policy decisions taken by the drivers
in order to fulfill the users' needs, the Conexant chipsets (supported by
the bttv, cx88, cx231xx and cx25821 drivers, among others) provide way
more possibilities than what the drivers support.

Basically, they all have fast A/D converters, running at around
27-30 MHz clock. The samples of the A/D can be passed as-is to
userspace and/or handled by some IP blocks inside the chips.

Also, even the simplest one (bttv) uses a RISC engine with firmware that
is built dynamically at runtime by the kernel driver. Such firmware
needs to set up several DMA pipelines.

For example, the cx88 driver sets those DMA pipelines
(see drivers/media/pci/cx88/cx88-core.c):

 * FIFO space allocations:
 *    channel  21    (y video)  - 10.0k
 *    channel  22    (u video)  -  2.0k
 *    channel  23    (v video)  -  2.0k
 *    channel  24    (vbi)      -  4.0k
 *    channels 25+26 (audio)    -  4.0k
 *    channel  28    (mpeg)     -  4.0k
 *    channel  27    (audio rds)-  3.0k

In order to provide the generic V4L2 API, the driver needs to
take lots of device-specific decisions. 

For example, the DMA from channel 28 is only enabled if the
device has an IP block to do MPEG, and the user wants to
receive mpeg data, instead of YUV.

The DMA from channel 27 has samples taken from the audio IF.
The audio decoding itself happens on DMA channels 25 and 26.

The data from DMA channel 27 is used by software DSP code,
implemented in drivers/media/pci/cx88/cx88-dsp.c, which detects
whether the IF contains AM or FM modulation and what the carriers are,
in order to identify the audio standard. Once the standard is
detected, it sets the hardware audio decoder to the right
standard, and the DMA from channels 25 and 26 will contain
audio PCM samples.

The net result is that a user of any of those V4L2 drivers just sees
the same generic V4L2 API, regardless of all that hardware complexity.

That's way more complex than deciding whether a pipeline would
require an IP block to convert from Bayer format to YUV.

Another case: the cx25821 hardware supports 12 video streams,
consuming almost all available bandwidth of an ePCI bus. Each video
stream connector can be configured as either capture or output at
runtime. The hardware vendor chose to hardcode the driver to provide
8 inputs and 4 outputs. Their decision was based on the fact that
the driver is already very complex and it satisfies their customers'
needs. The cost/effort of making the driver reconfigurable at
runtime was too high for almost no benefit.

> That said, I'm not trying to provide an excuse for not having libraries
> available to help the user to configure and control the device more or
> less automatically even in terms of best effort. It's something that
> does require attention, a lot more of it than it has received in the last
> few years.

The big question, waiting for an answer for the last 8 years, is:
who would do that? Such a person would need to have several different
pieces of hardware from different vendors, in order to ensure that the
solution is generic.

It is way more feasible for the kernel developers who already
have certain hardware on their hands to add support inside the
driver to forward the controls through the pipeline and to set up
a "default" pipeline that covers the common use cases at
driver probe time.
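
For the control-forwarding half, the building block that already exists in
the control framework is v4l2_ctrl_add_handler(), which merges one
handler's controls into another. A very rough kernel-side sketch (the
pipeline walk and error handling are omitted, and the function name is
made up):

  #include <media/v4l2-ctrls.h>
  #include <media/v4l2-dev.h>
  #include <media/v4l2-subdev.h>

  /* Expose a source subdev's controls through the capture video node by
   * merging its control handler into the video device's handler. A real
   * implementation would walk the media graph and do this for every
   * subdev in the pipeline. */
  static int inherit_subdev_controls(struct video_device *vfd,
                                     struct v4l2_subdev *sd)
  {
          /* NULL filter: take over all of the subdev's controls. */
          return v4l2_ctrl_add_handler(vfd->ctrl_handler,
                                       sd->ctrl_handler, NULL);
  }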

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 13:24                                   ` Mauro Carvalho Chehab
@ 2017-03-17 13:51                                     ` Philipp Zabel
  2017-03-17 14:37                                       ` Russell King - ARM Linux
  2017-03-21 11:11                                     ` Pavel Machek
  1 sibling, 1 reply; 228+ messages in thread
From: Philipp Zabel @ 2017-03-17 13:51 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Russell King - ARM Linux, Hans Verkuil,
	Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
[...]
> The big question, waiting for an answer on the last 8 years is
> who would do that? Such person would need to have several different
> hardware from different vendors, in order to ensure that it has
> a generic solution.
> 
> It is a way more feasible that the Kernel developers that already 
> have a certain hardware on their hands to add support inside the
> driver to forward the controls through the pipeline and to setup
> a "default" pipeline that would cover the common use cases at
> driver's probe.

Actually, would setting the pipeline via a libv4l2 plugin and letting drivers
provide a sane, enabled default pipeline configuration be mutually
exclusive? Not sure about the control forwarding, but at least a simple
link setup and format forwarding would also be possible in the kernel
without hindering userspace from doing it itself later.

regards
Philipp

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 13:51                                     ` Philipp Zabel
@ 2017-03-17 14:37                                       ` Russell King - ARM Linux
  2017-03-20 13:10                                         ` Hans Verkuil
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-17 14:37 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Hans Verkuil, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Fri, Mar 17, 2017 at 02:51:10PM +0100, Philipp Zabel wrote:
> On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
> [...]
> > The big question, waiting for an answer on the last 8 years is
> > who would do that? Such person would need to have several different
> > hardware from different vendors, in order to ensure that it has
> > a generic solution.
> > 
> > It is a way more feasible that the Kernel developers that already 
> > have a certain hardware on their hands to add support inside the
> > driver to forward the controls through the pipeline and to setup
> > a "default" pipeline that would cover the common use cases at
> > driver's probe.
> 
> Actually, would setting pipeline via libv4l2 plugin and letting drivers
> provide a sane enabled default pipeline configuration be mutually
> exclusive? Not sure about the control forwarding, but at least a simple
> link setup and format forwarding would also be possible in the kernel
> without hindering userspace from doing it themselves later.

I think this is the exact same problem as controls in ALSA.

When ALSA started off in life, the requirement was that all controls
shall default to minimum, and the user is expected to adjust controls
after the system is running.

After OSS, this gave quite a marked change in system behaviour, and
led to a lot of "why doesn't my sound work anymore" problems, because
people then had to figure out which combination of controls had to be
set to get sound out of their systems.

Now it seems to be much better: install Linux on a platform, and
you have a working sound system (assuming that the drivers are all there,
which is generally the case for x86.)

However, it's still possible to adjust all the controls from userspace.
All that's changed is the defaults.

Why am I mentioning this - because from what I understand Mauro saying,
it's no different from this situation.  Userspace will still have the
power to disable all links and setup its own.  The difference is that
there will be a default configuration that the kernel sets up at boot
time that will be functional, rather than the current default
configuration where the system is completely non-functional until
manually configured.

However, at the end of the day, I don't care _where_ the usability
problems are solved, only that there is some kind of solution.  It's not
the _where_ that's the real issue here, but the _how_, and discussion of
the _how_ is completely missing.

So, let's try kicking off a discussion about _how_ to do things.

_How_ do we setup a media controller system so that we end up with a
usable configuration - let's start with the obvious bit... which links
should be enabled.

I think the first prerequisite is that we stop exposing capture devices
that can never be functional for the hardware that's present on the board,
so that there isn't this plethora of useless /dev/video* nodes and useless
subdevices.

One possible solution to finding a default path may be "find the shortest
path between the capture device and the sensor and enable intervening
links".

Then we need to try configuring that path with format/resolution
information.

However, what if something in the shortest path can't handle the format
that the sensor produces?  I think at that point, we'd need to drop that
subdev out of the path resolution, re-run the "find the shortest path"
algorithm, and try again.

Repeat until success or no path between the capture and sensor exists.
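
(To make that retry loop concrete: a small standalone C sketch -- not kernel
code -- with an invented five-entity graph and a stubbed can_handle_format()
check. It finds the shortest sensor-to-capture path with a BFS and drops the
offending entity and retries when something on the path rejects the format.)

/*
 * Standalone sketch of the heuristic described above: model the media
 * graph as a tiny adjacency matrix, find the shortest sensor -> capture
 * path with a BFS, and when an entity on the path rejects the format,
 * drop it and retry.  Entity names and can_handle_format() are invented.
 */
#include <stdbool.h>
#include <stdio.h>

#define N 5	/* sensor, csi, ic_prp, ic_prpvf, capture */
static const char *name[N] = { "sensor", "csi", "ic_prp", "ic_prpvf", "capture" };
static bool adj[N][N];		/* adj[a][b]: a possible link from a to b */
static bool dropped[N];		/* entities removed from path resolution */

static bool can_handle_format(int e) { (void)e; return true; }	/* stub */

/* BFS from src to dst, skipping dropped entities; returns path length or -1 */
static int shortest_path(int src, int dst, int *path)
{
	int queue[N], prev[N], head = 0, tail = 0, i, len = 0, cur;

	for (i = 0; i < N; i++)
		prev[i] = -1;
	queue[tail++] = src;
	prev[src] = src;
	while (head < tail) {
		cur = queue[head++];
		for (i = 0; i < N; i++)
			if (adj[cur][i] && !dropped[i] && prev[i] < 0) {
				prev[i] = cur;
				queue[tail++] = i;
			}
	}
	if (prev[dst] < 0)
		return -1;
	for (cur = dst; cur != src; cur = prev[cur])	/* walk back dst -> src */
		path[len++] = cur;
	path[len++] = src;
	return len;
}

int main(void)
{
	int path[N], len, i;

	/* sensor->csi->ic_prp->ic_prpvf->capture, plus a direct csi->capture link */
	adj[0][1] = adj[1][2] = adj[2][3] = adj[3][4] = adj[1][4] = true;

	while ((len = shortest_path(0, N - 1, path)) > 0) {
		bool ok = true;

		for (i = 0; i < len; i++)
			if (!can_handle_format(path[i])) {
				dropped[path[i]] = true;	/* drop and retry */
				ok = false;
				break;
			}
		if (!ok)
			continue;
		printf("enable links along:");
		for (i = len - 1; i >= 0; i--)
			printf(" %s", name[path[i]]);
		printf("\n");
		return 0;
	}
	printf("no usable path between sensor and capture\n");
	return 1;
}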

This works fine if you have just one sensor visible from a capture device,
but not if there's more than one (which I suspect is the case with the
Sabrelite board with its two cameras and video receiver.)  That breaks
the "find the shortest path" algorithm.

So, maybe it's a lot better to just let the board people provide via DT
a default setup for the connectivity of the modules somehow - certainly
one big step forward would be to disable in DT parts of the capture
system that can never be used (remembering that boards like the RPi /
Hummingboard may end up using DT overlays to describe this for different
cameras, so the capture setup may change after initial boot.)

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 12:16                                   ` Russell King - ARM Linux
@ 2017-03-17 17:49                                     ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-17 17:49 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Philipp Zabel, Hans Verkuil, Sakari Ailus, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Fri, 17 Mar 2017 12:16:08 +0000,
Russell King - ARM Linux <linux@armlinux.org.uk> wrote:

> On Fri, Mar 17, 2017 at 01:02:07PM +0100, Philipp Zabel wrote:
> > I think most of the simple, fixed pipeline use cases could be handled by
> > libv4l2, by allowing to pass a v4l2 subdevice path to v4l2_open. If that
> > function internally would set up the media links to the
> > nearest /dev/video interface, propagate format, resolution and frame
> > intervals if necessary, and return an fd to the video device, there'd be
> > no additional complexity for the users beyond selecting the v4l2_subdev
> > instead of the video device.  
> 
> ... which would then require gstreamer to be modified too. The gstreamer
> v4l2 plugin looks for /dev/video* or /dev/v4l2/video* devices and monitors
> these for changes, so gstreamer applications know which capture devices
> are available:
> 
>   const gchar *paths[] = { "/dev", "/dev/v4l2", NULL };
>   const gchar *names[] = { "video", NULL };
> 
>   /* Add some depedency, so the dynamic features get updated upon changes in
>    * /dev/video* */
>   gst_plugin_add_dependency (plugin,
>       NULL, paths, names, GST_PLUGIN_DEPENDENCY_FLAG_FILE_NAME_IS_PREFIX);
> 
> I haven't checked yet whether sys/v4l2/gstv4l2deviceprovider.c knows
> anything about the v4l2 subdevs.

It's not only gstreamer that does that, but all simple V4L2 applications,
although on most of them you can either pass a command line argument or set
the path via the GUI.

Btw, I've no idea where gstreamer took /dev/v4l2 from :-)
I've yet to find a distribution using it.

On the other hand, /dev/v4l/by-path and /dev/v4l/by-id are the usual directories
where V4L2 devices can be found, and should provide persistent names. So, IMHO,
gst should prefer those names, when they exist:

$ tree /dev/v4l
/dev/v4l
├── by-id
│   ├── usb-046d_HD_Pro_Webcam_C920_55DA1CCF-video-index0 -> ../../video1
│   └── usb-Sunplus_mMobile_Inc_USB_Web-CAM-video-index0 -> ../../video0
└── by-path
    ├── platform-3f980000.usb-usb-0:1.2:1.0-video-index0 -> ../../video1
    └── platform-3f980000.usb-usb-0:1.5:1.0-video-index0 -> ../../video0
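
(A trivial, purely illustrative sketch of "prefer those names when they
exist" -- this is not gstreamer code, just a standalone C example:)

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	/* Prefer the persistent /dev/v4l/by-id symlinks when the directory
	 * exists; fall back to the classic /dev/video0 otherwise. */
	DIR *dir = opendir("/dev/v4l/by-id");
	struct dirent *de;

	if (!dir) {
		puts("/dev/video0");
		return 0;
	}
	while ((de = readdir(dir)) != NULL)
		if (de->d_name[0] != '.')
			printf("/dev/v4l/by-id/%s\n", de->d_name);
	closedir(dir);
	return 0;
}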




Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 11:42                               ` Russell King - ARM Linux
  2017-03-17 11:55                                 ` Sakari Ailus
  2017-03-17 12:02                                 ` Philipp Zabel
@ 2017-03-19 13:25                                 ` Pavel Machek
  2 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-19 13:25 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Hans Verkuil, Mauro Carvalho Chehab, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 1656 bytes --]

On Fri 2017-03-17 11:42:03, Russell King - ARM Linux wrote:
> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> > We're all very driver-development-driven, and userspace gets very little
> > attention in general. So before just throwing in the towel we should take
> > a good look at the reasons why there has been little or no development: is
> > it because of fundamental design defects, or because nobody paid attention
> > to it?
> > 
> > I strongly suspect it is the latter.
> > 
> > In addition, I suspect end-users of these complex devices don't really care
> > about a plugin: they want full control and won't typically use generic
> > applications. If they would need support for that, we'd have seen much more
> > interest. The main reason for having a plugin is to simplify testing and
> > if this is going to be used on cheap hobbyist devkits.
> 
> I think you're looking at it with a programmers hat on, not a users hat.
> 
> Are you really telling me that requiring users to 'su' to root, and then
> use media-ctl to manually configure the capture device is what most
> users "want" ?

If you want to help users, the right way is to improve userland support.

> Hasn't the way technology has moved towards graphical interfaces,
> particularly smart phones, taught us that the vast majority of users
> want is intuitive, easy to use interfaces, and not the command line
> with reams of documentation?

How is it relevant to _kernel_ interfaces?
									Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 11:55                                 ` Sakari Ailus
  2017-03-17 13:24                                   ` Mauro Carvalho Chehab
@ 2017-03-20 11:16                                   ` Hans Verkuil
  1 sibling, 0 replies; 228+ messages in thread
From: Hans Verkuil @ 2017-03-20 11:16 UTC (permalink / raw)
  To: Sakari Ailus, Russell King - ARM Linux
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Steve Longerbeam, robh+dt,
	mark.rutland, shawnguo, kernel, fabio.estevam, mchehab, nick,
	markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot, geert,
	arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On 03/17/2017 12:55 PM, Sakari Ailus wrote:
> Hi Russell,
> 
> On 03/17/17 13:42, Russell King - ARM Linux wrote:
>> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
>>> We're all very driver-development-driven, and userspace gets very little
>>> attention in general. So before just throwing in the towel we should take
>>> a good look at the reasons why there has been little or no development: is
>>> it because of fundamental design defects, or because nobody paid attention
>>> to it?
>>>
>>> I strongly suspect it is the latter.
>>>
>>> In addition, I suspect end-users of these complex devices don't really care
>>> about a plugin: they want full control and won't typically use generic
>>> applications. If they would need support for that, we'd have seen much more
>>> interest. The main reason for having a plugin is to simplify testing and
>>> if this is going to be used on cheap hobbyist devkits.
>>
>> I think you're looking at it with a programmers hat on, not a users hat.
>>
>> Are you really telling me that requiring users to 'su' to root, and then
>> use media-ctl to manually configure the capture device is what most
>> users "want" ?
> 
> It depends on who the user is. I don't think anyone is suggesting a
> regular end user is the user of all these APIs: it is either an
> application tailored for that given device, a skilled user with his test
> scripts or as suggested previously, a libv4l plugin knowing the device
> or a generic library geared towards providing best effort service. The
> last one of this list does not exist yet and the second last item
> requires help.
> 
> Typically this class of devices is simply not up to provide the level of
> service you're requesting without additional user space control library
> which is responsible for automatic white balance, exposure and focus.
> 
> Making use of the full potential of the hardware requires using a more
> expressive interface. That's what the kernel interface must provide. If
> we decide to limit ourselves to a small sub-set of that potential on the
> level of the kernel interface, we have made a wrong decision. It's as
> simple as that. This is why the functionality (and which requires taking
> a lot of policy decisions) belongs to the user space. We cannot have
> multiple drivers providing multiple kernel interfaces for the same hardware.

Right. With my Cisco hat on I can tell you that Cisco would want full low-level
control. If the driver limited us, we would not be able to use it.

Same with anyone who wants to put Android CameraHAL on top of a V4L2 driver:
they would need full control. Some simplified interface would be unacceptable.

> 
> That said, I'm not trying to provide an excuse for not having libraries
> available to help the user to configure and control the device more or
> less automatically even in terms of best effort. It's something that
> does require attention, a lot more of it than it has received in recent
> few years.

Right.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 14:37                                       ` Russell King - ARM Linux
@ 2017-03-20 13:10                                         ` Hans Verkuil
  2017-03-20 15:06                                           ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-20 13:10 UTC (permalink / raw)
  To: Russell King - ARM Linux, Philipp Zabel
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Sakari Ailus,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On 03/17/2017 03:37 PM, Russell King - ARM Linux wrote:
> On Fri, Mar 17, 2017 at 02:51:10PM +0100, Philipp Zabel wrote:
>> On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
>> [...]
>>> The big question, waiting for an answer on the last 8 years is
>>> who would do that? Such person would need to have several different
>>> hardware from different vendors, in order to ensure that it has
>>> a generic solution.
>>>
>>> It is a way more feasible that the Kernel developers that already 
>>> have a certain hardware on their hands to add support inside the
>>> driver to forward the controls through the pipeline and to setup
>>> a "default" pipeline that would cover the common use cases at
>>> driver's probe.
>>
>> Actually, would setting pipeline via libv4l2 plugin and letting drivers
>> provide a sane enabled default pipeline configuration be mutually
>> exclusive? Not sure about the control forwarding, but at least a simple
>> link setup and format forwarding would also be possible in the kernel
>> without hindering userspace from doing it themselves later.
> 
> I think this is the exact same problem as controls in ALSA.
> 
> When ALSA started off in life, the requirement was that all controls
> shall default to minimum, and the user is expected to adjust controls
> after the system is running.
> 
> After OSS, this gave quite a marked change in system behaviour, and
> led to a lot of "why doesn't my sound work anymore" problems, because
> people then had to figure out which combination of controls had to be
> set to get sound out of their systems.
> 
> Now it seems to be much better, where install Linux on a platform, and
> you have a working sound system (assuming that the drivers are all there
> which is generally the case for x86.)
> 
> However, it's still possible to adjust all the controls from userspace.
> All that's changed is the defaults.
> 
> Why am I mentioning this - because from what I understand Mauro saying,
> it's no different from this situation.  Userspace will still have the
> power to disable all links and setup its own.  The difference is that
> there will be a default configuration that the kernel sets up at boot
> time that will be functional, rather than the current default
> configuration where the system is completely non-functional until
> manually configured.
> 
> However, at the end of the day, I don't care _where_ the usability
> problems are solved, only that there is some kind of solution.  It's not
> the _where_ that's the real issue here, but the _how_, and discussion of
> the _how_ is completely missing.
> 
> So, let's try kicking off a discussion about _how_ to do things.
> 
> _How_ do we setup a media controller system so that we end up with a
> usable configuration - let's start with the obvious bit... which links
> should be enabled.
> 
> I think the first pre-requisit is that we stop exposing capture devices
> that can never be functional for the hardware that's present on the board,
> so that there isn't this plentora of useless /dev/video* nodes and useless
> subdevices.
> 
> One possible solution to finding a default path may be "find the shortest
> path between the capture device and the sensor and enable intervening
> links".
> 
> Then we need to try configuring that path with format/resolution
> information.
> 
> However, what if something in the shortest path can't handle the format
> that the sensor produces?  I think at that point, we'd need to drop that
> subdev out of the path resolution, re-run the "find the shortest path"
> algorithm, and try again.
> 
> Repeat until success or no path between the capture and sensor exists.
> 
> This works fine if you have just one sensor visible from a capture device,
> but not if there's more than one (which I suspect is the case with the
> Sabrelite board with its two cameras and video receiver.)  That breaks
> the "find the shortest path" algorithm.
> 
> So, maybe it's a lot better to just let the board people provide via DT
> a default setup for the connectivity of the modules somehow - certainly
> one big step forward would be to disable in DT parts of the capture
> system that can never be used (remembering that boards like the RPi /
> Hummingboard may end up using DT overlays to describe this for different
> cameras, so the capture setup may change after initial boot.)

The MC was developed before the device tree came along. But now that the DT
is here, I think it could be a sensible idea to let the DT provide an
initial path.

Sakari, Laurent, Mauro: any opinions?

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-14 10:21                               ` Mauro Carvalho Chehab
  2017-03-14 22:32                                 ` media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline) Pavel Machek
@ 2017-03-20 13:24                                 ` Hans Verkuil
  2017-03-20 15:39                                   ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 228+ messages in thread
From: Hans Verkuil @ 2017-03-20 13:24 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On 03/14/2017 11:21 AM, Mauro Carvalho Chehab wrote:
> Em Tue, 14 Mar 2017 08:55:36 +0100
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
>> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
>>> Hi Sakari,
>>>
>>> I started preparing a long argument about it, but gave up in favor of a
>>> simpler one.
>>>
>>> Em Mon, 13 Mar 2017 14:46:22 +0200
>>> Sakari Ailus <sakari.ailus@iki.fi> escreveu:
>>>   
>>>> Drivers are written to support hardware, not particular use case.    
>>>
>>> No, it is just the reverse: drivers and hardware are developed to
>>> support use cases.
>>>
>>> Btw, you should remember that the hardware is the full board, not just the
>>> SoC. In practice, the board do limit the use cases: several provide a
>>> single physical CSI connector, allowing just one sensor.
>>>   
>>>>> This situation is there since 2009. If I remember well, you tried to write
>>>>> such generic plugin in the past, but never finished it, apparently because
>>>>> it is too complex. Others tried too over the years.     
>>>>
>>>> I'd argue I know better what happened with that attempt than you do. I had a
>>>> prototype of a generic pipeline configuration library but due to various
>>>> reasons I haven't been able to continue working on that since around 2012.  
>>>
>>> ...
>>>   
>>>>> The last trial was done by Jacek, trying to cover just the exynos4 driver. 
>>>>> Yet, even such limited scope plugin was not good enough, as it was never
>>>>> merged upstream. Currently, there's no such plugins upstream.
>>>>>
>>>>> If we can't even merge a plugin that solves it for just *one* driver,
>>>>> I have no hope that we'll be able to do it for the generic case.    
>>>>
>>>> I believe Jacek ceased to work on that plugin in his day job; other than
>>>> that, there are some matters left to be addressed in his latest patchset.  
>>>
>>> The two above basically summaries the issue: the task of doing a generic
>>> plugin on userspace, even for a single driver is complex enough to
>>> not cover within a reasonable timeline.
>>>
>>> From 2009 to 2012, you were working on it, but didn't finish it.
>>>
>>> Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
>>> I didn't check when the generic plugin interface was added to libv4l).
>>>
>>> In the case of Jacek's work, the first patch I was able to find was
>>> written in Oct, 2014:
>>> 	https://patchwork.kernel.org/patch/5098111/
>>> 	(not sure what happened with the version 1).
>>>
>>> The last e-mail about this subject was issued in Dec, 2016.
>>>
>>> In summary, you had this on your task for 3 years for an OMAP3
>>> plugin (where you have a good expertise), and Jacek for 2 years, 
>>> for Exynos 4, where he should also have a good knowledge.
>>>
>>> Yet, with all that efforts, no concrete results were achieved, as none
>>> of the plugins got merged.
>>>
>>> Even if they were merged, if we keep the same mean time to develop a
>>> libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
>>> years to be developed.
>>>
>>> There's a clear message on it:
>>> 	- we shouldn't keep pushing for a solution via libv4l.  
>>
>> Or:
>>
>> 	- userspace plugin development had a very a low priority and
>> 	  never got the attention it needed.
> 
> The end result is the same: we can't count on it.
> 
>>
>> I know that's *my* reason. I rarely if ever looked at it. I always assumed
>> Sakari and/or Laurent would look at it. If this reason is also valid for
>> Sakari and Laurent, then it is no wonder nothing has happened in all that
>> time.
>>
>> We're all very driver-development-driven, and userspace gets very little
>> attention in general. So before just throwing in the towel we should take
>> a good look at the reasons why there has been little or no development: is
>> it because of fundamental design defects, or because nobody paid attention
>> to it?
> 
> No. We should look it the other way: basically, there are patches
> for i.MX6 driver that sends control from videonode to subdevs. 
> 
> If we nack apply it, who will write the userspace plugin? When
> such change will be merged upstream?
> 
> If we don't have answers to any of the above questions, we should not
> nack it.
> 
> That's said, that doesn't prevent merging a libv4l plugin if/when
> someone can find time/interest to develop it.

I don't think this control inheritance patch will magically prevent you
from needing a plugin.

> 
>> I strongly suspect it is the latter.
>>
>> In addition, I suspect end-users of these complex devices don't really care
>> about a plugin: they want full control and won't typically use generic
>> applications. If they would need support for that, we'd have seen much more
>> interest. The main reason for having a plugin is to simplify testing and
>> if this is going to be used on cheap hobbyist devkits.
> 
> What are the needs for a cheap hobbyist devkit owner? Do we currently
> satisfy those needs? I'd say that having a functional driver when
> compiled without the subdev API, that implements the ioctl's/controls
> that a generic application like camorama/google talk/skype/zbar...
> would work should be enough to make them happy, even if they need to
> add some udev rule and/or run some "prep" application that would setup 
> the pipelines via MC and eventually rename the device with a working
> pipeline to /dev/video0.

This is literally the first time we have to cater to a cheap devkit. We
were always aware of this issue, but nobody really needed it.

> 
>>
>> An additional complication is simply that it is hard to find fully supported
>> MC hardware. omap3 boards are hard to find these days, renesas boards are not
>> easy to get, freescale isn't the most popular either. Allwinner, mediatek,
>> amlogic, broadcom and qualcomm all have closed source implementations or no
>> implementation at all.
> 
> I'd say that we should not care anymore on providing a solution for
> generic applications to run on boards like OMAP3[1]. For hardware that
> are currently available that have Kernel driver and boards developed
> to be used as "cheap hobbyist devkit", I'd say we should implement
> a Kernel solution that would allow them to be used without subdev
> API, e. g. having all ioctls needed by generic applications to work
> functional, after some external application sets the pipeline.

I liked Russell's idea of having the DT set up an initial video path.
This would (probably) make it much easier to provide a generic plugin since
there is already an initial valid path when the driver is loaded, and it
doesn't require custom code in the driver since this is left to the DT
which really knows about the HW.

> 
> [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> just arrived here last week. It would be nice to have xawtv3 running on it :-)
> So, if I have a lot of spare time (with is very unlikely), I might eventually 
> do something for it to work.
> 
>> I know it took me a very long time before I had a working omap3.
> 
> My first OMAP3 board with working V4L2 source just arrived last week :-)
> 
>> So I am not at all surprised that little progress has been made.
> 
> I'm not surprised, but I'm disappointed, as I tried to push toward a
> solution for this problem since when we had our initial meetings about
> it.

So many things to do, so little time. Sounds corny, but really, that's what
this is all about. There were always (and frankly, still are) more important
things that needed to be done.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-20 13:10                                         ` Hans Verkuil
@ 2017-03-20 15:06                                           ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-20 15:06 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Russell King - ARM Linux, Philipp Zabel, Sakari Ailus,
	Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, pavel, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

On Mon, 20 Mar 2017 14:10:30 +0100,
Hans Verkuil <hverkuil@xs4all.nl> wrote:

> On 03/17/2017 03:37 PM, Russell King - ARM Linux wrote:
> > On Fri, Mar 17, 2017 at 02:51:10PM +0100, Philipp Zabel wrote:  
> >> On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
> >> [...]  
> >>> The big question, waiting for an answer on the last 8 years is
> >>> who would do that? Such person would need to have several different
> >>> hardware from different vendors, in order to ensure that it has
> >>> a generic solution.
> >>>
> >>> It is a way more feasible that the Kernel developers that already 
> >>> have a certain hardware on their hands to add support inside the
> >>> driver to forward the controls through the pipeline and to setup
> >>> a "default" pipeline that would cover the common use cases at
> >>> driver's probe.  
> >>
> >> Actually, would setting pipeline via libv4l2 plugin and letting drivers
> >> provide a sane enabled default pipeline configuration be mutually
> >> exclusive? Not sure about the control forwarding, but at least a simple
> >> link setup and format forwarding would also be possible in the kernel
> >> without hindering userspace from doing it themselves later.  
> > 
> > I think this is the exact same problem as controls in ALSA.
> > 
> > When ALSA started off in life, the requirement was that all controls
> > shall default to minimum, and the user is expected to adjust controls
> > after the system is running.
> > 
> > After OSS, this gave quite a marked change in system behaviour, and
> > led to a lot of "why doesn't my sound work anymore" problems, because
> > people then had to figure out which combination of controls had to be
> > set to get sound out of their systems.
> > 
> > Now it seems to be much better, where install Linux on a platform, and
> > you have a working sound system (assuming that the drivers are all there
> > which is generally the case for x86.)
> > 
> > However, it's still possible to adjust all the controls from userspace.
> > All that's changed is the defaults.
> > 
> > Why am I mentioning this - because from what I understand Mauro saying,
> > it's no different from this situation.  Userspace will still have the
> > power to disable all links and setup its own.  The difference is that
> > there will be a default configuration that the kernel sets up at boot
> > time that will be functional, rather than the current default
> > configuration where the system is completely non-functional until
> > manually configured.
> > 
> > However, at the end of the day, I don't care _where_ the usability
> > problems are solved, only that there is some kind of solution.  It's not
> > the _where_ that's the real issue here, but the _how_, and discussion of
> > the _how_ is completely missing.
> > 
> > So, let's try kicking off a discussion about _how_ to do things.
> > 
> > _How_ do we setup a media controller system so that we end up with a
> > usable configuration - let's start with the obvious bit... which links
> > should be enabled.
> > 
> > I think the first pre-requisit is that we stop exposing capture devices
> > that can never be functional for the hardware that's present on the board,
> > so that there isn't this plentora of useless /dev/video* nodes and useless
> > subdevices.
> > 
> > One possible solution to finding a default path may be "find the shortest
> > path between the capture device and the sensor and enable intervening
> > links".
> > 
> > Then we need to try configuring that path with format/resolution
> > information.
> > 
> > However, what if something in the shortest path can't handle the format
> > that the sensor produces?  I think at that point, we'd need to drop that
> > subdev out of the path resolution, re-run the "find the shortest path"
> > algorithm, and try again.
> > 
> > Repeat until success or no path between the capture and sensor exists.
> > 
> > This works fine if you have just one sensor visible from a capture device,
> > but not if there's more than one (which I suspect is the case with the
> > Sabrelite board with its two cameras and video receiver.)  That breaks
> > the "find the shortest path" algorithm.
> > 
> > So, maybe it's a lot better to just let the board people provide via DT
> > a default setup for the connectivity of the modules somehow - certainly
> > one big step forward would be to disable in DT parts of the capture
> > system that can never be used (remembering that boards like the RPi /
> > Hummingboard may end up using DT overlays to describe this for different
> > cameras, so the capture setup may change after initial boot.)  
> 
> The MC was developed before the device tree came along. But now that the DT
> is here, I think this could be a sensible idea to let the DT provide an
> initial path.
> 
> Sakari, Laurent, Mauro: any opinions?

It makes perfect sense to me.

By setting the pipeline via DT on boards with simple configurations,
e.g. just one physical CSI interface, the driver can create just one
devnode (e.g. /dev/video0) which would fully control the device,
without enabling the subdev API for such hardware, making the hardware
usable with all V4L2 applications.

Regards,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-20 13:24                                 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Hans Verkuil
@ 2017-03-20 15:39                                   ` Mauro Carvalho Chehab
  2017-03-20 16:10                                     ` Russell King - ARM Linux
  0 siblings, 1 reply; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-20 15:39 UTC (permalink / raw)
  To: Hans Verkuil, Nicolas Dufresne
  Cc: Sakari Ailus, Russell King - ARM Linux, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Mon, 20 Mar 2017 14:24:25 +0100,
Hans Verkuil <hverkuil@xs4all.nl> wrote:

> On 03/14/2017 11:21 AM, Mauro Carvalho Chehab wrote:
> > Em Tue, 14 Mar 2017 08:55:36 +0100
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >   
> >> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:  
> >>> Hi Sakari,
> >>>

> >> We're all very driver-development-driven, and userspace gets very little
> >> attention in general. So before just throwing in the towel we should take
> >> a good look at the reasons why there has been little or no development: is
> >> it because of fundamental design defects, or because nobody paid attention
> >> to it?  
> > 
> > No. We should look it the other way: basically, there are patches
> > for i.MX6 driver that sends control from videonode to subdevs. 
> > 
> > If we nack apply it, who will write the userspace plugin? When
> > such change will be merged upstream?
> > 
> > If we don't have answers to any of the above questions, we should not
> > nack it.
> > 
> > That's said, that doesn't prevent merging a libv4l plugin if/when
> > someone can find time/interest to develop it.  
> 
> I don't think this control inheritance patch will magically prevent you
> from needed a plugin.

Yeah, it is not just control inheritance. The driver needs to work
without the subdev API, e.g. mbus settings should also be done via the
video devnode.

Btw, Sakari made a good point on IRC: what happens if some app
tries to change the pipeline or subdev settings while another
application is using the device? The driver should block such
changes, maybe using the V4L2 priority mechanism.

> This is literally the first time we have to cater to a cheap devkit. We
> were always aware of this issue, but nobody really needed it.

That's true. Now that we have a real need for that, we need to
provide a solution.

> > I'd say that we should not care anymore on providing a solution for
> > generic applications to run on boards like OMAP3[1]. For hardware that
> > are currently available that have Kernel driver and boards developed
> > to be used as "cheap hobbyist devkit", I'd say we should implement
> > a Kernel solution that would allow them to be used without subdev
> > API, e. g. having all ioctls needed by generic applications to work
> > functional, after some external application sets the pipeline.  
> 
> I liked Russell's idea of having the DT set up an initial video path.
> This would (probably) make it much easier to provide a generic plugin since
> there is already an initial valid path when the driver is loaded, and it
> doesn't require custom code in the driver since this is left to the DT
> which really knows about the HW.

Setting the device up via DT indeed makes things easier (either for a kernel
or a userspace solution), but things like resolution changes should
be possible without needing to change the DT.

Also, as MC and subdev changes should be blocked while a V4L2 app
is using the device, using a mechanism like calling the VIDIOC_S_PRIORITY
ioctl via the V4L2 interface, the Kernel will require changes anyway.

My suggestion is to start with one driver and make it work with a
generic application. As we currently have efforts and needs for
the i.MX6 to do it, I'd say that the best would be to make it
work on that driver. Once the work is done, we can see if the
approach taken there would work in libv4l or not.

In parallel, someone has to fix libv4l for it to be the default on
applications like gstreamer, e.g. adding support for DMABUF
and fixing other issues that are preventing it from being used by
default.

Nicolas,

Why is libv4l currently disabled in gstreamer's default settings?

> > [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> > just arrived here last week. It would be nice to have xawtv3 running on it :-)
> > So, if I have a lot of spare time (with is very unlikely), I might eventually 
> > do something for it to work.
> >   
> >> I know it took me a very long time before I had a working omap3.  
> > 
> > My first OMAP3 board with working V4L2 source just arrived last week :-)
> >   
> >> So I am not at all surprised that little progress has been made.  
> > 
> > I'm not surprised, but I'm disappointed, as I tried to push toward a
> > solution for this problem since when we had our initial meetings about
> > it.  
> 
> So many things to do, so little time. Sounds corny, but really, that's what
> this is all about. There were always (and frankly, still are) more important
> things that needed to be done.

What's most important for some developer may not be so important for
another developer.

My understanding here is that there are developers wanting/needing
to have standard V4L2 apps support for (some) i.MX6 devices. Those are
the ones that may/will allocate some time for it to happen.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-20 15:39                                   ` Mauro Carvalho Chehab
@ 2017-03-20 16:10                                     ` Russell King - ARM Linux
  2017-03-20 17:37                                       ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 228+ messages in thread
From: Russell King - ARM Linux @ 2017-03-20 16:10 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Nicolas Dufresne, Sakari Ailus, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Mon, Mar 20, 2017 at 12:39:38PM -0300, Mauro Carvalho Chehab wrote:
> Em Mon, 20 Mar 2017 14:24:25 +0100
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > I don't think this control inheritance patch will magically prevent you
> > from needed a plugin.
> 
> Yeah, it is not just control inheritance. The driver needs to work
> without subdev API, e. g. mbus settings should also be done via the
> video devnode.
> 
> Btw, Sakari made a good point on IRC: what happens if some app 
> try to change the pipeline or subdev settings while another
> application is using the device? The driver should block such 
> changes, maybe using the V4L2 priority mechanism.

My understanding is that there are already mechanisms in place to
prevent that, but it's driver dependent - certainly several of the
imx driver subdevs check whether they have an active stream, and
refuse (eg) all set_fmt calls with -EBUSY if that is so.

(That statement raises another question in my mind: if the subdev is
streaming, should it refuse all set_fmt, even for the TRY stuff?)
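
(Purely as an illustration of that distinction -- a hypothetical "foo"
subdev, not one of the imx drivers, written against the v4.10-era pad-op
prototype and assuming CONFIG_VIDEO_V4L2_SUBDEV_API -- it could look
roughly like this:)

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <media/v4l2-subdev.h>

struct foo_priv {
	struct v4l2_subdev sd;
	struct mutex lock;
	bool streaming;
	struct v4l2_mbus_framefmt format;	/* active format */
};

static int foo_set_fmt(struct v4l2_subdev *sd,
		       struct v4l2_subdev_pad_config *cfg,
		       struct v4l2_subdev_format *sdformat)
{
	struct foo_priv *priv = container_of(sd, struct foo_priv, sd);
	int ret = 0;

	mutex_lock(&priv->lock);
	if (sdformat->which == V4L2_SUBDEV_FORMAT_TRY) {
		/* TRY only touches the per-fd pad config, so it could
		 * stay allowed even while streaming */
		*v4l2_subdev_get_try_format(sd, cfg, sdformat->pad) =
			sdformat->format;
	} else if (priv->streaming) {
		ret = -EBUSY;	/* ACTIVE while the pipeline runs: refuse */
	} else {
		priv->format = sdformat->format;
	}
	mutex_unlock(&priv->lock);
	return ret;
}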

> In parallel, someone has to fix libv4l for it to be default on
> applications like gstreamer, e. g. adding support for DMABUF
> and fixing other issues that are preventing it to be used by
> default.

Hmm, not sure what you mean there - I've used dmabuf with gstreamer's
v4l2src linked to libv4l2, importing the buffers into etnaviv using
a custom plugin.  There are distros around (ubuntu) where the v4l2
plugin is built against libv4l2.

> My understanding here is that there are developers wanting/needing
> to have standard V4L2 apps support for (some) i.MX6 devices. Those are
> the ones that may/will allocate some time for it to happen.

Quite - but we need to first know what is acceptable to the v4l2
community before we waste a lot of effort coding something up that
may not be suitable.  Like everyone else, there's only a limited
amount of effort available, so using it wisely is a very good idea.

Up until recently, it seemed that the only solution was to solve the
userspace side of the media controller API via v4l2 plugins and the
like.

Much of my time that I have available to look at the imx6 capture
stuff at the moment is taken up by tripping over UAPI issues with the
current code (such as the ones about CSI scaling, colorimetry, etc)
and trying to get consensus on what the right solution to fix those
issues actually is, and at the moment I don't have spare time to
start addressing any kind of v4l2 plugin for user controls nor any
other solution.

Eg, I spent much of this last weekend sorting out my IMX219 camera
sensor driver for my new understanding about how scaling is supposed
to work, the frame enumeration issue (which I've posted patches for)
and the CSI scaling issue (which I've some half-baked patch for at the
moment, but probably by the time I've finished sorting that, Philipp
or Steve will already have a solution.)

That said, my new understanding of the scaling impacts the four patches
I posted, and probably makes the frame size enumeration in CSI (due to
its scaling) rather obsolete.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-20 16:10                                     ` Russell King - ARM Linux
@ 2017-03-20 17:37                                       ` Mauro Carvalho Chehab
  0 siblings, 0 replies; 228+ messages in thread
From: Mauro Carvalho Chehab @ 2017-03-20 17:37 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Hans Verkuil, Nicolas Dufresne, Sakari Ailus, Steve Longerbeam,
	robh+dt, mark.rutland, shawnguo, kernel, fabio.estevam, mchehab,
	nick, markus.heiser, p.zabel, laurent.pinchart+renesas, bparrot,
	geert, arnd, sudipm.mukherjee, minghsiu.tsai, tiffany.lin,
	jean-christophe.trotin, horms+renesas, niklas.soderlund+renesas,
	robert.jarzmik, songjun.wu, andrew-ct.chen, gregkh, shuah,
	sakari.ailus, pavel, devicetree, linux-kernel, linux-arm-kernel,
	linux-media, devel, Steve Longerbeam, Jacek Anaszewski

On Mon, 20 Mar 2017 16:10:03 +0000,
Russell King - ARM Linux <linux@armlinux.org.uk> wrote:

> On Mon, Mar 20, 2017 at 12:39:38PM -0300, Mauro Carvalho Chehab wrote:
> > Em Mon, 20 Mar 2017 14:24:25 +0100
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:  
> > > I don't think this control inheritance patch will magically prevent you
> > > from needed a plugin.  
> > 
> > Yeah, it is not just control inheritance. The driver needs to work
> > without subdev API, e. g. mbus settings should also be done via the
> > video devnode.
> > 
> > Btw, Sakari made a good point on IRC: what happens if some app 
> > try to change the pipeline or subdev settings while another
> > application is using the device? The driver should block such 
> > changes, maybe using the V4L2 priority mechanism.  
> 
> My understanding is that there are already mechanisms in place to
> prevent that, but it's driver dependent - certainly several of the
> imx driver subdevs check whether they have an active stream, and
> refuse (eg) all set_fmt calls with -EBUSY if that is so.
> 
> (That statement raises another question in my mind: if the subdev is
> streaming, should it refuse all set_fmt, even for the TRY stuff?)

Returning -EBUSY only when streaming is too late, as ioctls
may be changing the pipeline configuration and/or buffer allocation
while the application is sending other ioctls in order to prepare
for streaming.

V4L2 has a mechanism for blocking other apps from changing such
parameters via VIDIOC_S_PRIORITY[1]. If an application sets the
priority to V4L2_PRIORITY_RECORD, any other application attempting
to change the device via some other file descriptor will fail.
So, it is a sort of "exclusive write access".

On a quick look at the V4L2 core, currently sending a
VIDIOC_S_PRIORITY ioctl to a /dev/video device doesn't seem to have
any effect on either the MC or the V4L2 subdev API for the subdevs connected
to it. We'll likely need to add some code at v4l2_prio_change()
to notify the subdevs, so that they return -EBUSY if one
tries to change something there while the device is prioritized.

[1] https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/vidioc-g-priority.html
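
(A minimal userspace sketch of that mechanism, with /dev/video0 as an
example device node:)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);
	enum v4l2_priority prio = V4L2_PRIORITY_RECORD;

	if (fd < 0 || ioctl(fd, VIDIOC_S_PRIORITY, &prio) < 0) {
		perror("VIDIOC_S_PRIORITY");
		return 1;
	}
	/*
	 * While this fd stays open, configuration ioctls issued through
	 * other file descriptors on the same video node get -EBUSY
	 * (but, as noted above, this is not propagated to the MC or to
	 * the subdevs today).
	 */
	return 0;
}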

> > In parallel, someone has to fix libv4l for it to be default on
> > applications like gstreamer, e. g. adding support for DMABUF
> > and fixing other issues that are preventing it to be used by
> > default.  
> 
> Hmm, not sure what you mean there - I've used dmabuf with gstreamer's
> v4l2src linked to libv4l2, importing the buffers into etnaviv using
> a custom plugin.  There are distros around (ubuntu) where the v4l2
> plugin is built against libv4l2.

Hmm... I guess some gst developer mentioned that there are/were
some restrictions in libv4l2 with regard to DMABUF. I may be
wrong.

> 
> > My understanding here is that there are developers wanting/needing
> > to have standard V4L2 apps support for (some) i.MX6 devices. Those are
> > the ones that may/will allocate some time for it to happen.  
> 
> Quite - but we need to first know what is acceptable to the v4l2
> community before we waste a lot of effort coding something up that
> may not be suitable.  Like everyone else, there's only a limited
> amount of effort available, so using it wisely is a very good idea.

Sure. That's why we're discussing here :-)

> Up until recently, it seemed that the only solution was to solve the
> userspace side of the media controller API via v4l2 plugins and the
> like.
> 
> Much of my time that I have available to look at the imx6 capture
> stuff at the moment is taken up by triping over UAPI issues with the
> current code (such as the ones about CSI scaling, colorimetry, etc)
> and trying to get concensus on what the right solution to fix those
> issues actually is, and at the moment I don't have spare time to
> start addressing any kind of v4l2 plugin for user controls nor any
> other solution.

I hear you. A solution via libv4l could be more elegant, but it
doesn't seem simple, as nobody has done it before, and it depends on the
libv4l plugin mechanism, which is currently unused.

Also, I think it is easier to provide a solution using DT and some
driver and/or core support for it, especially since, AFAICT,
there's currently no way to request exclusive access to the MC and subdevs.

It is probably not possible to do that exclusively in userspace.

> Eg, I spent much of this last weekend sorting out my IMX219 camera
> sensor driver for my new understanding about how scaling is supposed
> to work, the frame enumeration issue (which I've posted patches for)
> and the CSI scaling issue (which I've some half-baked patch for at the
> moment, but probably by the time I've finished sorting that, Philipp
> or Steve will already have a solution.)
> 
> That said, my new understanding of the scaling impacts the four patches
> I posted, and probably makes the frame size enumeration in CSI (due to
> its scaling) rather obsolete.

Yeah, when there's no scaler, it should report just the resolution(s)
supported by the sensor (actually, at the CSI) via
V4L2_FRMSIZE_TYPE_DISCRETE.

However, when there's a scaler in the pipeline, it should report the
range supported by the scaler, e.g.:

- V4L2_FRMSIZE_TYPE_CONTINUOUS - when an entire range of resolutions
  is supported with step = 1 for both H and V.

- V4L2_FRMSIZE_TYPE_STEPWISE - when there's either an H or V step in
  the possible resolution values. This is actually more common
  in practice, as several encodings take a 2x2 pixel block, so the
  step should be at least 2. (A short sketch of how an application
  sees these types follows right after this list.)
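
(Roughly how an application sees the three cases above; the pixel format
and device node are just example values:)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);
	struct v4l2_frmsizeenum fsz = {
		.index = 0,
		.pixel_format = V4L2_PIX_FMT_YUYV,
	};

	while (fd >= 0 && ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsz) == 0) {
		switch (fsz.type) {
		case V4L2_FRMSIZE_TYPE_DISCRETE:	/* no scaler: fixed sizes */
			printf("%ux%u\n", fsz.discrete.width, fsz.discrete.height);
			break;
		case V4L2_FRMSIZE_TYPE_STEPWISE:	/* scaler with a step, e.g. 2 */
		case V4L2_FRMSIZE_TYPE_CONTINUOUS:	/* stepwise with step 1 */
			printf("%ux%u .. %ux%u step %ux%u\n",
			       fsz.stepwise.min_width, fsz.stepwise.min_height,
			       fsz.stepwise.max_width, fsz.stepwise.max_height,
			       fsz.stepwise.step_width, fsz.stepwise.step_height);
			break;
		}
		fsz.index++;
	}
	return 0;
}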

There is something to be considered by the logic that forwards the
resolution to the CSI: a lower resolution there means a higher number of
frames per second.

So, if the driver is setting the resolution via a V4L2 device, it
will provide a higher number of fps if it selects the lowest
resolution at the CSI that is greater than or equal to the resolution
set at the scaler. On the other hand, the image quality could be
better if it doesn't scale at the CSI.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-17 13:24                                   ` Mauro Carvalho Chehab
  2017-03-17 13:51                                     ` Philipp Zabel
@ 2017-03-21 11:11                                     ` Pavel Machek
  1 sibling, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-21 11:11 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sakari Ailus, Russell King - ARM Linux, Hans Verkuil,
	Sakari Ailus, Steve Longerbeam, robh+dt, mark.rutland, shawnguo,
	kernel, fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, devicetree,
	linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

[-- Attachment #1: Type: text/plain, Size: 2127 bytes --]

Hi!

> > Making use of the full potential of the hardware requires using a more
> > expressive interface. 
> 
> That's the core of the problem: most users don't need "full potential
> of the hardware". It is actually worse than that: several boards
> don't allow "full potential" of the SoC capabilities.

Well, in the kernel we usually try to support the "full potential" of the hardware.

And we are pretty sure users would like to take still photos, even if
common v4l2 applications cannot do it.

> > That's what the kernel interface must provide. If
> > we decide to limit ourselves to a small sub-set of that potential on the
> > level of the kernel interface, we have made a wrong decision. It's as
> > simple as that. This is why the functionality (and which requires taking
> > a lot of policy decisions) belongs to the user space. We cannot have
> > multiple drivers providing multiple kernel interfaces for the same hardware.
> 
> I strongly disagree. Looking only at the hardware capabilities without
> having a solution to provide what the user wants is *wrong*.

Hardware manufacturers already did this kind of research for us. They
don't usually include features no one wants...

> Another case: the cx25821 hardware supports 12 video streams,
> consuming almost all available bandwidth of an ePCI bus. Each video
> stream connector can be configured as either capture or output at
> runtime. The hardware vendor chose to hardcode the driver to provide
> 8 inputs and 4 outputs. Their decision was based on the fact that
> the driver is already very complex, and it satisfies their customers'
> needs. The cost/effort of making the driver reconfigurable at runtime
> was too high for almost no benefit.

Well, it is okay to provide a 'limited' driver -- there's the
possibility of fixing the driver later. But IMO it is not okay to
provide a 'limited' kernel interface -- because if you try to fix it,
you'll suddenly have regressions.

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* script to setup pipeline was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-12 21:58                                 ` Mauro Carvalho Chehab
@ 2017-03-26  9:12                                   ` Pavel Machek
  0 siblings, 0 replies; 228+ messages in thread
From: Pavel Machek @ 2017-03-26  9:12 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Steve Longerbeam, Russell King - ARM Linux, mark.rutland,
	andrew-ct.chen, minghsiu.tsai, nick, songjun.wu, Hans Verkuil,
	shuah, devel, markus.heiser, laurent.pinchart+renesas,
	robert.jarzmik, geert, p.zabel, linux-media, devicetree, kernel,
	arnd, tiffany.lin, bparrot, robh+dt, horms+renesas, mchehab,
	linux-arm-kernel, niklas.soderlund+renesas, gregkh, linux-kernel,
	Sakari Ailus, jean-christophe.trotin, sakari.ailus,
	fabio.estevam, shawnguo, sudipm.mukherjee

[-- Attachment #1: Type: text/plain, Size: 1993 bytes --]

Hi!

> > I do agree with you that MC places a lot of burden on the user to
> > attain a lot of knowledge of the system's architecture.
> 
> Setting up the pipeline is not the hard part. One could write a
> script to do that. 

Can you try to write that script? I believe it would solve a big part
of the problem.
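
Not an attempt at that script, but for reference, everything such a
script does ends up as a handful of ioctls on /dev/mediaN and the
subdev nodes. A minimal userspace sketch that enables a single link;
the entity and pad numbers are placeholders that media-ctl normally
resolves by name:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/media.h>

	/* Enable the link src_entity:src_pad -> sink_entity:sink_pad. */
	static int enable_link(int media_fd, __u32 src_entity, __u16 src_pad,
			       __u32 sink_entity, __u16 sink_pad)
	{
		struct media_link_desc link;

		memset(&link, 0, sizeof(link));
		link.source.entity = src_entity;
		link.source.index  = src_pad;
		link.source.flags  = MEDIA_PAD_FL_SOURCE;
		link.sink.entity   = sink_entity;
		link.sink.index    = sink_pad;
		link.sink.flags    = MEDIA_PAD_FL_SINK;
		link.flags         = MEDIA_LNK_FL_ENABLED;

		return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
	}

A complete script would repeat this for every link in the graph and
then set the formats on the subdev pads.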

> > And my other point is, I think most people who have a need to work with
> > the media framework on a particular platform will likely already be
> > quite familiar with that platform.
> 
> I disagree. The most popular platform device currently is Raspberry PI.
> 
> I doubt that almost all owners of RPi + camera module know anything
> about MC. They just use Raspberry's official driver, which just provides
> the V4L2 interface.
> 
> I have a strong opinion that, for hardware like RPi, just the V4L2
> API is enough for more than 90% of the cases.

Maybe the V4L2 API is enough for 90% of the users. But I don't believe
that means we should provide compatibility. The V4L2 API is not good
enough for complex devices, and if we can make the RPi people fix
userspace... that's a good thing.

> > The media graph for imx6 is fairly self-explanatory in my opinion.
> > Yes that graph has to be generated, but just with a simple 'media-ctl
> > --print-dot', I don't see how that is difficult for the user.
> 
> Again, IMHO, the problem is not how to set up the pipeline, but, instead,
> the need to forward controls to the subdevices.
> 
> To use a camera, the user needs to set up a set of controls for the
> image to make sense (brightness, contrast, focus, etc.). If the driver
> doesn't forward those controls to the subdevs, an application like
> "camorama" won't actually work for real, as the user won't be able
> to adjust those parameters via the GUI.
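
Whichever layer ends up owning this, the kernel-side mechanism being
discussed boils down to merging each subdev's control handler into the
capture node's handler. A minimal sketch of that idea only, not the
actual v4l2_pipeline_inherit_controls() implementation from this
series (and note the v4l2_ctrl_add_handler() signature has varied
across kernel versions):

	#include <media/v4l2-ctrls.h>
	#include <media/v4l2-subdev.h>

	/*
	 * Make the controls of one pipeline subdev (sensor, CSI, ...) visible
	 * on the capture video node, so a plain V4L2 application can adjust
	 * brightness, contrast, etc. without knowing about the media graph.
	 * 'vdev_hdl' is assumed to be the video device's control handler.
	 */
	static int inherit_subdev_controls(struct v4l2_ctrl_handler *vdev_hdl,
					   struct v4l2_subdev *sd)
	{
		if (!sd->ctrl_handler)
			return 0;	/* this subdev exposes no controls */

		/* NULL filter: take every control the subdev offers */
		return v4l2_ctrl_add_handler(vdev_hdl, sd->ctrl_handler, NULL);
	}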

I believe this can be fixed in libv4l2.
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 228+ messages in thread

* Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline
  2017-03-14  7:55                             ` Hans Verkuil
  2017-03-14 10:21                               ` Mauro Carvalho Chehab
  2017-03-17 11:42                               ` Russell King - ARM Linux
@ 2017-03-26 16:44                               ` Laurent Pinchart
  2 siblings, 0 replies; 228+ messages in thread
From: Laurent Pinchart @ 2017-03-26 16:44 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Mauro Carvalho Chehab, Sakari Ailus, Russell King - ARM Linux,
	Steve Longerbeam, robh+dt, mark.rutland, shawnguo, kernel,
	fabio.estevam, mchehab, nick, markus.heiser, p.zabel,
	laurent.pinchart+renesas, bparrot, geert, arnd, sudipm.mukherjee,
	minghsiu.tsai, tiffany.lin, jean-christophe.trotin,
	horms+renesas, niklas.soderlund+renesas, robert.jarzmik,
	songjun.wu, andrew-ct.chen, gregkh, shuah, sakari.ailus, pavel,
	devicetree, linux-kernel, linux-arm-kernel, linux-media, devel,
	Steve Longerbeam, Jacek Anaszewski

Hi Hans,

On Tuesday 14 Mar 2017 08:55:36 Hans Verkuil wrote:
> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> > Hi Sakari,
> > 
> > I started preparing a long argument about it, but gave up in favor of a
> > simpler one.
> > 
> > On Mon, 13 Mar 2017 14:46:22 +0200 Sakari Ailus wrote:
> >> Drivers are written to support hardware, not a particular use case.
> > 
> > No, it is just the reverse: drivers and hardware are developed to
> > support use cases.
> > 
> > Btw, you should remember that the hardware is the full board, not just the
> > SoC. In practice, boards do limit the use cases: several provide a
> > single physical CSI connector, allowing just one sensor.
> > 
> >>> This situation is there since 2009. If I remember well, you tried to
> >>> write such generic plugin in the past, but never finished it, apparently
> >>> because it is too complex. Others tried too over the years.
> >> 
> >> I'd argue I know better what happened with that attempt than you do. I
> >> had a prototype of a generic pipeline configuration library but due to
> >> various reasons I haven't been able to continue working on that since
> >> around 2012.
> > ...
> > 
> >>> The last trial was done by Jacek, trying to cover just the exynos4
> >>> driver. Yet, even such limited scope plugin was not good enough, as it
> >>> was never merged upstream. Currently, there's no such plugins upstream.
> >>> 
> >>> If we can't even merge a plugin that solves it for just *one* driver,
> >>> I have no hope that we'll be able to do it for the generic case.
> >> 
> >> I believe Jacek ceased to work on that plugin in his day job; other than
> >> that, there are some matters left to be addressed in his latest patchset.
> > 
> > The two quotes above basically summarize the issue: the task of writing a
> > generic plugin in userspace, even for a single driver, is too complex to
> > complete within a reasonable timeline.
> > 
> > From 2009 to 2012, you were working on it, but didn't finish it.
> > 
> > Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
> > I didn't check when the generic plugin interface was added to libv4l).
> > 
> > In the case of Jacek's work, the first patch I was able to find was
> > 
> > written in Oct, 2014:
> > 	https://patchwork.kernel.org/patch/5098111/
> > 	(not sure what happened with the version 1).
> > 
> > The last e-mail about this subject was issued in Dec, 2016.
> > 
> > In summary, you had this on your task list for 3 years for an OMAP3
> > plugin (where you have good expertise), and Jacek had it for 2 years,
> > for Exynos 4, where he should also have good knowledge.
> > 
> > Yet, with all those efforts, no concrete results were achieved, as none
> > of the plugins got merged.
> > 
> > Even if they were merged, if we keep the same mean time to develop a
> > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > years to be developed.
> > 
> > There's a clear message on it:
> > 	- we shouldn't keep pushing for a solution via libv4l.
> 
> Or:
> 	- userspace plugin development had a very low priority and
> 	  never got the attention it needed.
>
> I know that's *my* reason. I rarely if ever looked at it. I always assumed
> Sakari and/or Laurent would look at it. If this reason is also valid for
> Sakari and Laurent, then it is no wonder nothing has happened in all that
> time.

The reason is also valid for me. I'd really love to work on the userspace 
side, but I just can't find time at the moment.

> We're all very driver-development-driven, and userspace gets very little
> attention in general. So before just throwing in the towel we should take
> a good look at the reasons why there has been little or no development: is
> it because of fundamental design defects, or because nobody paid attention
> to it?
> 
> I strongly suspect it is the latter.
> 
> In addition, I suspect end-users of these complex devices don't really care
> about a plugin: they want full control and won't typically use generic
> applications. If they needed support for that, we'd have seen much more
> interest. The main reason for having a plugin is to simplify testing, and
> for cases where this is going to be used on cheap hobbyist devkits.
> 
> An additional complication is simply that it is hard to find fully supported
> MC hardware. omap3 boards are hard to find these days, renesas boards are
> not easy to get, freescale isn't the most popular either. Allwinner,
> mediatek, amlogic, broadcom and qualcomm all have closed source
> implementations or no implementation at all.
> 
> I know it took me a very long time before I had a working omap3.
> 
> So I am not at all surprised that little progress has been made.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 228+ messages in thread

end of thread, other threads:[~2017-03-26 16:43 UTC | newest]

Thread overview: 228+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-16  2:19 [PATCH v4 00/36] i.MX Media Driver Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 01/36] [media] dt-bindings: Add bindings for i.MX media driver Steve Longerbeam
2017-02-16 11:54   ` Philipp Zabel
2017-02-16 19:20     ` Steve Longerbeam
2017-02-27 14:38   ` Rob Herring
2017-03-01  0:00     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 02/36] ARM: dts: imx6qdl: Add compatible, clocks, irqs to MIPI CSI-2 node Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 03/36] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 04/36] ARM: dts: imx6qdl: add capture-subsystem device Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 05/36] ARM: dts: imx6qdl-sabrelite: remove erratum ERR006687 workaround Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 06/36] ARM: dts: imx6-sabrelite: add OV5642 and OV5640 camera sensors Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 07/36] ARM: dts: imx6-sabresd: " Steve Longerbeam
2017-02-17  0:51   ` Fabio Estevam
2017-02-17  0:56     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 08/36] ARM: dts: imx6-sabreauto: create i2cmux for i2c3 Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 09/36] ARM: dts: imx6-sabreauto: add reset-gpios property for max7310_b Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 10/36] ARM: dts: imx6-sabreauto: add pinctrl for gpt input capture Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 11/36] ARM: dts: imx6-sabreauto: add the ADV7180 video decoder Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 12/36] add mux and video interface bridge entity functions Steve Longerbeam
2017-02-19 21:28   ` Pavel Machek
2017-02-22 17:19     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 13/36] [media] v4l2: add a frame timeout event Steve Longerbeam
2017-03-02 15:53   ` Sakari Ailus
2017-03-02 23:07     ` Steve Longerbeam
2017-03-03 11:45       ` Sakari Ailus
2017-03-03 22:43         ` Steve Longerbeam
2017-03-04 10:56           ` Sakari Ailus
2017-03-05  0:37             ` Steve Longerbeam
2017-03-05 21:31               ` Sakari Ailus
2017-03-05 22:41               ` Russell King - ARM Linux
2017-03-10  2:38                 ` Steve Longerbeam
2017-03-10  9:33                   ` Russell King - ARM Linux
2017-02-16  2:19 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Steve Longerbeam
2017-02-19 21:44   ` Pavel Machek
2017-03-02 16:02   ` Sakari Ailus
2017-03-02 23:48     ` Steve Longerbeam
2017-03-03  0:46       ` Steve Longerbeam
2017-03-03  2:12       ` Steve Longerbeam
2017-03-03 19:17         ` Sakari Ailus
2017-03-03 22:47           ` Steve Longerbeam
2017-03-03 23:06     ` Russell King - ARM Linux
2017-03-04  0:36       ` Steve Longerbeam
2017-03-04 13:13       ` Sakari Ailus
2017-03-10 12:54         ` Hans Verkuil
2017-03-10 13:07           ` Russell King - ARM Linux
2017-03-10 13:22             ` Hans Verkuil
2017-03-10 14:01               ` Russell King - ARM Linux
2017-03-10 14:20                 ` Hans Verkuil
2017-03-10 15:53                   ` Mauro Carvalho Chehab
2017-03-10 22:37                     ` Sakari Ailus
2017-03-11 11:25                       ` Mauro Carvalho Chehab
2017-03-11 21:52                         ` Pavel Machek
2017-03-11 23:14                         ` Russell King - ARM Linux
2017-03-12  0:19                           ` Steve Longerbeam
2017-03-12 21:29                           ` Pavel Machek
2017-03-12 22:37                             ` Mauro Carvalho Chehab
2017-03-14 18:26                               ` Pavel Machek
2017-03-13 12:46                         ` Sakari Ailus
2017-03-14  3:45                           ` Mauro Carvalho Chehab
2017-03-14  7:55                             ` Hans Verkuil
2017-03-14 10:21                               ` Mauro Carvalho Chehab
2017-03-14 22:32                                 ` media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline) Pavel Machek
2017-03-15  0:54                                   ` Mauro Carvalho Chehab
2017-03-15 10:50                                     ` Philippe De Muyter
2017-03-15 18:55                                       ` Nicolas Dufresne
2017-03-16  9:26                                         ` Philipp Zabel
2017-03-16  9:47                                           ` Philippe De Muyter
2017-03-16 10:01                                             ` Philipp Zabel
2017-03-16 10:19                                               ` Philippe De Muyter
2017-03-15 18:04                                     ` Pavel Machek
2017-03-15 20:26                                       ` Mauro Carvalho Chehab
2017-03-16 22:11                                         ` Pavel Machek
2017-03-20 13:24                                 ` [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline Hans Verkuil
2017-03-20 15:39                                   ` Mauro Carvalho Chehab
2017-03-20 16:10                                     ` Russell King - ARM Linux
2017-03-20 17:37                                       ` Mauro Carvalho Chehab
2017-03-17 11:42                               ` Russell King - ARM Linux
2017-03-17 11:55                                 ` Sakari Ailus
2017-03-17 13:24                                   ` Mauro Carvalho Chehab
2017-03-17 13:51                                     ` Philipp Zabel
2017-03-17 14:37                                       ` Russell King - ARM Linux
2017-03-20 13:10                                         ` Hans Verkuil
2017-03-20 15:06                                           ` Mauro Carvalho Chehab
2017-03-21 11:11                                     ` Pavel Machek
2017-03-20 11:16                                   ` Hans Verkuil
2017-03-17 12:02                                 ` Philipp Zabel
2017-03-17 12:16                                   ` Russell King - ARM Linux
2017-03-17 17:49                                     ` Mauro Carvalho Chehab
2017-03-19 13:25                                 ` Pavel Machek
2017-03-26 16:44                               ` Laurent Pinchart
2017-03-10 15:26             ` Mauro Carvalho Chehab
2017-03-10 15:57               ` Russell King - ARM Linux
2017-03-10 17:06                 ` Russell King - ARM Linux
2017-03-10 20:42                 ` Mauro Carvalho Chehab
2017-03-10 21:55                   ` Pavel Machek
2017-03-10 15:09           ` Mauro Carvalho Chehab
2017-03-11 11:32             ` Hans Verkuil
2017-03-11 13:14               ` Mauro Carvalho Chehab
2017-03-11 15:32                 ` Sakari Ailus
2017-03-11 17:32                   ` Russell King - ARM Linux
2017-03-11 18:08                   ` Steve Longerbeam
2017-03-11 18:45                     ` Russell King - ARM Linux
2017-03-11 18:54                       ` Steve Longerbeam
2017-03-11 18:59                         ` Russell King - ARM Linux
2017-03-11 19:06                           ` Steve Longerbeam
2017-03-11 20:41                             ` Russell King - ARM Linux
2017-03-12  3:31                           ` Steve Longerbeam
2017-03-12  7:37                             ` Russell King - ARM Linux
2017-03-12 17:56                               ` Steve Longerbeam
2017-03-12 21:58                                 ` Mauro Carvalho Chehab
2017-03-26  9:12                                   ` script to setup pipeline was " Pavel Machek
2017-03-13 10:44                                 ` Hans Verkuil
2017-03-13 10:58                                   ` Russell King - ARM Linux
2017-03-13 11:08                                     ` Hans Verkuil
2017-03-13 11:42                                     ` Mauro Carvalho Chehab
2017-03-13 12:35                                       ` Russell King - ARM Linux
2017-03-12 18:14                               ` Pavel Machek
2017-03-11 20:26                     ` Pavel Machek
2017-03-11 20:33                       ` Steve Longerbeam
2017-03-11 21:30                 ` Pavel Machek
2017-02-16  2:19 ` [PATCH v4 15/36] platform: add video-multiplexer subdevice driver Steve Longerbeam
2017-02-19 22:02   ` Pavel Machek
2017-02-21  9:11     ` Philipp Zabel
2017-02-24 20:09       ` Pavel Machek
2017-02-27 14:41   ` Rob Herring
2017-03-01  0:20     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 16/36] UAPI: Add media UAPI Kbuild file Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 17/36] media: Add userspace header file for i.MX Steve Longerbeam
2017-02-16 11:33   ` Philipp Zabel
2017-02-22 23:54     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 18/36] media: Add i.MX media core driver Steve Longerbeam
2017-02-16 10:27   ` Russell King - ARM Linux
2017-02-16 17:53     ` Steve Longerbeam
2017-02-16 13:02   ` Philipp Zabel
2017-02-16 13:44     ` Russell King - ARM Linux
2017-02-17  1:33     ` Steve Longerbeam
2017-02-17  8:34       ` Philipp Zabel
2017-02-16  2:19 ` [PATCH v4 19/36] media: imx: Add Capture Device Interface Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 20/36] media: imx: Add CSI subdev driver Steve Longerbeam
2017-02-16 11:52   ` Russell King - ARM Linux
2017-02-16 12:40     ` Russell King - ARM Linux
2017-02-16 13:09       ` Russell King - ARM Linux
2017-02-16 14:20         ` Russell King - ARM Linux
2017-02-16 19:07           ` Steve Longerbeam
2017-02-16 18:44       ` Steve Longerbeam
2017-02-16 19:09         ` Russell King - ARM Linux
2017-02-16  2:19 ` [PATCH v4 21/36] media: imx: Add VDIC " Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 22/36] media: imx: Add IC subdev drivers Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 23/36] media: imx: Add MIPI CSI-2 Receiver subdev driver Steve Longerbeam
2017-02-16 10:28   ` Russell King - ARM Linux
2017-02-16 17:54     ` Steve Longerbeam
2017-02-17 10:47   ` Philipp Zabel
2017-02-17 11:06     ` Russell King - ARM Linux
2017-02-17 11:38       ` Philipp Zabel
2017-02-22 23:38         ` Steve Longerbeam
2017-02-22 23:41           ` Steve Longerbeam
2017-02-23  0:06       ` Steve Longerbeam
2017-02-23  0:09         ` Steve Longerbeam
2017-02-17 14:16     ` Philipp Zabel
2017-02-17 18:27       ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 24/36] [media] add Omnivision OV5640 sensor driver Steve Longerbeam
2017-02-27 14:45   ` Rob Herring
2017-03-01  0:43     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 25/36] ARM: imx_v6_v7_defconfig: Enable staging video4linux drivers Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 26/36] media: imx: add support for bayer formats Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 27/36] media: imx: csi: " Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 28/36] media: imx: csi: fix crop rectangle changes in set_fmt Steve Longerbeam
2017-02-16 11:05   ` Russell King - ARM Linux
2017-02-16 18:16     ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 29/36] media: imx: mipi-csi2: enable setting and getting of frame rates Steve Longerbeam
2017-02-18  1:11   ` Steve Longerbeam
2017-02-18  1:12   ` Steve Longerbeam
2017-02-18  9:23     ` Russell King - ARM Linux
2017-02-18 17:29       ` Steve Longerbeam
2017-02-18 18:08         ` Russell King - ARM Linux
2017-02-20 22:04   ` Sakari Ailus
2017-02-20 22:56     ` Steve Longerbeam
2017-02-20 23:47       ` Steve Longerbeam
2017-02-21 12:15       ` Sakari Ailus
2017-02-21 22:21         ` Steve Longerbeam
2017-02-21 23:34           ` Steve Longerbeam
2017-02-21  0:13     ` Russell King - ARM Linux
2017-02-21  0:18       ` Steve Longerbeam
2017-02-21  8:50         ` Philipp Zabel
2017-03-13 13:16           ` Sakari Ailus
2017-03-13 13:27             ` Russell King - ARM Linux
2017-03-13 13:55               ` Philipp Zabel
2017-03-13 18:06                 ` Steve Longerbeam
2017-03-13 21:03                   ` Sakari Ailus
2017-03-13 21:29                     ` Russell King - ARM Linux
2017-03-14  7:34                     ` Hans Verkuil
2017-03-14 10:43                       ` Philipp Zabel
2017-03-13 20:56               ` Sakari Ailus
2017-03-13 21:07                 ` Russell King - ARM Linux
2017-02-21 12:37       ` Sakari Ailus
2017-02-21 13:21         ` Russell King - ARM Linux
2017-02-21 15:38           ` Sakari Ailus
2017-02-21 16:03             ` Russell King - ARM Linux
2017-02-21 16:15               ` Sakari Ailus
2017-02-16  2:19 ` [PATCH v4 30/36] media: imx: update capture dev format on IDMAC output pad set_fmt Steve Longerbeam
2017-02-16 11:29   ` Philipp Zabel
2017-02-16  2:19 ` [PATCH v4 31/36] media: imx: csi: add __csi_get_fmt Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 32/36] media: imx: csi/fim: add support for frame intervals Steve Longerbeam
2017-02-16  2:38   ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 33/36] media: imx: redo pixel format enumeration and negotiation Steve Longerbeam
2017-02-16 11:32   ` Philipp Zabel
2017-02-22 23:52     ` Steve Longerbeam
2017-02-23  9:10       ` Philipp Zabel
2017-02-24  1:30         ` Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 34/36] media: imx: csi: add frame skipping support Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 35/36] media: imx: csi: fix crop rectangle reset in sink set_fmt Steve Longerbeam
2017-02-16  2:19 ` [PATCH v4 36/36] media: imx: propagate sink pad formats to source pads Steve Longerbeam
2017-02-16 11:29   ` Philipp Zabel
2017-02-16 18:19     ` Steve Longerbeam
2017-02-16 11:37 ` [PATCH v4 00/36] i.MX Media Driver Russell King - ARM Linux
2017-02-16 18:30   ` Steve Longerbeam
2017-02-16 22:20 ` Russell King - ARM Linux
2017-02-16 22:27   ` Steve Longerbeam
2017-02-16 22:57     ` Russell King - ARM Linux
2017-02-17 10:39       ` Philipp Zabel
2017-02-17 10:56         ` Russell King - ARM Linux
2017-02-17 11:21           ` Philipp Zabel
2017-02-18 17:21       ` Steve Longerbeam
2017-02-17 11:43     ` Philipp Zabel
2017-02-17 12:22       ` Sakari Ailus
2017-02-17 12:31         ` Russell King - ARM Linux
2017-02-17 15:04         ` Philipp Zabel
2017-02-18 11:58           ` Sakari Ailus

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).