From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S935937AbdACVAm (ORCPT ); Tue, 3 Jan 2017 16:00:42 -0500 Received: from mail-pg0-f65.google.com ([74.125.83.65]:34281 "EHLO mail-pg0-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1761250AbdACU56 (ORCPT ); Tue, 3 Jan 2017 15:57:58 -0500 From: Steve Longerbeam X-Google-Original-From: Steve Longerbeam To: shawnguo@kernel.org, kernel@pengutronix.de, fabio.estevam@nxp.com, robh+dt@kernel.org, mark.rutland@arm.com, linux@armlinux.org.uk, mchehab@kernel.org, gregkh@linuxfoundation.org, p.zabel@pengutronix.de Cc: linux-arm-kernel@lists.infradead.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, devel@driverdev.osuosl.org, Steve Longerbeam Subject: [PATCH v2 10/19] media: Add i.MX media core driver Date: Tue, 3 Jan 2017 12:57:20 -0800 Message-Id: <1483477049-19056-11-git-send-email-steve_longerbeam@mentor.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1483477049-19056-1-git-send-email-steve_longerbeam@mentor.com> References: <1483477049-19056-1-git-send-email-steve_longerbeam@mentor.com> Sender: linux-kernel-owner@vger.kernel.org List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Add the core media driver for i.MX SOC. Signed-off-by: Steve Longerbeam --- Documentation/devicetree/bindings/media/imx.txt | 205 +++++ Documentation/media/v4l-drivers/imx.rst | 430 ++++++++++ drivers/staging/media/Kconfig | 2 + drivers/staging/media/Makefile | 1 + drivers/staging/media/imx/Kconfig | 8 + drivers/staging/media/imx/Makefile | 6 + drivers/staging/media/imx/TODO | 18 + drivers/staging/media/imx/imx-media-common.c | 985 ++++++++++++++++++++++ drivers/staging/media/imx/imx-media-dev.c | 479 +++++++++++ drivers/staging/media/imx/imx-media-fim.c | 509 +++++++++++ drivers/staging/media/imx/imx-media-internal-sd.c | 457 ++++++++++ drivers/staging/media/imx/imx-media-of.c | 291 +++++++ drivers/staging/media/imx/imx-media-of.h | 25 + drivers/staging/media/imx/imx-media.h | 299 +++++++ include/media/imx.h | 15 + include/uapi/Kbuild | 1 + include/uapi/linux/v4l2-controls.h | 4 + include/uapi/media/Kbuild | 2 + include/uapi/media/imx.h | 30 + 19 files changed, 3767 insertions(+) create mode 100644 Documentation/devicetree/bindings/media/imx.txt create mode 100644 Documentation/media/v4l-drivers/imx.rst create mode 100644 drivers/staging/media/imx/Kconfig create mode 100644 drivers/staging/media/imx/Makefile create mode 100644 drivers/staging/media/imx/TODO create mode 100644 drivers/staging/media/imx/imx-media-common.c create mode 100644 drivers/staging/media/imx/imx-media-dev.c create mode 100644 drivers/staging/media/imx/imx-media-fim.c create mode 100644 drivers/staging/media/imx/imx-media-internal-sd.c create mode 100644 drivers/staging/media/imx/imx-media-of.c create mode 100644 drivers/staging/media/imx/imx-media-of.h create mode 100644 drivers/staging/media/imx/imx-media.h create mode 100644 include/media/imx.h create mode 100644 include/uapi/media/Kbuild create mode 100644 include/uapi/media/imx.h diff --git a/Documentation/devicetree/bindings/media/imx.txt b/Documentation/devicetree/bindings/media/imx.txt new file mode 100644 index 0000000..3593354 --- /dev/null +++ b/Documentation/devicetree/bindings/media/imx.txt @@ -0,0 +1,205 @@ +Freescale i.MX Media Video Devices + +Video Media Controller node +--------------------------- + +This is the parent media controller node for video capture support. 
+ +Required properties: +- compatible : "fsl,imx-media"; +- ports : Should contain a list of phandles pointing to camera + sensor interface ports of IPU devices + + +fim child node +-------------- + +This is an optional child node of the ipu_csi port nodes. It can be used +to modify the default control values for the video capture Frame +Interval Monitor. Refer to Documentation/media/v4l-drivers/imx.rst for +more info on the Frame Interval Monitor. + +Optional properties: +- enable : enable (1) or disable (0) the FIM; +- num-avg : how many frame intervals the FIM will average; +- num-skip : how many frames the FIM will skip after a video + capture restart before beginning to sample frame + intervals; +- tolerance-range : a range of tolerances for the averaged frame + interval error, specified as , in usec. + The FIM will signal a frame interval error if + min < error < max. If the max is <= min, then + tolerance range is disabled (interval error if + error > min). +- input-capture-channel: an input capture channel and channel flags, + specified as . The channel number + must be 0 or 1. The flags can be + IRQ_TYPE_EDGE_RISING, IRQ_TYPE_EDGE_FALLING, or + IRQ_TYPE_EDGE_BOTH, and specify which input + capture signal edge will trigger the event. If + an input capture channel is specified, the FIM + will use this method to measure frame intervals + instead of via the EOF interrupt. The input capture + method is much preferred over EOF as it is not + subject to interrupt latency errors. However it + requires routing the VSYNC or FIELD output + signals of the camera sensor to one of the + i.MX input capture pads (SD1_DAT0, SD1_DAT1), + which also gives up support for SD1. + + +mipi_csi2 node +-------------- + +This is the device node for the MIPI CSI-2 Receiver, required for MIPI +CSI-2 sensors. + +Required properties: +- compatible : "fsl,imx-mipi-csi2"; +- reg : physical base address and length of the register set; +- clocks : the MIPI CSI-2 receiver requires three clocks: hsi_tx + (the DPHY clock), video_27m, and eim_sel; +- clock-names : must contain "dphy_clk", "cfg_clk", "pix_clk"; + +Optional properties: +- interrupts : must contain two level-triggered interrupts, + in order: 100 and 101; + + +video mux node +-------------- + +This is the device node for the video multiplexer. It can control +either the i.MX internal video mux that selects between parallel image +sensors and MIPI CSI-2 virtual channels, or an external mux controlled +by a GPIO. It must be a child device of the syscon GPR device. + +Required properties: +- compatible : "imx-video-mux"; +- sink-ports : the number of sink (input) ports that follow +- ports : at least 2 sink ports must be specified that define + the endpoint inputs to the video mux, and there must + be exactly one output port endpoint which must be the + last port endpoint defined; + +Optional properties: +- reg : the GPR iomuxc register offset and bitmask of the + internal mux bits; +- mux-gpios : if reg is not specified, this must exist to define + a GPIO to control an external mux; + + +SabreLite Quad with OV5642 and OV5640 +------------------------------------- + +On the Sabrelite, the OV5642 module is connected to the parallel bus +input on the i.MX internal video mux to IPU1 CSI0. It's i2c bus connects +to i2c bus 2, so the ov5642 sensor node must be a child of i2c2. 
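+A minimal sketch of such an ov5642 node under i2c2 follows. This is only an
+illustrative sketch, not the actual SabreLite device tree: the label names
+and GPIO assignments are hypothetical, and the required properties are
+documented below.
+
+	&i2c2 {
+		camera@42 {
+			compatible = "ovti,ov5642";
+			reg = <0x42>;
+			clocks = <&clks 200>;	/* cko2 */
+			clock-names = "xclk";
+			xclk = <24000000>;
+			reset-gpios = <&gpio1 8 GPIO_ACTIVE_LOW>;	/* hypothetical */
+			pwdn-gpios = <&gpio1 6 GPIO_ACTIVE_HIGH>;	/* hypothetical */
+
+			port {
+				ov5642_to_ipu1_csi0_mux: endpoint {
+					remote-endpoint = <&ipu1_csi0_mux_from_parallel_sensor>;
+					bus-width = <8>;
+					hsync-active = <1>;
+					vsync-active = <1>;
+				};
+			};
+		};
+	};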
+ +The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI CSI-2 +receiver, and the four virtual channel outputs from the receiver are +routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1 CSI1, +vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The OV5640 is +also connected to i2c bus 2 on the SabreLite, so it too must be a child +of i2c2. Therefore the OV5642 and OV5640 must not share the same i2c slave +address. + +OV5642 Required properties: +- compatible : "ovti,ov5642"; +- clocks : the OV5642 system clock (cko2, 200 on Sabrelite); +- clock-names : must be "xclk"; +- reg : i2c slave address (must not be the default 0x3c on Sabrelite); +- xclk : the system clock frequency (24000000 on Sabrelite); +- reset-gpios : gpio for the reset pin to the OV5642; +- pwdn-gpios : gpio for the power-down pin to the OV5642; + +OV5642 Endpoint Required properties: +- remote-endpoint : must connect to parallel sensor interface input endpoint + on ipu1_csi0 video mux (ipu1_csi0_mux_from_parallel_sensor). +- bus-width : must be 8; +- hsync-active : must be 1; +- vsync-active : must be 1; + +OV5640 Required properties: +- compatible : "ovti,ov5640_mipi"; +- clocks : the OV5640 system clock (pwm3 on Sabrelite); +- clock-names : must be "xclk"; +- reg : i2c slave address (must not be the default 0x3c on Sabrelite); +- xclk : the system clock frequency (22000000 on Sabrelite); +- reset-gpios : gpio for the reset pin to the OV5640; +- pwdn-gpios : gpio for the power-down pin to the OV5640; + +OV5640 MIPI CSI-2 Endpoint Required properties: +- remote-endpoint : must connect to mipi_csi receiver input endpoint + (mipi_csi_from_mipi_sensor). +- reg : the MIPI CSI-2 virtual channel to transmit over; +- data-lanes : must be <0 1>; +- clock-lanes : must be <2>; + +OV5640/OV5642 Optional properties: +- DOVDD-supply : DOVDD regulator supply; +- AVDD-supply : AVDD regulator supply; +- DVDD-supply : DVDD regulator supply; + + +SabreAuto Quad with ADV7180 +--------------------------- + +On the SabreAuto, an on-board ADV7180 SD decoder is connected to the +parallel bus input on the internal video mux to IPU1 CSI0. + +Two analog video inputs are routed to the ADV7180 on the SabreAuto: +composite on Ain1, and composite on Ain3. Those inputs are defined +via the inputs and input-names properties in the ADV7180 device node. + +Regulators and port expanders are required for the ADV7180 (the power pin +is controlled via a port expander gpio on i2c3). The reset pin to the port expander +chip (MAX7310) is controlled by a gpio, so a reset-gpios property must +be defined under the port expander node to control it. + +The SabreAuto uses a steering pin to select between the SDA signal on +the i2c3 bus and a data-in pin for an SPI NOR chip. i2cmux can be used to +control this steering pin. The idle state of the i2cmux selects SPI NOR. +This is not the classic way to use i2cmux, since one side of the mux selects +something other than an i2c bus, but it works and is probably the cleanest +solution. Note that if one thread is attempting to access SPI NOR while +another thread is accessing i2c3, the SPI NOR access will fail, since the +i2cmux has selected the SDA pin rather than the SPI NOR data-in. This cannot +be avoided in any case: the board is not designed to allow concurrent +i2c3 and SPI NOR functions (and the default device tree does not enable +SPI NOR anyway).
+ +ADV7180 Required properties: +- compatible : "adi,adv7180"; +- reg : must be 0x21; + +ADV7180 Optional properties: +- DOVDD-supply : DOVDD regulator supply; +- AVDD-supply : AVDD regulator supply; +- DVDD-supply : DVDD regulator supply; +- PVDD-supply : PVDD regulator supply; +- pwdn-gpios : gpio to control ADV7180 power pin, must be + <&port_exp_b 2 GPIO_ACTIVE_LOW> on SabreAuto; +- interrupts : interrupt from ADV7180, must be <27 0x8> on SabreAuto; +- interrupt-parent : must be <&gpio1> on SabreAuto; +- inputs : list of input mux values, must be 0x00 followed by + 0x02 on SabreAuto; +- input-names : names of the inputs; + +ADV7180 Endpoint Required properties: +- remote-endpoint : must connect to parallel sensor interface input endpoint + on ipu1_csi0 video mux (ipu1_csi0_mux_from_parallel_sensor). +- bus-width : must be 8; + + +SabreSD Quad with OV5642 and MIPI CSI-2 OV5640 +---------------------------------------------- + +Similarly to SabreLite, the SabreSD supports a parallel interface +OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642 +connects to i2c bus 1 (i2c1) and the OV5640 to i2c bus 2 (i2c2). + +OV5640 and OV5642 properties are as described above on SabreLite. + +The OV5642 support has not been tested yet due to lack of hardware, +so only OV5640 is enabled in the device tree at this time. diff --git a/Documentation/media/v4l-drivers/imx.rst b/Documentation/media/v4l-drivers/imx.rst new file mode 100644 index 0000000..7a215d8 --- /dev/null +++ b/Documentation/media/v4l-drivers/imx.rst @@ -0,0 +1,430 @@ +i.MX Video Capture Driver +========================= + +Introduction +------------ + +The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which +handles the flow of image frames to and from capture devices and +display devices. + +For image capture, the IPU contains the following internal subunits: + +- Image DMA Controller (IDMAC) +- Camera Serial Interface (CSI) +- Image Converter (IC) +- Sensor Multi-FIFO Controller (SMFC) +- Image Rotator (IRT) +- Video De-Interlace Controller (VDIC) + +The IDMAC is the DMA controller for transfer of image frames to and from +memory. Various dedicated DMA channels exist for both video capture and +display paths. + +The CSI is the frontend capture unit that interfaces directly with +capture sensors over Parallel, BT.656/1120, and MIPI CSI-2 busses. + +The IC handles color-space conversion, resizing, and rotation +operations. There are three independent "tasks" within the IC that can +carry out conversions concurrently: pre-processing encoding, +pre-processing preview, and post-processing. + +The SMFC is composed of four independent channels that each can transfer +captured frames from sensors directly to memory concurrently. + +The IRT carries out 90 and 270 degree image rotation operations. + +The VDIC handles the conversion of interlaced video to progressive, with +support for different motion compensation modes (low, medium, and high +motion). The deinterlaced output frames from the VDIC can be sent to the +IC pre-process preview task for further conversions. + +In addition to the IPU internal subunits, there are also two units +outside the IPU that are also involved in video capture on i.MX: + +- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus + interface. This is a Synopsys DesignWare core. +- A video multiplexer for selecting among multiple sensor inputs to + send to a CSI. + +For more info, refer to the latest versions of the i.MX5/6 reference +manuals listed under References. 
+ + +Features +-------- + +Some of the features of this driver include: + +- Many different pipelines can be configured via the media controller API + that correspond to the hardware video capture pipelines supported in + the i.MX. + +- Supports parallel, BT.656, and MIPI CSI-2 interfaces. + +- Up to four concurrent sensor acquisitions, by configuring each + sensor's pipeline using independent entities. This is currently + demonstrated with the SabreSD and SabreLite reference boards with + independent OV5642 and MIPI CSI-2 OV5640 sensor modules. + +- Scaling, color-space conversion, and image rotation via IC task + subdevs. + +- Many pixel formats supported (RGB, packed and planar YUV, partial + planar YUV). + +- The IC pre-process preview subdev supports motion-compensated + de-interlacing using the VDIC, with three motion compensation modes: + low, medium, and high motion. The mode is specified with a custom + control. Pipelines are defined that allow sending frames to the + preview subdev directly from the CSI or from the SMFC. + +- Includes a Frame Interval Monitor (FIM) that can correct vertical sync + problems with the ADV718x video decoders. See below for a description + of the FIM. + + +Capture Pipelines +----------------- + +The following describes the various use-cases supported by the pipelines. + +The pipelines shown do not include the frontend sensor, video mux, or mipi +csi-2 receiver links, since these depend on the type of sensor interface +(parallel or mipi csi-2). In all cases, these pipelines begin with: + +sensor -> ipu_csi_mux -> ipu_csi -> ... + +for parallel sensors, or: + +sensor -> imx-mipi-csi2 -> (ipu_csi_mux) -> ipu_csi -> ... + +for mipi csi-2 sensors. The imx-mipi-csi2 receiver may need to route +to the video mux (ipu_csi_mux) before sending to the CSI, depending +on the mipi csi-2 virtual channel, hence ipu_csi_mux is shown in +parentheses. + +Unprocessed Video Capture: +-------------------------- + +Send frames directly from the sensor to the camera interface, with no +conversions: + +-> ipu_smfc -> camif + +Note that the ipu_smfc can do pixel reordering within the same colorspace. +For example, its sink pad can take UYVY2X8, but its source pad can +output YUYV2X8. + +IC Direct Conversions: +---------------------- + +This pipeline uses the preprocess encode entity to route frames directly +from the CSI to the IC (bypassing the SMFC), to carry out scaling up to +1024x1024 resolution, CSC, and image rotation: + +-> ipu_ic_prpenc -> camif + +This can be a useful capture pipeline in environments with heavy memory bus +traffic, since it has minimal IDMAC channel usage. + +Post-Processing Conversions: +---------------------------- + +This pipeline routes frames from the SMFC to the post-processing +entity.
In addition to CSC and rotation, this entity supports tiling +which allows scaled output beyond the 1024x1024 limitation of the IC +(up to 4096x4096 scaling output is supported): + +-> ipu_smfc -> ipu_ic_pp -> camif + +Motion Compensated De-interlace: +-------------------------------- + +This pipeline routes frames from the SMFC to the preprocess preview +entity to support motion-compensated de-interlacing using the VDIC, +scaling up to 1024x1024, and CSC: + +-> ipu_smfc -> ipu_ic_prpvf -> camif + +This pipeline also carries out the same conversions as above, but routes +frames directly from the CSI to the IC preprocess preview entity for +minimal memory bandwidth usage (note: this pipeline only works in +"high motion" mode): + +-> ipu_ic_prpvf -> camif + +This pipeline takes the motion-compensated de-interlaced frames and +sends them to the post-processor, to support motion-compensated +de-interlacing, scaling up to 4096x4096, CSC, and rotation: + +-> (ipu_smfc) -> ipu_ic_prpvf -> ipu_ic_pp -> camif + + +Usage Notes +----------- + +Many of the subdevs require information from the active sensor in the +current pipeline when configuring pad formats. Therefore the media links +should be established before configuring the media pad formats. + +Similarly, the capture v4l2 interface subdev inherits controls from the +active subdevs in the current pipeline at link-setup time. Therefore the +capture links should be the last links established in order for capture +to "see" and inherit all possible controls. + +The following platforms have been tested: + + +SabreLite with OV5642 and OV5640 +-------------------------------- + +This platform requires the OmniVision OV5642 module with a parallel +camera interface, and the OV5640 module with a MIPI CSI-2 +interface. Both modules are available from Boundary Devices: + +https://boundarydevices.com/products/nit6x_5mp +https://boundarydevices.com/product/nit6x_5mp_mipi + +Note that if only one camera module is available, the other sensor +node can be disabled in the device tree. + +The following basic example configures unprocessed video capture +pipelines for both sensors. The OV5642 is routed to camif0 +(usually /dev/video0), and the OV5640 (transmitting on mipi csi-2 +virtual channel 1) is routed to camif1 (usually /dev/video1). Both +sensors are configured to output 640x480, UYVY (not shown: all pad +field types should be set to "NONE"): + +.. 
code-block:: none + + # Setup links for OV5642 + media-ctl -l '"ov5642 1-0042":0 -> "ipu1_csi0_mux":1[1]' + media-ctl -l '"ipu1_csi0_mux":2 -> "ipu1_csi0":0[1]' + media-ctl -l '"ipu1_csi0":1 -> "ipu1_smfc0":0[1]' + media-ctl -l '"ipu1_smfc0":1 -> "camif0":0[1]' + media-ctl -l '"camif0":1 -> "camif0 devnode":0[1]' + # Setup links for OV5640 + media-ctl -l '"ov5640_mipi 1-0040":0 -> "imx-mipi-csi2":0[1]' + media-ctl -l '"imx-mipi-csi2":2 -> "ipu1_csi1":0[1]' + media-ctl -l '"ipu1_csi1":1 -> "ipu1_smfc1":0[1]' + media-ctl -l '"ipu1_smfc1":1 -> "camif1":0[1]' + media-ctl -l '"camif1":1 -> "camif1 devnode":0[1]' + # Configure pads for OV5642 pipeline + media-ctl -V "\"ov5642 1-0042\":0 [fmt:YUYV2X8/640x480]" + media-ctl -V "\"ipu1_csi0_mux\":1 [fmt:YUYV2X8/640x480]" + media-ctl -V "\"ipu1_csi0_mux\":2 [fmt:YUYV2X8/640x480]" + media-ctl -V "\"ipu1_csi0\":0 [fmt:YUYV2X8/640x480]" + media-ctl -V "\"ipu1_csi0\":1 [fmt:YUYV2X8/640x480]" + media-ctl -V "\"ipu1_smfc0\":0 [fmt:YUYV2X8/640x480]" + media-ctl -V "\"ipu1_smfc0\":1 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"camif0\":0 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"camif0\":1 [fmt:UYVY2X8/640x480]" + # Configure pads for OV5640 pipeline + media-ctl -V "\"ov5640_mipi 1-0040\":0 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"imx-mipi-csi2\":0 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"imx-mipi-csi2\":2 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"ipu1_csi1\":0 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"ipu1_csi1\":1 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"ipu1_smfc1\":0 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"ipu1_smfc1\":1 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"camif1\":0 [fmt:UYVY2X8/640x480]" + media-ctl -V "\"camif1\":1 [fmt:UYVY2X8/640x480]" + +Streaming can then begin independently on device nodes /dev/video0 +and /dev/video1. + +SabreAuto with ADV7180 decoder +------------------------------ + +The following example configures a pipeline to capture from the ADV7182 +video decoder, assuming NTSC 720x480 input signals, with Motion +Compensated de-interlacing (not shown: all pad field types should be set +as indicated). $outputfmt can be any format supported by the +ipu1_ic_prpvf entity at its output pad: + +.. code-block:: none + + # Setup links + media-ctl -l '"adv7180 3-0021":0 -> "ipu1_csi0_mux":1[1]' + media-ctl -l '"ipu1_csi0_mux":2 -> "ipu1_csi0":0[1]' + media-ctl -l '"ipu1_csi0":1 -> "ipu1_smfc0":0[1]' + media-ctl -l '"ipu1_smfc0":1 -> "ipu1_ic_prpvf":0[1]' + media-ctl -l '"ipu1_ic_prpvf":1 -> "camif0":0[1]' + media-ctl -l '"camif0":1 -> "camif0 devnode":0[1]' + # Configure pads + # pad field types for below pads must be an interlaced type + # such as "ALTERNATE" + media-ctl -V "\"adv7180 3-0021\":0 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_csi0_mux\":1 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_csi0_mux\":2 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_csi0\":0 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_csi0\":1 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_smfc0\":0 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_smfc0\":1 [fmt:UYVY2X8/720x480]" + media-ctl -V "\"ipu1_ic_prpvf\":0 [fmt:UYVY2X8/720x480]" + # pad field types for below pads must be "NONE" + media-ctl -V "\"ipu1_ic_prpvf\":1 [fmt:$outputfmt]" + media-ctl -V "\"camif0\":0 [fmt:$outputfmt]" + media-ctl -V "\"camif0\":1 [fmt:$outputfmt]" + +Streaming can then begin on /dev/video0. + +This platform accepts Composite Video analog inputs to the ADV7180 on +Ain1 (connector J42) and Ain3 (connector J43). + +To switch to Ain1: + +.. 
code-block:: none + + # v4l2-ctl -i0 + +To switch to Ain3: + +.. code-block:: none + + # v4l2-ctl -i1 + + +Frame Interval Monitor +---------------------- + +The adv718x decoders can occasionally send corrupt fields during +NTSC/PAL signal re-sync (too little or too many video lines). When +this happens, the IPU triggers a mechanism to re-establish vertical +sync by adding 1 dummy line every frame, which causes a rolling effect +from image to image, and can last a long time before a stable image is +recovered. Or sometimes the mechanism doesn't work at all, causing a +permanent split image (one frame contains lines from two consecutive +captured images). + +From experiment it was found that during image rolling, the frame +intervals (elapsed time between two EOF's) drop below the nominal +value for the current standard, by about one frame time (60 usec), +and remain at that value until rolling stops. + +While the reason for this observation isn't known (the IPU dummy +line mechanism should show an increase in the intervals by 1 line +time every frame, not a fixed value), we can use it to detect the +corrupt fields using a frame interval monitor. If the FIM detects a +bad frame interval, a subdev event is sent. In response, userland can +issue a streaming restart to correct the rolling/split image. + +The FIM is implemented in the imx-csi entity, and the entities that have +direct connections to the CSI call into the FIM to monitor the frame +intervals: ipu_smfc, ipu_ic_prpenc, and ipu_prpvf (when configured with +a direct link from ipu_csi). Userland can register with the FIM event +notifications on the imx-csi subdev device node +(V4L2_EVENT_IMX_FRAME_INTERVAL). + +The imx-csi entity includes custom controls to tweak some dials for FIM. +If one of these controls is changed during streaming, the FIM will be +reset and will continue at the new settings. + +- V4L2_CID_IMX_FIM_ENABLE + +Enable/disable the FIM. + +- V4L2_CID_IMX_FIM_NUM + +How many frame interval errors to average before comparing against the +nominal frame interval reported by the sensor. This can reduce noise +from interrupt latency. + +- V4L2_CID_IMX_FIM_TOLERANCE_MIN + +If the averaged intervals fall outside nominal by this amount, in +microseconds, streaming will be restarted. + +- V4L2_CID_IMX_FIM_TOLERANCE_MAX + +If any interval errors are higher than this value, those error samples +are discarded and do not enter into the average. This can be used to +discard really high interval errors that might be due to very high +system load, causing excessive interrupt latencies. + +- V4L2_CID_IMX_FIM_NUM_SKIP + +How many frames to skip after a FIM reset or stream restart before +FIM begins to average intervals. It has been found that there can +be a few bad frame intervals after stream restart which are not +attributed to adv718x sending a corrupt field, so this is used to +skip those frames to prevent unnecessary restarts. + +Finally, all the defaults for these controls can be modified via a +device tree child node of the ipu_csi port nodes, see +Documentation/devicetree/bindings/media/imx.txt. + + +SabreSD with MIPI CSI-2 OV5640 +------------------------------ + +The device tree for SabreSD includes OF graphs for both the parallel +OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI +CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled. +The OV5640 module connects to MIPI connector J5 (sorry I don't have the +compatible module part number or URL). 
+ +The following example configures a post-processing pipeline to capture +from the OV5640 (not shown: all pad field types should be set to +"NONE"). $sensorfmt can be any format supported by the +OV5640. $outputfmt can be any format supported by the ipu1_ic_pp1 +entity at its output pad: + + +.. code-block:: none + + # Setup links + media-ctl -l '"ov5640_mipi 1-003c":0 -> "imx-mipi-csi2":0[1]' + media-ctl -l '"imx-mipi-csi2":2 -> "ipu1_csi1":0[1]' + media-ctl -l '"ipu1_csi1":1 -> "ipu1_smfc1":0[1]' + media-ctl -l '"ipu1_smfc1":1 -> "ipu1_ic_pp1":0[1]' + media-ctl -l '"ipu1_ic_pp1":1 -> "camif0":0[1]' + media-ctl -l '"camif0":1 -> "camif0 devnode":0[1]' + # Configure pads + media-ctl -V "\"ov5640_mipi 1-003c\":0 [fmt:$sensorfmt]" + media-ctl -V "\"imx-mipi-csi2\":0 [fmt:$sensorfmt]" + media-ctl -V "\"imx-mipi-csi2\":2 [fmt:$sensorfmt]" + media-ctl -V "\"ipu1_csi1\":0 [fmt:$sensorfmt]" + media-ctl -V "\"ipu1_csi1\":1 [fmt:$sensorfmt]" + media-ctl -V "\"ipu1_smfc1\":0 [fmt:$sensorfmt]" + media-ctl -V "\"ipu1_smfc1\":1 [fmt:$sensorfmt]" + media-ctl -V "\"ipu1_ic_pp1\":0 [fmt:$sensorfmt]" + media-ctl -V "\"ipu1_ic_pp1\":1 [fmt:$outputfmt]" + media-ctl -V "\"camif0\":0 [fmt:$outputfmt]" + media-ctl -V "\"camif0\":1 [fmt:$outputfmt]" + +Streaming can then begin on /dev/video0. + + + +Known Issues +------------ + +1. When using 90 or 270 degree rotation control at capture resolutions + near the IC resizer limit of 1024x1024, and combined with planar + pixel formats (YUV420, YUV422p), frame capture will often fail with + no end-of-frame interrupts from the IDMAC channel. To work around + this, use lower resolution and/or packed formats (YUYV, RGB3, etc.) + when 90 or 270 rotations are needed. + + +File list +--------- + +drivers/staging/media/imx/ +include/media/imx.h +include/uapi/media/imx.h + +References +---------- + +[1] "i.MX 6Dual/6Quad Applications Processor Reference Manual" +[2] "i.MX 6Solo/6DualLite Applications Processor Reference Manual" + + +Author +------ +Steve Longerbeam + +Copyright (C) 2012-2016 Mentor Graphics Inc. diff --git a/drivers/staging/media/Kconfig b/drivers/staging/media/Kconfig index ffb8fa7..05b55a8 100644 --- a/drivers/staging/media/Kconfig +++ b/drivers/staging/media/Kconfig @@ -25,6 +25,8 @@ source "drivers/staging/media/cxd2099/Kconfig" source "drivers/staging/media/davinci_vpfe/Kconfig" +source "drivers/staging/media/imx/Kconfig" + source "drivers/staging/media/omap4iss/Kconfig" source "drivers/staging/media/s5p-cec/Kconfig" diff --git a/drivers/staging/media/Makefile b/drivers/staging/media/Makefile index a28e82c..6f50ddd 100644 --- a/drivers/staging/media/Makefile +++ b/drivers/staging/media/Makefile @@ -1,6 +1,7 @@ obj-$(CONFIG_I2C_BCM2048) += bcm2048/ obj-$(CONFIG_VIDEO_SAMSUNG_S5P_CEC) += s5p-cec/ obj-$(CONFIG_DVB_CXD2099) += cxd2099/ +obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx/ obj-$(CONFIG_LIRC_STAGING) += lirc/ obj-$(CONFIG_VIDEO_DM365_VPFE) += davinci_vpfe/ obj-$(CONFIG_VIDEO_OMAP4) += omap4iss/ diff --git a/drivers/staging/media/imx/Kconfig b/drivers/staging/media/imx/Kconfig new file mode 100644 index 0000000..bfde58d --- /dev/null +++ b/drivers/staging/media/imx/Kconfig @@ -0,0 +1,8 @@ +config VIDEO_IMX_MEDIA + tristate "i.MX5/6 V4L2 media core driver" + depends on MEDIA_CONTROLLER && VIDEO_V4L2 && ARCH_MXC && IMX_IPUV3_CORE + default y + ---help--- + Say yes here to enable support for video4linux media controller + driver for the i.MX5/6 SOC. 
+ diff --git a/drivers/staging/media/imx/Makefile b/drivers/staging/media/imx/Makefile new file mode 100644 index 0000000..ef9f11b --- /dev/null +++ b/drivers/staging/media/imx/Makefile @@ -0,0 +1,6 @@ +imx-media-objs := imx-media-dev.o imx-media-fim.o imx-media-internal-sd.o \ + imx-media-of.o + +obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o +obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o + diff --git a/drivers/staging/media/imx/TODO b/drivers/staging/media/imx/TODO new file mode 100644 index 0000000..b188c6e --- /dev/null +++ b/drivers/staging/media/imx/TODO @@ -0,0 +1,18 @@ + +- v4l2-compliance + +- imx-csi subdev is not being autoloaded as a kernel module, probably + because ipu_add_client_devices() does not register the IPU client + platform devices, but only allocates those devices. + +- Verify driver remove paths. + +- Currently registering with notifications from subdevs are only + available through the subdev device nodes and not through the main + capture device node. Need to come up with a way to find the camif in + the current pipeline that owns the subdev that sent the notify. + +- Need to decide whether a mem2mem device should be incorporated into + the media graph, or whether it should be a separate device that does + not link with any other entities. + diff --git a/drivers/staging/media/imx/imx-media-common.c b/drivers/staging/media/imx/imx-media-common.c new file mode 100644 index 0000000..f04218f --- /dev/null +++ b/drivers/staging/media/imx/imx-media-common.c @@ -0,0 +1,985 @@ +/* + * V4L2 Media Controller Driver for Freescale i.MX5/6 SOC + * + * Copyright (c) 2016 Mentor Graphics Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ +#include +#include "imx-media.h" + +/* + * List of pixel formats for the subdevs. This must be a super-set of + * the formats supported by the ipu image converter. 
+ */ +static const struct imx_media_pixfmt imx_media_formats[] = { + { + .fourcc = V4L2_PIX_FMT_UYVY, + .codes = {MEDIA_BUS_FMT_UYVY8_2X8, MEDIA_BUS_FMT_UYVY8_1X16}, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 16, + }, { + .fourcc = V4L2_PIX_FMT_YUYV, + .codes = {MEDIA_BUS_FMT_YUYV8_2X8, MEDIA_BUS_FMT_YUYV8_1X16}, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 16, + }, { + .fourcc = V4L2_PIX_FMT_RGB565, + .codes = {MEDIA_BUS_FMT_RGB565_2X8_LE}, + .cs = IPUV3_COLORSPACE_RGB, + .bpp = 16, + }, { + .fourcc = V4L2_PIX_FMT_RGB24, + .codes = {MEDIA_BUS_FMT_RGB888_1X24, + MEDIA_BUS_FMT_RGB888_2X12_LE}, + .cs = IPUV3_COLORSPACE_RGB, + .bpp = 24, + }, { + .fourcc = V4L2_PIX_FMT_BGR24, + .cs = IPUV3_COLORSPACE_RGB, + .bpp = 24, + }, { + .fourcc = V4L2_PIX_FMT_RGB32, + .codes = {MEDIA_BUS_FMT_ARGB8888_1X32}, + .cs = IPUV3_COLORSPACE_RGB, + .bpp = 32, + }, { + .fourcc = V4L2_PIX_FMT_BGR32, + .cs = IPUV3_COLORSPACE_RGB, + .bpp = 32, + }, { + .fourcc = V4L2_PIX_FMT_YUV420, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 12, + .planar = true, + }, { + .fourcc = V4L2_PIX_FMT_YVU420, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 12, + .planar = true, + }, { + .fourcc = V4L2_PIX_FMT_YUV422P, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 16, + .planar = true, + }, { + .fourcc = V4L2_PIX_FMT_NV12, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 12, + .planar = true, + }, { + .fourcc = V4L2_PIX_FMT_NV16, + .cs = IPUV3_COLORSPACE_YUV, + .bpp = 16, + .planar = true, + }, +}; + +const struct imx_media_pixfmt *imx_media_find_format(u32 fourcc, u32 code, + bool allow_rgb, + bool allow_planar) +{ + const struct imx_media_pixfmt *fmt, *ret = NULL; + int i, j; + + for (i = 0; i < ARRAY_SIZE(imx_media_formats); i++) { + fmt = &imx_media_formats[i]; + + if (fourcc && fmt->fourcc == fourcc && + (fmt->cs != IPUV3_COLORSPACE_RGB || allow_rgb) && + (!fmt->planar || (allow_planar && fmt->codes[0]))) { + ret = fmt; + goto out; + } + + for (j = 0; fmt->codes[j]; j++) { + if (fmt->codes[j] == code && + (fmt->cs != IPUV3_COLORSPACE_RGB || allow_rgb) && + (!fmt->planar || allow_planar)) { + ret = fmt; + goto out; + } + } + } +out: + return ret; +} +EXPORT_SYMBOL_GPL(imx_media_find_format); + +int imx_media_enum_format(u32 *code, u32 index, bool allow_rgb, + bool allow_planar) +{ + const struct imx_media_pixfmt *fmt; + + if (index >= ARRAY_SIZE(imx_media_formats)) + return -EINVAL; + + fmt = &imx_media_formats[index]; + if ((fmt->cs == IPUV3_COLORSPACE_RGB && !allow_rgb) || + (fmt->planar && (!allow_planar || !fmt->codes[0]))) + return -EINVAL; + + *code = fmt->codes[0]; + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_enum_format); + +int imx_media_init_mbus_fmt(struct v4l2_mbus_framefmt *mbus, + u32 width, u32 height, u32 code, u32 field, + const struct imx_media_pixfmt **cc) +{ + const struct imx_media_pixfmt *lcc; + + mbus->width = width; + mbus->height = height; + mbus->field = field; + if (code == 0) + imx_media_enum_format(&code, 0, true, true); + lcc = imx_media_find_format(0, code, true, true); + if (!lcc) + return -EINVAL; + mbus->code = code; + + if (cc) + *cc = lcc; + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_init_mbus_fmt); + +int imx_media_mbus_fmt_to_pix_fmt(struct v4l2_pix_format *pix, + struct v4l2_mbus_framefmt *mbus) +{ + const struct imx_media_pixfmt *fmt; + u32 stride; + + fmt = imx_media_find_format(0, mbus->code, true, true); + if (!fmt) + return -EINVAL; + + stride = fmt->planar ? 
mbus->width : (mbus->width * fmt->bpp) >> 3; + + pix->width = mbus->width; + pix->height = mbus->height; + pix->pixelformat = fmt->fourcc; + pix->field = mbus->field; + pix->bytesperline = stride; + pix->sizeimage = (pix->width * pix->height * fmt->bpp) >> 3; + + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_mbus_fmt_to_pix_fmt); + +int imx_media_mbus_fmt_to_ipu_image(struct ipu_image *image, + struct v4l2_mbus_framefmt *mbus) +{ + int ret; + + memset(image, 0, sizeof(*image)); + + ret = imx_media_mbus_fmt_to_pix_fmt(&image->pix, mbus); + if (ret) + return ret; + + image->rect.width = mbus->width; + image->rect.height = mbus->height; + + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_mbus_fmt_to_ipu_image); + +int imx_media_ipu_image_to_mbus_fmt(struct v4l2_mbus_framefmt *mbus, + struct ipu_image *image) +{ + const struct imx_media_pixfmt *fmt; + + fmt = imx_media_find_format(image->pix.pixelformat, 0, true, true); + if (!fmt) + return -EINVAL; + + memset(mbus, 0, sizeof(*mbus)); + mbus->width = image->pix.width; + mbus->height = image->pix.height; + mbus->code = fmt->codes[0]; + mbus->field = image->pix.field; + + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_ipu_image_to_mbus_fmt); + +/* + * DMA buffer ring handling + */ +struct imx_media_dma_buf_ring { + struct imx_media_dev *imxmd; + + /* the ring */ + struct imx_media_dma_buf buf[IMX_MEDIA_MAX_RING_BUFS]; + /* the scratch buffer for underruns */ + struct imx_media_dma_buf scratch; + + /* buffer generator */ + struct media_entity *src; + /* buffer receiver */ + struct media_entity *sink; + + spinlock_t lock; + + int num_bufs; + unsigned long last_seq; +}; + +void imx_media_free_dma_buf(struct imx_media_dev *imxmd, + struct imx_media_dma_buf *buf) +{ + if (buf->virt && !buf->vb) + dma_free_coherent(imxmd->dev, buf->len, buf->virt, buf->phys); + + buf->virt = NULL; + buf->phys = 0; +} +EXPORT_SYMBOL_GPL(imx_media_free_dma_buf); + +int imx_media_alloc_dma_buf(struct imx_media_dev *imxmd, + struct imx_media_dma_buf *buf, + int size) +{ + imx_media_free_dma_buf(imxmd, buf); + + buf->ring = NULL; + buf->vb = NULL; + buf->len = PAGE_ALIGN(size); + buf->virt = dma_alloc_coherent(imxmd->dev, buf->len, &buf->phys, + GFP_DMA | GFP_KERNEL); + if (!buf->virt) { + dev_err(imxmd->dev, "failed to alloc dma buffer\n"); + return -ENOMEM; + } + + buf->state = IMX_MEDIA_BUF_STATUS_PREPARED; + buf->seq = 0; + + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_alloc_dma_buf); + +void imx_media_free_dma_buf_ring(struct imx_media_dma_buf_ring *ring) +{ + int i; + + if (!ring) + return; + + dev_dbg(ring->imxmd->dev, "freeing ring [%s -> %s]\n", + ring->src->name, ring->sink->name); + + imx_media_free_dma_buf(ring->imxmd, &ring->scratch); + + for (i = 0; i < ring->num_bufs; i++) + imx_media_free_dma_buf(ring->imxmd, &ring->buf[i]); + kfree(ring); +} +EXPORT_SYMBOL_GPL(imx_media_free_dma_buf_ring); + +struct imx_media_dma_buf_ring * +imx_media_alloc_dma_buf_ring(struct imx_media_dev *imxmd, + struct media_entity *src, + struct media_entity *sink, + int size, int num_bufs, + bool alloc_bufs) +{ + struct imx_media_dma_buf_ring *ring; + int i, ret; + + if (num_bufs < IMX_MEDIA_MIN_RING_BUFS || + num_bufs > IMX_MEDIA_MAX_RING_BUFS) + return ERR_PTR(-EINVAL); + + ring = kzalloc(sizeof(*ring), GFP_KERNEL); + if (!ring) + return ERR_PTR(-ENOMEM); + + spin_lock_init(&ring->lock); + ring->imxmd = imxmd; + ring->src = src; + ring->sink = sink; + ring->num_bufs = num_bufs; + ring->last_seq = 0; + + for (i = 0; i < num_bufs; i++) { + if (alloc_bufs) { + ret = imx_media_alloc_dma_buf(imxmd, 
&ring->buf[i], + size); + if (ret) { + ring->num_bufs = i; + goto free_ring; + } + } + ring->buf[i].ring = ring; + ring->buf[i].index = i; + } + + /* now allocate the scratch buffer for underruns */ + ret = imx_media_alloc_dma_buf(imxmd, &ring->scratch, size); + if (ret) + goto free_ring; + ring->scratch.ring = ring; + ring->scratch.index = 999; + + dev_dbg(ring->imxmd->dev, + "created ring [%s -> %s], buf size %d, num bufs %d\n", + ring->src->name, ring->sink->name, size, num_bufs); + + return ring; + +free_ring: + imx_media_free_dma_buf_ring(ring); + return ERR_PTR(ret); +} +EXPORT_SYMBOL_GPL(imx_media_alloc_dma_buf_ring); + +static struct imx_media_dma_buf * +__dma_buf_queue(struct imx_media_dma_buf_ring *ring, int index) +{ + struct imx_media_dma_buf *buf; + + if (index >= ring->num_bufs) + return ERR_PTR(-EINVAL); + + buf = &ring->buf[index]; + if (WARN_ON(buf->state != IMX_MEDIA_BUF_STATUS_PREPARED)) + return ERR_PTR(-EINVAL); + + buf->state = IMX_MEDIA_BUF_STATUS_QUEUED; + buf->seq = ring->last_seq++; + + return buf; +} + +int imx_media_dma_buf_queue(struct imx_media_dma_buf_ring *ring, int index) +{ + struct imx_media_dma_buf *buf; + unsigned long flags; + + spin_lock_irqsave(&ring->lock, flags); + buf = __dma_buf_queue(ring, index); + spin_unlock_irqrestore(&ring->lock, flags); + + if (IS_ERR(buf)) + return PTR_ERR(buf); + + dev_dbg(ring->imxmd->dev, "buf%d [%s -> %s] queued\n", + index, ring->src->name, ring->sink->name); + + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_queue); + +int imx_media_dma_buf_queue_from_vb(struct imx_media_dma_buf_ring *ring, + struct vb2_buffer *vb) +{ + struct imx_media_dma_buf *buf; + unsigned long flags; + dma_addr_t phys; + void *virt; + + if (vb->index >= ring->num_bufs) + return -EINVAL; + + virt = vb2_plane_vaddr(vb, 0); + phys = vb2_dma_contig_plane_dma_addr(vb, 0); + + spin_lock_irqsave(&ring->lock, flags); + buf = __dma_buf_queue(ring, vb->index); + if (IS_ERR(buf)) + goto err_unlock; + + buf->virt = virt; + buf->phys = phys; + buf->vb = vb; + spin_unlock_irqrestore(&ring->lock, flags); + + dev_dbg(ring->imxmd->dev, "buf%d [%s -> %s] queued from vb\n", + buf->index, ring->src->name, ring->sink->name); + + return 0; +err_unlock: + spin_unlock_irqrestore(&ring->lock, flags); + return PTR_ERR(buf); +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_queue_from_vb); + +void imx_media_dma_buf_done(struct imx_media_dma_buf *buf, + enum imx_media_dma_buf_status status) +{ + struct imx_media_dma_buf_ring *ring = buf->ring; + unsigned long flags; + + spin_lock_irqsave(&ring->lock, flags); + WARN_ON(buf->state != IMX_MEDIA_BUF_STATUS_ACTIVE); + buf->state = buf->status = status; + spin_unlock_irqrestore(&ring->lock, flags); + + if (buf == &ring->scratch) + dev_dbg(ring->imxmd->dev, "buf-scratch [%s -> %s] done\n", + ring->src->name, ring->sink->name); + else + dev_dbg(ring->imxmd->dev, "buf%d [%s -> %s] done\n", + buf->index, ring->src->name, ring->sink->name); + + /* if the sink is a subdev, inform it that new buffers are available */ + if (is_media_entity_v4l2_subdev(ring->sink)) { + struct v4l2_subdev *sd = + media_entity_to_v4l2_subdev(ring->sink); + v4l2_subdev_call(sd, core, ioctl, IMX_MEDIA_NEW_DMA_BUF, NULL); + } +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_done); + +/* find and return the oldest buffer in the done/error state */ +struct imx_media_dma_buf * +imx_media_dma_buf_dequeue(struct imx_media_dma_buf_ring *ring) +{ + unsigned long flags, oldest_seq = (unsigned long)-1; + struct imx_media_dma_buf *buf = NULL, *scan; + int i; + + 
spin_lock_irqsave(&ring->lock, flags); + + for (i = 0; i < ring->num_bufs; i++) { + scan = &ring->buf[i]; + if (scan->state != IMX_MEDIA_BUF_STATUS_DONE && + scan->state != IMX_MEDIA_BUF_STATUS_ERROR) + continue; + if (scan->seq < oldest_seq) { + buf = scan; + oldest_seq = scan->seq; + } + } + + if (buf) + buf->state = IMX_MEDIA_BUF_STATUS_PREPARED; + + spin_unlock_irqrestore(&ring->lock, flags); + + if (buf) + dev_dbg(ring->imxmd->dev, "buf%d [%s -> %s] dequeued\n", + buf->index, ring->src->name, ring->sink->name); + + return buf; +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_dequeue); + +/* find and return the active buffer, there can be only one! */ +struct imx_media_dma_buf * +imx_media_dma_buf_get_active(struct imx_media_dma_buf_ring *ring) +{ + struct imx_media_dma_buf *buf = NULL; + unsigned long flags; + int i; + + spin_lock_irqsave(&ring->lock, flags); + + for (i = 0; i < ring->num_bufs; i++) { + if (ring->buf[i].state == IMX_MEDIA_BUF_STATUS_ACTIVE) { + buf = &ring->buf[i]; + goto out; + } + } + + if (ring->scratch.state == IMX_MEDIA_BUF_STATUS_ACTIVE) + buf = &ring->scratch; + +out: + spin_unlock_irqrestore(&ring->lock, flags); + return buf; +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_get_active); + +/* set this buffer as the active one */ +int imx_media_dma_buf_set_active(struct imx_media_dma_buf *buf) +{ + struct imx_media_dma_buf_ring *ring = buf->ring; + unsigned long flags; + + spin_lock_irqsave(&ring->lock, flags); + WARN_ON(buf != &ring->scratch && + buf->state != IMX_MEDIA_BUF_STATUS_QUEUED); + buf->state = IMX_MEDIA_BUF_STATUS_ACTIVE; + spin_unlock_irqrestore(&ring->lock, flags); + + return 0; +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_set_active); + +/* + * find and return the oldest buffer in the queued state. If + * there are none, return the scratch buffer. 
+ */ +struct imx_media_dma_buf * +imx_media_dma_buf_get_next_queued(struct imx_media_dma_buf_ring *ring) +{ + unsigned long flags, oldest_seq = (unsigned long)-1; + struct imx_media_dma_buf *buf = NULL, *scan; + int i; + + spin_lock_irqsave(&ring->lock, flags); + + for (i = 0; i < ring->num_bufs; i++) { + scan = &ring->buf[i]; + if (scan->state != IMX_MEDIA_BUF_STATUS_QUEUED) + continue; + if (scan->seq < oldest_seq) { + buf = scan; + oldest_seq = scan->seq; + } + } + + if (!buf) + buf = &ring->scratch; + + spin_unlock_irqrestore(&ring->lock, flags); + + if (buf != &ring->scratch) + dev_dbg(ring->imxmd->dev, "buf%d [%s -> %s] next\n", + buf->index, ring->src->name, ring->sink->name); + else + dev_dbg(ring->imxmd->dev, "buf-scratch [%s -> %s] next\n", + ring->src->name, ring->sink->name); + + return buf; +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_get_next_queued); + +struct imx_media_dma_buf * +imx_media_dma_buf_get(struct imx_media_dma_buf_ring *ring, int index) +{ + if (index >= ring->num_bufs) + return ERR_PTR(-EINVAL); + return &ring->buf[index]; +} +EXPORT_SYMBOL_GPL(imx_media_dma_buf_get); + +/* form a subdev name given a group id and ipu id */ +void imx_media_grp_id_to_sd_name(char *sd_name, int sz, u32 grp_id, int ipu_id) +{ + int id; + + switch (grp_id) { + case IMX_MEDIA_GRP_ID_CSI0...IMX_MEDIA_GRP_ID_CSI1: + id = (grp_id >> IMX_MEDIA_GRP_ID_CSI_BIT) - 1; + snprintf(sd_name, sz, "ipu%d_csi%d", ipu_id + 1, id); + break; + case IMX_MEDIA_GRP_ID_SMFC0...IMX_MEDIA_GRP_ID_SMFC3: + id = (grp_id >> IMX_MEDIA_GRP_ID_SMFC_BIT) - 1; + snprintf(sd_name, sz, "ipu%d_smfc%d", ipu_id + 1, id); + break; + case IMX_MEDIA_GRP_ID_IC_PRPENC: + snprintf(sd_name, sz, "ipu%d_ic_prpenc", ipu_id + 1); + break; + case IMX_MEDIA_GRP_ID_IC_PRPVF: + snprintf(sd_name, sz, "ipu%d_ic_prpvf", ipu_id + 1); + break; + case IMX_MEDIA_GRP_ID_IC_PP0...IMX_MEDIA_GRP_ID_IC_PP3: + id = (grp_id >> IMX_MEDIA_GRP_ID_IC_PP_BIT) - 1; + snprintf(sd_name, sz, "ipu%d_ic_pp%d", ipu_id + 1, id); + break; + case IMX_MEDIA_GRP_ID_CAMIF0...IMX_MEDIA_GRP_ID_CAMIF3: + id = (grp_id >> IMX_MEDIA_GRP_ID_CAMIF_BIT) - 1; + snprintf(sd_name, sz, "camif%d", id); + break; + default: + break; + } +} +EXPORT_SYMBOL_GPL(imx_media_grp_id_to_sd_name); + +struct imx_media_subdev * +imx_media_find_subdev_by_sd(struct imx_media_dev *imxmd, + struct v4l2_subdev *sd) +{ + struct imx_media_subdev *imxsd; + int i, ret = -ENODEV; + + for (i = 0; i < imxmd->num_subdevs; i++) { + imxsd = &imxmd->subdev[i]; + if (sd == imxsd->sd) { + ret = 0; + break; + } + } + + return ret ? ERR_PTR(ret) : imxsd; +} +EXPORT_SYMBOL_GPL(imx_media_find_subdev_by_sd); + +struct imx_media_subdev * +imx_media_find_subdev_by_id(struct imx_media_dev *imxmd, u32 grp_id) +{ + struct imx_media_subdev *imxsd; + int i, ret = -ENODEV; + + for (i = 0; i < imxmd->num_subdevs; i++) { + imxsd = &imxmd->subdev[i]; + if (imxsd->sd && imxsd->sd->grp_id == grp_id) { + ret = 0; + break; + } + } + + return ret ? ERR_PTR(ret) : imxsd; +} +EXPORT_SYMBOL_GPL(imx_media_find_subdev_by_id); + +/* + * Search for an entity in the current pipeline with given grp_id. + * Called with mdev->graph_mutex held. 
+ */ +static struct media_entity * +find_pipeline_entity(struct imx_media_dev *imxmd, + struct media_entity_graph *graph, + struct media_entity *start_entity, + u32 grp_id) +{ + struct media_entity *entity; + struct v4l2_subdev *sd; + + media_entity_graph_walk_start(graph, start_entity); + + while ((entity = media_entity_graph_walk_next(graph))) { + if (is_media_entity_v4l2_video_device(entity)) + continue; + + sd = media_entity_to_v4l2_subdev(entity); + if (sd->grp_id & grp_id) + return entity; + } + + return NULL; +} + +/* + * Search for an entity in the current pipeline with given grp_id, + * then locate the remote enabled source pad from that entity. + * Called with mdev->graph_mutex held. + */ +static struct media_pad * +find_pipeline_remote_source_pad(struct imx_media_dev *imxmd, + struct media_entity_graph *graph, + struct media_entity *start_entity, + u32 grp_id) +{ + struct media_pad *pad = NULL; + struct media_entity *me; + int i; + + me = find_pipeline_entity(imxmd, graph, start_entity, grp_id); + if (!me) + return NULL; + + /* Find remote source pad */ + for (i = 0; i < me->num_pads; i++) { + struct media_pad *spad = &me->pads[i]; + + if (!(spad->flags & MEDIA_PAD_FL_SINK)) + continue; + pad = media_entity_remote_pad(spad); + if (pad) + return pad; + } + + return NULL; +} + +/* + * Find the mipi-csi2 virtual channel reached from the given + * start entity in the current pipeline. + * Must be called with mdev->graph_mutex held. + */ +int imx_media_find_mipi_csi2_channel(struct imx_media_dev *imxmd, + struct media_entity *start_entity) +{ + struct media_entity_graph graph; + struct v4l2_subdev *sd; + struct media_pad *pad; + int ret; + + ret = media_entity_graph_walk_init(&graph, &imxmd->md); + if (ret) + return ret; + + /* first try to locate the mipi-csi2 from the video mux */ + pad = find_pipeline_remote_source_pad(imxmd, &graph, start_entity, + IMX_MEDIA_GRP_ID_VIDMUX); + /* if couldn't reach it from there, try from a CSI */ + if (!pad) + pad = find_pipeline_remote_source_pad(imxmd, &graph, + start_entity, + IMX_MEDIA_GRP_ID_CSI); + if (pad) { + sd = media_entity_to_v4l2_subdev(pad->entity); + if (sd->grp_id & IMX_MEDIA_GRP_ID_CSI2) { + ret = pad->index - 1; /* found it! */ + dev_dbg(imxmd->dev, "found vc%d from %s\n", + ret, start_entity->name); + goto cleanup; + } + } + + ret = -EPIPE; + +cleanup: + media_entity_graph_walk_cleanup(&graph); + return ret; +} +EXPORT_SYMBOL_GPL(imx_media_find_mipi_csi2_channel); + +/* + * Find a subdev reached from the given start entity in the + * current pipeline. + * Must be called with mdev->graph_mutex held. 
+ */ +struct imx_media_subdev * +imx_media_find_pipeline_subdev(struct imx_media_dev *imxmd, + struct media_entity *start_entity, + u32 grp_id) +{ + struct media_entity_graph graph; + struct imx_media_subdev *imxsd; + struct media_entity *me; + struct v4l2_subdev *sd; + int ret; + + ret = media_entity_graph_walk_init(&graph, &imxmd->md); + if (ret) + return ERR_PTR(ret); + + me = find_pipeline_entity(imxmd, &graph, start_entity, grp_id); + if (!me) { + imxsd = ERR_PTR(-ENODEV); + goto cleanup; + } + + sd = media_entity_to_v4l2_subdev(me); + imxsd = imx_media_find_subdev_by_sd(imxmd, sd); +cleanup: + media_entity_graph_walk_cleanup(&graph); + return imxsd; +} +EXPORT_SYMBOL_GPL(imx_media_find_pipeline_subdev); + +struct imx_media_subdev * +__imx_media_find_sensor(struct imx_media_dev *imxmd, + struct media_entity *start_entity) +{ + return imx_media_find_pipeline_subdev(imxmd, start_entity, + IMX_MEDIA_GRP_ID_SENSOR); +} +EXPORT_SYMBOL_GPL(__imx_media_find_sensor); + +struct imx_media_subdev * +imx_media_find_sensor(struct imx_media_dev *imxmd, + struct media_entity *start_entity) +{ + struct imx_media_subdev *sensor; + + mutex_lock(&imxmd->md.graph_mutex); + sensor = __imx_media_find_sensor(imxmd, start_entity); + mutex_unlock(&imxmd->md.graph_mutex); + + return sensor; +} +EXPORT_SYMBOL_GPL(imx_media_find_sensor); + +/* + * The subdevs have to be powered on/off, and streaming + * enabled/disabled, in a specific sequence. + */ +static const u32 stream_on_seq[] = { + IMX_MEDIA_GRP_ID_IC_PP, + IMX_MEDIA_GRP_ID_IC_PRPVF, + IMX_MEDIA_GRP_ID_IC_PRPENC, + IMX_MEDIA_GRP_ID_SMFC, + IMX_MEDIA_GRP_ID_SENSOR, + IMX_MEDIA_GRP_ID_CSI2, + IMX_MEDIA_GRP_ID_VIDMUX, + IMX_MEDIA_GRP_ID_CSI, +}; + +static const u32 stream_off_seq[] = { + IMX_MEDIA_GRP_ID_IC_PP, + IMX_MEDIA_GRP_ID_IC_PRPVF, + IMX_MEDIA_GRP_ID_IC_PRPENC, + IMX_MEDIA_GRP_ID_SMFC, + IMX_MEDIA_GRP_ID_CSI, + IMX_MEDIA_GRP_ID_VIDMUX, + IMX_MEDIA_GRP_ID_CSI2, + IMX_MEDIA_GRP_ID_SENSOR, +}; + +#define NUM_STREAM_ENTITIES ARRAY_SIZE(stream_on_seq) + +static const u32 power_on_seq[] = { + IMX_MEDIA_GRP_ID_CSI2, + IMX_MEDIA_GRP_ID_SENSOR, + IMX_MEDIA_GRP_ID_VIDMUX, + IMX_MEDIA_GRP_ID_CSI, + IMX_MEDIA_GRP_ID_SMFC, + IMX_MEDIA_GRP_ID_IC_PRPENC, + IMX_MEDIA_GRP_ID_IC_PRPVF, + IMX_MEDIA_GRP_ID_IC_PP, +}; + +static const u32 power_off_seq[] = { + IMX_MEDIA_GRP_ID_IC_PP, + IMX_MEDIA_GRP_ID_IC_PRPVF, + IMX_MEDIA_GRP_ID_IC_PRPENC, + IMX_MEDIA_GRP_ID_SMFC, + IMX_MEDIA_GRP_ID_CSI, + IMX_MEDIA_GRP_ID_VIDMUX, + IMX_MEDIA_GRP_ID_SENSOR, + IMX_MEDIA_GRP_ID_CSI2, +}; + +#define NUM_POWER_ENTITIES ARRAY_SIZE(power_on_seq) + +static int imx_media_set_stream(struct imx_media_dev *imxmd, + struct media_entity *start_entity, + bool on) +{ + struct media_entity_graph graph; + struct media_entity *entity; + struct v4l2_subdev *sd; + int i, ret; + u32 id; + + mutex_lock(&imxmd->md.graph_mutex); + + ret = media_entity_graph_walk_init(&graph, &imxmd->md); + if (ret) + goto unlock; + + for (i = 0; i < NUM_STREAM_ENTITIES; i++) { + id = on ? stream_on_seq[i] : stream_off_seq[i]; + entity = find_pipeline_entity(imxmd, &graph, + start_entity, id); + if (!entity) + continue; + + sd = media_entity_to_v4l2_subdev(entity); + ret = v4l2_subdev_call(sd, video, s_stream, on); + if (ret && ret != -ENOIOCTLCMD) + break; + } + + media_entity_graph_walk_cleanup(&graph); +unlock: + mutex_unlock(&imxmd->md.graph_mutex); + + return (ret && ret != -ENOIOCTLCMD) ? ret : 0; +} + +/* + * Turn current pipeline streaming on/off starting from entity. 
+ */ +int imx_media_pipeline_set_stream(struct imx_media_dev *imxmd, + struct media_entity *entity, + struct media_pipeline *pipe, + bool on) +{ + int ret = 0; + + if (on) { + ret = media_entity_pipeline_start(entity, pipe); + if (ret) + return ret; + ret = imx_media_set_stream(imxmd, entity, true); + if (!ret) + return 0; + /* fall through */ + } + + imx_media_set_stream(imxmd, entity, false); + if (entity->pipe) + media_entity_pipeline_stop(entity); + + return ret; +} +EXPORT_SYMBOL_GPL(imx_media_pipeline_set_stream); + +/* + * Turn current pipeline power on/off starting from start_entity. + * Must be called with mdev->graph_mutex held. + */ +int imx_media_pipeline_set_power(struct imx_media_dev *imxmd, + struct media_entity_graph *graph, + struct media_entity *start_entity, bool on) +{ + struct media_entity *entity; + struct v4l2_subdev *sd; + int i, ret = 0; + u32 id; + + for (i = 0; i < NUM_POWER_ENTITIES; i++) { + id = on ? power_on_seq[i] : power_off_seq[i]; + entity = find_pipeline_entity(imxmd, graph, start_entity, id); + if (!entity) + continue; + + sd = media_entity_to_v4l2_subdev(entity); + + ret = v4l2_subdev_call(sd, core, s_power, on); + if (ret && ret != -ENOIOCTLCMD) + break; + } + + return (ret && ret != -ENOIOCTLCMD) ? ret : 0; +} +EXPORT_SYMBOL_GPL(imx_media_pipeline_set_power); + +/* + * Inherit the v4l2 controls from all entities in a pipeline + * to the given video device. + * Must be called with mdev->graph_mutex held. + */ +int imx_media_inherit_controls(struct imx_media_dev *imxmd, + struct video_device *vfd, + struct media_entity *start_entity) +{ + struct media_entity_graph graph; + struct media_entity *entity; + struct v4l2_subdev *sd; + int ret; + + ret = media_entity_graph_walk_init(&graph, &imxmd->md); + if (ret) + return ret; + + media_entity_graph_walk_start(&graph, start_entity); + + while ((entity = media_entity_graph_walk_next(&graph))) { + if (is_media_entity_v4l2_video_device(entity)) + continue; + + sd = media_entity_to_v4l2_subdev(entity); + + dev_dbg(imxmd->dev, "%s: adding controls from %s\n", + __func__, sd->name); + + ret = v4l2_ctrl_add_handler(vfd->ctrl_handler, + sd->ctrl_handler, + NULL); + if (ret) + break; + } + + media_entity_graph_walk_cleanup(&graph); + return ret; +} +EXPORT_SYMBOL_GPL(imx_media_inherit_controls); + +MODULE_DESCRIPTION("i.MX5/6 v4l2 media controller driver"); +MODULE_AUTHOR("Steve Longerbeam "); +MODULE_LICENSE("GPL"); diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c new file mode 100644 index 0000000..8d22730 --- /dev/null +++ b/drivers/staging/media/imx/imx-media-dev.c @@ -0,0 +1,479 @@ +/* + * V4L2 Media Controller Driver for Freescale i.MX5/6 SOC + * + * Copyright (c) 2016 Mentor Graphics Inc. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include