* [PATCH v2 00/21] Basic i.MX IPUv3 capture support
@ 2016-10-14 17:34 Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 01/21] [media] v4l2-async: move code out of v4l2_async_notifier_register into v4l2_async_test_notify_all Philipp Zabel
                   ` (22 more replies)
  0 siblings, 23 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Hi,

the second round removes the prepare_stream callback and instead lets the
intermediate subdevices propagate s_stream calls to their sources rather
than individually calling s_stream on each subdevice from the bridge driver.
This is similar to how drm bridges recursively call into their next neighbor.
This makes it easier to handle bring-up ordering at the per-link level, as long
as the source preparation can be done at s_power time, and the sink can simply
prepare itself, call s_stream on its source, and then enable itself inside its
own s_stream. Obviously this only works in a generic fashion if all
asynchronous subdevices with both inputs and outputs propagate s_stream to
their source subdevices.
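
For illustration, a sink subdevice following this pattern could look roughly
like the sketch below (not part of the series; it assumes the sink pad is
pad 0, patch 07 implements the same idea for the IPUv3 CSI):

static int sink_subdev_s_stream(struct v4l2_subdev *sd, int enable)
{
	struct v4l2_subdev *source_sd;
	struct media_pad *pad;
	int ret;

	/* Find the source subdevice connected to our sink pad 0 */
	pad = media_entity_remote_pad(&sd->entity.pads[0]);
	if (!pad || !is_media_entity_v4l2_subdev(pad->entity))
		return -ENOLINK;
	source_sd = media_entity_to_v4l2_subdev(pad->entity);

	if (enable) {
		/* prepare this subdevice, then start the source ... */
		ret = v4l2_subdev_call(source_sd, video, s_stream, 1);
		if (ret && ret != -ENOIOCTLCMD)
			return ret;
		/* ... and finally enable this subdevice itself */
	} else {
		/* disable this subdevice first, then stop the source */
		v4l2_subdev_call(source_sd, video, s_stream, 0);
	}

	return 0;
}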

Changes since v1:
 - Propagate field and colorspace in ipucsi_subdev_set_format.
 - Remove v4l2_media_subdev_prepare_stream and v4l2_media_subdev_s_stream,
   let subdevices propagate s_stream calls to their upstream subdevices
   themselves.
 - Various fixes (see individual patches for details)

regards
Philipp

Philipp Zabel (20):
  [media] v4l2-async: move code out of v4l2_async_notifier_register into
    v4l2_async_test_notify_all
  [media] v4l2-async: allow subdevices to add further subdevices to the
    notifier waiting list
  [media] v4l: of: add v4l2_of_subdev_registered
  [media] v4l2-async: add new subdevices to the tail of subdev_list
  [media] imx: Add i.MX SoC wide media device driver
  [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver
  [media] imx: Add i.MX IPUv3 capture driver
  [media] platform: add video-multiplexer subdevice driver
  [media] imx: Add i.MX MIPI CSI-2 subdevice driver
  [media] tc358743: put lanes in STOP state before starting streaming
  ARM: dts: imx6qdl: Add capture-subsystem node
  ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their
    connections
  ARM: dts: imx6qdl: Add MIPI CSI-2 D-PHY compatible and clocks
  ARM: dts: nitrogen6x: Add dtsi for BD_HDMI_MIPI HDMI to MIPI CSI-2
    receiver board
  gpu: ipuv3: add ipu_csi_set_downsize
  [media] imx-ipuv3-csi: support downsizing
  [media] add mux and video interface bridge entity functions
  [media] video-multiplexer: set entity function to mux
  [media] imx: Set i.MX MIPI CSI-2 entity function to bridge
  [media] tc358743: set entity function to video interface bridge

Sascha Hauer (1):
  [media] imx: Add IPUv3 media common code

 .../devicetree/bindings/media/fsl-imx-capture.txt  |   92 ++
 .../bindings/media/video-multiplexer.txt           |   59 ++
 Documentation/media/uapi/mediactl/media-types.rst  |   22 +
 arch/arm/boot/dts/imx6dl.dtsi                      |  187 ++++
 arch/arm/boot/dts/imx6q.dtsi                       |  123 +++
 .../boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi  |   73 ++
 arch/arm/boot/dts/imx6qdl.dtsi                     |   17 +-
 drivers/gpu/ipu-v3/ipu-csi.c                       |   16 +
 drivers/media/i2c/tc358743.c                       |    5 +
 drivers/media/platform/Kconfig                     |   10 +
 drivers/media/platform/Makefile                    |    3 +
 drivers/media/platform/imx/Kconfig                 |   33 +
 drivers/media/platform/imx/Makefile                |    5 +
 drivers/media/platform/imx/imx-ipu-capture.c       | 1015 ++++++++++++++++++++
 drivers/media/platform/imx/imx-ipu.c               |  321 +++++++
 drivers/media/platform/imx/imx-ipu.h               |   43 +
 drivers/media/platform/imx/imx-ipuv3-csi.c         |  578 +++++++++++
 drivers/media/platform/imx/imx-media.c             |  249 +++++
 drivers/media/platform/imx/imx-mipi-csi2.c         |  698 ++++++++++++++
 drivers/media/platform/video-multiplexer.c         |  473 +++++++++
 drivers/media/v4l2-core/v4l2-async.c               |  102 +-
 drivers/media/v4l2-core/v4l2-of.c                  |   69 ++
 include/media/v4l2-async.h                         |   12 +
 include/media/v4l2-of.h                            |   15 +
 include/uapi/linux/media.h                         |    6 +
 include/video/imx-ipu-v3.h                         |    1 +
 26 files changed, 4214 insertions(+), 13 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/media/fsl-imx-capture.txt
 create mode 100644 Documentation/devicetree/bindings/media/video-multiplexer.txt
 create mode 100644 arch/arm/boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi
 create mode 100644 drivers/media/platform/imx/Kconfig
 create mode 100644 drivers/media/platform/imx/Makefile
 create mode 100644 drivers/media/platform/imx/imx-ipu-capture.c
 create mode 100644 drivers/media/platform/imx/imx-ipu.c
 create mode 100644 drivers/media/platform/imx/imx-ipu.h
 create mode 100644 drivers/media/platform/imx/imx-ipuv3-csi.c
 create mode 100644 drivers/media/platform/imx/imx-media.c
 create mode 100644 drivers/media/platform/imx/imx-mipi-csi2.c
 create mode 100644 drivers/media/platform/video-multiplexer.c

-- 
2.9.3



* [PATCH v2 01/21] [media] v4l2-async: move code out of v4l2_async_notifier_register into v4l2_async_test_notify_all
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 02/21] [media] v4l2-async: allow subdevices to add further subdevices to the notifier waiting list Philipp Zabel
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

This will be reused in the following patch to catch already registered,
newly added asynchronous subdevices from v4l2_async_register_subdev.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Add missing v4l2_async_test_notify_all() call.
---
 drivers/media/v4l2-core/v4l2-async.c | 38 ++++++++++++++++++++++--------------
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
index 5bada20..1113df4 100644
--- a/drivers/media/v4l2-core/v4l2-async.c
+++ b/drivers/media/v4l2-core/v4l2-async.c
@@ -134,11 +134,31 @@ static void v4l2_async_cleanup(struct v4l2_subdev *sd)
 	sd->dev = NULL;
 }
 
+static int v4l2_async_test_notify_all(struct v4l2_async_notifier *notifier)
+{
+	struct v4l2_subdev *sd, *tmp;
+
+	list_for_each_entry_safe(sd, tmp, &subdev_list, async_list) {
+		struct v4l2_async_subdev *asd;
+		int ret;
+
+		asd = v4l2_async_belongs(notifier, sd);
+		if (!asd)
+			continue;
+
+		ret = v4l2_async_test_notify(notifier, sd, asd);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
 int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 				 struct v4l2_async_notifier *notifier)
 {
-	struct v4l2_subdev *sd, *tmp;
 	struct v4l2_async_subdev *asd;
+	int ret;
 	int i;
 
 	if (!notifier->num_subdevs || notifier->num_subdevs > V4L2_MAX_SUBDEVS)
@@ -171,23 +191,11 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 	/* Keep also completed notifiers on the list */
 	list_add(&notifier->list, &notifier_list);
 
-	list_for_each_entry_safe(sd, tmp, &subdev_list, async_list) {
-		int ret;
-
-		asd = v4l2_async_belongs(notifier, sd);
-		if (!asd)
-			continue;
-
-		ret = v4l2_async_test_notify(notifier, sd, asd);
-		if (ret < 0) {
-			mutex_unlock(&list_lock);
-			return ret;
-		}
-	}
+	ret = v4l2_async_test_notify_all(notifier);
 
 	mutex_unlock(&list_lock);
 
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(v4l2_async_notifier_register);
 
-- 
2.9.3



* [PATCH v2 02/21] [media] v4l2-async: allow subdevices to add further subdevices to the notifier waiting list
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 01/21] [media] v4l2-async: move code out of v4l2_async_notifier_register into v4l2_async_test_notify_all Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 03/21] [media] v4l: of: add v4l2_of_subdev_registered Philipp Zabel
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Currently the v4l2_async_notifier needs to be given a list of matches
for all expected subdevices on creation. When chaining subdevices that
are asynchronously probed via device tree, the bridge device that sets
up the notifier does not know the complete list of subdevices, as it
can only parse its own device tree node to obtain information about
the nearest neighbor subdevices.
To support indirectly connected subdevices, it must be possible to amend
the existing notifier waiting list with the newly found neighbor subdevices
of each subdevice as it is registered.

This can be achieved by adding new v4l2_async_subdev matches to the
notifier waiting list during the v4l2_subdev registered callback, which
is called under the list lock from either v4l2_async_register_subdev or
v4l2_async_notifier_register. For this purpose a new exported function
__v4l2_async_notifier_add_subdev is added.
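
As a usage sketch (not part of this patch; patch 07 does the equivalent for
the IPUv3 CSI), a subdevice's registered callback, which runs with the list
lock held, could add the source connected to one of its sink endpoints like
this ("endpoint" being the subdevice's own endpoint node, parsed elsewhere):

static int my_add_upstream_subdev(struct v4l2_subdev *sd,
				  struct device_node *endpoint)
{
	struct device_node *source;
	struct v4l2_async_subdev *asd;

	/* The remote port parent of our sink endpoint is the source subdev */
	source = of_graph_get_remote_port_parent(endpoint);
	if (!source)
		return 0;

	asd = devm_kzalloc(sd->dev, sizeof(*asd), GFP_KERNEL);
	if (!asd) {
		of_node_put(source);
		return -ENOMEM;
	}

	asd->match_type = V4L2_ASYNC_MATCH_OF;
	asd->match.of.node = source;

	/* list_lock is already held in the registered callback */
	return __v4l2_async_notifier_add_subdev(sd->notifier, asd);
}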

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Don't add more async subdevs than V4L2_MAX_SUBDEVS.
 - Fix v4l2_async_unregister, allocate a correctly sized device
   cache without modifying notifier->num_subdevs.
---
 drivers/media/v4l2-core/v4l2-async.c | 84 +++++++++++++++++++++++++++++++++---
 include/media/v4l2-async.h           | 12 ++++++
 2 files changed, 89 insertions(+), 7 deletions(-)

diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
index 1113df4..3ce6533 100644
--- a/drivers/media/v4l2-core/v4l2-async.c
+++ b/drivers/media/v4l2-core/v4l2-async.c
@@ -109,6 +109,7 @@ static int v4l2_async_test_notify(struct v4l2_async_notifier *notifier,
 		if (ret < 0)
 			return ret;
 	}
+
 	/* Move from the global subdevice list to notifier's done */
 	list_move(&sd->async_list, &notifier->done);
 
@@ -158,7 +159,7 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 				 struct v4l2_async_notifier *notifier)
 {
 	struct v4l2_async_subdev *asd;
-	int ret;
+	struct list_head *tail;
 	int i;
 
 	if (!notifier->num_subdevs || notifier->num_subdevs > V4L2_MAX_SUBDEVS)
@@ -191,33 +192,91 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 	/* Keep also completed notifiers on the list */
 	list_add(&notifier->list, &notifier_list);
 
-	ret = v4l2_async_test_notify_all(notifier);
+	do {
+		int ret;
+
+		tail = notifier->waiting.prev;
+
+		ret = v4l2_async_test_notify_all(notifier);
+		if (ret < 0) {
+			mutex_unlock(&list_lock);
+			return ret;
+		}
+
+		/*
+		 * If entries were added to the notifier waiting list, check
+		 * again if the corresponding subdevices are already available.
+		 */
+	} while (tail != notifier->waiting.prev);
 
 	mutex_unlock(&list_lock);
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(v4l2_async_notifier_register);
 
+int __v4l2_async_notifier_add_subdev(struct v4l2_async_notifier *notifier,
+				     struct v4l2_async_subdev *asd)
+{
+	struct v4l2_async_subdev *tmp_asd;
+
+	lockdep_assert_held(&list_lock);
+
+	if (notifier->num_subdevs >= V4L2_MAX_SUBDEVS)
+		return -EINVAL;
+
+	if (asd->match_type != V4L2_ASYNC_MATCH_OF)
+		return -EINVAL;
+
+	/*
+	 * First check if an asd matching the same node is already on the
+	 * waiting or done lists. This can happen if a subdevice with multiple
+	 * outputs is added by all of its downstream subdevices.
+	 */
+	list_for_each_entry(tmp_asd, &notifier->waiting, list)
+		if (tmp_asd->match.of.node == asd->match.of.node)
+			return 0;
+	list_for_each_entry(tmp_asd, &notifier->done, list)
+		if (tmp_asd->match.of.node == asd->match.of.node)
+			return 0;
+
+	/*
+	 * Add the new async subdev to the notifier waiting list, so
+	 * v4l2_async_belongs may use it to compare against entries in
+	 * subdev_list.
+	 * In case the subdev matching asd has already been passed in the
+	 * subdev_list walk in v4l2_async_notifier_register, or if
+	 * we are called from v4l2_async_register_subdev, the subdev_list
+	 * will have to be walked again.
+	 */
+	list_add_tail(&asd->list, &notifier->waiting);
+
+	return 0;
+}
+
 void v4l2_async_notifier_unregister(struct v4l2_async_notifier *notifier)
 {
 	struct v4l2_subdev *sd, *tmp;
-	unsigned int notif_n_subdev = notifier->num_subdevs;
-	unsigned int n_subdev = min(notif_n_subdev, V4L2_MAX_SUBDEVS);
+	unsigned int notif_n_subdev = 0;
+	unsigned int n_subdev;
 	struct device **dev;
 	int i = 0;
 
 	if (!notifier->v4l2_dev)
 		return;
 
+	mutex_lock(&list_lock);
+
+	list_for_each_entry(sd, &notifier->done, async_list)
+		notif_n_subdev++;
+	n_subdev = min(notif_n_subdev, V4L2_MAX_SUBDEVS);
+
 	dev = kmalloc(n_subdev * sizeof(*dev), GFP_KERNEL);
 	if (!dev) {
 		dev_err(notifier->v4l2_dev->dev,
 			"Failed to allocate device cache!\n");
 	}
 
-	mutex_lock(&list_lock);
-
 	list_del(&notifier->list);
 
 	list_for_each_entry_safe(sd, tmp, &notifier->done, async_list) {
@@ -296,8 +355,19 @@ int v4l2_async_register_subdev(struct v4l2_subdev *sd)
 	list_for_each_entry(notifier, &notifier_list, list) {
 		struct v4l2_async_subdev *asd = v4l2_async_belongs(notifier, sd);
 		if (asd) {
+			struct list_head *tail = notifier->waiting.prev;
 			int ret = v4l2_async_test_notify(notifier, sd, asd);
+
+			/*
+			 * If entries were added to the notifier waiting list,
+			 * check if the corresponding subdevices are already
+			 * available.
+			 */
+			if (tail != notifier->waiting.prev)
+				ret = v4l2_async_test_notify_all(notifier);
+
 			mutex_unlock(&list_lock);
+
 			return ret;
 		}
 	}
diff --git a/include/media/v4l2-async.h b/include/media/v4l2-async.h
index 8e2a236..e4e4b11 100644
--- a/include/media/v4l2-async.h
+++ b/include/media/v4l2-async.h
@@ -114,6 +114,18 @@ int v4l2_async_notifier_register(struct v4l2_device *v4l2_dev,
 				 struct v4l2_async_notifier *notifier);
 
 /**
+ * __v4l2_async_notifier_add_subdev - adds a subdevice to the notifier waitlist
+ *
+ * @notifier: notifier the calling subdev is bound to
+ * @asd: asynchronous subdev match
+ *
+ * To be called from inside a subdevice's registered_async callback to add
+ * additional subdevices to the notifier waiting list.
+ */
+int __v4l2_async_notifier_add_subdev(struct v4l2_async_notifier *notifier,
+				     struct v4l2_async_subdev *asd);
+
+/**
  * v4l2_async_notifier_unregister - unregisters a subdevice asynchronous notifier
  *
  * @notifier: pointer to &struct v4l2_async_notifier
-- 
2.9.3



* [PATCH v2 03/21] [media] v4l: of: add v4l2_of_subdev_registered
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 01/21] [media] v4l2-async: move code out of v4l2_async_notifier_register into v4l2_async_test_notify_all Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 02/21] [media] v4l2-async: allow subdevices to add further subdevices to the notifier waiting list Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 04/21] [media] v4l2-async: add new subdevices to the tail of subdev_list Philipp Zabel
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Provide a default registered callback for device tree probed subdevices
that use OF graph bindings to add still missing source subdevices to
the async notifier waiting list.
This is only necessary for subdevices that have input ports to which
other subdevices are connected that are not initially known to the
master/bridge device when it sets up the notifier.
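
As an illustrative sketch (hypothetical driver code, not part of this patch),
a subdevice driver whose upstream neighbors are described purely by OF graph
links only needs to hook up the default callback before async registration:

static const struct v4l2_subdev_internal_ops my_internal_ops = {
	/* adds the OF graph sources on all sink pads to the waiting list */
	.registered = v4l2_of_subdev_registered,
};

static int my_register_subdev(struct v4l2_subdev *sd, struct device *dev)
{
	sd->internal_ops = &my_internal_ops;
	sd->of_node = dev->of_node;

	return v4l2_async_register_subdev(sd);
}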

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Alphabetical #include order
 - Compile v4l2_of_subdev_registered only if CONFIG_MEDIA_CONTROLLER is set.
---
 drivers/media/v4l2-core/v4l2-of.c | 69 +++++++++++++++++++++++++++++++++++++++
 include/media/v4l2-of.h           | 15 +++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/media/v4l2-core/v4l2-of.c b/drivers/media/v4l2-core/v4l2-of.c
index 93b3368..73c5643 100644
--- a/drivers/media/v4l2-core/v4l2-of.c
+++ b/drivers/media/v4l2-core/v4l2-of.c
@@ -18,6 +18,7 @@
 #include <linux/string.h>
 #include <linux/types.h>
 
+#include <media/v4l2-device.h>
 #include <media/v4l2-of.h>
 
 static int v4l2_of_parse_csi_bus(const struct device_node *node,
@@ -314,3 +315,71 @@ void v4l2_of_put_link(struct v4l2_of_link *link)
 	of_node_put(link->remote_node);
 }
 EXPORT_SYMBOL(v4l2_of_put_link);
+
+struct v4l2_subdev *v4l2_find_subdev_by_node(struct v4l2_device *v4l2_dev,
+					     struct device_node *node)
+{
+	struct v4l2_subdev *sd;
+
+	list_for_each_entry(sd, &v4l2_dev->subdevs, list)
+		if (sd->of_node == node)
+			return sd;
+
+	return NULL;
+}
+EXPORT_SYMBOL(v4l2_find_subdev_by_node);
+
+#ifdef CONFIG_MEDIA_CONTROLLER
+/**
+ * v4l2_of_subdev_registered() - default OF probed subdev registered callback
+ * @subdev: subdevice with initialized entities
+ *
+ * Parse all OF graph endpoints corresponding to the subdev's entity input pads
+ * and add the remote subdevs to the async subdev notifier.
+ */
+int v4l2_of_subdev_registered(struct v4l2_subdev *sd)
+{
+	struct device_node *ep;
+
+	for_each_endpoint_of_node(sd->of_node, ep) {
+		struct v4l2_of_link link;
+		struct media_entity *entity;
+		unsigned int pad;
+		int ret;
+
+		ret = v4l2_of_parse_link(ep, &link);
+		if (ret)
+			continue;
+
+		/*
+		 * Assume 1:1 correspondence between OF node and entity,
+		 * and between OF port numbers and pad indices.
+		 */
+		entity = &sd->entity;
+		pad = link.local_port;
+
+		if (pad >= entity->num_pads)
+			return -EINVAL;
+
+		/* Add source subdevs to async notifier */
+		if (entity->pads[pad].flags & MEDIA_PAD_FL_SINK) {
+			struct v4l2_async_subdev *asd;
+
+			asd = devm_kzalloc(sd->dev, sizeof(*asd), GFP_KERNEL);
+			if (!asd) {
+				v4l2_of_put_link(&link);
+				return -ENOMEM;
+			}
+
+			asd->match_type = V4L2_ASYNC_MATCH_OF;
+			asd->match.of.node = link.remote_node;
+
+			__v4l2_async_notifier_add_subdev(sd->notifier, asd);
+		}
+
+		v4l2_of_put_link(&link);
+	}
+
+	return 0;
+}
+#endif /* CONFIG_MEDIA_CONTROLLER */
diff --git a/include/media/v4l2-of.h b/include/media/v4l2-of.h
index 4dc34b2..f99a04b 100644
--- a/include/media/v4l2-of.h
+++ b/include/media/v4l2-of.h
@@ -22,6 +22,8 @@
 #include <media/v4l2-mediabus.h>
 
 struct device_node;
+struct v4l2_device;
+struct v4l2_subdev;
 
 /**
  * struct v4l2_of_bus_mipi_csi2 - MIPI CSI-2 bus data structure
@@ -95,6 +97,8 @@ void v4l2_of_free_endpoint(struct v4l2_of_endpoint *endpoint);
 int v4l2_of_parse_link(const struct device_node *node,
 		       struct v4l2_of_link *link);
 void v4l2_of_put_link(struct v4l2_of_link *link);
+struct v4l2_subdev *v4l2_find_subdev_by_node(struct v4l2_device *v4l2_dev,
+					     struct device_node *node);
 #else /* CONFIG_OF */
 
 static inline int v4l2_of_parse_endpoint(const struct device_node *node,
@@ -123,6 +127,17 @@ static inline void v4l2_of_put_link(struct v4l2_of_link *link)
 {
 }
 
+static inline struct v4l2_subdev *v4l2_find_subdev_by_node(
+		struct v4l2_device *v4l2_dev, struct device_node *node)
+{
+	return NULL;
+}
 #endif /* CONFIG_OF */
 
+#if defined(CONFIG_OF) && defined(CONFIG_MEDIA_CONTROLLER)
+int v4l2_of_subdev_registered(struct v4l2_subdev *sd);
+#else
+#define v4l2_of_subdev_registered NULL
+#endif /* CONFIG_OF && CONFIG_MEDIA_CONTROLLER */
+
 #endif /* _V4L2_OF_H */
-- 
2.9.3



* [PATCH v2 04/21] [media] v4l2-async: add new subdevices to the tail of subdev_list
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (2 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 03/21] [media] v4l: of: add v4l2_of_subdev_registered Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 05/21] [media] imx: Add i.MX SoC wide media device driver Philipp Zabel
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

That way the asynchronous notifier will pick them up in the order they
were registered.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/media/v4l2-core/v4l2-async.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
index 3ce6533..4ab1de0 100644
--- a/drivers/media/v4l2-core/v4l2-async.c
+++ b/drivers/media/v4l2-core/v4l2-async.c
@@ -373,7 +373,7 @@ int v4l2_async_register_subdev(struct v4l2_subdev *sd)
 	}
 
 	/* None matched, wait for hot-plugging */
-	list_add(&sd->async_list, &subdev_list);
+	list_add_tail(&sd->async_list, &subdev_list);
 
 	mutex_unlock(&list_lock);
 
-- 
2.9.3



* [PATCH v2 05/21] [media] imx: Add i.MX SoC wide media device driver
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (3 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 04/21] [media] v4l2-async: add new subdevices to the tail of subdev_list Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 06/21] [media] imx: Add IPUv3 media common code Philipp Zabel
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

This driver registers a single, SoC wide media device, which all entities
in the media graph can be registered to. It is probed from device tree
using a capture-subsystem node, listing the IPUv3 input ports, similarly
to the already existing display-subsystem node that lists the IPUv3 output
ports.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 .../devicetree/bindings/media/fsl-imx-capture.txt  |  25 +++
 drivers/media/platform/Kconfig                     |   2 +
 drivers/media/platform/Makefile                    |   1 +
 drivers/media/platform/imx/Kconfig                 |   7 +
 drivers/media/platform/imx/Makefile                |   1 +
 drivers/media/platform/imx/imx-media.c             | 249 +++++++++++++++++++++
 6 files changed, 285 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/media/fsl-imx-capture.txt
 create mode 100644 drivers/media/platform/imx/Kconfig
 create mode 100644 drivers/media/platform/imx/Makefile
 create mode 100644 drivers/media/platform/imx/imx-media.c

diff --git a/Documentation/devicetree/bindings/media/fsl-imx-capture.txt b/Documentation/devicetree/bindings/media/fsl-imx-capture.txt
new file mode 100644
index 0000000..5805331
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/fsl-imx-capture.txt
@@ -0,0 +1,25 @@
+Freescale i.MX IPUv3 capture subsystem
+======================================
+
+The i.MX5/6 IPUv3 capture subsystem consists of one or two IPUs and all
+external subdevices connected to the IPU CSI input ports. On i.MX5, only
+external subdevices are connected. On i.MX6, there are additional SoC
+internal multiplexers and a MIPI CSI-2 bridge connected to the CSI input
+ports via of_graph bindings.
+
+On i.MX6 variants with two IPUs, either a single capture subsystem node may be
+defined, containing all CSI ports, or two separate capture subsystem nodes may
+be defined, each containing the CSI ports of a single IPU. In the latter case
+there must be no external of_graph links between the two subsystems.
+
+Required properties:
+- compatible: should be "fsl,imx-capture-subsystem"
+- ports: should contain a list of phandles pointing to the capture interface
+  ports of IPU devices
+
+Example:
+
+capture-subsystem {
+        compatible = "fsl,imx-capture-subsystem";
+        ports = <&ipu1_csi0>, <&ipu1_csi1>, <&ipu2_csi0>, <&ipu2_csi1>;
+};
diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index ce4a96f..105bf57 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -29,6 +29,8 @@ config VIDEO_VIA_CAMERA
 
 source "drivers/media/platform/davinci/Kconfig"
 
+source "drivers/media/platform/imx/Kconfig"
+
 source "drivers/media/platform/omap/Kconfig"
 
 source "drivers/media/platform/blackfin/Kconfig"
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 40b18d1..f7f9008 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -51,6 +51,7 @@ obj-$(CONFIG_VIDEO_RENESAS_FCP) 	+= rcar-fcp.o
 obj-$(CONFIG_VIDEO_RENESAS_JPU) 	+= rcar_jpu.o
 obj-$(CONFIG_VIDEO_RENESAS_VSP1)	+= vsp1/
 
+obj-y	+= imx/
 obj-y	+= omap/
 
 obj-$(CONFIG_VIDEO_AM437X_VPFE)		+= am437x/
diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
new file mode 100644
index 0000000..3bd699c
--- /dev/null
+++ b/drivers/media/platform/imx/Kconfig
@@ -0,0 +1,7 @@
+config MEDIA_IMX
+	tristate "Multimedia Support for Freescale i.MX"
+	depends on MEDIA_CONTROLLER
+	default y if IMX_IPUV3_CORE
+	---help---
+	  This driver provides a SoC wide media controller device that all
+	  multimedia components in i.MX5 and i.MX6 SoCs can register with.
diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
new file mode 100644
index 0000000..74bed76
--- /dev/null
+++ b/drivers/media/platform/imx/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MEDIA_IMX)			+= imx-media.o
diff --git a/drivers/media/platform/imx/imx-media.c b/drivers/media/platform/imx/imx-media.c
new file mode 100644
index 0000000..1f1fab4
--- /dev/null
+++ b/drivers/media/platform/imx/imx-media.c
@@ -0,0 +1,249 @@
+/*
+ * i.MX V4L2 Capture Driver
+ *
+ * Copyright (C) 2016, Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+#include <media/media-device.h>
+#include <media/v4l2-async.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-of.h>
+
+#define IMX_MEDIA_MAX_PORTS	4
+
+struct imx_media {
+	struct media_device mdev;
+	struct v4l2_device v4l2_dev;
+	struct v4l2_async_notifier subdev_notifier;
+	struct v4l2_async_subdev subdevs[IMX_MEDIA_MAX_PORTS];
+};
+
+static int v4l2_of_create_pad_link(struct v4l2_subdev *sd,
+				   struct v4l2_of_link *link)
+{
+	struct v4l2_subdev *remote_sd, *src_sd, *sink_sd;
+	struct media_link *mlink;
+	int src_port, sink_port;
+
+	if (link->local_port >= sd->entity.num_pads)
+		return -EINVAL;
+
+	remote_sd = v4l2_find_subdev_by_node(sd->v4l2_dev, link->remote_node);
+	if (!remote_sd)
+		return 0;
+
+	if (sd->entity.pads[link->local_port].flags & MEDIA_PAD_FL_SINK) {
+		src_sd = remote_sd;
+		src_port = link->remote_port;
+		sink_sd = sd;
+		sink_port = link->local_port;
+	} else {
+		src_sd = sd;
+		src_port = link->local_port;
+		sink_sd = remote_sd;
+		sink_port = link->remote_port;
+	}
+
+	mlink = media_entity_find_link(&src_sd->entity.pads[src_port],
+				       &sink_sd->entity.pads[sink_port]);
+	if (mlink)
+		return 0;
+
+	return media_create_pad_link(&src_sd->entity, src_port,
+			&sink_sd->entity, sink_port, 0);
+}
+
+static int imx_media_complete(struct v4l2_async_notifier *notifier)
+{
+	struct imx_media *im = container_of(notifier, struct imx_media,
+					    subdev_notifier);
+	struct media_device *mdev = &im->mdev;
+	struct v4l2_subdev *sd;
+	struct device_node *ep;
+
+	mutex_lock(&mdev->graph_mutex);
+
+	/*
+	 * Link all subdevices
+	 */
+	list_for_each_entry(sd, &notifier->done, async_list) {
+		struct v4l2_of_link link;
+		int ret;
+
+		/*
+		 * The IPU port nodes 0 and 1 correspond to the CSI subdevices.
+		 * We can't use for_each_endpoint_of_node here, which would
+		 * iterate over all endpoints of the IPU, including output ones.
+		 */
+		if (strcmp(sd->of_node->name, "port") == 0) {
+			/* There should be one endpoint in the CSI port */
+			ep = of_get_next_child(sd->of_node, NULL);
+			if (!ep)
+				continue;
+
+			ret = v4l2_of_parse_link(ep, &link);
+			if (ret) {
+				of_node_put(ep);
+				continue;
+			}
+
+			/*
+			 * The IPU port id in the device tree does not
+			 * correspond to the CSI pad id. Always connect
+			 * the source to the CSI input pad (pad 0).
+			 */
+			link.local_port = 0;
+
+			v4l2_of_create_pad_link(sd, &link);
+
+			v4l2_of_put_link(&link);
+
+			continue;
+		}
+
+		/*
+		 * TODO: for now assume that all subdevices other than CSI have
+		 * a 1:1 relationship between device_node and v4l2_subdev and
+		 * between of_graph port number and media_entity pad index.
+		 */
+		for_each_endpoint_of_node(sd->of_node, ep) {
+			ret = v4l2_of_parse_link(ep, &link);
+			if (ret)
+				continue;
+
+			v4l2_of_create_pad_link(sd, &link);
+
+			v4l2_of_put_link(&link);
+		}
+	}
+
+	mutex_unlock(&mdev->graph_mutex);
+
+	return v4l2_device_register_subdev_nodes(&im->v4l2_dev);
+}
+
+static void imx_media_register_notifier(struct imx_media *im)
+{
+	struct v4l2_async_notifier *notifier = &im->subdev_notifier;
+	struct v4l2_async_subdev *asd = &im->subdevs[0];
+	struct device_node *port;
+	int i;
+
+	/*
+	 * ports correspond to the CSI subdevices that terminate the media
+	 * pipelines.
+	 */
+	for (i = 0; i < IMX_MEDIA_MAX_PORTS; i++) {
+		port = of_parse_phandle(im->mdev.dev->of_node, "ports", i);
+		if (!port)
+			break;
+
+		if (!of_device_is_available(port->parent)) {
+			of_node_put(port);
+			continue;
+		}
+
+		asd[i].match_type = V4L2_ASYNC_MATCH_OF;
+		asd[i].match.of.node = port;
+		of_node_put(port);
+	}
+	notifier->num_subdevs = i;
+	notifier->subdevs = devm_kcalloc(im->mdev.dev, i,
+					 sizeof(*notifier->subdevs),
+					 GFP_KERNEL);
+	for (i = 0; i < notifier->num_subdevs; i++)
+		notifier->subdevs[i] = &im->subdevs[i];
+
+	notifier->complete = imx_media_complete;
+
+	v4l2_async_notifier_register(&im->v4l2_dev, notifier);
+}
+
+static int imx_media_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct media_device *mdev;
+	struct imx_media *im;
+	int ret;
+
+	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+	if (ret)
+		return ret;
+
+	im = devm_kzalloc(dev, sizeof(*im), GFP_KERNEL);
+	if (!im)
+		return -ENOMEM;
+
+	mdev = &im->mdev;
+	mdev->dev = dev;
+	strlcpy(mdev->model, "i.MX IPUv3", sizeof(mdev->model));
+	media_device_init(mdev);
+
+	im->v4l2_dev.mdev = mdev;
+	ret = v4l2_device_register(dev, &im->v4l2_dev);
+	if (ret) {
+		dev_err(dev, "Failed to register v4l2 device: %d\n", ret);
+		return ret;
+	}
+
+	imx_media_register_notifier(im);
+
+	ret = media_device_register(mdev);
+	if (ret) {
+		dev_err(dev, "Failed to register media controller device: %d\n",
+			ret);
+		goto err;
+	}
+
+	platform_set_drvdata(pdev, mdev);
+
+	return 0;
+
+err:
+	media_device_cleanup(mdev);
+	return ret;
+}
+
+static int imx_media_remove(struct platform_device *pdev)
+{
+	struct media_device *mdev = platform_get_drvdata(pdev);
+	struct imx_media *im = container_of(mdev, struct imx_media, mdev);
+
+	media_device_unregister(mdev);
+	v4l2_device_unregister(&im->v4l2_dev);
+	media_device_cleanup(mdev);
+
+	return 0;
+}
+
+static const struct of_device_id imx_media_dt_ids[] = {
+	{ .compatible = "fsl,imx-capture-subsystem", },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, imx_media_dt_ids);
+
+static struct platform_driver imx_media_driver = {
+	.probe = imx_media_probe,
+	.remove = imx_media_remove,
+	.driver = {
+		.name = "imx-media",
+		.of_match_table = imx_media_dt_ids,
+	},
+};
+module_platform_driver(imx_media_driver);
+
+MODULE_AUTHOR("Sascha Hauer <s.hauer@pengutronix.de>");
+MODULE_AUTHOR("Philipp Zabel <p.zabel@pengutronix.de>");
+MODULE_DESCRIPTION("i.MX SoC wide media device");
+MODULE_LICENSE("GPL");
-- 
2.9.3



* [PATCH v2 06/21] [media] imx: Add IPUv3 media common code
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (4 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 05/21] [media] imx: Add i.MX SoC wide media device driver Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 07/21] [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver Philipp Zabel
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Sascha Hauer, Lucas Stach, Philipp Zabel

From: Sascha Hauer <s.hauer@pengutronix.de>

Add video4linux API routines common to drivers for units that
accept or provide video data via the i.MX IPU IDMAC channels,
such as capture, mem2mem scaler or deinterlacer drivers.
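
As a rough usage sketch (hypothetical capture driver code, not part of this
patch), the helpers plug directly into a video device's ioctl ops, with thin
wrappers wherever the driver's current pixel format has to be passed in:

struct my_capture {			/* hypothetical per-device state */
	struct v4l2_pix_format pix;	/* currently configured format */
};

static int my_g_fmt_vid_cap(struct file *file, void *fh, struct v4l2_format *f)
{
	struct my_capture *priv = video_drvdata(file);

	return ipu_g_fmt(f, &priv->pix);
}

static int my_s_fmt_vid_cap(struct file *file, void *fh, struct v4l2_format *f)
{
	struct my_capture *priv = video_drvdata(file);

	return ipu_s_fmt(file, fh, f, &priv->pix);
}

static const struct v4l2_ioctl_ops my_ioctl_ops = {
	.vidioc_enum_fmt_vid_cap	= ipu_enum_fmt,
	.vidioc_try_fmt_vid_cap		= ipu_try_fmt,
	.vidioc_g_fmt_vid_cap		= my_g_fmt_vid_cap,
	.vidioc_s_fmt_vid_cap		= my_s_fmt_vid_cap,
	.vidioc_enum_framesizes		= ipu_enum_framesizes,
	/* ... */
};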

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/media/platform/imx/Kconfig   |   3 +
 drivers/media/platform/imx/Makefile  |   1 +
 drivers/media/platform/imx/imx-ipu.c | 321 +++++++++++++++++++++++++++++++++++
 drivers/media/platform/imx/imx-ipu.h |  34 ++++
 4 files changed, 359 insertions(+)
 create mode 100644 drivers/media/platform/imx/imx-ipu.c
 create mode 100644 drivers/media/platform/imx/imx-ipu.h

diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
index 3bd699c..1662bb0b 100644
--- a/drivers/media/platform/imx/Kconfig
+++ b/drivers/media/platform/imx/Kconfig
@@ -5,3 +5,6 @@ config MEDIA_IMX
 	---help---
 	  This driver provides a SoC wide media controller device that all
 	  multimedia components in i.MX5 and i.MX6 SoCs can register with.
+
+config VIDEO_IMX_IPU_COMMON
+	tristate
diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
index 74bed76..0ba601a 100644
--- a/drivers/media/platform/imx/Makefile
+++ b/drivers/media/platform/imx/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_MEDIA_IMX)			+= imx-media.o
+obj-$(CONFIG_VIDEO_IMX_IPU_COMMON)	+= imx-ipu.o
diff --git a/drivers/media/platform/imx/imx-ipu.c b/drivers/media/platform/imx/imx-ipu.c
new file mode 100644
index 0000000..da1deb0
--- /dev/null
+++ b/drivers/media/platform/imx/imx-ipu.c
@@ -0,0 +1,321 @@
+/*
+ * i.MX IPUv3 common v4l2 support
+ *
+ * Copyright (C) 2011 Pengutronix, Sascha Hauer <kernel@pengutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <linux/module.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-ioctl.h>
+
+#include "imx-ipu.h"
+
+/*
+ * These formats are in order of preference: interleaved YUV first,
+ * because those are the most bandwidth efficient, followed by
+ * chroma-interleaved formats, and planar formats last.
+ * In each category, YUV 4:2:0 may be preferable to 4:2:2 for bandwidth
+ * reasons, if the IDMAC channel supports double read/write reduction
+ * (all write channels, VDIC read channels).
+ */
+static struct ipu_fmt ipu_fmt_yuv[] = {
+	{
+		.fourcc = V4L2_PIX_FMT_YUYV,
+		.bytes_per_pixel = 2,
+	}, {
+		.fourcc = V4L2_PIX_FMT_UYVY,
+		.bytes_per_pixel = 2,
+	}, {
+		.fourcc = V4L2_PIX_FMT_NV12,
+		.bytes_per_pixel = 1,
+	}, {
+		.fourcc = V4L2_PIX_FMT_NV16,
+		.bytes_per_pixel = 1,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YUV420,
+		.bytes_per_pixel = 1,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YVU420,
+		.bytes_per_pixel = 1,
+	}, {
+		.fourcc = V4L2_PIX_FMT_YUV422P,
+		.bytes_per_pixel = 1,
+	},
+};
+
+static struct ipu_fmt ipu_fmt_rgb[] = {
+	{
+		.fourcc = V4L2_PIX_FMT_RGB32,
+		.bytes_per_pixel = 4,
+	}, {
+		.fourcc = V4L2_PIX_FMT_RGB24,
+		.bytes_per_pixel = 3,
+	}, {
+		.fourcc = V4L2_PIX_FMT_BGR24,
+		.bytes_per_pixel = 3,
+	}, {
+		.fourcc = V4L2_PIX_FMT_RGB565,
+		.bytes_per_pixel = 2,
+	},
+	{
+		.fourcc = V4L2_PIX_FMT_BGR32,
+		.bytes_per_pixel = 4,
+	},
+};
+
+struct ipu_fmt *ipu_find_fmt_yuv(unsigned int pixelformat)
+{
+	struct ipu_fmt *fmt;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(ipu_fmt_yuv); i++) {
+		fmt = &ipu_fmt_yuv[i];
+		if (fmt->fourcc == pixelformat)
+			return fmt;
+	}
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(ipu_find_fmt_yuv);
+
+struct ipu_fmt *ipu_find_fmt_rgb(unsigned int pixelformat)
+{
+	struct ipu_fmt *fmt;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(ipu_fmt_rgb); i++) {
+		fmt = &ipu_fmt_rgb[i];
+		if (fmt->fourcc == pixelformat)
+			return fmt;
+	}
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(ipu_find_fmt_rgb);
+
+static struct ipu_fmt *ipu_find_fmt(unsigned long pixelformat)
+{
+	struct ipu_fmt *fmt;
+
+	fmt = ipu_find_fmt_yuv(pixelformat);
+	if (fmt)
+		return fmt;
+	fmt = ipu_find_fmt_rgb(pixelformat);
+
+	return fmt;
+}
+EXPORT_SYMBOL_GPL(ipu_find_fmt);
+
+int ipu_try_fmt(struct file *file, void *fh,
+		struct v4l2_format *f)
+{
+	struct ipu_fmt *fmt;
+
+	v4l_bound_align_image(&f->fmt.pix.width, 8, 4096, 2,
+			      &f->fmt.pix.height, 2, 4096, 1, 0);
+
+	f->fmt.pix.field = V4L2_FIELD_NONE;
+
+	fmt = ipu_find_fmt(f->fmt.pix.pixelformat);
+	if (!fmt)
+		return -EINVAL;
+
+	f->fmt.pix.bytesperline = f->fmt.pix.width * fmt->bytes_per_pixel;
+	f->fmt.pix.sizeimage = f->fmt.pix.bytesperline * f->fmt.pix.height;
+	if (fmt->fourcc == V4L2_PIX_FMT_YUV420 ||
+	    fmt->fourcc == V4L2_PIX_FMT_YVU420 ||
+	    fmt->fourcc == V4L2_PIX_FMT_NV12)
+		f->fmt.pix.sizeimage = f->fmt.pix.sizeimage * 3 / 2;
+	else if (fmt->fourcc == V4L2_PIX_FMT_YUV422P ||
+		 fmt->fourcc == V4L2_PIX_FMT_NV16)
+		f->fmt.pix.sizeimage *= 2;
+
+	switch (f->fmt.pix.pixelformat) {
+	case V4L2_PIX_FMT_YUV420:
+	case V4L2_PIX_FMT_YVU420:
+	case V4L2_PIX_FMT_YUV422P:
+	case V4L2_PIX_FMT_NV12:
+	case V4L2_PIX_FMT_NV16:
+	case V4L2_PIX_FMT_UYVY:
+	case V4L2_PIX_FMT_YUYV:
+		if (f->fmt.pix.width <= 720 && f->fmt.pix.height <= 576)
+			f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
+		else
+			f->fmt.pix.colorspace = V4L2_COLORSPACE_REC709;
+		break;
+	case V4L2_PIX_FMT_RGB32:
+	case V4L2_PIX_FMT_RGB24:
+	case V4L2_PIX_FMT_BGR24:
+	case V4L2_PIX_FMT_RGB565:
+	case V4L2_PIX_FMT_BGR32:
+		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_try_fmt);
+
+int ipu_try_fmt_rgb(struct file *file, void *fh,
+		struct v4l2_format *f)
+{
+	struct ipu_fmt *fmt;
+
+	fmt = ipu_find_fmt_rgb(f->fmt.pix.pixelformat);
+	if (!fmt)
+		return -EINVAL;
+
+	return ipu_try_fmt(file, fh, f);
+}
+EXPORT_SYMBOL_GPL(ipu_try_fmt_rgb);
+
+int ipu_try_fmt_yuv(struct file *file, void *fh,
+		struct v4l2_format *f)
+{
+	struct ipu_fmt *fmt;
+
+	fmt = ipu_find_fmt_yuv(f->fmt.pix.pixelformat);
+	if (!fmt)
+		return -EINVAL;
+
+	return ipu_try_fmt(file, fh, f);
+}
+EXPORT_SYMBOL_GPL(ipu_try_fmt_yuv);
+
+int ipu_enum_fmt_rgb(struct file *file, void *fh,
+		struct v4l2_fmtdesc *f)
+{
+	struct ipu_fmt *fmt;
+
+	if (f->index >= ARRAY_SIZE(ipu_fmt_rgb))
+		return -EINVAL;
+
+	fmt = &ipu_fmt_rgb[f->index];
+
+	f->pixelformat = fmt->fourcc;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_enum_fmt_rgb);
+
+int ipu_enum_fmt_yuv(struct file *file, void *fh,
+		struct v4l2_fmtdesc *f)
+{
+	struct ipu_fmt *fmt;
+
+	if (f->index >= ARRAY_SIZE(ipu_fmt_yuv))
+		return -EINVAL;
+
+	fmt = &ipu_fmt_yuv[f->index];
+
+	f->pixelformat = fmt->fourcc;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_enum_fmt_yuv);
+
+int ipu_enum_fmt(struct file *file, void *fh,
+		struct v4l2_fmtdesc *f)
+{
+	struct ipu_fmt *fmt;
+	int index = f->index;
+
+	if (index >= ARRAY_SIZE(ipu_fmt_yuv)) {
+		index -= ARRAY_SIZE(ipu_fmt_yuv);
+		if (index >= ARRAY_SIZE(ipu_fmt_rgb))
+			return -EINVAL;
+		fmt = &ipu_fmt_rgb[index];
+	} else {
+		fmt = &ipu_fmt_yuv[index];
+	}
+
+	f->pixelformat = fmt->fourcc;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_enum_fmt);
+
+int ipu_s_fmt(struct file *file, void *fh,
+		struct v4l2_format *f, struct v4l2_pix_format *pix)
+{
+	int ret;
+
+	ret = ipu_try_fmt(file, fh, f);
+	if (ret)
+		return ret;
+
+	*pix = f->fmt.pix;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_s_fmt);
+
+int ipu_s_fmt_rgb(struct file *file, void *fh,
+		struct v4l2_format *f, struct v4l2_pix_format *pix)
+{
+	struct ipu_fmt *fmt;
+
+	fmt = ipu_find_fmt_rgb(f->fmt.pix.pixelformat);
+	if (!fmt)
+		return -EINVAL;
+
+	return ipu_s_fmt(file, fh, f, pix);
+}
+EXPORT_SYMBOL_GPL(ipu_s_fmt_rgb);
+
+int ipu_s_fmt_yuv(struct file *file, void *fh,
+		struct v4l2_format *f, struct v4l2_pix_format *pix)
+{
+	struct ipu_fmt *fmt;
+
+	fmt = ipu_find_fmt_yuv(f->fmt.pix.pixelformat);
+	if (!fmt)
+		return -EINVAL;
+
+	return ipu_s_fmt(file, fh, f, pix);
+}
+EXPORT_SYMBOL_GPL(ipu_s_fmt_yuv);
+
+int ipu_g_fmt(struct v4l2_format *f, struct v4l2_pix_format *pix)
+{
+	f->fmt.pix = *pix;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_g_fmt);
+
+int ipu_enum_framesizes(struct file *file, void *fh,
+			struct v4l2_frmsizeenum *fsize)
+{
+	struct ipu_fmt *fmt;
+
+	if (fsize->index != 0)
+		return -EINVAL;
+
+	fmt = ipu_find_fmt(fsize->pixel_format);
+	if (!fmt)
+		return -EINVAL;
+
+	fsize->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
+	fsize->stepwise.min_width = 1;
+	fsize->stepwise.min_height = 1;
+	fsize->stepwise.max_width = 4096;
+	fsize->stepwise.max_height = 4096;
+	fsize->stepwise.step_width = fsize->stepwise.step_height = 1;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(ipu_enum_framesizes);
+
+MODULE_LICENSE("GPL");
diff --git a/drivers/media/platform/imx/imx-ipu.h b/drivers/media/platform/imx/imx-ipu.h
new file mode 100644
index 0000000..7b344b6
--- /dev/null
+++ b/drivers/media/platform/imx/imx-ipu.h
@@ -0,0 +1,34 @@
+#ifndef __MEDIA_IMX_IPU_H
+#define __MEDIA_IMX_IPU_H
+#include <linux/videodev2.h>
+
+struct ipu_fmt {
+	u32 fourcc;
+	int bytes_per_pixel;
+};
+
+int ipu_enum_fmt(struct file *file, void *fh,
+		struct v4l2_fmtdesc *f);
+int ipu_enum_fmt_rgb(struct file *file, void *fh,
+		struct v4l2_fmtdesc *f);
+int ipu_enum_fmt_yuv(struct file *file, void *fh,
+		struct v4l2_fmtdesc *f);
+struct ipu_fmt *ipu_find_fmt_rgb(unsigned int pixelformat);
+struct ipu_fmt *ipu_find_fmt_yuv(unsigned int pixelformat);
+int ipu_try_fmt(struct file *file, void *fh,
+		struct v4l2_format *f);
+int ipu_try_fmt_rgb(struct file *file, void *fh,
+		struct v4l2_format *f);
+int ipu_try_fmt_yuv(struct file *file, void *fh,
+		struct v4l2_format *f);
+int ipu_s_fmt(struct file *file, void *fh,
+		struct v4l2_format *f, struct v4l2_pix_format *pix);
+int ipu_s_fmt_rgb(struct file *file, void *fh,
+		struct v4l2_format *f, struct v4l2_pix_format *pix);
+int ipu_s_fmt_yuv(struct file *file, void *fh,
+		struct v4l2_format *f, struct v4l2_pix_format *pix);
+int ipu_g_fmt(struct v4l2_format *f, struct v4l2_pix_format *pix);
+int ipu_enum_framesizes(struct file *file, void *fh,
+			struct v4l2_frmsizeenum *fsize);
+
+#endif /* __MEDIA_IMX_IPU_H */
-- 
2.9.3



* [PATCH v2 07/21] [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (5 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 06/21] [media] imx: Add IPUv3 media common code Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-11-08  7:18   ` Ying Liu
  2016-10-14 17:34 ` [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver Philipp Zabel
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel, Sascha Hauer, Marc Kleine-Budde

This adds a V4L2 subdevice driver for the two CMOS Sensor Interface (CSI)
modules contained in each IPUv3. These sample video data from the
parallel CSI0/1 pads or from the MIPI CSI-2 bridge via IOMUXC video
bus multiplexers and write to IPU internal FIFOs to deliver data to
either the IDMAC or IC modules.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Propagate field and colorspace in ipucsi_subdev_set_format.
 - Recursively call s_stream on upstream subdev.
---
 drivers/media/platform/imx/Kconfig         |   7 +
 drivers/media/platform/imx/Makefile        |   1 +
 drivers/media/platform/imx/imx-ipuv3-csi.c | 547 +++++++++++++++++++++++++++++
 3 files changed, 555 insertions(+)
 create mode 100644 drivers/media/platform/imx/imx-ipuv3-csi.c

diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
index 1662bb0b..a88c4f7 100644
--- a/drivers/media/platform/imx/Kconfig
+++ b/drivers/media/platform/imx/Kconfig
@@ -8,3 +8,10 @@ config MEDIA_IMX
 
 config VIDEO_IMX_IPU_COMMON
 	tristate
+
+config VIDEO_IMX_IPU_CSI
+	tristate "i.MX5/6 CMOS Sensor Interface driver"
+	depends on VIDEO_DEV && IMX_IPUV3_CORE && MEDIA_IMX
+	select VIDEO_IMX_IPUV3
+	---help---
+	  This is a v4l2 subdevice driver for the two CSI modules in each IPUv3.
diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
index 0ba601a..82a3616 100644
--- a/drivers/media/platform/imx/Makefile
+++ b/drivers/media/platform/imx/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_MEDIA_IMX)			+= imx-media.o
 obj-$(CONFIG_VIDEO_IMX_IPU_COMMON)	+= imx-ipu.o
+obj-$(CONFIG_VIDEO_IMX_IPU_CSI)		+= imx-ipuv3-csi.o
diff --git a/drivers/media/platform/imx/imx-ipuv3-csi.c b/drivers/media/platform/imx/imx-ipuv3-csi.c
new file mode 100644
index 0000000..83e0511
--- /dev/null
+++ b/drivers/media/platform/imx/imx-ipuv3-csi.c
@@ -0,0 +1,547 @@
+/*
+ * i.MX IPUv3 CSI V4L2 Subdevice Driver
+ *
+ * Copyright (C) 2016, Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * Based on code
+ * Copyright (C) 2006, Sascha Hauer, Pengutronix
+ * Copyright (C) 2008, Guennadi Liakhovetski <kernel@pengutronix.de>
+ * Copyright (C) 2008, Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
+ * Copyright (C) 2009, Darius Augulis <augulis.darius@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/version.h>
+#include <linux/videodev2.h>
+
+#include <media/v4l2-common.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mc.h>
+#include <media/v4l2-of.h>
+#include <media/videobuf2-dma-contig.h>
+
+#include <video/imx-ipu-v3.h>
+
+#include "imx-ipu.h"
+
+#define DRIVER_NAME "imx-ipuv3-csi"
+
+struct ipucsi {
+	struct v4l2_subdev		subdev;
+
+	struct device			*dev;
+	u32				id;
+
+	struct ipu_csi			*csi;
+	struct ipu_soc			*ipu;
+	struct v4l2_of_endpoint		endpoint;
+	enum ipu_csi_dest		csi_dest;
+
+	struct media_pad		subdev_pad[2];
+	struct v4l2_mbus_framefmt	format_mbus[2];
+	struct v4l2_fract		timeperframe[2];
+};
+
+static int ipu_csi_get_mbus_config(struct ipucsi *ipucsi,
+				   struct v4l2_mbus_config *config)
+{
+	struct v4l2_subdev *sd;
+	struct media_pad *pad;
+	int ret;
+
+	/*
+	 * Retrieve media bus configuration from the entity connected directly
+	 * to the CSI subdev sink pad.
+	 */
+	pad = media_entity_remote_pad(&ipucsi->subdev_pad[0]);
+	if (!pad) {
+		dev_err(ipucsi->dev,
+			"failed to retrieve mbus config from source entity\n");
+		return -ENODEV;
+	}
+	sd = media_entity_to_v4l2_subdev(pad->entity);
+	ret = v4l2_subdev_call(sd, video, g_mbus_config, config);
+	if (ret == -ENOIOCTLCMD) {
+		/* Fall back to static mbus configuration from device tree */
+		config->type = ipucsi->endpoint.bus_type;
+		config->flags = ipucsi->endpoint.bus.parallel.flags;
+		ret = 0;
+	}
+
+	return ret;
+}
+
+static struct v4l2_mbus_framefmt *
+__ipucsi_get_pad_format(struct v4l2_subdev *sd,
+			struct v4l2_subdev_pad_config *cfg,
+			unsigned int pad, u32 which)
+{
+	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
+
+	switch (which) {
+	case V4L2_SUBDEV_FORMAT_TRY:
+		return v4l2_subdev_get_try_format(sd, cfg, pad);
+	case V4L2_SUBDEV_FORMAT_ACTIVE:
+		return &ipucsi->format_mbus[pad ? 1 : 0];
+	default:
+		return NULL;
+	}
+}
+
+static int ipucsi_subdev_log_status(struct v4l2_subdev *subdev)
+{
+	struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
+
+	ipu_csi_dump(ipucsi->csi);
+
+	return 0;
+}
+
+static int ipucsi_subdev_get_format(struct v4l2_subdev *sd,
+		struct v4l2_subdev_pad_config *cfg,
+		struct v4l2_subdev_format *sdformat)
+{
+	sdformat->format = *__ipucsi_get_pad_format(sd, cfg, sdformat->pad,
+						    sdformat->which);
+	return 0;
+}
+
+static bool ipucsi_mbus_code_supported(const u32 mbus_code)
+{
+	static const u32 mbus_codes[] = {
+		MEDIA_BUS_FMT_Y8_1X8,
+		MEDIA_BUS_FMT_Y10_1X10,
+		MEDIA_BUS_FMT_Y12_1X12,
+		MEDIA_BUS_FMT_UYVY8_2X8,
+		MEDIA_BUS_FMT_YUYV8_2X8,
+		MEDIA_BUS_FMT_UYVY8_1X16,
+		MEDIA_BUS_FMT_YUYV8_1X16,
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(mbus_codes); i++) {
+		if (mbus_code == mbus_codes[i])
+			return true;
+	}
+
+	return false;
+}
+
+static int ipucsi_subdev_set_format(struct v4l2_subdev *sd,
+		struct v4l2_subdev_pad_config *cfg,
+		struct v4l2_subdev_format *sdformat)
+{
+	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
+	struct v4l2_mbus_framefmt *mbusformat;
+	unsigned int width, height;
+	bool supported;
+
+	supported = ipucsi_mbus_code_supported(sdformat->format.code);
+	if (!supported)
+		return -EINVAL;
+
+	mbusformat = __ipucsi_get_pad_format(sd, cfg, sdformat->pad,
+					     sdformat->which);
+
+	if (sdformat->pad == 0) {
+		width = clamp_t(unsigned int, sdformat->format.width, 16, 8192);
+		height = clamp_t(unsigned int, sdformat->format.height, 16, 4096);
+		mbusformat->field = sdformat->format.field ?: V4L2_FIELD_NONE;
+		mbusformat->colorspace = sdformat->format.colorspace ?:
+					 V4L2_COLORSPACE_SRGB;
+	} else {
+		struct v4l2_mbus_framefmt *infmt = &ipucsi->format_mbus[0];
+
+		width = infmt->width;
+		height = infmt->height;
+		mbusformat->field = infmt->field;
+		mbusformat->colorspace = infmt->colorspace;
+	}
+
+	mbusformat->width = width;
+	mbusformat->height = height;
+	mbusformat->code = sdformat->format.code;
+
+	if (mbusformat->field == V4L2_FIELD_SEQ_TB &&
+	    mbusformat->width == 720 && mbusformat->height == 480 &&
+	    ipucsi->endpoint.bus_type == V4L2_MBUS_BT656) {
+		/* We capture NTSC bottom field first */
+		mbusformat->field = V4L2_FIELD_SEQ_BT;
+	}
+
+	sdformat->format = *mbusformat;
+
+	return 0;
+}
+
+static int ipucsi_subdev_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
+	struct v4l2_subdev *upstream_sd;
+	struct media_pad *pad;
+	int ret;
+
+	pad = media_entity_remote_pad(&sd->entity.pads[0]);
+	if (!pad) {
+		dev_err(ipucsi->dev, "Failed to find remote source pad\n");
+		return -ENOLINK;
+	}
+
+	if (!is_media_entity_v4l2_subdev(pad->entity)) {
+		dev_err(ipucsi->dev, "Upstream entity is not a v4l2 subdev\n");
+		return -ENODEV;
+	}
+
+	upstream_sd = media_entity_to_v4l2_subdev(pad->entity);
+
+	ret = v4l2_subdev_call(upstream_sd, video, s_stream, enable);
+	if (ret)
+		return ret;
+
+	if (enable) {
+		struct v4l2_mbus_framefmt *fmt = ipucsi->format_mbus;
+		struct v4l2_mbus_config mbus_config;
+		struct v4l2_rect window;
+		bool mux_mct;
+		int ret;
+
+		ret = ipu_csi_get_mbus_config(ipucsi, &mbus_config);
+		if (ret)
+			return ret;
+
+		window.left = 0;
+		window.top = 0;
+		window.width = fmt[0].width;
+		window.height = fmt[0].height;
+		ipu_csi_set_window(ipucsi->csi, &window);
+
+		/* Is CSI data source MCT (MIPI)? */
+		mux_mct = (mbus_config.type == V4L2_MBUS_CSI2);
+
+		ipu_set_csi_src_mux(ipucsi->ipu, ipucsi->id, mux_mct);
+		if (mux_mct)
+			ipu_csi_set_mipi_datatype(ipucsi->csi, /*VC*/ 0,
+						  &fmt[0]);
+
+		ret = ipu_csi_init_interface(ipucsi->csi, &mbus_config,
+					     &fmt[0]);
+		if (ret) {
+			dev_err(ipucsi->dev, "Failed to initialize CSI\n");
+			return ret;
+		}
+
+		ipu_csi_set_dest(ipucsi->csi, ipucsi->csi_dest);
+
+		ipu_csi_enable(ipucsi->csi);
+	} else {
+		ipu_csi_disable(ipucsi->csi);
+	}
+
+	return 0;
+}
+
+static int ipucsi_subdev_g_frame_interval(struct v4l2_subdev *subdev,
+					  struct v4l2_subdev_frame_interval *fi)
+{
+	struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
+
+	if (fi->pad > 4)
+		return -EINVAL;
+
+	fi->interval = ipucsi->timeperframe[(fi->pad == 0) ? 0 : 1];
+
+	return 0;
+}
+
+/*
+ * struct ipucsi_skip_desc - CSI frame skipping descriptor
+ * @keep - number of frames kept per max_ratio frames
+ * @max_ratio - width of skip_smfc, written to MAX_RATIO bitfield
+ * @skip_smfc - skip pattern written to the SKIP_SMFC bitfield
+ */
+struct ipucsi_skip_desc {
+	u8 keep;
+	u8 max_ratio;
+	u8 skip_smfc;
+};
+
+static const struct ipucsi_skip_desc ipucsi_skip[12] = {
+	{ 1, 1, 0x00 }, /* Keep all frames */
+	{ 5, 6, 0x10 }, /* Skip every sixth frame */
+	{ 4, 5, 0x08 }, /* Skip every fifth frame */
+	{ 3, 4, 0x04 }, /* Skip every fourth frame */
+	{ 2, 3, 0x02 }, /* Skip every third frame */
+	{ 3, 5, 0x0a }, /* Skip frames 1 and 3 of every 5 */
+	{ 1, 2, 0x01 }, /* Skip every second frame */
+	{ 2, 5, 0x0b }, /* Keep frames 1 and 4 of every 5 */
+	{ 1, 3, 0x03 }, /* Keep one in three frames */
+	{ 1, 4, 0x07 }, /* Keep one in four frames */
+	{ 1, 5, 0x0f }, /* Keep one in five frames */
+	{ 1, 6, 0x1f }, /* Keep one in six frames */
+};
+
+static int ipucsi_subdev_s_frame_interval(struct v4l2_subdev *subdev,
+					  struct v4l2_subdev_frame_interval *fi)
+{
+	struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
+	const struct ipucsi_skip_desc *skip;
+	struct v4l2_fract *want, *in;
+	u32 min_err = UINT_MAX;
+	int i, best_i = 0;
+	u64 want_us;
+
+	if (fi->pad > 4)
+		return -EINVAL;
+
+	if (fi->pad == 0) {
+		/* Believe what we are told about the input frame rate */
+		ipucsi->timeperframe[0] = fi->interval;
+		/* Reset output frame rate to input frame rate */
+		ipucsi->timeperframe[1] = fi->interval;
+		return 0;
+	}
+
+	want = &fi->interval;
+	in = &ipucsi->timeperframe[0];
+
+	if (want->numerator == 0 || want->denominator == 0 ||
+	    in->denominator == 0) {
+		ipucsi->timeperframe[1] = ipucsi->timeperframe[0];
+		fi->interval = ipucsi->timeperframe[1];
+		return 0;
+	}
+
+	want_us = 1000000ULL * want->numerator;
+	do_div(want_us, want->denominator);
+
+	/* Find the reduction closest to the requested timeperframe */
+	for (i = 0; i < ARRAY_SIZE(ipucsi_skip); i++) {
+		u64 tmp;
+		u32 err;
+
+		skip = &ipucsi_skip[i];
+		tmp = 1000000ULL * in->numerator * skip->max_ratio;
+		do_div(tmp, in->denominator * skip->keep);
+		err = (tmp > want_us) ? (tmp - want_us) : (want_us - tmp);
+		if (err < min_err) {
+			min_err = err;
+			best_i = i;
+		}
+	}
+
+	skip = &ipucsi_skip[best_i];
+
+	ipu_csi_set_skip_smfc(ipucsi->csi, skip->skip_smfc, skip->max_ratio - 1,
+			      0);
+
+	fi->interval.numerator = in->numerator * skip->max_ratio;
+	fi->interval.denominator = in->denominator * skip->keep;
+	ipucsi->timeperframe[1] = fi->interval;
+
+	dev_dbg(ipucsi->dev, "skip: %d/%d -> %d/%d frames:%d pattern:0x%x\n",
+	       in->numerator, in->denominator,
+	       fi->interval.numerator, fi->interval.denominator,
+	       skip->max_ratio, skip->skip_smfc);
+
+	return 0;
+}
+
+static struct v4l2_subdev_core_ops ipucsi_subdev_core_ops = {
+	.log_status = ipucsi_subdev_log_status,
+};
+
+static struct v4l2_subdev_pad_ops ipucsi_subdev_pad_ops = {
+	.get_fmt = ipucsi_subdev_get_format,
+	.set_fmt = ipucsi_subdev_set_format,
+};
+
+static struct v4l2_subdev_video_ops ipucsi_subdev_video_ops = {
+	.s_stream = ipucsi_subdev_s_stream,
+	.g_frame_interval = ipucsi_subdev_g_frame_interval,
+	.s_frame_interval = ipucsi_subdev_s_frame_interval,
+};
+
+static const struct v4l2_subdev_ops ipucsi_subdev_ops = {
+	.core   = &ipucsi_subdev_core_ops,
+	.pad    = &ipucsi_subdev_pad_ops,
+	.video  = &ipucsi_subdev_video_ops,
+};
+
+static int ipu_csi_link_setup(struct media_entity *entity,
+			      const struct media_pad *local,
+			      const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
+
+	ipucsi->csi_dest = IPU_CSI_DEST_IDMAC;
+
+	return 0;
+}
+
+struct media_entity_operations ipucsi_entity_ops = {
+	.link_setup = ipu_csi_link_setup,
+	.link_validate = v4l2_subdev_link_validate,
+};
+
+static int ipu_csi_registered(struct v4l2_subdev *sd)
+{
+	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
+	struct device_node *rpp;
+
+	/*
+	 * Add source subdevice to asynchronous subdevice waiting list.
+	 */
+	rpp = of_graph_get_remote_port_parent(ipucsi->endpoint.base.local_node);
+	if (rpp) {
+		struct v4l2_async_subdev *asd;
+
+		asd = devm_kzalloc(sd->dev, sizeof(*asd), GFP_KERNEL);
+		if (!asd) {
+			of_node_put(rpp);
+			return -ENOMEM;
+		}
+
+		asd->match_type = V4L2_ASYNC_MATCH_OF;
+		asd->match.of.node = rpp;
+
+		__v4l2_async_notifier_add_subdev(sd->notifier, asd);
+	}
+
+	return 0;
+}
+
+struct v4l2_subdev_internal_ops ipu_csi_internal_ops = {
+	.registered = ipu_csi_registered,
+};
+
+static int ipucsi_subdev_init(struct ipucsi *ipucsi, struct device_node *node)
+{
+	struct device_node *endpoint;
+	int ret;
+
+	v4l2_subdev_init(&ipucsi->subdev, &ipucsi_subdev_ops);
+	ipucsi->subdev.dev = ipucsi->dev;
+
+	snprintf(ipucsi->subdev.name, sizeof(ipucsi->subdev.name), "%s%d %s%d",
+		 "IPU", ipu_get_num(ipucsi->ipu), "CSI", ipucsi->id);
+
+	endpoint = of_get_next_child(node, NULL);
+	if (endpoint)
+		v4l2_of_parse_endpoint(endpoint, &ipucsi->endpoint);
+
+	ipucsi->subdev.entity.ops = &ipucsi_entity_ops;
+
+	ipucsi->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+	ipucsi->subdev.of_node = node;
+
+	ipucsi->subdev_pad[0].flags = MEDIA_PAD_FL_SINK;
+	ipucsi->subdev_pad[1].flags = MEDIA_PAD_FL_SOURCE;
+	ret = media_entity_pads_init(&ipucsi->subdev.entity, 2,
+				     ipucsi->subdev_pad);
+	if (ret < 0)
+		return ret;
+
+	ipucsi->subdev.internal_ops = &ipu_csi_internal_ops;
+	ret = v4l2_async_register_subdev(&ipucsi->subdev);
+	if (ret < 0) {
+		dev_err(ipucsi->dev, "Failed to register CSI subdev \"%s\": %d\n",
+			ipucsi->subdev.name, ret);
+	}
+
+	return ret;
+}
+
+static u64 camera_mask = DMA_BIT_MASK(32);
+
+static int ipucsi_probe(struct platform_device *pdev)
+{
+	struct ipu_client_platformdata *pdata = pdev->dev.platform_data;
+	struct ipu_soc *ipu = dev_get_drvdata(pdev->dev.parent);
+	struct ipucsi *ipucsi;
+	int ret;
+	struct device_node *node;
+
+	pdev->dev.dma_mask = &camera_mask;
+	pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+
+	if (!pdata)
+		return -EINVAL;
+
+	ipucsi = devm_kzalloc(&pdev->dev, sizeof(*ipucsi), GFP_KERNEL);
+	if (!ipucsi)
+		return -ENOMEM;
+
+	ipucsi->ipu = ipu;
+	ipucsi->id = pdata->csi;
+	ipucsi->dev = &pdev->dev;
+
+	node = of_graph_get_port_by_id(pdev->dev.parent->of_node, ipucsi->id);
+	if (!node) {
+		dev_err(&pdev->dev, "cannot find node port@%d\n", ipucsi->id);
+		return -ENODEV;
+	}
+
+	ipucsi->csi = ipu_csi_get(ipu, ipucsi->id);
+	if (IS_ERR(ipucsi->csi)) {
+		of_node_put(node);
+		return PTR_ERR(ipucsi->csi);
+	}
+
+	ret = ipucsi_subdev_init(ipucsi, node);
+	of_node_put(node);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, ipucsi);
+
+	dev_info(&pdev->dev, "loaded\n");
+
+	return 0;
+}
+
+static struct platform_driver ipucsi_driver = {
+	.driver = {
+		.name = DRIVER_NAME,
+	},
+	.probe = ipucsi_probe,
+};
+
+static int __init ipucsi_init(void)
+{
+	return platform_driver_register(&ipucsi_driver);
+}
+
+static void __exit ipucsi_exit(void)
+{
+	platform_driver_unregister(&ipucsi_driver);
+}
+
+subsys_initcall(ipucsi_init);
+module_exit(ipucsi_exit);
+
+MODULE_DESCRIPTION("i.MX IPUv3 capture interface driver");
+MODULE_AUTHOR("Sascha Hauer, Pengutronix");
+MODULE_AUTHOR("Philipp Zabel, Pengutronix");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:" DRIVER_NAME);
-- 
2.9.3



* [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (6 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 07/21] [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-17 11:32   ` Jack Mitchell
  2016-11-08  7:21   ` Ying Liu
  2016-10-14 17:34 ` [PATCH v2 09/21] [media] platform: add video-multiplexer subdevice driver Philipp Zabel
                   ` (14 subsequent siblings)
  22 siblings, 2 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel, Sascha Hauer, Marc Kleine-Budde

This driver uses the IDMAC module's double buffering feature to process
finished frames in the new frame acknowledge (NFACK) interrupt handler
while the next frame is already being captured. This avoids a race
condition between the end of frame interrupt and NFACK for very short
blanking intervals, but requires the driver to keep at least two buffers
in flight: the last remaining frame is never handed out to userspace
until a new one is queued.
The driver supports interlaced input and allows translating between
sequential and interlaced field formats using the IDMAC scan order and
interlace offset parameters.
Currently the direct CSI -> SMFC -> IDMAC path is supported.
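
For illustration, a minimal standalone model of the buffer hand-off rule
described above (not driver code, only the bookkeeping; the buffer count
is made up):

#include <stdio.h>

int main(void)
{
	int queued = 3;		/* buffers queued once, never re-queued */
	int completed = 0;	/* buffers handed back to userspace as done */
	int frame;

	for (frame = 0; frame < 10; frame++) {
		if (queued > 1) {
			completed++;	/* another buffer waits: hand this one back */
			queued--;
		}
		/* otherwise the last buffer is held and reused for the next frame */
	}
	printf("completed %d frames, %d buffer(s) still held by the driver\n",
	       completed, queued);
	return 0;
}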

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Remove v4l2_media_subdev_prepare_stream and v4l2_media_subdev_s_stream,
   subdevices will propagate s_stream calls to their upstream subdevices
   themselves.
 - Fix width/height to CSI output size
 - Use colorspace provided by CSI output
 - Implement enum/g/s/_input for v4l2-compliance
 - Fix ipu_capture_g_parm to use the correct pad
---
 drivers/media/platform/imx/Kconfig           |    9 +
 drivers/media/platform/imx/Makefile          |    1 +
 drivers/media/platform/imx/imx-ipu-capture.c | 1015 ++++++++++++++++++++++++++
 drivers/media/platform/imx/imx-ipu.h         |    9 +
 drivers/media/platform/imx/imx-ipuv3-csi.c   |   29 +-
 5 files changed, 1061 insertions(+), 2 deletions(-)
 create mode 100644 drivers/media/platform/imx/imx-ipu-capture.c

diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
index a88c4f7..69e8648 100644
--- a/drivers/media/platform/imx/Kconfig
+++ b/drivers/media/platform/imx/Kconfig
@@ -9,6 +9,15 @@ config MEDIA_IMX
 config VIDEO_IMX_IPU_COMMON
 	tristate
 
+config VIDEO_IMX_IPU_CAPTURE
+	tristate "i.MX5/6 Video Capture driver"
+	depends on IMX_IPUV3_CORE && VIDEO_V4L2_SUBDEV_API && MEDIA_IMX
+	select VIDEOBUF2_DMA_CONTIG
+	select VIDEO_IMX_IPU_COMMON
+	select VIDEO_IMX_IPUV3
+	help
+	  This is a v4l2 video capture driver for the IPUv3 on i.MX5/6.
+
 config VIDEO_IMX_IPU_CSI
 	tristate "i.MX5/6 CMOS Sensor Interface driver"
 	depends on VIDEO_DEV && IMX_IPUV3_CORE && MEDIA_IMX
diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
index 82a3616..919eaa1 100644
--- a/drivers/media/platform/imx/Makefile
+++ b/drivers/media/platform/imx/Makefile
@@ -1,3 +1,4 @@
 obj-$(CONFIG_MEDIA_IMX)			+= imx-media.o
 obj-$(CONFIG_VIDEO_IMX_IPU_COMMON)	+= imx-ipu.o
+obj-$(CONFIG_VIDEO_IMX_IPU_CAPTURE)	+= imx-ipu-capture.o
 obj-$(CONFIG_VIDEO_IMX_IPU_CSI)		+= imx-ipuv3-csi.o
diff --git a/drivers/media/platform/imx/imx-ipu-capture.c b/drivers/media/platform/imx/imx-ipu-capture.c
new file mode 100644
index 0000000..1308c1e
--- /dev/null
+++ b/drivers/media/platform/imx/imx-ipu-capture.c
@@ -0,0 +1,1015 @@
+/*
+ * i.MX IPUv3 V4L2 Capture Driver
+ *
+ * Copyright (C) 2016, Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * Based on code
+ * Copyright (C) 2006, Pengutronix, Sascha Hauer <kernel@pengutronix.de>
+ * Copyright (C) 2008, Guennadi Liakhovetski <kernel@pengutronix.de>
+ * Copyright (C) 2008, Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
+ * Copyright (C) 2009, Darius Augulis <augulis.darius@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/videodev2.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+
+#include <video/imx-ipu-v3.h>
+#include "imx-ipu.h"
+
+#include <linux/of_graph.h>
+#include <media/videobuf2-dma-contig.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mc.h>
+#include <media/v4l2-of.h>
+
+#define DRIVER_NAME "imx-ipuv3-capture"
+
+/* buffer for one video frame */
+struct ipu_capture_buffer {
+	struct vb2_v4l2_buffer		vb;
+	struct list_head		queue;
+};
+
+struct ipu_capture {
+	struct video_device		vdev;
+
+	struct device			*dev;
+	struct v4l2_fh			fh;
+	struct vb2_queue		vb2_vidq;
+	struct media_pad		pad;
+	struct media_pipeline		pipe;
+	struct v4l2_format		format;
+
+	struct v4l2_subdev		*csi_sd;
+	struct ipu_smfc			*smfc;
+	struct ipuv3_channel		*ipuch;
+	struct ipu_soc			*ipu;
+	int				id;
+
+	spinlock_t			lock;
+	struct mutex			mutex;
+
+	/* The currently active buffer, set by NFACK and cleared by EOF interrupt */
+	struct ipu_capture_buffer		*active;
+	struct list_head		capture;
+	int				ilo;
+	int				sequence;
+
+	int				done_count;
+	int				skip_count;
+};
+
+
+static struct ipu_capture_buffer *to_ipu_capture_buffer(struct vb2_buffer *vb)
+{
+	return container_of(vb, struct ipu_capture_buffer, vb.vb2_buf);
+}
+
+static inline void ipu_capture_set_inactive_buffer(struct ipu_capture *priv,
+						   struct vb2_buffer *vb)
+{
+	int bufptr = !ipu_idmac_get_current_buffer(priv->ipuch);
+	dma_addr_t eba = vb2_dma_contig_plane_dma_addr(vb, 0);
+
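+	/*
+	 * A negative interlace offset (bottom field first) stores the first
+	 * field on the odd lines, so start writing one line into the buffer.
+	 */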
+	if (priv->ilo < 0)
+		eba -= priv->ilo;
+
+	ipu_cpmem_set_buffer(priv->ipuch, bufptr, eba);
+	ipu_idmac_select_buffer(priv->ipuch, bufptr);
+}
+
+static irqreturn_t ipu_capture_new_frame_handler(int irq, void *context)
+{
+	struct ipu_capture *priv = context;
+	struct ipu_capture_buffer *buf;
+	struct vb2_v4l2_buffer *vb;
+	unsigned long flags;
+
+	/* The IDMAC just started to write pixel data into the current buffer */
+
+	spin_lock_irqsave(&priv->lock, flags);
+
+	/*
+	 * If there is a previously active frame, mark it as done to hand it off
+	 * to userspace. Or, if there are no further frames queued, hold on to it.
+	 */
+	if (priv->active) {
+		vb = &priv->active->vb;
+		buf = to_ipu_capture_buffer(&vb->vb2_buf);
+
+		if (vb2_is_streaming(vb->vb2_buf.vb2_queue) &&
+		    list_is_singular(&priv->capture)) {
+			pr_debug("%s: reusing 0x%08x\n", __func__,
+				vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0));
+			/* DEBUG: check if buf == EBA(active) */
+		} else {
+			/* Otherwise, mark buffer as finished */
+			list_del_init(&buf->queue);
+
+			vb2_buffer_done(&vb->vb2_buf, VB2_BUF_STATE_DONE);
+			priv->done_count++;
+		}
+	}
+
+	if (list_empty(&priv->capture))
+		goto out;
+
+	priv->active = list_first_entry(&priv->capture,
+					   struct ipu_capture_buffer, queue);
+	vb = &priv->active->vb;
+	vb->vb2_buf.timestamp = ktime_get_ns();
+	vb->field = priv->format.fmt.pix.field;
+	vb->sequence = priv->sequence++;
+
+	/*
+	 * Point the inactive buffer address to the next queued buffer,
+	 * if available. Otherwise, prepare to reuse the currently active
+	 * buffer, unless ipu_capture_buf_queue gets called in time.
+	 */
+	if (!list_is_singular(&priv->capture)) {
+		buf = list_entry(priv->capture.next->next,
+				 struct ipu_capture_buffer, queue);
+		vb = &buf->vb;
+	}
+	ipu_capture_set_inactive_buffer(priv, &vb->vb2_buf);
+out:
+	spin_unlock_irqrestore(&priv->lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static int ipu_capture_get_resources(struct ipu_capture *priv)
+{
+	struct device *dev = priv->dev;
+	int channel = priv->id * 2;
+	int ret;
+
+	priv->ipuch = ipu_idmac_get(priv->ipu, channel);
+	if (IS_ERR(priv->ipuch)) {
+		ret = PTR_ERR(priv->ipuch);
+		dev_err(dev, "Failed to get IDMAC channel %d: %d\n", channel,
+			ret);
+		priv->ipuch = NULL;
+		return ret;
+	}
+
+	priv->smfc = ipu_smfc_get(priv->ipu, channel);
+	if (!priv->smfc) {
+		dev_err(dev, "Failed to get SMFC channel %d\n", channel);
+		ret = -EBUSY;
+		goto err;
+	}
+
+	return 0;
+err:
+	ipu_idmac_put(priv->ipuch);
+	return ret;
+}
+
+static void ipu_capture_put_resources(struct ipu_capture *priv)
+{
+	if (priv->ipuch) {
+		ipu_idmac_put(priv->ipuch);
+		priv->ipuch = NULL;
+	}
+
+	if (priv->smfc) {
+		ipu_smfc_put(priv->smfc);
+		priv->smfc = NULL;
+	}
+}
+
+/*
+ *  Videobuf operations
+ */
+static int ipu_capture_queue_setup(struct vb2_queue *vq, unsigned int *count,
+				   unsigned int *num_planes, unsigned int sizes[],
+				   struct device *alloc_devs[])
+{
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+
+	priv->sequence = 0;
+
+	if (!*count)
+		*count = 32;
+	*num_planes = 1;
+	sizes[0] = priv->format.fmt.pix.sizeimage;
+
+	return 0;
+}
+
+static int ipu_capture_buf_prepare(struct vb2_buffer *vb)
+{
+	struct ipu_capture *priv = vb2_get_drv_priv(vb->vb2_queue);
+	struct v4l2_pix_format *pix = &priv->format.fmt.pix;
+
+	if (vb2_plane_size(vb, 0) < pix->sizeimage)
+		return -ENOBUFS;
+
+	vb2_set_plane_payload(vb, 0, pix->sizeimage);
+
+	return 0;
+}
+
+static void ipu_capture_buf_queue(struct vb2_buffer *vb)
+{
+	struct vb2_queue *vq = vb->vb2_queue;
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+	struct ipu_capture_buffer *buf = to_ipu_capture_buffer(vb);
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->lock, flags);
+
+	/*
+	 * If there is no next buffer queued, point the inactive buffer
+	 * address to the incoming buffer
+	 */
+	if (vb2_is_streaming(vb->vb2_queue) &&
+	    list_is_singular(&priv->capture))
+		ipu_capture_set_inactive_buffer(priv, vb);
+
+	list_add_tail(&buf->queue, &priv->capture);
+
+	spin_unlock_irqrestore(&priv->lock, flags);
+}
+
+static void ipu_capture_buf_cleanup(struct vb2_buffer *vb)
+{
+	struct vb2_queue *vq = vb->vb2_queue;
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+	struct ipu_capture_buffer *buf = to_ipu_capture_buffer(vb);
+	unsigned long flags;
+
+	spin_lock_irqsave(&priv->lock, flags);
+
+	if (priv->active == buf)
+		priv->active = NULL;
+
+	if (!list_empty(&buf->queue))
+		list_del_init(&buf->queue);
+
+	spin_unlock_irqrestore(&priv->lock, flags);
+
+	ipu_capture_put_resources(priv);
+}
+
+static int ipu_capture_buf_init(struct vb2_buffer *vb)
+{
+	struct ipu_capture_buffer *buf = to_ipu_capture_buffer(vb);
+
+	/* This is for locking debugging only */
+	INIT_LIST_HEAD(&buf->queue);
+
+	return 0;
+}
+
+static int ipu_capture_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+	struct v4l2_subdev *csi_sd = priv->csi_sd;
+	u32 width = priv->format.fmt.pix.width;
+	u32 height = priv->format.fmt.pix.height;
+	struct device *dev = priv->dev;
+	int burstsize;
+	struct ipu_capture_buffer *buf;
+	int nfack_irq;
+	int ret;
+	const char *irq_name[2] = { "CSI0", "CSI1" };
+	bool raw;
+
+	ret = ipu_capture_get_resources(priv);
+	if (ret < 0) {
+		dev_err(dev, "Failed to get resources: %d\n", ret);
+		goto err_dequeue;
+	}
+
+	ipu_cpmem_zero(priv->ipuch);
+
+	nfack_irq = ipu_idmac_channel_irq(priv->ipu, priv->ipuch,
+					  IPU_IRQ_NFACK);
+	ret = request_threaded_irq(nfack_irq, NULL,
+				   ipu_capture_new_frame_handler, IRQF_ONESHOT,
+				   irq_name[priv->id], priv);
+	if (ret) {
+		dev_err(dev, "Failed to request NFACK interrupt %d: %d\n",
+			nfack_irq, ret);
+		goto put_resources;
+	}
+
+	dev_dbg(dev, "width: %d height: %d, %.4s\n",
+		width, height, (char *)&priv->format.fmt.pix.pixelformat);
+
+	ipu_cpmem_set_resolution(priv->ipuch, width, height);
+
+	raw = false;
+
+	if (raw && priv->smfc) {
+		/*
+		 * raw formats. We can only pass them through to memory
+		 */
+		u32 fourcc = priv->format.fmt.pix.pixelformat;
+		int bytes;
+
+		switch (fourcc) {
+		case V4L2_PIX_FMT_GREY:
+			bytes = 1;
+			break;
+		case V4L2_PIX_FMT_Y10:
+		case V4L2_PIX_FMT_Y16:
+		case V4L2_PIX_FMT_UYVY:
+		case V4L2_PIX_FMT_YUYV:
+			bytes = 2;
+			break;
+		default:
+			dev_err(dev, "invalid raw format: %4.4s\n",
+				(char *)&fourcc);
+			ret = -EINVAL;
+			goto free_irq;
+		}
+
+		ipu_cpmem_set_stride(priv->ipuch, width * bytes);
+		ipu_cpmem_set_format_passthrough(priv->ipuch, bytes * 8);
+		/*
+		 * According to table 37-727 (SMFC Burst Size), burstsize should
+		 * be set to NBP[6:4] for PFS == 6. Unfortunately, with a 16-bit
+		 * bus any value below 4 doesn't produce proper images.
+		 */
+		burstsize = (64 / bytes) >> 3;
+	} else {
+		/*
+		 * formats we understand, we can write it in any format not requiring
+		 * colorspace conversion.
+		 */
+		u32 fourcc = priv->format.fmt.pix.pixelformat;
+
+		switch (fourcc) {
+		case V4L2_PIX_FMT_RGB32:
+			ipu_cpmem_set_stride(priv->ipuch, width * 4);
+			ipu_cpmem_set_fmt(priv->ipuch, fourcc);
+			break;
+		case V4L2_PIX_FMT_UYVY:
+		case V4L2_PIX_FMT_YUYV:
+			ipu_cpmem_set_stride(priv->ipuch, width * 2);
+			ipu_cpmem_set_yuv_interleaved(priv->ipuch, fourcc);
+			break;
+		case V4L2_PIX_FMT_YUV420:
+		case V4L2_PIX_FMT_YVU420:
+		case V4L2_PIX_FMT_NV12:
+		case V4L2_PIX_FMT_YUV422P:
+			ipu_cpmem_set_stride(priv->ipuch, width);
+			ipu_cpmem_set_fmt(priv->ipuch, fourcc);
+			ipu_cpmem_set_yuv_planar(priv->ipuch, fourcc,
+						 width, height);
+			burstsize = 16;
+			break;
+		default:
+			dev_err(dev, "invalid color format: %4.4s\n",
+				(char *)&fourcc);
+			ret = -EINVAL;
+			goto free_irq;
+		}
+	}
+
+	if (priv->ilo)
+		ipu_cpmem_interlaced_scan(priv->ipuch, priv->ilo);
+
+	if (priv->smfc) {
+		/*
+		 * Set the channel for the direct CSI-->memory via SMFC
+		 * use-case to very high priority, by enabling the watermark
+		 * signal in the SMFC, enabling WM in the channel, and setting
+		 * the channel priority to high.
+		 *
+		 * Refer to the i.MX6 rev. D TRM Table 36-8: Calculated priority
+		 * value.
+		 *
+		 * The watermarks are intentionally set very low here to ensure
+		 * that the SMFC FIFOs do not overflow.
+		 */
+		ipu_smfc_set_watermark(priv->smfc, 2, 1);
+		ipu_idmac_enable_watermark(priv->ipuch, true);
+		ipu_cpmem_set_high_priority(priv->ipuch);
+
+		/* Superfluous due to call to ipu_cpmem_zero above */
+		ipu_cpmem_set_axi_id(priv->ipuch, 0);
+
+		ipu_smfc_set_burstsize(priv->smfc, burstsize - 1);
+		ipu_smfc_map_channel(priv->smfc, priv->id * 2, 0);
+	}
+
+	/* Set the media pipeline to streaming state */
+	ret = media_entity_pipeline_start(&csi_sd->entity, &priv->pipe);
+	if (ret) {
+		dev_err(dev, "Failed to start external media pipeline\n");
+		goto stop_pipe;
+	}
+
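+	/*
+	 * Double buffering lets the NFACK handler process frame N while
+	 * frame N+1 is already being captured.
+	 */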
+	ipu_idmac_set_double_buffer(priv->ipuch, 1);
+
+	if (list_empty(&priv->capture)) {
+		dev_err(dev, "No capture buffers\n");
+		ret = -ENOMEM;
+		goto stop_pipe;
+	}
+
+	priv->active = NULL;
+
+	/* Point the inactive buffer address to the first buffer */
+	buf = list_first_entry(&priv->capture, struct ipu_capture_buffer, queue);
+	ipu_capture_set_inactive_buffer(priv, &buf->vb.vb2_buf);
+
+	ipu_idmac_enable_channel(priv->ipuch);
+
+	if (priv->smfc)
+		ipu_smfc_enable(priv->smfc);
+
+
+	ret = v4l2_subdev_call(priv->csi_sd, video, s_stream, 1);
+	if (ret) {
+		dev_err(dev, "Failed to start streaming: %d\n", ret);
+		goto stop_pipe;
+	}
+
+	return 0;
+
+stop_pipe:
+	media_entity_pipeline_stop(&csi_sd->entity);
+free_irq:
+	free_irq(nfack_irq, priv);
+put_resources:
+	ipu_capture_put_resources(priv);
+err_dequeue:
+	while (!list_empty(&vq->queued_list)) {
+		struct vb2_v4l2_buffer *buf;
+
+		buf = to_vb2_v4l2_buffer(list_first_entry(&vq->queued_list,
+							  struct vb2_buffer,
+							  queued_entry));
+		list_del(&buf->vb2_buf.queued_entry);
+		vb2_buffer_done(&buf->vb2_buf, VB2_BUF_STATE_QUEUED);
+	}
+	return ret;
+}
+
+static void ipu_capture_stop_streaming(struct vb2_queue *vq)
+{
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+	unsigned long flags;
+	int nfack_irq = ipu_idmac_channel_irq(priv->ipu, priv->ipuch,
+					      IPU_IRQ_NFACK);
+
+	free_irq(nfack_irq, priv);
+
+	v4l2_subdev_call(priv->csi_sd, video, s_stream, 0);
+	ipu_idmac_disable_channel(priv->ipuch);
+	if (priv->smfc)
+		ipu_smfc_disable(priv->smfc);
+
+	spin_lock_irqsave(&priv->lock, flags);
+	while (!list_empty(&priv->capture)) {
+		struct ipu_capture_buffer *buf = list_entry(priv->capture.next,
+						 struct ipu_capture_buffer, queue);
+		list_del_init(priv->capture.next);
+		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+	}
+	spin_unlock_irqrestore(&priv->lock, flags);
+
+	media_entity_pipeline_stop(&priv->csi_sd->entity);
+
+	ipu_capture_put_resources(priv);
+}
+
+static void ipu_capture_lock(struct vb2_queue *vq)
+{
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+
+	mutex_lock(&priv->mutex);
+}
+
+static void ipu_capture_unlock(struct vb2_queue *vq)
+{
+	struct ipu_capture *priv = vb2_get_drv_priv(vq);
+
+	mutex_unlock(&priv->mutex);
+}
+
+static struct vb2_ops ipu_capture_vb2_ops = {
+	.queue_setup		= ipu_capture_queue_setup,
+	.buf_prepare		= ipu_capture_buf_prepare,
+	.buf_queue		= ipu_capture_buf_queue,
+	.buf_cleanup		= ipu_capture_buf_cleanup,
+	.buf_init		= ipu_capture_buf_init,
+	.start_streaming	= ipu_capture_start_streaming,
+	.stop_streaming		= ipu_capture_stop_streaming,
+	.wait_prepare		= ipu_capture_unlock,
+	.wait_finish		= ipu_capture_lock,
+};
+
+static int ipu_capture_querycap(struct file *file, void *priv,
+					struct v4l2_capability *cap)
+{
+	strlcpy(cap->driver, "imx-ipuv3-capture", sizeof(cap->driver));
+	/* cap->name is set by the friendly caller:-> */
+	strlcpy(cap->card, "imx-ipuv3-csi", sizeof(cap->card));
+	strlcpy(cap->bus_info, "platform:imx-ipuv3-capture", sizeof(cap->bus_info));
+	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
+	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+
+	return 0;
+}
+
+static int ipu_capture_try_fmt(struct file *file, void *fh,
+			       struct v4l2_format *f)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	struct v4l2_subdev_format sd_fmt;
+	struct ipu_fmt *fmt = NULL;
+	enum v4l2_field in, out;
+	int bytes_per_pixel;
+	int ret;
+
+	sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	sd_fmt.pad = 1;
+	ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
+	if (ret)
+		return ret;
+
+	in = sd_fmt.format.field;
+	out = f->fmt.pix.field;
+
+	switch (sd_fmt.format.code) {
+	case MEDIA_BUS_FMT_FIXED:
+		fmt = ipu_find_fmt_rgb(f->fmt.pix.pixelformat);
+		if (!fmt)
+			return -EINVAL;
+		bytes_per_pixel = fmt->bytes_per_pixel;
+		break;
+	case MEDIA_BUS_FMT_UYVY8_2X8:
+	case MEDIA_BUS_FMT_YUYV8_2X8:
+		fmt = ipu_find_fmt_yuv(f->fmt.pix.pixelformat);
+		if (!fmt)
+			return -EINVAL;
+		bytes_per_pixel = fmt->bytes_per_pixel;
+		break;
+	case MEDIA_BUS_FMT_Y8_1X8:
+		f->fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;
+		bytes_per_pixel = 1;
+		break;
+	case MEDIA_BUS_FMT_UYVY8_1X16:
+		f->fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
+		bytes_per_pixel = 2;
+		break;
+	case MEDIA_BUS_FMT_YUYV8_1X16:
+		f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
+		bytes_per_pixel = 2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	f->fmt.pix.width = round_up(sd_fmt.format.width, 8);
+	f->fmt.pix.height = round_up(sd_fmt.format.height, 2);
+	f->fmt.pix.bytesperline = f->fmt.pix.width * bytes_per_pixel;
+	f->fmt.pix.sizeimage = f->fmt.pix.bytesperline * f->fmt.pix.height;
+	if (fmt) {
+		if (fmt->fourcc == V4L2_PIX_FMT_YUV420 ||
+		    fmt->fourcc == V4L2_PIX_FMT_YVU420 ||
+		    fmt->fourcc == V4L2_PIX_FMT_NV12)
+			f->fmt.pix.sizeimage = f->fmt.pix.sizeimage * 3 / 2;
+		else if (fmt->fourcc == V4L2_PIX_FMT_YUV422P)
+			f->fmt.pix.sizeimage *= 2;
+	}
+
+	if ((in == V4L2_FIELD_SEQ_TB && out == V4L2_FIELD_INTERLACED_TB) ||
+	    (in == V4L2_FIELD_INTERLACED_TB && out == V4L2_FIELD_SEQ_TB) ||
+	    (in == V4L2_FIELD_SEQ_BT && out == V4L2_FIELD_INTERLACED_BT) ||
+	    (in == V4L2_FIELD_INTERLACED_BT && out == V4L2_FIELD_SEQ_BT)) {
+		/*
+		 * IDMAC scan order can be used for translation between
+		 * interlaced and sequential field formats.
+		 */
+	} else if (out == V4L2_FIELD_NONE || out == V4L2_FIELD_INTERLACED) {
+		/*
+		 * If userspace requests progressive or interlaced frames,
+		 * interlace sequential fields as closest approximation.
+		 */
+		if (in == V4L2_FIELD_SEQ_TB)
+			out = V4L2_FIELD_INTERLACED_TB;
+		else if (in == V4L2_FIELD_SEQ_BT)
+			out = V4L2_FIELD_INTERLACED_BT;
+		else
+			out = in;
+	} else {
+		/* Translation impossible or userspace doesn't care */
+		out = in;
+	}
+	f->fmt.pix.field = out;
+
+	if (sd_fmt.format.colorspace)
+		f->fmt.pix.colorspace = sd_fmt.format.colorspace;
+	else if (f->fmt.pix.colorspace == V4L2_COLORSPACE_DEFAULT)
+		f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
+
+	return 0;
+}
+
+static int ipu_capture_s_fmt(struct file *file, void *fh,
+		struct v4l2_format *f)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	struct v4l2_subdev_format sd_fmt;
+	enum v4l2_field in, out;
+	int ret;
+
+	sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	sd_fmt.pad = 1;
+	ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
+	if (ret)
+		return ret;
+
+	ret = ipu_capture_try_fmt(file, fh, f);
+	if (ret)
+		return ret;
+
+	priv->format = *f;
+
+	/*
+	 * Set IDMAC scan order interlace offset (ILO) for translation between
+	 * interlaced and sequential field formats.
+	 */
+	in = sd_fmt.format.field;
+	out = f->fmt.pix.field;
+	if ((in == V4L2_FIELD_SEQ_TB && out == V4L2_FIELD_INTERLACED_TB) ||
+	    (in == V4L2_FIELD_INTERLACED_TB && out == V4L2_FIELD_SEQ_TB))
+		priv->ilo = f->fmt.pix.bytesperline;
+	else if ((in == V4L2_FIELD_SEQ_BT && out == V4L2_FIELD_INTERLACED_BT) ||
+		 (in == V4L2_FIELD_INTERLACED_BT && out == V4L2_FIELD_SEQ_BT))
+		priv->ilo = -f->fmt.pix.bytesperline;
+	else
+		priv->ilo = 0;
+
+	return 0;
+}
+
+static int ipu_capture_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+
+	if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	*f = priv->format;
+
+	return 0;
+}
+
+static int ipu_capture_enum_input(struct file *file, void *fh,
+				  struct v4l2_input *i)
+{
+	if (i->index != 0)
+		return -EINVAL;
+
+	strcpy(i->name, "CSI");
+	i->type = V4L2_INPUT_TYPE_CAMERA;
+
+	return 0;
+}
+
+static int ipu_capture_g_input(struct file *file, void *fh, unsigned int *i)
+{
+	*i = 0;
+	return 0;
+}
+
+static int ipu_capture_s_input(struct file *file, void *fh, unsigned int i)
+{
+	if (i != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int ipu_capture_enum_fmt(struct file *file, void *fh,
+				struct v4l2_fmtdesc *f)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	struct v4l2_subdev_format sd_fmt;
+	u32 fourcc;
+	int ret;
+
+	sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	sd_fmt.pad = 1;
+	ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
+	if (ret)
+		return ret;
+
+	switch (sd_fmt.format.code) {
+	case MEDIA_BUS_FMT_FIXED:
+		return ipu_enum_fmt_rgb(file, priv, f);
+	case MEDIA_BUS_FMT_UYVY8_2X8:
+	case MEDIA_BUS_FMT_YUYV8_2X8:
+		return ipu_enum_fmt_yuv(file, priv, f);
+	case MEDIA_BUS_FMT_Y8_1X8:
+		fourcc = V4L2_PIX_FMT_GREY;
+		break;
+	case MEDIA_BUS_FMT_UYVY8_1X16:
+		fourcc = V4L2_PIX_FMT_UYVY;
+		break;
+	case MEDIA_BUS_FMT_YUYV8_1X16:
+		fourcc = V4L2_PIX_FMT_YUYV;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (f->index)
+		return -EINVAL;
+
+	f->pixelformat = fourcc;
+
+	return 0;
+}
+
+static int ipu_capture_link_setup(struct media_entity *entity,
+				  const struct media_pad *local,
+				  const struct media_pad *remote, u32 flags)
+{
+	struct video_device *vdev = media_entity_to_video_device(entity);
+	struct ipu_capture *priv = container_of(vdev, struct ipu_capture, vdev);
+
+	if (priv->smfc)
+		ipu_smfc_map_channel(priv->smfc, priv->id * 2, 0);
+
+	return 0;
+}
+
+struct media_entity_operations ipu_capture_entity_ops = {
+	.link_setup = ipu_capture_link_setup,
+};
+
+static int ipu_capture_open(struct file *file)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	int ret;
+
+	ret = v4l2_pipeline_pm_use(&priv->vdev.entity, 1);
+	if (ret)
+		return ret;
+
+	mutex_lock(&priv->mutex);
+	ret = v4l2_fh_open(file);
+	mutex_unlock(&priv->mutex);
+
+	return ret;
+}
+
+static int ipu_capture_release(struct file *file)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+
+	v4l2_pipeline_pm_use(&priv->vdev.entity, 0);
+
+	if (v4l2_fh_is_singular_file(file))
+		vb2_fop_release(file);
+	else
+		v4l2_fh_release(file);
+
+	return 0;
+}
+
+static const struct v4l2_file_operations ipu_capture_fops = {
+	.owner		= THIS_MODULE,
+	.open		= ipu_capture_open,
+	.release	= ipu_capture_release,
+	.unlocked_ioctl	= video_ioctl2,
+	.mmap		= vb2_fop_mmap,
+	.poll		= vb2_fop_poll,
+};
+
+static int ipu_capture_g_parm(struct file *file, void *fh,
+			 struct v4l2_streamparm *sp)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	struct v4l2_subdev *sd = priv->csi_sd;
+	struct v4l2_subdev_frame_interval fi;
+
+	if (sp->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	sp->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
+	fi.pad = 1;
+	v4l2_subdev_call(sd, video, g_frame_interval, &fi);
+	sp->parm.capture.timeperframe = fi.interval;
+
+	return 0;
+}
+
+static int ipu_capture_s_parm(struct file *file, void *fh,
+			 struct v4l2_streamparm *sp)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	struct v4l2_subdev *sd = priv->csi_sd;
+	struct v4l2_subdev_frame_interval fi;
+
+	if (sp->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	fi.pad = 1;
+	fi.interval = sp->parm.capture.timeperframe;
+	v4l2_subdev_call(sd, video, s_frame_interval, &fi);
+	v4l2_subdev_call(sd, video, g_frame_interval, &fi);
+	sp->parm.capture.timeperframe = fi.interval;
+
+	return 0;
+}
+
+static int ipu_capture_enum_framesizes(struct file *file, void *fh,
+				       struct v4l2_frmsizeenum *fsize)
+{
+	struct ipu_capture *priv = video_drvdata(file);
+	struct v4l2_subdev_format sd_fmt;
+	struct ipu_fmt *fmt = NULL;
+	int ret;
+
+	if (fsize->index != 0)
+		return -EINVAL;
+
+	sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+	sd_fmt.pad = 1;
+	ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
+	if (ret)
+		return ret;
+
+	switch (sd_fmt.format.code) {
+	case MEDIA_BUS_FMT_FIXED:
+		fmt = ipu_find_fmt_rgb(fsize->pixel_format);
+		if (!fmt)
+			return -EINVAL;
+		break;
+	case MEDIA_BUS_FMT_UYVY8_2X8:
+	case MEDIA_BUS_FMT_YUYV8_2X8:
+		fmt = ipu_find_fmt_yuv(fsize->pixel_format);
+		if (!fmt)
+			return -EINVAL;
+		break;
+	case MEDIA_BUS_FMT_Y8_1X8:
+		if (fsize->pixel_format != V4L2_PIX_FMT_GREY)
+			return -EINVAL;
+		break;
+	case MEDIA_BUS_FMT_Y10_1X10:
+		if (fsize->pixel_format != V4L2_PIX_FMT_Y10)
+			return -EINVAL;
+		break;
+	case MEDIA_BUS_FMT_Y12_1X12:
+		if (fsize->pixel_format != V4L2_PIX_FMT_Y16)
+			return -EINVAL;
+		break;
+	case MEDIA_BUS_FMT_UYVY8_1X16:
+		if (fsize->pixel_format != V4L2_PIX_FMT_UYVY)
+			return -EINVAL;
+		break;
+	case MEDIA_BUS_FMT_YUYV8_1X16:
+		if (fsize->pixel_format != V4L2_PIX_FMT_YUYV)
+			return -EINVAL;
+		break;
+	}
+
+	fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
+	fsize->discrete.width = sd_fmt.format.width;
+	fsize->discrete.height = sd_fmt.format.height;
+
+	return 0;
+}
+
+static const struct v4l2_ioctl_ops ipu_capture_ioctl_ops = {
+	.vidioc_querycap		= ipu_capture_querycap,
+
+	.vidioc_enum_fmt_vid_cap	= ipu_capture_enum_fmt,
+	.vidioc_try_fmt_vid_cap		= ipu_capture_try_fmt,
+	.vidioc_s_fmt_vid_cap		= ipu_capture_s_fmt,
+	.vidioc_g_fmt_vid_cap		= ipu_capture_g_fmt,
+
+	.vidioc_create_bufs		= vb2_ioctl_create_bufs,
+	.vidioc_reqbufs			= vb2_ioctl_reqbufs,
+	.vidioc_querybuf		= vb2_ioctl_querybuf,
+
+	.vidioc_qbuf			= vb2_ioctl_qbuf,
+	.vidioc_dqbuf			= vb2_ioctl_dqbuf,
+	.vidioc_expbuf			= vb2_ioctl_expbuf,
+
+	.vidioc_streamon		= vb2_ioctl_streamon,
+	.vidioc_streamoff		= vb2_ioctl_streamoff,
+
+	.vidioc_enum_input		= ipu_capture_enum_input,
+	.vidioc_g_input			= ipu_capture_g_input,
+	.vidioc_s_input			= ipu_capture_s_input,
+
+	.vidioc_g_parm			= ipu_capture_g_parm,
+	.vidioc_s_parm			= ipu_capture_s_parm,
+
+	.vidioc_enum_framesizes		= ipu_capture_enum_framesizes,
+};
+
+static int ipu_capture_vb2_init(struct ipu_capture *priv)
+{
+	struct vb2_queue *q = &priv->vb2_vidq;
+
+	q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	q->io_modes = VB2_MMAP | VB2_DMABUF;
+	q->drv_priv = priv;
+	q->ops = &ipu_capture_vb2_ops;
+	q->mem_ops = &vb2_dma_contig_memops;
+	q->buf_struct_size = sizeof(struct ipu_capture_buffer);
+	q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+	q->dev = priv->dev;
+
+	return vb2_queue_init(q);
+}
+
+struct ipu_capture *ipu_capture_create(struct device *dev, struct ipu_soc *ipu,
+				       int id, struct v4l2_subdev *sd,
+				       int pad_index)
+{
+	struct video_device *vdev;
+	struct ipu_capture *priv;
+	struct media_link *link;
+	int ret;
+
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return ERR_PTR(-ENOMEM);
+
+	priv->dev = dev;
+	priv->ipu = ipu;
+	priv->csi_sd = sd;
+
+	INIT_LIST_HEAD(&priv->capture);
+	spin_lock_init(&priv->lock);
+	mutex_init(&priv->mutex);
+
+	ret = ipu_capture_vb2_init(priv);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
+	vdev = &priv->vdev;
+
+	snprintf(vdev->name, sizeof(vdev->name), DRIVER_NAME ".%d", id);
+	vdev->release	= video_device_release_empty;
+	vdev->fops	= &ipu_capture_fops;
+	vdev->ioctl_ops	= &ipu_capture_ioctl_ops;
+	vdev->v4l2_dev	= sd->v4l2_dev;
+	vdev->minor	= -1;
+	vdev->release	= video_device_release_empty;
+	vdev->lock	= &priv->mutex;
+	vdev->queue	= &priv->vb2_vidq;
+
+	priv->format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB32;
+	priv->format.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
+
+	video_set_drvdata(vdev, priv);
+
+	priv->pad.flags = MEDIA_PAD_FL_SINK;
+	vdev->entity.ops = &ipu_capture_entity_ops;
+
+	ret = media_entity_pads_init(&vdev->entity, 1, &priv->pad);
+	if (ret < 0)
+		return ERR_PTR(ret);
+
+	ret = video_register_device(&priv->vdev, VFL_TYPE_GRABBER, -1);
+	if (ret) {
+		media_entity_cleanup(&vdev->entity);
+		return ERR_PTR(ret);
+	}
+
+	ret = media_create_pad_link(&sd->entity, pad_index,
+				    &vdev->entity, 0, 0);
+	if (ret < 0) {
+		v4l2_err(sd->v4l2_dev,
+			 "failed to create link for '%s':%d: %d\n",
+			 sd->entity.name, pad_index, ret);
+		return ERR_PTR(ret);
+	}
+
+	link = media_entity_find_link(&sd->entity.pads[pad_index],
+				      &vdev->entity.pads[0]);
+	media_entity_setup_link(link, MEDIA_LNK_FL_ENABLED);
+
+	return priv;
+}
+EXPORT_SYMBOL_GPL(ipu_capture_create);
+
+void ipu_capture_destroy(struct ipu_capture *priv)
+{
+	video_unregister_device(&priv->vdev);
+	media_entity_cleanup(&priv->vdev.entity);
+	ipu_capture_put_resources(priv);
+
+	kfree(priv);
+}
+EXPORT_SYMBOL_GPL(ipu_capture_destroy);
diff --git a/drivers/media/platform/imx/imx-ipu.h b/drivers/media/platform/imx/imx-ipu.h
index 7b344b6..3690915 100644
--- a/drivers/media/platform/imx/imx-ipu.h
+++ b/drivers/media/platform/imx/imx-ipu.h
@@ -31,4 +31,13 @@ int ipu_g_fmt(struct v4l2_format *f, struct v4l2_pix_format *pix);
 int ipu_enum_framesizes(struct file *file, void *fh,
 			struct v4l2_frmsizeenum *fsize);
 
+struct device;
+struct ipu_soc;
+struct v4l2_subdev;
+
+struct ipu_capture *ipu_capture_create(struct device *dev, struct ipu_soc *ipu,
+				       int id, struct v4l2_subdev *sd,
+				       int pad_index);
+void ipu_capture_destroy(struct ipu_capture *capture);
+
 #endif /* __MEDIA_IMX_IPU_H */
diff --git a/drivers/media/platform/imx/imx-ipuv3-csi.c b/drivers/media/platform/imx/imx-ipuv3-csi.c
index 83e0511..7837978 100644
--- a/drivers/media/platform/imx/imx-ipuv3-csi.c
+++ b/drivers/media/platform/imx/imx-ipuv3-csi.c
@@ -61,6 +61,8 @@ struct ipucsi {
 	struct media_pad		subdev_pad[2];
 	struct v4l2_mbus_framefmt	format_mbus[2];
 	struct v4l2_fract		timeperframe[2];
+
+	struct ipu_capture		*capture;
 };
 
 static int ipu_csi_get_mbus_config(struct ipucsi *ipucsi,
@@ -266,7 +268,7 @@ static int ipucsi_subdev_g_frame_interval(struct v4l2_subdev *subdev,
 {
 	struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
 
-	if (fi->pad > 4)
+	if (fi->pad > 1)
 		return -EINVAL;
 
 	fi->interval = ipucsi->timeperframe[(fi->pad == 0) ? 0 : 1];
@@ -311,7 +313,7 @@ static int ipucsi_subdev_s_frame_interval(struct v4l2_subdev *subdev,
 	int i, best_i = 0;
 	u64 want_us;
 
-	if (fi->pad > 4)
+	if (fi->pad > 1)
 		return -EINVAL;
 
 	if (fi->pad == 0) {
@@ -409,6 +411,7 @@ static int ipu_csi_registered(struct v4l2_subdev *sd)
 {
 	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
 	struct device_node *rpp;
+	int ret;
 
 	/*
 	 * Add source subdevice to asynchronous subdevice waiting list.
@@ -429,11 +432,33 @@ static int ipu_csi_registered(struct v4l2_subdev *sd)
 		__v4l2_async_notifier_add_subdev(sd->notifier, asd);
 	}
 
+	/*
+	 * Create an ipu_capture instance per CSI.
+	 */
+	ipucsi->capture = ipu_capture_create(ipucsi->dev, ipucsi->ipu,
+					     ipucsi->id, sd, 1);
+	if (IS_ERR(ipucsi->capture)) {
+		ret = PTR_ERR(ipucsi->capture);
+		ipucsi->capture = NULL;
+		v4l2_err(sd->v4l2_dev, "Failed to create capture device for %s: %d\n",
+			 sd->name, ret);
+		return ret;
+	}
+
 	return 0;
 }
 
+static void ipu_csi_unregistered(struct v4l2_subdev *sd)
+{
+	struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
+
+	if (ipucsi->capture)
+		ipu_capture_destroy(ipucsi->capture);
+}
+
 struct v4l2_subdev_internal_ops ipu_csi_internal_ops = {
 	.registered = ipu_csi_registered,
+	.unregistered = ipu_csi_unregistered,
 };
 
 static int ipucsi_subdev_init(struct ipucsi *ipucsi, struct device_node *node)
-- 
2.9.3



* [PATCH v2 09/21] [media] platform: add video-multiplexer subdevice driver
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (7 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 10/21] [media] imx: Add i.MX MIPI CSI-2 " Philipp Zabel
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel, Sascha Hauer

This driver can handle SoC internal and external video bus multiplexers,
controlled either by register bit fields or by a GPIO. The subdevice
passes through frame interval and mbus configuration of the active input
to the output side.
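
The bit-mask and bit-shift properties of the register-controlled variant
are combined into a regmap field spanning bits
[bit-shift + fls(bit-mask) - 1 : bit-shift]. A standalone sketch of that
calculation, using the values from the binding example below
(illustration only, not driver code):

#include <stdio.h>

/* 1-based index of the highest set bit, like the kernel's fls() */
static int fls32(unsigned int x)
{
	int n = 0;

	while (x) {
		n++;
		x >>= 1;
	}
	return n;
}

int main(void)
{
	unsigned int bit_mask = 1, bit_shift = 19;	/* from the example below */
	unsigned int lsb = bit_shift;
	unsigned int msb = lsb + fls32(bit_mask) - 1;

	printf("mux control field spans bits [%u:%u]\n", msb, lsb);
	return 0;
}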

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Propagate s_stream to selected upstream subdevice.
---
 .../bindings/media/video-multiplexer.txt           |  59 +++
 drivers/media/platform/Kconfig                     |   8 +
 drivers/media/platform/Makefile                    |   2 +
 drivers/media/platform/video-multiplexer.c         | 472 +++++++++++++++++++++
 4 files changed, 541 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/media/video-multiplexer.txt
 create mode 100644 drivers/media/platform/video-multiplexer.c

diff --git a/Documentation/devicetree/bindings/media/video-multiplexer.txt b/Documentation/devicetree/bindings/media/video-multiplexer.txt
new file mode 100644
index 0000000..9d133d9
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/video-multiplexer.txt
@@ -0,0 +1,59 @@
+Video Multiplexer
+=================
+
+Video multiplexers allow selecting one of multiple input ports. Video received
+on the active input port is passed through to the output port. Muxes described
+by this binding may be controlled by a syscon register bitfield or by a GPIO.
+
+Required properties:
+- compatible : should be "video-multiplexer"
+- reg: should be the offset of the register containing the control bitfield
+- bit-mask: bitmask of the control bitfield in the control register
+- bit-shift: bit offset of the control bitfield in the control register
+- gpios: as an alternative to reg, bit-mask, and bit-shift, a single GPIO
+  phandle may be given to switch between two inputs
+- #address-cells: should be <1>
+- #size-cells: should be <0>
+- port@*: at least three port nodes containing endpoints connecting to the
+  source and sink devices according to of_graph bindings. The last port is
+  the output port, all others are inputs.
+
+Example:
+
+syscon {
+	compatible = "syscon", "simple-mfd";
+
+	mux {
+		compatible = "video-multiplexer";
+		/* Single bit (1 << 19) in syscon register 0x04: */
+		reg = <0x04>;
+		bit-mask = <1>;
+		bit-shift = <19>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
+			reg = <0>;
+
+			mux_in0: endpoint {
+				remote-endpoint = <&video_source0_out>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			mux_in1: endpoint {
+				remote-endpoint = <&video_source1_out>;
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			mux_out: endpoint {
+				remote-endpoint = <&capture_interface_in>;
+			};
+		};
+	};
+};
diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index 105bf57..92680f6 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -76,6 +76,14 @@ config VIDEO_M32R_AR_M64278
 	  To compile this driver as a module, choose M here: the
 	  module will be called arv.
 
+config VIDEO_MULTIPLEXER
+	tristate "Video Multiplexer"
+	depends on VIDEO_V4L2_SUBDEV_API && MEDIA_CONTROLLER
+	help
+	  This driver provides support for SoC internal N:1 video bus
+	  multiplexers controlled by register bitfields as well as external
+	  2:1 video multiplexers controlled by a single GPIO.
+
 config VIDEO_OMAP3
 	tristate "OMAP 3 Camera support"
 	depends on VIDEO_V4L2 && I2C && VIDEO_V4L2_SUBDEV_API && ARCH_OMAP3
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index f7f9008..a251755a 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -27,6 +27,8 @@ obj-$(CONFIG_VIDEO_SH_VEU)		+= sh_veu.o
 
 obj-$(CONFIG_VIDEO_MEM2MEM_DEINTERLACE)	+= m2m-deinterlace.o
 
+obj-$(CONFIG_VIDEO_MULTIPLEXER)		+= video-multiplexer.o
+
 obj-$(CONFIG_VIDEO_S3C_CAMIF) 		+= s3c-camif/
 obj-$(CONFIG_VIDEO_SAMSUNG_EXYNOS4_IS) 	+= exynos4-is/
 obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG)	+= s5p-jpeg/
diff --git a/drivers/media/platform/video-multiplexer.c b/drivers/media/platform/video-multiplexer.c
new file mode 100644
index 0000000..f79b90e
--- /dev/null
+++ b/drivers/media/platform/video-multiplexer.c
@@ -0,0 +1,472 @@
+/*
+ * video stream multiplexer controlled via gpio or syscon
+ *
+ * Copyright (C) 2013 Pengutronix, Sascha Hauer <kernel@pengutronix.de>
+ * Copyright (C) 2016 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/err.h>
+#include <linux/gpio/consumer.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <media/v4l2-async.h>
+#include <media/v4l2-subdev.h>
+#include <media/v4l2-of.h>
+
+struct vidsw {
+	struct v4l2_subdev subdev;
+	unsigned int num_pads;
+	struct media_pad *pads;
+	struct v4l2_mbus_framefmt *format_mbus;
+	struct v4l2_fract timeperframe;
+	struct v4l2_of_endpoint *endpoint;
+	struct regmap_field *field;
+	struct gpio_desc *gpio;
+	int active;
+};
+
+static inline struct vidsw *v4l2_subdev_to_vidsw(struct v4l2_subdev *sd)
+{
+	return container_of(sd, struct vidsw, subdev);
+}
+
+static void vidsw_set_active(struct vidsw *vidsw, int active)
+{
+	vidsw->active = active;
+	if (active < 0)
+		return;
+
+	dev_dbg(vidsw->subdev.dev, "setting %d active\n", active);
+
+	if (vidsw->field)
+		regmap_field_write(vidsw->field, active);
+	else if (vidsw->gpio)
+		gpiod_set_value(vidsw->gpio, active);
+}
+
+static int vidsw_link_setup(struct media_entity *entity,
+			    const struct media_pad *local,
+			    const struct media_pad *remote, u32 flags)
+{
+	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	/* We have no limitations on enabling or disabling our output link */
+	if (local->index == vidsw->num_pads - 1)
+		return 0;
+
+	dev_dbg(sd->dev, "link setup %s -> %s", remote->entity->name,
+		local->entity->name);
+
+	if (!(flags & MEDIA_LNK_FL_ENABLED)) {
+		if (local->index == vidsw->active) {
+			dev_dbg(sd->dev, "going inactive\n");
+			vidsw->active = -1;
+		}
+		return 0;
+	}
+
+	if (vidsw->active >= 0) {
+		struct media_pad *pad;
+
+		if (vidsw->active == local->index)
+			return 0;
+
+		pad = media_entity_remote_pad(&vidsw->pads[vidsw->active]);
+		if (pad) {
+			struct media_link *link;
+			int ret;
+
+			link = media_entity_find_link(pad,
+						&vidsw->pads[vidsw->active]);
+			if (link) {
+				ret = __media_entity_setup_link(link, 0);
+				if (ret)
+					return ret;
+			}
+		}
+	}
+
+	vidsw_set_active(vidsw, local->index);
+
+	return 0;
+}
+
+static struct media_entity_operations vidsw_ops = {
+	.link_setup = vidsw_link_setup,
+};
+
+static bool vidsw_endpoint_disabled(struct device_node *ep)
+{
+	struct device_node *rpp;
+
+	if (!of_device_is_available(ep))
+		return true;
+
+	rpp = of_graph_get_remote_port_parent(ep);
+	if (!rpp)
+		return true;
+
+	return !of_device_is_available(rpp);
+}
+
+static int vidsw_async_init(struct vidsw *vidsw, struct device_node *node)
+{
+	struct device_node *ep;
+	u32 portno;
+	int numports;
+	int ret;
+	int i;
+	bool active_link = false;
+
+	numports = vidsw->num_pads;
+
+	for (i = 0; i < numports - 1; i++)
+		vidsw->pads[i].flags = MEDIA_PAD_FL_SINK;
+	vidsw->pads[numports - 1].flags = MEDIA_PAD_FL_SOURCE;
+
+	ret = media_entity_pads_init(&vidsw->subdev.entity, numports,
+				     vidsw->pads);
+	if (ret < 0)
+		return ret;
+
+	vidsw->subdev.entity.ops = &vidsw_ops;
+
+	for_each_endpoint_of_node(node, ep) {
+		struct v4l2_of_endpoint endpoint;
+
+		v4l2_of_parse_endpoint(ep, &endpoint);
+
+		portno = endpoint.base.port;
+		if (portno >= numports - 1)
+			continue;
+
+		if (vidsw_endpoint_disabled(ep)) {
+			dev_dbg(vidsw->subdev.dev, "port %d disabled\n", portno);
+			continue;
+		}
+
+		vidsw->endpoint[portno] = endpoint;
+
+		if (portno == vidsw->active)
+			active_link = true;
+	}
+
+	for (portno = 0; portno < numports - 1; portno++) {
+		if (!vidsw->endpoint[portno].base.local_node)
+			continue;
+
+		/* If the active input is not connected, use another */
+		if (!active_link) {
+			vidsw_set_active(vidsw, portno);
+			active_link = true;
+		}
+	}
+
+	return v4l2_async_register_subdev(&vidsw->subdev);
+}
+
+int vidsw_g_mbus_config(struct v4l2_subdev *sd, struct v4l2_mbus_config *cfg)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+	struct media_pad *pad;
+	int ret;
+
+	if (vidsw->active == -1) {
+		dev_err(sd->dev, "no configuration for inactive mux\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Retrieve media bus configuration from the entity connected to the
+	 * active input
+	 */
+	pad = media_entity_remote_pad(&vidsw->pads[vidsw->active]);
+	if (pad) {
+		sd = media_entity_to_v4l2_subdev(pad->entity);
+		ret = v4l2_subdev_call(sd, video, g_mbus_config, cfg);
+		if (ret == -ENOIOCTLCMD)
+			pad = NULL;
+		else if (ret < 0) {
+			dev_err(sd->dev, "failed to get source configuration\n");
+			return ret;
+		}
+	}
+	if (!pad) {
+		/* Mirror the input side on the output side */
+		cfg->type = vidsw->endpoint[vidsw->active].bus_type;
+		if (cfg->type == V4L2_MBUS_PARALLEL ||
+		    cfg->type == V4L2_MBUS_BT656)
+			cfg->flags = vidsw->endpoint[vidsw->active].bus.parallel.flags;
+	}
+
+	return 0;
+}
+
+static int vidsw_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+	struct v4l2_subdev *upstream_sd;
+	struct media_pad *pad;
+
+	if (vidsw->active == -1) {
+		dev_err(sd->dev, "Can not start streaming on inactive mux\n");
+		return -EINVAL;
+	}
+
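+	/* Propagate s_stream to the subdevice feeding the active input pad */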
+	pad = media_entity_remote_pad(&sd->entity.pads[vidsw->active]);
+	if (!pad) {
+		dev_err(sd->dev, "Failed to find remote source pad\n");
+		return -ENOLINK;
+	}
+
+	if (!is_media_entity_v4l2_subdev(pad->entity)) {
+		dev_err(sd->dev, "Upstream entity is not a v4l2 subdev\n");
+		return -ENODEV;
+	}
+
+	upstream_sd = media_entity_to_v4l2_subdev(pad->entity);
+
+	return v4l2_subdev_call(upstream_sd, video, s_stream, enable);
+}
+
+static int vidsw_g_frame_interval(struct v4l2_subdev *sd,
+				  struct v4l2_subdev_frame_interval *fi)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	fi->interval = vidsw->timeperframe;
+
+	return 0;
+}
+
+static int vidsw_s_frame_interval(struct v4l2_subdev *sd,
+				  struct v4l2_subdev_frame_interval *fi)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	vidsw->timeperframe = fi->interval;
+
+	return 0;
+}
+
+static const struct v4l2_subdev_video_ops vidsw_subdev_video_ops = {
+	.g_mbus_config = vidsw_g_mbus_config,
+	.s_stream = vidsw_s_stream,
+	.g_frame_interval = vidsw_g_frame_interval,
+	.s_frame_interval = vidsw_s_frame_interval,
+};
+
+static struct v4l2_mbus_framefmt *
+__vidsw_get_pad_format(struct v4l2_subdev *sd,
+		       struct v4l2_subdev_pad_config *cfg,
+		       unsigned int pad, u32 which)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+
+	switch (which) {
+	case V4L2_SUBDEV_FORMAT_TRY:
+		return v4l2_subdev_get_try_format(sd, cfg, pad);
+	case V4L2_SUBDEV_FORMAT_ACTIVE:
+		return &vidsw->format_mbus[pad];
+	default:
+		return NULL;
+	}
+}
+
+static int vidsw_get_format(struct v4l2_subdev *sd,
+			    struct v4l2_subdev_pad_config *cfg,
+			    struct v4l2_subdev_format *sdformat)
+{
+	sdformat->format = *__vidsw_get_pad_format(sd, cfg, sdformat->pad,
+						   sdformat->which);
+	return 0;
+}
+
+static int vidsw_set_format(struct v4l2_subdev *sd,
+			    struct v4l2_subdev_pad_config *cfg,
+			    struct v4l2_subdev_format *sdformat)
+{
+	struct vidsw *vidsw = v4l2_subdev_to_vidsw(sd);
+	struct v4l2_mbus_framefmt *mbusformat;
+
+	if (sdformat->pad >= vidsw->num_pads)
+		return -EINVAL;
+
+	mbusformat = __vidsw_get_pad_format(sd, cfg, sdformat->pad,
+					    sdformat->which);
+	if (!mbusformat)
+		return -EINVAL;
+
+	/* Output pad mirrors active input pad, no limitations on input pads */
+	if (sdformat->pad == (vidsw->num_pads - 1) && vidsw->active >= 0)
+		sdformat->format = vidsw->format_mbus[vidsw->active];
+
+	*mbusformat = sdformat->format;
+
+	return 0;
+}
+
+static struct v4l2_subdev_pad_ops vidsw_pad_ops = {
+	.get_fmt = vidsw_get_format,
+	.set_fmt = vidsw_set_format,
+};
+
+static struct v4l2_subdev_ops vidsw_subdev_ops = {
+	.pad = &vidsw_pad_ops,
+	.video = &vidsw_subdev_video_ops,
+};
+
+static struct v4l2_subdev_internal_ops vidsw_internal_ops = {
+	.registered = v4l2_of_subdev_registered,
+};
+
+static int of_get_reg_field(struct device_node *node, struct reg_field *field)
+{
+	u32 bit_mask;
+	int ret;
+
+	ret = of_property_read_u32(node, "reg", &field->reg);
+	if (ret < 0)
+		return ret;
+
+	ret = of_property_read_u32(node, "bit-mask", &bit_mask);
+	if (ret < 0)
+		return ret;
+
+	ret = of_property_read_u32(node, "bit-shift", &field->lsb);
+	if (ret < 0)
+		return ret;
+
+	field->msb = field->lsb + fls(bit_mask) - 1;
+
+	return 0;
+}
+
+static int vidsw_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct of_endpoint endpoint;
+	struct device_node *ep;
+	struct reg_field field;
+	struct vidsw *vidsw;
+	struct regmap *map;
+	unsigned int num_pads = 0;
+	int ret;
+
+	vidsw = devm_kzalloc(&pdev->dev, sizeof(*vidsw), GFP_KERNEL);
+	if (!vidsw)
+		return -ENOMEM;
+
+	platform_set_drvdata(pdev, vidsw);
+
+	v4l2_subdev_init(&vidsw->subdev, &vidsw_subdev_ops);
+	vidsw->subdev.internal_ops = &vidsw_internal_ops;
+	snprintf(vidsw->subdev.name, sizeof(vidsw->subdev.name), "%s",
+			np->name);
+	vidsw->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+	vidsw->subdev.dev = &pdev->dev;
+
+	/*
+	 * The largest numbered port is the output port. It determines
+	 * total number of pads
+	 */
+	for_each_endpoint_of_node(np, ep) {
+		of_graph_parse_endpoint(ep, &endpoint);
+		num_pads = max(num_pads, endpoint.port + 1);
+	}
+
+	num_pads = of_get_child_count(np);
+	if (num_pads < 2) {
+		dev_err(&pdev->dev, "Not enough ports %d\n", num_pads);
+		return -EINVAL;
+	}
+
+	ret = of_get_reg_field(np, &field);
+	if (ret == 0) {
+		map = syscon_node_to_regmap(np->parent);
+		if (IS_ERR(map)) {
+			dev_err(&pdev->dev, "Failed to get syscon register map\n");
+			return PTR_ERR(map);
+		}
+
+		vidsw->field = devm_regmap_field_alloc(&pdev->dev, map, field);
+		if (IS_ERR(vidsw->field)) {
+			dev_err(&pdev->dev, "Failed to allocate regmap field\n");
+			return PTR_ERR(vidsw->field);
+		}
+
+		regmap_field_read(vidsw->field, &vidsw->active);
+	} else {
+		if (num_pads > 3) {
+			dev_err(&pdev->dev, "Too many ports %d\n", num_pads);
+			return -EINVAL;
+		}
+
+		vidsw->gpio = devm_gpiod_get(&pdev->dev, NULL, GPIOD_OUT_LOW);
+		if (IS_ERR(vidsw->gpio)) {
+			dev_warn(&pdev->dev,
+				 "could not request control gpio: %ld\n",
+				 PTR_ERR(vidsw->gpio));
+			vidsw->gpio = NULL;
+		}
+
+		vidsw->active = gpiod_get_value(vidsw->gpio) ? 1 : 0;
+	}
+
+	vidsw->num_pads = num_pads;
+	vidsw->pads = devm_kzalloc(&pdev->dev, sizeof(*vidsw->pads) * num_pads,
+			GFP_KERNEL);
+	vidsw->format_mbus = devm_kzalloc(&pdev->dev,
+			sizeof(*vidsw->format_mbus) * num_pads, GFP_KERNEL);
+	vidsw->endpoint = devm_kzalloc(&pdev->dev,
+			sizeof(*vidsw->endpoint) * (num_pads - 1), GFP_KERNEL);
+	if (!vidsw->pads || !vidsw->format_mbus || !vidsw->endpoint)
+		return -ENOMEM;
+
+	ret = vidsw_async_init(vidsw, np);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int vidsw_remove(struct platform_device *pdev)
+{
+	struct vidsw *vidsw = platform_get_drvdata(pdev);
+
+	v4l2_async_unregister_subdev(&vidsw->subdev);
+
+	return 0;
+}
+
+static const struct of_device_id vidsw_dt_ids[] = {
+	{ .compatible = "video-multiplexer", },
+	{ /* sentinel */ }
+};
+
+static struct platform_driver vidsw_driver = {
+	.probe		= vidsw_probe,
+	.remove		= vidsw_remove,
+	.driver		= {
+		.of_match_table = vidsw_dt_ids,
+		.name = "video-multiplexer",
+	},
+};
+
+module_platform_driver(vidsw_driver);
+
+MODULE_DESCRIPTION("video stream multiplexer");
+MODULE_AUTHOR("Sascha Hauer, Pengutronix");
+MODULE_AUTHOR("Philipp Zabel, Pengutronix");
+MODULE_LICENSE("GPL");
-- 
2.9.3



* [PATCH v2 10/21] [media] imx: Add i.MX MIPI CSI-2 subdevice driver
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (8 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 09/21] [media] platform: add video-multiplexer subdevice driver Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 11/21] [media] tc358743: put lanes in STOP state before starting streaming Philipp Zabel
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Add a v4l2 subdevice driver for the Synopsys DesignWare MIPI CSI-2 host
controller on i.MX6. Here its output is connected to the CSI2IPU gasket,
which distributes the time division multiplexed virtual channels to the
four IPU CSI inputs, directly or through video bus multiplexers.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Call mipi_csi2_prepare_stream from mipi_csi2_s_stream and propagate
   s_stream to MIPI CSI-2 transmitter in the correct order.
---
 .../devicetree/bindings/media/fsl-imx-capture.txt  |  67 ++
 drivers/media/platform/imx/Kconfig                 |   7 +
 drivers/media/platform/imx/Makefile                |   1 +
 drivers/media/platform/imx/imx-mipi-csi2.c         | 697 +++++++++++++++++++++
 4 files changed, 772 insertions(+)
 create mode 100644 drivers/media/platform/imx/imx-mipi-csi2.c

diff --git a/Documentation/devicetree/bindings/media/fsl-imx-capture.txt b/Documentation/devicetree/bindings/media/fsl-imx-capture.txt
index 5805331..b5ef101 100644
--- a/Documentation/devicetree/bindings/media/fsl-imx-capture.txt
+++ b/Documentation/devicetree/bindings/media/fsl-imx-capture.txt
@@ -23,3 +23,70 @@ capture-subsystem {
         compatible = "fsl,imx-capture-subsystem";
         ports = <&ipu1_csi0>, <&ipu1_csi1>, <&ipu2_csi0>, <&ipu2_csi1>;
 };
+
+i.MX MIPI CSI-2 Host Controller
+===============================
+
+The i.MX6 contains an implementation of a Synopsys DesignWare MIPI CSI-2 host
+controller combined with a CSI2IPU gasket that distributes the virtual channels
+to up to four IPUv3 CSIs.
+
+Required properties:
+- compatible: should be "fsl,imx6q-mipi-csi2", "snps,dw-mipi-csi2"
+- reg: should be register base and length as documented in the SoC
+  reference manual
+- clocks: should contain the pclk, cfg, ref, and pixel clocks, in the
+  order determined by the clock-names property.
+- clock-names: should be "pclk", "cfg", "ref", "pixel"
+- #address-cells: should be 1
+- #size-cells: should be 0
+- port@*: five port nodes using of_graph bindings, one input port and four
+  output ports corresponding to the virtual channels.
+
+Example:
+
+mipi_csi: mipi@021dc000 {
+	compatible = "fsl,imx6q-mipi-csi2", "snps,dw-mipi-csi2";
+	reg = <0x021dc000 0x4000>;
+	clocks = <&clks IMX6QDL_CLK_HSI_TX>, <&clks IMX6QDL_CLK_HSI_TX>,
+		 <&clks IMX6QDL_CLK_HSI_TX>, <&clks IMX6QDL_CLK_EIM_PODF>;
+	clock-names = "pclk", "cfg", "ref", "pixel";
+	#address-cells = <1>;
+	#size-cells = <0>;
+
+	port@0 {
+		reg = <0>;
+	};
+
+	port@1 {
+		reg = <1>;
+
+		vc0: endpoint {
+			remote-endpoint = <&csi0_in>;
+		};
+	};
+
+	port@2 {
+		reg = <2>;
+
+		vc1: endpoint {
+			remote-endpoint = <&csi1_in>;
+		};
+	};
+
+	port@3 {
+		reg = <3>;
+
+		vc2: endpoint {
+			remote-endpoint = <&csi2_in>;
+		};
+	};
+
+	port@4 {
+		reg = <4>;
+
+		vc3: endpoint {
+			remote-endpoint = <&csi3_in>;
+		};
+	};
+};
diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
index 69e8648..db04e64 100644
--- a/drivers/media/platform/imx/Kconfig
+++ b/drivers/media/platform/imx/Kconfig
@@ -6,6 +6,13 @@ config MEDIA_IMX
 	  This driver provides a SoC wide media controller device that all
 	  multimedia components in i.MX5 and i.MX6 SoCs can register with.
 
+config MEDIA_IMX_MIPI_CSI2
+	tristate "i.MX MIPI CSI-2 Host Controller"
+	depends on VIDEO_V4L2_SUBDEV_API
+	help
+	  This driver provides support for the MIPI CSI-2 Host Controller that
+	  is connected to the IPU CSI input multiplexers on i.MX6 SoCs.
+
 config VIDEO_IMX_IPU_COMMON
 	tristate
 
diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
index 919eaa1..22d86fd 100644
--- a/drivers/media/platform/imx/Makefile
+++ b/drivers/media/platform/imx/Makefile
@@ -1,4 +1,5 @@
 obj-$(CONFIG_MEDIA_IMX)			+= imx-media.o
+obj-$(CONFIG_MEDIA_IMX_MIPI_CSI2)	+= imx-mipi-csi2.o
 obj-$(CONFIG_VIDEO_IMX_IPU_COMMON)	+= imx-ipu.o
 obj-$(CONFIG_VIDEO_IMX_IPU_CAPTURE)	+= imx-ipu-capture.o
 obj-$(CONFIG_VIDEO_IMX_IPU_CSI)		+= imx-ipuv3-csi.o
diff --git a/drivers/media/platform/imx/imx-mipi-csi2.c b/drivers/media/platform/imx/imx-mipi-csi2.c
new file mode 100644
index 0000000..7b289cc
--- /dev/null
+++ b/drivers/media/platform/imx/imx-mipi-csi2.c
@@ -0,0 +1,697 @@
+/*
+ * i.MX MIPI CSI-2 Host Controller driver
+ *
+ * Copyright (C) 2016, Pengutronix, Philipp Zabel <kernel@pengutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_graph.h>
+#include <linux/platform_device.h>
+#include <media/v4l2-async.h>
+#include <media/v4l2-of.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-subdev.h>
+
+#define MIPI_CSI2_VERSION	0x00
+#define MIPI_CSI2_N_LANES	0x04
+#define MIPI_CSI2_PHY_SHUTDOWNZ	0x08
+#define MIPI_CSI2_DPHY_RSTZ	0x0c
+#define MIPI_CSI2_CSI2_RESETN	0x10
+#define MIPI_CSI2_PHY_STATE	0x14
+#define MIPI_CSI2_DATA_IDS_1	0x18
+#define MIPI_CSI2_DATA_IDS_2	0x1c
+#define MIPI_CSI2_ERR1		0x20
+#define MIPI_CSI2_ERR2		0x24
+#define MIPI_CSI2_MASK1		0x28
+#define MIPI_CSI2_MASK2		0x2c
+#define MIPI_CSI2_PHY_TST_CTRL0	0x30
+#define MIPI_CSI2_PHY_TST_CTRL1	0x34
+
+#define CSI2IPU_SW_RST		0xf00
+
+#define PHY_RXULPSESC_0		BIT(0)
+#define PHY_RXULPSESC_1		BIT(1)
+#define PHY_RXULPSESC_2		BIT(2)
+#define PHY_RXULPSESC_3		BIT(3)
+#define PHY_STOPSTATEDATA_0	BIT(4)
+#define PHY_STOPSTATEDATA_1	BIT(5)
+#define PHY_STOPSTATEDATA_2	BIT(6)
+#define PHY_STOPSTATEDATA_3	BIT(7)
+#define PHY_RXCLKACTIVEHS	BIT(8)
+#define PHY_RXULPSCLKNOT	BIT(9)
+#define PHY_STOPSTATECLK	BIT(10)
+#define PHY_BYPASS_2ECC_TST	BIT(11)
+
+#define PHY_TESTCLR		BIT(0)
+#define PHY_TESTCLK		BIT(1)
+
+#define PHY_TESTEN		BIT(16)
+
+#define MIPI_CSI2_VERSION_IMX6	0x3130302a
+
+#define CSI2IPU_RGB444_FM	BIT(3)
+#define CSI2IPU_YUV422_8BIT_FM	BIT(2)
+#define CSI2IPU_CLK_SEL		BIT(1)
+
+#define MIPI_CSI2_PADS 5
+
+struct mipi_csi2 {
+	struct v4l2_subdev subdev;
+	struct media_pad pads[MIPI_CSI2_PADS];
+	struct v4l2_mbus_framefmt format;
+	struct v4l2_fract timeperframe;
+	struct device *dev;
+	spinlock_t lock;
+
+	struct clk *pclk;
+	struct clk *cfg_clk;
+	struct clk *ref_clk;
+	struct clk *pixel_clk;
+	void __iomem *regs;
+
+	int max_lanes;
+	int lanes;
+	bool enabled;
+	bool streaming;
+};
+
+struct dphy_pll_testdin_map {
+	u16 max_mbps;
+	u16 testdin;
+};
+
+/* The table is based on 27MHz DPHY pll reference clock. */
+static const struct dphy_pll_testdin_map dptdin_map[] = {
+	{ 90, 0x00}, {100, 0x20}, {110, 0x40}, {125, 0x02},
+	{140, 0x22}, {150, 0x42}, {160, 0x04}, {180, 0x24},
+	{200, 0x44}, {210, 0x06}, {240, 0x26}, {250, 0x46},
+	{270, 0x08}, {300, 0x28}, {330, 0x48}, {360, 0x2a},
+	{400, 0x4a}, {450, 0x0c}, {500, 0x2c}, {550, 0x0e},
+	{600, 0x2e}, {650, 0x10}, {700, 0x30}, {750, 0x12},
+	{800, 0x32}, {850, 0x14}, {900, 0x34}, {950, 0x54},
+	{1000, 0x74}
+};
+
+static int max_mbps_to_testdin(unsigned int max_mbps)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dptdin_map); i++)
+		if (dptdin_map[i].max_mbps == max_mbps)
+			return dptdin_map[i].testdin;
+
+	return -EINVAL;
+}
+
+static void mipi_csi2_dphy_write(struct mipi_csi2 *csi2, u8 addr, u8 data)
+{
+	/* Clear PHY test interface */
+	writel(PHY_TESTCLR, csi2->regs + MIPI_CSI2_PHY_TST_CTRL0);
+	writel(0x00000, csi2->regs + MIPI_CSI2_PHY_TST_CTRL1);
+	writel(0, csi2->regs + MIPI_CSI2_PHY_TST_CTRL0);
+
+	/* Raise test interface strobe signal */
+	writel(PHY_TESTCLK, csi2->regs + MIPI_CSI2_PHY_TST_CTRL0);
+
+	/* Configure address write on falling edge and lower strobe signal */
+	writel(PHY_TESTEN | addr, csi2->regs + MIPI_CSI2_PHY_TST_CTRL1);
+	writel(0, csi2->regs + MIPI_CSI2_PHY_TST_CTRL0);
+
+	/* Configure data write on rising edge and raise strobe signal */
+	writel(data, csi2->regs + MIPI_CSI2_PHY_TST_CTRL1);
+	writel(PHY_TESTCLK, csi2->regs + MIPI_CSI2_PHY_TST_CTRL0);
+
+	/* Clear strobe signal */
+	writel(0, csi2->regs + MIPI_CSI2_PHY_TST_CTRL0);
+}
+
+static int mipi_csi2_reset(struct mipi_csi2 *csi2)
+{
+	const u8 addr = 0x44;
+	u8 data = max_mbps_to_testdin(850);
+
+	/* Assert CSI-2 Host Controller and D-PHY resets */
+	writel(0, csi2->regs + MIPI_CSI2_PHY_SHUTDOWNZ);
+	writel(0, csi2->regs + MIPI_CSI2_DPHY_RSTZ);
+	writel(0, csi2->regs + MIPI_CSI2_CSI2_RESETN);
+
+	mipi_csi2_dphy_write(csi2, addr, data);
+
+	/* Deassert resets */
+	writel(1, csi2->regs + MIPI_CSI2_PHY_SHUTDOWNZ);
+	writel(1, csi2->regs + MIPI_CSI2_DPHY_RSTZ);
+	writel(1, csi2->regs + MIPI_CSI2_CSI2_RESETN);
+
+	return 0;
+}
+
+static int mipi_csi2_get_fmt(struct v4l2_subdev *sd,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_format *sdformat)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+
+	dev_dbg(csi2->dev, "%s\n", __func__);
+
+	if (sdformat->pad > 4)
+		return -EINVAL;
+
+	sdformat->format = csi2->format;
+
+	return 0;
+}
+
+static unsigned int mipi_csi2_format_bpp(u32 mbus_code)
+{
+	switch (mbus_code) {
+	case MEDIA_BUS_FMT_RGB888_1X24:
+		return 24;
+	case MEDIA_BUS_FMT_UYVY8_1X16:
+	case MEDIA_BUS_FMT_YUYV8_1X16:
+		return 16;
+	default:
+		return 0;
+	}
+}
+
+static unsigned int mipi_csi2_lanes_needed(struct mipi_csi2 *csi2,
+					   struct v4l2_mbus_framefmt *format,
+					   struct v4l2_fract *timeperframe)
+{
+	unsigned int bps_per_lane = 594000000U; /* FIXME */
+	unsigned int bpp = mipi_csi2_format_bpp(format->code);
+	unsigned int bps;
+
+	bps = format->width * format->height * bpp;
+	bps = mult_frac(bps, timeperframe->denominator, timeperframe->numerator);
+	return DIV_ROUND_UP(bps, bps_per_lane);
+}
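+
+/*
+ * Example (illustrative only): a 1920x1080 YUYV8_1X16 stream (16 bits per
+ * pixel) at 60 fps amounts to 1920 * 1080 * 16 * 60 = 1,990,656,000 bps,
+ * so with the 594 Mbps per lane assumed above, DIV_ROUND_UP() yields
+ * 4 data lanes.
+ */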
+
+static int mipi_csi2_set_fmt(struct v4l2_subdev *sd,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_format *sdformat)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+	unsigned long flags;
+	u32 val;
+
+	dev_dbg(csi2->dev, "%s\n", __func__);
+
+	if (sdformat->pad > 4)
+		return -EINVAL;
+
+	switch (sdformat->format.code) {
+	case MEDIA_BUS_FMT_RGB888_1X24:
+	case MEDIA_BUS_FMT_UYVY8_1X16:
+	case MEDIA_BUS_FMT_YUYV8_1X16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	csi2->format = sdformat->format;
+	csi2->lanes = mipi_csi2_lanes_needed(csi2, &csi2->format,
+					     &csi2->timeperframe);
+
+	spin_lock_irqsave(&csi2->lock, flags);
+	val = readl(csi2->regs + CSI2IPU_SW_RST);
+	switch (csi2->format.code) {
+	case MEDIA_BUS_FMT_UYVY8_1X16:
+		val &= ~CSI2IPU_YUV422_8BIT_FM;
+		break;
+	case MEDIA_BUS_FMT_YUYV8_1X16:
+		val |= CSI2IPU_YUV422_8BIT_FM;
+		break;
+	}
+	writel(val, csi2->regs + CSI2IPU_SW_RST);
+	spin_unlock_irqrestore(&csi2->lock, flags);
+
+	return 0;
+}
+
+static int mipi_csi2_g_frame_interval(struct v4l2_subdev *sd,
+				      struct v4l2_subdev_frame_interval *fi)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+
+	dev_dbg(csi2->dev, "%s\n", __func__);
+
+	fi->interval = csi2->timeperframe;
+
+	return 0;
+}
+
+static int mipi_csi2_s_frame_interval(struct v4l2_subdev *sd,
+				      struct v4l2_subdev_frame_interval *fi)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+
+	dev_dbg(csi2->dev, "%s(%d, %d/%d)\n", __func__, fi->pad,
+		fi->interval.numerator, fi->interval.denominator);
+
+	if (fi->pad > 4)
+		return -EINVAL;
+
+	if (fi->pad > 0) {
+		fi->interval = csi2->timeperframe;
+
+		return 0;
+	}
+
+	csi2->timeperframe = fi->interval;
+	csi2->lanes = mipi_csi2_lanes_needed(csi2, &csi2->format,
+					     &csi2->timeperframe);
+
+	return 0;
+}
+
+static const struct v4l2_subdev_pad_ops mipi_csi2_pad_ops = {
+	.get_fmt = mipi_csi2_get_fmt,
+	.set_fmt = mipi_csi2_set_fmt,
+};
+
+static int mipi_csi2_g_mbus_config(struct v4l2_subdev *sd,
+			    struct v4l2_mbus_config *cfg)
+{
+	cfg->type = V4L2_MBUS_CSI2;
+	cfg->flags = V4L2_MBUS_CSI2_1_LANE | V4L2_MBUS_CSI2_2_LANE |
+		     V4L2_MBUS_CSI2_3_LANE | V4L2_MBUS_CSI2_4_LANE |
+		     V4L2_MBUS_CSI2_CHANNEL_0 | V4L2_MBUS_CSI2_CHANNEL_1 |
+		     V4L2_MBUS_CSI2_CHANNEL_2 | V4L2_MBUS_CSI2_CHANNEL_3 |
+		     V4L2_MBUS_CSI2_CONTINUOUS_CLOCK |
+		     V4L2_MBUS_CSI2_NONCONTINUOUS_CLOCK;
+
+	return 0;
+}
+
+static int mipi_csi2_enable(struct mipi_csi2 *csi2)
+{
+	if (csi2->enabled)
+		return -EBUSY;
+
+	clk_prepare_enable(csi2->pclk);
+	clk_prepare_enable(csi2->cfg_clk);
+	clk_prepare_enable(csi2->ref_clk);
+	csi2->enabled = true;
+
+	return 0;
+}
+
+static int mipi_csi2_disable(struct mipi_csi2 *csi2)
+{
+	if (!csi2->enabled)
+		return 0;
+
+	clk_disable_unprepare(csi2->ref_clk);
+	clk_disable_unprepare(csi2->cfg_clk);
+	clk_disable_unprepare(csi2->pclk);
+	csi2->enabled = false;
+
+	return 0;
+}
+
+static int mipi_csi2_wait(struct mipi_csi2 *csi2, u8 reg, u32 mask, u32 val)
+{
+	unsigned long timeout = jiffies + msecs_to_jiffies(100);
+	u32 state;
+
+	dev_dbg(csi2->dev, "wait for register 0x%02x & 0x%08x to be 0x%08x\n",
+		reg, mask, val);
+
+	for (;;) {
+		state = readl(csi2->regs + reg);
+		dev_dbg(csi2->dev, "register 0x%02x == 0x%08x\n", reg, state);
+		if ((state & mask) == val)
+			return 0;
+		if (time_after(jiffies, timeout))
+			return -ETIMEDOUT;
+		usleep_range(500, 2000);
+	}
+}
+
+static inline unsigned int mipi_csi2_stop_state(unsigned int lanes)
+{
+	switch (lanes) {
+	case 1:
+		return PHY_STOPSTATEDATA_0 |
+		       PHY_STOPSTATECLK;
+	case 2:
+		return PHY_STOPSTATEDATA_0 | PHY_STOPSTATEDATA_1 |
+		       PHY_STOPSTATECLK;
+	case 3:
+		return PHY_STOPSTATEDATA_0 | PHY_STOPSTATEDATA_1 |
+		       PHY_STOPSTATEDATA_2 |
+		       PHY_STOPSTATECLK;
+	case 4:
+		return PHY_STOPSTATEDATA_0 | PHY_STOPSTATEDATA_1 |
+		       PHY_STOPSTATEDATA_2 | PHY_STOPSTATEDATA_3 |
+		       PHY_STOPSTATECLK;
+	default:
+		return 0;
+	}
+}
+
+/**
+ * mipi_csi2_prepare_stream - prepare for streaming
+ *
+ * The sender should have put all lanes into LP-11 state (STOPSTATE) before
+ * this function is called, for example by returning them to the STOP state
+ * when streaming was last stopped. Afterwards, the sender shall start
+ * sending data in its s_stream().
+ */
+static int mipi_csi2_prepare_stream(struct v4l2_subdev *sd)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+	unsigned int stop_state;
+	int ret;
+
+	if (csi2->lanes > csi2->max_lanes) {
+		dev_err(csi2->dev, "%d lanes needed for format, %d supported\n",
+			csi2->lanes, csi2->max_lanes);
+		return -EINVAL;
+	}
+
+	stop_state = mipi_csi2_stop_state(csi2->lanes);
+	if (!stop_state)
+		return -EINVAL;
+
+	if (csi2->enabled) {
+		dev_warn(csi2->dev, "Already enabled, disabling\n");
+		mipi_csi2_disable(csi2);
+		usleep_range(1000, 2000);
+	}
+
+	/* Enable clocks */
+	mipi_csi2_enable(csi2);
+
+	/* Set number of lanes */
+	writel(csi2->lanes - 1, csi2->regs + MIPI_CSI2_N_LANES);
+
+	/* Reset D-PHY and host controller */
+	mipi_csi2_reset(csi2);
+
+	/* Wait for STOP state on all used lanes */
+	ret = mipi_csi2_wait(csi2, MIPI_CSI2_PHY_STATE,
+			     stop_state, stop_state);
+	if (ret < 0) {
+		dev_err(csi2->dev,
+			"Timeout waiting for lanes to enter STOP state\n");
+	}
+
+	return ret;
+}
+
+/**
+ * mipi_csi2_s_stream - start streaming
+ *
+ * When enabling streaming, the active lanes must have been detected to be in
+ * stop state in prepare_stream(), and the sender should now be sending data.
+ */
+static int mipi_csi2_s_stream(struct v4l2_subdev *sd, int enable)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+	struct v4l2_subdev *upstream_sd;
+	struct media_pad *pad;
+	int ret;
+
+	if (enable) {
+		ret = mipi_csi2_prepare_stream(sd);
+		if (ret)
+			return ret;
+	}
+
+	pad = media_entity_remote_pad(&sd->entity.pads[0]);
+	if (!pad) {
+		dev_err(sd->dev, "Failed to find remote source pad\n");
+		return -ENOLINK;
+	}
+
+	if (!is_media_entity_v4l2_subdev(pad->entity)) {
+		dev_err(sd->dev, "Upstream entity is not a v4l2 subdev\n");
+		return -ENODEV;
+	}
+
+	upstream_sd = media_entity_to_v4l2_subdev(pad->entity);
+
+	ret = v4l2_subdev_call(upstream_sd, video, s_stream, enable);
+	if (ret)
+		return ret;
+
+	if (enable) {
+		/* Wait for PHY lock */
+		ret = mipi_csi2_wait(csi2, MIPI_CSI2_PHY_STATE,
+				     PHY_RXCLKACTIVEHS, PHY_RXCLKACTIVEHS);
+		if (ret < 0) {
+			dev_err(csi2->dev, "Timeout waiting for PHY lock\n");
+			return ret;
+		}
+
+		/* Wait for error free transmission */
+		ret = mipi_csi2_wait(csi2, MIPI_CSI2_ERR1, ~0, 0);
+		if (ret < 0)
+			dev_err(csi2->dev, "Timeout waiting for ERR1 to clear\n");
+
+		clk_prepare_enable(csi2->pixel_clk);
+		csi2->streaming = true;
+	} else {
+		if (csi2->streaming) {
+			clk_disable_unprepare(csi2->pixel_clk);
+			csi2->streaming = false;
+		}
+		mipi_csi2_disable(csi2);
+	}
+
+	return 0;
+}
+
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+static int mipi_csi2_log_status(struct v4l2_subdev *sd)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+	u32 val;
+	int i;
+
+	v4l2_info(sd, "-----MIPI CSI-2 Host status-----\n");
+	for (i = MIPI_CSI2_VERSION; i < MIPI_CSI2_PHY_TST_CTRL0; i += 4) {
+		val = readl(csi2->regs + i);
+		v4l2_info(sd, "%02x: %08x\n", i, val);
+	}
+
+	v4l2_info(sd, "-----CSI2IPU gasket status------\n");
+	val = readl(csi2->regs + CSI2IPU_SW_RST);
+	v4l2_info(sd, "f00: %08x\n", val);
+
+	return 0;
+}
+
+static int mipi_csi2_g_register(struct v4l2_subdev *sd,
+				struct v4l2_dbg_register *reg)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+
+	if ((reg->reg % 4) ||
+	    (reg->reg > MIPI_CSI2_PHY_TST_CTRL1 && reg->reg != CSI2IPU_SW_RST))
+		return -EINVAL;
+
+	reg->val = readl(csi2->regs + reg->reg);
+	reg->size = 4;
+
+	return 0;
+}
+
+static int mipi_csi2_s_register(struct v4l2_subdev *sd,
+				const struct v4l2_dbg_register *reg)
+{
+	struct mipi_csi2 *csi2 = container_of(sd, struct mipi_csi2, subdev);
+
+	if ((reg->reg % 4) ||
+	    (reg->reg == MIPI_CSI2_VERSION) ||
+	    (reg->reg > MIPI_CSI2_PHY_TST_CTRL1 && reg->reg != CSI2IPU_SW_RST))
+		return -EINVAL;
+
+	writel(reg->val, csi2->regs + reg->reg);
+
+	return 0;
+}
+#endif
+
+static const struct v4l2_subdev_core_ops mipi_csi2_core_ops = {
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+	.log_status = mipi_csi2_log_status,
+	.g_register = mipi_csi2_g_register,
+	.s_register = mipi_csi2_s_register,
+#endif
+};
+
+static const struct v4l2_subdev_video_ops mipi_csi2_video_ops = {
+	.g_frame_interval = mipi_csi2_g_frame_interval,
+	.s_frame_interval = mipi_csi2_s_frame_interval,
+	.g_mbus_config = mipi_csi2_g_mbus_config,
+	.s_stream = mipi_csi2_s_stream,
+};
+
+static const struct v4l2_subdev_internal_ops mipi_csi2_internal_ops = {
+	.registered = v4l2_of_subdev_registered,
+};
+
+static struct v4l2_subdev_ops mipi_csi2_subdev_ops = {
+	.core = &mipi_csi2_core_ops,
+	.pad = &mipi_csi2_pad_ops,
+	.video = &mipi_csi2_video_ops,
+};
+
+static int mipi_csi2_link_setup(struct media_entity *entity,
+				const struct media_pad *local,
+				const struct media_pad *remote, u32 flags)
+{
+	return 0;
+}
+
+static struct media_entity_operations mipi_csi2_entity_ops = {
+	.link_setup = mipi_csi2_link_setup,
+};
+
+static int mipi_csi2_probe(struct platform_device *pdev)
+{
+	struct resource *res;
+	struct mipi_csi2 *csi2;
+	struct device_node *ep;
+	unsigned int version;
+	int ret;
+	u32 val;
+
+	csi2 = devm_kzalloc(&pdev->dev, sizeof(*csi2), GFP_KERNEL);
+	if (!csi2)
+		return -ENOMEM;
+
+	spin_lock_init(&csi2->lock);
+
+	csi2->dev = &pdev->dev;
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	csi2->regs = devm_ioremap_resource(csi2->dev, res);
+	if (IS_ERR(csi2->regs))
+		return PTR_ERR(csi2->regs);
+
+	csi2->pclk = devm_clk_get(csi2->dev, "pclk");
+	if (IS_ERR(csi2->pclk))
+		return PTR_ERR(csi2->pclk);
+
+	csi2->cfg_clk = devm_clk_get(csi2->dev, "cfg");
+	if (IS_ERR(csi2->cfg_clk))
+		return PTR_ERR(csi2->cfg_clk);
+
+	csi2->ref_clk = devm_clk_get(csi2->dev, "ref");
+	if (IS_ERR(csi2->ref_clk))
+		return PTR_ERR(csi2->ref_clk);
+
+	csi2->pixel_clk = devm_clk_get(csi2->dev, "pixel");
+	if (IS_ERR(csi2->pixel_clk))
+		return PTR_ERR(csi2->pixel_clk);
+
+	platform_set_drvdata(pdev, csi2);
+
+	v4l2_subdev_init(&csi2->subdev, &mipi_csi2_subdev_ops);
+	csi2->subdev.internal_ops = &mipi_csi2_internal_ops;
+	snprintf(csi2->subdev.name, sizeof(csi2->subdev.name), "mipi-csi2");
+	csi2->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+	csi2->subdev.dev = &pdev->dev;
+
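+	/*
+	 * Pad 0 is the MIPI CSI-2 input from the transmitter, pads 1 to 4 are
+	 * the CSI2IPU gasket outputs for virtual channels 0 to 3.
+	 */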
+	csi2->pads[0].flags = MEDIA_PAD_FL_SINK;
+	csi2->pads[1].flags = MEDIA_PAD_FL_SOURCE;
+	csi2->pads[2].flags = MEDIA_PAD_FL_SOURCE;
+	csi2->pads[3].flags = MEDIA_PAD_FL_SOURCE;
+	csi2->pads[4].flags = MEDIA_PAD_FL_SOURCE;
+	ret = media_entity_pads_init(&csi2->subdev.entity, MIPI_CSI2_PADS,
+				csi2->pads);
+	if (ret < 0)
+		return ret;
+
+	csi2->subdev.entity.ops = &mipi_csi2_entity_ops;
+
+	clk_prepare_enable(csi2->pclk);
+
+	version = readl(csi2->regs + MIPI_CSI2_VERSION);
+	if (version != MIPI_CSI2_VERSION_IMX6) {
+		dev_err(csi2->dev, "Unknown version: 0x%x\n", version);
+		clk_disable_unprepare(csi2->pclk);
+		return -ENODEV;
+	}
+
+	/* Maximal number of data lanes */
+	csi2->max_lanes = readl(csi2->regs + MIPI_CSI2_N_LANES) + 1;
+
+	ret = v4l2_async_register_subdev(&csi2->subdev);
+	if (ret < 0)
+		return ret;
+
+	for_each_endpoint_of_node(pdev->dev.of_node, ep) {
+		struct v4l2_of_endpoint endpoint;
+
+		v4l2_of_parse_endpoint(ep, &endpoint);
+		if (endpoint.base.port != 0)
+			continue;
+
+		if (endpoint.bus_type != V4L2_MBUS_CSI2) {
+			dev_err(csi2->dev, "port 0 endpoint is not MIPI CSI-2\n");
+			of_node_put(ep);
+			return -EINVAL;
+		}
+
+		if (csi2->max_lanes < endpoint.bus.mipi_csi2.num_data_lanes) {
+			dev_warn(csi2->dev, "only %d data lanes supported\n",
+				 csi2->max_lanes);
+		}
+
+		if (endpoint.bus.mipi_csi2.num_data_lanes &&
+		    endpoint.bus.mipi_csi2.num_data_lanes < csi2->max_lanes) {
+			csi2->max_lanes = endpoint.bus.mipi_csi2.num_data_lanes;
+			dev_dbg(csi2->dev, "only using %d data lanes\n",
+				csi2->max_lanes);
+		}
+	}
+
+	csi2->timeperframe.numerator = 1;
+	csi2->timeperframe.denominator = 60;
+	csi2->lanes = csi2->max_lanes;
+
+	/*
+	 * According to the i.MX6 Reference Manual, MIPI CSI-2 has to use
+	 * the 'non-gated' (noncontinuous) clock mode, still the CLK_SEL
+	 * bit in CSI2IPU_SW_RST needs to be cleared (marked as 'gated')
+	 */
+	val = readl(csi2->regs + CSI2IPU_SW_RST);
+	val &= ~CSI2IPU_CLK_SEL;
+	writel(val, csi2->regs + CSI2IPU_SW_RST);
+
+	return 0;
+}
+
+static int mipi_csi2_remove(struct platform_device *pdev)
+{
+	struct mipi_csi2 *csi2 = platform_get_drvdata(pdev);
+
+	v4l2_async_unregister_subdev(&csi2->subdev);
+
+	return 0;
+}
+
+static const struct of_device_id mipi_csi2_dt_ids[] = {
+	{ .compatible = "fsl,imx6q-mipi-csi2", },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, mipi_csi2_dt_ids);
+
+static struct platform_driver mipi_csi2_driver = {
+	.probe = mipi_csi2_probe,
+	.remove = mipi_csi2_remove,
+	.driver = {
+		.of_match_table = mipi_csi2_dt_ids,
+		.name = "mipi-csi2",
+		.owner = THIS_MODULE,
+	},
+};
+
+module_platform_driver(mipi_csi2_driver);
+
+MODULE_DESCRIPTION("i.MX MIPI CSI-2 Host Controller driver");
+MODULE_AUTHOR("Philipp Zabel, Pengutronix");
+MODULE_LICENSE("GPL");
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 11/21] [media] tc358743: put lanes in STOP state before starting streaming
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (9 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 10/21] [media] imx: Add i.MX MIPI CSI-2 " Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 12/21] ARM: dts: imx6qdl: Add capture-subsystem node Philipp Zabel
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Without calling tc358743_set_csi after stopping streaming (or calling
tc358743_s_dv_timings or tc358743_set_fmt from userspace after stopping
the stream), the i.MX6 MIPI CSI2 input fails waiting for lanes to enter
STOP state when streaming is started again.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Change since v1:
 - Put lanes in stop state again after stopping streaming
---
 drivers/media/i2c/tc358743.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 1e3a0dd2..7f82932 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -1466,6 +1466,10 @@ static int tc358743_g_mbus_config(struct v4l2_subdev *sd,
 static int tc358743_s_stream(struct v4l2_subdev *sd, int enable)
 {
 	enable_stream(sd, enable);
+	if (!enable) {
+		/* Put all lanes in LP-11 state (STOPSTATE) */
+		tc358743_set_csi(sd);
+	}
 
 	return 0;
 }
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 12/21] ARM: dts: imx6qdl: Add capture-subsystem node
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (10 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 11/21] [media] tc358743: put lanes in STOP state before starting streaming Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 13/21] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Philipp Zabel
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 arch/arm/boot/dts/imx6dl.dtsi | 5 +++++
 arch/arm/boot/dts/imx6q.dtsi  | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
index 9a4c22c..3c817de 100644
--- a/arch/arm/boot/dts/imx6dl.dtsi
+++ b/arch/arm/boot/dts/imx6dl.dtsi
@@ -100,6 +100,11 @@
 		};
 	};
 
+	capture-subsystem {
+		compatible = "fsl,imx-capture-subsystem";
+		ports = <&ipu1_csi0>, <&ipu1_csi1>;
+	};
+
 	display-subsystem {
 		compatible = "fsl,imx-display-subsystem";
 		ports = <&ipu1_di0>, <&ipu1_di1>;
diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
index c30c836..0c87a69 100644
--- a/arch/arm/boot/dts/imx6q.dtsi
+++ b/arch/arm/boot/dts/imx6q.dtsi
@@ -198,6 +198,11 @@
 		};
 	};
 
+	capture-subsystem {
+		compatible = "fsl,imx-capture-subsystem";
+		ports = <&ipu1_csi0>, <&ipu1_csi1>, <&ipu2_csi0>, <&ipu2_csi1>;
+	};
+
 	display-subsystem {
 		compatible = "fsl,imx-display-subsystem";
 		ports = <&ipu1_di0>, <&ipu1_di1>, <&ipu2_di0>, <&ipu2_di1>;
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 13/21] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (11 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 12/21] ARM: dts: imx6qdl: Add capture-subsystem node Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-11-08  8:23   ` Ying Liu
  2016-10-14 17:34 ` [PATCH v2 14/21] ARM: dts: imx6qdl: Add MIPI CSI-2 D-PHY compatible and clocks Philipp Zabel
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

This patch adds the device tree graph connecting the input multiplexers
to the IPU CSIs and the MIPI-CSI2 gasket on i.MX6. The MIPI_IPU
multiplexers are added as children of the iomuxc-gpr syscon device node.
On i.MX6Q/D, two two-input multiplexers in front of IPU1 CSI0 and IPU2
CSI1 allow selecting between the CSI0/1 parallel input pads and MIPI
CSI-2 virtual channels 0 and 3.
On i.MX6DL/S, two five-input multiplexers in front of IPU1 CSI0 and IPU1
CSI1 allow selecting between the CSI0/1 parallel input pads and any of
the four MIPI CSI-2 virtual channels.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 arch/arm/boot/dts/imx6dl.dtsi  | 182 +++++++++++++++++++++++++++++++++++++++++
 arch/arm/boot/dts/imx6q.dtsi   | 118 ++++++++++++++++++++++++++
 arch/arm/boot/dts/imx6qdl.dtsi |  10 ++-
 3 files changed, 309 insertions(+), 1 deletion(-)

diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
index 3c817de..7ed4efd6f 100644
--- a/arch/arm/boot/dts/imx6dl.dtsi
+++ b/arch/arm/boot/dts/imx6dl.dtsi
@@ -133,6 +133,188 @@
 		      "di0", "di1";
 };
 
+&gpr {
+	ipu_csi0_mux {
+		compatible = "video-multiplexer";
+		reg = <0x34>;
+		bit-mask = <0x7>;
+		bit-shift = <0>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
+			reg = <0>;
+
+			ipu_csi0_mux_from_mipi_csi0: endpoint {
+				remote-endpoint = <&mipi_csi0_to_ipu_csi0_mux>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			ipu_csi0_mux_from_mipi_csi1: endpoint {
+				remote-endpoint = <&mipi_csi1_to_ipu_csi0_mux>;
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			ipu_csi0_mux_from_mipi_csi2: endpoint {
+				remote-endpoint = <&mipi_csi2_to_ipu_csi0_mux>;
+			};
+		};
+
+		port@3 {
+			reg = <3>;
+
+			ipu_csi0_mux_from_mipi_csi3: endpoint {
+				remote-endpoint = <&mipi_csi3_to_ipu_csi0_mux>;
+			};
+		};
+
+		csi0: port@4 {
+			reg = <4>;
+		};
+
+		port@5 {
+			reg = <5>;
+
+			ipu_csi0_mux_to_ipu1_csi0: endpoint {
+				remote-endpoint = <&ipu1_csi0_from_ipu_csi0_mux>;
+			};
+		};
+	};
+
+	ipu_csi1_mux {
+		compatible = "video-multiplexer";
+		reg = <0x34>;
+		bit-mask = <0x7>;
+		bit-shift = <3>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
+			reg = <0>;
+
+			ipu_csi1_mux_from_mipi_csi0: endpoint {
+				remote-endpoint = <&mipi_csi0_to_ipu_csi1_mux>;
+			};
+		};
+
+		port@1 {
+			reg = <1>;
+
+			ipu_csi1_mux_from_mipi_csi1: endpoint {
+				remote-endpoint = <&mipi_csi1_to_ipu_csi1_mux>;
+			};
+		};
+
+		port@2 {
+			reg = <2>;
+
+			ipu_csi1_mux_from_mipi_csi2: endpoint {
+				remote-endpoint = <&mipi_csi2_to_ipu_csi1_mux>;
+			};
+		};
+
+		port@3 {
+			reg = <3>;
+
+			ipu_csi1_mux_from_mipi_csi3: endpoint {
+				remote-endpoint = <&mipi_csi3_to_ipu_csi1_mux>;
+			};
+		};
+
+		csi1: port@4 {
+			reg = <4>;
+		};
+
+		port@5 {
+			reg = <5>;
+
+			ipu_csi1_mux_to_ipu1_csi1: endpoint {
+				remote-endpoint = <&ipu1_csi1_from_ipu_csi1_mux>;
+			};
+		};
+	};
+};
+
+&ipu1_csi0 {
+	ipu1_csi0_from_ipu_csi0_mux: endpoint {
+		remote-endpoint = <&ipu_csi0_mux_to_ipu1_csi0>;
+	};
+};
+
+&ipu1_csi1 {
+	ipu1_csi1_from_ipu_csi1_mux: endpoint {
+		remote-endpoint = <&ipu_csi1_mux_to_ipu1_csi1>;
+	};
+};
+
+&mipi_csi {
+	port@0 {
+		reg = <0>;
+	};
+
+	port@1 {
+		reg = <1>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_csi0_to_ipu_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu_csi0_mux_from_mipi_csi0>;
+		};
+
+		mipi_csi0_to_ipu_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu_csi1_mux_from_mipi_csi0>;
+		};
+	};
+
+	port@2 {
+		reg = <2>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_csi1_to_ipu_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu_csi0_mux_from_mipi_csi1>;
+		};
+
+		mipi_csi1_to_ipu_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu_csi1_mux_from_mipi_csi1>;
+		};
+	};
+
+	port@3 {
+		reg = <3>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_csi2_to_ipu_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu_csi0_mux_from_mipi_csi2>;
+		};
+
+		mipi_csi2_to_ipu_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu_csi1_mux_from_mipi_csi2>;
+		};
+	};
+
+	port@4 {
+		reg = <4>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		mipi_csi3_to_ipu_csi0_mux: endpoint@0 {
+			remote-endpoint = <&ipu_csi0_mux_from_mipi_csi3>;
+		};
+
+		mipi_csi3_to_ipu_csi1_mux: endpoint@1 {
+			remote-endpoint = <&ipu_csi1_mux_from_mipi_csi3>;
+		};
+	};
+};
+
 &vpu {
 	compatible = "fsl,imx6dl-vpu", "cnm,coda960";
 };
diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
index 0c87a69..675723b 100644
--- a/arch/arm/boot/dts/imx6q.dtsi
+++ b/arch/arm/boot/dts/imx6q.dtsi
@@ -143,10 +143,18 @@
 
 			ipu2_csi0: port@0 {
 				reg = <0>;
+
+				ipu2_csi0_from_csi2ipu: endpoint {
+					remote-endpoint = <&csi2ipu_to_ipu2_csi0>;
+				};
 			};
 
 			ipu2_csi1: port@1 {
 				reg = <1>;
+
+				ipu2_csi1_from_mipi_ipu2_mux: endpoint {
+					remote-endpoint = <&mipi_ipu2_mux_to_ipu2_csi1>;
+				};
 			};
 
 			ipu2_di0: port@2 {
@@ -234,6 +242,78 @@
 	};
 };
 
+&gpr {
+	mipi_ipu1_mux {
+		compatible = "video-multiplexer";
+		reg = <0x04>;
+		bit-mask = <1>;
+		bit-shift = <19>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
+			reg = <0>;
+
+			mipi_ipu1_mux_from_mipi_csi0: endpoint {
+				remote-endpoint = <&mipi_csi0_to_mipi_ipu1_mux>;
+			};
+		};
+
+		csi0: port@1 {
+			reg = <1>;
+		};
+
+		port@2 {
+			reg = <2>;
+
+			mipi_ipu1_mux_to_ipu1_csi0: endpoint {
+				remote-endpoint = <&ipu1_csi0_from_mipi_ipu1_mux>;
+			};
+		};
+	};
+
+	mipi_ipu2_mux {
+		compatible = "video-multiplexer";
+		reg = <0x04>;
+		bit-mask = <1>;
+		bit-shift = <20>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
+			reg = <0>;
+
+			mipi_ipu2_mux_from_mipi_csi3: endpoint {
+				remote-endpoint = <&mipi_csi3_to_mipi_ipu2_mux>;
+			};
+		};
+
+		csi1: port@1 {
+			reg = <1>;
+		};
+
+		port@2 {
+			reg = <2>;
+
+			mipi_ipu2_mux_to_ipu2_csi1: endpoint {
+				remote-endpoint = <&ipu2_csi1_from_mipi_ipu2_mux>;
+			};
+		};
+	};
+};
+
+&ipu1_csi1 {
+	ipu1_csi1_from_mipi_csi1: endpoint {
+		remote-endpoint = <&mipi_csi1_to_ipu1_csi1>;
+	};
+};
+
+&ipu2_csi0 {
+	ipu2_csi0_from_mipi_csi2: endpoint {
+		remote-endpoint = <&mipi_csi2_to_ipu2_csi0>;
+	};
+};
+
 &ldb {
 	clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, <&clks IMX6QDL_CLK_LDB_DI1_SEL>,
 		 <&clks IMX6QDL_CLK_IPU1_DI0_SEL>, <&clks IMX6QDL_CLK_IPU1_DI1_SEL>,
@@ -280,6 +360,44 @@
 	};
 };
 
+&mipi_csi {
+	port@0 {
+		reg = <0>;
+	};
+
+	port@1 {
+		reg = <1>;
+
+		mipi_csi0_to_mipi_ipu1_mux: endpoint {
+			remote-endpoint = <&mipi_ipu1_mux_from_mipi_csi0>;
+		};
+	};
+
+	port@2 {
+		reg = <2>;
+
+		mipi_csi1_to_ipu1_csi1: endpoint {
+			remote-endpoint = <&ipu1_csi1_from_mipi_csi1>;
+		};
+	};
+
+	port@3 {
+		reg = <3>;
+
+		mipi_csi2_to_ipu2_csi0: endpoint {
+			remote-endpoint = <&ipu2_csi0_from_mipi_csi2>;
+		};
+	};
+
+	port@4 {
+		reg = <4>;
+
+		mipi_csi3_to_mipi_ipu2_mux: endpoint {
+			remote-endpoint = <&mipi_ipu2_mux_from_mipi_csi3>;
+		};
+	};
+};
+
 &mipi_dsi {
 	ports {
 		port@2 {
diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
index b13b0b2..cd325bd 100644
--- a/arch/arm/boot/dts/imx6qdl.dtsi
+++ b/arch/arm/boot/dts/imx6qdl.dtsi
@@ -798,8 +798,10 @@
 			};
 
 			gpr: iomuxc-gpr@020e0000 {
-				compatible = "fsl,imx6q-iomuxc-gpr", "syscon";
+				compatible = "fsl,imx6q-iomuxc-gpr", "syscon", "simple-mfd";
 				reg = <0x020e0000 0x38>;
+				#address-cells = <1>;
+				#size-cells = <0>;
 			};
 
 			iomuxc: iomuxc@020e0000 {
@@ -1122,6 +1124,8 @@
 
 			mipi_csi: mipi@021dc000 {
 				reg = <0x021dc000 0x4000>;
+				#address-cells = <1>;
+				#size-cells = <0>;
 			};
 
 			mipi_dsi: mipi@021e0000 {
@@ -1221,6 +1225,10 @@
 
 			ipu1_csi0: port@0 {
 				reg = <0>;
+
+				ipu1_csi0_from_mipi_ipu1_mux: endpoint {
+					remote-endpoint = <&mipi_ipu1_mux_to_ipu1_csi0>;
+				};
 			};
 
 			ipu1_csi1: port@1 {
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 14/21] ARM: dts: imx6qdl: Add MIPI CSI-2 D-PHY compatible and clocks
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (12 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 13/21] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 15/21] ARM: dts: nitrogen6x: Add dtsi for BD_HDMI_MIPI HDMI to MIPI CSI-2 receiver board Philipp Zabel
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

From the data sheets it is not quite clear what the clock inputs should
be named, but freescale code calls them "dphy_clk" (would that be per?)
and "pixel_clk" and connects them to the mipi_core_cfg and emi_podf
clocks, respectively.  The mipi_core_cfg control is called hsi_tx
currently, but it really gates a whole lot of other clocks, too.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 arch/arm/boot/dts/imx6qdl.dtsi | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
index cd325bd..2be6de4 100644
--- a/arch/arm/boot/dts/imx6qdl.dtsi
+++ b/arch/arm/boot/dts/imx6qdl.dtsi
@@ -1123,9 +1123,16 @@
 			};
 
 			mipi_csi: mipi@021dc000 {
+				compatible = "fsl,imx6q-mipi-csi2", "snps,dw-mipi-csi2";
 				reg = <0x021dc000 0x4000>;
+				clocks = <&clks IMX6QDL_CLK_HSI_TX>,	/* mipi_core_cfg/ipg_clk_root */
+					 <&clks IMX6QDL_CLK_HSI_TX>,	/* mipi_core_cfg/video_27m_clk_root */
+					 <&clks IMX6QDL_CLK_HSI_TX>,	/* mipi_core_cfg/video_27m_clk_root */
+					 <&clks IMX6QDL_CLK_EIM_PODF>;	/* should be ipu1_ipu_hsp_clk_root on S/DL, axi_clk_root on D/Q */
+				clock-names = "pclk", "cfg", "ref", "pixel";
 				#address-cells = <1>;
 				#size-cells = <0>;
+				status = "disabled";
 			};
 
 			mipi_dsi: mipi@021e0000 {
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 15/21] ARM: dts: nitrogen6x: Add dtsi for BD_HDMI_MIPI HDMI to MIPI CSI-2 receiver board
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (13 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 14/21] ARM: dts: imx6qdl: Add MIPI CSI-2 D-PHY compatible and clocks Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 16/21] gpu: ipuv3: add ipu_csi_set_downsize Philipp Zabel
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Add device tree nodes for the BD_HDMI_MIPI HDMI to MIPI CSI-2 receiver
board with a TC358743 connected to the Nitrogen6X MIPI CSI-2 input
connector.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 .../boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi  | 73 ++++++++++++++++++++++
 1 file changed, 73 insertions(+)
 create mode 100644 arch/arm/boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi

diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi
new file mode 100644
index 0000000..e110874
--- /dev/null
+++ b/arch/arm/boot/dts/imx6qdl-nitrogen6x-bd-hdmi-mipi.dtsi
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2015 Philipp Zabel, Pengutronix
+ *
+ * The code contained herein is licensed under the GNU General Public
+ * License. You may obtain a copy of the GNU General Public License
+ * Version 2 or later at the following locations:
+ *
+ * http://www.opensource.org/licenses/gpl-license.html
+ * http://www.gnu.org/copyleft/gpl.html
+ */
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+
+/ {
+	hdmi_osc: hdmi-osc {
+		compatible = "fixed-clock";
+		clock-output-names = "hdmi-osc";
+		clock-frequency = <27000000>;
+		#clock-cells = <0>;
+	};
+};
+
+&i2c2 {
+	tc358743: tc358743@0f {
+		compatible = "toshiba,tc358743";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_tc358743>;
+		reg = <0x0f>;
+		clocks = <&hdmi_osc>;
+		clock-names = "refclk";
+		reset-gpios = <&gpio6 9 GPIO_ACTIVE_LOW>;
+		interrupt-parent = <&gpio2>;
+		interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
+
+		port@0 {
+			tc358743_out: endpoint {
+				remote-endpoint = <&mipi_csi2_in>;
+				data-lanes = <1 2 3 4>;
+				clock-lanes = <0>;
+				clock-noncontinuous;
+				link-frequencies = /bits/ 64 <297000000>;
+			};
+		};
+	};
+};
+
+&iomuxc {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_hog>;
+
+	imx6q-nitrogen6x-tc358743 {
+		pinctrl_tc358743: tc358743grp {
+			fsl,pins = <
+				MX6QDL_PAD_NANDF_WP_B__GPIO6_IO09 0x4000b0b0	/* RESETN */
+				MX6QDL_PAD_NANDF_D5__GPIO2_IO05   0x400130b0	/* INT */
+			>;
+		};
+	};
+};
+
+&mipi_csi {
+	status = "okay";
+
+	port@0 {
+		mipi_csi2_in: endpoint {
+			remote-endpoint = <&tc358743_out>;
+			data-lanes = <1 2 3 4>;
+			clock-lanes = <0>;
+			clock-noncontinuous;
+			link-frequencies = /bits/ 64 <297000000>;
+		};
+	};
+};
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 16/21] gpu: ipuv3: add ipu_csi_set_downsize
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (14 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 15/21] ARM: dts: nitrogen6x: Add dtsi for BD_HDMI_MIPI HDMI to MIPI CSI-2 receiver board Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 17/21] [media] imx-ipuv3-csi: support downsizing Philipp Zabel
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Support downsizing to 1/2 width and/or height in the CSI.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/gpu/ipu-v3/ipu-csi.c | 16 ++++++++++++++++
 include/video/imx-ipu-v3.h   |  1 +
 2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/ipu-v3/ipu-csi.c b/drivers/gpu/ipu-v3/ipu-csi.c
index 06631ac..77f172e 100644
--- a/drivers/gpu/ipu-v3/ipu-csi.c
+++ b/drivers/gpu/ipu-v3/ipu-csi.c
@@ -527,6 +527,22 @@ void ipu_csi_set_window(struct ipu_csi *csi, struct v4l2_rect *w)
 }
 EXPORT_SYMBOL_GPL(ipu_csi_set_window);
 
+void ipu_csi_set_downsize(struct ipu_csi *csi, bool horiz, bool vert)
+{
+	unsigned long flags;
+	u32 reg;
+
+	spin_lock_irqsave(&csi->lock, flags);
+
+	reg = ipu_csi_read(csi, CSI_OUT_FRM_CTRL);
+	reg &= ~(CSI_HORI_DOWNSIZE_EN | CSI_VERT_DOWNSIZE_EN);
+	reg |= (horiz ? CSI_HORI_DOWNSIZE_EN : 0) |
+	       (vert ? CSI_VERT_DOWNSIZE_EN : 0);
+	ipu_csi_write(csi, reg, CSI_OUT_FRM_CTRL);
+
+	spin_unlock_irqrestore(&csi->lock, flags);
+}
+EXPORT_SYMBOL_GPL(ipu_csi_set_downsize);
+
 void ipu_csi_set_test_generator(struct ipu_csi *csi, bool active,
 				u32 r_value, u32 g_value, u32 b_value,
 				u32 pix_clk)
diff --git a/include/video/imx-ipu-v3.h b/include/video/imx-ipu-v3.h
index 7adeaae0..5f2f26d 100644
--- a/include/video/imx-ipu-v3.h
+++ b/include/video/imx-ipu-v3.h
@@ -271,6 +271,7 @@ int ipu_csi_init_interface(struct ipu_csi *csi,
 bool ipu_csi_is_interlaced(struct ipu_csi *csi);
 void ipu_csi_get_window(struct ipu_csi *csi, struct v4l2_rect *w);
 void ipu_csi_set_window(struct ipu_csi *csi, struct v4l2_rect *w);
+void ipu_csi_set_downsize(struct ipu_csi *csi, bool horiz, bool vert);
 void ipu_csi_set_test_generator(struct ipu_csi *csi, bool active,
 				u32 r_value, u32 g_value, u32 b_value,
 				u32 pix_clk);
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 17/21] [media] imx-ipuv3-csi: support downsizing
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (15 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 16/21] gpu: ipuv3: add ipu_csi_set_downsize Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 18/21] [media] add mux and video interface bridge entity functions Philipp Zabel
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Add support for the CSI internal horizontal and vertical downsizing.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
Changes since v1:
 - Rebased onto CSI changes.
---
 drivers/media/platform/imx/imx-ipuv3-csi.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/media/platform/imx/imx-ipuv3-csi.c b/drivers/media/platform/imx/imx-ipuv3-csi.c
index 7837978..c65f02c 100644
--- a/drivers/media/platform/imx/imx-ipuv3-csi.c
+++ b/drivers/media/platform/imx/imx-ipuv3-csi.c
@@ -175,8 +175,14 @@ static int ipucsi_subdev_set_format(struct v4l2_subdev *sd,
 	} else {
 		struct v4l2_mbus_framefmt *infmt = &ipucsi->format_mbus[0];
 
-		width = infmt->width;
-		height = infmt->height;
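+		/*
+		 * The CSI can only downsize by a factor of two. Use half the
+		 * input resolution if the requested size is closer to half
+		 * than to the full input size (below 3/4 of it), otherwise
+		 * keep the full input size.
+		 */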
+		if (sdformat->format.width < (infmt->width * 3 / 4))
+			width = infmt->width / 2;
+		else
+			width = infmt->width;
+		if (sdformat->format.height < (infmt->height * 3 / 4))
+			height = infmt->height / 2;
+		else
+			height = infmt->height;
 		mbusformat->field = infmt->field;
 		mbusformat->colorspace = infmt->colorspace;
 	}
@@ -237,14 +243,14 @@ static int ipucsi_subdev_s_stream(struct v4l2_subdev *sd, int enable)
 		window.width = fmt[0].width;
 		window.height = fmt[0].height;
 		ipu_csi_set_window(ipucsi->csi, &window);
+		ipu_csi_set_downsize(ipucsi->csi,
+				     fmt[0].width == 2 * fmt[1].width,
+				     fmt[0].height == 2 * fmt[1].height);
 
 		/* Is CSI data source MCT (MIPI)? */
 		mux_mct = (mbus_config.type == V4L2_MBUS_CSI2);
-
 		ipu_set_csi_src_mux(ipucsi->ipu, ipucsi->id, mux_mct);
-		if (mux_mct)
-			ipu_csi_set_mipi_datatype(ipucsi->csi, /*VC*/ 0,
-						  &fmt[0]);
+		ipu_csi_set_mipi_datatype(ipucsi->csi, /*VC*/ 0, &fmt[0]);
 
 		ret = ipu_csi_init_interface(ipucsi->csi, &mbus_config,
 					     &fmt[0]);
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 18/21] [media] add mux and video interface bridge entity functions
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (16 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 17/21] [media] imx-ipuv3-csi: support downsizing Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 19/21] [media] video-multiplexer: set entity function to mux Philipp Zabel
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 Documentation/media/uapi/mediactl/media-types.rst | 22 ++++++++++++++++++++++
 include/uapi/linux/media.h                        |  6 ++++++
 2 files changed, 28 insertions(+)

diff --git a/Documentation/media/uapi/mediactl/media-types.rst b/Documentation/media/uapi/mediactl/media-types.rst
index 3e03dc2..023be29 100644
--- a/Documentation/media/uapi/mediactl/media-types.rst
+++ b/Documentation/media/uapi/mediactl/media-types.rst
@@ -298,6 +298,28 @@ Types and flags used to represent the media graph elements
 	  received on its sink pad and outputs the statistics data on
 	  its source pad.
 
+    -  ..  row 29
+
+       ..  _MEDIA-ENT-F-MUX:
+
+       -  ``MEDIA_ENT_F_MUX``
+
+       - Video multiplexer. An entity capable of multiplexing must have at
+         least two sink pads and one source pad, and must pass the video
+         frame(s) received from the active sink pad to the source pad. Video
+         frame(s) from the inactive sink pads are discarded.
+
+    -  ..  row 30
+
+       ..  _MEDIA-ENT-F-VID-IF-BRIDGE:
+
+       -  ``MEDIA_ENT_F_VID_IF_BRIDGE``
+
+       - Video interface bridge. A video interface bridge entity must have at
+         least one sink pad and one source pad. It receives video frame(s) on
+         its sink pad in one bus format (HDMI, eDP, MIPI CSI-2, ...) and
+         converts them and outputs them on its source pad in another bus format
+         (eDP, MIPI CSI-2, parallel, ...).
 
 ..  tabularcolumns:: |p{5.5cm}|p{12.0cm}|
 
diff --git a/include/uapi/linux/media.h b/include/uapi/linux/media.h
index 4890787..08a8bfa 100644
--- a/include/uapi/linux/media.h
+++ b/include/uapi/linux/media.h
@@ -105,6 +105,12 @@ struct media_device_info {
 #define MEDIA_ENT_F_PROC_VIDEO_STATISTICS	(MEDIA_ENT_F_BASE + 0x4006)
 
 /*
+ * Switch and bridge entities
+ */
+#define MEDIA_ENT_F_MUX				(MEDIA_ENT_F_BASE + 0x5001)
+#define MEDIA_ENT_F_VID_IF_BRIDGE		(MEDIA_ENT_F_BASE + 0x5002)
+
+/*
  * Connectors
  */
 /* It is a responsibility of the entity drivers to add connectors and links */
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 19/21] [media] video-multiplexer: set entity function to mux
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (17 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 18/21] [media] add mux and video interface bridge entity functions Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 20/21] [media] imx: Set i.MX MIPI CSI-2 entity function to bridge Philipp Zabel
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/media/platform/video-multiplexer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/media/platform/video-multiplexer.c b/drivers/media/platform/video-multiplexer.c
index f79b90e..466d9d0 100644
--- a/drivers/media/platform/video-multiplexer.c
+++ b/drivers/media/platform/video-multiplexer.c
@@ -138,6 +138,7 @@ static int vidsw_async_init(struct vidsw *vidsw, struct device_node *node)
 		vidsw->pads[i].flags = MEDIA_PAD_FL_SINK;
 	vidsw->pads[numports - 1].flags = MEDIA_PAD_FL_SOURCE;
 
+	vidsw->subdev.entity.function = MEDIA_ENT_F_MUX;
 	ret = media_entity_pads_init(&vidsw->subdev.entity, numports,
 				     vidsw->pads);
 	if (ret < 0)
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 20/21] [media] imx: Set i.MX MIPI CSI-2 entity function to bridge
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (18 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 19/21] [media] video-multiplexer: set entity function to mux Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-14 17:34 ` [PATCH v2 21/21] [media] tc358743: set entity function to video interface bridge Philipp Zabel
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

The i.MX6 MIPI CSI2 bridge converts the external MIPI CSI2 input into
a SoC internal parallel bus connected to the IPU CSIs via the CSI2IPU
gasket.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/media/platform/imx/imx-mipi-csi2.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/media/platform/imx/imx-mipi-csi2.c b/drivers/media/platform/imx/imx-mipi-csi2.c
index 7b289cc..6b00a67 100644
--- a/drivers/media/platform/imx/imx-mipi-csi2.c
+++ b/drivers/media/platform/imx/imx-mipi-csi2.c
@@ -606,6 +606,7 @@ static int mipi_csi2_probe(struct platform_device *pdev)
 	csi2->pads[2].flags = MEDIA_PAD_FL_SOURCE;
 	csi2->pads[3].flags = MEDIA_PAD_FL_SOURCE;
 	csi2->pads[4].flags = MEDIA_PAD_FL_SOURCE;
+	csi2->subdev.entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
 	ret = media_entity_pads_init(&csi2->subdev.entity, MIPI_CSI2_PADS,
 				csi2->pads);
 	if (ret < 0)
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH v2 21/21] [media] tc358743: set entity function to video interface bridge
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (19 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 20/21] [media] imx: Set i.MX MIPI CSI-2 entity function to bridge Philipp Zabel
@ 2016-10-14 17:34 ` Philipp Zabel
  2016-10-17 10:18 ` [PATCH v2 00/21] Basic i.MX IPUv3 capture support Gary Bisson
  2016-10-19 21:30 ` Sakari Ailus
  22 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-14 17:34 UTC (permalink / raw)
  To: linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Philipp Zabel

The TC358743 is an HDMI to MIPI CSI-2 bridge.

Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
---
 drivers/media/i2c/tc358743.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 7f82932..259ccf6 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -1886,6 +1886,7 @@ static int tc358743_probe(struct i2c_client *client,
 	}
 
 	state->pad.flags = MEDIA_PAD_FL_SOURCE;
+	sd->entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
 	err = media_entity_pads_init(&sd->entity, 1, &state->pad);
 	if (err < 0)
 		goto err_hdl;
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (20 preceding siblings ...)
  2016-10-14 17:34 ` [PATCH v2 21/21] [media] tc358743: set entity function to video interface bridge Philipp Zabel
@ 2016-10-17 10:18 ` Gary Bisson
  2016-10-17 12:41   ` Philipp Zabel
  2016-10-19 21:30 ` Sakari Ailus
  22 siblings, 1 reply; 61+ messages in thread
From: Gary Bisson @ 2016-10-17 10:18 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil, kernel

Hi Philipp,

On Fri, Oct 14, 2016 at 07:34:20PM +0200, Philipp Zabel wrote:
> Hi,
> 
> the second round removes the prepare_stream callback and instead lets the
> intermediate subdevices propagate s_stream calls to their sources rather
> than individually calling s_stream on each subdevice from the bridge driver.
> This is similar to how drm bridges recursively call into their next neighbor.
> It makes it easier to do bringup ordering on a per-link level, as long as the
> source preparation can be done at s_power, and the sink can just prepare, call
> s_stream on its source, and then enable itself inside s_stream. Obviously this
> would only work in a generic fashion if all asynchronous subdevices with both
> inputs and outputs would propagate s_stream to their source subdevices.
> 
> Changes since v1:
>  - Propagate field and colorspace in ipucsi_subdev_set_format.
>  - Remove v4l2_media_subdev_prepare_stream and v4l2_media_subdev_s_stream,
>    let subdevices propagate s_stream calls to their upstream subdevices
>    themselves.
>  - Various fixes (see individual patches for details)

For the whole series:
Tested-by: Gary Bisson <gary.bisson@boundarydevices.com>

Tested on Nitrogen6x + BD_HDMI_MIPI daughter board on linux-next
20161016.

This required using your v4l2-ctl patch to set the EDID if the source
output can't be forced:
https://patchwork.kernel.org/patch/6097201/
BTW, do you have any update on this? Because it looks like the
VIDIOC_SUBDEV_QUERYCAP hasn't been implemented since your patch (March
2015).

Then I followed the procedure you gave here:
https://patchwork.kernel.org/patch/9366503/

For those interested in trying it out, note that kmssink requires to use
Gstreamer 1.9.x.

I have a few remarks:
- I believe it would help having a patch that sets imx_v6_v7_defconfig
  with the proper options in this series
- Not related to this series, I couldn't boot the board unless I disable
  the PCIe driver, have you experienced the same issue?
- Is there a way not to set all the links manually using media-ctl? I
  expected all the formats to be negotiated automatically once a stream
  is properly detected.
- As discussed last week, the Nitrogen6x dtsi file shouldn't be
  included, instead an overlay would be more appropriate. Maybe the log
  should contain a comment about this.

Let me know if I need to add that Tested-by to every single patch so it
appears on Patchwork.

Regards,
Gary

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-14 17:34 ` [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver Philipp Zabel
@ 2016-10-17 11:32   ` Jack Mitchell
  2016-10-17 11:35     ` Marek Vasut
  2016-10-17 12:12     ` Philipp Zabel
  2016-11-08  7:21   ` Ying Liu
  1 sibling, 2 replies; 61+ messages in thread
From: Jack Mitchell @ 2016-10-17 11:32 UTC (permalink / raw)
  To: Philipp Zabel, linux-media
  Cc: Steve Longerbeam, Marek Vasut, Hans Verkuil, Gary Bisson, kernel,
	Sascha Hauer, Marc Kleine-Budde

Hi Philipp,

I'm looking at how I would enable a parallel greyscale camera using this
set of drivers and am a little bit confused. Do you have an example
somewhere of a devicetree with an input node? I also have a further note
below:

<snip>

> +
> +static int ipu_capture_start_streaming(struct vb2_queue *vq, unsigned int count)
> +{
> +	struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +	struct v4l2_subdev *csi_sd = priv->csi_sd;
> +	u32 width = priv->format.fmt.pix.width;
> +	u32 height = priv->format.fmt.pix.height;
> +	struct device *dev = priv->dev;
> +	int burstsize;
> +	struct ipu_capture_buffer *buf;
> +	int nfack_irq;
> +	int ret;
> +	const char *irq_name[2] = { "CSI0", "CSI1" };
> +	bool raw;
> +
> +	ret = ipu_capture_get_resources(priv);
> +	if (ret < 0) {
> +		dev_err(dev, "Failed to get resources: %d\n", ret);
> +		goto err_dequeue;
> +	}
> +
> +	ipu_cpmem_zero(priv->ipuch);
> +
> +	nfack_irq = ipu_idmac_channel_irq(priv->ipu, priv->ipuch,
> +					  IPU_IRQ_NFACK);
> +	ret = request_threaded_irq(nfack_irq, NULL,
> +				   ipu_capture_new_frame_handler, IRQF_ONESHOT,
> +				   irq_name[priv->id], priv);
> +	if (ret) {
> +		dev_err(dev, "Failed to request NFACK interrupt: %d\n", nfack_irq);
> +		goto put_resources;
> +	}
> +
> +	dev_dbg(dev, "width: %d height: %d, %.4s\n",
> +		width, height, (char *)&priv->format.fmt.pix.pixelformat);
> +
> +	ipu_cpmem_set_resolution(priv->ipuch, width, height);
> +
> +	raw = false;
> +
> +	if (raw && priv->smfc) {

How does this ever get used? If I were to set 1X8 greyscale it wouldn't 
ever take this path, correct?

> +		/*
> +		 * raw formats. We can only pass them through to memory
> +		 */
> +		u32 fourcc = priv->format.fmt.pix.pixelformat;
> +		int bytes;
> +
> +		switch (fourcc) {
> +		case V4L2_PIX_FMT_GREY:
> +			bytes = 1;
> +			break;
> +		case V4L2_PIX_FMT_Y10:
> +		case V4L2_PIX_FMT_Y16:
> +		case V4L2_PIX_FMT_UYVY:
> +		case V4L2_PIX_FMT_YUYV:
> +			bytes = 2;
> +			break;
> +		}
> +
> +		ipu_cpmem_set_stride(priv->ipuch, width * bytes);
> +		ipu_cpmem_set_format_passthrough(priv->ipuch, bytes * 8);
> +		/*
> +		 * According to table 37-727 (SMFC Burst Size), burstsize should
> +		 * be set to NBP[6:4] for PFS == 6. Unfortunately, with a 16-bit
> +		 * bus any value below 4 doesn't produce proper images.
> +		 */
> +		burstsize = (64 / bytes) >> 3;
> +	} else {
> +		/*
> +		 * formats we understand, we can write it in any format not requiring
> +		 * colorspace conversion.
> +		 */
> +		u32 fourcc = priv->format.fmt.pix.pixelformat;
> +
> +		switch (fourcc) {
> +		case V4L2_PIX_FMT_RGB32:
> +			ipu_cpmem_set_stride(priv->ipuch, width * 4);
> +			ipu_cpmem_set_fmt(priv->ipuch, fourcc);
> +			break;
> +		case V4L2_PIX_FMT_UYVY:
> +		case V4L2_PIX_FMT_YUYV:
> +			ipu_cpmem_set_stride(priv->ipuch, width * 2);
> +			ipu_cpmem_set_yuv_interleaved(priv->ipuch, fourcc);
> +			break;
> +		case V4L2_PIX_FMT_YUV420:
> +		case V4L2_PIX_FMT_YVU420:
> +		case V4L2_PIX_FMT_NV12:
> +		case V4L2_PIX_FMT_YUV422P:
> +			ipu_cpmem_set_stride(priv->ipuch, width);
> +			ipu_cpmem_set_fmt(priv->ipuch, fourcc);
> +			ipu_cpmem_set_yuv_planar(priv->ipuch, fourcc,
> +						 width, height);
> +			burstsize = 16;
> +			break;
> +		default:
> +			dev_err(dev, "invalid color format: %4.4s\n",
> +				(char *)&fourcc);
> +			ret = -EINVAL;
> +			goto free_irq;
> +		}
> +	}
> +

<snip>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-17 11:32   ` Jack Mitchell
@ 2016-10-17 11:35     ` Marek Vasut
  2016-10-17 11:40       ` Jack Mitchell
  2016-10-17 12:12     ` Philipp Zabel
  1 sibling, 1 reply; 61+ messages in thread
From: Marek Vasut @ 2016-10-17 11:35 UTC (permalink / raw)
  To: Jack Mitchell, Philipp Zabel, linux-media
  Cc: Steve Longerbeam, Hans Verkuil, Gary Bisson, kernel,
	Sascha Hauer, Marc Kleine-Budde

On 10/17/2016 01:32 PM, Jack Mitchell wrote:
> Hi Philipp,

Hi,

> I'm looking at how I would enable a parallel greyscale camera using this
> set of drivers and am a little bit confused. Do you have an example
> somewhere of a devicetree with an input node. I also have a further note
> below:

Which sensor do you use ?

[...]

-- 
Best regards,
Marek Vasut

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-17 11:35     ` Marek Vasut
@ 2016-10-17 11:40       ` Jack Mitchell
  0 siblings, 0 replies; 61+ messages in thread
From: Jack Mitchell @ 2016-10-17 11:40 UTC (permalink / raw)
  To: Marek Vasut, Philipp Zabel, linux-media
  Cc: Steve Longerbeam, Hans Verkuil, Gary Bisson, kernel,
	Sascha Hauer, Marc Kleine-Budde



On 17/10/16 12:35, Marek Vasut wrote:
> On 10/17/2016 01:32 PM, Jack Mitchell wrote:
>> Hi Philipp,
>
> Hi,
>
>> I'm looking at how I would enable a parallel greyscale camera using this
>> set of drivers and am a little bit confused. Do you have an example
>> somewhere of a devicetree with an input node. I also have a further note
>> below:
>
> Which sensor do you use ?
>
> [...]
>

One which has a driver yet to be written :)

The initial prototype board has an onsemi ar0135 on the parallel interface.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-17 11:32   ` Jack Mitchell
  2016-10-17 11:35     ` Marek Vasut
@ 2016-10-17 12:12     ` Philipp Zabel
  2016-10-19 16:22       ` Jack Mitchell
  1 sibling, 1 reply; 61+ messages in thread
From: Philipp Zabel @ 2016-10-17 12:12 UTC (permalink / raw)
  To: Jack Mitchell
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil,
	Gary Bisson, kernel, Sascha Hauer, Marc Kleine-Budde

Hi Jack,

Am Montag, den 17.10.2016, 12:32 +0100 schrieb Jack Mitchell:
> Hi Philipp,
> 
> I'm looking at how I would enable a parallel greyscale camera using this 
> set of drivers and am a little bit confused. Do you have an example 
> somewhere of a devicetree with an input node.

In your board device tree it should look somewhat like this:

&i2c1 {
	sensor@48 {
		compatible = "aptina,mt9v032m";
		/* ... */

		port {
			cam_out: endpoint {
				remote-endpoint = <&csi_in>;
			};
		};
	};
};

/*
 * This is the input port node corresponding to the 'CSI0' pad group,
 * not necessarily the CSI0 port of IPU1 or IPU2. On i.MX6Q it's port@1
 * of the mipi_ipu1_mux, on i.MX6DL it's port@4 of the ipu_csi0_mux,
 * the csi0 label is added in patch 13/21.
 */
&csi0 {
	#address-cells = <1>;
	#size-cells = <0>;

	csi_in: endpoint@0 {
		bus-width = <8>;
		data-shift = <12>;
		hsync-active = <1>;
		vsync-active = <1>;
		pclk-sample = <1>;
		remote-endpoint = <&cam_out>;
	};
};

>  I also have a further note below:
[...]
> > +	if (raw && priv->smfc) {
> 
> How does this ever get used? If I were to set 1X8 greyscale it wouldn't 
> ever take this path, correct?

Thank you, that is a leftover from stripping down the driver to the
basics. I'll test with a grayscale camera and fix this in the next
version.
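
An untested sketch of the direction I have in mind, deriving 'raw' from
the configured pixel format instead of hardcoding it to false (the
exact list of formats may still change):

	switch (priv->format.fmt.pix.pixelformat) {
	case V4L2_PIX_FMT_GREY:
	case V4L2_PIX_FMT_Y10:
	case V4L2_PIX_FMT_Y16:
		/* greyscale/raw data can only be passed through via the SMFC */
		raw = true;
		break;
	default:
		raw = false;
		break;
	}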

regards
Philipp


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-10-17 10:18 ` [PATCH v2 00/21] Basic i.MX IPUv3 capture support Gary Bisson
@ 2016-10-17 12:41   ` Philipp Zabel
  0 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2016-10-17 12:41 UTC (permalink / raw)
  To: Gary Bisson, Lucas Stach
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil, kernel

Hi Gary,

Am Montag, den 17.10.2016, 12:18 +0200 schrieb Gary Bisson:
[...]
> For the whole series:
> Tested-by: Gary Bisson <gary.bisson@boundarydevices.com>
> 
> Tested on Nitrogen6x + BD_HDMI_MIPI daughter board on linux-next
> 20161016.
>
> This required using your v4l2-ctl patch to set the EDID if the source
> output can't be forced:
> https://patchwork.kernel.org/patch/6097201/
> BTW, do you have any update on this? Because it looks like the
> VIDIOC_SUBDEV_QUERYCAP hasn't been implemented since your patch (March
> 2015).
> 
> Then I followed the procedure you gave here:
> https://patchwork.kernel.org/patch/9366503/
> 
> For those interested in trying it out, note that kmssink requires to use
> Gstreamer 1.9.x.
> 
> I have a few remarks:
> - I believe it would help having a patch that sets imx_v6_v7_defconfig
>   with the proper options in this series

I can add that in the next round.

> - Not related to this series, I couldn't boot the board unless I disable
>   the PCIe driver, have you experienced the same issue?

I had not enabled the PCIe driver, but a quick boot test with
CONFIG_PCIE_DW enabled hangs after these messages:

[    1.314298] OF: PCI: host bridge /soc/pcie@0x01000000 ranges:
[    1.317199] OF: PCI:   No bus range found for /soc/pcie@0x01000000, using [bus 00-ff]
[    1.325171] OF: PCI:    IO 0x01f80000..0x01f8ffff -> 0x00000000
[    1.331029] OF: PCI:   MEM 0x01000000..0x01efffff -> 0x01000000

I've asked Lucas to have a look.

> - Is there a way not to set all the links manually using media-ctl? I
>   expected all the formats to be negotiated automatically once a stream
>   is properly detected.

This should be done in userspace, probably in libv4l2.

> - As discussed last week, the Nitrogen6x dtsi file shouldn't be
>   included, instead an overlay would be more appropriate. Maybe the log
>   should contain a comment about this.

Ok.

> Let me know if I need to add that Tested-by to every single patch so it
> appears on Patchwork.

It's fine as is, thank you.

regards
Philipp


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-17 12:12     ` Philipp Zabel
@ 2016-10-19 16:22       ` Jack Mitchell
  2016-10-19 19:25         ` Marek Vasut
  0 siblings, 1 reply; 61+ messages in thread
From: Jack Mitchell @ 2016-10-19 16:22 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil,
	Gary Bisson, kernel, Sascha Hauer, Marc Kleine-Budde

Hi Philipp,

On 17/10/16 13:12, Philipp Zabel wrote:
> Hi Jack,
>
> Am Montag, den 17.10.2016, 12:32 +0100 schrieb Jack Mitchell:
>> Hi Philipp,
>>
>> I'm looking at how I would enable a parallel greyscale camera using this
>> set of drivers and am a little bit confused. Do you have an example
>> somewhere of a devicetree with an input node.
>
> In your board device tree it should look somewhat like this:
>
> &i2c1 {
> 	sensor@48 {
> 		compatible = "aptina,mt9v032m";
> 		/* ... */
>
> 		port {
> 			cam_out: endpoint {
> 				remote-endpoint = <&csi_in>;
> 			}
> 		};
> 	};
> };
>
> /*
>  * This is the input port node corresponding to the 'CSI0' pad group,
>  * not necessarily the CSI0 port of IPU1 or IPU2. On i.MX6Q it's port@1
>  * of the mipi_ipu1_mux, on i.MX6DL it's port@4 of the ipu_csi0_mux,
>  * the csi0 label is added in patch 13/21.
>  */
> &csi0 {
> 	#address-cells = <1>;
> 	#size-cells = <0>;
>
> 	csi_in: endpoint@0 {
> 		bus-width = <8>;
> 		data-shift = <12>;
> 		hsync-active = <1>;
> 		vsync-active = <1>;
> 		pclk-sample = <1>;
> 		remote-endpoint = <&cam_out>;
> 	};
> };
>
>>  I also have a further note below:
> [...]
>>> +	if (raw && priv->smfc) {
>>

Thank you, I think I have something which is kind of right.

(Apologies in advance for the formatting)

diff --git a/arch/arm/boot/dts/imx6q-sabrelite.dts b/arch/arm/boot/dts/imx6q-sabrelite.dts
index 66d10d8..90e6b92 100644
--- a/arch/arm/boot/dts/imx6q-sabrelite.dts
+++ b/arch/arm/boot/dts/imx6q-sabrelite.dts
@@ -52,3 +52,62 @@
  &sata {
  	status = "okay";
  };
+
+&i2c2 {
+	sensor@10 {
+		compatible = "onsemi,ar0135";
+		reg = <0x10>;
+
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_ar0135>;
+
+		reset-gpio = <&gpio1 6 GPIO_ACTIVE_HIGH>;
+
+		clocks = <&clks IMX6QDL_CLK_CKO2>;
+		clock-names = "xclk";
+
+		xclk = <24000000>;
+
+		port {
+			parallel_camera_output: endpoint {
+				remote-endpoint = <&csi_in_from_parallel_camera>;
+			};
+		};
+	};
+};
+
+&csi0 {
+	csi_in_from_parallel_camera: endpoint@0 {
+		bus-width = <8>;
+		data-shift = <12>;
+		hsync-active = <1>;
+		vsync-active = <1>;
+		pclk-sample = <1>;
+		remote-endpoint = <&parallel_camera_output>;
+	};
+};
+
+&iomuxc {
+
+        imx6q-sabrelite {
+
+		pinctrl_ar0135: ar0135grp {
+			fsl,pins = <
+				MX6QDL_PAD_GPIO_6__GPIO1_IO06   0x80000000
+				MX6QDL_PAD_CSI0_DAT12__IPU1_CSI0_DATA12    0x80000000
+				MX6QDL_PAD_CSI0_DAT13__IPU1_CSI0_DATA13    0x80000000
+				MX6QDL_PAD_CSI0_DAT14__IPU1_CSI0_DATA14    0x80000000
+				MX6QDL_PAD_CSI0_DAT15__IPU1_CSI0_DATA15    0x80000000
+				MX6QDL_PAD_CSI0_DAT16__IPU1_CSI0_DATA16    0x80000000
+				MX6QDL_PAD_CSI0_DAT17__IPU1_CSI0_DATA17    0x80000000
+				MX6QDL_PAD_CSI0_DAT18__IPU1_CSI0_DATA18    0x80000000
+				MX6QDL_PAD_CSI0_DAT19__IPU1_CSI0_DATA19    0x80000000
+				MX6QDL_PAD_CSI0_PIXCLK__IPU1_CSI0_PIXCLK   0x80000000
+				MX6QDL_PAD_CSI0_MCLK__IPU1_CSI0_HSYNC      0x80000000
+				MX6QDL_PAD_CSI0_VSYNC__IPU1_CSI0_VSYNC     0x80000000
+				MX6QDL_PAD_CSI0_DATA_EN__IPU1_CSI0_DATA_EN 0x80000000
+			>;
+		};
+	};
+};


However, I can't seem to link the entities together properly. Am I
missing something obvious?

root@vicon:~# media-ctl -p
Media controller API version 0.1.0

Media device information
------------------------
driver          imx-media
model           i.MX IPUv3
serial
bus info
hw revision     0x0
driver version  0.0.0

Device topology
- entity 1: IPU0 CSI0 (2 pads, 1 link)
             type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Source
		-> "imx-ipuv3-capture.0":0 [ENABLED]

- entity 4: imx-ipuv3-capture.0 (1 pad, 1 link)
             type Node subtype V4L flags 0
             device node name /dev/video0
	pad0: Sink
		<- "IPU0 CSI0":1 [ENABLED]

- entity 10: IPU0 CSI1 (2 pads, 1 link)
              type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Source
		-> "imx-ipuv3-capture.1":0 [ENABLED]

- entity 13: imx-ipuv3-capture.1 (1 pad, 1 link)
              type Node subtype V4L flags 0
              device node name /dev/video1
	pad0: Sink
		<- "IPU0 CSI1":1 [ENABLED]

- entity 19: IPU1 CSI0 (2 pads, 1 link)
              type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Source
		-> "imx-ipuv3-capture.0":0 [ENABLED]

- entity 22: imx-ipuv3-capture.0 (1 pad, 1 link)
              type Node subtype V4L flags 0
              device node name /dev/video2
	pad0: Sink
		<- "IPU1 CSI0":1 [ENABLED]

- entity 28: IPU1 CSI1 (2 pads, 1 link)
              type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Source
		-> "imx-ipuv3-capture.1":0 [ENABLED]

- entity 31: imx-ipuv3-capture.1 (1 pad, 1 link)
              type Node subtype V4L flags 0
              device node name /dev/video3
	pad0: Sink
		<- "IPU1 CSI1":1 [ENABLED]

- entity 37: mipi_ipu1_mux (3 pads, 0 link)
              type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Sink
	pad2: Source

- entity 41: mipi_ipu2_mux (3 pads, 0 link)
              type V4L2 subdev subtype Unknown flags 0
	pad0: Sink
	pad1: Sink
	pad2: Source

- entity 45: ar0135 1-0010 (1 pad, 0 link)
              type V4L2 subdev subtype Unknown flags 0
	pad0: Source




root@imx6:~# media-ctl -v --links '"ar0135 1-0010":0->"mipi_ipu1_mux":0[1]'

Opening media device /dev/media0
Enumerating entities
Found 11 entities
Enumerating pads and links
No link between "ar0135 1-0010":0 and "mipi_ipu1_mux":0
media_parse_setup_link: Unable to parse link

  "ar0135 1-0010":0->"mipi_ipu1_mux":0[1]
                                      ^
Unable to parse link: Invalid argument (22)

If you have something in the works with a camera example then just tell 
me to be patient and I'll wait for a v3 ;)

Cheers,
Jack.

>> How does this ever get used? If I were to set 1X8 greyscale it wouldn't
>> ever take this path, correct?
>
> Thank you, that is a leftover from stripping down the driver to the
> basics. I'll test with a grayscale camera and fix this in the next
> version.
>
> regards
> Philipp
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-19 16:22       ` Jack Mitchell
@ 2016-10-19 19:25         ` Marek Vasut
  2016-10-26 11:29           ` Jack Mitchell
  0 siblings, 1 reply; 61+ messages in thread
From: Marek Vasut @ 2016-10-19 19:25 UTC (permalink / raw)
  To: Jack Mitchell, Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Hans Verkuil, Gary Bisson, kernel,
	Sascha Hauer, Marc Kleine-Budde

On 10/19/2016 06:22 PM, Jack Mitchell wrote:
> Hi Philipp,
> 
> On 17/10/16 13:12, Philipp Zabel wrote:
>> Hi Jack,
>>
>> Am Montag, den 17.10.2016, 12:32 +0100 schrieb Jack Mitchell:
>>> Hi Philipp,
>>>
>>> I'm looking at how I would enable a parallel greyscale camera using this
>>> set of drivers and am a little bit confused. Do you have an example
>>> somewhere of a devicetree with an input node.
>>
>> In your board device tree it should look somewhat like this:
>>
>> &i2c1 {
>>     sensor@48 {
>>         compatible = "aptina,mt9v032m";
>>         /* ... */
>>
>>         port {
>>             cam_out: endpoint {
>>                 remote-endpoint = <&csi_in>;
>>             }
>>         };
>>     };
>> };
>>
>> /*
>>  * This is the input port node corresponding to the 'CSI0' pad group,
>>  * not necessarily the CSI0 port of IPU1 or IPU2. On i.MX6Q it's port@1
>>  * of the mipi_ipu1_mux, on i.MX6DL it's port@4 of the ipu_csi0_mux,
>>  * the csi0 label is added in patch 13/21.
>>  */
>> &csi0 {
>>     #address-cells = <1>;
>>     #size-cells = <0>;
>>
>>     csi_in: endpoint@0 {
>>         bus-width = <8>;
>>         data-shift = <12>;
>>         hsync-active = <1>;
>>         vsync-active = <1>;
>>         pclk-sample = <1>;
>>         remote-endpoint = <&cam_out>;
>>     };
>> };
>>
>>>  I also have a further note below:
>> [...]
>>>> +    if (raw && priv->smfc) {
>>>
> 
> Thank you, I think I have something which is kind of right.
> 
> (Apologies in advance for the formatting)
> 
> diff --git a/arch/arm/boot/dts/imx6q-sabrelite.dts
> b/arch/arm/boot/dts/imx6q-sabrelite.dts
> index 66d10d8..90e6b92 100644
> --- a/arch/arm/boot/dts/imx6q-sabrelite.dts
> +++ b/arch/arm/boot/dts/imx6q-sabrelite.dts
> @@ -52,3 +52,62 @@
>  &sata {
>      status = "okay";
>  };
> +
> +&i2c2 {
> +    sensor@10 {
> +        compatible = "onsemi,ar0135";
> +        reg = <0x10>;
> +
> +        pinctrl-names = "default";
> +        pinctrl-0 = <&pinctrl_ar0135>;
> +
> +        reset-gpio = <&gpio1 6 GPIO_ACTIVE_HIGH>;
> +
> +        clocks = <&clks IMX6QDL_CLK_CKO2>;
> +        clock-names = "xclk";
> +
> +        xclk = <24000000>;
> +
> +        port {
> +            parallel_camera_output: endpoint {
> +                remote-endpoint = <&csi_in_from_parallel_camera>;
> +            };
> +        };
> +    };
> +};
> +
> +&csi0 {
> +    csi_in_from_parallel_camera: endpoint@0 {
> +        bus-width = <8>;
> +        data-shift = <12>;
> +        hsync-active = <1>;
> +        vsync-active = <1>;
> +        pclk-sample = <1>;
> +        remote-endpoint = <&parallel_camera_output>;
> +    };
> +};
> +
> +&iomuxc {
> +
> +        imx6q-sabrelite {
> +
> +        pinctrl_ar0135: ar0135grp {
> +            fsl,pins = <
> +                MX6QDL_PAD_GPIO_6__GPIO1_IO06   0x80000000
> +                MX6QDL_PAD_CSI0_DAT12__IPU1_CSI0_DATA12    0x80000000
> +                MX6QDL_PAD_CSI0_DAT13__IPU1_CSI0_DATA13    0x80000000
> +                MX6QDL_PAD_CSI0_DAT14__IPU1_CSI0_DATA14    0x80000000
> +                MX6QDL_PAD_CSI0_DAT15__IPU1_CSI0_DATA15    0x80000000
> +                MX6QDL_PAD_CSI0_DAT16__IPU1_CSI0_DATA16    0x80000000
> +                MX6QDL_PAD_CSI0_DAT17__IPU1_CSI0_DATA17    0x80000000
> +                MX6QDL_PAD_CSI0_DAT18__IPU1_CSI0_DATA18    0x80000000
> +                MX6QDL_PAD_CSI0_DAT19__IPU1_CSI0_DATA19    0x80000000
> +                MX6QDL_PAD_CSI0_PIXCLK__IPU1_CSI0_PIXCLK   0x80000000
> +                MX6QDL_PAD_CSI0_MCLK__IPU1_CSI0_HSYNC      0x80000000
> +                MX6QDL_PAD_CSI0_VSYNC__IPU1_CSI0_VSYNC     0x80000000
> +                MX6QDL_PAD_CSI0_DATA_EN__IPU1_CSI0_DATA_EN 0x80000000
> +            >;
> +        };
> +    };
> +};
> 
> 
> However, I can't seem to link the entities together properly, am I
> missing something obvious?
> 
> root@vicon:~# media-ctl -p
> Media controller API version 0.1.0
> 
> Media device information
> ------------------------
> driver          imx-media
> model           i.MX IPUv3
> serial
> bus info
> hw revision     0x0
> driver version  0.0.0
> 
> Device topology
> - entity 1: IPU0 CSI0 (2 pads, 1 link)
>             type V4L2 subdev subtype Unknown flags 0
>     pad0: Sink
>     pad1: Source
>         -> "imx-ipuv3-capture.0":0 [ENABLED]
> 
> - entity 4: imx-ipuv3-capture.0 (1 pad, 1 link)
>             type Node subtype V4L flags 0
>             device node name /dev/video0
>     pad0: Sink
>         <- "IPU0 CSI0":1 [ENABLED]
> 
> - entity 10: IPU0 CSI1 (2 pads, 1 link)
>              type V4L2 subdev subtype Unknown flags 0
>     pad0: Sink
>     pad1: Source
>         -> "imx-ipuv3-capture.1":0 [ENABLED]
> 
> - entity 13: imx-ipuv3-capture.1 (1 pad, 1 link)
>              type Node subtype V4L flags 0
>              device node name /dev/video1
>     pad0: Sink
>         <- "IPU0 CSI1":1 [ENABLED]
> 
> - entity 19: IPU1 CSI0 (2 pads, 1 link)
>              type V4L2 subdev subtype Unknown flags 0
>     pad0: Sink
>     pad1: Source
>         -> "imx-ipuv3-capture.0":0 [ENABLED]
> 
> - entity 22: imx-ipuv3-capture.0 (1 pad, 1 link)
>              type Node subtype V4L flags 0
>              device node name /dev/video2
>     pad0: Sink
>         <- "IPU1 CSI0":1 [ENABLED]
> 
> - entity 28: IPU1 CSI1 (2 pads, 1 link)
>              type V4L2 subdev subtype Unknown flags 0
>     pad0: Sink
>     pad1: Source
>         -> "imx-ipuv3-capture.1":0 [ENABLED]
> 
> - entity 31: imx-ipuv3-capture.1 (1 pad, 1 link)
>              type Node subtype V4L flags 0
>              device node name /dev/video3
>     pad0: Sink
>         <- "IPU1 CSI1":1 [ENABLED]
> 
> - entity 37: mipi_ipu1_mux (3 pads, 0 link)
>              type V4L2 subdev subtype Unknown flags 0
>     pad0: Sink
>     pad1: Sink
>     pad2: Source
> 
> - entity 41: mipi_ipu2_mux (3 pads, 0 link)
>              type V4L2 subdev subtype Unknown flags 0
>     pad0: Sink
>     pad1: Sink
>     pad2: Source
> 
> - entity 45: ar0135 1-0010 (1 pad, 0 link)
>              type V4L2 subdev subtype Unknown flags 0
>     pad0: Source
> 
> 
> 
> 
> root@imx6:~# media-ctl -v --links '"ar01351-0010":0->"mipi_ipu1_mux":0[1]'
> 
> Opening media device /dev/media0
> Enumerating entities
> Found 11 entities
> Enumerating pads and links
> No link between "ar0135 1-0010":0 and "mipi_ipu1_mux":0
> media_parse_setup_link: Unable to parse link
> 
>  "ar0135 1-0010":0->"mipi_ipu1_mux":0[1]
>                                      ^
> Unable to parse link: Invalid argument (22)
> 
> If you have something in the works with a camera example then just tell
> me to be patient and I'll wait for a v3 ;)

Check whether you have something along these lines in your camera driver
in the probe() function:

        priv->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
        priv->subdev.dev = &client->dev;
        priv->pad.flags = MEDIA_PAD_FL_SOURCE;

        ret = media_entity_init(&priv->subdev.entity, 1, &priv->pad, 0);
        if (ret < 0) {
                v4l2_clk_put(priv->clk);
                return ret;
        }

        ret = v4l2_async_register_subdev(&priv->subdev);
        if (ret < 0) {
                v4l2_clk_put(priv->clk);
                return ret;
        }

-- 
Best regards,
Marek Vasut

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
                   ` (21 preceding siblings ...)
  2016-10-17 10:18 ` [PATCH v2 00/21] Basic i.MX IPUv3 capture support Gary Bisson
@ 2016-10-19 21:30 ` Sakari Ailus
       [not found]   ` <CAH-u=807nRYzza0kTfOMv1AiWazk6FGJyz6W5_bYw7v9nOrccA@mail.gmail.com>
  22 siblings, 1 reply; 61+ messages in thread
From: Sakari Ailus @ 2016-10-19 21:30 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil,
	Gary Bisson, kernel

On Fri, Oct 14, 2016 at 07:34:20PM +0200, Philipp Zabel wrote:
> Hi,
> 
> the second round removes the prepare_stream callback and instead lets the
> intermediate subdevices propagate s_stream calls to their sources rather
> than individually calling s_stream on each subdevice from the bridge driver.
> This is similar to how drm bridges recursively call into their next neighbor.
> It makes it easier to do bringup ordering on a per-link level, as long as the
> source preparation can be done at s_power, and the sink can just prepare, call
> s_stream on its source, and then enable itself inside s_stream. Obviously this
> would only work in a generic fashion if all asynchronous subdevices with both
> inputs and outputs would propagate s_stream to their source subdevices.

Hi Philipp,

I'll review the async patches tomorrow / the day after. I have not forgotten
them. :-)

-- 
Kind regards,

Sakari Ailus
e-mail: sakari.ailus@iki.fi	XMPP: sailus@retiisi.org.uk

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-19 19:25         ` Marek Vasut
@ 2016-10-26 11:29           ` Jack Mitchell
  0 siblings, 0 replies; 61+ messages in thread
From: Jack Mitchell @ 2016-10-26 11:29 UTC (permalink / raw)
  To: Marek Vasut, Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Hans Verkuil, Gary Bisson, kernel,
	Sascha Hauer, Marc Kleine-Budde

Hi Marek,

On 19/10/16 20:25, Marek Vasut wrote:
> On 10/19/2016 06:22 PM, Jack Mitchell wrote:
>> Hi Philipp,
>>
>> On 17/10/16 13:12, Philipp Zabel wrote:
>>> Hi Jack,
>>>
>>> Am Montag, den 17.10.2016, 12:32 +0100 schrieb Jack Mitchell:
>>>> Hi Philipp,
>>>>
>>>> I'm looking at how I would enable a parallel greyscale camera using this
>>>> set of drivers and am a little bit confused. Do you have an example
>>>> somewhere of a devicetree with an input node.
>>>
>>> In your board device tree it should look somewhat like this:
>>>
>>> &i2c1 {
>>>     sensor@48 {
>>>         compatible = "aptina,mt9v032m";
>>>         /* ... */
>>>
>>>         port {
>>>             cam_out: endpoint {
>>>                 remote-endpoint = <&csi_in>;
>>>             }
>>>         };
>>>     };
>>> };
>>>
>>> /*
>>>  * This is the input port node corresponding to the 'CSI0' pad group,
>>>  * not necessarily the CSI0 port of IPU1 or IPU2. On i.MX6Q it's port@1
>>>  * of the mipi_ipu1_mux, on i.MX6DL it's port@4 of the ipu_csi0_mux,
>>>  * the csi0 label is added in patch 13/21.
>>>  */
>>> &csi0 {
>>>     #address-cells = <1>;
>>>     #size-cells = <0>;
>>>
>>>     csi_in: endpoint@0 {
>>>         bus-width = <8>;
>>>         data-shift = <12>;
>>>         hsync-active = <1>;
>>>         vsync-active = <1>;
>>>         pclk-sample = <1>;
>>>         remote-endpoint = <&cam_out>;
>>>     };
>>> };
>>>
>>>>  I also have a further note below:
>>> [...]
>>>>> +    if (raw && priv->smfc) {
>>>>
>>
>> Thank you, I think I have something which is kind of right.
>>
>> (Apologies in advance for the formatting)
>>
>> diff --git a/arch/arm/boot/dts/imx6q-sabrelite.dts
>> b/arch/arm/boot/dts/imx6q-sabrelite.dts
>> index 66d10d8..90e6b92 100644
>> --- a/arch/arm/boot/dts/imx6q-sabrelite.dts
>> +++ b/arch/arm/boot/dts/imx6q-sabrelite.dts
>> @@ -52,3 +52,62 @@
>>  &sata {
>>      status = "okay";
>>  };
>> +
>> +&i2c2 {
>> +    sensor@10 {
>> +        compatible = "onsemi,ar0135";
>> +        reg = <0x10>;
>> +
>> +        pinctrl-names = "default";
>> +        pinctrl-0 = <&pinctrl_ar0135>;
>> +
>> +        reset-gpio = <&gpio1 6 GPIO_ACTIVE_HIGH>;
>> +
>> +        clocks = <&clks IMX6QDL_CLK_CKO2>;
>> +        clock-names = "xclk";
>> +
>> +        xclk = <24000000>;
>> +
>> +        port {
>> +            parallel_camera_output: endpoint {
>> +                remote-endpoint = <&csi_in_from_parallel_camera>;
>> +            };
>> +        };
>> +    };
>> +};
>> +
>> +&csi0 {
>> +    csi_in_from_parallel_camera: endpoint@0 {
>> +        bus-width = <8>;
>> +        data-shift = <12>;
>> +        hsync-active = <1>;
>> +        vsync-active = <1>;
>> +        pclk-sample = <1>;
>> +        remote-endpoint = <&parallel_camera_output>;
>> +    };
>> +};
>> +
>> +&iomuxc {
>> +
>> +        imx6q-sabrelite {
>> +
>> +        pinctrl_ar0135: ar0135grp {
>> +            fsl,pins = <
>> +                MX6QDL_PAD_GPIO_6__GPIO1_IO06   0x80000000
>> +                MX6QDL_PAD_CSI0_DAT12__IPU1_CSI0_DATA12    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT13__IPU1_CSI0_DATA13    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT14__IPU1_CSI0_DATA14    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT15__IPU1_CSI0_DATA15    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT16__IPU1_CSI0_DATA16    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT17__IPU1_CSI0_DATA17    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT18__IPU1_CSI0_DATA18    0x80000000
>> +                MX6QDL_PAD_CSI0_DAT19__IPU1_CSI0_DATA19    0x80000000
>> +                MX6QDL_PAD_CSI0_PIXCLK__IPU1_CSI0_PIXCLK   0x80000000
>> +                MX6QDL_PAD_CSI0_MCLK__IPU1_CSI0_HSYNC      0x80000000
>> +                MX6QDL_PAD_CSI0_VSYNC__IPU1_CSI0_VSYNC     0x80000000
>> +                MX6QDL_PAD_CSI0_DATA_EN__IPU1_CSI0_DATA_EN 0x80000000
>> +            >;
>> +        };
>> +    };
>> +};
>>
>>
>> However, I can't seem to link the entities together properly, am I
>> missing something obvious?
>>
>> root@vicon:~# media-ctl -p
>> Media controller API version 0.1.0
>>
>> Media device information
>> ------------------------
>> driver          imx-media
>> model           i.MX IPUv3
>> serial
>> bus info
>> hw revision     0x0
>> driver version  0.0.0
>>
>> Device topology
>> - entity 1: IPU0 CSI0 (2 pads, 1 link)
>>             type V4L2 subdev subtype Unknown flags 0
>>     pad0: Sink
>>     pad1: Source
>>         -> "imx-ipuv3-capture.0":0 [ENABLED]
>>
>> - entity 4: imx-ipuv3-capture.0 (1 pad, 1 link)
>>             type Node subtype V4L flags 0
>>             device node name /dev/video0
>>     pad0: Sink
>>         <- "IPU0 CSI0":1 [ENABLED]
>>
>> - entity 10: IPU0 CSI1 (2 pads, 1 link)
>>              type V4L2 subdev subtype Unknown flags 0
>>     pad0: Sink
>>     pad1: Source
>>         -> "imx-ipuv3-capture.1":0 [ENABLED]
>>
>> - entity 13: imx-ipuv3-capture.1 (1 pad, 1 link)
>>              type Node subtype V4L flags 0
>>              device node name /dev/video1
>>     pad0: Sink
>>         <- "IPU0 CSI1":1 [ENABLED]
>>
>> - entity 19: IPU1 CSI0 (2 pads, 1 link)
>>              type V4L2 subdev subtype Unknown flags 0
>>     pad0: Sink
>>     pad1: Source
>>         -> "imx-ipuv3-capture.0":0 [ENABLED]
>>
>> - entity 22: imx-ipuv3-capture.0 (1 pad, 1 link)
>>              type Node subtype V4L flags 0
>>              device node name /dev/video2
>>     pad0: Sink
>>         <- "IPU1 CSI0":1 [ENABLED]
>>
>> - entity 28: IPU1 CSI1 (2 pads, 1 link)
>>              type V4L2 subdev subtype Unknown flags 0
>>     pad0: Sink
>>     pad1: Source
>>         -> "imx-ipuv3-capture.1":0 [ENABLED]
>>
>> - entity 31: imx-ipuv3-capture.1 (1 pad, 1 link)
>>              type Node subtype V4L flags 0
>>              device node name /dev/video3
>>     pad0: Sink
>>         <- "IPU1 CSI1":1 [ENABLED]
>>
>> - entity 37: mipi_ipu1_mux (3 pads, 0 link)
>>              type V4L2 subdev subtype Unknown flags 0
>>     pad0: Sink
>>     pad1: Sink
>>     pad2: Source
>>
>> - entity 41: mipi_ipu2_mux (3 pads, 0 link)
>>              type V4L2 subdev subtype Unknown flags 0
>>     pad0: Sink
>>     pad1: Sink
>>     pad2: Source
>>
>> - entity 45: ar0135 1-0010 (1 pad, 0 link)
>>              type V4L2 subdev subtype Unknown flags 0
>>     pad0: Source
>>
>>
>>
>>
>> root@imx6:~# media-ctl -v --links '"ar01351-0010":0->"mipi_ipu1_mux":0[1]'
>>
>> Opening media device /dev/media0
>> Enumerating entities
>> Found 11 entities
>> Enumerating pads and links
>> No link between "ar0135 1-0010":0 and "mipi_ipu1_mux":0
>> media_parse_setup_link: Unable to parse link
>>
>>  "ar0135 1-0010":0->"mipi_ipu1_mux":0[1]
>>                                      ^
>> Unable to parse link: Invalid argument (22)
>>
>> If you have something in the works with a camera example then just tell
>> me to be patient and I'll wait for a v3 ;)
>
> Check whether you have something along these lines in your camera driver
> in the probe() function:
>
>         priv->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
>         priv->subdev.dev = &client->dev;
>         priv->pad.flags = MEDIA_PAD_FL_SOURCE;
>
>         ret = media_entity_init(&priv->subdev.entity, 1, &priv->pad, 0);
>         if (ret < 0) {
>                 v4l2_clk_put(priv->clk);
>                 return ret;
>         }
>
>         ret = v4l2_async_register_subdev(&priv->subdev);
>         if (ret < 0) {
>                 v4l2_clk_put(priv->clk);
>                 return ret;
>         }
>

Yes, I have stripped the driver down to do nothing[1] apart from probe,
so that I can try to link the pipeline together. I still have no luck.

- entity 1: IPU0 CSI0 (2 pads, 1 link)
             type V4L2 subdev subtype Unknown flags 0
         pad0: Sink
         pad1: Source
                 -> "imx-ipuv3-capture.0":0 [ENABLED]

- entity 45: ar0135 1-0010 (1 pad, 0 link)
              type V4L2 subdev subtype Unknown flags 0
         pad0: Source


root@sabrelite:~# media-ctl -v --links '"ar0135 1-0010":0->"IPU0 CSI0":0[1]'
Opening media device /dev/media0
Enumerating entities
Found 11 entities
Enumerating pads and links
No link between "ar0135 1-0010":0 and "IPU0 CSI0":0
media_parse_setup_link: Unable to parse link

  "ar0135 1-0010":0->"IPU0 CSI0":0[1]
                                  ^
Unable to parse link: Invalid argument (22)

It's possible I'm attempting to go about this the wrong way; the whole
v4l2 subsystem is a bit of a maze.

Cheers,
Jack.

[1] http://pastebin.com/EXZKq4jZ

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 07/21] [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver
  2016-10-14 17:34 ` [PATCH v2 07/21] [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver Philipp Zabel
@ 2016-11-08  7:18   ` Ying Liu
  0 siblings, 0 replies; 61+ messages in thread
From: Ying Liu @ 2016-11-08  7:18 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil,
	Gary Bisson, kernel, Sascha Hauer, Marc Kleine-Budde

Hi Philipp,

On Sat, Oct 15, 2016 at 1:34 AM, Philipp Zabel <p.zabel@pengutronix.de> wrote:
> This adds a V4L2 subdevice driver for the two CMOS Sensor Interface (CSI)
> modules contained in each IPUv3. These sample video data from the
> parallel CSI0/1 pads or from the the MIPI CSI-2 bridge via IOMUXC video
> bus multiplexers and write to IPU internal FIFOs to deliver data to
> either the IDMAC or IC modules.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> ---
> Changes since v1:
>  - Propagate field and colorspace in ipucsi_subdev_set_format.
>  - Recursively call s_stream on upstream subdev.
> ---
>  drivers/media/platform/imx/Kconfig         |   7 +
>  drivers/media/platform/imx/Makefile        |   1 +
>  drivers/media/platform/imx/imx-ipuv3-csi.c | 547 +++++++++++++++++++++++++++++
>  3 files changed, 555 insertions(+)
>  create mode 100644 drivers/media/platform/imx/imx-ipuv3-csi.c
>
> diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
> index 1662bb0b..a88c4f7 100644
> --- a/drivers/media/platform/imx/Kconfig
> +++ b/drivers/media/platform/imx/Kconfig
> @@ -8,3 +8,10 @@ config MEDIA_IMX
>
>  config VIDEO_IMX_IPU_COMMON
>         tristate
> +
> +config VIDEO_IMX_IPU_CSI
> +       tristate "i.MX5/6 CMOS Sensor Interface driver"
> +       depends on VIDEO_DEV && IMX_IPUV3_CORE && MEDIA_IMX

Should depend on VIDEO_V4L2_SUBDEV_API, otherwise
v4l2_subdev_get_try_format() would break build.
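
i.e., something like this (untested):

config VIDEO_IMX_IPU_CSI
	tristate "i.MX5/6 CMOS Sensor Interface driver"
	depends on VIDEO_DEV && VIDEO_V4L2_SUBDEV_API && IMX_IPUV3_CORE && MEDIA_IMX
	select VIDEO_IMX_IPUV3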

Regards,
Liu Ying

> +       select VIDEO_IMX_IPUV3
> +       ---help---
> +         This is a v4l2 subdevice driver for two CSI modules in each IPUv3.
> diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
> index 0ba601a..82a3616 100644
> --- a/drivers/media/platform/imx/Makefile
> +++ b/drivers/media/platform/imx/Makefile
> @@ -1,2 +1,3 @@
>  obj-$(CONFIG_MEDIA_IMX)                        += imx-media.o
>  obj-$(CONFIG_VIDEO_IMX_IPU_COMMON)     += imx-ipu.o
> +obj-$(CONFIG_VIDEO_IMX_IPU_CSI)                += imx-ipuv3-csi.o
> diff --git a/drivers/media/platform/imx/imx-ipuv3-csi.c b/drivers/media/platform/imx/imx-ipuv3-csi.c
> new file mode 100644
> index 0000000..83e0511
> --- /dev/null
> +++ b/drivers/media/platform/imx/imx-ipuv3-csi.c
> @@ -0,0 +1,547 @@
> +/*
> + * i.MX IPUv3 CSI V4L2 Subdevice Driver
> + *
> + * Copyright (C) 2016, Pengutronix, Philipp Zabel <kernel@pengutronix.de>
> + *
> + * Based on code
> + * Copyright (C) 2006, Sascha Hauer, Pengutronix
> + * Copyright (C) 2008, Guennadi Liakhovetski <kernel@pengutronix.de>
> + * Copyright (C) 2008, Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
> + * Copyright (C) 2009, Darius Augulis <augulis.darius@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +#include <linux/clk.h>
> +#include <linux/device.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/errno.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +#include <linux/moduleparam.h>
> +#include <linux/mutex.h>
> +#include <linux/of.h>
> +#include <linux/of_graph.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/time.h>
> +#include <linux/version.h>
> +#include <linux/videodev2.h>
> +
> +#include <media/v4l2-common.h>
> +#include <media/v4l2-dev.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/v4l2-mc.h>
> +#include <media/v4l2-of.h>
> +#include <media/videobuf2-dma-contig.h>
> +
> +#include <video/imx-ipu-v3.h>
> +
> +#include "imx-ipu.h"
> +
> +#define DRIVER_NAME "imx-ipuv3-csi"
> +
> +struct ipucsi {
> +       struct v4l2_subdev              subdev;
> +
> +       struct device                   *dev;
> +       u32                             id;
> +
> +       struct ipu_csi                  *csi;
> +       struct ipu_soc                  *ipu;
> +       struct v4l2_of_endpoint         endpoint;
> +       enum ipu_csi_dest               csi_dest;
> +
> +       struct media_pad                subdev_pad[2];
> +       struct v4l2_mbus_framefmt       format_mbus[2];
> +       struct v4l2_fract               timeperframe[2];
> +};
> +
> +static int ipu_csi_get_mbus_config(struct ipucsi *ipucsi,
> +                                  struct v4l2_mbus_config *config)
> +{
> +       struct v4l2_subdev *sd;
> +       struct media_pad *pad;
> +       int ret;
> +
> +       /*
> +        * Retrieve media bus configuration from the entity connected directly
> +        * to the CSI subdev sink pad.
> +        */
> +       pad = media_entity_remote_pad(&ipucsi->subdev_pad[0]);
> +       if (!pad) {
> +               dev_err(ipucsi->dev,
> +                       "failed to retrieve mbus config from source entity\n");
> +               return -ENODEV;
> +       }
> +       sd = media_entity_to_v4l2_subdev(pad->entity);
> +       ret = v4l2_subdev_call(sd, video, g_mbus_config, config);
> +       if (ret == -ENOIOCTLCMD) {
> +               /* Fall back to static mbus configuration from device tree */
> +               config->type = ipucsi->endpoint.bus_type;
> +               config->flags = ipucsi->endpoint.bus.parallel.flags;
> +               ret = 0;
> +       }
> +
> +       return ret;
> +}
> +
> +static struct v4l2_mbus_framefmt *
> +__ipucsi_get_pad_format(struct v4l2_subdev *sd,
> +                       struct v4l2_subdev_pad_config *cfg,
> +                       unsigned int pad, u32 which)
> +{
> +       struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
> +
> +       switch (which) {
> +       case V4L2_SUBDEV_FORMAT_TRY:
> +               return v4l2_subdev_get_try_format(sd, cfg, pad);
> +       case V4L2_SUBDEV_FORMAT_ACTIVE:
> +               return &ipucsi->format_mbus[pad ? 1 : 0];
> +       default:
> +               return NULL;
> +       }
> +}
> +
> +static int ipucsi_subdev_log_status(struct v4l2_subdev *subdev)
> +{
> +       struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
> +
> +       ipu_csi_dump(ipucsi->csi);
> +
> +       return 0;
> +}
> +
> +static int ipucsi_subdev_get_format(struct v4l2_subdev *sd,
> +               struct v4l2_subdev_pad_config *cfg,
> +               struct v4l2_subdev_format *sdformat)
> +{
> +       sdformat->format = *__ipucsi_get_pad_format(sd, cfg, sdformat->pad,
> +                                                   sdformat->which);
> +       return 0;
> +}
> +
> +static bool ipucsi_mbus_code_supported(const u32 mbus_code)
> +{
> +       static const u32 mbus_codes[] = {
> +               MEDIA_BUS_FMT_Y8_1X8,
> +               MEDIA_BUS_FMT_Y10_1X10,
> +               MEDIA_BUS_FMT_Y12_1X12,
> +               MEDIA_BUS_FMT_UYVY8_2X8,
> +               MEDIA_BUS_FMT_YUYV8_2X8,
> +               MEDIA_BUS_FMT_UYVY8_1X16,
> +               MEDIA_BUS_FMT_YUYV8_1X16,
> +       };
> +       int i;
> +
> +       for (i = 0; i < ARRAY_SIZE(mbus_codes); i++) {
> +               if (mbus_code == mbus_codes[i])
> +                       return true;
> +       }
> +
> +       return false;
> +}
> +
> +static int ipucsi_subdev_set_format(struct v4l2_subdev *sd,
> +               struct v4l2_subdev_pad_config *cfg,
> +               struct v4l2_subdev_format *sdformat)
> +{
> +       struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
> +       struct v4l2_mbus_framefmt *mbusformat;
> +       unsigned int width, height;
> +       bool supported;
> +
> +       supported = ipucsi_mbus_code_supported(sdformat->format.code);
> +       if (!supported)
> +               return -EINVAL;
> +
> +       mbusformat = __ipucsi_get_pad_format(sd, cfg, sdformat->pad,
> +                                            sdformat->which);
> +
> +       if (sdformat->pad == 0) {
> +               width = clamp_t(unsigned int, sdformat->format.width, 16, 8192);
> +               height = clamp_t(unsigned int, sdformat->format.height, 16, 4096);
> +               mbusformat->field = sdformat->format.field ?: V4L2_FIELD_NONE;
> +               mbusformat->colorspace = sdformat->format.colorspace ?:
> +                                        V4L2_COLORSPACE_SRGB;
> +       } else {
> +               struct v4l2_mbus_framefmt *infmt = &ipucsi->format_mbus[0];
> +
> +               width = infmt->width;
> +               height = infmt->height;
> +               mbusformat->field = infmt->field;
> +               mbusformat->colorspace = infmt->colorspace;
> +       }
> +
> +       mbusformat->width = width;
> +       mbusformat->height = height;
> +       mbusformat->code = sdformat->format.code;
> +
> +       if (mbusformat->field == V4L2_FIELD_SEQ_TB &&
> +           mbusformat->width == 720 && mbusformat->height == 480 &&
> +           ipucsi->endpoint.bus_type == V4L2_MBUS_BT656) {
> +               /* We capture NTSC bottom field first */
> +               mbusformat->field = V4L2_FIELD_SEQ_BT;
> +       }
> +
> +       sdformat->format = *mbusformat;
> +
> +       return 0;
> +}
> +
> +static int ipucsi_subdev_s_stream(struct v4l2_subdev *sd, int enable)
> +{
> +       struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
> +       struct v4l2_subdev *upstream_sd;
> +       struct media_pad *pad;
> +       int ret;
> +
> +       pad = media_entity_remote_pad(&sd->entity.pads[0]);
> +       if (!pad) {
> +               dev_err(ipucsi->dev, "Failed to find remote source pad\n");
> +               return -ENOLINK;
> +       }
> +
> +       if (!is_media_entity_v4l2_subdev(pad->entity)) {
> +               dev_err(ipucsi->dev, "Upstream entity is not a v4l2 subdev\n");
> +               return -ENODEV;
> +       }
> +
> +       upstream_sd = media_entity_to_v4l2_subdev(pad->entity);
> +
> +       ret = v4l2_subdev_call(upstream_sd, video, s_stream, enable);
> +       if (ret)
> +               return ret;
> +
> +       if (enable) {
> +               struct v4l2_mbus_framefmt *fmt = ipucsi->format_mbus;
> +               struct v4l2_mbus_config mbus_config;
> +               struct v4l2_rect window;
> +               bool mux_mct;
> +               int ret;
> +
> +               ret = ipu_csi_get_mbus_config(ipucsi, &mbus_config);
> +               if (ret)
> +                       return ret;
> +
> +               window.left = 0;
> +               window.top = 0;
> +               window.width = fmt[0].width;
> +               window.height = fmt[0].height;
> +               ipu_csi_set_window(ipucsi->csi, &window);
> +
> +               /* Is CSI data source MCT (MIPI)? */
> +               mux_mct = (mbus_config.type == V4L2_MBUS_CSI2);
> +
> +               ipu_set_csi_src_mux(ipucsi->ipu, ipucsi->id, mux_mct);
> +               if (mux_mct)
> +                       ipu_csi_set_mipi_datatype(ipucsi->csi, /*VC*/ 0,
> +                                                 &fmt[0]);
> +
> +               ret = ipu_csi_init_interface(ipucsi->csi, &mbus_config,
> +                                            &fmt[0]);
> +               if (ret) {
> +                       dev_err(ipucsi->dev, "Failed to initialize CSI\n");
> +                       return ret;
> +               }
> +
> +               ipu_csi_set_dest(ipucsi->csi, ipucsi->csi_dest);
> +
> +               ipu_csi_enable(ipucsi->csi);
> +       } else {
> +               ipu_csi_disable(ipucsi->csi);
> +       }
> +
> +       return 0;
> +}
> +
> +static int ipucsi_subdev_g_frame_interval(struct v4l2_subdev *subdev,
> +                                         struct v4l2_subdev_frame_interval *fi)
> +{
> +       struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
> +
> +       if (fi->pad > 4)
> +               return -EINVAL;
> +
> +       fi->interval = ipucsi->timeperframe[(fi->pad == 0) ? 0 : 1];
> +
> +       return 0;
> +}
> +
> +/*
> + * struct ipucsi_skip_desc - CSI frame skipping descriptor
> + * @keep - number of frames kept per max_ratio frames
> + * @max_ratio - width of skip_smfc, written to MAX_RATIO bitfield
> + * @skip_smfc - skip pattern written to the SKIP_SMFC bitfield
> + */
> +struct ipucsi_skip_desc {
> +       u8 keep;
> +       u8 max_ratio;
> +       u8 skip_smfc;
> +};
> +
> +static const struct ipucsi_skip_desc ipucsi_skip[12] = {
> +       { 1, 1, 0x00 }, /* Keep all frames */
> +       { 5, 6, 0x10 }, /* Skip every sixth frame */
> +       { 4, 5, 0x08 }, /* Skip every fifth frame */
> +       { 3, 4, 0x04 }, /* Skip every fourth frame */
> +       { 2, 3, 0x02 }, /* Skip every third frame */
> +       { 3, 5, 0x0a }, /* Skip frames 1 and 3 of every 5 */
> +       { 1, 2, 0x01 }, /* Skip every second frame */
> +       { 2, 5, 0x0b }, /* Keep frames 1 and 4 of every 5 */
> +       { 1, 3, 0x03 }, /* Keep one in three frames */
> +       { 1, 4, 0x07 }, /* Keep one in four frames */
> +       { 1, 5, 0x0f }, /* Keep one in five frames */
> +       { 1, 6, 0x1f }, /* Keep one in six frames */
> +};
> +
> +static int ipucsi_subdev_s_frame_interval(struct v4l2_subdev *subdev,
> +                                         struct v4l2_subdev_frame_interval *fi)
> +{
> +       struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
> +       const struct ipucsi_skip_desc *skip;
> +       struct v4l2_fract *want, *in;
> +       u32 min_err = UINT_MAX;
> +       int i, best_i = 0;
> +       u64 want_us;
> +
> +       if (fi->pad > 4)
> +               return -EINVAL;
> +
> +       if (fi->pad == 0) {
> +               /* Believe what we are told about the input frame rate */
> +               ipucsi->timeperframe[0] = fi->interval;
> +               /* Reset output frame rate to input frame rate */
> +               ipucsi->timeperframe[1] = fi->interval;
> +               return 0;
> +       }
> +
> +       want = &fi->interval;
> +       in = &ipucsi->timeperframe[0];
> +
> +       if (want->numerator == 0 || want->denominator == 0 ||
> +           in->denominator == 0) {
> +               ipucsi->timeperframe[1] = ipucsi->timeperframe[0];
> +               fi->interval = ipucsi->timeperframe[1];
> +               return 0;
> +       }
> +
> +       want_us = 1000000ULL * want->numerator;
> +       do_div(want_us, want->denominator);
> +
> +       /* Find the reduction closest to the requested timeperframe */
> +       for (i = 0; i < ARRAY_SIZE(ipucsi_skip); i++) {
> +               u64 tmp;
> +               u32 err;
> +
> +               skip = &ipucsi_skip[i];
> +               tmp = 1000000ULL * in->numerator * skip->max_ratio;
> +               do_div(tmp, in->denominator * skip->keep);
> +               err = (tmp > want_us) ? (tmp - want_us) : (want_us - tmp);
> +               if (err < min_err) {
> +                       min_err = err;
> +                       best_i = i;
> +               }
> +       }
> +
> +       skip = &ipucsi_skip[best_i];
> +
> +       ipu_csi_set_skip_smfc(ipucsi->csi, skip->skip_smfc, skip->max_ratio - 1,
> +                             0);
> +
> +       fi->interval.numerator = in->numerator * skip->max_ratio;
> +       fi->interval.denominator = in->denominator * skip->keep;
> +       ipucsi->timeperframe[1] = fi->interval;
> +
> +       dev_dbg(ipucsi->dev, "skip: %d/%d -> %d/%d frames:%d pattern:0x%x\n",
> +              in->numerator, in->denominator,
> +              fi->interval.numerator, fi->interval.denominator,
> +              skip->max_ratio, skip->skip_smfc);
> +
> +       return 0;
> +}
> +
> +static struct v4l2_subdev_core_ops ipucsi_subdev_core_ops = {
> +       .log_status = ipucsi_subdev_log_status,
> +};
> +
> +static struct v4l2_subdev_pad_ops ipucsi_subdev_pad_ops = {
> +       .get_fmt = ipucsi_subdev_get_format,
> +       .set_fmt = ipucsi_subdev_set_format,
> +};
> +
> +static struct v4l2_subdev_video_ops ipucsi_subdev_video_ops = {
> +       .s_stream = ipucsi_subdev_s_stream,
> +       .g_frame_interval = ipucsi_subdev_g_frame_interval,
> +       .s_frame_interval = ipucsi_subdev_s_frame_interval,
> +};
> +
> +static const struct v4l2_subdev_ops ipucsi_subdev_ops = {
> +       .core   = &ipucsi_subdev_core_ops,
> +       .pad    = &ipucsi_subdev_pad_ops,
> +       .video  = &ipucsi_subdev_video_ops,
> +};
> +
> +static int ipu_csi_link_setup(struct media_entity *entity,
> +                             const struct media_pad *local,
> +                             const struct media_pad *remote, u32 flags)
> +{
> +       struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
> +       struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
> +
> +       ipucsi->csi_dest = IPU_CSI_DEST_IDMAC;
> +
> +       return 0;
> +}
> +
> +struct media_entity_operations ipucsi_entity_ops = {
> +       .link_setup = ipu_csi_link_setup,
> +       .link_validate = v4l2_subdev_link_validate,
> +};
> +
> +static int ipu_csi_registered(struct v4l2_subdev *sd)
> +{
> +       struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
> +       struct device_node *rpp;
> +
> +       /*
> +        * Add source subdevice to asynchronous subdevice waiting list.
> +        */
> +       rpp = of_graph_get_remote_port_parent(ipucsi->endpoint.base.local_node);
> +       if (rpp) {
> +               struct v4l2_async_subdev *asd;
> +
> +               asd = devm_kzalloc(sd->dev, sizeof(*asd), GFP_KERNEL);
> +               if (!asd) {
> +                       of_node_put(rpp);
> +                       return -ENOMEM;
> +               }
> +
> +               asd->match_type = V4L2_ASYNC_MATCH_OF;
> +               asd->match.of.node = rpp;
> +
> +               __v4l2_async_notifier_add_subdev(sd->notifier, asd);
> +       }
> +
> +       return 0;
> +}
> +
> +struct v4l2_subdev_internal_ops ipu_csi_internal_ops = {
> +       .registered = ipu_csi_registered,
> +};
> +
> +static int ipucsi_subdev_init(struct ipucsi *ipucsi, struct device_node *node)
> +{
> +       struct device_node *endpoint;
> +       int ret;
> +
> +       v4l2_subdev_init(&ipucsi->subdev, &ipucsi_subdev_ops);
> +       ipucsi->subdev.dev = ipucsi->dev;
> +
> +       snprintf(ipucsi->subdev.name, sizeof(ipucsi->subdev.name), "%s%d %s%d",
> +                "IPU", ipu_get_num(ipucsi->ipu), "CSI", ipucsi->id);
> +
> +       endpoint = of_get_next_child(node, NULL);
> +       if (endpoint)
> +               v4l2_of_parse_endpoint(endpoint, &ipucsi->endpoint);
> +
> +       ipucsi->subdev.entity.ops = &ipucsi_entity_ops;
> +
> +       ipucsi->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
> +       ipucsi->subdev.of_node = node;
> +
> +       ipucsi->subdev_pad[0].flags = MEDIA_PAD_FL_SINK;
> +       ipucsi->subdev_pad[1].flags = MEDIA_PAD_FL_SOURCE;
> +       ret = media_entity_pads_init(&ipucsi->subdev.entity, 2,
> +                                    ipucsi->subdev_pad);
> +       if (ret < 0)
> +               return ret;
> +
> +       ipucsi->subdev.internal_ops = &ipu_csi_internal_ops;
> +       ret = v4l2_async_register_subdev(&ipucsi->subdev);
> +       if (ret < 0) {
> +               dev_err(ipucsi->dev, "Failed to register CSI subdev \"%s\": %d\n",
> +                       ipucsi->subdev.name, ret);
> +       }
> +
> +       return ret;
> +}
> +
> +static u64 camera_mask = DMA_BIT_MASK(32);
> +
> +static int ipucsi_probe(struct platform_device *pdev)
> +{
> +       struct ipu_client_platformdata *pdata = pdev->dev.platform_data;
> +       struct ipu_soc *ipu = dev_get_drvdata(pdev->dev.parent);
> +       struct ipucsi *ipucsi;
> +       int ret;
> +       struct device_node *node;
> +
> +       pdev->dev.dma_mask = &camera_mask;
> +       pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
> +
> +       if (!pdata)
> +               return -EINVAL;
> +
> +       ipucsi = devm_kzalloc(&pdev->dev, sizeof(*ipucsi), GFP_KERNEL);
> +       if (!ipucsi)
> +               return -ENOMEM;
> +
> +       ipucsi->ipu = ipu;
> +       ipucsi->id = pdata->csi;
> +       ipucsi->dev = &pdev->dev;
> +
> +       node = of_graph_get_port_by_id(pdev->dev.parent->of_node, ipucsi->id);
> +       if (!node) {
> +               dev_err(&pdev->dev, "cannot find node port@%d\n", ipucsi->id);
> +               return -ENODEV;
> +       }
> +
> +       ipucsi->csi = ipu_csi_get(ipu, ipucsi->id);
> +       if (IS_ERR(ipucsi->csi))
> +               return PTR_ERR(ipucsi->csi);
> +
> +       ret = ipucsi_subdev_init(ipucsi, node);
> +       if (ret)
> +               return ret;
> +
> +       of_node_put(node);
> +
> +       platform_set_drvdata(pdev, ipucsi);
> +
> +       dev_info(&pdev->dev, "loaded\n");
> +
> +       return 0;
> +}
> +
> +static struct platform_driver ipucsi_driver = {
> +       .driver = {
> +               .name = DRIVER_NAME,
> +       },
> +       .probe = ipucsi_probe,
> +};
> +
> +static int __init ipucsi_init(void)
> +{
> +       return platform_driver_register(&ipucsi_driver);
> +}
> +
> +static void __exit ipucsi_exit(void)
> +{
> +       return platform_driver_unregister(&ipucsi_driver);
> +}
> +
> +subsys_initcall(ipucsi_init);
> +module_exit(ipucsi_exit);
> +
> +MODULE_DESCRIPTION("i.MX IPUv3 capture interface driver");
> +MODULE_AUTHOR("Sascha Hauer, Pengutronix");
> +MODULE_AUTHOR("Philipp Zabel, Pengutronix");
> +MODULE_LICENSE("GPL v2");
> +MODULE_ALIAS("platform:" DRIVER_NAME);
> --
> 2.9.3
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver
  2016-10-14 17:34 ` [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver Philipp Zabel
  2016-10-17 11:32   ` Jack Mitchell
@ 2016-11-08  7:21   ` Ying Liu
  1 sibling, 0 replies; 61+ messages in thread
From: Ying Liu @ 2016-11-08  7:21 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil,
	Gary Bisson, kernel, Sascha Hauer, Marc Kleine-Budde

Hi Philipp,

On Sat, Oct 15, 2016 at 1:34 AM, Philipp Zabel <p.zabel@pengutronix.de> wrote:
> This driver uses the IDMAC module's double buffering feature to do the
> processing of finished frames in the new frame acknowledge (NFACK)
> interrupt handler while the next frame is already being captured. This
> avoids a race condition between the end of frame interrupt and NFACK for
> very short blanking intervals, but it requires the driver to keep at
> least two buffers in flight. The last remaining frame will never be
> handed out to userspace until a new one is queued.
> The driver supports interlaced input and allows translating between
> sequential and interlaced field formats using the IDMAC scan order and
> interlace offset parameters.
> Currently the direct CSI -> SMFC -> IDMAC path is supported.
>
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> ---
> Changes since v1:
>  - Remove v4l2_media_subdev_prepare_stream and v4l2_media_subdev_s_stream,
>    subdevices will propagate s_stream calls to their upstream subdevices
>    themselves.
>  - Fix width/height to CSI output size
>  - Use colorspace provided by CSI output
>  - Implement enum/g/s/_input for v4l2-compliance
>  - Fix ipu_capture_g_parm to use the correct pad
> ---
>  drivers/media/platform/imx/Kconfig           |    9 +
>  drivers/media/platform/imx/Makefile          |    1 +
>  drivers/media/platform/imx/imx-ipu-capture.c | 1015 ++++++++++++++++++++++++++
>  drivers/media/platform/imx/imx-ipu.h         |    9 +
>  drivers/media/platform/imx/imx-ipuv3-csi.c   |   29 +-
>  5 files changed, 1061 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/media/platform/imx/imx-ipu-capture.c
>
> diff --git a/drivers/media/platform/imx/Kconfig b/drivers/media/platform/imx/Kconfig
> index a88c4f7..69e8648 100644
> --- a/drivers/media/platform/imx/Kconfig
> +++ b/drivers/media/platform/imx/Kconfig
> @@ -9,6 +9,15 @@ config MEDIA_IMX
>  config VIDEO_IMX_IPU_COMMON
>         tristate
>
> +config VIDEO_IMX_IPU_CAPTURE
> +       tristate "i.MX5/6 Video Capture driver"
> +       depends on IMX_IPUV3_CORE && VIDEO_V4L2_SUBDEV_API && MEDIA_IMX
> +       select VIDEOBUF2_DMA_CONTIG
> +       select VIDEO_IMX_IPU_COMMON
> +       select VIDEO_IMX_IPUV3
> +       help
> +         This is a v4l2 video capture driver for the IPUv3 on i.MX5/6.
> +
>  config VIDEO_IMX_IPU_CSI
>         tristate "i.MX5/6 CMOS Sensor Interface driver"
>         depends on VIDEO_DEV && IMX_IPUV3_CORE && MEDIA_IMX
> diff --git a/drivers/media/platform/imx/Makefile b/drivers/media/platform/imx/Makefile
> index 82a3616..919eaa1 100644
> --- a/drivers/media/platform/imx/Makefile
> +++ b/drivers/media/platform/imx/Makefile
> @@ -1,3 +1,4 @@
>  obj-$(CONFIG_MEDIA_IMX)                        += imx-media.o
>  obj-$(CONFIG_VIDEO_IMX_IPU_COMMON)     += imx-ipu.o
> +obj-$(CONFIG_VIDEO_IMX_IPU_CAPTURE)    += imx-ipu-capture.o
>  obj-$(CONFIG_VIDEO_IMX_IPU_CSI)                += imx-ipuv3-csi.o
> diff --git a/drivers/media/platform/imx/imx-ipu-capture.c b/drivers/media/platform/imx/imx-ipu-capture.c
> new file mode 100644
> index 0000000..1308c1e
> --- /dev/null
> +++ b/drivers/media/platform/imx/imx-ipu-capture.c
> @@ -0,0 +1,1015 @@
> +/*
> + * i.MX IPUv3 V4L2 Capture Driver
> + *
> + * Copyright (C) 2016, Pengutronix, Philipp Zabel <kernel@pengutronix.de>
> + *
> + * Based on code
> + * Copyright (C) 2006, Pengutronix, Sascha Hauer <kernel@pengutronix.de>
> + * Copyright (C) 2008, Guennadi Liakhovetski <kernel@pengutronix.de>
> + * Copyright (C) 2008, Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
> + * Copyright (C) 2009, Darius Augulis <augulis.darius@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +#include <linux/dma-mapping.h>
> +#include <linux/interrupt.h>
> +#include <linux/videodev2.h>
> +#include <linux/device.h>
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/mutex.h>
> +#include <linux/slab.h>
> +#include <linux/time.h>
> +
> +#include <video/imx-ipu-v3.h>
> +#include "imx-ipu.h"
> +
> +#include <linux/of_graph.h>
> +#include <media/videobuf2-dma-contig.h>
> +#include <media/v4l2-common.h>
> +#include <media/v4l2-ctrls.h>
> +#include <media/v4l2-dev.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/v4l2-mc.h>
> +#include <media/v4l2-of.h>
> +
> +#define DRIVER_NAME "imx-ipuv3-capture"
> +
> +/* buffer for one video frame */
> +struct ipu_capture_buffer {
> +       struct vb2_v4l2_buffer          vb;
> +       struct list_head                queue;
> +};
> +
> +struct ipu_capture {
> +       struct video_device             vdev;
> +
> +       struct device                   *dev;
> +       struct v4l2_fh                  fh;
> +       struct vb2_queue                vb2_vidq;
> +       struct media_pad                pad;
> +       struct media_pipeline           pipe;
> +       struct v4l2_format              format;
> +
> +       struct v4l2_subdev              *csi_sd;
> +       struct ipu_smfc                 *smfc;
> +       struct ipuv3_channel            *ipuch;
> +       struct ipu_soc                  *ipu;
> +       int                             id;
> +
> +       spinlock_t                      lock;
> +       struct mutex                    mutex;
> +
> +       /* The currently active buffer, set by NFACK and cleared by EOF interrupt */
> +       struct ipu_capture_buffer               *active;
> +       struct list_head                capture;
> +       int                             ilo;
> +       int                             sequence;
> +
> +       int                             done_count;
> +       int                             skip_count;
> +};
> +
> +
> +static struct ipu_capture_buffer *to_ipu_capture_buffer(struct vb2_buffer *vb)
> +{
> +       return container_of(vb, struct ipu_capture_buffer, vb.vb2_buf);
> +}
> +
> +static inline void ipu_capture_set_inactive_buffer(struct ipu_capture *priv,
> +                                                  struct vb2_buffer *vb)
> +{
> +       int bufptr = !ipu_idmac_get_current_buffer(priv->ipuch);
> +       dma_addr_t eba = vb2_dma_contig_plane_dma_addr(vb, 0);
> +
> +       if (priv->ilo < 0)
> +               eba -= priv->ilo;
> +
> +       ipu_cpmem_set_buffer(priv->ipuch, bufptr, eba);
> +       ipu_idmac_select_buffer(priv->ipuch, bufptr);
> +}
> +
> +static irqreturn_t ipu_capture_new_frame_handler(int irq, void *context)
> +{
> +       struct ipu_capture *priv = context;
> +       struct ipu_capture_buffer *buf;
> +       struct vb2_v4l2_buffer *vb;
> +       unsigned long flags;
> +
> +       /* The IDMAC just started to write pixel data into the current buffer */
> +
> +       spin_lock_irqsave(&priv->lock, flags);
> +
> +       /*
> +        * If there is a previously active frame, mark it as done to hand it off
> +        * to userspace. Or, if there are no further frames queued, hold on to it.
> +        */
> +       if (priv->active) {
> +               vb = &priv->active->vb;
> +               buf = to_ipu_capture_buffer(&vb->vb2_buf);
> +
> +               if (vb2_is_streaming(vb->vb2_buf.vb2_queue) &&
> +                   list_is_singular(&priv->capture)) {
> +                       pr_debug("%s: reusing 0x%08x\n", __func__,
> +                               vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0));
> +                       /* DEBUG: check if buf == EBA(active) */
> +               } else {
> +                       /* Otherwise, mark buffer as finished */
> +                       list_del_init(&buf->queue);
> +
> +                       vb2_buffer_done(&vb->vb2_buf, VB2_BUF_STATE_DONE);
> +                       priv->done_count++;
> +               }
> +       }
> +
> +       if (list_empty(&priv->capture))
> +               goto out;
> +
> +       priv->active = list_first_entry(&priv->capture,
> +                                          struct ipu_capture_buffer, queue);
> +       vb = &priv->active->vb;
> +       vb->vb2_buf.timestamp = ktime_get_ns();
> +       vb->field = priv->format.fmt.pix.field;
> +       vb->sequence = priv->sequence++;
> +
> +       /*
> +        * Point the inactive buffer address to the next queued buffer,
> +        * if available. Otherwise, prepare to reuse the currently active
> +        * buffer, unless ipu_capture_buf_queue gets called in time.
> +        */
> +       if (!list_is_singular(&priv->capture)) {
> +               buf = list_entry(priv->capture.next->next,
> +                                struct ipu_capture_buffer, queue);
> +               vb = &buf->vb;
> +       }
> +       ipu_capture_set_inactive_buffer(priv, &vb->vb2_buf);
> +out:
> +       spin_unlock_irqrestore(&priv->lock, flags);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static int ipu_capture_get_resources(struct ipu_capture *priv)
> +{
> +       struct device *dev = priv->dev;
> +       int channel = priv->id * 2;
> +       int ret;
> +
> +       priv->ipuch = ipu_idmac_get(priv->ipu, channel);
> +       if (IS_ERR(priv->ipuch)) {
> +               ret = PTR_ERR(priv->ipuch);
> +               dev_err(dev, "Failed to get IDMAC channel %d: %d\n", channel,
> +                       ret);
> +               priv->ipuch = NULL;
> +               return ret;
> +       }
> +
> +       priv->smfc = ipu_smfc_get(priv->ipu, channel);
> +       if (!priv->smfc) {
> +               dev_err(dev, "Failed to get SMFC channel %d\n", channel);
> +               ret = -EBUSY;
> +               goto err;
> +       }
> +
> +       return 0;
> +err:
> +       ipu_idmac_put(priv->ipuch);
> +       return ret;
> +}
> +
> +static void ipu_capture_put_resources(struct ipu_capture *priv)
> +{
> +       if (priv->ipuch) {
> +               ipu_idmac_put(priv->ipuch);
> +               priv->ipuch = NULL;
> +       }
> +
> +       if (priv->smfc) {
> +               ipu_smfc_put(priv->smfc);
> +               priv->smfc = NULL;
> +       }
> +}
> +
> +/*
> + *  Videobuf operations
> + */
> +static int ipu_capture_queue_setup(struct vb2_queue *vq, unsigned int *count,
> +                                  unsigned int *num_planes, unsigned int sizes[],
> +                                  struct device *alloc_devs[])
> +{
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +
> +       priv->sequence = 0;
> +
> +       if (!*count)
> +               *count = 32;
> +       *num_planes = 1;
> +       sizes[0] = priv->format.fmt.pix.sizeimage;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_buf_prepare(struct vb2_buffer *vb)
> +{
> +       struct ipu_capture *priv = vb2_get_drv_priv(vb->vb2_queue);
> +       struct v4l2_pix_format *pix = &priv->format.fmt.pix;
> +
> +       if (vb2_plane_size(vb, 0) < pix->sizeimage)
> +               return -ENOBUFS;
> +
> +       vb2_set_plane_payload(vb, 0, pix->sizeimage);
> +
> +       return 0;
> +}
> +
> +static void ipu_capture_buf_queue(struct vb2_buffer *vb)
> +{
> +       struct vb2_queue *vq = vb->vb2_queue;
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +       struct ipu_capture_buffer *buf = to_ipu_capture_buffer(vb);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&priv->lock, flags);
> +
> +       /*
> +        * If there is no next buffer queued, point the inactive buffer
> +        * address to the incoming buffer
> +        */
> +       if (vb2_is_streaming(vb->vb2_queue) &&
> +           list_is_singular(&priv->capture))
> +               ipu_capture_set_inactive_buffer(priv, vb);
> +
> +       list_add_tail(&buf->queue, &priv->capture);
> +
> +       spin_unlock_irqrestore(&priv->lock, flags);
> +}
> +
> +static void ipu_capture_buf_cleanup(struct vb2_buffer *vb)
> +{
> +       struct vb2_queue *vq = vb->vb2_queue;
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +       struct ipu_capture_buffer *buf = to_ipu_capture_buffer(vb);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&priv->lock, flags);
> +
> +       if (priv->active == buf)
> +               priv->active = NULL;
> +
> +       if (!list_empty(&buf->queue))
> +               list_del_init(&buf->queue);
> +
> +       spin_unlock_irqrestore(&priv->lock, flags);
> +
> +       ipu_capture_put_resources(priv);
> +}
> +
> +static int ipu_capture_buf_init(struct vb2_buffer *vb)
> +{
> +       struct ipu_capture_buffer *buf = to_ipu_capture_buffer(vb);
> +
> +       /* This is for locking debugging only */
> +       INIT_LIST_HEAD(&buf->queue);
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_start_streaming(struct vb2_queue *vq, unsigned int count)
> +{
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +       struct v4l2_subdev *csi_sd = priv->csi_sd;
> +       u32 width = priv->format.fmt.pix.width;
> +       u32 height = priv->format.fmt.pix.height;
> +       struct device *dev = priv->dev;
> +       int burstsize;
> +       struct ipu_capture_buffer *buf;
> +       int nfack_irq;
> +       int ret;
> +       const char *irq_name[2] = { "CSI0", "CSI1" };
> +       bool raw;
> +
> +       ret = ipu_capture_get_resources(priv);
> +       if (ret < 0) {
> +               dev_err(dev, "Failed to get resources: %d\n", ret);
> +               goto err_dequeue;
> +       }
> +
> +       ipu_cpmem_zero(priv->ipuch);
> +
> +       nfack_irq = ipu_idmac_channel_irq(priv->ipu, priv->ipuch,
> +                                         IPU_IRQ_NFACK);
> +       ret = request_threaded_irq(nfack_irq, NULL,
> +                                  ipu_capture_new_frame_handler, IRQF_ONESHOT,
> +                                  irq_name[priv->id], priv);
> +       if (ret) {
> +               dev_err(dev, "Failed to request NFACK interrupt: %d\n", nfack_irq);
> +               goto put_resources;
> +       }
> +
> +       dev_dbg(dev, "width: %d height: %d, %.4s\n",
> +               width, height, (char *)&priv->format.fmt.pix.pixelformat);
> +
> +       ipu_cpmem_set_resolution(priv->ipuch, width, height);
> +
> +       raw = false;
> +
> +       if (raw && priv->smfc) {
> +               /*
> +                * raw formats. We can only pass them through to memory
> +                */
> +               u32 fourcc = priv->format.fmt.pix.pixelformat;
> +               int bytes;
> +
> +               switch (fourcc) {
> +               case V4L2_PIX_FMT_GREY:
> +                       bytes = 1;
> +                       break;
> +               case V4L2_PIX_FMT_Y10:
> +               case V4L2_PIX_FMT_Y16:
> +               case V4L2_PIX_FMT_UYVY:
> +               case V4L2_PIX_FMT_YUYV:
> +                       bytes = 2;
> +                       break;
> +               }
> +
> +               ipu_cpmem_set_stride(priv->ipuch, width * bytes);
> +               ipu_cpmem_set_format_passthrough(priv->ipuch, bytes * 8);
> +               /*
> +                * According to table 37-727 (SMFC Burst Size), burstsize should
> +                * be set to NBP[6:4] for PFS == 6. Unfortunately, with a 16-bit
> +                * bus any value below 4 doesn't produce proper images.
> +                */
> +               burstsize = (64 / bytes) >> 3;
> +       } else {
> +               /*
> +                * formats we understand, we can write it in any format not requiring
> +                * colorspace conversion.
> +                */
> +               u32 fourcc = priv->format.fmt.pix.pixelformat;
> +
> +               switch (fourcc) {
> +               case V4L2_PIX_FMT_RGB32:
> +                       ipu_cpmem_set_stride(priv->ipuch, width * 4);
> +                       ipu_cpmem_set_fmt(priv->ipuch, fourcc);
> +                       break;
> +               case V4L2_PIX_FMT_UYVY:
> +               case V4L2_PIX_FMT_YUYV:
> +                       ipu_cpmem_set_stride(priv->ipuch, width * 2);
> +                       ipu_cpmem_set_yuv_interleaved(priv->ipuch, fourcc);
> +                       break;
> +               case V4L2_PIX_FMT_YUV420:
> +               case V4L2_PIX_FMT_YVU420:
> +               case V4L2_PIX_FMT_NV12:
> +               case V4L2_PIX_FMT_YUV422P:
> +                       ipu_cpmem_set_stride(priv->ipuch, width);
> +                       ipu_cpmem_set_fmt(priv->ipuch, fourcc);
> +                       ipu_cpmem_set_yuv_planar(priv->ipuch, fourcc,
> +                                                width, height);
> +                       burstsize = 16;
> +                       break;
> +               default:
> +                       dev_err(dev, "invalid color format: %4.4s\n",
> +                               (char *)&fourcc);
> +                       ret = -EINVAL;
> +                       goto free_irq;
> +               }
> +       }
> +
> +       if (priv->ilo)
> +               ipu_cpmem_interlaced_scan(priv->ipuch, priv->ilo);
> +
> +       if (priv->smfc) {
> +               /*
> +                * Set the channel for the direct CSI-->memory via SMFC
> +                * use-case to very high priority, by enabling the watermark
> +                * signal in the SMFC, enabling WM in the channel, and setting
> +                * the channel priority to high.
> +                *
> +                * Refer to the iMx6 rev. D TRM Table 36-8: Calculated priority
> +                * value.
> +                *
> +                * The WM's are set very low by intention here to ensure that
> +                * the SMFC FIFOs do not overflow.
> +                */
> +               ipu_smfc_set_watermark(priv->smfc, 2, 1);
> +               ipu_idmac_enable_watermark(priv->ipuch, true);
> +               ipu_cpmem_set_high_priority(priv->ipuch);
> +
> +               /* Superfluous due to call to ipu_cpmem_zero above */
> +               ipu_cpmem_set_axi_id(priv->ipuch, 0);
> +
> +               ipu_smfc_set_burstsize(priv->smfc, burstsize - 1);
> +               ipu_smfc_map_channel(priv->smfc, priv->id * 2, 0);
> +       }
> +
> +       /* Set the media pipeline to streaming state */
> +       ret = media_entity_pipeline_start(&csi_sd->entity, &priv->pipe);
> +       if (ret) {
> +               dev_err(dev, "Failed to start external media pipeline\n");
> +               goto stop_pipe;
> +       }
> +
> +       ipu_idmac_set_double_buffer(priv->ipuch, 1);
> +
> +       if (list_empty(&priv->capture)) {
> +               dev_err(dev, "No capture buffers\n");
> +               ret = -ENOMEM;
> +               goto stop_pipe;
> +       }
> +
> +       priv->active = NULL;
> +
> +       /* Point the inactive buffer address to the first buffer */
> +       buf = list_first_entry(&priv->capture, struct ipu_capture_buffer, queue);
> +       ipu_capture_set_inactive_buffer(priv, &buf->vb.vb2_buf);
> +
> +       ipu_idmac_enable_channel(priv->ipuch);
> +
> +       if (priv->smfc)
> +               ipu_smfc_enable(priv->smfc);
> +
> +
> +       ret = v4l2_subdev_call(priv->csi_sd, video, s_stream, 1);
> +       if (ret) {
> +               dev_err(dev, "Failed to start streaming: %d\n", ret);
> +               goto stop_pipe;
> +       }
> +
> +       return 0;
> +
> +stop_pipe:
> +       media_entity_pipeline_stop(&csi_sd->entity);
> +free_irq:
> +       free_irq(nfack_irq, priv);
> +put_resources:
> +       ipu_capture_put_resources(priv);
> +err_dequeue:
> +       while (!list_empty(&vq->queued_list)) {
> +               struct vb2_v4l2_buffer *buf;
> +
> +               buf = to_vb2_v4l2_buffer(list_first_entry(&vq->queued_list,
> +                                                         struct vb2_buffer,
> +                                                         queued_entry));
> +               list_del(&buf->vb2_buf.queued_entry);
> +               vb2_buffer_done(&buf->vb2_buf, VB2_BUF_STATE_QUEUED);
> +       }
> +       return ret;
> +}
> +
> +static void ipu_capture_stop_streaming(struct vb2_queue *vq)
> +{
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +       unsigned long flags;
> +       int nfack_irq = ipu_idmac_channel_irq(priv->ipu, priv->ipuch,
> +                                             IPU_IRQ_NFACK);
> +
> +       free_irq(nfack_irq, priv);
> +
> +       v4l2_subdev_call(priv->csi_sd, video, s_stream, 0);
> +       ipu_idmac_disable_channel(priv->ipuch);
> +       if (priv->smfc)
> +               ipu_smfc_disable(priv->smfc);
> +
> +       spin_lock_irqsave(&priv->lock, flags);
> +       while (!list_empty(&priv->capture)) {
> +               struct ipu_capture_buffer *buf = list_entry(priv->capture.next,
> +                                                struct ipu_capture_buffer, queue);
> +               list_del_init(priv->capture.next);
> +               vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
> +       }
> +       spin_unlock_irqrestore(&priv->lock, flags);
> +
> +       media_entity_pipeline_stop(&priv->csi_sd->entity);
> +
> +       ipu_capture_put_resources(priv);
> +}
> +
> +static void ipu_capture_lock(struct vb2_queue *vq)
> +{
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +
> +       mutex_lock(&priv->mutex);
> +}
> +
> +static void ipu_capture_unlock(struct vb2_queue *vq)
> +{
> +       struct ipu_capture *priv = vb2_get_drv_priv(vq);
> +
> +       mutex_unlock(&priv->mutex);
> +}
> +
> +static struct vb2_ops ipu_capture_vb2_ops = {
> +       .queue_setup            = ipu_capture_queue_setup,
> +       .buf_prepare            = ipu_capture_buf_prepare,
> +       .buf_queue              = ipu_capture_buf_queue,
> +       .buf_cleanup            = ipu_capture_buf_cleanup,
> +       .buf_init               = ipu_capture_buf_init,
> +       .start_streaming        = ipu_capture_start_streaming,
> +       .stop_streaming         = ipu_capture_stop_streaming,
> +       .wait_prepare           = ipu_capture_unlock,
> +       .wait_finish            = ipu_capture_lock,
> +};
> +
> +static int ipu_capture_querycap(struct file *file, void *priv,
> +                                       struct v4l2_capability *cap)
> +{
> +       strlcpy(cap->driver, "imx-ipuv3-capture", sizeof(cap->driver));
> +       /* cap->name is set by the friendly caller:-> */
> +       strlcpy(cap->card, "imx-ipuv3-csi", sizeof(cap->card));
> +       strlcpy(cap->bus_info, "platform:imx-ipuv3-capture", sizeof(cap->bus_info));
> +       cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
> +       cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_try_fmt(struct file *file, void *fh,
> +                              struct v4l2_format *f)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       struct v4l2_subdev_format sd_fmt;
> +       struct ipu_fmt *fmt = NULL;
> +       enum v4l2_field in, out;
> +       int bytes_per_pixel;
> +       int ret;
> +
> +       sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
> +       sd_fmt.pad = 1;
> +       ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
> +       if (ret)
> +               return ret;
> +
> +       in = sd_fmt.format.field;
> +       out = f->fmt.pix.field;
> +
> +       switch (sd_fmt.format.code) {
> +       case MEDIA_BUS_FMT_FIXED:
> +               fmt = ipu_find_fmt_rgb(f->fmt.pix.pixelformat);
> +               if (!fmt)
> +                       return -EINVAL;
> +               bytes_per_pixel = fmt->bytes_per_pixel;
> +               break;
> +       case MEDIA_BUS_FMT_UYVY8_2X8:
> +       case MEDIA_BUS_FMT_YUYV8_2X8:
> +               fmt = ipu_find_fmt_yuv(f->fmt.pix.pixelformat);
> +               if (!fmt)
> +                       return -EINVAL;
> +               bytes_per_pixel = fmt->bytes_per_pixel;
> +               break;
> +       case MEDIA_BUS_FMT_Y8_1X8:
> +               f->fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;
> +               bytes_per_pixel = 1;
> +               break;
> +       case MEDIA_BUS_FMT_UYVY8_1X16:
> +               f->fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
> +               bytes_per_pixel = 2;
> +               break;
> +       case MEDIA_BUS_FMT_YUYV8_1X16:
> +               f->fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
> +               bytes_per_pixel = 2;
> +               break;
> +       default:
> +               return -EINVAL;
> +       }
> +
> +       f->fmt.pix.width = round_up(sd_fmt.format.width, 8);
> +       f->fmt.pix.height = round_up(sd_fmt.format.height, 2);
> +       f->fmt.pix.bytesperline = f->fmt.pix.width * bytes_per_pixel;
> +       f->fmt.pix.sizeimage = f->fmt.pix.bytesperline * f->fmt.pix.height;
> +       if (fmt) {
> +               if (fmt->fourcc == V4L2_PIX_FMT_YUV420 ||
> +                   fmt->fourcc == V4L2_PIX_FMT_YVU420 ||
> +                   fmt->fourcc == V4L2_PIX_FMT_NV12)
> +                       f->fmt.pix.sizeimage = f->fmt.pix.sizeimage * 3 / 2;
> +               else if (fmt->fourcc == V4L2_PIX_FMT_YUV422P)
> +                       f->fmt.pix.sizeimage *= 2;
> +       }
> +
> +       if ((in == V4L2_FIELD_SEQ_TB && out == V4L2_FIELD_INTERLACED_TB) ||
> +           (in == V4L2_FIELD_INTERLACED_TB && out == V4L2_FIELD_SEQ_TB) ||
> +           (in == V4L2_FIELD_SEQ_BT && out == V4L2_FIELD_INTERLACED_BT) ||
> +           (in == V4L2_FIELD_INTERLACED_BT && out == V4L2_FIELD_SEQ_BT)) {
> +               /*
> +                * IDMAC scan order can be used for translation between
> +                * interlaced and sequential field formats.
> +                */
> +       } else if (out == V4L2_FIELD_NONE || out == V4L2_FIELD_INTERLACED) {
> +               /*
> +                * If userspace requests progressive or interlaced frames,
> +                * interlace sequential fields as closest approximation.
> +                */
> +               if (in == V4L2_FIELD_SEQ_TB)
> +                       out = V4L2_FIELD_INTERLACED_TB;
> +               else if (in == V4L2_FIELD_SEQ_BT)
> +                       out = V4L2_FIELD_INTERLACED_BT;
> +               else
> +                       out = in;
> +       } else {
> +               /* Translation impossible or userspace doesn't care */
> +               out = in;
> +       }
> +       f->fmt.pix.field = out;
> +
> +       if (sd_fmt.format.colorspace)
> +               f->fmt.pix.colorspace = sd_fmt.format.colorspace;
> +       else if (f->fmt.pix.colorspace == V4L2_COLORSPACE_DEFAULT)
> +               f->fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_s_fmt(struct file *file, void *fh,
> +               struct v4l2_format *f)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       struct v4l2_subdev_format sd_fmt;
> +       enum v4l2_field in, out;
> +       int ret;
> +
> +       sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
> +       sd_fmt.pad = 1;
> +       ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
> +       if (ret)
> +               return ret;
> +
> +       ret = ipu_capture_try_fmt(file, fh, f);
> +       if (ret)
> +               return ret;
> +
> +       priv->format = *f;
> +
> +       /*
> +        * Set IDMAC scan order interlace offset (ILO) for translation between
> +        * interlaced and sequential field formats.
> +        */
> +       in = sd_fmt.format.field;
> +       out = f->fmt.pix.field;
> +       if ((in == V4L2_FIELD_SEQ_TB && out == V4L2_FIELD_INTERLACED_TB) ||
> +           (in == V4L2_FIELD_INTERLACED_TB && out == V4L2_FIELD_SEQ_TB))
> +               priv->ilo = f->fmt.pix.bytesperline;
> +       else if ((in == V4L2_FIELD_SEQ_BT && out == V4L2_FIELD_INTERLACED_BT) ||
> +                (in == V4L2_FIELD_INTERLACED_BT && out == V4L2_FIELD_SEQ_BT))
> +               priv->ilo = -f->fmt.pix.bytesperline;
> +       else
> +               priv->ilo = 0;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_g_fmt(struct file *file, void *fh, struct v4l2_format *f)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +
> +       if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
> +               return -EINVAL;
> +
> +       *f = priv->format;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_enum_input(struct file *file, void *fh,
> +                                 struct v4l2_input *i)
> +{
> +       if (i->index != 0)
> +               return -EINVAL;
> +
> +       strcpy(i->name, "CSI");
> +       i->type = V4L2_INPUT_TYPE_CAMERA;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_g_input(struct file *file, void *fh, unsigned int *i)
> +{
> +       *i = 0;
> +       return 0;
> +}
> +
> +static int ipu_capture_s_input(struct file *file, void *fh, unsigned int i)
> +{
> +       if (i != 0)
> +               return -EINVAL;
> +       return 0;
> +}
> +
> +static int ipu_capture_enum_fmt(struct file *file, void *fh,
> +                               struct v4l2_fmtdesc *f)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       struct v4l2_subdev_format sd_fmt;
> +       u32 fourcc;
> +       int ret;
> +
> +       sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
> +       sd_fmt.pad = 1;
> +       ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
> +       if (ret)
> +               return ret;
> +
> +       switch (sd_fmt.format.code) {
> +       case V4L2_PIX_FMT_RGB32:
> +               return ipu_enum_fmt_rgb(file, priv, f);
> +       case MEDIA_BUS_FMT_UYVY8_2X8:
> +       case MEDIA_BUS_FMT_YUYV8_2X8:
> +               return ipu_enum_fmt_yuv(file, priv, f);
> +       case MEDIA_BUS_FMT_Y8_1X8:
> +               fourcc = V4L2_PIX_FMT_GREY;
> +               break;
> +       case MEDIA_BUS_FMT_UYVY8_1X16:
> +               fourcc = V4L2_PIX_FMT_UYVY;
> +               break;
> +       case MEDIA_BUS_FMT_YUYV8_1X16:
> +               fourcc = V4L2_PIX_FMT_YUYV;
> +               break;
> +       default:
> +               return -EINVAL;
> +       }
> +
> +       if (f->index)
> +               return -EINVAL;
> +
> +       f->pixelformat = fourcc;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_link_setup(struct media_entity *entity,
> +                                 const struct media_pad *local,
> +                                 const struct media_pad *remote, u32 flags)
> +{
> +       struct video_device *vdev = media_entity_to_video_device(entity);
> +       struct ipu_capture *priv = container_of(vdev, struct ipu_capture, vdev);
> +
> +       if (priv->smfc)
> +               ipu_smfc_map_channel(priv->smfc, priv->id * 2, 0);
> +
> +       return 0;
> +}
> +
> +struct media_entity_operations ipu_capture_entity_ops = {
> +       .link_setup = ipu_capture_link_setup,
> +};
> +
> +static int ipu_capture_open(struct file *file)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       int ret;
> +
> +       ret = v4l2_pipeline_pm_use(&priv->vdev.entity, 1);
> +       if (ret)
> +               return ret;
> +
> +       mutex_lock(&priv->mutex);
> +       ret = v4l2_fh_open(file);
> +       mutex_unlock(&priv->mutex);
> +
> +       return ret;
> +}
> +
> +static int ipu_capture_release(struct file *file)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +
> +       v4l2_pipeline_pm_use(&priv->vdev.entity, 0);
> +
> +       if (v4l2_fh_is_singular_file(file))
> +               vb2_fop_release(file);
> +       else
> +               v4l2_fh_release(file);
> +
> +       return 0;
> +}
> +
> +static const struct v4l2_file_operations ipu_capture_fops = {
> +       .owner          = THIS_MODULE,
> +       .open           = ipu_capture_open,
> +       .release        = ipu_capture_release,
> +       .unlocked_ioctl = video_ioctl2,
> +       .mmap           = vb2_fop_mmap,
> +       .poll           = vb2_fop_poll,
> +};
> +
> +static int ipu_capture_g_parm(struct file *file, void *fh,
> +                        struct v4l2_streamparm *sp)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       struct v4l2_subdev *sd = priv->csi_sd;
> +       struct v4l2_subdev_frame_interval fi;
> +
> +       if (sp->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
> +               return -EINVAL;
> +
> +       sp->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
> +       fi.pad = 1;
> +       v4l2_subdev_call(sd, video, g_frame_interval, &fi);
> +       sp->parm.capture.timeperframe = fi.interval;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_s_parm(struct file *file, void *fh,
> +                        struct v4l2_streamparm *sp)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       struct v4l2_subdev *sd = priv->csi_sd;
> +       struct v4l2_subdev_frame_interval fi;
> +
> +       if (sp->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
> +               return -EINVAL;
> +
> +       fi.pad = 1;
> +       fi.interval = sp->parm.capture.timeperframe;
> +       v4l2_subdev_call(sd, video, s_frame_interval, &fi);
> +       v4l2_subdev_call(sd, video, g_frame_interval, &fi);
> +       sp->parm.capture.timeperframe = fi.interval;
> +
> +       return 0;
> +}
> +
> +static int ipu_capture_enum_framesizes(struct file *file, void *fh,
> +                                      struct v4l2_frmsizeenum *fsize)
> +{
> +       struct ipu_capture *priv = video_drvdata(file);
> +       struct v4l2_subdev_format sd_fmt;
> +       struct ipu_fmt *fmt = NULL;
> +       int ret;
> +
> +       if (fsize->index != 0)
> +               return -EINVAL;
> +
> +       sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
> +       sd_fmt.pad = 1;
> +       ret = v4l2_subdev_call(priv->csi_sd, pad, get_fmt, NULL, &sd_fmt);
> +       if (ret)
> +               return ret;
> +
> +       switch (sd_fmt.format.code) {
> +       case V4L2_PIX_FMT_RGB32:
> +               fmt = ipu_find_fmt_rgb(fsize->pixel_format);
> +               if (!fmt)
> +                       return -EINVAL;
> +               break;
> +       case MEDIA_BUS_FMT_UYVY8_2X8:
> +       case MEDIA_BUS_FMT_YUYV8_2X8:
> +               fmt = ipu_find_fmt_yuv(fsize->pixel_format);
> +               if (!fmt)
> +                       return -EINVAL;
> +               break;
> +       case MEDIA_BUS_FMT_Y8_1X8:
> +               if (fsize->pixel_format != V4L2_PIX_FMT_GREY)
> +                       return -EINVAL;
> +               break;
> +       case MEDIA_BUS_FMT_Y10_1X10:
> +               if (fsize->pixel_format != V4L2_PIX_FMT_Y10)
> +                       return -EINVAL;
> +               break;
> +       case MEDIA_BUS_FMT_Y12_1X12:
> +               if (fsize->pixel_format != V4L2_PIX_FMT_Y16)
> +                       return -EINVAL;
> +               break;
> +       case MEDIA_BUS_FMT_UYVY8_1X16:
> +               if (fsize->pixel_format != V4L2_PIX_FMT_UYVY)
> +                       return -EINVAL;
> +               break;
> +       case MEDIA_BUS_FMT_YUYV8_1X16:
> +               if (fsize->pixel_format != V4L2_PIX_FMT_YUYV)
> +                       return -EINVAL;
> +               break;
> +       }
> +
> +       fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
> +       fsize->discrete.width = sd_fmt.format.width;
> +       fsize->discrete.height = sd_fmt.format.height;
> +
> +       return 0;
> +}
> +
> +static const struct v4l2_ioctl_ops ipu_capture_ioctl_ops = {
> +       .vidioc_querycap                = ipu_capture_querycap,
> +
> +       .vidioc_enum_fmt_vid_cap        = ipu_capture_enum_fmt,
> +       .vidioc_try_fmt_vid_cap         = ipu_capture_try_fmt,
> +       .vidioc_s_fmt_vid_cap           = ipu_capture_s_fmt,
> +       .vidioc_g_fmt_vid_cap           = ipu_capture_g_fmt,
> +
> +       .vidioc_create_bufs             = vb2_ioctl_create_bufs,
> +       .vidioc_reqbufs                 = vb2_ioctl_reqbufs,
> +       .vidioc_querybuf                = vb2_ioctl_querybuf,
> +
> +       .vidioc_qbuf                    = vb2_ioctl_qbuf,
> +       .vidioc_dqbuf                   = vb2_ioctl_dqbuf,
> +       .vidioc_expbuf                  = vb2_ioctl_expbuf,
> +
> +       .vidioc_streamon                = vb2_ioctl_streamon,
> +       .vidioc_streamoff               = vb2_ioctl_streamoff,
> +
> +       .vidioc_enum_input              = ipu_capture_enum_input,
> +       .vidioc_g_input                 = ipu_capture_g_input,
> +       .vidioc_s_input                 = ipu_capture_s_input,
> +
> +       .vidioc_g_parm                  = ipu_capture_g_parm,
> +       .vidioc_s_parm                  = ipu_capture_s_parm,
> +
> +       .vidioc_enum_framesizes         = ipu_capture_enum_framesizes,
> +};
> +
> +static int ipu_capture_vb2_init(struct ipu_capture *priv)
> +{
> +       struct vb2_queue *q = &priv->vb2_vidq;
> +
> +       q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
> +       q->io_modes = VB2_MMAP | VB2_DMABUF;
> +       q->drv_priv = priv;
> +       q->ops = &ipu_capture_vb2_ops;
> +       q->mem_ops = &vb2_dma_contig_memops;
> +       q->buf_struct_size = sizeof(struct ipu_capture_buffer);
> +       q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> +       q->dev = priv->dev;
> +
> +       return vb2_queue_init(q);
> +}
> +
> +struct ipu_capture *ipu_capture_create(struct device *dev, struct ipu_soc *ipu,
> +                                      int id, struct v4l2_subdev *sd,
> +                                      int pad_index)
> +{
> +       struct video_device *vdev;
> +       struct ipu_capture *priv;
> +       struct media_link *link;
> +       int ret;
> +
> +       priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> +       if (!priv)
> +               return ERR_PTR(-ENOMEM);
> +
> +       priv->dev = dev;
> +       priv->ipu = ipu;
> +       priv->csi_sd = sd;
> +
> +       INIT_LIST_HEAD(&priv->capture);
> +       spin_lock_init(&priv->lock);
> +       mutex_init(&priv->mutex);
> +
> +       ret = ipu_capture_vb2_init(priv);
> +       if (ret < 0)
> +               return ERR_PTR(ret);
> +
> +       vdev = &priv->vdev;
> +
> +       snprintf(vdev->name, sizeof(vdev->name), DRIVER_NAME ".%d", id);
> +       vdev->release   = video_device_release_empty;
> +       vdev->fops      = &ipu_capture_fops;
> +       vdev->ioctl_ops = &ipu_capture_ioctl_ops;
> +       vdev->v4l2_dev  = sd->v4l2_dev;
> +       vdev->minor     = -1;
> +       vdev->release   = video_device_release_empty;

This line is redundant.

Regards,
Liu Ying

> +       vdev->lock      = &priv->mutex;
> +       vdev->queue     = &priv->vb2_vidq;
> +
> +       priv->format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB32;
> +       priv->format.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
> +
> +       video_set_drvdata(vdev, priv);
> +
> +       priv->pad.flags = MEDIA_PAD_FL_SINK;
> +       vdev->entity.ops = &ipu_capture_entity_ops;
> +
> +       ret = media_entity_pads_init(&vdev->entity, 1, &priv->pad);
> +       if (ret < 0)
> +               return ERR_PTR(ret);
> +
> +       ret = video_register_device(&priv->vdev, VFL_TYPE_GRABBER, -1);
> +       if (ret)
> +               media_entity_cleanup(&vdev->entity);
> +
> +       ret = media_create_pad_link(&sd->entity, pad_index,
> +                                   &vdev->entity, 0, 0);
> +       if (ret < 0) {
> +               v4l2_err(sd->v4l2_dev,
> +                        "failed to create link for '%s':%d: %d\n",
> +                        sd->entity.name, pad_index, ret);
> +               return ERR_PTR(ret);
> +       }
> +
> +       link = media_entity_find_link(&sd->entity.pads[pad_index],
> +                                     &vdev->entity.pads[0]);
> +       media_entity_setup_link(link, MEDIA_LNK_FL_ENABLED);
> +
> +       return priv;
> +}
> +EXPORT_SYMBOL_GPL(ipu_capture_create);
> +
> +void ipu_capture_destroy(struct ipu_capture *priv)
> +{
> +       video_unregister_device(&priv->vdev);
> +       media_entity_cleanup(&priv->vdev.entity);
> +       ipu_capture_put_resources(priv);
> +
> +       kfree(priv);
> +}
> +EXPORT_SYMBOL_GPL(ipu_capture_destroy);
> diff --git a/drivers/media/platform/imx/imx-ipu.h b/drivers/media/platform/imx/imx-ipu.h
> index 7b344b6..3690915 100644
> --- a/drivers/media/platform/imx/imx-ipu.h
> +++ b/drivers/media/platform/imx/imx-ipu.h
> @@ -31,4 +31,13 @@ int ipu_g_fmt(struct v4l2_format *f, struct v4l2_pix_format *pix);
>  int ipu_enum_framesizes(struct file *file, void *fh,
>                         struct v4l2_frmsizeenum *fsize);
>
> +struct device;
> +struct ipu_soc;
> +struct v4l2_subdev;
> +
> +struct ipu_capture *ipu_capture_create(struct device *dev, struct ipu_soc *ipu,
> +                                      int id, struct v4l2_subdev *sd,
> +                                      int pad_index);
> +void ipu_capture_destroy(struct ipu_capture *capture);
> +
>  #endif /* __MEDIA_IMX_IPU_H */
> diff --git a/drivers/media/platform/imx/imx-ipuv3-csi.c b/drivers/media/platform/imx/imx-ipuv3-csi.c
> index 83e0511..7837978 100644
> --- a/drivers/media/platform/imx/imx-ipuv3-csi.c
> +++ b/drivers/media/platform/imx/imx-ipuv3-csi.c
> @@ -61,6 +61,8 @@ struct ipucsi {
>         struct media_pad                subdev_pad[2];
>         struct v4l2_mbus_framefmt       format_mbus[2];
>         struct v4l2_fract               timeperframe[2];
> +
> +       struct ipu_capture              *capture;
>  };
>
>  static int ipu_csi_get_mbus_config(struct ipucsi *ipucsi,
> @@ -266,7 +268,7 @@ static int ipucsi_subdev_g_frame_interval(struct v4l2_subdev *subdev,
>  {
>         struct ipucsi *ipucsi = container_of(subdev, struct ipucsi, subdev);
>
> -       if (fi->pad > 4)
> +       if (fi->pad > 1)
>                 return -EINVAL;
>
>         fi->interval = ipucsi->timeperframe[(fi->pad == 0) ? 0 : 1];
> @@ -311,7 +313,7 @@ static int ipucsi_subdev_s_frame_interval(struct v4l2_subdev *subdev,
>         int i, best_i = 0;
>         u64 want_us;
>
> -       if (fi->pad > 4)
> +       if (fi->pad > 1)
>                 return -EINVAL;
>
>         if (fi->pad == 0) {
> @@ -409,6 +411,7 @@ static int ipu_csi_registered(struct v4l2_subdev *sd)
>  {
>         struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
>         struct device_node *rpp;
> +       int ret;
>
>         /*
>          * Add source subdevice to asynchronous subdevice waiting list.
> @@ -429,11 +432,33 @@ static int ipu_csi_registered(struct v4l2_subdev *sd)
>                 __v4l2_async_notifier_add_subdev(sd->notifier, asd);
>         }
>
> +       /*
> +        * Create an ipu_capture instance per CSI.
> +        */
> +       ipucsi->capture = ipu_capture_create(ipucsi->dev, ipucsi->ipu,
> +                                            ipucsi->id, sd, 1);
> +       if (IS_ERR(ipucsi->capture)) {
> +               ret = PTR_ERR(ipucsi->capture);
> +               ipucsi->capture = NULL;
> +               v4l2_err(sd->v4l2_dev, "Failed to create capture device for %s: %d\n",
> +                        sd->name, ret);
> +               return ret;
> +       }
> +
>         return 0;
>  }
>
> +static void ipu_csi_unregistered(struct v4l2_subdev *sd)
> +{
> +       struct ipucsi *ipucsi = container_of(sd, struct ipucsi, subdev);
> +
> +       if (ipucsi->capture)
> +               ipu_capture_destroy(ipucsi->capture);
> +}
> +
>  struct v4l2_subdev_internal_ops ipu_csi_internal_ops = {
>         .registered = ipu_csi_registered,
> +       .unregistered = ipu_csi_unregistered,
>  };
>
>  static int ipucsi_subdev_init(struct ipucsi *ipucsi, struct device_node *node)
> --
> 2.9.3
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 13/21] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections
  2016-10-14 17:34 ` [PATCH v2 13/21] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Philipp Zabel
@ 2016-11-08  8:23   ` Ying Liu
  0 siblings, 0 replies; 61+ messages in thread
From: Ying Liu @ 2016-11-08  8:23 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: linux-media, Steve Longerbeam, Marek Vasut, Hans Verkuil,
	Gary Bisson, kernel

Hi Philipp,

On Sat, Oct 15, 2016 at 1:34 AM, Philipp Zabel <p.zabel@pengutronix.de> wrote:
> This patch adds the device tree graph connecting the input multiplexers
> to the IPU CSIs and the MIPI CSI-2 gasket on i.MX6. The MIPI_IPU
> multiplexers are added as children of the iomuxc-gpr syscon device node.
> On i.MX6Q/D, two two-input multiplexers in front of IPU1 CSI0 and IPU2
> CSI1 allow selecting between the CSI0/1 parallel input pads and MIPI
> CSI-2 virtual channels 0/3.
> On i.MX6DL/S, two five-input multiplexers in front of IPU1 CSI0 and IPU1
> CSI1 allow selecting between the CSI0/1 parallel input pads and any of
> the four MIPI CSI-2 virtual channels.
>
> Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
> ---
>  arch/arm/boot/dts/imx6dl.dtsi  | 182 +++++++++++++++++++++++++++++++++++++++++
>  arch/arm/boot/dts/imx6q.dtsi   | 118 ++++++++++++++++++++++++++
>  arch/arm/boot/dts/imx6qdl.dtsi |  10 ++-
>  3 files changed, 309 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
> index 3c817de..7ed4efd6f 100644
> --- a/arch/arm/boot/dts/imx6dl.dtsi
> +++ b/arch/arm/boot/dts/imx6dl.dtsi
> @@ -133,6 +133,188 @@
>                       "di0", "di1";
>  };
>
> +&gpr {
> +       ipu_csi0_mux {
> +               compatible = "video-multiplexer";
> +               reg = <0x34>;
> +               bit-mask = <0x7>;
> +               bit-shift = <0>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               port@0 {
> +                       reg = <0>;
> +
> +                       ipu_csi0_mux_from_mipi_csi0: endpoint {
> +                               remote-endpoint = <&mipi_csi0_to_ipu_csi0_mux>;
> +                       };
> +               };
> +
> +               port@1 {
> +                       reg = <1>;
> +
> +                       ipu_csi0_mux_from_mipi_csi1: endpoint {
> +                               remote-endpoint = <&mipi_csi1_to_ipu_csi0_mux>;
> +                       };
> +               };
> +
> +               port@2 {
> +                       reg = <2>;
> +
> +                       ipu_csi0_mux_from_mipi_csi2: endpoint {
> +                               remote-endpoint = <&mipi_csi2_to_ipu_csi0_mux>;
> +                       };
> +               };
> +
> +               port@3 {
> +                       reg = <3>;
> +
> +                       ipu_csi0_mux_from_mipi_csi3: endpoint {
> +                               remote-endpoint = <&mipi_csi3_to_ipu_csi0_mux>;
> +                       };
> +               };
> +
> +               csi0: port@4 {
> +                       reg = <4>;
> +               };
> +
> +               port@5 {
> +                       reg = <5>;
> +
> +                       ipu_csi0_mux_to_ipu1_csi0: endpoint {
> +                               remote-endpoint = <&ipu1_csi0_from_ipu_csi0_mux>;
> +                       };
> +               };
> +       };
> +
> +       ipu_csi1_mux {
> +               compatible = "video-multiplexer";
> +               reg = <0x34>;
> +               bit-mask = <0x7>;
> +               bit-shift = <3>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               port@0 {
> +                       reg = <0>;
> +
> +                       ipu_csi1_mux_from_mipi_csi0: endpoint {
> +                               remote-endpoint = <&mipi_csi0_to_ipu_csi1_mux>;
> +                       };
> +               };
> +
> +               port@1 {
> +                       reg = <1>;
> +
> +                       ipu_csi1_mux_from_mipi_csi1: endpoint {
> +                               remote-endpoint = <&mipi_csi1_to_ipu_csi1_mux>;
> +                       };
> +               };
> +
> +               port@2 {
> +                       reg = <2>;
> +
> +                       ipu_csi1_mux_from_mipi_csi2: endpoint {
> +                               remote-endpoint = <&mipi_csi2_to_ipu_csi1_mux>;
> +                       };
> +               };
> +
> +               port@3 {
> +                       reg = <3>;
> +
> +                       ipu_csi1_mux_from_mipi_csi3: endpoint {
> +                               remote-endpoint = <&mipi_csi3_to_ipu_csi1_mux>;
> +                       };
> +               };
> +
> +               csi1: port@4 {
> +                       reg = <4>;
> +               };
> +
> +               port@5 {
> +                       reg = <5>;
> +
> +                       ipu_csi1_mux_to_ipu1_csi1: endpoint {
> +                               remote-endpoint = <&ipu1_csi1_from_ipu_csi1_mux>;
> +                       };
> +               };
> +       };
> +};
> +
> +&ipu1_csi0 {
> +       ipu1_csi0_from_ipu_csi0_mux: endpoint {
> +               remote-endpoint = <&ipu_csi0_mux_to_ipu1_csi0>;
> +       };
> +};
> +
> +&ipu1_csi1 {
> +       ipu1_csi1_from_ipu_csi1_mux: endpoint {
> +               remote-endpoint = <&ipu_csi1_mux_to_ipu1_csi1>;
> +       };
> +};
> +
> +&mipi_csi {
> +       port@0 {
> +               reg = <0>;
> +       };
> +
> +       port@1 {
> +               reg = <1>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               mipi_csi0_to_ipu_csi0_mux: endpoint@0 {
> +                       remote-endpoint = <&ipu_csi0_mux_from_mipi_csi0>;
> +               };
> +
> +               mipi_csi0_to_ipu_csi1_mux: endpoint@1 {
> +                       remote-endpoint = <&ipu_csi1_mux_from_mipi_csi0>;
> +               };
> +       };
> +
> +       port@2 {
> +               reg = <2>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               mipi_csi1_to_ipu_csi0_mux: endpoint@0 {
> +                       remote-endpoint = <&ipu_csi0_mux_from_mipi_csi1>;
> +               };
> +
> +               mipi_csi1_to_ipu_csi1_mux: endpoint@1 {
> +                       remote-endpoint = <&ipu_csi1_mux_from_mipi_csi1>;
> +               };
> +       };
> +
> +       port@3 {
> +               reg = <3>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               mipi_csi2_to_ipu_csi0_mux: endpoint@0 {
> +                       remote-endpoint = <&ipu_csi0_mux_from_mipi_csi2>;
> +               };
> +
> +               mipi_csi2_to_ipu_csi1_mux: endpoint@1 {
> +                       remote-endpoint = <&ipu_csi1_mux_from_mipi_csi2>;
> +               };
> +       };
> +
> +       port@4 {
> +               reg = <4>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               mipi_csi3_to_ipu_csi0_mux: endpoint@0 {
> +                       remote-endpoint = <&ipu_csi0_mux_from_mipi_csi3>;
> +               };
> +
> +               mipi_csi3_to_ipu_csi1_mux: endpoint@1 {
> +                       remote-endpoint = <&ipu_csi1_mux_from_mipi_csi3>;
> +               };
> +       };
> +};
> +
>  &vpu {
>         compatible = "fsl,imx6dl-vpu", "cnm,coda960";
>  };
> diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
> index 0c87a69..675723b 100644
> --- a/arch/arm/boot/dts/imx6q.dtsi
> +++ b/arch/arm/boot/dts/imx6q.dtsi
> @@ -143,10 +143,18 @@
>
>                         ipu2_csi0: port@0 {
>                                 reg = <0>;
> +
> +                               ipu2_csi0_from_csi2ipu: endpoint {
> +                                       remote-endpoint = <&csi2ipu_to_ipu2_csi0>;

Should be:
+                               ipu2_csi0_from_mipi_csi2: endpoint {
+                                       remote-endpoint = <&mipi_csi2_to_ipu2_csi0>;

> +                               };
>                         };
>
>                         ipu2_csi1: port@1 {
>                                 reg = <1>;
> +
> +                               ipu2_csi1_from_mipi_ipu2_mux: endpoint {
> +                                       remote-endpoint = <&mipi_ipu2_mux_to_ipu2_csi1>;
> +                               };
>                         };
>
>                         ipu2_di0: port@2 {
> @@ -234,6 +242,78 @@
>         };
>  };
>
> +&gpr {
> +       mipi_ipu1_mux {
> +               compatible = "video-multiplexer";
> +               reg = <0x04>;
> +               bit-mask = <1>;
> +               bit-shift = <19>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               port@0 {
> +                       reg = <0>;
> +
> +                       mipi_ipu1_mux_from_mipi_csi0: endpoint {
> +                               remote-endpoint = <&mipi_csi0_to_mipi_ipu1_mux>;
> +                       };
> +               };
> +
> +               csi0: port@1 {
> +                       reg = <1>;
> +               };
> +
> +               port@2 {
> +                       reg = <2>;
> +
> +                       mipi_ipu1_mux_to_ipu1_csi0: endpoint {
> +                               remote-endpoint = <&ipu1_csi0_from_mipi_ipu1_mux>;
> +                       };
> +               };
> +       };
> +
> +       mipi_ipu2_mux {
> +               compatible = "video-multiplexer";
> +               reg = <0x04>;
> +               bit-mask = <1>;
> +               bit-shift = <20>;
> +               #address-cells = <1>;
> +               #size-cells = <0>;
> +
> +               port@0 {
> +                       reg = <0>;
> +
> +                       mipi_ipu2_mux_from_mipi_csi3: endpoint {
> +                               remote-endpoint = <&mipi_csi3_to_mipi_ipu2_mux>;
> +                       };
> +               };
> +
> +               csi1: port@1 {
> +                       reg = <1>;
> +               };
> +
> +               port@2 {
> +                       reg = <2>;
> +
> +                       mipi_ipu2_mux_to_ipu2_csi1: endpoint {
> +                               remote-endpoint = <&ipu2_csi1_from_mipi_ipu2_mux>;
> +                       };
> +               };
> +       };
> +};
> +

Missing &ipu1_csi0 here.
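A minimal sketch of the missing node (endpoint labels reused from the
mipi_ipu1_mux node added above, for illustration only):

&ipu1_csi0 {
	ipu1_csi0_from_mipi_ipu1_mux: endpoint {
		remote-endpoint = <&mipi_ipu1_mux_to_ipu1_csi0>;
	};
};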

> +&ipu1_csi1 {
> +       ipu1_csi1_from_mipi_csi1: endpoint {
> +               remote-endpoint = <&mipi_csi1_to_ipu1_csi1>;
> +       };
> +};
> +
> +&ipu2_csi0 {
> +       ipu2_csi0_from_mipi_csi2: endpoint {
> +               remote-endpoint = <&mipi_csi2_to_ipu2_csi0>;
> +       };
> +};

This block is not needed, as the endpoint is already specified in the
ipu2_csi0 port node above.

> +
>  &ldb {
>         clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>, <&clks IMX6QDL_CLK_LDB_DI1_SEL>,
>                  <&clks IMX6QDL_CLK_IPU1_DI0_SEL>, <&clks IMX6QDL_CLK_IPU1_DI1_SEL>,
> @@ -280,6 +360,44 @@
>         };
>  };
>
> +&mipi_csi {
> +       port@0 {
> +               reg = <0>;
> +       };
> +
> +       port@1 {
> +               reg = <1>;
> +
> +               mipi_csi0_to_mipi_ipu1_mux: endpoint {
> +                       remote-endpoint = <&mipi_ipu1_mux_from_mipi_csi0>;
> +               };
> +       };
> +
> +       port@2 {
> +               reg = <2>;
> +
> +               mipi_csi1_to_ipu1_csi1: endpoint {
> +                       remote-endpoint = <&ipu1_csi1_from_mipi_csi1>;
> +               };
> +       };
> +
> +       port@3 {
> +               reg = <3>;
> +
> +               mipi_csi2_to_ipu2_csi0: endpoint {
> +                       remote-endpoint = <&ipu2_csi0_from_mipi_csi2>;
> +               };
> +       };
> +
> +       port@4 {
> +               reg = <4>;
> +
> +               mipi_csi3_to_mipi_ipu2_mux: endpoint {
> +                       remote-endpoint = <&mipi_ipu2_mux_from_mipi_csi3>;
> +               };
> +       };
> +};
> +
>  &mipi_dsi {
>         ports {
>                 port@2 {
> diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
> index b13b0b2..cd325bd 100644
> --- a/arch/arm/boot/dts/imx6qdl.dtsi
> +++ b/arch/arm/boot/dts/imx6qdl.dtsi
> @@ -798,8 +798,10 @@
>                         };
>
>                         gpr: iomuxc-gpr@020e0000 {
> -                               compatible = "fsl,imx6q-iomuxc-gpr", "syscon";
> +                               compatible = "fsl,imx6q-iomuxc-gpr", "syscon", "simple-mfd";
>                                 reg = <0x020e0000 0x38>;
> +                               #address-cells = <1>;
> +                               #size-cells = <0>;
>                         };
>
>                         iomuxc: iomuxc@020e0000 {
> @@ -1122,6 +1124,8 @@
>
>                         mipi_csi: mipi@021dc000 {
>                                 reg = <0x021dc000 0x4000>;
> +                               #address-cells = <1>;
> +                               #size-cells = <0>;
>                         };
>
>                         mipi_dsi: mipi@021e0000 {
> @@ -1221,6 +1225,10 @@
>
>                         ipu1_csi0: port@0 {
>                                 reg = <0>;
> +
> +                               ipu1_csi0_from_mipi_ipu1_mux: endpoint {
> +                                       remote-endpoint = <&mipi_ipu1_mux_to_ipu1_csi0>;
> +                               };

The endpoint should be different for i.MX6Q and i.MX6DL.
So add one for each of them instead of adding
a common one in imx6qdl.dtsi.

Regards,
Liu Ying

>                         };
>
>                         ipu1_csi1: port@1 {
> --
> 2.9.3
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
       [not found]   ` <CAH-u=807nRYzza0kTfOMv1AiWazk6FGJyz6W5_bYw7v9nOrccA@mail.gmail.com>
@ 2016-12-29 15:10     ` Jean-Michel Hautbois
  2016-12-29 20:51     ` Robert Schwebel
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Michel Hautbois @ 2016-12-29 15:10 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Philipp Zabel, Linux Media Mailing List, Steve Longerbeam,
	Marek Vasut, Hans Verkuil, Gary Bisson, Sascha Hauer

I am so sorry... This mail wasn't sent to the mailing list as this
****** gmail switched back to HTML...

2016-12-29 16:08 GMT+01:00 Jean-Michel Hautbois
<jean-michel.hautbois@veo-labs.com>:
> Hi Philipp and al.,
>
>
> 2016-10-19 23:30 GMT+02:00 Sakari Ailus <sakari.ailus@iki.fi>:
>>
>> On Fri, Oct 14, 2016 at 07:34:20PM +0200, Philipp Zabel wrote:
>> > Hi,
>> >
>> > the second round removes the prepare_stream callback and instead lets
>> > the
>> > intermediate subdevices propagate s_stream calls to their sources rather
>> > than individually calling s_stream on each subdevice from the bridge
>> > driver.
>> > This is similar to how drm bridges recursively call into their next
>> > neighbor.
>> > It makes it easier to do bringup ordering on a per-link level, as long
>> > as the
>> > source preparation can be done at s_power, and the sink can just
>> > prepare, call
>> > s_stream on its source, and then enable itself inside s_stream.
>> > Obviously this
>> > would only work in a generic fashion if all asynchronous subdevices with
>> > both
>> > inputs and outputs would propagate s_stream to their source subdevices.
>
>
> What is the status of this work? I saw Steve's patches before yours, so
> both are implementing pretty much the same functionality, but differently.
> Which one will finally be merged?
> I wanted to upgrade my kernel in order to give it a try on a board with
> adv7604 and adv7611 devices.
> Is there a git tree somewhere integrating it too?
>
> Thanks,
> JM
>
>

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
       [not found]   ` <CAH-u=807nRYzza0kTfOMv1AiWazk6FGJyz6W5_bYw7v9nOrccA@mail.gmail.com>
  2016-12-29 15:10     ` Jean-Michel Hautbois
@ 2016-12-29 20:51     ` Robert Schwebel
  2016-12-30 19:06       ` Marek Vasut
  1 sibling, 1 reply; 61+ messages in thread
From: Robert Schwebel @ 2016-12-29 20:51 UTC (permalink / raw)
  To: Jean-Michel Hautbois
  Cc: Sakari Ailus, Marek Vasut, Gary Bisson, Steve Longerbeam,
	Sascha Hauer, Hans Verkuil, Philipp Zabel,
	Linux Media Mailing List

Hi Jean-Michel,

On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
> What is the status of this work?

Philipp's patches have been reworked with the review feedback from the
last round and a new version will be posted when he is back from
holidays.

rsc
-- 
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-12-29 20:51     ` Robert Schwebel
@ 2016-12-30 19:06       ` Marek Vasut
  2016-12-30 20:26         ` Steve Longerbeam
  0 siblings, 1 reply; 61+ messages in thread
From: Marek Vasut @ 2016-12-30 19:06 UTC (permalink / raw)
  To: Robert Schwebel, Jean-Michel Hautbois
  Cc: Sakari Ailus, Gary Bisson, Steve Longerbeam, Sascha Hauer,
	Hans Verkuil, Philipp Zabel, Linux Media Mailing List

On 12/29/2016 09:51 PM, Robert Schwebel wrote:
> Hi Jean-Michel,

Hi,

> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>> What is the status of this work?
> 
> Philipp's patches have been reworked with the review feedback from the
> last round and a new version will be posted when he is back from
> holidays.

IMO Philipp's patches are better integrated and well structured, so I'd
rather like to see his work in at some point.

-- 
Best regards,
Marek Vasut

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-12-30 19:06       ` Marek Vasut
@ 2016-12-30 20:26         ` Steve Longerbeam
  2017-01-02 13:51           ` Jean-Michel Hautbois
  2017-01-09 19:43           ` Philipp Zabel
  0 siblings, 2 replies; 61+ messages in thread
From: Steve Longerbeam @ 2016-12-30 20:26 UTC (permalink / raw)
  To: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois
  Cc: Sakari Ailus, Gary Bisson, Sascha Hauer, Hans Verkuil,
	Philipp Zabel, Linux Media Mailing List



On 12/30/2016 11:06 AM, Marek Vasut wrote:
> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>> Hi Jean-Michel,
> Hi,
>
>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>> What is the status of this work?
>> Philipp's patches have been reworked with the review feedback from the
>> last round and a new version will be posted when he is back from
>> holidays.
> IMO Philipp's patches are better integrated and well structured, so I'd
> rather like to see his work in at some point.

Granted I am biased, but I will state my case. "Better integrated" - my
patches are also well integrated with the media core infrastructure.
Philipp's patches in fact require modification to media core, whereas
mine require none. Some changes are needed of course (more subdev type
definitions for one).

As for "well structured", I don't really understand what is meant by that,
but my driver is also well structured.

Philipp's driver only supports unconverted image capture from the SMFC.
In addition to that, mine allows for all the hardware links supported by
the IPU, including routing frames from the CSI directly to the Image
Converter for scaling up to 4096x4096, colorspace conversion, rotation,
and motion compensated de-interlace. Yes, all these conversions can be
carried out post-capture via a mem2mem device, but conversion directly
from CSI capture has advantages, including minimized CPU utilization and
lower AXI bus traffic. In any case, Freescale added these hardware paths,
and my driver supports them.

I leave it up to the maintainers.

Steve



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-12-30 20:26         ` Steve Longerbeam
@ 2017-01-02 13:51           ` Jean-Michel Hautbois
  2017-01-02 14:45             ` Hans Verkuil
  2017-01-09 19:43           ` Philipp Zabel
  1 sibling, 1 reply; 61+ messages in thread
From: Jean-Michel Hautbois @ 2017-01-02 13:51 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Sakari Ailus, Gary Bisson,
	Sascha Hauer, Hans Verkuil, Philipp Zabel,
	Linux Media Mailing List

Hi,

2016-12-30 21:26 GMT+01:00 Steve Longerbeam <steve_longerbeam@mentor.com>:
>
>
> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>
>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>
>>> Hi Jean-Michel,
>>
>> Hi,
>>
>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>
>>>> What is the status of this work?
>>>
>>> Philipp's patches have been reworked with the review feedback from the
>>> last round and a new version will be posted when he is back from
>>> holidays.
>>
>> IMO Philipp's patches are better integrated and well structured, so I'd
>> rather like to see his work in at some point.
>
>
> Granted I am biased, but I will state my case. "Better integrated" - my
> patches
> are also well integrated with the media core infrastructure. Philipp's
> patches
> in fact require modification to media core, whereas mine require none.
> Some changes are needed of course (more subdev type definitions for
> one).
>
> As for "well structured", I don't really understand what is meant by that,
> but my driver is also well structured.
>
> Philipp's driver only supports unconverted image capture from the SMFC. In
> addition
> to that, mine allows for all the hardware links supported by the IPU,
> including routing
> frames from the CSI directly to the Image converter for scaling up to
> 4096x4096,
> colorspace conversion, rotation, and motion compensated de-interlace. Yes
> all these
> conversion can be carried out post-capture via a mem2mem device, but
> conversion
> directly from CSI capture has advantages, including minimized CPU
> utilization and
> lower AXI bus traffic. In any case, Freescale added these hardware paths,
> and my
> driver supports them.

I had a deeper look at both drivers, and I must say the features of
Steve's are really interesting.
I don't think either of those has been tested with digital inputs (I have
ADV76xx chips on my board, which I hope will be available one day for
those interested), so I can test and help add some of the missing parts.
And for at least a week or two, I have all of my time for it, so it
would be of great help to know which one has the better chance of being
integrated... :)

Thanks again,
JM

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-02 13:51           ` Jean-Michel Hautbois
@ 2017-01-02 14:45             ` Hans Verkuil
  2017-01-02 14:59               ` Jean-Michel Hautbois
                                 ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: Hans Verkuil @ 2017-01-02 14:45 UTC (permalink / raw)
  To: Jean-Michel Hautbois, Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Sakari Ailus, Gary Bisson,
	Sascha Hauer, Philipp Zabel, Linux Media Mailing List

On 01/02/17 14:51, Jean-Michel Hautbois wrote:
> Hi,
>
> 2016-12-30 21:26 GMT+01:00 Steve Longerbeam <steve_longerbeam@mentor.com>:
>>
>>
>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>>
>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>>
>>>> Hi Jean-Michel,
>>>
>>> Hi,
>>>
>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>>
>>>>> What is the status of this work?
>>>>
>>>> Philipp's patches have been reworked with the review feedback from the
>>>> last round and a new version will be posted when he is back from
>>>> holidays.
>>>
>>> IMO Philipp's patches are better integrated and well structured, so I'd
>>> rather like to see his work in at some point.
>>
>>
>> Granted I am biased, but I will state my case. "Better integrated" - my
>> patches
>> are also well integrated with the media core infrastructure. Philipp's
>> patches
>> in fact require modification to media core, whereas mine require none.
>> Some changes are needed of course (more subdev type definitions for
>> one).
>>
>> As for "well structured", I don't really understand what is meant by that,
>> but my driver is also well structured.
>>
>> Philipp's driver only supports unconverted image capture from the SMFC. In
>> addition
>> to that, mine allows for all the hardware links supported by the IPU,
>> including routing
>> frames from the CSI directly to the Image converter for scaling up to
>> 4096x4096,
>> colorspace conversion, rotation, and motion compensated de-interlace. Yes
>> all these
>> conversion can be carried out post-capture via a mem2mem device, but
>> conversion
>> directly from CSI capture has advantages, including minimized CPU
>> utilization and
>> lower AXI bus traffic. In any case, Freescale added these hardware paths,
>> and my
>> driver supports them.
>
> I had a deeper look to both drivers, and I must say the features of
> Steve's one are really interesting.
> I don't think any of those has been tested with digital inputs (I have
> ADV76xx chips on my board, which I hope will be available one day for
> those interested) and so, I can test and help adding some of the
> missing parts.
> And for at least a week or two, I have all of my time for it, so it
> would be of great help to know which one has the bigger chance to be
> integrated... :)

Steve's series is definitely preferred from my point of view. The feature
set is clearly superior to Philipp's driver.

I plan on reviewing Steve's series soonish, but a quick scan didn't turn up
anything suspicious. The code looks clean, and I am leaning towards getting
this in sooner rather than later, so if you have the time to work on this,
then go for it!

Steve, I have a SabreLite and a ov5642 sensor, so I should be able to test
that driver.

There is also an ov5642 sensor driver in drivers/media/i2c/soc_camera/ov5642.c.
But nobody AFAIK is using it, so I would be inclined to take your code and
remove the soc_camera driver.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-02 14:45             ` Hans Verkuil
@ 2017-01-02 14:59               ` Jean-Michel Hautbois
  2017-01-02 19:19                 ` Fabio Estevam
  2017-01-02 19:20                 ` Steve Longerbeam
  2017-01-02 17:04               ` Fabio Estevam
  2017-01-02 19:17               ` Steve Longerbeam
  2 siblings, 2 replies; 61+ messages in thread
From: Jean-Michel Hautbois @ 2017-01-02 14:59 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Steve Longerbeam, Marek Vasut, Robert Schwebel, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Philipp Zabel,
	Linux Media Mailing List

2017-01-02 15:45 GMT+01:00 Hans Verkuil <hverkuil@xs4all.nl>:
> On 01/02/17 14:51, Jean-Michel Hautbois wrote:
>>
>> Hi,
>>
>> 2016-12-30 21:26 GMT+01:00 Steve Longerbeam <steve_longerbeam@mentor.com>:
>>>
>>>
>>>
>>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>>>
>>>>
>>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>>>
>>>>>
>>>>> Hi Jean-Michel,
>>>>
>>>>
>>>> Hi,
>>>>
>>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>>>
>>>>>>
>>>>>> What is the status of this work?
>>>>>
>>>>>
>>>>> Philipp's patches have been reworked with the review feedback from the
>>>>> last round and a new version will be posted when he is back from
>>>>> holidays.
>>>>
>>>>
>>>> IMO Philipp's patches are better integrated and well structured, so I'd
>>>> rather like to see his work in at some point.
>>>
>>>
>>>
>>> Granted I am biased, but I will state my case. "Better integrated" - my
>>> patches
>>> are also well integrated with the media core infrastructure. Philipp's
>>> patches
>>> in fact require modification to media core, whereas mine require none.
>>> Some changes are needed of course (more subdev type definitions for
>>> one).
>>>
>>> As for "well structured", I don't really understand what is meant by
>>> that,
>>> but my driver is also well structured.
>>>
>>> Philipp's driver only supports unconverted image capture from the SMFC.
>>> In
>>> addition
>>> to that, mine allows for all the hardware links supported by the IPU,
>>> including routing
>>> frames from the CSI directly to the Image converter for scaling up to
>>> 4096x4096,
>>> colorspace conversion, rotation, and motion compensated de-interlace. Yes
>>> all these
>>> conversion can be carried out post-capture via a mem2mem device, but
>>> conversion
>>> directly from CSI capture has advantages, including minimized CPU
>>> utilization and
>>> lower AXI bus traffic. In any case, Freescale added these hardware paths,
>>> and my
>>> driver supports them.
>>
>>
>> I had a deeper look to both drivers, and I must say the features of
>> Steve's one are really interesting.
>> I don't think any of those has been tested with digital inputs (I have
>> ADV76xx chips on my board, which I hope will be available one day for
>> those interested) and so, I can test and help adding some of the
>> missing parts.
>> And for at least a week or two, I have all of my time for it, so it
>> would be of great help to know which one has the bigger chance to be
>> integrated... :)
>
>
> Steve's series is definitely preferred from my point of view. The feature
> set is clearly superior to Philipp's driver.
>
> I plan on reviewing Steve's series soonish but a quick scan didn't see
> anything
> suspicious. The code looks clean, and I am leaning towards getting this in
> sooner
> rather than later, so if you have the time to work on this, then go for it!

Glad to hear it!

> Steve, I have a SabreLite and a ov5642 sensor, so I should be able to test
> that driver.
>
> There is also an ov5642 sensor driver in
> drivers/media/i2/soc_camera/ov5642.c.
> But nobody AFAIK is using it, so I would be inclined to take your code and
> remove the soc_camera driver.

Steve: which branch is the correct one on your GitHub?
Thanks,
JM

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-02 14:45             ` Hans Verkuil
  2017-01-02 14:59               ` Jean-Michel Hautbois
@ 2017-01-02 17:04               ` Fabio Estevam
  2017-01-02 19:17               ` Steve Longerbeam
  2 siblings, 0 replies; 61+ messages in thread
From: Fabio Estevam @ 2017-01-02 17:04 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Jean-Michel Hautbois, Steve Longerbeam, Marek Vasut,
	Robert Schwebel, Sakari Ailus, Gary Bisson, Sascha Hauer,
	Philipp Zabel, Linux Media Mailing List

On Mon, Jan 2, 2017 at 12:45 PM, Hans Verkuil <hverkuil@xs4all.nl> wrote:

> Steve's series is definitely preferred from my point of view. The feature
> set is clearly superior to Philipp's driver.
>
> I plan on reviewing Steve's series soonish but a quick scan didn't see
> anything
> suspicious. The code looks clean, and I am leaning towards getting this in
> sooner
> rather than later, so if you have the time to work on this, then go for it!

This is good news!

I had a chance to test Steve's series on an mx6qsabresd and it worked fine.

Thanks

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-02 14:45             ` Hans Verkuil
  2017-01-02 14:59               ` Jean-Michel Hautbois
  2017-01-02 17:04               ` Fabio Estevam
@ 2017-01-02 19:17               ` Steve Longerbeam
  2 siblings, 0 replies; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-02 19:17 UTC (permalink / raw)
  To: Hans Verkuil, Jean-Michel Hautbois
  Cc: Marek Vasut, Robert Schwebel, Sakari Ailus, Gary Bisson,
	Sascha Hauer, Philipp Zabel, Linux Media Mailing List

Hi Hans,


On 01/02/2017 06:45 AM, Hans Verkuil wrote:
> On 01/02/17 14:51, Jean-Michel Hautbois wrote:
>> Hi,
>>
>> 2016-12-30 21:26 GMT+01:00 Steve Longerbeam 
>> <steve_longerbeam@mentor.com>:
>>>
>>>
>>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>>>
>>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>>>
>>>>> Hi Jean-Michel,
>>>>
>>>> Hi,
>>>>
>>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>>>
>>>>>> What is the status of this work?
>>>>>
>>>>> Philipp's patches have been reworked with the review feedback from 
>>>>> the
>>>>> last round and a new version will be posted when he is back from
>>>>> holidays.
>>>>
>>>> IMO Philipp's patches are better integrated and well structured, so 
>>>> I'd
>>>> rather like to see his work in at some point.
>>>
>>>
>>> Granted I am biased, but I will state my case. "Better integrated" - my
>>> patches
>>> are also well integrated with the media core infrastructure. Philipp's
>>> patches
>>> in fact require modification to media core, whereas mine require none.
>>> Some changes are needed of course (more subdev type definitions for
>>> one).
>>>
>>> As for "well structured", I don't really understand what is meant by 
>>> that,
>>> but my driver is also well structured.
>>>
>>> Philipp's driver only supports unconverted image capture from the 
>>> SMFC. In
>>> addition
>>> to that, mine allows for all the hardware links supported by the IPU,
>>> including routing
>>> frames from the CSI directly to the Image converter for scaling up to
>>> 4096x4096,
>>> colorspace conversion, rotation, and motion compensated 
>>> de-interlace. Yes
>>> all these
>>> conversion can be carried out post-capture via a mem2mem device, but
>>> conversion
>>> directly from CSI capture has advantages, including minimized CPU
>>> utilization and
>>> lower AXI bus traffic. In any case, Freescale added these hardware 
>>> paths,
>>> and my
>>> driver supports them.
>>
>> I had a deeper look to both drivers, and I must say the features of
>> Steve's one are really interesting.
>> I don't think any of those has been tested with digital inputs (I have
>> ADV76xx chips on my board, which I hope will be available one day for
>> those interested) and so, I can test and help adding some of the
>> missing parts.
>> And for at least a week or two, I have all of my time for it, so it
>> would be of great help to know which one has the bigger chance to be
>> integrated... :)
>
> Steve's series is definitely preferred from my point of view. The feature
> set is clearly superior to Philipp's driver.
>
> I plan on reviewing Steve's series soonish but a quick scan didn't see 
> anything
> suspicious. The code looks clean, and I am leaning towards getting 
> this in sooner
> rather than later, so if you have the time to work on this, then go 
> for it!
>
> Steve, I have a SabreLite and a ov5642 sensor, so I should be able to 
> test
> that driver.

Ok, just remember to disable the ov5640 node in imx6qdl-sabrelite.dtsi,
otherwise the driver will expect to see an async registration callback from
the ov5640.
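
A rough sketch of the mechanism, just for illustration (the container
struct and names below are made up, not taken from the actual driver):
the notifier's complete() callback, where the video devices would get
registered, only runs once every listed async subdev has bound, so an
enabled sensor node that never probes keeps it from ever firing.

#include <media/v4l2-async.h>
#include <media/v4l2-device.h>

struct my_capture {                      /* made-up container for the sketch */
        struct v4l2_device v4l2_dev;
        struct v4l2_async_notifier notifier;
        struct v4l2_async_subdev **asds; /* CSI subdevs, ov5640, ... */
        unsigned int num_asds;
};

/* Only called once *every* asd in the list has bound.  If the ov5640
 * node stays enabled but no driver ever binds to it, we never get here
 * and no video devices are registered. */
static int my_capture_complete(struct v4l2_async_notifier *notifier)
{
        return 0;       /* register the video devices here */
}

static int my_capture_register_notifier(struct my_capture *priv)
{
        priv->notifier.subdevs = priv->asds;
        priv->notifier.num_subdevs = priv->num_asds;
        priv->notifier.complete = my_capture_complete;

        return v4l2_async_notifier_register(&priv->v4l2_dev, &priv->notifier);
}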

>
> There is also an ov5642 sensor driver in 
> drivers/media/i2/soc_camera/ov5642.c.
> But nobody AFAIK is using it, so I would be inclined to take your code 
> and
> remove the soc_camera driver.

Ok, and at some point ov5642.c and ov5640_mipi.c need to be merged into a
single ov564x.c, cleaned up (get rid of, or significantly prune, those huge
register tables - they were created by FSL by dumping the i2c register set
after a reset, so they consist mostly of the POR register values), and
finally moved to drivers/media/i2c.

Steve



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-02 14:59               ` Jean-Michel Hautbois
@ 2017-01-02 19:19                 ` Fabio Estevam
  2017-01-02 19:20                 ` Steve Longerbeam
  1 sibling, 0 replies; 61+ messages in thread
From: Fabio Estevam @ 2017-01-02 19:19 UTC (permalink / raw)
  To: Jean-Michel Hautbois
  Cc: Hans Verkuil, Steve Longerbeam, Marek Vasut, Robert Schwebel,
	Sakari Ailus, Gary Bisson, Sascha Hauer, Philipp Zabel,
	Linux Media Mailing List

Hi Jean-Michel,

On Mon, Jan 2, 2017 at 12:59 PM, Jean-Michel Hautbois
<jean-michel.hautbois@veo-labs.com> wrote:

> Steve: which branch is the correct one on your github ?

I have tested imx-media-staging-md-v2 (based on 4.9-rc) and also
imx-media-staging-md-v4 branch, which is based on 4.10-rc1.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-02 14:59               ` Jean-Michel Hautbois
  2017-01-02 19:19                 ` Fabio Estevam
@ 2017-01-02 19:20                 ` Steve Longerbeam
  1 sibling, 0 replies; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-02 19:20 UTC (permalink / raw)
  To: Jean-Michel Hautbois, Hans Verkuil
  Cc: Marek Vasut, Robert Schwebel, Sakari Ailus, Gary Bisson,
	Sascha Hauer, Philipp Zabel, Linux Media Mailing List

Hi JM,


On 01/02/2017 06:59 AM, Jean-Michel Hautbois wrote:
>
> Steve: which branch is the correct one on your github ?

branch imx-media-staging-md-v4 on
git@github.com:slongerbeam/mediatree.git

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2016-12-30 20:26         ` Steve Longerbeam
  2017-01-02 13:51           ` Jean-Michel Hautbois
@ 2017-01-09 19:43           ` Philipp Zabel
  2017-01-10  0:15             ` Steve Longerbeam
  2017-01-24 18:11             ` Philipp Zabel
  1 sibling, 2 replies; 61+ messages in thread
From: Philipp Zabel @ 2017-01-09 19:43 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Steve,

Am Freitag, den 30.12.2016, 12:26 -0800 schrieb Steve Longerbeam:
> 
> On 12/30/2016 11:06 AM, Marek Vasut wrote:
> > On 12/29/2016 09:51 PM, Robert Schwebel wrote:
> >> Hi Jean-Michel,
> > Hi,
> >
> >> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
> >>> What is the status of this work?
> >> Philipp's patches have been reworked with the review feedback from the
> >> last round and a new version will be posted when he is back from
> >> holidays.
> > IMO Philipp's patches are better integrated and well structured, so I'd
> > rather like to see his work in at some point.
> 
> Granted I am biased, but I will state my case. "Better integrated" - my 
> patches
> are also well integrated with the media core infrastructure. Philipp's 
> patches
> in fact require modification to media core, whereas mine require none.
> Some changes are needed of course (more subdev type definitions for
> one).
> 
> As for "well structured", I don't really understand what is meant by that,
> but my driver is also well structured.

I agree that this driver is well structured and well documented. Many of
my earlier concerns regarding the device tree bindings and media
controller interface have been addressed. But there are still a few
design choices that I don't agree with, and some are userspace visible,
which makes me worry about not being able to change them later.

One is the amount and organization of subdevices/media entities visible
to userspace. The SMFCs should not be user controllable subdevices, but
can be implicitly enabled when a CSI is directly linked to a camif.
Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
best choice. It is true that the three tasks can be enabled separately,
but to my understanding, the PRP ENC and PRP VF tasks share a single
input channel. Shouldn't this be a single PRP subdevice with one input
and two (VF, ENC) outputs?
On the other hand, there is currently no way to communicate to userspace
that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
input resolution, and then scale up bilinearly for there. So instead of
pretending to be able to downscale to any resolution, it would be better
to split each IC task into two subdevs, one for the
1/2-or-1/4-downscaler, and one for the bilinear upscaler.
Next there is the issue of the custom mem2mem infrastructure inside the
IPU driver. I think this should be ultimately implemented at a higher
level, but I see no way this will ever move out of the IPU driver once
the userspace interface gets established.

Then there are a few issues that are not userspace visible, so less
pressing. For example modelling the internal subdevs as separate
platform devices with async subdevices seems unnecessarily indirect. Why
not drop the platform devices and create the subdevs directly instead of
asynchronously?
I'll try to give the driver a proper review in the next few days.

> Philipp's driver only supports unconverted image capture from the SMFC. 
> In addition
> to that, mine allows for all the hardware links supported by the IPU, 
> including routing
> frames from the CSI directly to the Image converter for scaling up to 
> 4096x4096,

Note that tiled scaling (anything > 1024x1024) currently doesn't produce
correct results due to the fractional reset at the seam. This is not a
property of this driver, but still needs to be fixed in the ipu-v3 core.

> colorspace conversion, rotation, and motion compensated de-interlace. 
> Yes all these
> conversion can be carried out post-capture via a mem2mem device, but 
> conversion
> directly from CSI capture has advantages, including minimized CPU 
> utilization and
> lower AXI bus traffic.

These benefits are limited by the hardware to non-rotated frames <
1024x1024 pixels.

regards
Philipp


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-09 19:43           ` Philipp Zabel
@ 2017-01-10  0:15             ` Steve Longerbeam
  2017-01-10 23:52               ` Steve Longerbeam
  2017-01-11 12:11               ` Philipp Zabel
  2017-01-24 18:11             ` Philipp Zabel
  1 sibling, 2 replies; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-10  0:15 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Philipp,


On 01/09/2017 11:43 AM, Philipp Zabel wrote:
> Hi Steve,
>
> Am Freitag, den 30.12.2016, 12:26 -0800 schrieb Steve Longerbeam:
>> On 12/30/2016 11:06 AM, Marek Vasut wrote:
>>> On 12/29/2016 09:51 PM, Robert Schwebel wrote:
>>>> Hi Jean-Michel,
>>> Hi,
>>>
>>>> On Thu, Dec 29, 2016 at 04:08:33PM +0100, Jean-Michel Hautbois wrote:
>>>>> What is the status of this work?
>>>> Philipp's patches have been reworked with the review feedback from the
>>>> last round and a new version will be posted when he is back from
>>>> holidays.
>>> IMO Philipp's patches are better integrated and well structured, so I'd
>>> rather like to see his work in at some point.
>> Granted I am biased, but I will state my case. "Better integrated" - my
>> patches
>> are also well integrated with the media core infrastructure. Philipp's
>> patches
>> in fact require modification to media core, whereas mine require none.
>> Some changes are needed of course (more subdev type definitions for
>> one).
>>
>> As for "well structured", I don't really understand what is meant by that,
>> but my driver is also well structured.
> I agree that this driver is well structured and well documented. Many of
> my earlier concerns regarding the device tree bindings and media
> controller interface have been addressed.

Thanks! Your input is crucial, so I really appreciate it.

>   But there are still a few
> design choices that I don't agree with, and some are userspace visible,
> which makes me worry about not being able to change them later.

Ok.

> One is the amount and organization of subdevices/media entities visible
> to userspace. The SMFCs should not be user controllable subdevices, but
> can be implicitly enabled when a CSI is directly linked to a camif.

I agree the SMFC could be folded into the CSI, but I see at least one
issue.

From the dot graph you'll see that the PRPVF entity can receive directly
from the CSI, or indirectly via the SMFC. If the SMFC entity were folded
into the CSI entity, there would have to be a "direct to PRPVF" output pad
and an "indirect via SMFC" output pad, and I'm not sure how that info would
be conveyed to the user. With an SMFC entity those pipelines are explicit.

> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
> best choice. It is true that the three tasks can be enabled separately,
> but to my understanding, the PRP ENC and PRP VF tasks share a single
> input channel. Shouldn't this be a single PRP subdevice with one input
> and two (VF, ENC) outputs?

Since the VDIC sends its motion compensated frames to the PRP VF task,
I've created the PRPVF entity solely for motion compensated de-interlace
support. I don't really see any other use for the PRPVF task except for
motion compensated de-interlace.

So really, the PRPVF entity is a combination of the VDIC and PRPVF subunits.

So looking at link_setup() in imx-csi.c, you'll see that when the CSI is
linked to the PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
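
A rough sketch of that routing decision, assuming the ipu_csi_set_dest()
and enum ipu_csi_dest helpers carried in this series (the my_csi_priv
struct and the entity-name checks are made up for illustration):

#include <linux/string.h>
#include <media/media-entity.h>
#include <media/v4l2-subdev.h>
#include <video/imx-ipu-v3.h>

struct my_csi_priv {                     /* made-up driver state */
        struct ipu_csi *csi;
        enum ipu_csi_dest dest;
};

static bool remote_is(const struct media_pad *remote, const char *name)
{
        /* crude name match, just for the sketch */
        return strstr(remote->entity->name, name) != NULL;
}

static int my_csi_link_setup(struct media_entity *entity,
                             const struct media_pad *local,
                             const struct media_pad *remote, u32 flags)
{
        struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
        struct my_csi_priv *priv = v4l2_get_subdevdata(sd);

        if (!(local->flags & MEDIA_PAD_FL_SOURCE) ||
            !(flags & MEDIA_LNK_FL_ENABLED))
                return 0;

        if (remote_is(remote, "prpvf"))
                priv->dest = IPU_CSI_DEST_VDIC;  /* CSIx_DATA_DEST = 1 */
        else if (remote_is(remote, "prpenc"))
                priv->dest = IPU_CSI_DEST_IC;
        else
                priv->dest = IPU_CSI_DEST_IDMAC; /* via SMFC, = 2 */

        /* applied later at stream-on: ipu_csi_set_dest(priv->csi, priv->dest) */
        return 0;
}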

But if we were to create a VDIC entity, I can see how we could then
have a single PRP entity. Then if the PRP entity is receiving from the VDIC,
the PRP VF task would be activated.

Another advantage of creating a distinct VDIC entity is that frames could
potentially be routed directly from the VDIC to camif, for motion-compensated
frame capture only, with no scaling/CSC. I think that would be IDMAC channel
5; we've tried to get that pipeline to work in the past without success.
That's mainly why I decided not to attempt it and instead fold the VDIC into
the PRPVF entity.


> On the other hand, there is currently no way to communicate to userspace
> that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
> input resolution, and then scale up bilinearly for there. So instead of
> pretending to be able to downscale to any resolution, it would be better
> to split each IC task into two subdevs, one for the
> 1/2-or-1/4-downscaler, and one for the bilinear upscaler.

Hmm, good point, but couldn't we just document that fact?



> Next there is the issue of the custom mem2mem infrastructure inside the
> IPU driver. I think this should be ultimately implemented at a higher
> level,

if we were to move it up, into what subsystem would it go? I guess
v4l2, but then we lose the possibility of other subsystems making use
of it, such as drm.

>   but I see no way this will ever move out of the IPU driver once
> the userspace inferface gets established.

it would be more difficult at that point, true.

> Then there are a few issues that are not userspace visible, so less
> pressing. For example modelling the internal subdevs as separate
> platform devices with async subdevices seems unnecessarily indirect. Why
> not drop the platform devices and create the subdevs directly instead of
> asynchronously?

This came out of a previous implementation where the IPU internal
subunits and their links were represented as an OF graph (the patches
I floated by you that you didn't like :) I've been meaning to ask you why
you don't like to expose the IPU internals via OF graph, I have my theories
why, but I digress). In that implementation the internal subdevs had to be
registered asynchronously with device node match.

I thought about going back to registering the IPU internal subdevs
directly, but I actually like doing it this way, it provides a satisfying
symmetry that all the subdevs are registered in the same way
asynchronously, the only difference being the external subdevs are
registered with device node match, and the internal subdevs with
device name match.
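
For illustration, a minimal sketch of the two match types with the
current v4l2-async API (the platform device name string is made up):

#include <media/v4l2-async.h>

/* External subdev (sensor, HDMI receiver, ...): matched by DT node. */
static struct v4l2_async_subdev sensor_asd = {
        .match_type = V4L2_ASYNC_MATCH_OF,
        /* .match.of.node set at probe time from the endpoint's remote node */
};

/* Internal IPU subdev: matched by the platform device's name. */
static struct v4l2_async_subdev csi_asd = {
        .match_type = V4L2_ASYNC_MATCH_DEVNAME,
        .match.device_name.name = "imx-ipuv3-csi.0",   /* made-up name */
};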


> I'll try to give the driver a proper review in the next days.

Ok thanks.

>> Philipp's driver only supports unconverted image capture from the SMFC.
>> In addition
>> to that, mine allows for all the hardware links supported by the IPU,
>> including routing
>> frames from the CSI directly to the Image converter for scaling up to
>> 4096x4096,
> Note that tiled scaling (anything > 1024x1024) currently doesn't produce
> correct results due to the fractional reset at the seam. This is not a
> property of this driver, but still needs to be fixed in the ipu-v3 core.

right, understood.

>> colorspace conversion, rotation, and motion compensated de-interlace.
>> Yes all these
>> conversion can be carried out post-capture via a mem2mem device, but
>> conversion
>> directly from CSI capture has advantages, including minimized CPU
>> utilization and
>> lower AXI bus traffic.
> These benefits are limited by the hardware to non-rotated frames <
> 1024x1024 pixels.

right. But I can see many use-cases out there, where people need
scaling and/or colorspace conversion in video capture, but don't
need output > 1024x1024.

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-10  0:15             ` Steve Longerbeam
@ 2017-01-10 23:52               ` Steve Longerbeam
  2017-01-11  9:12                 ` Jean-Michel Hautbois
  2017-01-11 12:10                 ` Philipp Zabel
  2017-01-11 12:11               ` Philipp Zabel
  1 sibling, 2 replies; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-10 23:52 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List



On 01/09/2017 04:15 PM, Steve Longerbeam wrote:
> Hi Philipp,
>
>
> On 01/09/2017 11:43 AM, Philipp Zabel wrote:
>
>
> <snip>
>> One is the amount and organization of subdevices/media entities visible
>> to userspace. The SMFCs should not be user controllable subdevices, but
>> can be implicitly enabled when a CSI is directly linked to a camif.
>
> I agree the SMFC could be folded into the CSI, but I see at least one
> issue.
>
> From the dot graph you'll see that the PRPVF entity can receive directly
> from the CSI, or indirectly via the SMFC. If the SMFC entity were folded
> into the CSI entity, there would have to be a "direct to PRPVF" output 
> pad
> and a "indirect via SMFC" output pad and I'm not sure how that info would
> be conveyed to the user. With a SMFC entity those pipelines are explicit.

In summary here, unless you have a strong objection I'd prefer to keep a
distinct SMFC entity. It makes the pipelines clearer to the user, and it
better models the IPU internals.

>
>> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
>> best choice. It is true that the three tasks can be enabled separately,
>> but to my understanding, the PRP ENC and PRP VF tasks share a single
>> input channel. Shouldn't this be a single PRP subdevice with one input
>> and two (VF, ENC) outputs?
>
> Since the VDIC sends its motion compensated frames to the PRP VF task,
> I've created the PRPVF entity solely for motion compensated de-interlace
> support. I don't really see any other use for the PRPVF task except for
> motion compensated de-interlace.
>
> So really, the PRPVF entity is a combination of the VDIC and PRPVF 
> subunits.
>
> So looking at link_setup() in imx-csi.c, you'll see that when the CSI 
> is linked
> to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
>
> But if we were to create a VDIC entity, I can see how we could then
> have a single PRP entity. Then if the PRP entity is receiving from the 
> VDIC,
> the PRP VF task would be activated.
>
> Another advantage of creating a distinct VDIC entity is that frames could
> potentially be routed directly from the VDIC to camif, for 
> motion-compensated
> frame capture only with no scaling/CSC. I think that would be IDMAC 
> channel
> 5, we've tried to get that pipeline to work in the past without 
> success. That's
> mainly why I decided not to attempt it and instead fold VDIC into 
> PRPVF entity.
>
>

Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You are
correct that PRPENC and PRPVF do share an input channel (the CSIs). But the
PRPVF has an additional input channel from the VDIC, and since my PRPVF
entity rolls up the VDIC internally, it is actually receiving from the VDIC
channel.

So unless you think we should have a distinct VDIC entity, I would like to
keep this the way it is.

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-10 23:52               ` Steve Longerbeam
@ 2017-01-11  9:12                 ` Jean-Michel Hautbois
  2017-01-11 12:10                 ` Philipp Zabel
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Michel Hautbois @ 2017-01-11  9:12 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Philipp Zabel, Marek Vasut, Robert Schwebel, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Steve and Philipp,

2017-01-11 0:52 GMT+01:00 Steve Longerbeam <steve_longerbeam@mentor.com>:
>
>
> On 01/09/2017 04:15 PM, Steve Longerbeam wrote:
>>
>> Hi Philipp,
>>
>>
>> On 01/09/2017 11:43 AM, Philipp Zabel wrote:
>>
>>
>> <snip>
>>>
>>> One is the amount and organization of subdevices/media entities visible
>>> to userspace. The SMFCs should not be user controllable subdevices, but
>>> can be implicitly enabled when a CSI is directly linked to a camif.
>>
>>
>> I agree the SMFC could be folded into the CSI, but I see at least one
>> issue.
>>
>> From the dot graph you'll see that the PRPVF entity can receive directly
>> from the CSI, or indirectly via the SMFC. If the SMFC entity were folded
>> into the CSI entity, there would have to be a "direct to PRPVF" output pad
>> and a "indirect via SMFC" output pad and I'm not sure how that info would
>> be conveyed to the user. With a SMFC entity those pipelines are explicit.
>
>
> In summary here, unless you have strong objection I'd prefer to keep a
> distinct SMFC entity. It makes the pipelines more clear to the user, and it
> better models the IPU internals.
>
>>
>>> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
>>> best choice. It is true that the three tasks can be enabled separately,
>>> but to my understanding, the PRP ENC and PRP VF tasks share a single
>>> input channel. Shouldn't this be a single PRP subdevice with one input
>>> and two (VF, ENC) outputs?
>>
>>
>> Since the VDIC sends its motion compensated frames to the PRP VF task,
>> I've created the PRPVF entity solely for motion compensated de-interlace
>> support. I don't really see any other use for the PRPVF task except for
>> motion compensated de-interlace.
>>
>> So really, the PRPVF entity is a combination of the VDIC and PRPVF
>> subunits.
>>
>> So looking at link_setup() in imx-csi.c, you'll see that when the CSI is
>> linked
>> to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
>>
>> But if we were to create a VDIC entity, I can see how we could then
>> have a single PRP entity. Then if the PRP entity is receiving from the
>> VDIC,
>> the PRP VF task would be activated.
>>
>> Another advantage of creating a distinct VDIC entity is that frames could
>> potentially be routed directly from the VDIC to camif, for
>> motion-compensated
>> frame capture only with no scaling/CSC. I think that would be IDMAC
>> channel
>> 5, we've tried to get that pipeline to work in the past without success.
>> That's
>> mainly why I decided not to attempt it and instead fold VDIC into PRPVF
>> entity.
>>
>>
>
> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You are
> correct
> that PRPENC and PRPVF do share an input channel (the CSIs). But the PRPVF
> has an additional input channel from the VDIC, and since my PRPVF entity
> roles
> up the VDIC internally, it is actually receiving from the VDIC channel.
>
> So unless you think we should have a distinct VDIC entity, I would like to
> keep this
> the way it is.
>
> Steve
>

That is exactly my thought. I was wondering if VDIC could be an
independent entity, as it could also be used as a combiner if one adds
the channels...

What do you think about that ?

JM
PS: My phone sent the mail in HTML, again, sorry for the double mail, again...

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-10 23:52               ` Steve Longerbeam
  2017-01-11  9:12                 ` Jean-Michel Hautbois
@ 2017-01-11 12:10                 ` Philipp Zabel
  2017-01-12 23:22                   ` Steve Longerbeam
  1 sibling, 1 reply; 61+ messages in thread
From: Philipp Zabel @ 2017-01-11 12:10 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Steve,

Am Dienstag, den 10.01.2017, 15:52 -0800 schrieb Steve Longerbeam:
> 
> On 01/09/2017 04:15 PM, Steve Longerbeam wrote:
> > Hi Philipp,
> >
> >
> > On 01/09/2017 11:43 AM, Philipp Zabel wrote:
> >
> >
> > <snip>
> >> One is the amount and organization of subdevices/media entities visible
> >> to userspace. The SMFCs should not be user controllable subdevices, but
> >> can be implicitly enabled when a CSI is directly linked to a camif.
> >
> > I agree the SMFC could be folded into the CSI, but I see at least one
> > issue.

I don't suggest folding it into the CSI.
The CSI should have one output pad that can be connected either to
the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
(CSIx_DATA_DEST=2). The SMFC should be considered part of the link
between CSI and IDMAC.
The IC PRP input pad should be connected to either the CSI0 output pad
(CSI_SEL=0,IC_INPUT=0), the CSI1 output pad (CSI_SEL=1,IC_INPUT=0), or
to the VDIC (IC_INPUT=1).
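
In terms of the existing ipu-v3 helpers, that last mux boils down to
something like this sketch (the wrapper function is made up for
illustration; ipu_set_ic_src_mux() itself is part of the ipu-v3 core):

#include <video/imx-ipu-v3.h>

/*
 * Route the IC PRP input as described above: the last argument selects
 * between the CSIs (false, IC_INPUT=0) and the VDIC (true, IC_INPUT=1);
 * csi_id is the CSI_SEL value used when a CSI is the source.
 */
static int route_ic_prp_input(struct ipu_soc *ipu, int csi_id, bool from_vdic)
{
        return ipu_set_ic_src_mux(ipu, csi_id, from_vdic);
}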

> > From the dot graph you'll see that the PRPVF entity can receive directly
> > from the CSI, or indirectly via the SMFC.

And that's one reason why I think representing the mem2mem paths as
links in the media controller interface is questionable. The path "via
SMFC" is not really a hardware connection between CSI -> SMFC -> IC PRP,
but two completely separate paths:
CSI -> SMFC -> IDMAC -> mem and mem -> IDMAC -> IC PRP with different
IDMAC read/write channels. The only real connection is that one DMA the
IC DMA transfers are triggered automatically by the frame
synchronisation unit on every CSI frame.
There is no way to convey to the user which links are real connections
and which are just linked DMA write and read channels somewhere else.

Is there even a reason for the user to switch between direct and via
memory paths manually, or could this be inferred from other state
(formats, active links)?

>  If the SMFC entity were folded
> > into the CSI entity, there would have to be a "direct to PRPVF" output 
> > pad
> > and a "indirect via SMFC" output pad and I'm not sure how that info would
> > be conveyed to the user. With a SMFC entity those pipelines are explicit.
>
> In summary here, unless you have strong objection I'd prefer to keep a
> distinct SMFC entity.

I do, I still think you could both describe the hardware better and
reduce unnecessary interface complexity by removing the SMFC entities
and their imagined links to the IC.

> It makes the pipelines more clear to the user, and it
> better models the IPU internals.

I disagree. The IPU has a single SMFC that acts as a FIFO to both CSIs in
the CSI -> SMFC -> IDMAC path, not two. The "channel linking" (automatic
DMA triggers between channels in the IDMAC via FSU) has nothing to do
with the SMFC.

> >> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
> >> best choice. It is true that the three tasks can be enabled separately,
> >> but to my understanding, the PRP ENC and PRP VF tasks share a single
> >> input channel. Shouldn't this be a single PRP subdevice with one input
> >> and two (VF, ENC) outputs?
> >
> > Since the VDIC sends its motion compensated frames to the PRP VF task,
> > I've created the PRPVF entity solely for motion compensated de-interlace
> > support. I don't really see any other use for the PRPVF task except for
> > motion compensated de-interlace.

I suppose simultaneous scaling to two different resolutions without
going through memory could be one interesting use case:

           ,--> VF --> IDMAC -> mem -> to display
CSI -> IC PRP 
           `--> ENC -> IDMAC -> mem -> to VPU

> > So really, the PRPVF entity is a combination of the VDIC and PRPVF 
> > subunits.

I'd prefer to keep them separate; we could then use a deactivated VDIC
-> IC PRP link to mark the VDIC as available to be used for its
combining functionality by another driver.

Also logically, the VDIC subdev would be the right user interface to
switch from interlaced to non-interlaced pad modes, whereas the IC
subdev(s) should just allow changing color space and size between its
inputs and outputs.

> > So looking at link_setup() in imx-csi.c, you'll see that when the CSI 
> > is linked
> > to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
> >
> > But if we were to create a VDIC entity, I can see how we could then
> > have a single PRP entity. Then if the PRP entity is receiving from the 
> > VDIC,
> > the PRP VF task would be activated.
> >
> > Another advantage of creating a distinct VDIC entity is that frames could
> > potentially be routed directly from the VDIC to camif, for 
> > motion-compensated
> > frame capture only with no scaling/CSC. I think that would be IDMAC 
> > channel
> > 5, we've tried to get that pipeline to work in the past without 
> > success. That's
> > mainly why I decided not to attempt it and instead fold VDIC into 
> > PRPVF entity.

There's also channel 13, but that's described as "Recent field from
CSI", so I think that might be for CSI-only mode.

> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You 
> are correct that PRPENC and PRPVF do share an input channel (the CSIs).
> But the PRPVF has an additional input channel from the VDIC, 

Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
or am I mistaken?
The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
(IC_INPUT=1), and that is the same for both PRP->ENC and PRP->VF.

> and since my PRPVF entity roles
> up the VDIC internally, it is actually receiving from the VDIC channel.
> So unless you think we should have a distinct VDIC entity, I would like 
> to keep this
> the way it is.

Yes, I think VDIC should be separated out of PRPVF. What do you think
about splitting the IC PRP into three parts?

PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
and two output pads connected to PRPVF and PRPENC, respectively. This
would even allow the PRP to describe the downscale and PRPVF and
PRPENC the bilinear upscale part of the IC.

regards
Philipp


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-10  0:15             ` Steve Longerbeam
  2017-01-10 23:52               ` Steve Longerbeam
@ 2017-01-11 12:11               ` Philipp Zabel
  1 sibling, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2017-01-11 12:11 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Am Montag, den 09.01.2017, 16:15 -0800 schrieb Steve Longerbeam:
[...]
> Since the VDIC sends its motion compensated frames to the PRP VF task,
> I've created the PRPVF entity solely for motion compensated de-interlace
> support. I don't really see any other use for the PRPVF task except for
> motion compensated de-interlace.
> 
> So really, the PRPVF entity is a combination of the VDIC and PRPVF subunits.
> 
> So looking at link_setup() in imx-csi.c, you'll see that when the CSI is 
> linked
> to PRPVF entity, it is actually sending to IPU_CSI_DEST_VDIC.
> 
> But if we were to create a VDIC entity, I can see how we could then
> have a single PRP entity. Then if the PRP entity is receiving from the VDIC,
> the PRP VF task would be activated.
> 
> Another advantage of creating a distinct VDIC entity is that frames could
> potentially be routed directly from the VDIC to camif, for 
> motion-compensated
> frame capture only with no scaling/CSC. I think that would be IDMAC channel
> 5, we've tried to get that pipeline to work in the past without success. 
> That's
> mainly why I decided not to attempt it and instead fold VDIC into PRPVF 
> entity.
> 
> 
> > On the other hand, there is currently no way to communicate to userspace
> > that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
> > input resolution, and then scale up bilinearly for there. So instead of
> > pretending to be able to downscale to any resolution, it would be better
> > to split each IC task into two subdevs, one for the
> > 1/2-or-1/4-downscaler, and one for the bilinear upscaler.
> 
> Hmm, good point, but couldn't we just document that fact?
> 
> 
> 
> > Next there is the issue of the custom mem2mem infrastructure inside the
> > IPU driver. I think this should be ultimately implemented at a higher
> > level,
> 
> if we were to move it up, into what subsystem would it go? I guess
> v4l2, but then we lose the possibility of other subsystems making use
> of it, such as drm.
> 
> >   but I see no way this will ever move out of the IPU driver once
> > the userspace inferface gets established.
> 
> it would be more difficult at that point, true.
> 
> > Then there are a few issues that are not userspace visible, so less
> > pressing. For example modelling the internal subdevs as separate
> > platform devices with async subdevices seems unnecessarily indirect. Why
> > not drop the platform devices and create the subdevs directly instead of
> > asynchronously?
> 
> This came out of a previous implementation where the IPU internal
> subunits and their links were represented as an OF graph (the patches
> I floated by you that you didn't like :) I've been meaning to ask you why
> you don't like to expose the IPU internals via OF graph, I have my theories
> why, but I digress). In that implementation the internal subdevs had to be
> registered asynchronously with device node match.
> 
> I thought about going back to registering the IPU internal subdevs
> directly, but I actually like doing it this way, it provides a satisfying
> symmetry that all the subdevs are registered in the same way
> asynchronously, the only difference being the external subdevs are
> registered with device node match, and the internal subdevs with
> device name match.
> 
> 
> > I'll try to give the driver a proper review in the next days.
> 
> Ok thanks.
> 
> >> Philipp's driver only supports unconverted image capture from the SMFC.
> >> In addition
> >> to that, mine allows for all the hardware links supported by the IPU,
> >> including routing
> >> frames from the CSI directly to the Image converter for scaling up to
> >> 4096x4096,
> > Note that tiled scaling (anything > 1024x1024) currently doesn't produce
> > correct results due to the fractional reset at the seam. This is not a
> > property of this driver, but still needs to be fixed in the ipu-v3 core.
> 
> right, understood.
> 
> >> colorspace conversion, rotation, and motion compensated de-interlace.
> >> Yes all these
> >> conversion can be carried out post-capture via a mem2mem device, but
> >> conversion
> >> directly from CSI capture has advantages, including minimized CPU
> >> utilization and
> >> lower AXI bus traffic.
> > These benefits are limited by the hardware to non-rotated frames <
> > 1024x1024 pixels.
> 
> right. But I can see many use-cases out there, where people need
> scaling and/or colorspace conversion in video capture, but don't
> need output > 1024x1024.
> 
> Steve
> 
> 



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-11 12:10                 ` Philipp Zabel
@ 2017-01-12 23:22                   ` Steve Longerbeam
  2017-01-12 23:43                     ` Steve Longerbeam
                                       ` (2 more replies)
  0 siblings, 3 replies; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-12 23:22 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Philipp, JM,


First, let me say that you both have convinced me that we need a VDIC
entity. I've implemented that in the branch imx-media-staging-md-vdic.
At this point it only implements the M/C de-interlacing function, not the
plane Combiner. So it has only one input and one output pad. I would
imagine it will need two additional inputs and another output to support
the Combiner (two pads for each plane to be combined, and a combiner
output pad).
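
Roughly sketching what that pad layout could look like (all names and
indices below are made up; only the first sink and source pad exist in
that branch today):

/* Hypothetical pad layout for a VDIC entity with combining support. */
enum {
        VDIC_SINK_PAD_FIELDS = 0,       /* interlaced input, from CSI or memory */
        VDIC_SINK_PAD_PLANE_A,          /* combiner plane input */
        VDIC_SINK_PAD_PLANE_B,          /* combiner plane input */
        VDIC_SRC_PAD_DEINTERLACED,      /* M/C de-interlaced output */
        VDIC_SRC_PAD_COMBINED,          /* combiner output */
        VDIC_NUM_PADS,
};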

More below...

On 01/11/2017 04:10 AM, Philipp Zabel wrote:
> Hi Steve,
>
> Am Dienstag, den 10.01.2017, 15:52 -0800 schrieb Steve Longerbeam:
>> On 01/09/2017 04:15 PM, Steve Longerbeam wrote:
>>> Hi Philipp,
>>>
>>>
>>> On 01/09/2017 11:43 AM, Philipp Zabel wrote:
>>>
>>>
>>> <snip>
>>>> One is the amount and organization of subdevices/media entities visible
>>>> to userspace. The SMFCs should not be user controllable subdevices, but
>>>> can be implicitly enabled when a CSI is directly linked to a camif.
>>> I agree the SMFC could be folded into the CSI, but I see at least one
>>> issue.
> I don't suggest to fold it into the CSI.
> The CSI should have one output pad that can be connected either to
> the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
> (CSIx_DATA_DEST=2).

Right, and CSI can connect to VDIC. I don't know if it is documented,
but to route to VDIC, set CSIx_DATA_DEST=1, as if to IC PRP. Confusing,
but it's as if the VDIC is somehow part of the IC.

>   The SMFC should be considered part of the link
> between CSI and IDMAC.

sure, I can agree with that.

> The IC PRP input pad should be connected to either the CSI0 output pad
> (CSI_SEL=0,IC_INPUT=0), the CSI1 output pad (CSI_SEL=1,IC_INPUT=0), or
> to the VDIC (IC_INPUT=1).

correct.
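
Just to spell out the mapping in code (a sketch only; ipu_set_ic_src_mux()
is the existing helper, the enum and wrapper below are made up for
illustration):

#include <video/imx-ipu-v3.h>

/* Illustrative only: route the IC PRP input as described above. */
enum prp_src { PRP_SRC_CSI0, PRP_SRC_CSI1, PRP_SRC_VDIC };

static void prp_set_input(struct ipu_soc *ipu, enum prp_src src)
{
        switch (src) {
        case PRP_SRC_CSI0:
                ipu_set_ic_src_mux(ipu, 0, false);  /* CSI_SEL=0, IC_INPUT=0 */
                break;
        case PRP_SRC_CSI1:
                ipu_set_ic_src_mux(ipu, 1, false);  /* CSI_SEL=1, IC_INPUT=0 */
                break;
        case PRP_SRC_VDIC:
                ipu_set_ic_src_mux(ipu, 0, true);   /* IC_INPUT=1 */
                break;
        }
}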

>
>>>  From the dot graph you'll see that the PRPVF entity can receive directly
>>> from the CSI, or indirectly via the SMFC.
> And that's one reason why I think representing the mem2mem paths as
> links in the media controller interface is questionable. The path "via
> SMFC" is not really a hardware connection between CSI -> SMFC -> IC PRP,
> but two completely separate paths:
> CSI -> SMFC -> IDMAC -> mem and mem -> IDMAC -> IC PRP with different
> IDMAC read/write channels. The only real connection is that the IC DMA
> transfers are triggered automatically by the frame synchronisation unit
> on every CSI frame.
> There is no way to convey to the user which links are real connections
> and which are just linked DMA write and read channels somewhere else.
>
> Is there even a reason for the user to switch between direct and via
> memory paths manually, or could this be inferred from other state
> (formats, active links)?

a CSI -> VDIC link doesn't convey whether that is a direct link using
the FSU, or whether it is via SMFC and memory buffers.

If you'll recall, the VDIC has three motion modes: low, medium, and
high.

When VDIC receives directly from CSI, it can only operate in
high motion mode (it processes a single field F(n-1) sent directly
from the CSI using the FSU). The reference manual refers to this
as "real time mode".

The low and medium motion modes require processing all three
fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
channels 8, 9, and 10 respectively.

So in order to support low and medium motion modes, there needs to
be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
memory buffers.

How about this: we can do away with SMFC entities by having two
output pads from the CSI: a "direct" output pad that can link to PRP and
VDIC, and an "IDMAC via SMFC" output pad that links to the entities that
require memory buffers (VDIC in low/medium motion mode, camif, and
PP). Only one of those output pads can be active at a time. I'm not sure
whether that is allowed by the media framework: do two source pads imply
that the entity can activate both of those pads simultaneously, or is it
allowed that only one of two or more source pads is active at a time?
It's not clear to me.

Let me know if you agree with this proposal.
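
If drivers are expected to enforce that kind of exclusivity themselves,
the CSI's link_setup op could do it along these lines (a rough sketch
only; the CSI_SRC_PAD_* names and indices are made up):

#include <linux/errno.h>
#include <media/media-entity.h>

#define CSI_SRC_PAD_DIRECT      1
#define CSI_SRC_PAD_IDMAC       2

static int csi_link_setup(struct media_entity *entity,
                          const struct media_pad *local,
                          const struct media_pad *remote, u32 flags)
{
        struct media_link *link;
        unsigned int other;

        if (!(local->flags & MEDIA_PAD_FL_SOURCE) ||
            !(flags & MEDIA_LNK_FL_ENABLED))
                return 0;

        other = (local->index == CSI_SRC_PAD_DIRECT) ?
                CSI_SRC_PAD_IDMAC : CSI_SRC_PAD_DIRECT;

        /* Refuse to enable this source pad while the other one is in use. */
        list_for_each_entry(link, &entity->links, list)
                if (link->source->entity == entity &&
                    link->source->index == other &&
                    (link->flags & MEDIA_LNK_FL_ENABLED))
                        return -EBUSY;

        return 0;
}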

> <snip>
>>>> Also I'm not convinced the 1:1 mapping of IC task to subdevices is the
>>>> best choice. It is true that the three tasks can be enabled separately,
>>>> but to my understanding, the PRP ENC and PRP VF tasks share a single
>>>> input channel. Shouldn't this be a single PRP subdevice with one input
>>>> and two (VF, ENC) outputs?
>>> Since the VDIC sends its motion compensated frames to the PRP VF task,
>>> I've created the PRPVF entity solely for motion compensated de-interlace
>>> support. I don't really see any other use for the PRPVF task except for
>>> motion compensated de-interlace.
> I suppose simultaneous scaling to two different resolutions without
> going through memory could be one interesting use case:
>
>             ,--> VF --> IDMAC -> mem -> to display
> CSI -> IC PRP
>             `--> ENC -> IDMAC -> mem -> to VPU

Yes, that is one possibility.

>>> So really, the PRPVF entity is a combination of the VDIC and PRPVF
>>> subunits.
> I'd prefer to keep them separate, we could then use a deactivated VDIC
> -> IC PRP link to mark the VDIC as available to be used for its
> combining functionality by another driver.
>
> Also logically, the VDIC subdev would be the right user interface to
> switch from interlaced to non-interlaced pad modes, whereas the IC
> subdev(s) should just allow changing color space and size between its
> inputs and outputs.

right, that has been implemented in branch imx-media-staging-md-vdic.

> <snip>
>> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
>> are correct that PRPENC and PRPVF do share an input channel (the CSIs).
>> But the PRPVF has an additional input channel from the VDIC,
> Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
> or am I mistaken?

The FSU only sends VDIC output to PRPVF, not PRPENC. It's not
well documented, but see "IPU main flows" section in the manual.
All listed pipelines that route VDIC to IC go to IC (PRP VF).

Which suggests that when IC receives from VDIC, PRPENC can
receive no data and is effectively unusable.

> The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
> (IC_INPUT=1), and that is the same for both PRP->ENC and PRP->VF.

true, but in fact the FSU only sends to PRP VF.

>> and since my PRPVF entity rolls up the VDIC internally, it is actually
>> receiving from the VDIC channel. So unless you think we should have a
>> distinct VDIC entity, I would like to keep this the way it is.
> Yes, I think VDIC should be separated out of PRPVF. What do you think
> about splitting the IC PRP into three parts?
>
> PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
> and two output pads connected to PRPVF and PRPENC, respectively. This
> would even allow to have the PRP describe the downscale and PRPVF and
> PRPENC describe the bilinear upscale part of the IC.

Sure sounds good to me. PRPENC and PRPVF are independent,
but they cannot process different data streams, they both have to
work with CSI0 or CSI1, so this makes sense.

I'll start looking into it.
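
For the record, the topology being proposed would then look roughly like
this:

             CSI0 --.
             CSI1 ---+--> PRP --+--> PRPVF  --> IDMAC --> mem
             VDIC --'           `--> PRPENC --> IDMAC --> mem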

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-12 23:22                   ` Steve Longerbeam
@ 2017-01-12 23:43                     ` Steve Longerbeam
  2017-01-13  2:22                       ` Steve Longerbeam
  2017-01-13 11:04                     ` Philipp Zabel
  2017-01-13 11:05                     ` Philipp Zabel
  2 siblings, 1 reply; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-12 23:43 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List



On 01/12/2017 03:22 PM, Steve Longerbeam wrote:
>
>
>>> and since my PRPVF entity rolls up the VDIC internally, it is actually
>>> receiving from the VDIC channel. So unless you think we should have a
>>> distinct VDIC entity, I would like to keep this the way it is.
>> Yes, I think VDIC should be separated out of PRPVF. What do you think
>> about splitting the IC PRP into three parts?
>>
>> PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
>> and two output pads connected to PRPVF and PRPENC, respectively. This
>> would even allow to have the PRP describe the downscale and PRPVF and
>> PRPENC describe the bilinear upscale part of the IC.

Actually, how about the following:

PRP would have one input pad coming from CSI0, CSI1, or VDIC. But
instead of another level of indirection with two more PRPENC and PRPVF
entities, PRP would instead have two output pads, one for PRPVF output
and one for PRPENC output.

Both output pads could be activated if the input is connected to CSI0 or
CSI1. And only the PRPVF output can be activated if the input is from VDIC.

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-12 23:43                     ` Steve Longerbeam
@ 2017-01-13  2:22                       ` Steve Longerbeam
  2017-01-13 11:18                         ` Philipp Zabel
  0 siblings, 1 reply; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-13  2:22 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List



On 01/12/2017 03:43 PM, Steve Longerbeam wrote:
>
>
> On 01/12/2017 03:22 PM, Steve Longerbeam wrote:
>>
>>
> >>>> and since my PRPVF entity rolls up the VDIC internally, it is
> >>>> actually receiving from the VDIC channel. So unless you think we
> >>>> should have a distinct VDIC entity, I would like to keep this the
> >>>> way it is.
>>> Yes, I think VDIC should be separated out of PRPVF. What do you think
>>> about splitting the IC PRP into three parts?
>>>
>>> PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
>>> and two output pads connected to PRPVF and PRPENC, respectively. This
>>> would even allow to have the PRP describe the downscale and PRPVF and
>>> PRPENC describe the bilinear upscale part of the IC.
>
> Actually, how about the following:
>
> PRP would have one input pad coming from CSI0, CSI1, or VDIC. But
> instead of another level of indirection with two more PRPENC and PRPVF
> entities, PRP would instead have two output pads, one for PRPVF output
> and one for PRPENC output.
>
> > Both output pads could be activated if the input is connected to CSI0 or
> > CSI1. And only the PRPVF output can be activated if the input is from VDIC.

Actually that proved too difficult. I went with your original idea. The
branch that implements this is imx-media-staging-md-prp. The media dot
graph looks good, but I have not tested it yet. I'll start testing it
tomorrow.

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-12 23:22                   ` Steve Longerbeam
  2017-01-12 23:43                     ` Steve Longerbeam
@ 2017-01-13 11:04                     ` Philipp Zabel
  2017-01-13 11:05                     ` Philipp Zabel
  2 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2017-01-13 11:04 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Am Donnerstag, den 12.01.2017, 15:22 -0800 schrieb Steve Longerbeam:
> Hi Philipp, JM,
>
> First, let me say that you both have convinced me that we need a VDIC
> entity. I've implemented that in the branch imx-media-staging-md-vdic.
> At this point it only implements the M/C de-interlacing function, not the
> plane Combiner. So it has only one input and one output pad.

Excellent.

>  I would
> imagine it will need two additional inputs and another output to support
> the Combiner (two pads for each plane to be combined, and a combiner
> output pad).

If I accept for a moment that IDMAC/FSU channel links are described as
media entity links, that would be right, I guess. The input pads would
represent the VDI1/3_SRC_SEL FSU muxes and the IDMAC read channels 25
and 26.

> More below...
[...]
> > I don't suggest to fold it into the CSI.
> > The CSI should have one output pad that can be connected either to
> > the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
> > (CSIx_DATA_DEST=2).
> 
> Right, and CSI can connect to VDIC. I don't know if it is documented,
> but to route to VDIC, set CSIx_DATA_DEST=1, as if to IC PRP. Confusing,
> but it's as if the VDIC is somehow part of the IC.

Agreed. I get the feeling that the VDIC FIFOs (especially FIFO1) and the
"FIFO[s] located in the Input Buffer Memory" that are mentioned in the
IC chapter might be partially referring to the same thing, or at least
are somehow tightly interconnected. At least VDIC FIFO1 described in
Figure 37-47 "VDIC Block Diagram" has the direct CSI input into FIFO1,
and it is mentioned that the IDMAC can read back from FIFO1 directly.

[...]
> > Is there even a reason for the user to switch between direct and via
> > memory paths manually, or could this be inferred from other state
> > (formats, active links)?
> 
> a CSI -> VDIC link doesn't convey whether that is a direct link using
> the FSU, or whether it is via SMFC and memory buffers.
> 
> If you'll recall, the VDIC has three motion modes: low, medium, and
> high.
> 
> When VDIC receives directly from CSI, it can only operate in
> high motion mode (it processes a single field F(n-1) sent directly
> from the CSI using the FSU). The reference manual refers to this
> as "real time mode".

In my opinion this is the only mode that should be supported in the
capture driver. But that may be wishful thinking to a certain degree -
see below.

> The low and medium motion modes require processing all three
> fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
> channels 8, 9, and 10 respectively.
> 
> So in order to support low and medium motion modes, there needs to
> be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
> memory buffers.

In the cases where the VDIC reads all three fields from memory, I'd
prefer that to be implemented as a separate mem2mem device. While useful
on its own, there could be an API to link together the capture and
output of different video4linux devices, and that could get a backend to
implement IDMAC/FSU channel linking where supported.

> How about this: we can do away with SMFC entities by having two
> output pads from the CSI: a "direct" output pad that can link to PRP and
> VDIC, and an "IDMAC via SMFC" output pad that links to the entities that
> require memory buffers (VDIC in low/medium motion mode, camif, and
> PP). Only one of those output pads can be active at a time. I'm not sure
> whether that is allowed by the media framework: do two source pads imply
> that the entity can activate both of those pads simultaneously, or is it
> allowed that only one of two or more source pads is active at a time?
> It's not clear to me.
> 
> Let me know if you agree with this proposal.

In my opinion that is better than having the SMFC as a separate entity;
even better would be not to have to describe the memory paths as media
links.

[...]
> >> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
> >> are correct that PRPENC and PRPVF do share an input channel (the CSIs).
> >> But the PRPVF has an additional input channel from the VDIC,
> > Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
> > or am I mistaken?
> 
> The FSU only sends VDIC output to PRPVF, not PRPENC. It's not
> well documented, but see "IPU main flows" section in the manual.
> All listed pipelines that route VDIC to IC go to IC (PRP VF).

Sorry to be a bit pedantic, the FSU does not send output. It just
triggers a DMA read channel (IDMAC or DMAIC) whenever signalled by
another write channel's EOF.

Since the read channel of PRPVF and PRPENC is the same (IC direct, cb7),
I don't yet understand how the VDIC output can be sent to one but not
the other. As you said, the documentation is a bit confusing in this
regard.

> Which suggests that when IC receives from VDIC, PRPENC can
> receive no data and is effectively unusable.
> 
> > The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
> > (IC_INPUT=1), and that is the same for both PRP->ENC and PRP->VF.
> 
> true, but in fact the FSU only sends to PRP VF.

Ok. Still, I think in that case we can describe the link as VDIC -> PRP
and just prohibit the PRPENC links to be enabled when that is set.

> >> and since my PRPVF entity rolls up the VDIC internally, it is actually
> >> receiving from the VDIC channel. So unless you think we should have a
> >> distinct VDIC entity, I would like to keep this the way it is.
> > Yes, I think VDIC should be separated out of PRPVF. What do you think
> > about splitting the IC PRP into three parts?
> >
> > PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
> > and two output pads connected to PRPVF and PRPENC, respectively. This
> > would even allow to have the PRP describe the downscale and PRPVF and
> > PRPENC describe the bilinear upscale part of the IC.
> 
> Sure sounds good to me. PRPENC and PRPVF are independent,
> but they cannot process different data streams, they both have to
> work with CSI0 or CSI1, so this makes sense.
> 
> I'll start looking into it.

Thanks!

regards
Philipp


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-12 23:22                   ` Steve Longerbeam
  2017-01-12 23:43                     ` Steve Longerbeam
  2017-01-13 11:04                     ` Philipp Zabel
@ 2017-01-13 11:05                     ` Philipp Zabel
  2017-01-14 20:26                       ` Steve Longerbeam
  2 siblings, 1 reply; 61+ messages in thread
From: Philipp Zabel @ 2017-01-13 11:05 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Steve,

Am Donnerstag, den 12.01.2017, 15:22 -0800 schrieb Steve Longerbeam:
> Hi Philipp, JM,
>
> First, let me say that you both have convinced me that we need a VDIC
> entity. I've implemented that in the branch imx-media-staging-md-vdic.
> At this point it only implements the M/C de-interlacing function, not the
> plane Combiner. So it has only one input and one output pad.

Excellent.

>  I would
> imagine it will need two additional inputs and another output to support
> the Combiner (two pads for each plane to be combined, and a combiner
> output pad).

If I accept for a moment that IDMAC/FSU channel links are described as
media entity links, that would be right, I guess. The input pads would
represent the VDI1/3_SRC_SEL FSU muxes and the IDMAC read channels 25
and 26.

> More below...
[...]
> > I don't suggest to fold it into the CSI.
> > The CSI should have one output pad that can be connected either to
> > the IC PRP input (CSIx_DATA_DEST=1) or to the IDMAC via SMFC
> > (CSIx_DATA_DEST=2).
> 
> Right, and CSI can connect to VDIC. I don't know if it is documented,
> but to route to VDIC, set CSIx_DATA_DEST=1, as if to IC PRP. Confusing,
> but it's as if the VDIC is somehow part of the IC.

Agreed. I get the feeling that the VDIC FIFOs (especially FIFO1) and the
"FIFO[s] located in the Input Buffer Memory" that are mentioned in the
IC chapter might be partially referring to the same thing, or at least
are somehow tightly interconnected. At least VDIC FIFO1 described in
Figure 37-47 "VDIC Block Diagram" has the direct CSI input into FIFO1,
and it is mentioned that the IDMAC can read back from FIFO1 directly.

[...]
> > Is there even a reason for the user to switch between direct and via
> > memory paths manually, or could this be inferred from other state
> > (formats, active links)?
> 
> a CSI -> VDIC link doesn't convey whether that is a direct link using
> the FSU, or whether it is via SMFC and memory buffers.
> 
> If you'll recall, the VDIC has three motion modes: low, medium, and
> high.
> 
> When VDIC receives directly from CSI, it can only operate in
> high motion mode (it processes a single field F(n-1) sent directly
> from the CSI using the FSU). The reference manual refers to this
> as "real time mode".

In my opinion this is the only mode that should be supported in the
capture driver. But that may be wishful thinking to a certain degree -
see below.

> The low and medium motion modes require processing all three
> fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
> channels 8, 9, and 10 respectively.
> 
> So in order to support low and medium motion modes, there needs to
> be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
> memory buffers.

In the cases where the VDIC reads all three fields from memory, I'd
prefer that to be implemented as a separate mem2mem device. While useful
on its own, there could be an API to link together the capture and
output of different video4linux devices, and that could get a backend to
implement IDMAC/FSU channel linking where supported.

> How about this: we can do away with SMFC entities by having two
> output pads from the CSI: a "direct" output pad that can link to PRP and
> VDIC, and an "IDMAC via SMFC" output pad that links to the entities that
> require memory buffers (VDIC in low/medium motion mode, camif, and
> PP). Only one of those output pads can be active at a time. I'm not sure
> whether that is allowed by the media framework: do two source pads imply
> that the entity can activate both of those pads simultaneously, or is it
> allowed that only one of two or more source pads is active at a time?
> It's not clear to me.
> 
> Let me know if you agree with this proposal.

In my opinion that is better than having the SMFC as a separate entity;
even better would be not to have to describe the memory paths as media
links.

[...]
> >> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
> >> are correct that PRPENC and PRPVF do share an input channel (the CSIs).
> >> But the PRPVF has an additional input channel from the VDIC,
> > Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
> > or am I mistaken?
> 
> The FSU only sends VDIC output to PRPVF, not PRPENC. It's not
> well documented, but see "IPU main flows" section in the manual.
> All listed pipelines that route VDIC to IC go to IC (PRP VF).

Sorry to be a bit pedantic, the FSU does not send output. It just
triggers a DMA read channel (IDMAC or DMAIC) whenever signalled by
another write channel's EOF.

Since the read channel of PRPVF and PRPENC is the same (IC direct, cb7),
I don't yet understand how the VDIC output can be sent to one but not
the other. As you said, the documentation is a bit confusing in this
regard.

> Which suggests that when IC receives from VDIC, PRPENC can
> receive no data and is effectively unusable.
> 
> > The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
> > (IC_INPUT=1), and that is the same for both PRP->ENC and PRP->VF.
> 
> true, but in fact the FSU only sends to PRP VF.

Ok. Still, I think in that case we can describe the link as VDIC -> PRP
and just prohibit the PRPENC links to be enabled when that is set.

> >> and since my PRPVF entity rolls up the VDIC internally, it is actually
> >> receiving from the VDIC channel. So unless you think we should have a
> >> distinct VDIC entity, I would like to keep this the way it is.
> > Yes, I think VDIC should be separated out of PRPVF. What do you think
> > about splitting the IC PRP into three parts?
> >
> > PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
> > and two output pads connected to PRPVF and PRPENC, respectively. This
> > would even allow to have the PRP describe the downscale and PRPVF and
> > PRPENC describe the bilinear upscale part of the IC.
> 
> Sure sounds good to me. PRPENC and PRPVF are independent,
> but they cannot process different data streams, they both have to
> work with CSI0 or CSI1, so this makes sense.
> 
> I'll start looking into it.

Thanks!

regards
Philipp



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-13  2:22                       ` Steve Longerbeam
@ 2017-01-13 11:18                         ` Philipp Zabel
  0 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2017-01-13 11:18 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Am Donnerstag, den 12.01.2017, 18:22 -0800 schrieb Steve Longerbeam:
> 
> On 01/12/2017 03:43 PM, Steve Longerbeam wrote:
> >
> >
> > On 01/12/2017 03:22 PM, Steve Longerbeam wrote:
> >>
> >>
> >>>> and since my PRPVF entity rolls up the VDIC internally, it is
> >>>> actually receiving from the VDIC channel. So unless you think we
> >>>> should have a distinct VDIC entity, I would like to keep this the
> >>>> way it is.
> >>> Yes, I think VDIC should be separated out of PRPVF. What do you think
> >>> about splitting the IC PRP into three parts?
> >>>
> >>> PRP could have one input pad connected to either CSI0, CSI1, or VDIC,
> >>> and two output pads connected to PRPVF and PRPENC, respectively. This
> >>> would even allow to have the PRP describe the downscale and PRPVF and
> >>> PRPENC describe the bilinear upscale part of the IC.
> >
> > Actually, how about the following:
> >
> > PRP would have one input pad coming from CSI0, CSI1, or VDIC. But
> > instead of another level of indirection with two more PRPENC and PRPVF
> > entities, PRP would instead have two output pads, one for PRPVF output
> > and one for PRPENC output.
> >
> > Both output pads could be activated if the input is connected to CSI0 or
> > CSI1. And only the PRPVF output can be activated if the input is from VDIC.

Regarding the single input issue, I'd be fine with that too. But I
really like the idea of splitting the downsize section from the main
processing section because seeing the decimated resolution at the
downsize output pad makes it really obvious why the resulting frames are
blurry.

> Actually that proved too difficult. I went with your original idea. The
> branch that implements this is imx-media-staging-md-prp. The media dot
> graph looks good, but I have not tested it yet. I'll start testing it
> tomorrow.

Thanks, I'll have a look.

regards
Philipp



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-13 11:05                     ` Philipp Zabel
@ 2017-01-14 20:26                       ` Steve Longerbeam
  2017-01-16 11:49                         ` Philipp Zabel
  0 siblings, 1 reply; 61+ messages in thread
From: Steve Longerbeam @ 2017-01-14 20:26 UTC (permalink / raw)
  To: Philipp Zabel
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List



On 01/13/2017 03:05 AM, Philipp Zabel wrote:
> Hi Steve,
>
> Am Donnerstag, den 12.01.2017, 15:22 -0800 schrieb Steve Longerbeam:
>> Hi Philipp, JM,
>>
>> First, let me say that you both have convinced me that we need a VDIC
>> entity. I've implemented that in the branch imx-media-staging-md-vdic.
>> At this point it only implements the M/C de-interlacing function, not the
>> plane Combiner. So it has only one input and one output pad.
> Excellent.
>
>>   I would
>> imagine it will need two additional inputs and another output to support
>> the Combiner (two pads for each plane to be combined, and a combiner
>> output pad).
> If I accept for a moment that IDMAC/FSU channel links are described as
> media entity links, that would be right, I guess.

Hi Philipp,

Let me start by asking why you are averse to the idea that a media
driver passes video frames from source to sink using memory
buffers? There is no hard-and-fast rule in the media framework that
states this should not be done, AFAIK.

I agree this overlaps with the mem2mem device idea somewhat, but
IMHO a media device should be allowed to use internal memory
buffers to pass video frames between pads, if that's what it needs to
do to implement some functionality.

Can anyone else listening on this thread, chime in on this topic?


>>> Is there even a reason for the user to switch between direct and via
>>> memory paths manually, or could this be inferred from other state
>>> (formats, active links)?
>> a CSI -> VDIC link doesn't convey whether that is a direct link using
>> the FSU, or whether it is via SMFC and memory buffers.
>>
>> If you'll recall, the VDIC has three motion modes: low, medium, and
>> high.
>>
>> When VDIC receives directly from CSI, it can only operate in
>> high motion mode (it processes a single field F(n-1) sent directly
>> from the CSI using the FSU). The reference manual refers to this
>> as "real time mode".
> In my opinion this is the only mode that should be supported in the
> capture driver.

I have to disagree on that.

>   But that may be wishful thinking to a certain degree -
> see below.
>
>> The low and medium motion modes require processing all three
>> fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
>> channels 8, 9, and 10 respectively.
>>
>> So in order to support low and medium motion modes, there needs to
>> be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
>> memory buffers.
> In the cases where the VDIC reads all three fields from memory, I'd
> prefer that to be implemented as a separate mem2mem device.


I prefer that there be a single VDIC media entity, that makes use of its
dma read channels in order to support all of its motion compensation
modes.



>   While useful
> on its own, there could be an API to link together the capture and
> output of different video4linux devices, and that could get a backend to
> implement IDMAC/FSU channel linking where supported.

An interesting idea, but it also sounds a lot like what can already be
provided by a pipeline in the media framework, by linking an entity
that is a video source to an entity that is a video sink.


>
>> How about this: we can do away with SMFC entities by having two
>> output pads from the CSI: a "direct" output pad that can link to PRP and
>> VDIC, and an "IDMAC via SMFC" output pad that links to the entities that
>> require memory buffers (VDIC in low/medium motion mode, camif, and
>> PP). Only one of those output pads can be active at a time. I'm not sure
>> whether that is allowed by the media framework: do two source pads imply
>> that the entity can activate both of those pads simultaneously, or is it
>> allowed that only one of two or more source pads is active at a time?
>> It's not clear to me.
>>
>> Let me know if you agree with this proposal.
> In my opinion that is better than having the SMFC as a separate entity;
> even better would be not to have to describe the memory paths as media
> links.

Ok, I'll go ahead and implement this idea then.

>
> [...]
>>>> Here also, I'd prefer to keep distinct PRPENC and PRPVF entities. You
>>>> are correct that PRPENC and PRPVF do share an input channel (the CSIs).
>>>> But the PRPVF has an additional input channel from the VDIC,
>>> Wait, that is a VDIC -> PRP connection, not a VDIC -> PRPVF connection,
>>> or am I mistaken?
>> The FSU only sends VDIC output to PRPVF, not PRPENC. It's not
>> well documented, but see "IPU main flows" section in the manual.
>> All listed pipelines that route VDIC to IC go to IC (PRP VF).
> Sorry to be a bit pedantic, the FSU does not send output. It just
> triggers a DMA read channel (IDMAC or DMAIC) whenever signalled by
> another write channel's EOF.

Right, poor choice of wording on my part, thanks for the
clarification.

>
> Since the read channel of PRPVF and PRPENC is the same (IC direct, cb7),
> I don't yet understand how the VDIC output can be sent to one but not
> the other. As you said, the documentation is a bit confusing in this
> regard.

agreed.

>> Which suggests that when IC receives from VDIC, PRPENC can
>> receive no data and is effectively unusable.
>>
>>> The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
>>> (IC_INPUT=1), and that is the same for both PRP->ENC and PRP->VF.
>> true, but in fact the FSU only sends to PRP VF.
> Ok. Still, I think in that case we can describe the link as VDIC -> PRP
> and just prohibit the PRPENC links to be enabled when that is set.

exactly, that is what I've implemented in branch
imx-media-staging-md-prp.

Steve


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-14 20:26                       ` Steve Longerbeam
@ 2017-01-16 11:49                         ` Philipp Zabel
  0 siblings, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2017-01-16 11:49 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Steve,

On Sat, 2017-01-14 at 12:26 -0800, Steve Longerbeam wrote:
> 
> On 01/13/2017 03:05 AM, Philipp Zabel wrote:
> > Hi Steve,
> >
> > Am Donnerstag, den 12.01.2017, 15:22 -0800 schrieb Steve Longerbeam:
[...]
> >>   I would
> >> imagine it will need two additional inputs and another output to support
> >> the Combiner (two pads for each plane to be combined, and a combiner
> >> output pad).
> > If I accept for a moment that IDMAC/FSU channel links are described as
> > media entity links, that would be right, I guess.
> 
> Hi Philipp,
> 
> Let me start by asking why you are averse to the idea that a media
> driver passes video frames from source to sink using memory
> buffers? There is no hard-and-fast rule in the media framework that
> states this should not be done, AFAIK.

To help you understand my perspective, I mostly use v4l2 devices in
GStreamer pipelines. That means, chaining separate mem2mem devices into
a pipeline that passes dma-bufs around is the easy default. I consider
linking mem2mem devices in the kernel (so buffers don't have to be
dequeued/queued into userspace all the time) or even in hardware (the
FSU basically implements hardware fences on a free-running rigid
2-buffer queue between two DMA channels) to be two performance
optimization steps from there.

Step 1 of linking two mem2mem devices using a software buffer queue
could be achieved at the videobuf2 level. That would need a new API to
share DMA buffers between vb2 queues and then switch them into a free
running mode that allows the kernel to pass buffers back and forth
automatically. But that mechanism would not be specific to hardware at
all. It could reuse / extend upon the existing vb2 queue implementation,
and it would be possible to use it with any driver instead of only IPU
internal components. In case of i.MX6 we could link the CODA h.264
encoder input to the PRPENC output, for example.
Also I'm opposed to adding a custom mem2mem framework in the IPU driver
because I don't believe the IPU is the only hardware unit that has
processing paths that need to go through temporary memory copies. Adding
the same functionality for every driver that can do this in a slightly
different way doesn't scale.

Step 2 of providing a fixed double-buffer queue and then using the IPU
FSU to trigger the DMA read channels in hardware instead of from the
write channel EOF interrupt handler is quite a bit more hardware
specific, but even there, it could be that the FSU links are not limited
to the IPU. I'm not sure if this actually works, but according to the
reference manual the (CODA) VPU can be triggered by the write channels
from SMFC and PRPVF and the VDOA can trigger the VDI or PP read channels
on i.MX6.

I do feel a bit bad about arguing against an existing working solution
when I only have a rough idea of how I'd like steps 1 and 2 to look,
but I really think implementing this inside a single driver via media
entity links is not the right way, and I fear once established, we'd
never get rid of it.

> I agree this overlaps with the mem2mem device idea somewhat, but
> IMHO a media device should be allowed to use internal memory
> buffers to pass video frames between pads, if that's what it needs to
> do to implement some functionality.
> 
> Can anyone else listening on this thread, chime in on this topic?

Yes, that would be very welcome.

> >>> Is there even a reason for the user to switch between direct and via
> >>> memory paths manually, or could this be inferred from other state
> >>> (formats, active links)?
> >> a CSI -> VDIC link doesn't convey whether that is a direct link using
> >> the FSU, or whether it is via SMFC and memory buffers.
> >>
> >> If you'll recall, the VDIC has three motion modes: low, medium, and
> >> high.
> >>
> >> When VDIC receives directly from CSI, it can only operate in
> >> high motion mode (it processes a single field F(n-1) sent directly
> >> from the CSI using the FSU). The reference manual refers to this
> >> as "real time mode".
> > In my opinion this is the only mode that should be supported in the
> > capture driver.
> 
> I have to disagree on that.

There isn't even hardware assisted triggering of the VDIC inputs for
deinterlacing in those modes, so there's really no performance benefit
over vb2 queue linking, and that would be a lot more useful.

> >   But that may be wishful thinking to a certain degree -
> > see below.
> >
> >> The low and medium motion modes require processing all three
> >> fields F(n-1), F(n), and F(n+1). These fields must come from IDMAC
> >> channels 8, 9, and 10 respectively.
> >>
> >> So in order to support low and medium motion modes, there needs to
> >> be a pipeline where the VDIC receives F(n-1), F(n), and F(n+1) from
> >> memory buffers.
> > In the cases where the VDIC reads all three fields from memory, I'd
> > prefer that to be implemented as a separate mem2mem device.
>
> I prefer that there be a single VDIC media entity, that makes use of its
> dma read channels in order to support all of its motion compensation
> modes.

The separate mem2mem device will be needed to deinterlace video from
other sources, for example encoded interlaced video streams.

> >   While useful
> > on its own, there could be an API to link together the capture and
> > output of different video4linux devices, and that could get a backend to
> > implement IDMAC/FSU channel linking where supported.
> 
> An interesting idea, but it also sounds a lot like what can already be
> provided by a pipeline in the media framework, by linking an entity
> that is a video source to an entity that is a video sink.

Yes, see my thoughts above. This unnecessarily (at least for the non-FSU
software queue "links") limits the functionality to entities inside a
single media device.

[...]
> >> Which suggests that when IC receives from VDIC, PRPENC can
> >> receive no data and is effectively unusable.
> >>
> >>> The VDIC direct input is enabled with ipu_set_ic_src_mux(vdi=true)
> >>> (IC_INPUT=1), and that is the same for both PRP->ENC and PRP->VF.
> >> true, but in fact the FSU only sends to PRP VF.
> > Ok. Still, I think in that case we can describe the link as VDIC -> PRP
> > and just prohibit the PRPENC links to be enabled when that is set.
> 
> exactly, that is what I've implemented in branch
> imx-media-staging-md-prp.

Ok.

regards
Philipp


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH v2 00/21] Basic i.MX IPUv3 capture support
  2017-01-09 19:43           ` Philipp Zabel
  2017-01-10  0:15             ` Steve Longerbeam
@ 2017-01-24 18:11             ` Philipp Zabel
  1 sibling, 0 replies; 61+ messages in thread
From: Philipp Zabel @ 2017-01-24 18:11 UTC (permalink / raw)
  To: Steve Longerbeam
  Cc: Marek Vasut, Robert Schwebel, Jean-Michel Hautbois, Sakari Ailus,
	Gary Bisson, Sascha Hauer, Hans Verkuil,
	Linux Media Mailing List

Hi Steve,

On Mon, 2017-01-09 at 20:43 +0100, Philipp Zabel wrote:
> On the other hand, there is currently no way to communicate to userspace
> that the IC can't downscale bilinearly, but only to 1/2 or 1/4 of the
> input resolution, and then scale up bilinearly from there.

This is completely wrong.

For some reason that I can't reconstruct anymore, I didn't realize
that the PRPENC/VF_RS_R_V/H coefficient fields for the resizing section
are 14 bits wide, so the bilinear scaling factor can in fact be reduced
down to 8192/16383 ~= 0.50003 before the /2 downsizing section needs to
be engaged.

>  So instead of
> pretending to be able to downscale to any resolution, it would be better
> to split each IC task into two subdevs, one for the
> 1/2-or-1/4-downscaler, and one for the bilinear upscaler.

So this point is moot. I still like the unified PRP subdev because of
the single input pad, but splitting the scaling over two subdevs is not
useful after all.
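
Just for reference, the split between the /2, /4 downsizing section and
the 14-bit bilinear coefficient then works out roughly like this (a
sketch only, assuming coeff = 8192 * in / out; the exact rounding in the
ipu-v3 code may differ):

/*
 * Sketch only: split a total downscale into the /1, /2, /4 downsizing
 * section and a 14-bit bilinear resize coefficient (limit 16383).
 */
static int split_scale(unsigned int in, unsigned int out,
                       unsigned int *downsize_log2, unsigned int *coeff)
{
        unsigned int ds = 0;

        if (!in || !out || out > in)
                return -1;

        /* Engage the downsizer only when bilinear scaling alone can't. */
        while (ds < 2 && (8192 * (in >> ds)) / out > 16383)
                ds++;

        *coeff = (8192 * (in >> ds)) / out;
        if (*coeff > 16383)
                return -1;      /* more than 8:1 overall, not possible */

        *downsize_log2 = ds;
        return 0;
}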

regards
Philipp

^ permalink raw reply	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2017-01-24 18:11 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-14 17:34 [PATCH v2 00/21] Basic i.MX IPUv3 capture support Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 01/21] [media] v4l2-async: move code out of v4l2_async_notifier_register into v4l2_async_test_nofity_all Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 02/21] [media] v4l2-async: allow subdevices to add further subdevices to the notifier waiting list Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 03/21] [media] v4l: of: add v4l2_of_subdev_registered Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 04/21] [media] v4l2-async: add new subdevices to the tail of subdev_list Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 05/21] [media] imx: Add i.MX SoC wide media device driver Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 06/21] [media] imx: Add IPUv3 media common code Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 07/21] [media] imx-ipu: Add i.MX IPUv3 CSI subdevice driver Philipp Zabel
2016-11-08  7:18   ` Ying Liu
2016-10-14 17:34 ` [PATCH v2 08/21] [media] imx: Add i.MX IPUv3 capture driver Philipp Zabel
2016-10-17 11:32   ` Jack Mitchell
2016-10-17 11:35     ` Marek Vasut
2016-10-17 11:40       ` Jack Mitchell
2016-10-17 12:12     ` Philipp Zabel
2016-10-19 16:22       ` Jack Mitchell
2016-10-19 19:25         ` Marek Vasut
2016-10-26 11:29           ` Jack Mitchell
2016-11-08  7:21   ` Ying Liu
2016-10-14 17:34 ` [PATCH v2 09/21] [media] platform: add video-multiplexer subdevice driver Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 10/21] [media] imx: Add i.MX MIPI CSI-2 " Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 11/21] [media] tc358743: put lanes in STOP state before starting streaming Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 12/21] ARM: dts: imx6qdl: Add capture-subsystem node Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 13/21] ARM: dts: imx6qdl: Add mipi_ipu1/2 multiplexers, mipi_csi, and their connections Philipp Zabel
2016-11-08  8:23   ` Ying Liu
2016-10-14 17:34 ` [PATCH v2 14/21] ARM: dts: imx6qdl: Add MIPI CSI-2 D-PHY compatible and clocks Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 15/21] ARM: dts: nitrogen6x: Add dtsi for BD_HDMI_MIPI HDMI to MIPI CSI-2 receiver board Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 16/21] gpu: ipuv3: add ipu_csi_set_downsize Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 17/21] [media] imx-ipuv3-csi: support downsizing Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 18/21] [media] add mux and video interface bridge entity functions Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 19/21] [media] video-multiplexer: set entity function to mux Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 20/21] [media] imx: Set i.MX MIPI CSI-2 entity function to bridge Philipp Zabel
2016-10-14 17:34 ` [PATCH v2 21/21] [media] tc358743: set entity function to video interface bridge Philipp Zabel
2016-10-17 10:18 ` [PATCH v2 00/21] Basic i.MX IPUv3 capture support Gary Bisson
2016-10-17 12:41   ` Philipp Zabel
2016-10-19 21:30 ` Sakari Ailus
     [not found]   ` <CAH-u=807nRYzza0kTfOMv1AiWazk6FGJyz6W5_bYw7v9nOrccA@mail.gmail.com>
2016-12-29 15:10     ` Jean-Michel Hautbois
2016-12-29 20:51     ` Robert Schwebel
2016-12-30 19:06       ` Marek Vasut
2016-12-30 20:26         ` Steve Longerbeam
2017-01-02 13:51           ` Jean-Michel Hautbois
2017-01-02 14:45             ` Hans Verkuil
2017-01-02 14:59               ` Jean-Michel Hautbois
2017-01-02 19:19                 ` Fabio Estevam
2017-01-02 19:20                 ` Steve Longerbeam
2017-01-02 17:04               ` Fabio Estevam
2017-01-02 19:17               ` Steve Longerbeam
2017-01-09 19:43           ` Philipp Zabel
2017-01-10  0:15             ` Steve Longerbeam
2017-01-10 23:52               ` Steve Longerbeam
2017-01-11  9:12                 ` Jean-Michel Hautbois
2017-01-11 12:10                 ` Philipp Zabel
2017-01-12 23:22                   ` Steve Longerbeam
2017-01-12 23:43                     ` Steve Longerbeam
2017-01-13  2:22                       ` Steve Longerbeam
2017-01-13 11:18                         ` Philipp Zabel
2017-01-13 11:04                     ` Philipp Zabel
2017-01-13 11:05                     ` Philipp Zabel
2017-01-14 20:26                       ` Steve Longerbeam
2017-01-16 11:49                         ` Philipp Zabel
2017-01-11 12:11               ` Philipp Zabel
2017-01-24 18:11             ` Philipp Zabel

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.